problem_id (string, lengths 18-22) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, lengths 13-58) | prompt (string, lengths 1.71k-9.01k) | golden_diff (string, lengths 151-4.94k) | verification_info (string, lengths 465-11.3k) | num_tokens_prompt (int64, 557-2.05k) | num_tokens_diff (int64, 48-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_523 | rasdani/github-patches | git_diff | streamlit__streamlit-2217 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Streamlit fails to start without Git executable
# Summary
Streamlit version `0.69.1` fails to start when run inside a Docker container that doesn't have Git installed.
# Steps to reproduce
1. Create a `Dockerfile` with the following contents:
```dockerfile
FROM python:3.8-slim
RUN pip install streamlit
CMD ["streamlit", "hello"]
```
2. Build the image:
```bash
docker build -t demo .
```
3. Run the app:
```bash
docker run -it --rm demo
```
## Expected behavior:
Streamlit starts without issues.
## Actual behavior:
Streamlit fails to start and displays the following error message:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/git/__init__.py", line 83, in <module>
refresh()
File "/usr/local/lib/python3.8/site-packages/git/__init__.py", line 73, in refresh
if not Git.refresh(path=path):
File "/usr/local/lib/python3.8/site-packages/git/cmd.py", line 278, in refresh
raise ImportError(err)
ImportError: Bad git executable.
The git executable must be specified in one of the following ways:
- be included in your $PATH
- be set via $GIT_PYTHON_GIT_EXECUTABLE
- explicitly set via git.refresh()
All git commands will error until this is rectified.
This initial warning can be silenced or aggravated in the future by setting the
$GIT_PYTHON_REFRESH environment variable. Use one of the following values:
- quiet|q|silence|s|none|n|0: for no warning or exception
- warn|w|warning|1: for a printed warning
- error|e|raise|r|2: for a raised exception
Example:
export GIT_PYTHON_REFRESH=quiet
```
## Is this a regression?
**yes** (worked up until at least version `0.67.1`)
# Debug info
- Streamlit version: `0.69.1`
- Python version: `3.8.6`
- Using Conda? PipEnv? PyEnv? Pex? **NO**
- OS version: `4.19.76-linuxkit`
# Additional information
This bug can be worked around by setting `GIT_PYTHON_REFRESH=quiet` environment variable inside the Docker image.
</issue>
<code>
[start of lib/setup.py]
1 import os
2 import platform
3 import setuptools
4 import subprocess
5 import sys
6
7 from pipenv.project import Project
8 from pipenv.utils import convert_deps_to_pip
9 from setuptools.command.install import install
10
11 VERSION = "0.69.1" # PEP-440
12
13 NAME = "streamlit"
14
15 DESCRIPTION = "The fastest way to build data apps in Python"
16
17 LONG_DESCRIPTION = (
18 "Streamlit's open-source app framework is the easiest way "
19 "for data scientists and machine learning engineers to "
20 "create beautiful, performant apps in only a few hours! "
21 "All in pure Python. All for free."
22 )
23
24 pipfile = Project(chdir=False).parsed_pipfile
25
26 packages = pipfile["packages"].copy()
27 requirements = convert_deps_to_pip(packages, r=False)
28
29 # Check whether xcode tools are available before making watchdog a
30 # dependency (only if the current system is a Mac).
31 if platform.system() == "Darwin":
32 has_xcode = subprocess.call(["xcode-select", "--version"], shell=False) == 0
33 has_gcc = subprocess.call(["gcc", "--version"], shell=False) == 0
34
35 if not (has_xcode and has_gcc):
36 try:
37 requirements.remove("watchdog")
38 except ValueError:
39 pass
40
41
42 class VerifyVersionCommand(install):
43 """Custom command to verify that the git tag matches our version"""
44
45 description = "verify that the git tag matches our version"
46
47 def run(self):
48 tag = os.getenv("CIRCLE_TAG")
49
50 if tag != VERSION:
51 info = "Git tag: {0} does not match the version of this app: {1}".format(
52 tag, VERSION
53 )
54 sys.exit(info)
55
56
57 setuptools.setup(
58 name=NAME,
59 version=VERSION,
60 description=DESCRIPTION,
61 long_description=LONG_DESCRIPTION,
62 url="https://streamlit.io",
63 author="Streamlit Inc",
64 author_email="[email protected]",
65 python_requires=">=3.6",
66 license="Apache 2",
67 packages=setuptools.find_packages(exclude=["tests", "tests.*"]),
68 # Requirements
69 install_requires=requirements,
70 zip_safe=False, # install source files not egg
71 include_package_data=True, # copy html and friends
72 entry_points={"console_scripts": ["streamlit = streamlit.cli:main"]},
73 # For Windows so that streamlit * commands work ie.
74 # - streamlit version
75 # - streamlit hello
76 scripts=["bin/streamlit.cmd"],
77 cmdclass={
78 "verify": VerifyVersionCommand,
79 },
80 )
81
[end of lib/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/setup.py b/lib/setup.py
--- a/lib/setup.py
+++ b/lib/setup.py
@@ -8,7 +8,7 @@
from pipenv.utils import convert_deps_to_pip
from setuptools.command.install import install
-VERSION = "0.69.1" # PEP-440
+VERSION = "0.69.2" # PEP-440
NAME = "streamlit"
| {"golden_diff": "diff --git a/lib/setup.py b/lib/setup.py\n--- a/lib/setup.py\n+++ b/lib/setup.py\n@@ -8,7 +8,7 @@\n from pipenv.utils import convert_deps_to_pip\n from setuptools.command.install import install\n \n-VERSION = \"0.69.1\" # PEP-440\n+VERSION = \"0.69.2\" # PEP-440\n \n NAME = \"streamlit\"\n", "issue": "Streamlit fails to start without Git executable\n# Summary\r\n\r\nStreamlit version `0.69.1` fails to start when run inside a Docker container that doesn't have Git installed.\r\n\r\n# Steps to reproduce\r\n\r\n1. Create a `Dockerfile` with the following contents:\r\n```dockerfile\r\nFROM python:3.8-slim\r\nRUN pip install streamlit\r\nCMD [\"streamlit\", \"hello\"]\r\n```\r\n2. Build the image:\r\n```bash\r\ndocker build -t demo .\r\n```\r\n3. Run the app:\r\n```bash\r\ndocker run -it --rm demo\r\n```\r\n\r\n## Expected behavior:\r\n\r\nStreamlit starts without issues.\r\n\r\n## Actual behavior:\r\n\r\nStreamlit fails to start and displays the following error message:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/git/__init__.py\", line 83, in <module>\r\n refresh()\r\n File \"/usr/local/lib/python3.8/site-packages/git/__init__.py\", line 73, in refresh\r\n if not Git.refresh(path=path):\r\n File \"/usr/local/lib/python3.8/site-packages/git/cmd.py\", line 278, in refresh\r\n raise ImportError(err)\r\nImportError: Bad git executable.\r\nThe git executable must be specified in one of the following ways:\r\n - be included in your $PATH\r\n - be set via $GIT_PYTHON_GIT_EXECUTABLE\r\n - explicitly set via git.refresh()\r\n\r\nAll git commands will error until this is rectified.\r\n\r\nThis initial warning can be silenced or aggravated in the future by setting the\r\n$GIT_PYTHON_REFRESH environment variable. Use one of the following values:\r\n - quiet|q|silence|s|none|n|0: for no warning or exception\r\n - warn|w|warning|1: for a printed warning\r\n - error|e|raise|r|2: for a raised exception\r\n\r\nExample:\r\n export GIT_PYTHON_REFRESH=quiet\r\n```\r\n\r\n## Is this a regression?\r\n\r\n**yes** (worked up until at least version `0.67.1`)\r\n\r\n# Debug info\r\n\r\n- Streamlit version: `0.69.1`\r\n- Python version: `3.8.6`\r\n- Using Conda? PipEnv? PyEnv? Pex? **NO**\r\n- OS version: `4.19.76-linuxkit`\r\n\r\n# Additional information\r\n\r\nThis bug can be worked around by setting `GIT_PYTHON_REFRESH=quiet` environment variable inside the Docker image.\r\n\n", "before_files": [{"content": "import os\nimport platform\nimport setuptools\nimport subprocess\nimport sys\n\nfrom pipenv.project import Project\nfrom pipenv.utils import convert_deps_to_pip\nfrom setuptools.command.install import install\n\nVERSION = \"0.69.1\" # PEP-440\n\nNAME = \"streamlit\"\n\nDESCRIPTION = \"The fastest way to build data apps in Python\"\n\nLONG_DESCRIPTION = (\n \"Streamlit's open-source app framework is the easiest way \"\n \"for data scientists and machine learning engineers to \"\n \"create beautiful, performant apps in only a few hours! \"\n \"All in pure Python. 
All for free.\"\n)\n\npipfile = Project(chdir=False).parsed_pipfile\n\npackages = pipfile[\"packages\"].copy()\nrequirements = convert_deps_to_pip(packages, r=False)\n\n# Check whether xcode tools are available before making watchdog a\n# dependency (only if the current system is a Mac).\nif platform.system() == \"Darwin\":\n has_xcode = subprocess.call([\"xcode-select\", \"--version\"], shell=False) == 0\n has_gcc = subprocess.call([\"gcc\", \"--version\"], shell=False) == 0\n\n if not (has_xcode and has_gcc):\n try:\n requirements.remove(\"watchdog\")\n except ValueError:\n pass\n\n\nclass VerifyVersionCommand(install):\n \"\"\"Custom command to verify that the git tag matches our version\"\"\"\n\n description = \"verify that the git tag matches our version\"\n\n def run(self):\n tag = os.getenv(\"CIRCLE_TAG\")\n\n if tag != VERSION:\n info = \"Git tag: {0} does not match the version of this app: {1}\".format(\n tag, VERSION\n )\n sys.exit(info)\n\n\nsetuptools.setup(\n name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n url=\"https://streamlit.io\",\n author=\"Streamlit Inc\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n license=\"Apache 2\",\n packages=setuptools.find_packages(exclude=[\"tests\", \"tests.*\"]),\n # Requirements\n install_requires=requirements,\n zip_safe=False, # install source files not egg\n include_package_data=True, # copy html and friends\n entry_points={\"console_scripts\": [\"streamlit = streamlit.cli:main\"]},\n # For Windows so that streamlit * commands work ie.\n # - streamlit version\n # - streamlit hello\n scripts=[\"bin/streamlit.cmd\"],\n cmdclass={\n \"verify\": VerifyVersionCommand,\n },\n)\n", "path": "lib/setup.py"}]} | 1,782 | 99 |
gh_patches_debug_34311 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2084 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False positive for CKV_AZURE_43: check storage account name
I'm building my Storage Account names like this
```
name = "${local.saname_prefix}diagnostics${module.tf-var-project.random_id}
```
With https://github.com/bridgecrewio/checkov/pull/429 merged I now get a Check failure on the SA name:
```
Check: CKV_AZURE_43: "Ensure the Storage Account naming rules"
FAILED for resource: azurerm_storage_account.diagnostics
File: /az_diag_sa.tf:8-22
8 | resource "azurerm_storage_account" "diagnostics" {
9 | #checkov:skip=CKV_AZURE_35:Public access is allowed
10 | name = "${local.saname_prefix}diagnostics${module.tf-var-project.random_id}"
````
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/StorageAccountName.py]
1 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
2 from checkov.common.models.enums import CheckResult, CheckCategories
3 import re
4 from typing import List
5
6 STO_NAME_REGEX = re.compile('^[a-z0-9]{3,24}$')
7
8
9 class StorageAccountName(BaseResourceCheck):
10 def __init__(self):
11 name = "Ensure Storage Accounts adhere to the naming rules"
12 id = "CKV_AZURE_43"
13 supported_resources = ['azurerm_storage_account']
14 categories = [CheckCategories.CONVENTION]
15 super().__init__(name=name, id=id, categories=categories,
16 supported_resources=supported_resources)
17
18 def scan_resource_conf(self, conf):
19 """
20 The Storage Account naming reference:
21 https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
22 :param conf: azurerm_storage_account configuration
23 :return: <CheckResult>
24 """
25 return CheckResult.PASSED if conf.get('name') and re.findall(STO_NAME_REGEX, str(conf['name'][0])) else CheckResult.FAILED
26
27 def get_evaluated_keys(self) -> List[str]:
28 return ['name']
29
30
31 check = StorageAccountName()
32
[end of checkov/terraform/checks/resource/azure/StorageAccountName.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py
--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py
+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py
@@ -1,31 +1,41 @@
+import re
+from typing import List, Dict, Any
+
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories
-import re
-from typing import List
-STO_NAME_REGEX = re.compile('^[a-z0-9]{3,24}$')
+STO_NAME_REGEX = re.compile(r"^[a-z0-9]{3,24}$")
+VARIABLE_REFS = ("local.", "module.", "var.")
class StorageAccountName(BaseResourceCheck):
- def __init__(self):
+ def __init__(self) -> None:
name = "Ensure Storage Accounts adhere to the naming rules"
id = "CKV_AZURE_43"
- supported_resources = ['azurerm_storage_account']
+ supported_resources = ["azurerm_storage_account"]
categories = [CheckCategories.CONVENTION]
- super().__init__(name=name, id=id, categories=categories,
- supported_resources=supported_resources)
+ super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:
"""
The Storage Account naming reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
:param conf: azurerm_storage_account configuration
:return: <CheckResult>
"""
- return CheckResult.PASSED if conf.get('name') and re.findall(STO_NAME_REGEX, str(conf['name'][0])) else CheckResult.FAILED
+ name = conf.get("name")
+ if name:
+ name = name[0]
+ if any(x in name for x in VARIABLE_REFS):
+ # in the case we couldn't evaluate the name, just ignore
+ return CheckResult.UNKNOWN
+ if re.findall(STO_NAME_REGEX, str(conf["name"][0])):
+ return CheckResult.PASSED
+
+ return CheckResult.FAILED
def get_evaluated_keys(self) -> List[str]:
- return ['name']
+ return ["name"]
check = StorageAccountName()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py\n+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n@@ -1,31 +1,41 @@\n+import re\n+from typing import List, Dict, Any\n+\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n from checkov.common.models.enums import CheckResult, CheckCategories\n-import re\n-from typing import List\n \n-STO_NAME_REGEX = re.compile('^[a-z0-9]{3,24}$')\n+STO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\n+VARIABLE_REFS = (\"local.\", \"module.\", \"var.\")\n \n \n class StorageAccountName(BaseResourceCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n name = \"Ensure Storage Accounts adhere to the naming rules\"\n id = \"CKV_AZURE_43\"\n- supported_resources = ['azurerm_storage_account']\n+ supported_resources = [\"azurerm_storage_account\"]\n categories = [CheckCategories.CONVENTION]\n- super().__init__(name=name, id=id, categories=categories,\n- supported_resources=supported_resources)\n+ super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n \"\"\"\n The Storage Account naming reference:\n https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts\n :param conf: azurerm_storage_account configuration\n :return: <CheckResult>\n \"\"\"\n- return CheckResult.PASSED if conf.get('name') and re.findall(STO_NAME_REGEX, str(conf['name'][0])) else CheckResult.FAILED\n+ name = conf.get(\"name\")\n+ if name:\n+ name = name[0]\n+ if any(x in name for x in VARIABLE_REFS):\n+ # in the case we couldn't evaluate the name, just ignore\n+ return CheckResult.UNKNOWN\n+ if re.findall(STO_NAME_REGEX, str(conf[\"name\"][0])):\n+ return CheckResult.PASSED\n+\n+ return CheckResult.FAILED\n \n def get_evaluated_keys(self) -> List[str]:\n- return ['name']\n+ return [\"name\"]\n \n \n check = StorageAccountName()\n", "issue": "False positive for CKV_AZURE_43: check storage account name\nI'm building my Storage Account names like this\r\n```\r\nname = \"${local.saname_prefix}diagnostics${module.tf-var-project.random_id}\r\n```\r\n\r\nWith https://github.com/bridgecrewio/checkov/pull/429 merged I now get a Check failure on the SA name:\r\n\r\n```\r\nCheck: CKV_AZURE_43: \"Ensure the Storage Account naming rules\"\r\n\tFAILED for resource: azurerm_storage_account.diagnostics\r\n\tFile: /az_diag_sa.tf:8-22\r\n\r\n\t\t8 | resource \"azurerm_storage_account\" \"diagnostics\" {\r\n\t\t9 | #checkov:skip=CKV_AZURE_35:Public access is allowed\r\n\t\t10 | name = \"${local.saname_prefix}diagnostics${module.tf-var-project.random_id}\"\r\n\r\n````\n", "before_files": [{"content": "from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nimport re\nfrom typing import List\n\nSTO_NAME_REGEX = re.compile('^[a-z0-9]{3,24}$')\n\n\nclass StorageAccountName(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure Storage Accounts adhere to the naming rules\"\n id = \"CKV_AZURE_43\"\n supported_resources = ['azurerm_storage_account']\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories,\n 
supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n The Storage Account naming reference:\n https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts\n :param conf: azurerm_storage_account configuration\n :return: <CheckResult>\n \"\"\"\n return CheckResult.PASSED if conf.get('name') and re.findall(STO_NAME_REGEX, str(conf['name'][0])) else CheckResult.FAILED\n\n def get_evaluated_keys(self) -> List[str]:\n return ['name']\n\n\ncheck = StorageAccountName()\n", "path": "checkov/terraform/checks/resource/azure/StorageAccountName.py"}]} | 1,078 | 572 |
gh_patches_debug_23360 | rasdani/github-patches | git_diff | allegro__ralph-3159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Doc fixes
Some minor doc fixes with a bit of style change
</issue>
<code>
[start of src/ralph/dashboards/management/commands/push_graphs_to_statsd.py]
1 # -*- coding: utf-8 -*-
2 import logging
3 import textwrap
4
5 from django.conf import settings
6 from django.core.management.base import BaseCommand
7 from django.utils.text import slugify
8
9 from ralph.dashboards.models import Graph
10 from ralph.lib.metrics import build_statsd_client
11
12 logger = logging.getLogger(__name__)
13 PREFIX = settings.STATSD_GRAPHS_PREFIX
14 STATSD_PATH = '{}.{{}}.{{}}'.format(PREFIX)
15
16
17 def normalize(s):
18 s = slugify(s)
19 return s.replace('-', '_')
20
21
22 class Command(BaseCommand):
23 """Push to statsd data generated by graphs."""
24 help = textwrap.dedent(__doc__).strip()
25
26 def handle(self, *args, **kwargs):
27 statsd = build_statsd_client(prefix=STATSD_PATH)
28 graphs = Graph.objects.filter(push_to_statsd=True)
29 for graph in graphs:
30 graph_data = graph.get_data()
31 graph_name = normalize(graph.name)
32 for label, value in zip(graph_data['labels'], graph_data['series']):
33 path = STATSD_PATH.format(graph_name, normalize(label))
34 statsd.gauge(path, value)
35
[end of src/ralph/dashboards/management/commands/push_graphs_to_statsd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py b/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py
--- a/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py
+++ b/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py
@@ -10,8 +10,6 @@
from ralph.lib.metrics import build_statsd_client
logger = logging.getLogger(__name__)
-PREFIX = settings.STATSD_GRAPHS_PREFIX
-STATSD_PATH = '{}.{{}}.{{}}'.format(PREFIX)
def normalize(s):
@@ -24,11 +22,11 @@
help = textwrap.dedent(__doc__).strip()
def handle(self, *args, **kwargs):
- statsd = build_statsd_client(prefix=STATSD_PATH)
+ statsd = build_statsd_client(prefix=settings.STATSD_GRAPHS_PREFIX)
graphs = Graph.objects.filter(push_to_statsd=True)
for graph in graphs:
graph_data = graph.get_data()
graph_name = normalize(graph.name)
for label, value in zip(graph_data['labels'], graph_data['series']):
- path = STATSD_PATH.format(graph_name, normalize(label))
+ path = '.'.join((graph_name, normalize(label)))
statsd.gauge(path, value)
| {"golden_diff": "diff --git a/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py b/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py\n--- a/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py\n+++ b/src/ralph/dashboards/management/commands/push_graphs_to_statsd.py\n@@ -10,8 +10,6 @@\n from ralph.lib.metrics import build_statsd_client\n \n logger = logging.getLogger(__name__)\n-PREFIX = settings.STATSD_GRAPHS_PREFIX\n-STATSD_PATH = '{}.{{}}.{{}}'.format(PREFIX)\n \n \n def normalize(s):\n@@ -24,11 +22,11 @@\n help = textwrap.dedent(__doc__).strip()\n \n def handle(self, *args, **kwargs):\n- statsd = build_statsd_client(prefix=STATSD_PATH)\n+ statsd = build_statsd_client(prefix=settings.STATSD_GRAPHS_PREFIX)\n graphs = Graph.objects.filter(push_to_statsd=True)\n for graph in graphs:\n graph_data = graph.get_data()\n graph_name = normalize(graph.name)\n for label, value in zip(graph_data['labels'], graph_data['series']):\n- path = STATSD_PATH.format(graph_name, normalize(label))\n+ path = '.'.join((graph_name, normalize(label)))\n statsd.gauge(path, value)\n", "issue": "Doc fixes\nSome minor doc fixes with a bit of style change\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport logging\nimport textwrap\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.utils.text import slugify\n\nfrom ralph.dashboards.models import Graph\nfrom ralph.lib.metrics import build_statsd_client\n\nlogger = logging.getLogger(__name__)\nPREFIX = settings.STATSD_GRAPHS_PREFIX\nSTATSD_PATH = '{}.{{}}.{{}}'.format(PREFIX)\n\n\ndef normalize(s):\n s = slugify(s)\n return s.replace('-', '_')\n\n\nclass Command(BaseCommand):\n \"\"\"Push to statsd data generated by graphs.\"\"\"\n help = textwrap.dedent(__doc__).strip()\n\n def handle(self, *args, **kwargs):\n statsd = build_statsd_client(prefix=STATSD_PATH)\n graphs = Graph.objects.filter(push_to_statsd=True)\n for graph in graphs:\n graph_data = graph.get_data()\n graph_name = normalize(graph.name)\n for label, value in zip(graph_data['labels'], graph_data['series']):\n path = STATSD_PATH.format(graph_name, normalize(label))\n statsd.gauge(path, value)\n", "path": "src/ralph/dashboards/management/commands/push_graphs_to_statsd.py"}]} | 885 | 314 |
gh_patches_debug_23048 | rasdani/github-patches | git_diff | cupy__cupy-5759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`cupy.concatenate()` misses arguments `dtype` and `casting`
Refs:
- NumPy: https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html
- CuPy: https://docs.cupy.dev/en/stable/reference/generated/cupy.concatenate.html
The `dtype` argument is needed by the Array API standard (#5698, #4789).
</issue>
<code>
[start of cupy/_manipulation/join.py]
1 import cupy
2 from cupy import _core
3
4
5 def column_stack(tup):
6 """Stacks 1-D and 2-D arrays as columns into a 2-D array.
7
8 A 1-D array is first converted to a 2-D column array. Then, the 2-D arrays
9 are concatenated along the second axis.
10
11 Args:
12 tup (sequence of arrays): 1-D or 2-D arrays to be stacked.
13
14 Returns:
15 cupy.ndarray: A new 2-D array of stacked columns.
16
17 .. seealso:: :func:`numpy.column_stack`
18
19 """
20 if any(not isinstance(a, cupy.ndarray) for a in tup):
21 raise TypeError('Only cupy arrays can be column stacked')
22
23 lst = list(tup)
24 for i, a in enumerate(lst):
25 if a.ndim == 1:
26 a = a[:, cupy.newaxis]
27 lst[i] = a
28 elif a.ndim != 2:
29 raise ValueError(
30 'Only 1 or 2 dimensional arrays can be column stacked')
31
32 return concatenate(lst, axis=1)
33
34
35 def concatenate(tup, axis=0, out=None):
36 """Joins arrays along an axis.
37
38 Args:
39 tup (sequence of arrays): Arrays to be joined. All of these should have
40 same dimensionalities except the specified axis.
41 axis (int or None): The axis to join arrays along.
42 If axis is None, arrays are flattened before use.
43 Default is 0.
44 out (cupy.ndarray): Output array.
45
46 Returns:
47 cupy.ndarray: Joined array.
48
49 .. seealso:: :func:`numpy.concatenate`
50
51 """
52 if axis is None:
53 tup = [m.ravel() for m in tup]
54 axis = 0
55 return _core.concatenate_method(tup, axis, out)
56
57
58 def dstack(tup):
59 """Stacks arrays along the third axis.
60
61 Args:
62 tup (sequence of arrays): Arrays to be stacked. Each array is converted
63 by :func:`cupy.atleast_3d` before stacking.
64
65 Returns:
66 cupy.ndarray: Stacked array.
67
68 .. seealso:: :func:`numpy.dstack`
69
70 """
71 return concatenate([cupy.atleast_3d(m) for m in tup], 2)
72
73
74 def hstack(tup):
75 """Stacks arrays horizontally.
76
77 If an input array has one dimension, then the array is treated as a
78 horizontal vector and stacked along the first axis. Otherwise, the array is
79 stacked along the second axis.
80
81 Args:
82 tup (sequence of arrays): Arrays to be stacked.
83
84 Returns:
85 cupy.ndarray: Stacked array.
86
87 .. seealso:: :func:`numpy.hstack`
88
89 """
90 arrs = [cupy.atleast_1d(a) for a in tup]
91 axis = 1
92 if arrs[0].ndim == 1:
93 axis = 0
94 return concatenate(arrs, axis)
95
96
97 def vstack(tup):
98 """Stacks arrays vertically.
99
100 If an input array has one dimension, then the array is treated as a
101 horizontal vector and stacked along the additional axis at the head.
102 Otherwise, the array is stacked along the first axis.
103
104 Args:
105 tup (sequence of arrays): Arrays to be stacked. Each array is converted
106 by :func:`cupy.atleast_2d` before stacking.
107
108 Returns:
109 cupy.ndarray: Stacked array.
110
111 .. seealso:: :func:`numpy.dstack`
112
113 """
114 return concatenate([cupy.atleast_2d(m) for m in tup], 0)
115
116
117 def stack(tup, axis=0, out=None):
118 """Stacks arrays along a new axis.
119
120 Args:
121 tup (sequence of arrays): Arrays to be stacked.
122 axis (int): Axis along which the arrays are stacked.
123 out (cupy.ndarray): Output array.
124
125 Returns:
126 cupy.ndarray: Stacked array.
127
128 .. seealso:: :func:`numpy.stack`
129 """
130 return concatenate([cupy.expand_dims(x, axis) for x in tup], axis, out)
131
[end of cupy/_manipulation/join.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/_manipulation/join.py b/cupy/_manipulation/join.py
--- a/cupy/_manipulation/join.py
+++ b/cupy/_manipulation/join.py
@@ -32,7 +32,7 @@
return concatenate(lst, axis=1)
-def concatenate(tup, axis=0, out=None):
+def concatenate(tup, axis=0, out=None, *, dtype=None, casting='same_kind'):
"""Joins arrays along an axis.
Args:
@@ -42,6 +42,11 @@
If axis is None, arrays are flattened before use.
Default is 0.
out (cupy.ndarray): Output array.
+ dtype (str or dtype): If provided, the destination array will have this
+ dtype. Cannot be provided together with ``out``.
+ casting ({‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional):
+ Controls what kind of data casting may occur. Defaults to
+ ``'same_kind'``.
Returns:
cupy.ndarray: Joined array.
@@ -52,7 +57,7 @@
if axis is None:
tup = [m.ravel() for m in tup]
axis = 0
- return _core.concatenate_method(tup, axis, out)
+ return _core.concatenate_method(tup, axis, out, dtype, casting)
def dstack(tup):
| {"golden_diff": "diff --git a/cupy/_manipulation/join.py b/cupy/_manipulation/join.py\n--- a/cupy/_manipulation/join.py\n+++ b/cupy/_manipulation/join.py\n@@ -32,7 +32,7 @@\n return concatenate(lst, axis=1)\n \n \n-def concatenate(tup, axis=0, out=None):\n+def concatenate(tup, axis=0, out=None, *, dtype=None, casting='same_kind'):\n \"\"\"Joins arrays along an axis.\n \n Args:\n@@ -42,6 +42,11 @@\n If axis is None, arrays are flattened before use.\n Default is 0.\n out (cupy.ndarray): Output array.\n+ dtype (str or dtype): If provided, the destination array will have this\n+ dtype. Cannot be provided together with ``out``.\n+ casting ({\u2018no\u2019, \u2018equiv\u2019, \u2018safe\u2019, \u2018same_kind\u2019, \u2018unsafe\u2019}, optional):\n+ Controls what kind of data casting may occur. Defaults to\n+ ``'same_kind'``.\n \n Returns:\n cupy.ndarray: Joined array.\n@@ -52,7 +57,7 @@\n if axis is None:\n tup = [m.ravel() for m in tup]\n axis = 0\n- return _core.concatenate_method(tup, axis, out)\n+ return _core.concatenate_method(tup, axis, out, dtype, casting)\n \n \n def dstack(tup):\n", "issue": "`cupy.concatenate()` misses arguments `dtype` and `casting`\nRefs:\r\n- NumPy: https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html\r\n- CuPy: https://docs.cupy.dev/en/stable/reference/generated/cupy.concatenate.html\r\n\r\nThe `dtype` argument is needed by the Array API standard (#5698, #4789).\n", "before_files": [{"content": "import cupy\nfrom cupy import _core\n\n\ndef column_stack(tup):\n \"\"\"Stacks 1-D and 2-D arrays as columns into a 2-D array.\n\n A 1-D array is first converted to a 2-D column array. Then, the 2-D arrays\n are concatenated along the second axis.\n\n Args:\n tup (sequence of arrays): 1-D or 2-D arrays to be stacked.\n\n Returns:\n cupy.ndarray: A new 2-D array of stacked columns.\n\n .. seealso:: :func:`numpy.column_stack`\n\n \"\"\"\n if any(not isinstance(a, cupy.ndarray) for a in tup):\n raise TypeError('Only cupy arrays can be column stacked')\n\n lst = list(tup)\n for i, a in enumerate(lst):\n if a.ndim == 1:\n a = a[:, cupy.newaxis]\n lst[i] = a\n elif a.ndim != 2:\n raise ValueError(\n 'Only 1 or 2 dimensional arrays can be column stacked')\n\n return concatenate(lst, axis=1)\n\n\ndef concatenate(tup, axis=0, out=None):\n \"\"\"Joins arrays along an axis.\n\n Args:\n tup (sequence of arrays): Arrays to be joined. All of these should have\n same dimensionalities except the specified axis.\n axis (int or None): The axis to join arrays along.\n If axis is None, arrays are flattened before use.\n Default is 0.\n out (cupy.ndarray): Output array.\n\n Returns:\n cupy.ndarray: Joined array.\n\n .. seealso:: :func:`numpy.concatenate`\n\n \"\"\"\n if axis is None:\n tup = [m.ravel() for m in tup]\n axis = 0\n return _core.concatenate_method(tup, axis, out)\n\n\ndef dstack(tup):\n \"\"\"Stacks arrays along the third axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked. Each array is converted\n by :func:`cupy.atleast_3d` before stacking.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.dstack`\n\n \"\"\"\n return concatenate([cupy.atleast_3d(m) for m in tup], 2)\n\n\ndef hstack(tup):\n \"\"\"Stacks arrays horizontally.\n\n If an input array has one dimension, then the array is treated as a\n horizontal vector and stacked along the first axis. Otherwise, the array is\n stacked along the second axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. 
seealso:: :func:`numpy.hstack`\n\n \"\"\"\n arrs = [cupy.atleast_1d(a) for a in tup]\n axis = 1\n if arrs[0].ndim == 1:\n axis = 0\n return concatenate(arrs, axis)\n\n\ndef vstack(tup):\n \"\"\"Stacks arrays vertically.\n\n If an input array has one dimension, then the array is treated as a\n horizontal vector and stacked along the additional axis at the head.\n Otherwise, the array is stacked along the first axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked. Each array is converted\n by :func:`cupy.atleast_2d` before stacking.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.dstack`\n\n \"\"\"\n return concatenate([cupy.atleast_2d(m) for m in tup], 0)\n\n\ndef stack(tup, axis=0, out=None):\n \"\"\"Stacks arrays along a new axis.\n\n Args:\n tup (sequence of arrays): Arrays to be stacked.\n axis (int): Axis along which the arrays are stacked.\n out (cupy.ndarray): Output array.\n\n Returns:\n cupy.ndarray: Stacked array.\n\n .. seealso:: :func:`numpy.stack`\n \"\"\"\n return concatenate([cupy.expand_dims(x, axis) for x in tup], axis, out)\n", "path": "cupy/_manipulation/join.py"}]} | 1,819 | 323 |
gh_patches_debug_7541 | rasdani/github-patches | git_diff | twisted__twisted-12106 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 24.1.0 to unbreak users who use the latest PyPy
#12084 is breaking CI for Tahoe-LAFS, so probably is breaking real-world usage for someone somewhere too. So it'd be good to have a release sooner rather than later.
</issue>
<code>
[start of src/twisted/_version.py]
1 """
2 Provides Twisted version information.
3 """
4
5 # This file is auto-generated! Do not edit!
6 # Use `python -m incremental.update Twisted` to change this file.
7
8 from incremental import Version
9
10 __version__ = Version("Twisted", 23, 10, 0, post=0)
11 __all__ = ["__version__"]
12
[end of src/twisted/_version.py]
[start of src/twisted/copyright.py]
1 # Copyright (c) Twisted Matrix Laboratories.
2 # See LICENSE for details.
3
4 """
5 Copyright information for Twisted.
6 """
7
8
9 __all__ = ["copyright", "disclaimer", "longversion", "version"]
10
11 from twisted import __version__ as version, version as _longversion
12
13 longversion = str(_longversion)
14
15 copyright = """\
16 Copyright (c) 2001-2023 Twisted Matrix Laboratories.
17 See LICENSE for details."""
18
19 disclaimer = """
20 Twisted, the Framework of Your Internet
21 {}
22
23 Permission is hereby granted, free of charge, to any person obtaining
24 a copy of this software and associated documentation files (the
25 "Software"), to deal in the Software without restriction, including
26 without limitation the rights to use, copy, modify, merge, publish,
27 distribute, sublicense, and/or sell copies of the Software, and to
28 permit persons to whom the Software is furnished to do so, subject to
29 the following conditions:
30
31 The above copyright notice and this permission notice shall be
32 included in all copies or substantial portions of the Software.
33
34 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
35 EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
36 MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
37 NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
38 LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
39 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
40 WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
41
42 """.format(
43 copyright,
44 )
45
[end of src/twisted/copyright.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/twisted/_version.py b/src/twisted/_version.py
--- a/src/twisted/_version.py
+++ b/src/twisted/_version.py
@@ -7,5 +7,5 @@
from incremental import Version
-__version__ = Version("Twisted", 23, 10, 0, post=0)
+__version__ = Version("Twisted", 24, 3, 0, post=0)
__all__ = ["__version__"]
diff --git a/src/twisted/copyright.py b/src/twisted/copyright.py
--- a/src/twisted/copyright.py
+++ b/src/twisted/copyright.py
@@ -13,7 +13,7 @@
longversion = str(_longversion)
copyright = """\
-Copyright (c) 2001-2023 Twisted Matrix Laboratories.
+Copyright (c) 2001-2024 Twisted Matrix Laboratories.
See LICENSE for details."""
disclaimer = """
| {"golden_diff": "diff --git a/src/twisted/_version.py b/src/twisted/_version.py\n--- a/src/twisted/_version.py\n+++ b/src/twisted/_version.py\n@@ -7,5 +7,5 @@\n \n from incremental import Version\n \n-__version__ = Version(\"Twisted\", 23, 10, 0, post=0)\n+__version__ = Version(\"Twisted\", 24, 3, 0, post=0)\n __all__ = [\"__version__\"]\ndiff --git a/src/twisted/copyright.py b/src/twisted/copyright.py\n--- a/src/twisted/copyright.py\n+++ b/src/twisted/copyright.py\n@@ -13,7 +13,7 @@\n longversion = str(_longversion)\n \n copyright = \"\"\"\\\n-Copyright (c) 2001-2023 Twisted Matrix Laboratories.\n+Copyright (c) 2001-2024 Twisted Matrix Laboratories.\n See LICENSE for details.\"\"\"\n \n disclaimer = \"\"\"\n", "issue": "Release 24.1.0 to unbreak users who use the latest PyPy\n#12084 is breaking CI for Tahoe-LAFS, so probably is breaking real-world usage for someone somewhere too. So it'd be good to have a release sooner rather than later.\n", "before_files": [{"content": "\"\"\"\nProvides Twisted version information.\n\"\"\"\n\n# This file is auto-generated! Do not edit!\n# Use `python -m incremental.update Twisted` to change this file.\n\nfrom incremental import Version\n\n__version__ = Version(\"Twisted\", 23, 10, 0, post=0)\n__all__ = [\"__version__\"]\n", "path": "src/twisted/_version.py"}, {"content": "# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nCopyright information for Twisted.\n\"\"\"\n\n\n__all__ = [\"copyright\", \"disclaimer\", \"longversion\", \"version\"]\n\nfrom twisted import __version__ as version, version as _longversion\n\nlongversion = str(_longversion)\n\ncopyright = \"\"\"\\\nCopyright (c) 2001-2023 Twisted Matrix Laboratories.\nSee LICENSE for details.\"\"\"\n\ndisclaimer = \"\"\"\nTwisted, the Framework of Your Internet\n{}\n\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\n\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\nLIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\".format(\n copyright,\n)\n", "path": "src/twisted/copyright.py"}]} | 1,120 | 229 |
gh_patches_debug_6887 | rasdani/github-patches | git_diff | sherlock-project__sherlock-911 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[site_list.py] change numbering to reduce commit size
letting the markdown renderer do the counting lets us reduce commit size and avoide possible merge conflicts.
---
```
1.
1.
1.
```
renders to:
1.
1.
1.
</issue>
<code>
[start of site_list.py]
1 """Sherlock: Supported Site Listing
2 This module generates the listing of supported sites
3 which can be found in sites.md
4 It also organizes all the sites in alphanumeric order
5 """
6 import json
7
8 pool = list()
9
10 with open("sherlock/resources/data.json", "r", encoding="utf-8") as data_file:
11 data = json.load(data_file)
12
13 with open("sites.md", "w") as site_file:
14 data_length = len(data)
15 site_file.write(f'## List Of Supported Sites ({data_length} Sites In Total!)\n')
16
17 for social_network in data:
18 url_main = data.get(social_network).get("urlMain")
19 pool.append((social_network, url_main))
20
21 index = 1
22 for social_network, url_main in pool:
23 site_file.write(f'{index}. [{social_network}]({url_main})\n')
24 index = index + 1
25
26
27 sorted_json_data = json.dumps(data, indent=2, sort_keys=True)
28
29 with open("sherlock/resources/data.json", "w") as data_file:
30 data_file.write(sorted_json_data)
31
32 print("Finished updating supported site listing!")
33
[end of site_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/site_list.py b/site_list.py
--- a/site_list.py
+++ b/site_list.py
@@ -18,11 +18,8 @@
url_main = data.get(social_network).get("urlMain")
pool.append((social_network, url_main))
- index = 1
for social_network, url_main in pool:
- site_file.write(f'{index}. [{social_network}]({url_main})\n')
- index = index + 1
-
+ site_file.write(f'1. [{social_network}]({url_main})\n')
sorted_json_data = json.dumps(data, indent=2, sort_keys=True)
| {"golden_diff": "diff --git a/site_list.py b/site_list.py\n--- a/site_list.py\n+++ b/site_list.py\n@@ -18,11 +18,8 @@\n url_main = data.get(social_network).get(\"urlMain\")\n pool.append((social_network, url_main))\n \n- index = 1\n for social_network, url_main in pool:\n- site_file.write(f'{index}. [{social_network}]({url_main})\\n')\n- index = index + 1\n-\n+ site_file.write(f'1. [{social_network}]({url_main})\\n')\n \n sorted_json_data = json.dumps(data, indent=2, sort_keys=True)\n", "issue": "[site_list.py] change numbering to reduce commit size\nletting the markdown renderer do the counting lets us reduce commit size and avoide possible merge conflicts.\r\n\r\n---\r\n\r\n```\r\n1.\r\n1.\r\n1.\r\n```\r\nrenders to:\r\n\r\n1.\r\n1.\r\n1.\n", "before_files": [{"content": "\"\"\"Sherlock: Supported Site Listing\nThis module generates the listing of supported sites\nwhich can be found in sites.md\nIt also organizes all the sites in alphanumeric order\n\"\"\"\nimport json\n\npool = list()\n\nwith open(\"sherlock/resources/data.json\", \"r\", encoding=\"utf-8\") as data_file:\n data = json.load(data_file)\n\nwith open(\"sites.md\", \"w\") as site_file:\n data_length = len(data)\n site_file.write(f'## List Of Supported Sites ({data_length} Sites In Total!)\\n')\n\n for social_network in data:\n url_main = data.get(social_network).get(\"urlMain\")\n pool.append((social_network, url_main))\n\n index = 1\n for social_network, url_main in pool:\n site_file.write(f'{index}. [{social_network}]({url_main})\\n')\n index = index + 1\n\n\nsorted_json_data = json.dumps(data, indent=2, sort_keys=True)\n\nwith open(\"sherlock/resources/data.json\", \"w\") as data_file:\n data_file.write(sorted_json_data)\n\nprint(\"Finished updating supported site listing!\")\n", "path": "site_list.py"}]} | 891 | 148 |
gh_patches_debug_10290 | rasdani/github-patches | git_diff | goauthentik__authentik-8139 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2023.10.6 - "Please select a username" after Azure AD login
**Describe your question/**
Is it now a expected behavior in 2023.10.6 version to ask every user for username input after logging in with azure ad?

In previous versions it was simply authenticating without any prompt, using email address from Azure AD as username.
Now it expects user to input username (and it leads to duplicated accounts, because users with mail as username already exist), and if you enter already existing mail as username it shows error:

I think it can be related to this fix:
https://github.com/goauthentik/authentik/pull/7970
Is it possible somehow to set this username automatically, or revert back to using email address so old user accounts will work again?
**Version and Deployment (please complete the following information):**
- authentik version: 2023.10.6
- Deployment: helm
</issue>
<code>
[start of authentik/sources/oauth/types/azure_ad.py]
1 """AzureAD OAuth2 Views"""
2 from typing import Any
3
4 from structlog.stdlib import get_logger
5
6 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
7 from authentik.sources.oauth.types.oidc import OpenIDConnectOAuth2Callback
8 from authentik.sources.oauth.types.registry import SourceType, registry
9 from authentik.sources.oauth.views.redirect import OAuthRedirect
10
11 LOGGER = get_logger()
12
13
14 class AzureADOAuthRedirect(OAuthRedirect):
15 """Azure AD OAuth2 Redirect"""
16
17 def get_additional_parameters(self, source): # pragma: no cover
18 return {
19 "scope": ["openid", "https://graph.microsoft.com/User.Read"],
20 }
21
22
23 class AzureADOAuthCallback(OpenIDConnectOAuth2Callback):
24 """AzureAD OAuth2 Callback"""
25
26 client_class = UserprofileHeaderAuthClient
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 mail = info.get("mail", None) or info.get("otherMails", [None])[0]
33 return {
34 "username": info.get("userPrincipalName"),
35 "email": mail,
36 "name": info.get("displayName"),
37 }
38
39
40 @registry.register()
41 class AzureADType(SourceType):
42 """Azure AD Type definition"""
43
44 callback_view = AzureADOAuthCallback
45 redirect_view = AzureADOAuthRedirect
46 verbose_name = "Azure AD"
47 name = "azuread"
48
49 urls_customizable = True
50
51 authorization_url = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
52 access_token_url = "https://login.microsoftonline.com/common/oauth2/v2.0/token" # nosec
53 profile_url = "https://login.microsoftonline.com/common/openid/userinfo"
54 oidc_well_known_url = (
55 "https://login.microsoftonline.com/common/.well-known/openid-configuration"
56 )
57 oidc_jwks_url = "https://login.microsoftonline.com/common/discovery/keys"
58
[end of authentik/sources/oauth/types/azure_ad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/sources/oauth/types/azure_ad.py b/authentik/sources/oauth/types/azure_ad.py
--- a/authentik/sources/oauth/types/azure_ad.py
+++ b/authentik/sources/oauth/types/azure_ad.py
@@ -50,7 +50,7 @@
authorization_url = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
access_token_url = "https://login.microsoftonline.com/common/oauth2/v2.0/token" # nosec
- profile_url = "https://login.microsoftonline.com/common/openid/userinfo"
+ profile_url = "https://graph.microsoft.com/v1.0/me"
oidc_well_known_url = (
"https://login.microsoftonline.com/common/.well-known/openid-configuration"
)
| {"golden_diff": "diff --git a/authentik/sources/oauth/types/azure_ad.py b/authentik/sources/oauth/types/azure_ad.py\n--- a/authentik/sources/oauth/types/azure_ad.py\n+++ b/authentik/sources/oauth/types/azure_ad.py\n@@ -50,7 +50,7 @@\n \n authorization_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/authorize\"\n access_token_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/token\" # nosec\n- profile_url = \"https://login.microsoftonline.com/common/openid/userinfo\"\n+ profile_url = \"https://graph.microsoft.com/v1.0/me\"\n oidc_well_known_url = (\n \"https://login.microsoftonline.com/common/.well-known/openid-configuration\"\n )\n", "issue": "2023.10.6 - \"Please select a username\" after Azure AD login\n**Describe your question/**\r\n\r\nIs it now a expected behavior in 2023.10.6 version to ask every user for username input after logging in with azure ad?\r\n\r\n\r\nIn previous versions it was simply authenticating without any prompt, using email address from Azure AD as username.\r\n\r\nNow it expects user to input username (and it leads to duplicated accounts, because users with mail as username already exist), and if you enter already existing mail as username it shows error:\r\n\r\n\r\nI think it can be related to this fix:\r\nhttps://github.com/goauthentik/authentik/pull/7970\r\n\r\nIs it possible somehow to set this username automatically, or revert back to using email address so old user accounts will work again?\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.10.6\r\n- Deployment: helm\r\n\r\n\n", "before_files": [{"content": "\"\"\"AzureAD OAuth2 Views\"\"\"\nfrom typing import Any\n\nfrom structlog.stdlib import get_logger\n\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\nfrom authentik.sources.oauth.types.oidc import OpenIDConnectOAuth2Callback\nfrom authentik.sources.oauth.types.registry import SourceType, registry\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\nLOGGER = get_logger()\n\n\nclass AzureADOAuthRedirect(OAuthRedirect):\n \"\"\"Azure AD OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source): # pragma: no cover\n return {\n \"scope\": [\"openid\", \"https://graph.microsoft.com/User.Read\"],\n }\n\n\nclass AzureADOAuthCallback(OpenIDConnectOAuth2Callback):\n \"\"\"AzureAD OAuth2 Callback\"\"\"\n\n client_class = UserprofileHeaderAuthClient\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n mail = info.get(\"mail\", None) or info.get(\"otherMails\", [None])[0]\n return {\n \"username\": info.get(\"userPrincipalName\"),\n \"email\": mail,\n \"name\": info.get(\"displayName\"),\n }\n\n\[email protected]()\nclass AzureADType(SourceType):\n \"\"\"Azure AD Type definition\"\"\"\n\n callback_view = AzureADOAuthCallback\n redirect_view = AzureADOAuthRedirect\n verbose_name = \"Azure AD\"\n name = \"azuread\"\n\n urls_customizable = True\n\n authorization_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/authorize\"\n access_token_url = \"https://login.microsoftonline.com/common/oauth2/v2.0/token\" # nosec\n profile_url = \"https://login.microsoftonline.com/common/openid/userinfo\"\n oidc_well_known_url = (\n \"https://login.microsoftonline.com/common/.well-known/openid-configuration\"\n )\n oidc_jwks_url = \"https://login.microsoftonline.com/common/discovery/keys\"\n", "path": "authentik/sources/oauth/types/azure_ad.py"}]} | 1,414 | 176 |
gh_patches_debug_15565 | rasdani/github-patches | git_diff | deepset-ai__haystack-7796 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[V2.2.0] ChatPromptBuilder is not exported
**Describe the bug**
v2.2.0 => ChatPromptBuilder is not exported
**Error message**
<img width="1102" alt="image" src="https://github.com/deepset-ai/haystack/assets/15232298/b9372767-42f5-464c-832f-cca38a00cf60">
</issue>
<code>
[start of haystack/components/builders/__init__.py]
1 # SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
2 #
3 # SPDX-License-Identifier: Apache-2.0
4
5 from haystack.components.builders.answer_builder import AnswerBuilder
6 from haystack.components.builders.dynamic_chat_prompt_builder import DynamicChatPromptBuilder
7 from haystack.components.builders.dynamic_prompt_builder import DynamicPromptBuilder
8 from haystack.components.builders.prompt_builder import PromptBuilder
9
10 __all__ = ["AnswerBuilder", "PromptBuilder", "DynamicPromptBuilder", "DynamicChatPromptBuilder"]
11
[end of haystack/components/builders/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/components/builders/__init__.py b/haystack/components/builders/__init__.py
--- a/haystack/components/builders/__init__.py
+++ b/haystack/components/builders/__init__.py
@@ -3,8 +3,9 @@
# SPDX-License-Identifier: Apache-2.0
from haystack.components.builders.answer_builder import AnswerBuilder
+from haystack.components.builders.chat_prompt_builder import ChatPromptBuilder
from haystack.components.builders.dynamic_chat_prompt_builder import DynamicChatPromptBuilder
from haystack.components.builders.dynamic_prompt_builder import DynamicPromptBuilder
from haystack.components.builders.prompt_builder import PromptBuilder
-__all__ = ["AnswerBuilder", "PromptBuilder", "DynamicPromptBuilder", "DynamicChatPromptBuilder"]
+__all__ = ["AnswerBuilder", "PromptBuilder", "DynamicPromptBuilder", "DynamicChatPromptBuilder", "ChatPromptBuilder"]
| {"golden_diff": "diff --git a/haystack/components/builders/__init__.py b/haystack/components/builders/__init__.py\n--- a/haystack/components/builders/__init__.py\n+++ b/haystack/components/builders/__init__.py\n@@ -3,8 +3,9 @@\n # SPDX-License-Identifier: Apache-2.0\n \n from haystack.components.builders.answer_builder import AnswerBuilder\n+from haystack.components.builders.chat_prompt_builder import ChatPromptBuilder\n from haystack.components.builders.dynamic_chat_prompt_builder import DynamicChatPromptBuilder\n from haystack.components.builders.dynamic_prompt_builder import DynamicPromptBuilder\n from haystack.components.builders.prompt_builder import PromptBuilder\n \n-__all__ = [\"AnswerBuilder\", \"PromptBuilder\", \"DynamicPromptBuilder\", \"DynamicChatPromptBuilder\"]\n+__all__ = [\"AnswerBuilder\", \"PromptBuilder\", \"DynamicPromptBuilder\", \"DynamicChatPromptBuilder\", \"ChatPromptBuilder\"]\n", "issue": "[V2.2.0] ChatPromptBuilder is not export\n**Describe the bug**\r\nv2.2.0 => ChatPromptBuilder is not export\r\n\r\n**Error message**\r\n<img width=\"1102\" alt=\"image\" src=\"https://github.com/deepset-ai/haystack/assets/15232298/b9372767-42f5-464c-832f-cca38a00cf60\">\r\n\r\n\n", "before_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom haystack.components.builders.answer_builder import AnswerBuilder\nfrom haystack.components.builders.dynamic_chat_prompt_builder import DynamicChatPromptBuilder\nfrom haystack.components.builders.dynamic_prompt_builder import DynamicPromptBuilder\nfrom haystack.components.builders.prompt_builder import PromptBuilder\n\n__all__ = [\"AnswerBuilder\", \"PromptBuilder\", \"DynamicPromptBuilder\", \"DynamicChatPromptBuilder\"]\n", "path": "haystack/components/builders/__init__.py"}]} | 770 | 186 |
gh_patches_debug_36530 | rasdani/github-patches | git_diff | getsentry__sentry-python-541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
0.12.0 breaks Django function-based middleware
Similar to #504, but a different stack trace:
AttributeError: 'method-wrapper' object has no attribute '__module__'
File "django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
According to sentry (kind-of neat how I get this in this case...), the `get_response` object at that point in time is `<sentry_sdk.integrations.django.middleware.AuditMiddleware object at 0x7f37d64d4450>`.
This problem only occurs in 0.12.0 and newer, and with Django 1.11.x
</issue>
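A hypothetical Python 2.7 reproduction of the traceback above (not part of the original report); it uses only the standard library and shows why `functools.wraps` chokes on the `__call__` slot of a plain function.

```python
from functools import wraps

def middleware(get_response):          # a Django-style function-based middleware factory
    def inner(request):
        return get_response(request)
    return inner

handler = middleware(lambda request: None)

# handler.__call__ is a 'method-wrapper' object; on Python 2.7 it has no __module__,
# and update_wrapper copies that attribute unconditionally, so this line raises
# AttributeError there. Python 3 skips missing attributes and succeeds.
wrapped = wraps(handler.__call__)(lambda *args, **kwargs: None)
```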
<code>
[start of sentry_sdk/integrations/django/middleware.py]
1 """
2 Create spans from Django middleware invocations
3 """
4
5 from functools import wraps
6
7 from django import VERSION as DJANGO_VERSION
8
9 from sentry_sdk import Hub
10 from sentry_sdk.utils import ContextVar, transaction_from_function
11
12 from sentry_sdk._types import MYPY
13
14 if MYPY:
15 from typing import Any
16 from typing import Callable
17 from typing import TypeVar
18
19 F = TypeVar("F", bound=Callable[..., Any])
20
21 _import_string_should_wrap_middleware = ContextVar(
22 "import_string_should_wrap_middleware"
23 )
24
25 if DJANGO_VERSION < (1, 7):
26 import_string_name = "import_by_path"
27 else:
28 import_string_name = "import_string"
29
30
31 def patch_django_middlewares():
32 # type: () -> None
33 from django.core.handlers import base
34
35 old_import_string = getattr(base, import_string_name)
36
37 def sentry_patched_import_string(dotted_path):
38 # type: (str) -> Any
39 rv = old_import_string(dotted_path)
40
41 if _import_string_should_wrap_middleware.get(None):
42 rv = _wrap_middleware(rv, dotted_path)
43
44 return rv
45
46 setattr(base, import_string_name, sentry_patched_import_string)
47
48 old_load_middleware = base.BaseHandler.load_middleware
49
50 def sentry_patched_load_middleware(self):
51 # type: (base.BaseHandler) -> Any
52 _import_string_should_wrap_middleware.set(True)
53 try:
54 return old_load_middleware(self)
55 finally:
56 _import_string_should_wrap_middleware.set(False)
57
58 base.BaseHandler.load_middleware = sentry_patched_load_middleware
59
60
61 def _wrap_middleware(middleware, middleware_name):
62 # type: (Any, str) -> Any
63 from sentry_sdk.integrations.django import DjangoIntegration
64
65 def _get_wrapped_method(old_method):
66 # type: (F) -> F
67 @wraps(old_method)
68 def sentry_wrapped_method(*args, **kwargs):
69 # type: (*Any, **Any) -> Any
70 hub = Hub.current
71 integration = hub.get_integration(DjangoIntegration)
72 if integration is None or not integration.middleware_spans:
73 return old_method(*args, **kwargs)
74
75 function_name = transaction_from_function(old_method)
76
77 description = middleware_name
78 function_basename = getattr(old_method, "__name__", None)
79 if function_basename:
80 description = "{}.{}".format(description, function_basename)
81
82 with hub.start_span(
83 op="django.middleware", description=description
84 ) as span:
85 span.set_tag("django.function_name", function_name)
86 span.set_tag("django.middleware_name", middleware_name)
87 return old_method(*args, **kwargs)
88
89 return sentry_wrapped_method # type: ignore
90
91 class SentryWrappingMiddleware(object):
92 def __init__(self, *args, **kwargs):
93 # type: (*Any, **Any) -> None
94 self._inner = middleware(*args, **kwargs)
95 self._call_method = None
96
97 # We need correct behavior for `hasattr()`, which we can only determine
98 # when we have an instance of the middleware we're wrapping.
99 def __getattr__(self, method_name):
100 # type: (str) -> Any
101 if method_name not in (
102 "process_request",
103 "process_view",
104 "process_template_response",
105 "process_response",
106 "process_exception",
107 ):
108 raise AttributeError()
109
110 old_method = getattr(self._inner, method_name)
111 rv = _get_wrapped_method(old_method)
112 self.__dict__[method_name] = rv
113 return rv
114
115 def __call__(self, *args, **kwargs):
116 # type: (*Any, **Any) -> Any
117 f = self._call_method
118 if f is None:
119 self._call_method = f = _get_wrapped_method(self._inner.__call__)
120 return f(*args, **kwargs)
121
122 if hasattr(middleware, "__name__"):
123 SentryWrappingMiddleware.__name__ = middleware.__name__
124
125 return SentryWrappingMiddleware
126
[end of sentry_sdk/integrations/django/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/django/middleware.py b/sentry_sdk/integrations/django/middleware.py
--- a/sentry_sdk/integrations/django/middleware.py
+++ b/sentry_sdk/integrations/django/middleware.py
@@ -7,7 +7,11 @@
from django import VERSION as DJANGO_VERSION
from sentry_sdk import Hub
-from sentry_sdk.utils import ContextVar, transaction_from_function
+from sentry_sdk.utils import (
+ ContextVar,
+ transaction_from_function,
+ capture_internal_exceptions,
+)
from sentry_sdk._types import MYPY
@@ -64,29 +68,36 @@
def _get_wrapped_method(old_method):
# type: (F) -> F
- @wraps(old_method)
- def sentry_wrapped_method(*args, **kwargs):
- # type: (*Any, **Any) -> Any
- hub = Hub.current
- integration = hub.get_integration(DjangoIntegration)
- if integration is None or not integration.middleware_spans:
- return old_method(*args, **kwargs)
-
- function_name = transaction_from_function(old_method)
-
- description = middleware_name
- function_basename = getattr(old_method, "__name__", None)
- if function_basename:
- description = "{}.{}".format(description, function_basename)
-
- with hub.start_span(
- op="django.middleware", description=description
- ) as span:
- span.set_tag("django.function_name", function_name)
- span.set_tag("django.middleware_name", middleware_name)
- return old_method(*args, **kwargs)
-
- return sentry_wrapped_method # type: ignore
+ with capture_internal_exceptions():
+
+ def sentry_wrapped_method(*args, **kwargs):
+ # type: (*Any, **Any) -> Any
+ hub = Hub.current
+ integration = hub.get_integration(DjangoIntegration)
+ if integration is None or not integration.middleware_spans:
+ return old_method(*args, **kwargs)
+
+ function_name = transaction_from_function(old_method)
+
+ description = middleware_name
+ function_basename = getattr(old_method, "__name__", None)
+ if function_basename:
+ description = "{}.{}".format(description, function_basename)
+
+ with hub.start_span(
+ op="django.middleware", description=description
+ ) as span:
+ span.set_tag("django.function_name", function_name)
+ span.set_tag("django.middleware_name", middleware_name)
+ return old_method(*args, **kwargs)
+
+ try:
+ # fails for __call__ of function on Python 2 (see py2.7-django-1.11)
+ return wraps(old_method)(sentry_wrapped_method) # type: ignore
+ except Exception:
+ return sentry_wrapped_method # type: ignore
+
+ return old_method
class SentryWrappingMiddleware(object):
def __init__(self, *args, **kwargs):
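The essential change in the patch above, isolated as a standalone sketch; the helper name is illustrative and not part of the SDK's actual API.

```python
from functools import wraps

def best_effort_wraps(old_method, new_method):
    """Copy metadata from old_method when possible; otherwise keep new_method as-is."""
    try:
        # Fails e.g. for the __call__ method-wrapper of a plain function on Python 2,
        # which lacks __module__ and related attributes.
        return wraps(old_method)(new_method)
    except Exception:
        return new_method
```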
| {"golden_diff": "diff --git a/sentry_sdk/integrations/django/middleware.py b/sentry_sdk/integrations/django/middleware.py\n--- a/sentry_sdk/integrations/django/middleware.py\n+++ b/sentry_sdk/integrations/django/middleware.py\n@@ -7,7 +7,11 @@\n from django import VERSION as DJANGO_VERSION\n \n from sentry_sdk import Hub\n-from sentry_sdk.utils import ContextVar, transaction_from_function\n+from sentry_sdk.utils import (\n+ ContextVar,\n+ transaction_from_function,\n+ capture_internal_exceptions,\n+)\n \n from sentry_sdk._types import MYPY\n \n@@ -64,29 +68,36 @@\n \n def _get_wrapped_method(old_method):\n # type: (F) -> F\n- @wraps(old_method)\n- def sentry_wrapped_method(*args, **kwargs):\n- # type: (*Any, **Any) -> Any\n- hub = Hub.current\n- integration = hub.get_integration(DjangoIntegration)\n- if integration is None or not integration.middleware_spans:\n- return old_method(*args, **kwargs)\n-\n- function_name = transaction_from_function(old_method)\n-\n- description = middleware_name\n- function_basename = getattr(old_method, \"__name__\", None)\n- if function_basename:\n- description = \"{}.{}\".format(description, function_basename)\n-\n- with hub.start_span(\n- op=\"django.middleware\", description=description\n- ) as span:\n- span.set_tag(\"django.function_name\", function_name)\n- span.set_tag(\"django.middleware_name\", middleware_name)\n- return old_method(*args, **kwargs)\n-\n- return sentry_wrapped_method # type: ignore\n+ with capture_internal_exceptions():\n+\n+ def sentry_wrapped_method(*args, **kwargs):\n+ # type: (*Any, **Any) -> Any\n+ hub = Hub.current\n+ integration = hub.get_integration(DjangoIntegration)\n+ if integration is None or not integration.middleware_spans:\n+ return old_method(*args, **kwargs)\n+\n+ function_name = transaction_from_function(old_method)\n+\n+ description = middleware_name\n+ function_basename = getattr(old_method, \"__name__\", None)\n+ if function_basename:\n+ description = \"{}.{}\".format(description, function_basename)\n+\n+ with hub.start_span(\n+ op=\"django.middleware\", description=description\n+ ) as span:\n+ span.set_tag(\"django.function_name\", function_name)\n+ span.set_tag(\"django.middleware_name\", middleware_name)\n+ return old_method(*args, **kwargs)\n+\n+ try:\n+ # fails for __call__ of function on Python 2 (see py2.7-django-1.11)\n+ return wraps(old_method)(sentry_wrapped_method) # type: ignore\n+ except Exception:\n+ return sentry_wrapped_method # type: ignore\n+\n+ return old_method\n \n class SentryWrappingMiddleware(object):\n def __init__(self, *args, **kwargs):\n", "issue": "0.12.0 breaks Django function-based middleware\nSimilar to #504, but a different stack trace:\r\n\r\n AttributeError: 'method-wrapper' object has no attribute '__module__'\r\n File \"django/core/handlers/exception.py\", line 41, in inner\r\n response = get_response(request)\r\n File \"functools.py\", line 33, in update_wrapper\r\n setattr(wrapper, attr, getattr(wrapped, attr))\r\n\r\nAccording to sentry (kind-of neat how I get this in this case...), the `get_response` object at that point in time is `<sentry_sdk.integrations.django.middleware.AuditMiddleware object at 0x7f37d64d4450>`.\r\n\r\nThis problem only occurs in 0.12.0 and newer, and with Django 1.11.x\n", "before_files": [{"content": "\"\"\"\nCreate spans from Django middleware invocations\n\"\"\"\n\nfrom functools import wraps\n\nfrom django import VERSION as DJANGO_VERSION\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.utils import ContextVar, transaction_from_function\n\nfrom 
sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import TypeVar\n\n F = TypeVar(\"F\", bound=Callable[..., Any])\n\n_import_string_should_wrap_middleware = ContextVar(\n \"import_string_should_wrap_middleware\"\n)\n\nif DJANGO_VERSION < (1, 7):\n import_string_name = \"import_by_path\"\nelse:\n import_string_name = \"import_string\"\n\n\ndef patch_django_middlewares():\n # type: () -> None\n from django.core.handlers import base\n\n old_import_string = getattr(base, import_string_name)\n\n def sentry_patched_import_string(dotted_path):\n # type: (str) -> Any\n rv = old_import_string(dotted_path)\n\n if _import_string_should_wrap_middleware.get(None):\n rv = _wrap_middleware(rv, dotted_path)\n\n return rv\n\n setattr(base, import_string_name, sentry_patched_import_string)\n\n old_load_middleware = base.BaseHandler.load_middleware\n\n def sentry_patched_load_middleware(self):\n # type: (base.BaseHandler) -> Any\n _import_string_should_wrap_middleware.set(True)\n try:\n return old_load_middleware(self)\n finally:\n _import_string_should_wrap_middleware.set(False)\n\n base.BaseHandler.load_middleware = sentry_patched_load_middleware\n\n\ndef _wrap_middleware(middleware, middleware_name):\n # type: (Any, str) -> Any\n from sentry_sdk.integrations.django import DjangoIntegration\n\n def _get_wrapped_method(old_method):\n # type: (F) -> F\n @wraps(old_method)\n def sentry_wrapped_method(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is None or not integration.middleware_spans:\n return old_method(*args, **kwargs)\n\n function_name = transaction_from_function(old_method)\n\n description = middleware_name\n function_basename = getattr(old_method, \"__name__\", None)\n if function_basename:\n description = \"{}.{}\".format(description, function_basename)\n\n with hub.start_span(\n op=\"django.middleware\", description=description\n ) as span:\n span.set_tag(\"django.function_name\", function_name)\n span.set_tag(\"django.middleware_name\", middleware_name)\n return old_method(*args, **kwargs)\n\n return sentry_wrapped_method # type: ignore\n\n class SentryWrappingMiddleware(object):\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n self._inner = middleware(*args, **kwargs)\n self._call_method = None\n\n # We need correct behavior for `hasattr()`, which we can only determine\n # when we have an instance of the middleware we're wrapping.\n def __getattr__(self, method_name):\n # type: (str) -> Any\n if method_name not in (\n \"process_request\",\n \"process_view\",\n \"process_template_response\",\n \"process_response\",\n \"process_exception\",\n ):\n raise AttributeError()\n\n old_method = getattr(self._inner, method_name)\n rv = _get_wrapped_method(old_method)\n self.__dict__[method_name] = rv\n return rv\n\n def __call__(self, *args, **kwargs):\n # type: (*Any, **Any) -> Any\n f = self._call_method\n if f is None:\n self._call_method = f = _get_wrapped_method(self._inner.__call__)\n return f(*args, **kwargs)\n\n if hasattr(middleware, \"__name__\"):\n SentryWrappingMiddleware.__name__ = middleware.__name__\n\n return SentryWrappingMiddleware\n", "path": "sentry_sdk/integrations/django/middleware.py"}]} | 1,899 | 673 |
gh_patches_debug_28409 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-182 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
recycleapp_be not working for some addresses
When I enter my address into the configuration.yaml, I receive this error on restart:
```
fetch failed for source Recycle!: Traceback (most recent call last):
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py", line 116, in fetch
    entries = self._source.fetch()
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py", line 79, in fetch
    entries.append(Collection(date, item["fraction"]["name"]["en"]))
KeyError: 'name'
```
When I use the example address or some other addresses, everything works fine. Is it a problem with my city? Other addresses in this city also don't work, even though those addresses work on [Recycle!](https://recycleapp.be/home).
This is what I have in configuration.yaml:
```
waste_collection_schedule:
sources:
- name: recycleapp_be
args:
postcode: 3001
street: Waversebaan
house_number: 276
```
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py]
1 import logging
2 from datetime import datetime, timedelta
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Recycle!"
8 DESCRIPTION = "Source for RecycleApp.be"
9 URL = "https://www.recycleapp.be"
10 TEST_CASES = {
11 "1140 Evere, Bazellaan 1": {
12 "postcode": 1140,
13 "street": "Bazellaan",
14 "house_number": 1,
15 }
16 }
17
18 _LOGGER = logging.getLogger(__name__)
19
20
21 class Source:
22 def __init__(self, postcode, street, house_number):
23 self._postcode = postcode
24 self._street = street
25 self._house_number = house_number
26
27 def fetch(self):
28 url = "https://recycleapp.be/api/app/v1"
29 headers = {
30 "x-secret": "Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW",
31 "x-consumer": "recycleapp.be",
32 "User-Agent": "",
33 "Authorization": "",
34 }
35 r = requests.get(f"{url}/access-token", headers=headers)
36 headers["Authorization"] = r.json()["accessToken"]
37
38 params = {"q": self._postcode}
39 r = requests.get(f"{url}/zipcodes", params=params, headers=headers)
40 if r.status_code != 200:
41 _LOGGER.error("Get zip code failed")
42 return []
43 zipcodeId = r.json()["items"][0]["id"]
44
45 params = {"q": self._street, "zipcodes": zipcodeId}
46 r = requests.get(f"{url}/streets", params=params, headers=headers)
47 if r.status_code != 200:
48 _LOGGER.error("Get street id failed")
49 return []
50
51 for item in r.json()["items"]:
52 if item["name"] == self._street:
53 streetId = item["id"]
54 if streetId is None:
55 streetId = r.json()["items"][0]["id"]
56
57 now = datetime.now()
58 fromDate = now.strftime("%Y-%m-%d")
59 untilDate = (now + timedelta(days=365)).strftime("%Y-%m-%d")
60 params = {
61 "zipcodeId": zipcodeId,
62 "streetId": streetId,
63 "houseNumber": self._house_number,
64 "fromDate": fromDate,
65 "untilDate": untilDate,
66 # "size":100,
67 }
68 r = requests.get(f"{url}/collections", params=params, headers=headers)
69 if r.status_code != 200:
70 _LOGGER.error("Get data failed")
71 return []
72
73 entries = []
74 for item in r.json()["items"]:
75 if "exception" in item and "replacedBy" in item["exception"]:
76 continue
77
78 date = datetime.strptime(item["timestamp"], "%Y-%m-%dT%H:%M:%S.000Z").date()
79 entries.append(Collection(date, item["fraction"]["name"]["en"]))
80 return entries
81
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
@@ -12,17 +12,29 @@
"postcode": 1140,
"street": "Bazellaan",
"house_number": 1,
- }
+ },
+ "3001, Waversebaan 276 with events": {
+ "postcode": 3001,
+ "street": "Waversebaan",
+ "house_number": 276,
+ },
+ "3001, Waversebaan 276 without events": {
+ "postcode": 3001,
+ "street": "Waversebaan",
+ "house_number": 276,
+ "add_events": False,
+ },
}
_LOGGER = logging.getLogger(__name__)
class Source:
- def __init__(self, postcode, street, house_number):
+ def __init__(self, postcode, street, house_number, add_events=True):
self._postcode = postcode
self._street = street
self._house_number = house_number
+ self._add_events = add_events
def fetch(self):
url = "https://recycleapp.be/api/app/v1"
@@ -76,5 +88,9 @@
continue
date = datetime.strptime(item["timestamp"], "%Y-%m-%dT%H:%M:%S.000Z").date()
- entries.append(Collection(date, item["fraction"]["name"]["en"]))
+ if item["type"] == "collection":
+ entries.append(Collection(date, item["fraction"]["name"]["en"]))
+ elif item["type"] == "event" and self._add_events:
+ entries.append(Collection(date, item["event"]["title"]["en"]))
+
return entries
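A hypothetical usage of the patched source; the import path is inferred from the file location in this record and may differ in a real Home Assistant install, and `fetch()` performs live HTTP requests to recycleapp.be.

```python
from waste_collection_schedule.source.recycleapp_be import Source  # path assumed

# add_events defaults to True, so existing configurations keep their behaviour;
# False skips "event" items, which carry no fraction name and caused the KeyError above.
source = Source(postcode=3001, street="Waversebaan", house_number=276, add_events=False)
for entry in source.fetch():  # live network call
    print(entry)
```

In `configuration.yaml` the same switch would be an extra `add_events: false` entry under `args` (an assumption based on how the other arguments map to the `Source` constructor).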
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n@@ -12,17 +12,29 @@\n \"postcode\": 1140,\n \"street\": \"Bazellaan\",\n \"house_number\": 1,\n- }\n+ },\n+ \"3001, Waversebaan 276 with events\": {\n+ \"postcode\": 3001,\n+ \"street\": \"Waversebaan\",\n+ \"house_number\": 276,\n+ },\n+ \"3001, Waversebaan 276 without events\": {\n+ \"postcode\": 3001,\n+ \"street\": \"Waversebaan\",\n+ \"house_number\": 276,\n+ \"add_events\": False,\n+ },\n }\n \n _LOGGER = logging.getLogger(__name__)\n \n \n class Source:\n- def __init__(self, postcode, street, house_number):\n+ def __init__(self, postcode, street, house_number, add_events=True):\n self._postcode = postcode\n self._street = street\n self._house_number = house_number\n+ self._add_events = add_events\n \n def fetch(self):\n url = \"https://recycleapp.be/api/app/v1\"\n@@ -76,5 +88,9 @@\n continue\n \n date = datetime.strptime(item[\"timestamp\"], \"%Y-%m-%dT%H:%M:%S.000Z\").date()\n- entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n+ if item[\"type\"] == \"collection\":\n+ entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n+ elif item[\"type\"] == \"event\" and self._add_events:\n+ entries.append(Collection(date, item[\"event\"][\"title\"][\"en\"]))\n+\n return entries\n", "issue": "recycleapp_be not working for some addresses\nwhen I enter my address into the configuration.yaml I receive this error on restart:\r\n```\r\nfetch failed for source Recycle!: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\", \r\nline 116, in fetch entries = self._source.fetch() File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\", \r\nline 79, in fetch entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"])) KeyError: 'name'\r\n```\r\nwhen I use the example address or some other addresses everything works fine. Is it a problem with my city? 
Because other addresses of this city also don't work, even though those addresses work on [Recycle!](https://recycleapp.be/home).\r\nthis is what I have in configuration.yaml\r\n```\r\nwaste_collection_schedule:\r\n sources:\r\n - name: recycleapp_be\r\n args:\r\n postcode: 3001\r\n street: Waversebaan\r\n house_number: 276\r\n```\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timedelta\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Recycle!\"\nDESCRIPTION = \"Source for RecycleApp.be\"\nURL = \"https://www.recycleapp.be\"\nTEST_CASES = {\n \"1140 Evere, Bazellaan 1\": {\n \"postcode\": 1140,\n \"street\": \"Bazellaan\",\n \"house_number\": 1,\n }\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, postcode, street, house_number):\n self._postcode = postcode\n self._street = street\n self._house_number = house_number\n\n def fetch(self):\n url = \"https://recycleapp.be/api/app/v1\"\n headers = {\n \"x-secret\": \"Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW\",\n \"x-consumer\": \"recycleapp.be\",\n \"User-Agent\": \"\",\n \"Authorization\": \"\",\n }\n r = requests.get(f\"{url}/access-token\", headers=headers)\n headers[\"Authorization\"] = r.json()[\"accessToken\"]\n\n params = {\"q\": self._postcode}\n r = requests.get(f\"{url}/zipcodes\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get zip code failed\")\n return []\n zipcodeId = r.json()[\"items\"][0][\"id\"]\n\n params = {\"q\": self._street, \"zipcodes\": zipcodeId}\n r = requests.get(f\"{url}/streets\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get street id failed\")\n return []\n\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n if streetId is None:\n streetId = r.json()[\"items\"][0][\"id\"]\n\n now = datetime.now()\n fromDate = now.strftime(\"%Y-%m-%d\")\n untilDate = (now + timedelta(days=365)).strftime(\"%Y-%m-%d\")\n params = {\n \"zipcodeId\": zipcodeId,\n \"streetId\": streetId,\n \"houseNumber\": self._house_number,\n \"fromDate\": fromDate,\n \"untilDate\": untilDate,\n # \"size\":100,\n }\n r = requests.get(f\"{url}/collections\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get data failed\")\n return []\n\n entries = []\n for item in r.json()[\"items\"]:\n if \"exception\" in item and \"replacedBy\" in item[\"exception\"]:\n continue\n\n date = datetime.strptime(item[\"timestamp\"], \"%Y-%m-%dT%H:%M:%S.000Z\").date()\n entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py"}]} | 1,722 | 467 |
gh_patches_debug_27458 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-4889 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Py3.6: Unable to find ".../site-packages/importlib_resources/version.txt"
Hello,
On the latest version of PyInstaller, the hook for importlib_resources looks for a non-existent version.txt file. That file is not provided by the latest version, 1.2.0, of the backport: https://gitlab.com/python-devs/importlib_resources
</issue>
<code>
[start of PyInstaller/hooks/hook-importlib_resources.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2019-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11 """
12 `importlib_resources` is a backport of the 3.7+ module `importlib.resources`
13 """
14
15 import os
16 from PyInstaller.compat import is_py37
17 from PyInstaller.utils.hooks import get_module_file_attribute
18
19 # Include the version.txt file, used to set __version__
20 res_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))
21 datas = [
22 (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),
23 ]
24
25 # Replicate the module's version checks to exclude unused modules.
26 if is_py37:
27 # Stdlib now has the implmentation of this, so the backports
28 # aren't used at all
29 excludedmodules = [
30 'importlib_resources._py2',
31 'importlib_resources._py3',
32 ]
33 else:
34 excludedmodules = ['importlib_resources._py2']
35
[end of PyInstaller/hooks/hook-importlib_resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/hook-importlib_resources.py b/PyInstaller/hooks/hook-importlib_resources.py
--- a/PyInstaller/hooks/hook-importlib_resources.py
+++ b/PyInstaller/hooks/hook-importlib_resources.py
@@ -9,26 +9,25 @@
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
#-----------------------------------------------------------------------------
"""
-`importlib_resources` is a backport of the 3.7+ module `importlib.resources`
+`importlib_resources` is a backport of the 3.9+ module `importlib.resources`
"""
import os
-from PyInstaller.compat import is_py37
-from PyInstaller.utils.hooks import get_module_file_attribute
+from PyInstaller.utils.hooks import get_module_file_attribute, \
+ is_module_satisfies, copy_metadata
-# Include the version.txt file, used to set __version__
-res_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))
-datas = [
- (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),
-]
-
-# Replicate the module's version checks to exclude unused modules.
-if is_py37:
- # Stdlib now has the implmentation of this, so the backports
- # aren't used at all
- excludedmodules = [
- 'importlib_resources._py2',
- 'importlib_resources._py3',
- ]
+if is_module_satisfies("importlib_resources >= 1.2.0"):
+ # since 1.2.0 importlib.metadata is used
+ datas = copy_metadata('importlib_resources')
else:
- excludedmodules = ['importlib_resources._py2']
+ # include the version.txt file, used to set __version__
+ res_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))
+ datas = [
+ (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),
+ ]
+
+if is_module_satisfies("importlib_resources >= 1.3.1"):
+ hiddenimports = ['importlib_resources.trees']
+
+# this is only required for python2 support
+excludedimports = ['importlib_resources._py2']
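The version gate the new hook relies on, shown as a standalone sketch; it assumes PyInstaller's hook utilities are importable in the build environment.

```python
from PyInstaller.utils.hooks import copy_metadata, is_module_satisfies

if is_module_satisfies("importlib_resources >= 1.2.0"):
    # 1.2.0 dropped version.txt and reads its version through importlib.metadata,
    # so the package's dist-info metadata is bundled instead of the text file.
    datas = copy_metadata("importlib_resources")
    print(datas)  # a list of (source, destination) tuples for the dist-info directory
```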
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-importlib_resources.py b/PyInstaller/hooks/hook-importlib_resources.py\n--- a/PyInstaller/hooks/hook-importlib_resources.py\n+++ b/PyInstaller/hooks/hook-importlib_resources.py\n@@ -9,26 +9,25 @@\n # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n #-----------------------------------------------------------------------------\n \"\"\"\n-`importlib_resources` is a backport of the 3.7+ module `importlib.resources`\n+`importlib_resources` is a backport of the 3.9+ module `importlib.resources`\n \"\"\"\n \n import os\n-from PyInstaller.compat import is_py37\n-from PyInstaller.utils.hooks import get_module_file_attribute\n+from PyInstaller.utils.hooks import get_module_file_attribute, \\\n+ is_module_satisfies, copy_metadata\n \n-# Include the version.txt file, used to set __version__\n-res_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))\n-datas = [\n- (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),\n-]\n-\n-# Replicate the module's version checks to exclude unused modules.\n-if is_py37:\n- # Stdlib now has the implmentation of this, so the backports\n- # aren't used at all\n- excludedmodules = [\n- 'importlib_resources._py2',\n- 'importlib_resources._py3',\n- ]\n+if is_module_satisfies(\"importlib_resources >= 1.2.0\"):\n+ # since 1.2.0 importlib.metadata is used\n+ datas = copy_metadata('importlib_resources')\n else:\n- excludedmodules = ['importlib_resources._py2']\n+ # include the version.txt file, used to set __version__\n+ res_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))\n+ datas = [\n+ (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),\n+ ]\n+\n+if is_module_satisfies(\"importlib_resources >= 1.3.1\"):\n+ hiddenimports = ['importlib_resources.trees']\n+\n+# this is only required for python2 support\n+excludedimports = ['importlib_resources._py2']\n", "issue": "Py3.6: Unable to find .../site-packages/importlib_resources/version.txt\"\nHello,\r\n\r\nOn latest version of pyinstaller, the hook for importlib_resource seems to look for a non existing version.txt file. 
It is not provided by the latest version 1.2.0 of the backport: https://gitlab.com/python-devs/importlib_resources\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2019-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\"\"\"\n`importlib_resources` is a backport of the 3.7+ module `importlib.resources`\n\"\"\"\n\nimport os\nfrom PyInstaller.compat import is_py37\nfrom PyInstaller.utils.hooks import get_module_file_attribute\n\n# Include the version.txt file, used to set __version__\nres_loc = os.path.dirname(get_module_file_attribute('importlib_resources'))\ndatas = [\n (os.path.join(res_loc, 'version.txt'), 'importlib_resources'),\n]\n\n# Replicate the module's version checks to exclude unused modules.\nif is_py37:\n # Stdlib now has the implmentation of this, so the backports\n # aren't used at all\n excludedmodules = [\n 'importlib_resources._py2',\n 'importlib_resources._py3',\n ]\nelse:\n excludedmodules = ['importlib_resources._py2']\n", "path": "PyInstaller/hooks/hook-importlib_resources.py"}]} | 957 | 492 |
gh_patches_debug_27546 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10822 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, a file may be poorly written because it lacks any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
    Traceback (most recent call last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
</issue>
<code>
[start of maths/binomial_coefficient.py]
1 def binomial_coefficient(n: int, r: int) -> int:
2 """
3 Find binomial coefficient using pascals triangle.
4
5 >>> binomial_coefficient(10, 5)
6 252
7 """
8 c = [0 for i in range(r + 1)]
9 # nc0 = 1
10 c[0] = 1
11 for i in range(1, n + 1):
12 # to compute current row from previous row.
13 j = min(i, r)
14 while j > 0:
15 c[j] += c[j - 1]
16 j -= 1
17 return c[r]
18
19
20 print(binomial_coefficient(n=10, r=5))
21
[end of maths/binomial_coefficient.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/maths/binomial_coefficient.py b/maths/binomial_coefficient.py
--- a/maths/binomial_coefficient.py
+++ b/maths/binomial_coefficient.py
@@ -1,10 +1,48 @@
def binomial_coefficient(n: int, r: int) -> int:
"""
- Find binomial coefficient using pascals triangle.
+ Find binomial coefficient using Pascal's triangle.
+
+ Calculate C(n, r) using Pascal's triangle.
+
+ :param n: The total number of items.
+ :param r: The number of items to choose.
+ :return: The binomial coefficient C(n, r).
>>> binomial_coefficient(10, 5)
252
+ >>> binomial_coefficient(10, 0)
+ 1
+ >>> binomial_coefficient(0, 10)
+ 1
+ >>> binomial_coefficient(10, 10)
+ 1
+ >>> binomial_coefficient(5, 2)
+ 10
+ >>> binomial_coefficient(5, 6)
+ 0
+ >>> binomial_coefficient(3, 5)
+ 0
+ >>> binomial_coefficient(-2, 3)
+ Traceback (most recent call last):
+ ...
+ ValueError: n and r must be non-negative integers
+ >>> binomial_coefficient(5, -1)
+ Traceback (most recent call last):
+ ...
+ ValueError: n and r must be non-negative integers
+ >>> binomial_coefficient(10.1, 5)
+ Traceback (most recent call last):
+ ...
+ TypeError: 'float' object cannot be interpreted as an integer
+ >>> binomial_coefficient(10, 5.1)
+ Traceback (most recent call last):
+ ...
+ TypeError: 'float' object cannot be interpreted as an integer
"""
+ if n < 0 or r < 0:
+ raise ValueError("n and r must be non-negative integers")
+ if 0 in (n, r):
+ return 1
c = [0 for i in range(r + 1)]
# nc0 = 1
c[0] = 1
@@ -17,4 +55,8 @@
return c[r]
-print(binomial_coefficient(n=10, r=5))
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
+ print(binomial_coefficient(n=10, r=5))
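One way to exercise the new doctests locally; the file path is assumed relative to the repository root.

```python
import doctest
import importlib.util

spec = importlib.util.spec_from_file_location("binomial_coefficient", "maths/binomial_coefficient.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # the __main__ guard keeps the demo print from running here

print(doctest.testmod(module))  # expect TestResults(failed=0, ...) with the patch applied
```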
| {"golden_diff": "diff --git a/maths/binomial_coefficient.py b/maths/binomial_coefficient.py\n--- a/maths/binomial_coefficient.py\n+++ b/maths/binomial_coefficient.py\n@@ -1,10 +1,48 @@\n def binomial_coefficient(n: int, r: int) -> int:\n \"\"\"\n- Find binomial coefficient using pascals triangle.\n+ Find binomial coefficient using Pascal's triangle.\n+\n+ Calculate C(n, r) using Pascal's triangle.\n+\n+ :param n: The total number of items.\n+ :param r: The number of items to choose.\n+ :return: The binomial coefficient C(n, r).\n \n >>> binomial_coefficient(10, 5)\n 252\n+ >>> binomial_coefficient(10, 0)\n+ 1\n+ >>> binomial_coefficient(0, 10)\n+ 1\n+ >>> binomial_coefficient(10, 10)\n+ 1\n+ >>> binomial_coefficient(5, 2)\n+ 10\n+ >>> binomial_coefficient(5, 6)\n+ 0\n+ >>> binomial_coefficient(3, 5)\n+ 0\n+ >>> binomial_coefficient(-2, 3)\n+ Traceback (most recent call last):\n+ ...\n+ ValueError: n and r must be non-negative integers\n+ >>> binomial_coefficient(5, -1)\n+ Traceback (most recent call last):\n+ ...\n+ ValueError: n and r must be non-negative integers\n+ >>> binomial_coefficient(10.1, 5)\n+ Traceback (most recent call last):\n+ ...\n+ TypeError: 'float' object cannot be interpreted as an integer\n+ >>> binomial_coefficient(10, 5.1)\n+ Traceback (most recent call last):\n+ ...\n+ TypeError: 'float' object cannot be interpreted as an integer\n \"\"\"\n+ if n < 0 or r < 0:\n+ raise ValueError(\"n and r must be non-negative integers\")\n+ if 0 in (n, r):\n+ return 1\n c = [0 for i in range(r + 1)]\n # nc0 = 1\n c[0] = 1\n@@ -17,4 +55,8 @@\n return c[r]\n \n \n-print(binomial_coefficient(n=10, r=5))\n+if __name__ == \"__main__\":\n+ from doctest import testmod\n+\n+ testmod()\n+ print(binomial_coefficient(n=10, r=5))\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. 
Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "def binomial_coefficient(n: int, r: int) -> int:\n \"\"\"\n Find binomial coefficient using pascals triangle.\n\n >>> binomial_coefficient(10, 5)\n 252\n \"\"\"\n c = [0 for i in range(r + 1)]\n # nc0 = 1\n c[0] = 1\n for i in range(1, n + 1):\n # to compute current row from previous row.\n j = min(i, r)\n while j > 0:\n c[j] += c[j - 1]\n j -= 1\n return c[r]\n\n\nprint(binomial_coefficient(n=10, r=5))\n", "path": "maths/binomial_coefficient.py"}]} | 1,570 | 596 |
gh_patches_debug_16578 | rasdani/github-patches | git_diff | doccano__doccano-1668 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pagination of the project list
When fetching projects in the project list page, is it intentional that all projects are fetched at once even though there is pagination?
Endpoint of project list fetching: `/v1/projects`
When there are a lot of projects, it takes a long time to display them.
Your Environment
---------
doccano v1.5.5
</issue>
<code>
[start of backend/api/views/project.py]
1 from django.conf import settings
2 from rest_framework import generics, status
3 from rest_framework.permissions import IsAdminUser, IsAuthenticated
4 from rest_framework.response import Response
5
6 from members.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly
7
8 from ..models import Project
9 from ..serializers import ProjectPolymorphicSerializer
10
11
12 class ProjectList(generics.ListCreateAPIView):
13 serializer_class = ProjectPolymorphicSerializer
14 pagination_class = None
15
16 def get_permissions(self):
17 if self.request.method == 'GET':
18 self.permission_classes = [IsAuthenticated, ]
19 else:
20 self.permission_classes = [IsAuthenticated & IsAdminUser]
21 return super().get_permissions()
22
23 def get_queryset(self):
24 return Project.objects.filter(role_mappings__user=self.request.user)
25
26 def perform_create(self, serializer):
27 serializer.save(created_by=self.request.user)
28
29 def delete(self, request, *args, **kwargs):
30 delete_ids = request.data['ids']
31 projects = Project.objects.filter(
32 role_mappings__user=self.request.user,
33 role_mappings__role__name=settings.ROLE_PROJECT_ADMIN,
34 pk__in=delete_ids
35 )
36 # Todo: I want to use bulk delete.
37 # But it causes the constraint error.
38 # See https://github.com/django-polymorphic/django-polymorphic/issues/229
39 for project in projects:
40 project.delete()
41 return Response(status=status.HTTP_204_NO_CONTENT)
42
43
44 class ProjectDetail(generics.RetrieveUpdateDestroyAPIView):
45 queryset = Project.objects.all()
46 serializer_class = ProjectPolymorphicSerializer
47 lookup_url_kwarg = 'project_id'
48 permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
49
[end of backend/api/views/project.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/api/views/project.py b/backend/api/views/project.py
--- a/backend/api/views/project.py
+++ b/backend/api/views/project.py
@@ -1,5 +1,6 @@
from django.conf import settings
-from rest_framework import generics, status
+from django_filters.rest_framework import DjangoFilterBackend
+from rest_framework import filters, generics, status
from rest_framework.permissions import IsAdminUser, IsAuthenticated
from rest_framework.response import Response
@@ -11,7 +12,8 @@
class ProjectList(generics.ListCreateAPIView):
serializer_class = ProjectPolymorphicSerializer
- pagination_class = None
+ filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)
+ search_fields = ('name', 'description')
def get_permissions(self):
if self.request.method == 'GET':
| {"golden_diff": "diff --git a/backend/api/views/project.py b/backend/api/views/project.py\n--- a/backend/api/views/project.py\n+++ b/backend/api/views/project.py\n@@ -1,5 +1,6 @@\n from django.conf import settings\n-from rest_framework import generics, status\n+from django_filters.rest_framework import DjangoFilterBackend\n+from rest_framework import filters, generics, status\n from rest_framework.permissions import IsAdminUser, IsAuthenticated\n from rest_framework.response import Response\n \n@@ -11,7 +12,8 @@\n \n class ProjectList(generics.ListCreateAPIView):\n serializer_class = ProjectPolymorphicSerializer\n- pagination_class = None\n+ filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)\n+ search_fields = ('name', 'description')\n \n def get_permissions(self):\n if self.request.method == 'GET':\n", "issue": "Pagination of the project list\nWhen fetching projects in the project list page, is it intentional that all projects are fetched at once even though there is pagination?\r\n\r\nEndpoint of project list fetching: `/v1/projects`\r\n\r\nWhen there are a lot of projects, it takes a long time to display them.\r\n\r\nYour Environment\r\n---------\r\ndoccano v1.5.5\n", "before_files": [{"content": "from django.conf import settings\nfrom rest_framework import generics, status\nfrom rest_framework.permissions import IsAdminUser, IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom members.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly\n\nfrom ..models import Project\nfrom ..serializers import ProjectPolymorphicSerializer\n\n\nclass ProjectList(generics.ListCreateAPIView):\n serializer_class = ProjectPolymorphicSerializer\n pagination_class = None\n\n def get_permissions(self):\n if self.request.method == 'GET':\n self.permission_classes = [IsAuthenticated, ]\n else:\n self.permission_classes = [IsAuthenticated & IsAdminUser]\n return super().get_permissions()\n\n def get_queryset(self):\n return Project.objects.filter(role_mappings__user=self.request.user)\n\n def perform_create(self, serializer):\n serializer.save(created_by=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data['ids']\n projects = Project.objects.filter(\n role_mappings__user=self.request.user,\n role_mappings__role__name=settings.ROLE_PROJECT_ADMIN,\n pk__in=delete_ids\n )\n # Todo: I want to use bulk delete.\n # But it causes the constraint error.\n # See https://github.com/django-polymorphic/django-polymorphic/issues/229\n for project in projects:\n project.delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass ProjectDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Project.objects.all()\n serializer_class = ProjectPolymorphicSerializer\n lookup_url_kwarg = 'project_id'\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n", "path": "backend/api/views/project.py"}]} | 1,069 | 186 |
gh_patches_debug_20614 | rasdani/github-patches | git_diff | pytorch__examples-1189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `save_model` arg to `mnist_hogwild` example
Currently the example doesn't support the `--save_model` argument like the other examples
</issue>
<code>
[start of mnist_hogwild/main.py]
1 from __future__ import print_function
2 import argparse
3 import torch
4 import torch.nn as nn
5 import torch.nn.functional as F
6 import torch.multiprocessing as mp
7 from torch.utils.data.sampler import Sampler
8 from torchvision import datasets, transforms
9
10 from train import train, test
11
12 # Training settings
13 parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
14 parser.add_argument('--batch-size', type=int, default=64, metavar='N',
15 help='input batch size for training (default: 64)')
16 parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
17 help='input batch size for testing (default: 1000)')
18 parser.add_argument('--epochs', type=int, default=10, metavar='N',
19 help='number of epochs to train (default: 10)')
20 parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
21 help='learning rate (default: 0.01)')
22 parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
23 help='SGD momentum (default: 0.5)')
24 parser.add_argument('--seed', type=int, default=1, metavar='S',
25 help='random seed (default: 1)')
26 parser.add_argument('--log-interval', type=int, default=10, metavar='N',
27 help='how many batches to wait before logging training status')
28 parser.add_argument('--num-processes', type=int, default=2, metavar='N',
29 help='how many training processes to use (default: 2)')
30 parser.add_argument('--cuda', action='store_true', default=False,
31 help='enables CUDA training')
32 parser.add_argument('--mps', action='store_true', default=False,
33 help='enables macOS GPU training')
34 parser.add_argument('--dry-run', action='store_true', default=False,
35 help='quickly check a single pass')
36
37 class Net(nn.Module):
38 def __init__(self):
39 super(Net, self).__init__()
40 self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
41 self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
42 self.conv2_drop = nn.Dropout2d()
43 self.fc1 = nn.Linear(320, 50)
44 self.fc2 = nn.Linear(50, 10)
45
46 def forward(self, x):
47 x = F.relu(F.max_pool2d(self.conv1(x), 2))
48 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
49 x = x.view(-1, 320)
50 x = F.relu(self.fc1(x))
51 x = F.dropout(x, training=self.training)
52 x = self.fc2(x)
53 return F.log_softmax(x, dim=1)
54
55
56 if __name__ == '__main__':
57 args = parser.parse_args()
58
59 use_cuda = args.cuda and torch.cuda.is_available()
60 use_mps = args.mps and torch.backends.mps.is_available()
61 if use_cuda:
62 device = torch.device("cuda")
63 elif use_mps:
64 device = torch.device("mps")
65 else:
66 device = torch.device("cpu")
67
68 transform=transforms.Compose([
69 transforms.ToTensor(),
70 transforms.Normalize((0.1307,), (0.3081,))
71 ])
72 dataset1 = datasets.MNIST('../data', train=True, download=True,
73 transform=transform)
74 dataset2 = datasets.MNIST('../data', train=False,
75 transform=transform)
76 kwargs = {'batch_size': args.batch_size,
77 'shuffle': True}
78 if use_cuda:
79 kwargs.update({'num_workers': 1,
80 'pin_memory': True,
81 })
82
83 torch.manual_seed(args.seed)
84 mp.set_start_method('spawn', force=True)
85
86 model = Net().to(device)
87 model.share_memory() # gradients are allocated lazily, so they are not shared here
88
89 processes = []
90 for rank in range(args.num_processes):
91 p = mp.Process(target=train, args=(rank, args, model, device,
92 dataset1, kwargs))
93 # We first train the model across `num_processes` processes
94 p.start()
95 processes.append(p)
96 for p in processes:
97 p.join()
98
99 # Once training is complete, we can test the model
100 test(args, model, device, dataset2, kwargs)
101
[end of mnist_hogwild/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mnist_hogwild/main.py b/mnist_hogwild/main.py
--- a/mnist_hogwild/main.py
+++ b/mnist_hogwild/main.py
@@ -30,7 +30,9 @@
parser.add_argument('--cuda', action='store_true', default=False,
help='enables CUDA training')
parser.add_argument('--mps', action='store_true', default=False,
- help='enables macOS GPU training')
+ help='enables macOS GPU training')
+parser.add_argument('--save_model', action='store_true', default=False,
+ help='save the trained model to state_dict')
parser.add_argument('--dry-run', action='store_true', default=False,
help='quickly check a single pass')
@@ -96,5 +98,8 @@
for p in processes:
p.join()
+ if args.save_model:
+ torch.save(model.state_dict(), "MNIST_hogwild.pt")
+
# Once training is complete, we can test the model
test(args, model, device, dataset2, kwargs)
| {"golden_diff": "diff --git a/mnist_hogwild/main.py b/mnist_hogwild/main.py\n--- a/mnist_hogwild/main.py\n+++ b/mnist_hogwild/main.py\n@@ -30,7 +30,9 @@\n parser.add_argument('--cuda', action='store_true', default=False,\n help='enables CUDA training')\n parser.add_argument('--mps', action='store_true', default=False,\n- help='enables macOS GPU training')\n+ help='enables macOS GPU training')\n+parser.add_argument('--save_model', action='store_true', default=False,\n+ help='save the trained model to state_dict')\n parser.add_argument('--dry-run', action='store_true', default=False,\n help='quickly check a single pass')\n \n@@ -96,5 +98,8 @@\n for p in processes:\n p.join()\n \n+ if args.save_model:\n+ torch.save(model.state_dict(), \"MNIST_hogwild.pt\")\n+\n # Once training is complete, we can test the model\n test(args, model, device, dataset2, kwargs)\n", "issue": "Add `save_model` arg to `mnist_hogwild` example\nCurrently the example doesn't support the `--save_model` argument like the other examples\r\n\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.multiprocessing as mp\nfrom torch.utils.data.sampler import Sampler\nfrom torchvision import datasets, transforms\n\nfrom train import train, test\n\n# Training settings\nparser = argparse.ArgumentParser(description='PyTorch MNIST Example')\nparser.add_argument('--batch-size', type=int, default=64, metavar='N',\n help='input batch size for training (default: 64)')\nparser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n help='input batch size for testing (default: 1000)')\nparser.add_argument('--epochs', type=int, default=10, metavar='N',\n help='number of epochs to train (default: 10)')\nparser.add_argument('--lr', type=float, default=0.01, metavar='LR',\n help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.5, metavar='M',\n help='SGD momentum (default: 0.5)')\nparser.add_argument('--seed', type=int, default=1, metavar='S',\n help='random seed (default: 1)')\nparser.add_argument('--log-interval', type=int, default=10, metavar='N',\n help='how many batches to wait before logging training status')\nparser.add_argument('--num-processes', type=int, default=2, metavar='N',\n help='how many training processes to use (default: 2)')\nparser.add_argument('--cuda', action='store_true', default=False,\n help='enables CUDA training')\nparser.add_argument('--mps', action='store_true', default=False,\n help='enables macOS GPU training')\nparser.add_argument('--dry-run', action='store_true', default=False,\n help='quickly check a single pass')\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=1)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n\n use_cuda = args.cuda and torch.cuda.is_available()\n use_mps = args.mps and torch.backends.mps.is_available()\n if use_cuda:\n device = torch.device(\"cuda\")\n elif use_mps:\n device = torch.device(\"mps\")\n else:\n device 
= torch.device(\"cpu\")\n\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])\n dataset1 = datasets.MNIST('../data', train=True, download=True,\n transform=transform)\n dataset2 = datasets.MNIST('../data', train=False,\n transform=transform)\n kwargs = {'batch_size': args.batch_size,\n 'shuffle': True}\n if use_cuda:\n kwargs.update({'num_workers': 1,\n 'pin_memory': True,\n })\n\n torch.manual_seed(args.seed)\n mp.set_start_method('spawn', force=True)\n\n model = Net().to(device)\n model.share_memory() # gradients are allocated lazily, so they are not shared here\n\n processes = []\n for rank in range(args.num_processes):\n p = mp.Process(target=train, args=(rank, args, model, device,\n dataset1, kwargs))\n # We first train the model across `num_processes` processes\n p.start()\n processes.append(p)\n for p in processes:\n p.join()\n\n # Once training is complete, we can test the model\n test(args, model, device, dataset2, kwargs)\n", "path": "mnist_hogwild/main.py"}]} | 1,737 | 236 |
gh_patches_debug_5067 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-1111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passwords beginning or ending with a whitespace are not supported
Due to POST argument stripping, passwords with a beginning or ending whitespace are not allowed.
**How to reproduce the issue**
Set up a user password with an ending or beginning whitespace.
**What you expected to happen**
The user should be allowed to login with the password, given that the password should be any complicated sequence of characters the user can reproduce.
**What actually happens**
The user is denied access, because the LoginHandler will strip all posted values before considering the password for authentication (line 81, get_argument has a default "strip=True")
**Share what version of JupyterHub you are using**
HEAD (006488fc749923851df97d47d8850bdf5fd157cf)
</issue>
<code>
[start of jupyterhub/handlers/login.py]
1 """HTTP Handlers for the hub server"""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from urllib.parse import urlparse
7
8 from tornado.escape import url_escape
9 from tornado import gen
10 from tornado.httputil import url_concat
11
12 from .base import BaseHandler
13
14
15 class LogoutHandler(BaseHandler):
16 """Log a user out by clearing their login cookie."""
17 def get(self):
18 user = self.get_current_user()
19 if user:
20 self.log.info("User logged out: %s", user.name)
21 self.clear_login_cookie()
22 self.statsd.incr('logout')
23 if self.authenticator.auto_login:
24 self.render('logout.html')
25 else:
26 self.redirect(self.settings['login_url'], permanent=False)
27
28
29 class LoginHandler(BaseHandler):
30 """Render the login page."""
31
32 def _render(self, login_error=None, username=None):
33 return self.render_template('login.html',
34 next=url_escape(self.get_argument('next', default='')),
35 username=username,
36 login_error=login_error,
37 custom_html=self.authenticator.custom_html,
38 login_url=self.settings['login_url'],
39 authenticator_login_url=self.authenticator.login_url(self.hub.server.base_url),
40 )
41
42 def get(self):
43 self.statsd.incr('login.request')
44 next_url = self.get_argument('next', '')
45 if (next_url + '/').startswith('%s://%s/' % (self.request.protocol, self.request.host)):
46 # treat absolute URLs for our host as absolute paths:
47 next_url = urlparse(next_url).path
48 elif not next_url.startswith('/'):
49 # disallow non-absolute next URLs (e.g. full URLs to other hosts)
50 next_url = ''
51 user = self.get_current_user()
52 if user:
53 if not next_url:
54 if user.running:
55 next_url = user.url
56 else:
57 next_url = self.hub.server.base_url
58 # set new login cookie
59 # because single-user cookie may have been cleared or incorrect
60 self.set_login_cookie(self.get_current_user())
61 self.redirect(next_url, permanent=False)
62 else:
63 if self.authenticator.auto_login:
64 auto_login_url = self.authenticator.login_url(self.hub.server.base_url)
65 if auto_login_url == self.settings['login_url']:
66 self.authenticator.auto_login = False
67 self.log.warning("Authenticator.auto_login cannot be used without a custom login_url")
68 else:
69 if next_url:
70 auto_login_url = url_concat(auto_login_url, {'next': next_url})
71 self.redirect(auto_login_url)
72 return
73 username = self.get_argument('username', default='')
74 self.finish(self._render(username=username))
75
76 @gen.coroutine
77 def post(self):
78 # parse the arguments dict
79 data = {}
80 for arg in self.request.arguments:
81 data[arg] = self.get_argument(arg)
82
83 auth_timer = self.statsd.timer('login.authenticate').start()
84 username = yield self.authenticate(data)
85 auth_timer.stop(send=False)
86
87 if username:
88 self.statsd.incr('login.success')
89 self.statsd.timing('login.authenticate.success', auth_timer.ms)
90 user = self.user_from_username(username)
91 already_running = False
92 if user.spawner:
93 status = yield user.spawner.poll()
94 already_running = (status == None)
95 if not already_running and not user.spawner.options_form:
96 yield self.spawn_single_user(user)
97 self.set_login_cookie(user)
98 next_url = self.get_argument('next', default='')
99 if not next_url.startswith('/'):
100 next_url = ''
101 next_url = next_url or self.hub.server.base_url
102 self.redirect(next_url)
103 self.log.info("User logged in: %s", username)
104 else:
105 self.statsd.incr('login.failure')
106 self.statsd.timing('login.authenticate.failure', auth_timer.ms)
107 self.log.debug("Failed login for %s", data.get('username', 'unknown user'))
108 html = self._render(
109 login_error='Invalid username or password',
110 username=username,
111 )
112 self.finish(html)
113
114
115 # /login renders the login page or the "Login with..." link,
116 # so it should always be registered.
117 # /logout clears cookies.
118 default_handlers = [
119 (r"/login", LoginHandler),
120 (r"/logout", LogoutHandler),
121 ]
122
[end of jupyterhub/handlers/login.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/handlers/login.py b/jupyterhub/handlers/login.py
--- a/jupyterhub/handlers/login.py
+++ b/jupyterhub/handlers/login.py
@@ -78,7 +78,7 @@
# parse the arguments dict
data = {}
for arg in self.request.arguments:
- data[arg] = self.get_argument(arg)
+ data[arg] = self.get_argument(arg, strip=False)
auth_timer = self.statsd.timer('login.authenticate').start()
username = yield self.authenticate(data)
| {"golden_diff": "diff --git a/jupyterhub/handlers/login.py b/jupyterhub/handlers/login.py\n--- a/jupyterhub/handlers/login.py\n+++ b/jupyterhub/handlers/login.py\n@@ -78,7 +78,7 @@\n # parse the arguments dict\n data = {}\n for arg in self.request.arguments:\n- data[arg] = self.get_argument(arg)\n+ data[arg] = self.get_argument(arg, strip=False)\n \n auth_timer = self.statsd.timer('login.authenticate').start()\n username = yield self.authenticate(data)\n", "issue": "Passwords beginning or ending with a whitespace are not supported\nDue to POST argument stripping, passwords with a beginning or ending whitespace are not allowed.\r\n\r\n**How to reproduce the issue**\r\nSet up a user password with an ending or beginning whitespace.\r\n\r\n**What you expected to happen**\r\nThe user should be allowed to login with the password, given that the password should be any complicated sequence of characters the user can reproduce.\r\n\r\n**What actually happens**\r\nThe user is denied access, because the LoginHandler will strip all posted values before considering the password for authentication (line 81, get_argument has a default \"strip=True\")\r\n\r\n**Share what version of JupyterHub you are using**\r\nHEAD (006488fc749923851df97d47d8850bdf5fd157cf)\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"HTTP Handlers for the hub server\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom urllib.parse import urlparse\n\nfrom tornado.escape import url_escape\nfrom tornado import gen\nfrom tornado.httputil import url_concat\n\nfrom .base import BaseHandler\n\n\nclass LogoutHandler(BaseHandler):\n \"\"\"Log a user out by clearing their login cookie.\"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n self.log.info(\"User logged out: %s\", user.name)\n self.clear_login_cookie()\n self.statsd.incr('logout')\n if self.authenticator.auto_login:\n self.render('logout.html')\n else:\n self.redirect(self.settings['login_url'], permanent=False)\n\n\nclass LoginHandler(BaseHandler):\n \"\"\"Render the login page.\"\"\"\n\n def _render(self, login_error=None, username=None):\n return self.render_template('login.html',\n next=url_escape(self.get_argument('next', default='')),\n username=username,\n login_error=login_error,\n custom_html=self.authenticator.custom_html,\n login_url=self.settings['login_url'],\n authenticator_login_url=self.authenticator.login_url(self.hub.server.base_url),\n )\n\n def get(self):\n self.statsd.incr('login.request')\n next_url = self.get_argument('next', '')\n if (next_url + '/').startswith('%s://%s/' % (self.request.protocol, self.request.host)):\n # treat absolute URLs for our host as absolute paths:\n next_url = urlparse(next_url).path\n elif not next_url.startswith('/'):\n # disallow non-absolute next URLs (e.g. 
full URLs to other hosts)\n next_url = ''\n user = self.get_current_user()\n if user:\n if not next_url:\n if user.running:\n next_url = user.url\n else:\n next_url = self.hub.server.base_url\n # set new login cookie\n # because single-user cookie may have been cleared or incorrect\n self.set_login_cookie(self.get_current_user())\n self.redirect(next_url, permanent=False)\n else:\n if self.authenticator.auto_login:\n auto_login_url = self.authenticator.login_url(self.hub.server.base_url)\n if auto_login_url == self.settings['login_url']:\n self.authenticator.auto_login = False\n self.log.warning(\"Authenticator.auto_login cannot be used without a custom login_url\")\n else:\n if next_url:\n auto_login_url = url_concat(auto_login_url, {'next': next_url})\n self.redirect(auto_login_url)\n return\n username = self.get_argument('username', default='')\n self.finish(self._render(username=username))\n\n @gen.coroutine\n def post(self):\n # parse the arguments dict\n data = {}\n for arg in self.request.arguments:\n data[arg] = self.get_argument(arg)\n\n auth_timer = self.statsd.timer('login.authenticate').start()\n username = yield self.authenticate(data)\n auth_timer.stop(send=False)\n\n if username:\n self.statsd.incr('login.success')\n self.statsd.timing('login.authenticate.success', auth_timer.ms)\n user = self.user_from_username(username)\n already_running = False\n if user.spawner:\n status = yield user.spawner.poll()\n already_running = (status == None)\n if not already_running and not user.spawner.options_form:\n yield self.spawn_single_user(user)\n self.set_login_cookie(user)\n next_url = self.get_argument('next', default='')\n if not next_url.startswith('/'):\n next_url = ''\n next_url = next_url or self.hub.server.base_url\n self.redirect(next_url)\n self.log.info(\"User logged in: %s\", username)\n else:\n self.statsd.incr('login.failure')\n self.statsd.timing('login.authenticate.failure', auth_timer.ms)\n self.log.debug(\"Failed login for %s\", data.get('username', 'unknown user'))\n html = self._render(\n login_error='Invalid username or password',\n username=username,\n )\n self.finish(html)\n\n\n# /login renders the login page or the \"Login with...\" link,\n# so it should always be registered.\n# /logout clears cookies.\ndefault_handlers = [\n (r\"/login\", LoginHandler),\n (r\"/logout\", LogoutHandler),\n]\n", "path": "jupyterhub/handlers/login.py"}]} | 1,902 | 124 |
gh_patches_debug_34606 | rasdani/github-patches | git_diff | ansible__awx-8016 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add insignts_credential paramter to tower_inventory
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Feature Idea
##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->
Per PR #7963 tower_inventory is missing support for the insights_credential API parameter.
</issue>
<code>
[start of awx_collection/plugins/modules/tower_inventory.py]
1 #!/usr/bin/python
2 # coding: utf-8 -*-
3
4 # (c) 2017, Wayne Witzel III <[email protected]>
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 from __future__ import absolute_import, division, print_function
8 __metaclass__ = type
9
10
11 ANSIBLE_METADATA = {'metadata_version': '1.1',
12 'status': ['preview'],
13 'supported_by': 'community'}
14
15
16 DOCUMENTATION = '''
17 ---
18 module: tower_inventory
19 author: "Wayne Witzel III (@wwitzel3)"
20 short_description: create, update, or destroy Ansible Tower inventory.
21 description:
22 - Create, update, or destroy Ansible Tower inventories. See
23 U(https://www.ansible.com/tower) for an overview.
24 options:
25 name:
26 description:
27 - The name to use for the inventory.
28 required: True
29 type: str
30 description:
31 description:
32 - The description to use for the inventory.
33 type: str
34 organization:
35 description:
36 - Organization the inventory belongs to.
37 required: True
38 type: str
39 variables:
40 description:
41 - Inventory variables.
42 type: dict
43 kind:
44 description:
45 - The kind field. Cannot be modified after created.
46 default: ""
47 choices: ["", "smart"]
48 type: str
49 host_filter:
50 description:
51 - The host_filter field. Only useful when C(kind=smart).
52 type: str
53 state:
54 description:
55 - Desired state of the resource.
56 default: "present"
57 choices: ["present", "absent"]
58 type: str
59 extends_documentation_fragment: awx.awx.auth
60 '''
61
62
63 EXAMPLES = '''
64 - name: Add tower inventory
65 tower_inventory:
66 name: "Foo Inventory"
67 description: "Our Foo Cloud Servers"
68 organization: "Bar Org"
69 state: present
70 tower_config_file: "~/tower_cli.cfg"
71 '''
72
73
74 from ..module_utils.tower_api import TowerAPIModule
75 import json
76
77
78 def main():
79 # Any additional arguments that are not fields of the item can be added here
80 argument_spec = dict(
81 name=dict(required=True),
82 description=dict(),
83 organization=dict(required=True),
84 variables=dict(type='dict'),
85 kind=dict(choices=['', 'smart'], default=''),
86 host_filter=dict(),
87 state=dict(choices=['present', 'absent'], default='present'),
88 )
89
90 # Create a module for ourselves
91 module = TowerAPIModule(argument_spec=argument_spec)
92
93 # Extract our parameters
94 name = module.params.get('name')
95 description = module.params.get('description')
96 organization = module.params.get('organization')
97 variables = module.params.get('variables')
98 state = module.params.get('state')
99 kind = module.params.get('kind')
100 host_filter = module.params.get('host_filter')
101
102 # Attempt to look up the related items the user specified (these will fail the module if not found)
103 org_id = module.resolve_name_to_id('organizations', organization)
104
105 # Attempt to look up inventory based on the provided name and org ID
106 inventory = module.get_one('inventories', **{
107 'data': {
108 'name': name,
109 'organization': org_id
110 }
111 })
112
113 if state == 'absent':
114 # If the state was absent we can let the module delete it if needed, the module will handle exiting from this
115 module.delete_if_needed(inventory)
116
117 # Create the data that gets sent for create and update
118 inventory_fields = {
119 'name': name,
120 'organization': org_id,
121 'kind': kind,
122 'host_filter': host_filter,
123 }
124 if description is not None:
125 inventory_fields['description'] = description
126 if variables is not None:
127 inventory_fields['variables'] = json.dumps(variables)
128
129 # We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.
130 if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':
131 module.fail_json(msg='You cannot turn a regular inventory into a "smart" inventory.')
132
133 # If the state was present and we can let the module build or update the existing inventory, this will return on its own
134 module.create_or_update_if_needed(inventory, inventory_fields, endpoint='inventories', item_type='inventory')
135
136
137 if __name__ == '__main__':
138 main()
139
[end of awx_collection/plugins/modules/tower_inventory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awx_collection/plugins/modules/tower_inventory.py b/awx_collection/plugins/modules/tower_inventory.py
--- a/awx_collection/plugins/modules/tower_inventory.py
+++ b/awx_collection/plugins/modules/tower_inventory.py
@@ -48,7 +48,11 @@
type: str
host_filter:
description:
- - The host_filter field. Only useful when C(kind=smart).
+ - The host_filter field. Only useful when C(kind=smart).
+ type: str
+ insights_credential:
+ description:
+ - Credentials to be used by hosts belonging to this inventory when accessing Red Hat Insights API.
type: str
state:
description:
@@ -84,6 +88,7 @@
variables=dict(type='dict'),
kind=dict(choices=['', 'smart'], default=''),
host_filter=dict(),
+ insights_credential=dict(),
state=dict(choices=['present', 'absent'], default='present'),
)
@@ -98,6 +103,7 @@
state = module.params.get('state')
kind = module.params.get('kind')
host_filter = module.params.get('host_filter')
+ insights_credential = module.params.get('insights_credential')
# Attempt to look up the related items the user specified (these will fail the module if not found)
org_id = module.resolve_name_to_id('organizations', organization)
@@ -125,6 +131,8 @@
inventory_fields['description'] = description
if variables is not None:
inventory_fields['variables'] = json.dumps(variables)
+ if insights_credential is not None:
+ inventory_fields['insights_credential'] = module.resolve_name_to_id('credentials', insights_credential)
# We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.
if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':
| {"golden_diff": "diff --git a/awx_collection/plugins/modules/tower_inventory.py b/awx_collection/plugins/modules/tower_inventory.py\n--- a/awx_collection/plugins/modules/tower_inventory.py\n+++ b/awx_collection/plugins/modules/tower_inventory.py\n@@ -48,7 +48,11 @@\n type: str\n host_filter:\n description:\n- - The host_filter field. Only useful when C(kind=smart).\n+ - The host_filter field. Only useful when C(kind=smart).\n+ type: str\n+ insights_credential:\n+ description:\n+ - Credentials to be used by hosts belonging to this inventory when accessing Red Hat Insights API.\n type: str\n state:\n description:\n@@ -84,6 +88,7 @@\n variables=dict(type='dict'),\n kind=dict(choices=['', 'smart'], default=''),\n host_filter=dict(),\n+ insights_credential=dict(),\n state=dict(choices=['present', 'absent'], default='present'),\n )\n \n@@ -98,6 +103,7 @@\n state = module.params.get('state')\n kind = module.params.get('kind')\n host_filter = module.params.get('host_filter')\n+ insights_credential = module.params.get('insights_credential')\n \n # Attempt to look up the related items the user specified (these will fail the module if not found)\n org_id = module.resolve_name_to_id('organizations', organization)\n@@ -125,6 +131,8 @@\n inventory_fields['description'] = description\n if variables is not None:\n inventory_fields['variables'] = json.dumps(variables)\n+ if insights_credential is not None:\n+ inventory_fields['insights_credential'] = module.resolve_name_to_id('credentials', insights_credential)\n \n # We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.\n if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':\n", "issue": "Add insignts_credential paramter to tower_inventory\n<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:\r\n\r\n- http://webchat.freenode.net/?channels=ansible-awx\r\n- https://groups.google.com/forum/#!forum/awx-project\r\n\r\nWe have to limit this because of limited volunteer time to respond to issues! -->\r\n\r\n##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### SUMMARY\r\n<!-- Briefly describe the problem or desired enhancement. -->\r\nPer PR #7963 tower_inventory is missing support for the insights_credential API parameter.\n", "before_files": [{"content": "#!/usr/bin/python\n# coding: utf-8 -*-\n\n# (c) 2017, Wayne Witzel III <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: tower_inventory\nauthor: \"Wayne Witzel III (@wwitzel3)\"\nshort_description: create, update, or destroy Ansible Tower inventory.\ndescription:\n - Create, update, or destroy Ansible Tower inventories. See\n U(https://www.ansible.com/tower) for an overview.\noptions:\n name:\n description:\n - The name to use for the inventory.\n required: True\n type: str\n description:\n description:\n - The description to use for the inventory.\n type: str\n organization:\n description:\n - Organization the inventory belongs to.\n required: True\n type: str\n variables:\n description:\n - Inventory variables.\n type: dict\n kind:\n description:\n - The kind field. 
Cannot be modified after created.\n default: \"\"\n choices: [\"\", \"smart\"]\n type: str\n host_filter:\n description:\n - The host_filter field. Only useful when C(kind=smart).\n type: str\n state:\n description:\n - Desired state of the resource.\n default: \"present\"\n choices: [\"present\", \"absent\"]\n type: str\nextends_documentation_fragment: awx.awx.auth\n'''\n\n\nEXAMPLES = '''\n- name: Add tower inventory\n tower_inventory:\n name: \"Foo Inventory\"\n description: \"Our Foo Cloud Servers\"\n organization: \"Bar Org\"\n state: present\n tower_config_file: \"~/tower_cli.cfg\"\n'''\n\n\nfrom ..module_utils.tower_api import TowerAPIModule\nimport json\n\n\ndef main():\n # Any additional arguments that are not fields of the item can be added here\n argument_spec = dict(\n name=dict(required=True),\n description=dict(),\n organization=dict(required=True),\n variables=dict(type='dict'),\n kind=dict(choices=['', 'smart'], default=''),\n host_filter=dict(),\n state=dict(choices=['present', 'absent'], default='present'),\n )\n\n # Create a module for ourselves\n module = TowerAPIModule(argument_spec=argument_spec)\n\n # Extract our parameters\n name = module.params.get('name')\n description = module.params.get('description')\n organization = module.params.get('organization')\n variables = module.params.get('variables')\n state = module.params.get('state')\n kind = module.params.get('kind')\n host_filter = module.params.get('host_filter')\n\n # Attempt to look up the related items the user specified (these will fail the module if not found)\n org_id = module.resolve_name_to_id('organizations', organization)\n\n # Attempt to look up inventory based on the provided name and org ID\n inventory = module.get_one('inventories', **{\n 'data': {\n 'name': name,\n 'organization': org_id\n }\n })\n\n if state == 'absent':\n # If the state was absent we can let the module delete it if needed, the module will handle exiting from this\n module.delete_if_needed(inventory)\n\n # Create the data that gets sent for create and update\n inventory_fields = {\n 'name': name,\n 'organization': org_id,\n 'kind': kind,\n 'host_filter': host_filter,\n }\n if description is not None:\n inventory_fields['description'] = description\n if variables is not None:\n inventory_fields['variables'] = json.dumps(variables)\n\n # We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.\n if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':\n module.fail_json(msg='You cannot turn a regular inventory into a \"smart\" inventory.')\n\n # If the state was present and we can let the module build or update the existing inventory, this will return on its own\n module.create_or_update_if_needed(inventory, inventory_fields, endpoint='inventories', item_type='inventory')\n\n\nif __name__ == '__main__':\n main()\n", "path": "awx_collection/plugins/modules/tower_inventory.py"}]} | 1,973 | 434 |
gh_patches_debug_8100 | rasdani/github-patches | git_diff | WeblateOrg__weblate-11568 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Time to use `build` from `setuptools` instead of `distutils`?
### Describe the problem
The following feature in setuptools has been deprecated for almost 2 years and is about to be removed:
https://github.com/pypa/setuptools/blob/1ed759173983656734c3606e9c97a348895e5e0c/setuptools/command/build.py#L13-L27
It might be a good idea to import `build` directly from setuptools for the following code:
https://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L9
https://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L51-L58
(`build` is available directly from setuptools, starting on version v62.4.0)
### Describe the solution you would like
Whenever possible, it might be a good idea to import from setuptools (and minimise imports to `distutils` to the minimum viable).
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
Time to use `build` from `setuptools` instead of `distutils`?
### Describe the problem
The following feature in setuptools has been deprecated for almost 2 years and is about to be removed:
https://github.com/pypa/setuptools/blob/1ed759173983656734c3606e9c97a348895e5e0c/setuptools/command/build.py#L13-L27
It might be a good idea to import `build` directly from setuptools for the following code:
https://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L9
https://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L51-L58
(`build` is available directly from setuptools, starting on version v62.4.0)
### Describe the solution you would like
Whenever possible, it might be a good idea to import from setuptools (and minimise imports to `distutils` to the minimum viable).
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 # Copyright © Michal Čihař <[email protected]>
4 #
5 # SPDX-License-Identifier: GPL-3.0-or-later
6
7 import os
8 from distutils import log
9 from distutils.command.build import build
10 from distutils.core import Command
11 from glob import glob
12 from itertools import chain
13
14 from setuptools import setup
15 from setuptools.command.build_py import build_py
16 from setuptools.modified import newer
17 from translate.tools.pocompile import convertmo
18
19 LOCALE_MASKS = [
20 "weblate/locale/*/LC_MESSAGES/*.po",
21 ]
22
23
24 class WeblateBuildPy(build_py):
25 def find_package_modules(self, package, package_dir):
26 """Filter settings.py from built module."""
27 result = super().find_package_modules(package, package_dir)
28 return [item for item in result if item[2] != "weblate/settings.py"]
29
30
31 class BuildMo(Command):
32 description = "update MO files to match PO"
33 user_options = []
34
35 def initialize_options(self) -> None:
36 self.build_base = None
37
38 def finalize_options(self) -> None:
39 self.set_undefined_options("build", ("build_base", "build_base"))
40
41 def run(self) -> None:
42 for name in chain.from_iterable(glob(mask) for mask in LOCALE_MASKS):
43 output = os.path.splitext(name)[0] + ".mo"
44 if not newer(name, output):
45 continue
46 self.announce(f"compiling {name} -> {output}", level=log.INFO)
47 with open(name, "rb") as pofile, open(output, "wb") as mofile:
48 convertmo(pofile, mofile, None)
49
50
51 class WeblateBuild(build):
52 """Override the default build with new subcommands."""
53
54 # The build_mo has to be before build_data
55 sub_commands = [
56 ("build_mo", lambda self: True), # noqa: ARG005
57 *build.sub_commands,
58 ]
59
60
61 setup(
62 cmdclass={"build_py": WeblateBuildPy, "build_mo": BuildMo, "build": WeblateBuild},
63 )
64
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,12 +6,12 @@
import os
from distutils import log
-from distutils.command.build import build
from distutils.core import Command
from glob import glob
from itertools import chain
from setuptools import setup
+from setuptools.command.build import build
from setuptools.command.build_py import build_py
from setuptools.modified import newer
from translate.tools.pocompile import convertmo
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,12 +6,12 @@\n \n import os\n from distutils import log\n-from distutils.command.build import build\n from distutils.core import Command\n from glob import glob\n from itertools import chain\n \n from setuptools import setup\n+from setuptools.command.build import build\n from setuptools.command.build_py import build_py\n from setuptools.modified import newer\n from translate.tools.pocompile import convertmo\n", "issue": "Time to use `build` from `setuptools` instead of `distutils`?\n### Describe the problem\n\nThe following feature in setuptools has been deprecated for almost 2 years and is about to be removed:\r\n\r\nhttps://github.com/pypa/setuptools/blob/1ed759173983656734c3606e9c97a348895e5e0c/setuptools/command/build.py#L13-L27\r\n\r\nIt might be a good idea to import `build` directly from setuptools for the following code:\r\n\r\nhttps://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L9\r\nhttps://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L51-L58\r\n\r\n(`build` is available directly from setuptools, starting on version v62.4.0)\n\n### Describe the solution you would like\n\nWhenever possible, it might be a good idea to import from setuptools (and minimise imports to `distutils` to the minimum viable).\n\n### Describe alternatives you have considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\nTime to use `build` from `setuptools` instead of `distutils`?\n### Describe the problem\n\nThe following feature in setuptools has been deprecated for almost 2 years and is about to be removed:\r\n\r\nhttps://github.com/pypa/setuptools/blob/1ed759173983656734c3606e9c97a348895e5e0c/setuptools/command/build.py#L13-L27\r\n\r\nIt might be a good idea to import `build` directly from setuptools for the following code:\r\n\r\nhttps://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L9\r\nhttps://github.com/WeblateOrg/weblate/blob/47f9f2870c4ed9fd5429eebfacc61d2267a5bb31/setup.py#L51-L58\r\n\r\n(`build` is available directly from setuptools, starting on version v62.4.0)\n\n### Describe the solution you would like\n\nWhenever possible, it might be a good idea to import from setuptools (and minimise imports to `distutils` to the minimum viable).\n\n### Describe alternatives you have considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport os\nfrom distutils import log\nfrom distutils.command.build import build\nfrom distutils.core import Command\nfrom glob import glob\nfrom itertools import chain\n\nfrom setuptools import setup\nfrom setuptools.command.build_py import build_py\nfrom setuptools.modified import newer\nfrom translate.tools.pocompile import convertmo\n\nLOCALE_MASKS = [\n \"weblate/locale/*/LC_MESSAGES/*.po\",\n]\n\n\nclass WeblateBuildPy(build_py):\n def find_package_modules(self, package, package_dir):\n \"\"\"Filter settings.py from built module.\"\"\"\n result = super().find_package_modules(package, package_dir)\n return [item for item in result if item[2] != \"weblate/settings.py\"]\n\n\nclass BuildMo(Command):\n description = \"update MO files to match PO\"\n user_options = []\n\n def 
initialize_options(self) -> None:\n self.build_base = None\n\n def finalize_options(self) -> None:\n self.set_undefined_options(\"build\", (\"build_base\", \"build_base\"))\n\n def run(self) -> None:\n for name in chain.from_iterable(glob(mask) for mask in LOCALE_MASKS):\n output = os.path.splitext(name)[0] + \".mo\"\n if not newer(name, output):\n continue\n self.announce(f\"compiling {name} -> {output}\", level=log.INFO)\n with open(name, \"rb\") as pofile, open(output, \"wb\") as mofile:\n convertmo(pofile, mofile, None)\n\n\nclass WeblateBuild(build):\n \"\"\"Override the default build with new subcommands.\"\"\"\n\n # The build_mo has to be before build_data\n sub_commands = [\n (\"build_mo\", lambda self: True), # noqa: ARG005\n *build.sub_commands,\n ]\n\n\nsetup(\n cmdclass={\"build_py\": WeblateBuildPy, \"build_mo\": BuildMo, \"build\": WeblateBuild},\n)\n", "path": "setup.py"}]} | 1,744 | 106 |
gh_patches_debug_5037 | rasdani/github-patches | git_diff | facebookresearch__hydra-793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] MISSING for Object Conf cls argument
# 🐛 Bug
OmegaConf cls argument should not be a mandatory value if target is defined. Can we change this to be an optional value with None being the default?
** Stack trace/error message **
```
omegaconf.errors.MissingMandatoryValue: Missing mandatory value: scheduler.cls
full_key: scheduler.cls
reference_type=ObjectConf
object_type=ObjectConf
```
## System information
- **Hydra Version** : 1.0.0rc2
- **Python version** : 3.7.7
</issue>
<code>
[start of hydra/types.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from enum import Enum
4 from typing import Any, Callable, Dict
5
6 from omegaconf import MISSING
7
8 TaskFunction = Callable[[Any], Any]
9
10
11 @dataclass
12 # This extends Dict[str, Any] to allow for the deprecated "class" field.
13 # Once support for class field removed this can stop extending Dict.
14 class ObjectConf(Dict[str, Any]):
15 # class, class method or function name
16 target: str = MISSING
17
18 # parameters to pass to cls when calling it
19 params: Any = field(default_factory=dict)
20
21 # cls is deprecated, use target, cls will be removed in Hydra 1.1
22 cls: str = MISSING
23
24 # class is deprecated, use target, class will be removed in Hydra 1.1
25 # (class is Python keyword and is only supported through DictConfig)
26 # class: str = MISSING
27
28
29 class RunMode(Enum):
30 RUN = 1
31 MULTIRUN = 2
32
[end of hydra/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/types.py b/hydra/types.py
--- a/hydra/types.py
+++ b/hydra/types.py
@@ -18,13 +18,6 @@
# parameters to pass to cls when calling it
params: Any = field(default_factory=dict)
- # cls is deprecated, use target, cls will be removed in Hydra 1.1
- cls: str = MISSING
-
- # class is deprecated, use target, class will be removed in Hydra 1.1
- # (class is Python keyword and is only supported through DictConfig)
- # class: str = MISSING
-
class RunMode(Enum):
RUN = 1
| {"golden_diff": "diff --git a/hydra/types.py b/hydra/types.py\n--- a/hydra/types.py\n+++ b/hydra/types.py\n@@ -18,13 +18,6 @@\n # parameters to pass to cls when calling it\n params: Any = field(default_factory=dict)\n \n- # cls is deprecated, use target, cls will be removed in Hydra 1.1\n- cls: str = MISSING\n-\n- # class is deprecated, use target, class will be removed in Hydra 1.1\n- # (class is Python keyword and is only supported through DictConfig)\n- # class: str = MISSING\n-\n \n class RunMode(Enum):\n RUN = 1\n", "issue": "[Bug] MISSING for Object Conf cls argument\n# \ud83d\udc1b Bug\r\n\r\nOmegaConf cls argument should not be a mandatory value if target is defined. Can we change this to be an optional value with None being the default?\r\n \r\n** Stack trace/error message **\r\n```\r\nomegaconf.errors.MissingMandatoryValue: Missing mandatory value: scheduler.cls\r\n full_key: scheduler.cls\r\n reference_type=ObjectConf\r\n object_type=ObjectConf\r\n```\r\n\r\n\r\n## System information\r\n- **Hydra Version** : 1.0.0rc2\r\n- **Python version** : 3.7.7\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import Any, Callable, Dict\n\nfrom omegaconf import MISSING\n\nTaskFunction = Callable[[Any], Any]\n\n\n@dataclass\n# This extends Dict[str, Any] to allow for the deprecated \"class\" field.\n# Once support for class field removed this can stop extending Dict.\nclass ObjectConf(Dict[str, Any]):\n # class, class method or function name\n target: str = MISSING\n\n # parameters to pass to cls when calling it\n params: Any = field(default_factory=dict)\n\n # cls is deprecated, use target, cls will be removed in Hydra 1.1\n cls: str = MISSING\n\n # class is deprecated, use target, class will be removed in Hydra 1.1\n # (class is Python keyword and is only supported through DictConfig)\n # class: str = MISSING\n\n\nclass RunMode(Enum):\n RUN = 1\n MULTIRUN = 2\n", "path": "hydra/types.py"}]} | 954 | 157 |
gh_patches_debug_9165 | rasdani/github-patches | git_diff | microsoft__DeepSpeed-1921 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] No module named 'fcntl' while importing the package
Hey,
Working on windows 11, Python 3.7 and tried importing the package.
Got the error in the title, is there a way around it since it's exclusive to Linux?

</issue>
<code>
[start of deepspeed/utils/debug.py]
1 """ debug utils """
2
3 import fcntl
4
5 # for debug purposes map module and param objects to their fully qualified names
6 module_names = {}
7 param_names = {}
8
9
10 def debug_extract_module_and_param_names(model):
11 # extract the fully qualified names as soon as the model is acquired
12 global module_names
13 global param_names
14 # XXX: can probably make a map of param2module and vice-versa
15 module_names = {module: name for name, module in model.named_modules()}
16 param_names = {param: name for name, param in model.named_parameters()}
17
18
19 def debug_module2name(module):
20 if module in module_names:
21 return module_names[module]
22 else:
23 return "unknown"
24
25
26 def debug_module2name_id(module):
27 return f"name={debug_module2name(module)} id={module.id}"
28
29
30 def debug_module2name_class(module):
31 return f"name={debug_module2name(module)} {module.__class__.__name__}"
32
33
34 def debug_param2name(param):
35 if param in param_names:
36 return param_names[param]
37 else:
38 return "unknown"
39
40
41 def debug_param2name_id(param):
42 return f"name={debug_param2name(param)} id={param.ds_id}"
43
44
45 def debug_param2name_id_shape(param):
46 return f"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape}"
47
48
49 def debug_param2name_id_shape_device(param):
50 return f"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape} device={param.device}"
51
52
53 def debug_param2name_id_numel(param):
54 return f"name={debug_param2name(param)} id={param.ds_id} numel={param.numel()}"
55
56
57 def debug_param2name_id_shape_status(param):
58 return f"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape} status={param.ds_status}"
59
60
61 def printflock(*msgs):
62 """
63
64 For printing messages for all concurrent gpus w/o getting interleaved text.
65
66 This is useful when debugging issues where multi-gpus don't sync.
67
68 1. Enable the force debug in say partitioning and zero3 files
69 2. Override the usual versions with ::
70
71 def print_rank_0(message, debug=False, force=False):
72 rank = torch.distributed.get_rank()
73 printflock(f"[{rank}] {message}")
74 3. run the program and you get both logs non-interleaved
75
76 But this makes it very difficult to make sense of the output, so the ``log_rank_file`` helper
77 function might be more useful, as it's easier to send each log stream into a separate file and
78 then compare those.
79
80 """
81
82 with open(__file__, "r") as fh:
83 fcntl.flock(fh, fcntl.LOCK_EX)
84 try:
85 print(*msgs)
86 finally:
87 fcntl.flock(fh, fcntl.LOCK_UN)
88
89
90 fh = None
91
92
93 def log_rank_file(rank, *msgs):
94 """
95 Print to a log file of the given rank
96
97 This is useful for debugging hanging in sync processes. Here is a possible workflow:
98
99 1. Enable the force debug in say partitioning and zero3 files
100 2. Override the usual versions of print_rank_0 in those files with ::
101
102 def print_rank_0(message, debug=False, force=False):
103 rank = torch.distributed.get_rank()
104 log_rank_file(rank, message)
105
106 3. run the program
107 4. fix up the expected differences, e.g. different cuda numbers ::
108
109 perl -pi -e 's|cuda:1|cuda:0|' log_rank_*
110
111 5. now diff and see where names and ids diverge - you will find where the gpus don't do the same
112 work (e.g. when some layers get conditionally skipped on one gpu but not all)
113
114 diff -u log_rank_0.txt log_rank_1.txt | less
115
116 """
117 global fh
118 if fh is None:
119 fh = open(f"log_rank_{rank}.txt", "w")
120 for m in msgs:
121 fh.write(f"{m}\n")
122 fh.flush()
123
124
125 def print_backward_tensors(tensor):
126 def _print_bwd_tensors(grad_fn):
127 print(f"Backward tensors in {grad_fn}")
128 for funcs in grad_fn.next_functions:
129 if funcs[0]:
130 try:
131 tensor = getattr(funcs[0], 'variable')
132 print(funcs[0])
133 print(
134 f"Tensor - id: {id(tensor)}, shape: {tensor.shape}, data: {tensor}, grad: {tensor.grad}"
135 )
136 except AttributeError as e:
137 _print_bwd_tensors(funcs[0])
138
139 if hasattr(tensor, 'grad_fn'):
140 _print_bwd_tensors(tensor.grad_fn)
141
[end of deepspeed/utils/debug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/deepspeed/utils/debug.py b/deepspeed/utils/debug.py
--- a/deepspeed/utils/debug.py
+++ b/deepspeed/utils/debug.py
@@ -1,6 +1,7 @@
""" debug utils """
-import fcntl
+# For lazy import with printflock()
+fcntl = None
# for debug purposes map module and param objects to their fully qualified names
module_names = {}
@@ -78,6 +79,9 @@
then compare those.
"""
+ global fcntl
+ if fcntl == None:
+ import fcntl
with open(__file__, "r") as fh:
fcntl.flock(fh, fcntl.LOCK_EX)
| {"golden_diff": "diff --git a/deepspeed/utils/debug.py b/deepspeed/utils/debug.py\n--- a/deepspeed/utils/debug.py\n+++ b/deepspeed/utils/debug.py\n@@ -1,6 +1,7 @@\n \"\"\" debug utils \"\"\"\n \n-import fcntl\n+# For lazy import with printflock()\n+fcntl = None\n \n # for debug purposes map module and param objects to their fully qualified names\n module_names = {}\n@@ -78,6 +79,9 @@\n then compare those.\n \n \"\"\"\n+ global fcntl\n+ if fcntl == None:\n+ import fcntl\n \n with open(__file__, \"r\") as fh:\n fcntl.flock(fh, fcntl.LOCK_EX)\n", "issue": "[BUG] No module named 'fcntl' while importing the package\nHey,\r\nWorking on windows 11, Python 3.7 and tried importing the package.\r\nGot the error in the title, is there a way around it since it's exclusive to Linux?\r\n\r\n\r\n \n", "before_files": [{"content": "\"\"\" debug utils \"\"\"\n\nimport fcntl\n\n# for debug purposes map module and param objects to their fully qualified names\nmodule_names = {}\nparam_names = {}\n\n\ndef debug_extract_module_and_param_names(model):\n # extract the fully qualified names as soon as the model is acquired\n global module_names\n global param_names\n # XXX: can probably make a map of param2module and vice-versa\n module_names = {module: name for name, module in model.named_modules()}\n param_names = {param: name for name, param in model.named_parameters()}\n\n\ndef debug_module2name(module):\n if module in module_names:\n return module_names[module]\n else:\n return \"unknown\"\n\n\ndef debug_module2name_id(module):\n return f\"name={debug_module2name(module)} id={module.id}\"\n\n\ndef debug_module2name_class(module):\n return f\"name={debug_module2name(module)} {module.__class__.__name__}\"\n\n\ndef debug_param2name(param):\n if param in param_names:\n return param_names[param]\n else:\n return \"unknown\"\n\n\ndef debug_param2name_id(param):\n return f\"name={debug_param2name(param)} id={param.ds_id}\"\n\n\ndef debug_param2name_id_shape(param):\n return f\"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape}\"\n\n\ndef debug_param2name_id_shape_device(param):\n return f\"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape} device={param.device}\"\n\n\ndef debug_param2name_id_numel(param):\n return f\"name={debug_param2name(param)} id={param.ds_id} numel={param.numel()}\"\n\n\ndef debug_param2name_id_shape_status(param):\n return f\"name={debug_param2name(param)} id={param.ds_id} shape={param.data.shape} status={param.ds_status}\"\n\n\ndef printflock(*msgs):\n \"\"\"\n\n For printing messages for all concurrent gpus w/o getting interleaved text.\n\n This is useful when debugging issues where multi-gpus don't sync.\n\n 1. Enable the force debug in say partitioning and zero3 files\n 2. Override the usual versions with ::\n\n def print_rank_0(message, debug=False, force=False):\n rank = torch.distributed.get_rank()\n printflock(f\"[{rank}] {message}\")\n 3. run the program and you get both logs non-interleaved\n\n But this makes it very difficult to make sense of the output, so the ``log_rank_file`` helper\n function might be more useful, as it's easier to send each log stream into a separate file and\n then compare those.\n\n \"\"\"\n\n with open(__file__, \"r\") as fh:\n fcntl.flock(fh, fcntl.LOCK_EX)\n try:\n print(*msgs)\n finally:\n fcntl.flock(fh, fcntl.LOCK_UN)\n\n\nfh = None\n\n\ndef log_rank_file(rank, *msgs):\n \"\"\"\n Print to a log file of the given rank\n\n This is useful for debugging hanging in sync processes. 
Here is a possible workflow:\n\n 1. Enable the force debug in say partitioning and zero3 files\n 2. Override the usual versions of print_rank_0 in those files with ::\n\n def print_rank_0(message, debug=False, force=False):\n rank = torch.distributed.get_rank()\n log_rank_file(rank, message)\n\n 3. run the program\n 4. fix up the expected differences, e.g. different cuda numbers ::\n\n perl -pi -e 's|cuda:1|cuda:0|' log_rank_*\n\n 5. now diff and see where names and ids diverge - you will find where the gpus don't do the same\n work (e.g. when some layers get conditionally skipped on one gpu but not all)\n\n diff -u log_rank_0.txt log_rank_1.txt | less\n\n \"\"\"\n global fh\n if fh is None:\n fh = open(f\"log_rank_{rank}.txt\", \"w\")\n for m in msgs:\n fh.write(f\"{m}\\n\")\n fh.flush()\n\n\ndef print_backward_tensors(tensor):\n def _print_bwd_tensors(grad_fn):\n print(f\"Backward tensors in {grad_fn}\")\n for funcs in grad_fn.next_functions:\n if funcs[0]:\n try:\n tensor = getattr(funcs[0], 'variable')\n print(funcs[0])\n print(\n f\"Tensor - id: {id(tensor)}, shape: {tensor.shape}, data: {tensor}, grad: {tensor.grad}\"\n )\n except AttributeError as e:\n _print_bwd_tensors(funcs[0])\n\n if hasattr(tensor, 'grad_fn'):\n _print_bwd_tensors(tensor.grad_fn)\n", "path": "deepspeed/utils/debug.py"}]} | 2,029 | 153 |
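
The fix recorded above works by deferring the POSIX-only `fcntl` import until `printflock()` is actually called, so merely importing the package no longer fails on Windows. A minimal, self-contained sketch of that lazy-import pattern (independent of DeepSpeed's module layout; the function body is simplified) might look like:

```python
# Lazy-import pattern: the platform-specific module is bound at module level as
# None and only imported on first use, so any ImportError is deferred from
# package import time to the first call on an unsupported platform.
fcntl = None


def printflock(*msgs):
    """Print atomically across concurrent processes (POSIX only)."""
    global fcntl
    if fcntl is None:
        import fcntl  # rebinds the module-level name declared global above

    with open(__file__, "r") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # serialize printing via an advisory file lock
        try:
            print(*msgs)
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```

On Windows the import error now surfaces only if this debugging helper is actually invoked, which matches the behaviour the golden diff introduces.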
gh_patches_debug_31923 | rasdani/github-patches | git_diff | alpa-projects__alpa-511 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding the `pjit` in the comparison
Some people are more familiar with using model parallelism via [`pjit`](https://github.com/google/jax/blob/main/jax/experimental/pjit.py). What about adding one more row [here](https://github.com/alpa-projects/alpa/blob/main/docs/gallery/tutorials/alpa_vs_pmap.py#L46-L52)?
</issue>
<code>
[start of docs/gallery/tutorials/alpa_vs_pmap.py]
1 """
2 Differences between alpa.parallelize and jax.pmap
3 =================================================
4
5 The most common tool for parallelization or distributed computing in jax is
6 `pmap <https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap>`_.
7 With several lines of code change, we can use ``pmap`` for data parallel
8 training. However, we cannot use ``pmap`` for model parallel training,
9 which is required for training large models with billions of parameters.
10
11 On the contrary, ``alpa.parallelize`` supports both data parallelism and
12 model parallelism in an automatic way. ``alpa.parallelize`` analyzes the
13 jax computational graph and picks the best strategy.
14 If data parallelism is more suitable, ``alpa.parallelize`` achieves the same
15 performance as ``pmap`` but with less code change.
16 If model parallelism is more suitable, ``alpa.parallelize`` achieves better performance
17 and uses less memory than ``pmap``.
18
19 In this tutorial, we are going to compare ``alpa.parallelize`` and ``pmap`` on two
20 workloads. A more detailed comparison among ``alpa.parallelize``, ``pmap``, and ``xmap``
21 is also attached at the end of the article.
22 """
23
24 ################################################################################
25 # When data parallelism is prefered
26 # ---------------------------------
27
28 # TODO
29
30 ################################################################################
31 # When model parallelism is prefered
32 # ----------------------------------
33
34 # TODO
35
36 ################################################################################
37 # Comparing ``alpa.parallelize``, ``pmap``, and ``xmap``
38 # ------------------------------------------------------
39 # Besides ``pmap``, jax also provides
40 # `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_
41 # for more advanced parallelization.
42 # The table below compares the features of ``alpa.parallelize``, ``pmap``, and ``xmap``.
43 # In summary, ``alpa.parallelize`` supports more parallelism techniques in a
44 # more automatic way.
45 #
46 # ================ ================ ==================== ==================== =========
47 # Transformation Data Parallelism Operator Parallelism Pipeline Parallelism Automated
48 # ================ ================ ==================== ==================== =========
49 # alpa.parallelize yes yes yes yes
50 # pmap yes no no no
51 # xmap yes yes no no
52 # ================ ================ ==================== ==================== =========
53 #
54 # .. note::
55 # Operator parallelism and pipeline parallelism are two forms of model parallelism.
56 # Operator parallelism partitions the work in a single operator and assigns them
57 # to different devices. Pipeline parallelism partitions the computational
58 # graphs and assigns different operators to different devices.
59
[end of docs/gallery/tutorials/alpa_vs_pmap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/gallery/tutorials/alpa_vs_pmap.py b/docs/gallery/tutorials/alpa_vs_pmap.py
--- a/docs/gallery/tutorials/alpa_vs_pmap.py
+++ b/docs/gallery/tutorials/alpa_vs_pmap.py
@@ -34,14 +34,15 @@
# TODO
################################################################################
-# Comparing ``alpa.parallelize``, ``pmap``, and ``xmap``
-# ------------------------------------------------------
+# Comparing ``alpa.parallelize``, ``pmap``, ``xmap``, and ``pjit``
+# -----------------------------------------------------------------
# Besides ``pmap``, jax also provides
-# `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_
+# `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_ and
+# `pjit <https://jax.readthedocs.io/en/latest/jax-101/08-pjit.html>`_
# for more advanced parallelization.
-# The table below compares the features of ``alpa.parallelize``, ``pmap``, and ``xmap``.
-# In summary, ``alpa.parallelize`` supports more parallelism techniques in a
-# more automatic way.
+# The table below compares the features of ``alpa.parallelize``, ``pmap``, ``xmap``
+# and ``pjit``. In summary, ``alpa.parallelize`` supports more parallelism
+# techniques in a more automatic way.
#
# ================ ================ ==================== ==================== =========
# Transformation Data Parallelism Operator Parallelism Pipeline Parallelism Automated
@@ -49,6 +50,7 @@
# alpa.parallelize yes yes yes yes
# pmap yes no no no
# xmap yes yes no no
+# pjit yes yes no no
# ================ ================ ==================== ==================== =========
#
# .. note::
| {"golden_diff": "diff --git a/docs/gallery/tutorials/alpa_vs_pmap.py b/docs/gallery/tutorials/alpa_vs_pmap.py\n--- a/docs/gallery/tutorials/alpa_vs_pmap.py\n+++ b/docs/gallery/tutorials/alpa_vs_pmap.py\n@@ -34,14 +34,15 @@\n # TODO\n \n ################################################################################\n-# Comparing ``alpa.parallelize``, ``pmap``, and ``xmap``\n-# ------------------------------------------------------\n+# Comparing ``alpa.parallelize``, ``pmap``, ``xmap``, and ``pjit``\n+# -----------------------------------------------------------------\n # Besides ``pmap``, jax also provides\n-# `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_\n+# `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_ and \n+# `pjit <https://jax.readthedocs.io/en/latest/jax-101/08-pjit.html>`_\n # for more advanced parallelization.\n-# The table below compares the features of ``alpa.parallelize``, ``pmap``, and ``xmap``.\n-# In summary, ``alpa.parallelize`` supports more parallelism techniques in a\n-# more automatic way.\n+# The table below compares the features of ``alpa.parallelize``, ``pmap``, ``xmap`` \n+# and ``pjit``. In summary, ``alpa.parallelize`` supports more parallelism \n+# techniques in a more automatic way.\n #\n # ================ ================ ==================== ==================== =========\n # Transformation Data Parallelism Operator Parallelism Pipeline Parallelism Automated\n@@ -49,6 +50,7 @@\n # alpa.parallelize yes yes yes yes\n # pmap yes no no no\n # xmap yes yes no no\n+# pjit yes yes no no\n # ================ ================ ==================== ==================== =========\n #\n # .. note::\n", "issue": "Adding the `pjit` in the comparison\nSome people are more familiar with using model parallel via [`pjit`](https://github.com/google/jax/blob/main/jax/experimental/pjit.py). What about adding one more rows [here](https://github.com/alpa-projects/alpa/blob/main/docs/gallery/tutorials/alpa_vs_pmap.py#L46-L52)?\n", "before_files": [{"content": "\"\"\"\nDifferences between alpa.parallelize and jax.pmap\n=================================================\n\nThe most common tool for parallelization or distributed computing in jax is\n`pmap <https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap>`_.\nWith several lines of code change, we can use ``pmap`` for data parallel\ntraining. However, we cannot use ``pmap`` for model parallel training,\nwhich is required for training large models with billions of parameters.\n\nOn the contrary, ``alpa.parallelize`` supports both data parallelism and\nmodel parallelism in an automatic way. ``alpa.parallelize`` analyzes the\njax computational graph and picks the best strategy.\nIf data parallelism is more suitable, ``alpa.parallelize`` achieves the same\nperformance as ``pmap`` but with less code change.\nIf model parallelism is more suitable, ``alpa.parallelize`` achieves better performance\nand uses less memory than ``pmap``.\n\nIn this tutorial, we are going to compare ``alpa.parallelize`` and ``pmap`` on two\nworkloads. 
A more detailed comparison among ``alpa.parallelize``, ``pmap``, and ``xmap``\nis also attached at the end of the article.\n\"\"\"\n\n################################################################################\n# When data parallelism is prefered\n# ---------------------------------\n\n# TODO\n\n################################################################################\n# When model parallelism is prefered\n# ----------------------------------\n\n# TODO\n\n################################################################################\n# Comparing ``alpa.parallelize``, ``pmap``, and ``xmap``\n# ------------------------------------------------------\n# Besides ``pmap``, jax also provides\n# `xmap <https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html>`_\n# for more advanced parallelization.\n# The table below compares the features of ``alpa.parallelize``, ``pmap``, and ``xmap``.\n# In summary, ``alpa.parallelize`` supports more parallelism techniques in a\n# more automatic way.\n#\n# ================ ================ ==================== ==================== =========\n# Transformation Data Parallelism Operator Parallelism Pipeline Parallelism Automated\n# ================ ================ ==================== ==================== =========\n# alpa.parallelize yes yes yes yes\n# pmap yes no no no\n# xmap yes yes no no\n# ================ ================ ==================== ==================== =========\n#\n# .. note::\n# Operator parallelism and pipeline parallelism are two forms of model parallelism.\n# Operator parallelism partitions the work in a single operator and assigns them\n# to different devices. Pipeline parallelism partitions the computational\n# graphs and assigns different operators to different devices.\n", "path": "docs/gallery/tutorials/alpa_vs_pmap.py"}]} | 1,299 | 425 |
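
For readers less familiar with the transforms compared in this record, the `pmap` row of the table corresponds to plain data parallelism: the same program is replicated on every device and the leading batch axis is split across them. A small illustrative sketch (ordinary JAX, nothing Alpa-specific; shapes chosen arbitrarily):

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()


@jax.pmap  # replicate the function across all local devices (data parallelism)
def step(x):
    return jnp.sin(x) * 2.0


# The leading axis must equal the device count; each device gets one slice.
xs = jnp.arange(n_dev * 4, dtype=jnp.float32).reshape(n_dev, 4)
ys = step(xs)  # same (n_dev, 4) layout on output, one shard per device
```

Operator- and pipeline-parallel strategies (the other columns of the table) require annotating or partitioning the computation itself, which is what `xmap`, `pjit`, and `alpa.parallelize` provide — the last of them automatically, per the table.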
gh_patches_debug_4479 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3709 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[functional testing] Fix staging CI job on tbb-0.9.0
We removed the application/functional test run from the staging environment in #3697. We should also update the testinfra test references and remove the application test run from CI, otherwise we get a few testinfra test failures due to pip deps, and an error when we attempt to run the application tests in CI:
```
TASK [Run application tests] ***************************************************
Friday 10 August 2018 19:28:17 +0000 (0:00:00.037) 0:01:08.223 *********
fatal: [app-staging]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "Shared connection to 52.36.194.59 closed.\r\n", "stdout": "/home/sdrop/.ansible/tmp/ansible-tmp-1533929297.62-93522333058246/app-tests.sh: line 13: pytest: command not found\r\n", "stdout_lines": ["/home/sdrop/.ansible/tmp/ansible-tmp-1533929297.62-93522333058246/app-tests.sh: line 13: pytest: command not found"]}
...ignoring
```
</issue>
<code>
[start of securedrop/create-dev-data.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import datetime
5 import os
6 import sys
7 import argparse
8 from sqlalchemy.exc import IntegrityError
9
10 os.environ["SECUREDROP_ENV"] = "dev" # noqa
11 import journalist_app
12 from sdconfig import config
13 from db import db
14 from models import Journalist, Source, Submission
15
16
17 def add_test_user(username, password, otp_secret, is_admin=False):
18 context = journalist_app.create_app(config).app_context()
19 context.push()
20
21 try:
22 user = Journalist(username=username,
23 password=password,
24 is_admin=is_admin)
25 user.otp_secret = otp_secret
26 db.session.add(user)
27 db.session.commit()
28 print('Test user successfully added: '
29 'username={}, password={}, otp_secret={}, is_admin={}'
30 ''.format(username, password, otp_secret, is_admin))
31 except IntegrityError:
32 print("Test user already added")
33 db.session.rollback()
34
35 context.pop()
36
37
38 def create_source_and_submissions(num_submissions=2):
39 app = journalist_app.create_app(config)
40
41 with app.app_context():
42 # Store source in database
43 codename = app.crypto_util.genrandomid()
44 filesystem_id = app.crypto_util.hash_codename(codename)
45 journalist_designation = app.crypto_util.display_id()
46 source = Source(filesystem_id, journalist_designation)
47 source.pending = False
48 db.session.add(source)
49 db.session.commit()
50
51 # Generate submissions directory and generate source key
52 os.mkdir(app.storage.path(source.filesystem_id))
53 app.crypto_util.genkeypair(source.filesystem_id, codename)
54
55 # Generate some test submissions
56 for _ in range(num_submissions):
57 source.interaction_count += 1
58 fpath = app.storage.save_message_submission(
59 source.filesystem_id,
60 source.interaction_count,
61 source.journalist_filename,
62 'test submission!'
63 )
64 source.last_updated = datetime.datetime.utcnow()
65 submission = Submission(source, fpath)
66 db.session.add(submission)
67
68 db.session.commit()
69 print("Test source '{}' added with {} submissions".format(
70 journalist_designation, num_submissions)
71 )
72
73
74 if __name__ == "__main__": # pragma: no cover
75 # Add two test users
76 test_password = "correct horse battery staple profanity oil chewy"
77 test_otp_secret = "JHCOGO7VCER3EJ4L"
78
79 parser = argparse.ArgumentParser()
80 parser.add_argument("--staging", help="Adding user for staging tests.",
81 action="store_true")
82 args = parser.parse_args()
83 add_test_user("journalist",
84 test_password,
85 test_otp_secret,
86 is_admin=True)
87
88 # If staging, we only need the journalist user (admin)
89 if args.staging:
90 sys.exit(0)
91
92 add_test_user("dellsberg",
93 test_password,
94 test_otp_secret,
95 is_admin=False)
96
97 # Add test sources and submissions
98 num_sources = 2
99 for _ in range(num_sources):
100 create_source_and_submissions()
101
[end of securedrop/create-dev-data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/create-dev-data.py b/securedrop/create-dev-data.py
--- a/securedrop/create-dev-data.py
+++ b/securedrop/create-dev-data.py
@@ -78,7 +78,7 @@
parser = argparse.ArgumentParser()
parser.add_argument("--staging", help="Adding user for staging tests.",
- action="store_true")
+ action="store_true")
args = parser.parse_args()
add_test_user("journalist",
test_password,
| {"golden_diff": "diff --git a/securedrop/create-dev-data.py b/securedrop/create-dev-data.py\n--- a/securedrop/create-dev-data.py\n+++ b/securedrop/create-dev-data.py\n@@ -78,7 +78,7 @@\n \n parser = argparse.ArgumentParser()\n parser.add_argument(\"--staging\", help=\"Adding user for staging tests.\",\n- action=\"store_true\")\n+ action=\"store_true\")\n args = parser.parse_args()\n add_test_user(\"journalist\",\n test_password,\n", "issue": "[functional testing] Fix staging CI job on tbb-0.9.0\nWe removed the application/functional test run from the staging environment in #3697. We should also update the testinfra test references and remove the application test run from CI, otherwise we get a few testinfra test failures due to pip deps, and an error when we attempt to run the application tests in CI: \r\n\r\n```\r\nTASK [Run application tests] ***************************************************\r\n Friday 10 August 2018 19:28:17 +0000 (0:00:00.037) 0:01:08.223 *********\r\n fatal: [app-staging]: FAILED! => {\"changed\": true, \"msg\": \"non-zero return code\", \"rc\": 127, \"stderr\": \"Shared connection to 52.36.194.59 closed.\\r\\n\", \"stdout\": \"/home/sdrop/.ansible/tmp/ansible-tmp-1533929297.62-93522333058246/app-tests.sh: line 13: pytest: command not found\\r\\n\", \"stdout_lines\": [\"/home/sdrop/.ansible/tmp/ansible-tmp-1533929297.62-93522333058246/app-tests.sh: line 13: pytest: command not found\"]}\r\n ...ignoring\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport datetime\nimport os\nimport sys\nimport argparse\nfrom sqlalchemy.exc import IntegrityError\n\nos.environ[\"SECUREDROP_ENV\"] = \"dev\" # noqa\nimport journalist_app\nfrom sdconfig import config\nfrom db import db\nfrom models import Journalist, Source, Submission\n\n\ndef add_test_user(username, password, otp_secret, is_admin=False):\n context = journalist_app.create_app(config).app_context()\n context.push()\n\n try:\n user = Journalist(username=username,\n password=password,\n is_admin=is_admin)\n user.otp_secret = otp_secret\n db.session.add(user)\n db.session.commit()\n print('Test user successfully added: '\n 'username={}, password={}, otp_secret={}, is_admin={}'\n ''.format(username, password, otp_secret, is_admin))\n except IntegrityError:\n print(\"Test user already added\")\n db.session.rollback()\n\n context.pop()\n\n\ndef create_source_and_submissions(num_submissions=2):\n app = journalist_app.create_app(config)\n\n with app.app_context():\n # Store source in database\n codename = app.crypto_util.genrandomid()\n filesystem_id = app.crypto_util.hash_codename(codename)\n journalist_designation = app.crypto_util.display_id()\n source = Source(filesystem_id, journalist_designation)\n source.pending = False\n db.session.add(source)\n db.session.commit()\n\n # Generate submissions directory and generate source key\n os.mkdir(app.storage.path(source.filesystem_id))\n app.crypto_util.genkeypair(source.filesystem_id, codename)\n\n # Generate some test submissions\n for _ in range(num_submissions):\n source.interaction_count += 1\n fpath = app.storage.save_message_submission(\n source.filesystem_id,\n source.interaction_count,\n source.journalist_filename,\n 'test submission!'\n )\n source.last_updated = datetime.datetime.utcnow()\n submission = Submission(source, fpath)\n db.session.add(submission)\n\n db.session.commit()\n print(\"Test source '{}' added with {} submissions\".format(\n journalist_designation, num_submissions)\n )\n\n\nif __name__ == \"__main__\": # 
pragma: no cover\n # Add two test users\n test_password = \"correct horse battery staple profanity oil chewy\"\n test_otp_secret = \"JHCOGO7VCER3EJ4L\"\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--staging\", help=\"Adding user for staging tests.\",\n action=\"store_true\")\n args = parser.parse_args()\n add_test_user(\"journalist\",\n test_password,\n test_otp_secret,\n is_admin=True)\n\n # If staging, we only need the journalist user (admin)\n if args.staging:\n sys.exit(0)\n\n add_test_user(\"dellsberg\",\n test_password,\n test_otp_secret,\n is_admin=False)\n\n # Add test sources and submissions\n num_sources = 2\n for _ in range(num_sources):\n create_source_and_submissions()\n", "path": "securedrop/create-dev-data.py"}]} | 1,736 | 109 |
gh_patches_debug_57973 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-1191 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[INF/CI] Add `--cov-append` for `pytest`
<!-- Thank you for your PR!
BEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.
The abbreviations can be:
- [DOC]: Documentation fixes.
- [ENH]: Code contributions and new features.
- [TST]: Test-related contributions.
- [INF]: Infrastructure-related contributions.
Also, do not forget to tag the relevant issue here as well.
Finally, as commits come in, don't forget to regularly rebase!
-->
# PR Description
Please describe the changes proposed in the pull request:
> Another reason code coverage failed is that pytest doesn't add `--cov-append` option.
`--cov-append` can get a sum coverage. I'll add this option in the next PR.
First let us merge `codecov.yml` into `tests.yml`. Keep the same test logic for the dev branch or a PR.
_Originally posted by @Zeroto521 in https://github.com/pyjanitor-devs/pyjanitor/issues/1185#issuecomment-1296479926_
<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->
<!-- Please also identify below which issue that has been raised that you are going to close. -->
<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. -->
# PR Checklist
<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with
the project, please feel free to delete this section. -->
Please ensure that you have done the following:
1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.
<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->
2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.
<!-- We'd like to acknowledge your contributions! -->
3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is "on deck") describing the contribution.
- Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.
# Automatic checks
There will be automatic checks run on the PR. These include:
- Building a preview of the docs on Netlify
- Automatically linting the code
- Making sure the code is documented
- Making sure that all tests are passed
- Making sure that code coverage doesn't go down.
# Relevant Reviewers
<!-- Finally, please tag relevant maintainers to review. -->
Please tag maintainers to review.
- @ericmjl
</issue>
<code>
[start of janitor/accessors/__init__.py]
1 """Miscellaneous mathematical operators.
2
3 Lazy loading used here to speed up imports.
4 """
5
6 import warnings
7 from typing import Tuple
8
9
10 import lazy_loader as lazy
11
12 scipy_special = lazy.load("scipy.special")
13 ss = lazy.load("scipy.stats")
14 pf = lazy.load("pandas_flavor")
15 pd = lazy.load("pandas")
16 np = lazy.load("numpy")
17 pdtypes = lazy.load("pandas.api.types")
18
[end of janitor/accessors/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/janitor/accessors/__init__.py b/janitor/accessors/__init__.py
--- a/janitor/accessors/__init__.py
+++ b/janitor/accessors/__init__.py
@@ -1,17 +1,3 @@
-"""Miscellaneous mathematical operators.
+"""Miscellaneous mathematical operators."""
-Lazy loading used here to speed up imports.
-"""
-
-import warnings
-from typing import Tuple
-
-
-import lazy_loader as lazy
-
-scipy_special = lazy.load("scipy.special")
-ss = lazy.load("scipy.stats")
-pf = lazy.load("pandas_flavor")
-pd = lazy.load("pandas")
-np = lazy.load("numpy")
-pdtypes = lazy.load("pandas.api.types")
+from janitor.accessors.data_description import DataDescription # noqa: F401
| {"golden_diff": "diff --git a/janitor/accessors/__init__.py b/janitor/accessors/__init__.py\n--- a/janitor/accessors/__init__.py\n+++ b/janitor/accessors/__init__.py\n@@ -1,17 +1,3 @@\n-\"\"\"Miscellaneous mathematical operators.\n+\"\"\"Miscellaneous mathematical operators.\"\"\"\n \n-Lazy loading used here to speed up imports.\n-\"\"\"\n-\n-import warnings\n-from typing import Tuple\n-\n-\n-import lazy_loader as lazy\n-\n-scipy_special = lazy.load(\"scipy.special\")\n-ss = lazy.load(\"scipy.stats\")\n-pf = lazy.load(\"pandas_flavor\")\n-pd = lazy.load(\"pandas\")\n-np = lazy.load(\"numpy\")\n-pdtypes = lazy.load(\"pandas.api.types\")\n+from janitor.accessors.data_description import DataDescription # noqa: F401\n", "issue": "[INF/CI] Add `--cov-append` for `pytest`\n<!-- Thank you for your PR!\r\n\r\nBEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.\r\n\r\nThe abbreviations can be:\r\n- [DOC]: Documentation fixes.\r\n- [ENH]: Code contributions and new features.\r\n- [TST]: Test-related contributions.\r\n- [INF]: Infrastructure-related contributions.\r\n\r\nAlso, do not forget to tag the relevant issue here as well.\r\n\r\nFinally, as commits come in, don't forget to regularly rebase!\r\n-->\r\n\r\n# PR Description\r\n\r\nPlease describe the changes proposed in the pull request:\r\n\r\n> Another reason code coverage failed is that pytest doesn't add `--cov-append` option.\r\n`--cov-append` can get a sum coverage. I'll add this option in the next PR.\r\nFirst let us merge `codecov.yml` into `tests.yml`. Keep the same test logic for the dev branch or a PR.\r\n\r\n_Originally posted by @Zeroto521 in https://github.com/pyjanitor-devs/pyjanitor/issues/1185#issuecomment-1296479926_\r\n\r\n<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->\r\n\r\n<!-- Please also identify below which issue that has been raised that you are going to close. -->\r\n\r\n<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. -->\r\n\r\n# PR Checklist\r\n\r\n<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with\r\nthe project, please feel free to delete this section. -->\r\n\r\nPlease ensure that you have done the following:\r\n\r\n1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.\r\n<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->\r\n2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.\r\n<!-- We'd like to acknowledge your contributions! -->\r\n3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is \"on deck\") describing the contribution.\r\n - Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.\r\n\r\n# Automatic checks\r\n\r\nThere will be automatic checks run on the PR. These include:\r\n\r\n- Building a preview of the docs on Netlify\r\n- Automatically linting the code\r\n- Making sure the code is documented\r\n- Making sure that all tests are passed\r\n- Making sure that code coverage doesn't go down.\r\n\r\n# Relevant Reviewers\r\n\r\n<!-- Finally, please tag relevant maintainers to review. 
-->\r\n\r\nPlease tag maintainers to review.\r\n\r\n- @ericmjl\r\n\n", "before_files": [{"content": "\"\"\"Miscellaneous mathematical operators.\n\nLazy loading used here to speed up imports.\n\"\"\"\n\nimport warnings\nfrom typing import Tuple\n\n\nimport lazy_loader as lazy\n\nscipy_special = lazy.load(\"scipy.special\")\nss = lazy.load(\"scipy.stats\")\npf = lazy.load(\"pandas_flavor\")\npd = lazy.load(\"pandas\")\nnp = lazy.load(\"numpy\")\npdtypes = lazy.load(\"pandas.api.types\")\n", "path": "janitor/accessors/__init__.py"}]} | 1,252 | 186 |
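
The key flag discussed in this PR, `--cov-append`, tells `pytest-cov` to add to an existing `.coverage` data file instead of erasing it at the start of the run, so coverage from several separate pytest invocations is summed before upload. A minimal illustration (the test paths are placeholders, not pyjanitor's actual layout):

```bash
pytest tests/unit --cov=janitor                       # first run: creates .coverage
pytest tests/functional --cov=janitor --cov-append    # second run: adds to it
coverage report                                       # reports the combined total
```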
gh_patches_debug_6944 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-1805 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logger is not passed to translator
When building the translator, there is a logger created but not passed to the translator:
https://github.com/OpenNMT/OpenNMT-py/blob/35cf4f0ae774a4aa500318879a1a4d53408ac129/onmt/bin/translate.py#L18
This results in a log file that only contains a single entry:
https://github.com/OpenNMT/OpenNMT-py/blob/35cf4f0ae774a4aa500318879a1a4d53408ac129/onmt/bin/translate.py#L24
</issue>
<code>
[start of onmt/bin/translate.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import unicode_literals
5
6 from onmt.utils.logging import init_logger
7 from onmt.utils.misc import split_corpus
8 from onmt.translate.translator import build_translator
9
10 import onmt.opts as opts
11 from onmt.utils.parse import ArgumentParser
12
13
14 def translate(opt):
15 ArgumentParser.validate_translate_opts(opt)
16 logger = init_logger(opt.log_file)
17
18 translator = build_translator(opt, report_score=True)
19 src_shards = split_corpus(opt.src, opt.shard_size)
20 tgt_shards = split_corpus(opt.tgt, opt.shard_size)
21 shard_pairs = zip(src_shards, tgt_shards)
22
23 for i, (src_shard, tgt_shard) in enumerate(shard_pairs):
24 logger.info("Translating shard %d." % i)
25 translator.translate(
26 src=src_shard,
27 tgt=tgt_shard,
28 src_dir=opt.src_dir,
29 batch_size=opt.batch_size,
30 batch_type=opt.batch_type,
31 attn_debug=opt.attn_debug,
32 align_debug=opt.align_debug
33 )
34
35
36 def _get_parser():
37 parser = ArgumentParser(description='translate.py')
38
39 opts.config_opts(parser)
40 opts.translate_opts(parser)
41 return parser
42
43
44 def main():
45 parser = _get_parser()
46
47 opt = parser.parse_args()
48 translate(opt)
49
50
51 if __name__ == "__main__":
52 main()
53
[end of onmt/bin/translate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onmt/bin/translate.py b/onmt/bin/translate.py
--- a/onmt/bin/translate.py
+++ b/onmt/bin/translate.py
@@ -15,7 +15,7 @@
ArgumentParser.validate_translate_opts(opt)
logger = init_logger(opt.log_file)
- translator = build_translator(opt, report_score=True)
+ translator = build_translator(opt, logger=logger, report_score=True)
src_shards = split_corpus(opt.src, opt.shard_size)
tgt_shards = split_corpus(opt.tgt, opt.shard_size)
shard_pairs = zip(src_shards, tgt_shards)
| {"golden_diff": "diff --git a/onmt/bin/translate.py b/onmt/bin/translate.py\n--- a/onmt/bin/translate.py\n+++ b/onmt/bin/translate.py\n@@ -15,7 +15,7 @@\n ArgumentParser.validate_translate_opts(opt)\n logger = init_logger(opt.log_file)\n \n- translator = build_translator(opt, report_score=True)\n+ translator = build_translator(opt, logger=logger, report_score=True)\n src_shards = split_corpus(opt.src, opt.shard_size)\n tgt_shards = split_corpus(opt.tgt, opt.shard_size)\n shard_pairs = zip(src_shards, tgt_shards)\n", "issue": "Logger is not passed to translator\nWhen building the translator, there is a logger created but not passed to the translator:\r\nhttps://github.com/OpenNMT/OpenNMT-py/blob/35cf4f0ae774a4aa500318879a1a4d53408ac129/onmt/bin/translate.py#L18\r\nThis results in a log file that only contains a single entry:\r\nhttps://github.com/OpenNMT/OpenNMT-py/blob/35cf4f0ae774a4aa500318879a1a4d53408ac129/onmt/bin/translate.py#L24\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\n\nfrom onmt.utils.logging import init_logger\nfrom onmt.utils.misc import split_corpus\nfrom onmt.translate.translator import build_translator\n\nimport onmt.opts as opts\nfrom onmt.utils.parse import ArgumentParser\n\n\ndef translate(opt):\n ArgumentParser.validate_translate_opts(opt)\n logger = init_logger(opt.log_file)\n\n translator = build_translator(opt, report_score=True)\n src_shards = split_corpus(opt.src, opt.shard_size)\n tgt_shards = split_corpus(opt.tgt, opt.shard_size)\n shard_pairs = zip(src_shards, tgt_shards)\n\n for i, (src_shard, tgt_shard) in enumerate(shard_pairs):\n logger.info(\"Translating shard %d.\" % i)\n translator.translate(\n src=src_shard,\n tgt=tgt_shard,\n src_dir=opt.src_dir,\n batch_size=opt.batch_size,\n batch_type=opt.batch_type,\n attn_debug=opt.attn_debug,\n align_debug=opt.align_debug\n )\n\n\ndef _get_parser():\n parser = ArgumentParser(description='translate.py')\n\n opts.config_opts(parser)\n opts.translate_opts(parser)\n return parser\n\n\ndef main():\n parser = _get_parser()\n\n opt = parser.parse_args()\n translate(opt)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "onmt/bin/translate.py"}]} | 1,111 | 141 |
gh_patches_debug_19727 | rasdani/github-patches | git_diff | facebookresearch__hydra-1424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to OmegaConf 2.1
OmegaConf 2.1 is adding many important new features.
For example:
* Powerful interpolation grammar supporting nested interpolations
* Relative interpolations
* And many many bug fixes
Release notes: [omegaconf==2.1.0.rc1](https://github.com/omry/omegaconf/releases/tag/v2.1.0.rc1).
</issue>
<code>
[start of plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass
3 from typing import Optional
4
5 from hydra.core.config_store import ConfigStore
6 from omegaconf import II
7
8
9 @dataclass
10 class RedisConf:
11 # host address via REDIS_HOST environment variable, default: localhost
12 host: str = II("env:REDIS_HOST,localhost")
13 # port via REDIS_PORT environment variable, default: 6379
14 port: int = II("env:REDIS_PORT,6379")
15 # database via REDIS_DB environment variable, default: 0
16 db: Optional[str] = II("env:REDIS_DB,0")
17 # password via REDIS_PASSWORD environment variable, default: no password
18 password: str = II("env:REDIS_PASSWORD,")
19 # switch to run without redis server in single thread, for testing purposes only
20 mock: bool = II("env:REDIS_MOCK,False")
21
22
23 @dataclass
24 class EnqueueConf:
25 # maximum runtime of the job before it's killed (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
26 job_timeout: Optional[str] = None
27 # maximum queued time before the job before is discarded (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
28 ttl: Optional[str] = None
29 # how long successful jobs and their results are kept (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
30 result_ttl: Optional[str] = None
31 # specifies how long failed jobs are kept (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
32 failure_ttl: Optional[str] = None
33 # place job at the front of the queue, instead of the back
34 at_front: bool = False
35 # job id, will be overidden automatically by a uuid unless specified explicitly
36 job_id: Optional[str] = None
37 # description, will be overidden automatically unless specified explicitly
38 description: Optional[str] = None
39
40
41 @dataclass
42 class RQLauncherConf:
43 _target_: str = "hydra_plugins.hydra_rq_launcher.rq_launcher.RQLauncher"
44 # enqueue configuration
45 enqueue: EnqueueConf = EnqueueConf()
46 # queue name
47 queue: str = "default"
48 # redis configuration
49 redis: RedisConf = RedisConf()
50 # stop after enqueueing by raising custom exception
51 stop_after_enqueue: bool = False
52 # wait time in seconds when polling results
53 wait_polling: float = 1.0
54
55
56 ConfigStore.instance().store(
57 group="hydra/launcher", name="rq", node=RQLauncherConf, provider="rq_launcher"
58 )
59
[end of plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py]
[start of plugins/hydra_ax_sweeper/setup.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from setuptools import find_namespace_packages, setup
4
5 with open("README.md", "r") as fh:
6 LONG_DESC = fh.read()
7 setup(
8 name="hydra-ax-sweeper",
9 version="1.1.0rc1",
10 author="Omry Yadan, Shagun Sodhani",
11 author_email="[email protected], [email protected]",
12 description="Hydra Ax Sweeper plugin",
13 long_description=LONG_DESC,
14 long_description_content_type="text/markdown",
15 url="https://github.com/facebookresearch/hydra/",
16 packages=find_namespace_packages(include=["hydra_plugins.*"]),
17 classifiers=[
18 "License :: OSI Approved :: MIT License",
19 "Programming Language :: Python :: 3.7",
20 "Programming Language :: Python :: 3.8",
21 "Programming Language :: Python :: 3.9",
22 "Operating System :: POSIX :: Linux",
23 "Operating System :: MacOS",
24 "Development Status :: 4 - Beta",
25 ],
26 install_requires=[
27 "hydra-core>=1.0.0",
28 "ax-platform>=0.1.13",
29 "numpy<1.20.0", # remove once ax is upgraded to support numpy 1.20
30 ],
31 include_package_data=True,
32 )
33
[end of plugins/hydra_ax_sweeper/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/hydra_ax_sweeper/setup.py b/plugins/hydra_ax_sweeper/setup.py
--- a/plugins/hydra_ax_sweeper/setup.py
+++ b/plugins/hydra_ax_sweeper/setup.py
@@ -25,8 +25,7 @@
],
install_requires=[
"hydra-core>=1.0.0",
- "ax-platform>=0.1.13",
- "numpy<1.20.0", # remove once ax is upgraded to support numpy 1.20
+ "ax-platform>=0.1.20",
],
include_package_data=True,
)
diff --git a/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py b/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py
--- a/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py
+++ b/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py
@@ -15,7 +15,7 @@
# database via REDIS_DB environment variable, default: 0
db: Optional[str] = II("env:REDIS_DB,0")
# password via REDIS_PASSWORD environment variable, default: no password
- password: str = II("env:REDIS_PASSWORD,")
+ password: str = II("env:REDIS_PASSWORD")
# switch to run without redis server in single thread, for testing purposes only
mock: bool = II("env:REDIS_MOCK,False")
| {"golden_diff": "diff --git a/plugins/hydra_ax_sweeper/setup.py b/plugins/hydra_ax_sweeper/setup.py\n--- a/plugins/hydra_ax_sweeper/setup.py\n+++ b/plugins/hydra_ax_sweeper/setup.py\n@@ -25,8 +25,7 @@\n ],\n install_requires=[\n \"hydra-core>=1.0.0\",\n- \"ax-platform>=0.1.13\",\n- \"numpy<1.20.0\", # remove once ax is upgraded to support numpy 1.20\n+ \"ax-platform>=0.1.20\",\n ],\n include_package_data=True,\n )\ndiff --git a/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py b/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py\n--- a/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py\n+++ b/plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py\n@@ -15,7 +15,7 @@\n # database via REDIS_DB environment variable, default: 0\n db: Optional[str] = II(\"env:REDIS_DB,0\")\n # password via REDIS_PASSWORD environment variable, default: no password\n- password: str = II(\"env:REDIS_PASSWORD,\")\n+ password: str = II(\"env:REDIS_PASSWORD\")\n # switch to run without redis server in single thread, for testing purposes only\n mock: bool = II(\"env:REDIS_MOCK,False\")\n", "issue": "Upgrade to OmegaConf 2.1\nOmegaConf 2.1 is adding many important new features.\r\nFor example:\r\n* Powerful interpolation grammar supporting nested interpolations\r\n* Relative interpolations\r\n* And many many bug fixes\r\n\r\nRelease notes: [omegaconf==2.1.0.rc1](https://github.com/omry/omegaconf/releases/tag/v2.1.0.rc1).\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nfrom hydra.core.config_store import ConfigStore\nfrom omegaconf import II\n\n\n@dataclass\nclass RedisConf:\n # host address via REDIS_HOST environment variable, default: localhost\n host: str = II(\"env:REDIS_HOST,localhost\")\n # port via REDIS_PORT environment variable, default: 6379\n port: int = II(\"env:REDIS_PORT,6379\")\n # database via REDIS_DB environment variable, default: 0\n db: Optional[str] = II(\"env:REDIS_DB,0\")\n # password via REDIS_PASSWORD environment variable, default: no password\n password: str = II(\"env:REDIS_PASSWORD,\")\n # switch to run without redis server in single thread, for testing purposes only\n mock: bool = II(\"env:REDIS_MOCK,False\")\n\n\n@dataclass\nclass EnqueueConf:\n # maximum runtime of the job before it's killed (e.g. \"1d\" for 1 day, units: d/h/m/s), default: no limit\n job_timeout: Optional[str] = None\n # maximum queued time before the job before is discarded (e.g. \"1d\" for 1 day, units: d/h/m/s), default: no limit\n ttl: Optional[str] = None\n # how long successful jobs and their results are kept (e.g. \"1d\" for 1 day, units: d/h/m/s), default: no limit\n result_ttl: Optional[str] = None\n # specifies how long failed jobs are kept (e.g. 
\"1d\" for 1 day, units: d/h/m/s), default: no limit\n failure_ttl: Optional[str] = None\n # place job at the front of the queue, instead of the back\n at_front: bool = False\n # job id, will be overidden automatically by a uuid unless specified explicitly\n job_id: Optional[str] = None\n # description, will be overidden automatically unless specified explicitly\n description: Optional[str] = None\n\n\n@dataclass\nclass RQLauncherConf:\n _target_: str = \"hydra_plugins.hydra_rq_launcher.rq_launcher.RQLauncher\"\n # enqueue configuration\n enqueue: EnqueueConf = EnqueueConf()\n # queue name\n queue: str = \"default\"\n # redis configuration\n redis: RedisConf = RedisConf()\n # stop after enqueueing by raising custom exception\n stop_after_enqueue: bool = False\n # wait time in seconds when polling results\n wait_polling: float = 1.0\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\", name=\"rq\", node=RQLauncherConf, provider=\"rq_launcher\"\n)\n", "path": "plugins/hydra_rq_launcher/hydra_plugins/hydra_rq_launcher/config.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom setuptools import find_namespace_packages, setup\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n name=\"hydra-ax-sweeper\",\n version=\"1.1.0rc1\",\n author=\"Omry Yadan, Shagun Sodhani\",\n author_email=\"[email protected], [email protected]\",\n description=\"Hydra Ax Sweeper plugin\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.0.0\",\n \"ax-platform>=0.1.13\",\n \"numpy<1.20.0\", # remove once ax is upgraded to support numpy 1.20\n ],\n include_package_data=True,\n )\n", "path": "plugins/hydra_ax_sweeper/setup.py"}]} | 1,766 | 343 |
gh_patches_debug_31077 | rasdani/github-patches | git_diff | sopel-irc__sopel-1441 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
imdb module not working anymore
I just noticed that anytime you make a call to imdb now the bot responds:
> [MOVIE] No API key provided.
I know it used to work, not sure how recently. Maybe it can be switched to a different database that doesn't require an API key?
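For context, omdbapi.com now requires a (free) API key on every request, which is why each lookup comes back with that error. A minimal sketch of what a keyed request would look like, purely for illustration; the `apikey` parameter name follows the public OMDb docs and the key value below is a placeholder, not anything from this repository:

```python
# Standalone sketch of a keyed OMDb lookup (not Sopel code).
import requests

OMDB_URL = "http://www.omdbapi.com/"
API_KEY = "your-omdb-api-key"  # placeholder; a real key is required


def lookup(title):
    params = {"t": title, "apikey": API_KEY}
    data = requests.get(OMDB_URL, params=params, timeout=30).json()
    if data.get("Response") == "False":
        return data.get("Error", "unknown error")
    return "{Title} ({Year}), rated {imdbRating}".format(**data)


if __name__ == "__main__":
    print(lookup("Citizen Kane"))
```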
</issue>
<code>
[start of sopel/modules/movie.py]
1 # coding=utf-8
2 """
3 imdb.py - Sopel Movie Information Module
4 Copyright © 2012-2013, Elad Alfassa, <[email protected]>
5 Licensed under the Eiffel Forum License 2.
6
7 This module relies on omdbapi.com
8 """
9 from __future__ import unicode_literals, absolute_import, print_function, division
10
11 import requests
12 import sopel.module
13 from sopel.logger import get_logger
14
15 LOGGER = get_logger(__name__)
16
17
18 @sopel.module.commands('movie', 'imdb')
19 @sopel.module.example('.movie ThisTitleDoesNotExist', '[MOVIE] Movie not found!')
20 @sopel.module.example('.movie Citizen Kane', '[MOVIE] Title: Citizen Kane | Year: 1941 | Rating: 8.4 | Genre: Drama, Mystery | IMDB Link: http://imdb.com/title/tt0033467')
21 def movie(bot, trigger):
22 """
23 Returns some information about a movie, like Title, Year, Rating, Genre and IMDB Link.
24 """
25 if not trigger.group(2):
26 return
27 word = trigger.group(2).rstrip()
28 uri = "http://www.omdbapi.com/"
29 data = requests.get(uri, params={'t': word}, timeout=30,
30 verify=bot.config.core.verify_ssl).json()
31 if data['Response'] == 'False':
32 if 'Error' in data:
33 message = '[MOVIE] %s' % data['Error']
34 else:
35 LOGGER.warning(
36 'Got an error from the OMDb api, search phrase was %s; data was %s',
37 word, str(data))
38 message = '[MOVIE] Got an error from OMDbapi'
39 else:
40 message = '[MOVIE] Title: ' + data['Title'] + \
41 ' | Year: ' + data['Year'] + \
42 ' | Rating: ' + data['imdbRating'] + \
43 ' | Genre: ' + data['Genre'] + \
44 ' | IMDB Link: http://imdb.com/title/' + data['imdbID']
45 bot.say(message)
46
47
48 if __name__ == "__main__":
49 from sopel.test_tools import run_example_tests
50 run_example_tests(__file__)
51
[end of sopel/modules/movie.py]
[start of conftest.py]
1 # This file lists files which should be ignored by pytest
2 collect_ignore = ["setup.py", "sopel.py", "sopel/modules/ipython.py", "sopel/modules/movie.py"]
3
[end of conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -1,2 +1,2 @@
# This file lists files which should be ignored by pytest
-collect_ignore = ["setup.py", "sopel.py", "sopel/modules/ipython.py", "sopel/modules/movie.py"]
+collect_ignore = ["setup.py", "sopel.py", "sopel/modules/ipython.py"]
diff --git a/sopel/modules/movie.py b/sopel/modules/movie.py
deleted file mode 100644
--- a/sopel/modules/movie.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# coding=utf-8
-"""
-imdb.py - Sopel Movie Information Module
-Copyright © 2012-2013, Elad Alfassa, <[email protected]>
-Licensed under the Eiffel Forum License 2.
-
-This module relies on omdbapi.com
-"""
-from __future__ import unicode_literals, absolute_import, print_function, division
-
-import requests
-import sopel.module
-from sopel.logger import get_logger
-
-LOGGER = get_logger(__name__)
-
-
[email protected]('movie', 'imdb')
[email protected]('.movie ThisTitleDoesNotExist', '[MOVIE] Movie not found!')
[email protected]('.movie Citizen Kane', '[MOVIE] Title: Citizen Kane | Year: 1941 | Rating: 8.4 | Genre: Drama, Mystery | IMDB Link: http://imdb.com/title/tt0033467')
-def movie(bot, trigger):
- """
- Returns some information about a movie, like Title, Year, Rating, Genre and IMDB Link.
- """
- if not trigger.group(2):
- return
- word = trigger.group(2).rstrip()
- uri = "http://www.omdbapi.com/"
- data = requests.get(uri, params={'t': word}, timeout=30,
- verify=bot.config.core.verify_ssl).json()
- if data['Response'] == 'False':
- if 'Error' in data:
- message = '[MOVIE] %s' % data['Error']
- else:
- LOGGER.warning(
- 'Got an error from the OMDb api, search phrase was %s; data was %s',
- word, str(data))
- message = '[MOVIE] Got an error from OMDbapi'
- else:
- message = '[MOVIE] Title: ' + data['Title'] + \
- ' | Year: ' + data['Year'] + \
- ' | Rating: ' + data['imdbRating'] + \
- ' | Genre: ' + data['Genre'] + \
- ' | IMDB Link: http://imdb.com/title/' + data['imdbID']
- bot.say(message)
-
-
-if __name__ == "__main__":
- from sopel.test_tools import run_example_tests
- run_example_tests(__file__)
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -1,2 +1,2 @@\n # This file lists files which should be ignored by pytest\n-collect_ignore = [\"setup.py\", \"sopel.py\", \"sopel/modules/ipython.py\", \"sopel/modules/movie.py\"]\n+collect_ignore = [\"setup.py\", \"sopel.py\", \"sopel/modules/ipython.py\"]\ndiff --git a/sopel/modules/movie.py b/sopel/modules/movie.py\ndeleted file mode 100644\n--- a/sopel/modules/movie.py\n+++ /dev/null\n@@ -1,50 +0,0 @@\n-# coding=utf-8\n-\"\"\"\n-imdb.py - Sopel Movie Information Module\n-Copyright \u00a9 2012-2013, Elad Alfassa, <[email protected]>\n-Licensed under the Eiffel Forum License 2.\n-\n-This module relies on omdbapi.com\n-\"\"\"\n-from __future__ import unicode_literals, absolute_import, print_function, division\n-\n-import requests\n-import sopel.module\n-from sopel.logger import get_logger\n-\n-LOGGER = get_logger(__name__)\n-\n-\[email protected]('movie', 'imdb')\[email protected]('.movie ThisTitleDoesNotExist', '[MOVIE] Movie not found!')\[email protected]('.movie Citizen Kane', '[MOVIE] Title: Citizen Kane | Year: 1941 | Rating: 8.4 | Genre: Drama, Mystery | IMDB Link: http://imdb.com/title/tt0033467')\n-def movie(bot, trigger):\n- \"\"\"\n- Returns some information about a movie, like Title, Year, Rating, Genre and IMDB Link.\n- \"\"\"\n- if not trigger.group(2):\n- return\n- word = trigger.group(2).rstrip()\n- uri = \"http://www.omdbapi.com/\"\n- data = requests.get(uri, params={'t': word}, timeout=30,\n- verify=bot.config.core.verify_ssl).json()\n- if data['Response'] == 'False':\n- if 'Error' in data:\n- message = '[MOVIE] %s' % data['Error']\n- else:\n- LOGGER.warning(\n- 'Got an error from the OMDb api, search phrase was %s; data was %s',\n- word, str(data))\n- message = '[MOVIE] Got an error from OMDbapi'\n- else:\n- message = '[MOVIE] Title: ' + data['Title'] + \\\n- ' | Year: ' + data['Year'] + \\\n- ' | Rating: ' + data['imdbRating'] + \\\n- ' | Genre: ' + data['Genre'] + \\\n- ' | IMDB Link: http://imdb.com/title/' + data['imdbID']\n- bot.say(message)\n-\n-\n-if __name__ == \"__main__\":\n- from sopel.test_tools import run_example_tests\n- run_example_tests(__file__)\n", "issue": "imdb module not working anymore\nI just noticed that anytime you make a call to imdb now the bot responds: \r\n\r\n> [MOVIE] No API key provided.\r\n\r\nI know it used to work, not sure how recently. 
Maybe it can be switched to a different database that doesn't require an API key?\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nimdb.py - Sopel Movie Information Module\nCopyright \u00a9 2012-2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nThis module relies on omdbapi.com\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport requests\nimport sopel.module\nfrom sopel.logger import get_logger\n\nLOGGER = get_logger(__name__)\n\n\[email protected]('movie', 'imdb')\[email protected]('.movie ThisTitleDoesNotExist', '[MOVIE] Movie not found!')\[email protected]('.movie Citizen Kane', '[MOVIE] Title: Citizen Kane | Year: 1941 | Rating: 8.4 | Genre: Drama, Mystery | IMDB Link: http://imdb.com/title/tt0033467')\ndef movie(bot, trigger):\n \"\"\"\n Returns some information about a movie, like Title, Year, Rating, Genre and IMDB Link.\n \"\"\"\n if not trigger.group(2):\n return\n word = trigger.group(2).rstrip()\n uri = \"http://www.omdbapi.com/\"\n data = requests.get(uri, params={'t': word}, timeout=30,\n verify=bot.config.core.verify_ssl).json()\n if data['Response'] == 'False':\n if 'Error' in data:\n message = '[MOVIE] %s' % data['Error']\n else:\n LOGGER.warning(\n 'Got an error from the OMDb api, search phrase was %s; data was %s',\n word, str(data))\n message = '[MOVIE] Got an error from OMDbapi'\n else:\n message = '[MOVIE] Title: ' + data['Title'] + \\\n ' | Year: ' + data['Year'] + \\\n ' | Rating: ' + data['imdbRating'] + \\\n ' | Genre: ' + data['Genre'] + \\\n ' | IMDB Link: http://imdb.com/title/' + data['imdbID']\n bot.say(message)\n\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/movie.py"}, {"content": "# This file lists files which should be ignored by pytest\ncollect_ignore = [\"setup.py\", \"sopel.py\", \"sopel/modules/ipython.py\", \"sopel/modules/movie.py\"]\n", "path": "conftest.py"}]} | 1,251 | 696 |
gh_patches_debug_37251 | rasdani/github-patches | git_diff | databricks__koalas-189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document all the methods in Metadata
There are a bunch of methods like index_info, index_fields. It's pretty difficult to figure out what they do. We should just add some basic docstring comments.
@ueshin you are probably the best person to take this since you created the file.
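To make the ask concrete, here is a rough sketch of the kind of docstrings meant, on a trimmed-down stand-in for the class (illustrative only, not the actual repository code):

```python
# Trimmed stand-in for Metadata, showing the requested docstring style.
class Metadata(object):
    """Manages column names and index information for a Koalas DataFrame."""

    def __init__(self, column_fields, index_info=None):
        self._column_fields = column_fields
        self._index_info = index_info or []

    @property
    def index_fields(self):
        """Return the Spark field names that back the index."""
        return [field for field, _ in self._index_info]

    @property
    def index_names(self):
        """Return the index names as they should appear in the DataFrame."""
        return [name for _, name in self._index_info]


meta = Metadata(['a', 'b'], [('__index_level_0__', None)])
print(meta.index_fields)  # ['__index_level_0__']
```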
</issue>
<code>
[start of databricks/koalas/metadata.py]
1 #
2 # Copyright (C) 2019 Databricks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 """
18 A metadata to manage indexes.
19 """
20 import pandas as pd
21
22 from databricks.koalas.dask.compatibility import string_types
23
24
25 class Metadata(object):
26 """
27 Manages column names and index information
28 """
29
30 def __init__(self, column_fields, index_info=None):
31 """ Create a new metadata to manage column fields and index fields and names.
32
33 :param column_fields: list of string
34 Field names to appear as columns.
35 :param index_info: list of string pair
36 Each pair holds the index field name which exists in Spark fields,
37 and the index name.
38 """
39 assert all(isinstance(col, string_types) for col in column_fields)
40 assert index_info is None \
41 or all(isinstance(index_field, string_types)
42 and (index_name is None or isinstance(index_name, string_types))
43 for index_field, index_name in index_info)
44 self._column_fields = column_fields
45 self._index_info = index_info or []
46
47 @property
48 def column_fields(self):
49 return self._column_fields
50
51 @property
52 def index_info(self):
53 return self._index_info
54
55 @property
56 def index_fields(self):
57 return [index_field for index_field, _ in self._index_info]
58
59 @property
60 def index_names(self):
61 return [name for _, name in self._index_info]
62
63 @property
64 def all_fields(self):
65 index_fields = self.index_fields
66 return index_fields + [field for field in self._column_fields
67 if field not in index_fields]
68
69 def copy(self, column_fields=None, index_info=None):
70 if column_fields is None:
71 column_fields = self._column_fields
72 if index_info is None:
73 index_info = self._index_info
74 return Metadata(column_fields=column_fields.copy(), index_info=index_info.copy())
75
76 @staticmethod
77 def from_pandas(pdf):
78 column_fields = [str(col) for col in pdf.columns]
79 index = pdf.index
80 if isinstance(index, pd.MultiIndex):
81 if index.names is None:
82 index_info = [('__index_level_{}__'.format(i), None)
83 for i in range(len(index.levels))]
84 else:
85 index_info = [('__index_level_{}__'.format(i) if name is None else name, name)
86 for i, name in enumerate(index.names)]
87 else:
88 index_info = [(index.name
89 if index.name is not None else '__index_level_0__', index.name)]
90
91 return Metadata(column_fields=column_fields, index_info=index_info)
92
[end of databricks/koalas/metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/databricks/koalas/metadata.py b/databricks/koalas/metadata.py
--- a/databricks/koalas/metadata.py
+++ b/databricks/koalas/metadata.py
@@ -24,7 +24,11 @@
class Metadata(object):
"""
- Manages column names and index information
+ Manages column names and index information.
+
+ :ivar _column_fields: list of the Spark field names to be seen as columns in Koalas DataFrame.
+ :ivar _index_info: list of pair holding the Spark field names for indexes,
+ and the index name to be seen in Koalas DataFrame.
"""
def __init__(self, column_fields, index_info=None):
@@ -46,27 +50,38 @@
@property
def column_fields(self):
+ """ Returns the managed column field names. """
return self._column_fields
@property
def index_info(self):
+ """ Return the managed index information. """
return self._index_info
@property
def index_fields(self):
+ """ Returns the managed index field names. """
return [index_field for index_field, _ in self._index_info]
@property
def index_names(self):
+ """ Return the managed index names. """
return [name for _, name in self._index_info]
@property
def all_fields(self):
+ """ Return all the field names including index field names. """
index_fields = self.index_fields
return index_fields + [field for field in self._column_fields
if field not in index_fields]
def copy(self, column_fields=None, index_info=None):
+ """ Copy the metadata.
+
+ :param column_fields: the new column field names. If None, then the original ones are used.
+ :param index_info: the new index information. If None, then the original one is used.
+ :return: the copied metadata.
+ """
if column_fields is None:
column_fields = self._column_fields
if index_info is None:
@@ -75,6 +90,11 @@
@staticmethod
def from_pandas(pdf):
+ """ Create a metadata from pandas DataFrame.
+
+ :param pdf: :class:`pd.DataFrame`
+ :return: the created metadata
+ """
column_fields = [str(col) for col in pdf.columns]
index = pdf.index
if isinstance(index, pd.MultiIndex):
| {"golden_diff": "diff --git a/databricks/koalas/metadata.py b/databricks/koalas/metadata.py\n--- a/databricks/koalas/metadata.py\n+++ b/databricks/koalas/metadata.py\n@@ -24,7 +24,11 @@\n \n class Metadata(object):\n \"\"\"\n- Manages column names and index information\n+ Manages column names and index information.\n+\n+ :ivar _column_fields: list of the Spark field names to be seen as columns in Koalas DataFrame.\n+ :ivar _index_info: list of pair holding the Spark field names for indexes,\n+ and the index name to be seen in Koalas DataFrame.\n \"\"\"\n \n def __init__(self, column_fields, index_info=None):\n@@ -46,27 +50,38 @@\n \n @property\n def column_fields(self):\n+ \"\"\" Returns the managed column field names. \"\"\"\n return self._column_fields\n \n @property\n def index_info(self):\n+ \"\"\" Return the managed index information. \"\"\"\n return self._index_info\n \n @property\n def index_fields(self):\n+ \"\"\" Returns the managed index field names. \"\"\"\n return [index_field for index_field, _ in self._index_info]\n \n @property\n def index_names(self):\n+ \"\"\" Return the managed index names. \"\"\"\n return [name for _, name in self._index_info]\n \n @property\n def all_fields(self):\n+ \"\"\" Return all the field names including index field names. \"\"\"\n index_fields = self.index_fields\n return index_fields + [field for field in self._column_fields\n if field not in index_fields]\n \n def copy(self, column_fields=None, index_info=None):\n+ \"\"\" Copy the metadata.\n+\n+ :param column_fields: the new column field names. If None, then the original ones are used.\n+ :param index_info: the new index information. If None, then the original one is used.\n+ :return: the copied metadata.\n+ \"\"\"\n if column_fields is None:\n column_fields = self._column_fields\n if index_info is None:\n@@ -75,6 +90,11 @@\n \n @staticmethod\n def from_pandas(pdf):\n+ \"\"\" Create a metadata from pandas DataFrame.\n+\n+ :param pdf: :class:`pd.DataFrame`\n+ :return: the created metadata\n+ \"\"\"\n column_fields = [str(col) for col in pdf.columns]\n index = pdf.index\n if isinstance(index, pd.MultiIndex):\n", "issue": "Document all the methods in Metadata\nThere are a bunch of methods like index_info, index_fields. It's pretty difficult to figure out what they do. 
We should just add some basic docstring comments.\r\n\r\n@ueshin you are probably the best person to take this since you created the file.\r\n\n", "before_files": [{"content": "#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nA metadata to manage indexes.\n\"\"\"\nimport pandas as pd\n\nfrom databricks.koalas.dask.compatibility import string_types\n\n\nclass Metadata(object):\n \"\"\"\n Manages column names and index information\n \"\"\"\n\n def __init__(self, column_fields, index_info=None):\n \"\"\" Create a new metadata to manage column fields and index fields and names.\n\n :param column_fields: list of string\n Field names to appear as columns.\n :param index_info: list of string pair\n Each pair holds the index field name which exists in Spark fields,\n and the index name.\n \"\"\"\n assert all(isinstance(col, string_types) for col in column_fields)\n assert index_info is None \\\n or all(isinstance(index_field, string_types)\n and (index_name is None or isinstance(index_name, string_types))\n for index_field, index_name in index_info)\n self._column_fields = column_fields\n self._index_info = index_info or []\n\n @property\n def column_fields(self):\n return self._column_fields\n\n @property\n def index_info(self):\n return self._index_info\n\n @property\n def index_fields(self):\n return [index_field for index_field, _ in self._index_info]\n\n @property\n def index_names(self):\n return [name for _, name in self._index_info]\n\n @property\n def all_fields(self):\n index_fields = self.index_fields\n return index_fields + [field for field in self._column_fields\n if field not in index_fields]\n\n def copy(self, column_fields=None, index_info=None):\n if column_fields is None:\n column_fields = self._column_fields\n if index_info is None:\n index_info = self._index_info\n return Metadata(column_fields=column_fields.copy(), index_info=index_info.copy())\n\n @staticmethod\n def from_pandas(pdf):\n column_fields = [str(col) for col in pdf.columns]\n index = pdf.index\n if isinstance(index, pd.MultiIndex):\n if index.names is None:\n index_info = [('__index_level_{}__'.format(i), None)\n for i in range(len(index.levels))]\n else:\n index_info = [('__index_level_{}__'.format(i) if name is None else name, name)\n for i, name in enumerate(index.names)]\n else:\n index_info = [(index.name\n if index.name is not None else '__index_level_0__', index.name)]\n\n return Metadata(column_fields=column_fields, index_info=index_info)\n", "path": "databricks/koalas/metadata.py"}]} | 1,472 | 558 |
gh_patches_debug_63301 | rasdani/github-patches | git_diff | scikit-hep__pyhf-372 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update tensorflow-probability to the next release that includes continuous approximations
# Description
This is a follow-up to #302. As the bug is fixed in upstream tensorflow-probability, we just need to wait for a new release to be shipped.
This bug was caused by an API change that removed the continuous approximation to the Poisson pmf, which broke our tests.
### Describe the solution you'd like
Unfix tensorflow-probability to `0.3.0` and bump to the next available release post-0.4.0.
Update Tensorflow to TensorFlow 1.12.0 release
# Description
[TensorFlow 1.12.0 has been released](https://github.com/tensorflow/tensorflow/releases/tag/v1.12.0) and it has breaking changes. Most notably
> Remove `tf.contrib.linalg`. `tf.linalg` should be used instead.
Once there is a new release of TensorFlow Probability (`v0.5.0`, cf. Issues #360 and #330) that upgrades to `v1.12.0`, we can follow them in upgrading.
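Concretely, the `tensorflow` extra in `setup.py` would presumably end up looking something like the sketch below once both packages are unpinned (the exact version floors are an assumption to be confirmed against the new releases):

```python
# Hypothetical sketch of the relaxed pins; exact floors to be confirmed.
extras_require = {
    'tensorflow': [
        'tensorflow>=1.12.0',
        'tensorflow-probability>=0.5.0',
        'numpy<=1.14.5,>=1.14.0',  # lower bound kept so doctests still pass
        'setuptools<=39.1.0',
    ],
}
```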
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4 from os import path
5 import sys
6
7 this_directory = path.abspath(path.dirname(__file__))
8 if sys.version_info.major < 3:
9 from io import open
10 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
11 long_description = readme_md.read()
12
13 extras_require = {
14 'tensorflow': [
15 'tensorflow<1.12.0,>=1.10.0',
16 'tensorflow-probability==0.3.0',
17 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
18 'setuptools<=39.1.0',
19 ],
20 'torch': ['torch>=0.4.0'],
21 'mxnet': [
22 'mxnet>=1.0.0',
23 'requests<2.19.0,>=2.18.4',
24 'numpy<1.15.0,>=1.8.2',
25 'requests<2.19.0,>=2.18.4',
26 ],
27 # 'dask': [
28 # 'dask[array]'
29 # ],
30 'xmlimport': ['uproot'],
31 'minuit': ['iminuit'],
32 'develop': [
33 'pyflakes',
34 'pytest<4.0.0,>=3.5.1',
35 'pytest-cov>=2.5.1',
36 'pytest-mock',
37 'pytest-benchmark[histogram]',
38 'pytest-console-scripts',
39 'python-coveralls',
40 'coverage>=4.0', # coveralls
41 'matplotlib',
42 'jupyter',
43 'nbdime',
44 'uproot>=3.0.0',
45 'papermill>=0.16.0',
46 'graphviz',
47 'bumpversion',
48 'sphinx',
49 'sphinxcontrib-bibtex',
50 'sphinxcontrib-napoleon',
51 'sphinx_rtd_theme',
52 'nbsphinx',
53 'sphinx-issues',
54 'm2r',
55 'jsonpatch',
56 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
57 'pre-commit',
58 'black;python_version>="3.6"', # Black is Python3 only
59 'twine',
60 ],
61 }
62 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
63
64 setup(
65 name='pyhf',
66 version='0.0.15',
67 description='(partial) pure python histfactory implementation',
68 long_description=long_description,
69 long_description_content_type='text/markdown',
70 url='https://github.com/diana-hep/pyhf',
71 author='Lukas Heinrich',
72 author_email='[email protected]',
73 license='Apache',
74 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
75 classifiers=[
76 "Programming Language :: Python :: 2",
77 "Programming Language :: Python :: 2.7",
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.6",
80 ],
81 packages=find_packages(),
82 include_package_data=True,
83 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
84 install_requires=[
85 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
86 'click>=6.0', # for console scripts,
87 'tqdm', # for readxml
88 'six', # for modifiers
89 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
90 'jsonpatch',
91 ],
92 extras_require=extras_require,
93 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
94 dependency_links=[],
95 )
96
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,8 +12,8 @@
extras_require = {
'tensorflow': [
- 'tensorflow<1.12.0,>=1.10.0',
- 'tensorflow-probability==0.3.0',
+ 'tensorflow>=1.12.0',
+ 'tensorflow-probability>=0.5.0',
'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
'setuptools<=39.1.0',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,8 +12,8 @@\n \n extras_require = {\n 'tensorflow': [\n- 'tensorflow<1.12.0,>=1.10.0',\n- 'tensorflow-probability==0.3.0',\n+ 'tensorflow>=1.12.0',\n+ 'tensorflow-probability>=0.5.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n", "issue": "Update tensorflow-probability to the next release that includes continuous approximations\n# Description\r\n\r\nThis is a follow up to #302. As the bug is fixed in upstream tensorflow-probability, we just need to wait for a new release to be shipped.\r\n\r\nThis bug was because of a change in the API to get rid of the continuous approximation to the Poisson pmf which broke our tests.\r\n\r\n### Describe the solution you'd like\r\n\r\nUnfix tensorflow-probability to `0.3.0` and bump to the next available release post-0.4.0.\nUpdate Tensorflow to TensorFlow 1.12.0 release\n# Description\r\n\r\n[TensorFlow 1.12.0 has been released](https://github.com/tensorflow/tensorflow/releases/tag/v1.12.0) and it has breaking changes. Most notably\r\n\r\n> Remove `tf.contrib.linalg`. `tf.linalg` should be used instead. \r\n\r\nOnce there is a new release of TensorFlow probability (`v0.5.0` — c.f. Issue #360 and #330) that upgrades to `v1.12.0` then we can follow them in upgrading.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow<1.12.0,>=1.10.0',\n 'tensorflow-probability==0.3.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch>=0.4.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest<4.0.0,>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.0.0',\n 'papermill>=0.16.0',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.15',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n 
],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}]} | 1,861 | 162 |
gh_patches_debug_14775 | rasdani/github-patches | git_diff | hylang__hy-1122 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hy sets are really broken.
``` Hy
(env-hy) C:\Users\ME\Code>hy
hy 0.11.0 using CPython(v3.4.3:9b73f1c3e601) 3.4.3 on Windows
=> #{:a 'a}
Traceback (most recent call last):
File "C:\Python34\Scripts\env-hy\Scripts\hy-script.py", line 9, in <module>
load_entry_point('hy==0.11.0', 'console_scripts', 'hy')()
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\cmdline.py", line 341, in hy_main
sys.exit(cmdline_handler("hy", sys.argv))
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\cmdline.py", line 336, in cmdline_handler
return run_repl(spy=options.spy)
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\cmdline.py", line 234, in run_repl
os=platform.system()
File "C:\Python34\Lib\code.py", line 234, in interact
more = self.push(line)
File "C:\Python34\Lib\code.py", line 256, in push
more = self.runsource(source, self.filename)
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\cmdline.py", line 93, in runsource
tokens = tokenize(source)
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\lex\__init__.py", line 33, in tokenize
return parser.parse(lexer.lex(buf))
File "C:\Python34\Scripts\env-hy\lib\site-packages\rply\parser.py", line 23, in parse
t, symstack, statestack, state
File "C:\Python34\Scripts\env-hy\lib\site-packages\rply\parser.py", line 80, in _reduce_production
value = p.func(targ)
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\lex\parser.py", line 69, in wrapped
ret = fun(p)
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\lex\parser.py", line 214, in t_set
return HySet(p[1])
File "C:\Python34\Scripts\env-hy\lib\site-packages\hy\models\set.py", line 31, in __init__
items = sorted(items)
TypeError: unorderable types: HyExpression() < HyKeyword()
(env-hy) C:\Users\ME\Code>
```
That is NOT supposed to happen.
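The failure is easy to reproduce outside the Hy parser: Python 3 refuses to order unrelated types, and `HySet.__init__` calls `sorted()` on the parsed items. A plain-Python sketch of the same failure mode, plus an order-preserving de-duplication that needs no ordering at all (illustrative only; the stand-in classes below are not the real Hy models):

```python
# Stand-ins for HyKeyword / HyExpression, just to mirror the type mix.
class Keyword(str):
    pass


class Expression(list):
    pass


items = [Keyword(":a"), Expression(["quote", "a"])]

try:
    sorted(items)  # mirrors HySet.__init__ sorting mixed model types
except TypeError as err:
    print("unorderable:", err)

# De-duplicate while keeping the original order; only equality is needed.
unique = []
for item in items:
    if item not in unique:
        unique.append(item)
print(unique)
```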
</issue>
<code>
[start of hy/models/set.py]
1 # Copyright (c) 2013 Paul Tagliamonte <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 from hy.models.list import HyList
22 from functools import reduce
23
24
25 class HySet(HyList):
26 """
27 Hy set (actually a list that pretends to be a set)
28 """
29
30 def __init__(self, items):
31 items = sorted(items)
32 items = list(reduce(lambda r, v: v in r and r or r+[v], items, []))
33 super(HySet, self).__init__(items)
34
35 def __repr__(self):
36 return "#{%s}" % (" ".join([repr(x) for x in self]))
37
[end of hy/models/set.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hy/models/set.py b/hy/models/set.py
--- a/hy/models/set.py
+++ b/hy/models/set.py
@@ -18,19 +18,16 @@
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
+from hy.models import _wrappers, wrap_value
from hy.models.list import HyList
-from functools import reduce
class HySet(HyList):
"""
- Hy set (actually a list that pretends to be a set)
+ Hy set (just a representation of a set)
"""
- def __init__(self, items):
- items = sorted(items)
- items = list(reduce(lambda r, v: v in r and r or r+[v], items, []))
- super(HySet, self).__init__(items)
-
def __repr__(self):
return "#{%s}" % (" ".join([repr(x) for x in self]))
+
+_wrappers[set] = lambda s: HySet(wrap_value(x) for x in s)
| {"golden_diff": "diff --git a/hy/models/set.py b/hy/models/set.py\n--- a/hy/models/set.py\n+++ b/hy/models/set.py\n@@ -18,19 +18,16 @@\n # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n # DEALINGS IN THE SOFTWARE.\n \n+from hy.models import _wrappers, wrap_value\n from hy.models.list import HyList\n-from functools import reduce\n \n \n class HySet(HyList):\n \"\"\"\n- Hy set (actually a list that pretends to be a set)\n+ Hy set (just a representation of a set)\n \"\"\"\n \n- def __init__(self, items):\n- items = sorted(items)\n- items = list(reduce(lambda r, v: v in r and r or r+[v], items, []))\n- super(HySet, self).__init__(items)\n-\n def __repr__(self):\n return \"#{%s}\" % (\" \".join([repr(x) for x in self]))\n+\n+_wrappers[set] = lambda s: HySet(wrap_value(x) for x in s)\n", "issue": "Hy sets are really broken.\n``` Hy\n(env-hy) C:\\Users\\ME\\Code>hy\nhy 0.11.0 using CPython(v3.4.3:9b73f1c3e601) 3.4.3 on Windows\n=> #{:a 'a}\nTraceback (most recent call last):\n File \"C:\\Python34\\Scripts\\env-hy\\Scripts\\hy-script.py\", line 9, in <module>\n load_entry_point('hy==0.11.0', 'console_scripts', 'hy')()\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\cmdline.py\", line 341, in hy_main\n sys.exit(cmdline_handler(\"hy\", sys.argv))\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\cmdline.py\", line 336, in cmdline_handler\n return run_repl(spy=options.spy)\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\cmdline.py\", line 234, in run_repl\n os=platform.system()\n File \"C:\\Python34\\Lib\\code.py\", line 234, in interact\n more = self.push(line)\n File \"C:\\Python34\\Lib\\code.py\", line 256, in push\n more = self.runsource(source, self.filename)\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\cmdline.py\", line 93, in runsource\n tokens = tokenize(source)\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\lex\\__init__.py\", line 33, in tokenize\n return parser.parse(lexer.lex(buf))\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\rply\\parser.py\", line 23, in parse\n t, symstack, statestack, state\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\rply\\parser.py\", line 80, in _reduce_production\n value = p.func(targ)\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\lex\\parser.py\", line 69, in wrapped\n ret = fun(p)\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\lex\\parser.py\", line 214, in t_set\n return HySet(p[1])\n File \"C:\\Python34\\Scripts\\env-hy\\lib\\site-packages\\hy\\models\\set.py\", line 31, in __init__\n items = sorted(items)\nTypeError: unorderable types: HyExpression() < HyKeyword()\n\n(env-hy) C:\\Users\\ME\\Code>\n```\n\nThat is NOT supposed to happen.\n\n", "before_files": [{"content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom hy.models.list import HyList\nfrom functools import reduce\n\n\nclass HySet(HyList):\n \"\"\"\n Hy set (actually a list that pretends to be a set)\n \"\"\"\n\n def __init__(self, items):\n items = sorted(items)\n items = list(reduce(lambda r, v: v in r and r or r+[v], items, []))\n super(HySet, self).__init__(items)\n\n def __repr__(self):\n return \"#{%s}\" % (\" \".join([repr(x) for x in self]))\n", "path": "hy/models/set.py"}]} | 1,628 | 243 |
gh_patches_debug_1429 | rasdani/github-patches | git_diff | google__turbinia-785 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
import TurbiniaException to partitions.py
```
Traceback (most recent call last):
File "PATH/v2/lib/python3.8/site-packages/turbinia/workers/__init__.py", line 916, in run_wrapper
self.result = self.run(evidence, self.result)
File "PATH/v2/lib/python3.8/site-packages/turbinia/workers/partitions.py", line 144, in run
path_specs = partitions.Enumerate(evidence)
File "/PATH/v2/lib/python3.8/site-packages/turbinia/processors/partitions.py", line 49, in Enumerate
raise TurbiniaException(
NameError: name 'TurbiniaException' is not defined
2021-03-05 18:45:56 [ERROR] PartitionEnumerationTask Task failed with exception: [name 'TurbiniaException' is not defined]
```
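The traceback boils down to `partitions.py` raising `TurbiniaException` without ever importing it. Presumably the fix is just to pull the exception in next to the existing imports, along these lines (a sketch, assuming the exception is exposed from the top-level `turbinia` package as it is elsewhere in the codebase):

```python
# Sketch of the missing import alongside the existing ones in partitions.py.
from turbinia import TurbiniaException
from turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator
```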
</issue>
<code>
[start of turbinia/processors/partitions.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2021 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # https://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Evidence processor to enumerate partitions."""
16
17 import logging
18
19 from dfvfs.helpers import volume_scanner
20 from dfvfs.lib import definitions as dfvfs_definitions
21 from dfvfs.lib import errors as dfvfs_errors
22
23 from turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator
24
25 log = logging.getLogger('turbinia')
26
27
28 def Enumerate(evidence):
29 """Uses dfVFS to enumerate partitions in a disk / image.
30
31 Args:
32 evidence: Evidence object to be scanned.
33
34 Raises:
35 TurbiniaException if source evidence can't be scanned.
36
37 Returns:
38 list[dfVFS.path_spec]: path specs for identified partitions
39 """
40 dfvfs_definitions.PREFERRED_GPT_BACK_END = (
41 dfvfs_definitions.TYPE_INDICATOR_GPT)
42 mediator = UnattendedVolumeScannerMediator()
43 mediator.credentials = evidence.credentials
44 path_specs = []
45 try:
46 scanner = volume_scanner.VolumeScanner(mediator=mediator)
47 path_specs = scanner.GetBasePathSpecs(evidence.local_path)
48 except dfvfs_errors.ScannerError as e:
49 raise TurbiniaException(
50 'Could not enumerate partitions [{0!s}]: {1!s}'.format(
51 evidence.local_path, e))
52
53 return path_specs
54
55
56 def GetPartitionEncryptionType(path_spec):
57 """Checks a partition for encryption.
58
59 Args:
60 path_spec (dfVFS.path_spec): Partition path_spec.
61
62 Returns:
63 String representing the type of encryption, or None.
64 """
65 encryption_type = None
66 if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:
67 encryption_type = 'BDE'
68 return encryption_type
69
70
71 def GetPathSpecByLocation(path_specs, location):
72 """Finds a path_spec from a list of path_specs for a given location.
73
74 Args:
75 path_specs (list[dfVFS.path_spec]): List of path_specs from volume scanner.
76 location (str): dfVFS location to search for.
77
78 Returns:
79 dfVFS.path_spec for the given location or None if not found.
80 """
81 for path_spec in path_specs:
82 child_path_spec = path_spec
83 fs_location = getattr(path_spec, 'location', None)
84 while path_spec.HasParent():
85 type_indicator = path_spec.type_indicator
86 if type_indicator in (dfvfs_definitions.TYPE_INDICATOR_TSK_PARTITION,
87 dfvfs_definitions.TYPE_INDICATOR_GPT):
88 if fs_location in ('\\', '/'):
89 fs_location = getattr(path_spec, 'location', None)
90 break
91 path_spec = path_spec.parent
92 if fs_location == location:
93 return child_path_spec
94 return None
95
[end of turbinia/processors/partitions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/processors/partitions.py b/turbinia/processors/partitions.py
--- a/turbinia/processors/partitions.py
+++ b/turbinia/processors/partitions.py
@@ -21,6 +21,7 @@
from dfvfs.lib import errors as dfvfs_errors
from turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator
+from turbinia import TurbiniaException
log = logging.getLogger('turbinia')
| {"golden_diff": "diff --git a/turbinia/processors/partitions.py b/turbinia/processors/partitions.py\n--- a/turbinia/processors/partitions.py\n+++ b/turbinia/processors/partitions.py\n@@ -21,6 +21,7 @@\n from dfvfs.lib import errors as dfvfs_errors\n \n from turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator\n+from turbinia import TurbiniaException\n \n log = logging.getLogger('turbinia')\n", "issue": "import TurbiniaException to partitions.py\n```\r\nTraceback (most recent call last):\r\n File \"PATH/v2/lib/python3.8/site-packages/turbinia/workers/__init__.py\", line 916, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"PATH/v2/lib/python3.8/site-packages/turbinia/workers/partitions.py\", line 144, in run\r\n path_specs = partitions.Enumerate(evidence)\r\n File \"/PATH/v2/lib/python3.8/site-packages/turbinia/processors/partitions.py\", line 49, in Enumerate\r\n raise TurbiniaException(\r\nNameError: name 'TurbiniaException' is not defined\r\n\r\n2021-03-05 18:45:56 [ERROR] PartitionEnumerationTask Task failed with exception: [name 'TurbiniaException' is not defined]\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Evidence processor to enumerate partitions.\"\"\"\n\nimport logging\n\nfrom dfvfs.helpers import volume_scanner\nfrom dfvfs.lib import definitions as dfvfs_definitions\nfrom dfvfs.lib import errors as dfvfs_errors\n\nfrom turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator\n\nlog = logging.getLogger('turbinia')\n\n\ndef Enumerate(evidence):\n \"\"\"Uses dfVFS to enumerate partitions in a disk / image.\n\n Args:\n evidence: Evidence object to be scanned.\n\n Raises:\n TurbiniaException if source evidence can't be scanned.\n\n Returns:\n list[dfVFS.path_spec]: path specs for identified partitions\n \"\"\"\n dfvfs_definitions.PREFERRED_GPT_BACK_END = (\n dfvfs_definitions.TYPE_INDICATOR_GPT)\n mediator = UnattendedVolumeScannerMediator()\n mediator.credentials = evidence.credentials\n path_specs = []\n try:\n scanner = volume_scanner.VolumeScanner(mediator=mediator)\n path_specs = scanner.GetBasePathSpecs(evidence.local_path)\n except dfvfs_errors.ScannerError as e:\n raise TurbiniaException(\n 'Could not enumerate partitions [{0!s}]: {1!s}'.format(\n evidence.local_path, e))\n\n return path_specs\n\n\ndef GetPartitionEncryptionType(path_spec):\n \"\"\"Checks a partition for encryption.\n\n Args:\n path_spec (dfVFS.path_spec): Partition path_spec.\n\n Returns:\n String representing the type of encryption, or None.\n \"\"\"\n encryption_type = None\n if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:\n encryption_type = 'BDE'\n return encryption_type\n\n\ndef GetPathSpecByLocation(path_specs, location):\n \"\"\"Finds a path_spec from a list of path_specs for a given location.\n\n Args:\n path_specs (list[dfVFS.path_spec]): List of path_specs from volume scanner.\n location (str): dfVFS location to 
search for.\n\n Returns:\n dfVFS.path_spec for the given location or None if not found.\n \"\"\"\n for path_spec in path_specs:\n child_path_spec = path_spec\n fs_location = getattr(path_spec, 'location', None)\n while path_spec.HasParent():\n type_indicator = path_spec.type_indicator\n if type_indicator in (dfvfs_definitions.TYPE_INDICATOR_TSK_PARTITION,\n dfvfs_definitions.TYPE_INDICATOR_GPT):\n if fs_location in ('\\\\', '/'):\n fs_location = getattr(path_spec, 'location', None)\n break\n path_spec = path_spec.parent\n if fs_location == location:\n return child_path_spec\n return None\n", "path": "turbinia/processors/partitions.py"}]} | 1,637 | 109 |
gh_patches_debug_25484 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-1259 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DJStripeSubscriptionPermission issue returning bool
This permission does not properly return the bool.
**Current behaviour**
```python
class DJStripeSubscriptionPermission(BasePermission):
"""
A permission to be used when wanting to permit users with active subscriptions.
"""
def has_permission(self, request, view):
"""
Check if the subscriber has an active subscription.
Returns false if:
* a subscriber isn't passed through the request
See ``utils.subscriber_has_active_subscription`` for more rules.
"""
try:
subscriber_has_active_subscription(subscriber_request_callback(request))
except AttributeError:
return False
```
Here it does not return True or False except when it falls into the exception handler.
**Expected Behaviour**
```python
class DJStripeSubscriptionPermission(BasePermission):
"""
A permission to be used when wanting to permit users with active subscriptions.
"""
def has_permission(self, request, view):
"""
Check if the subscriber has an active subscription.
Returns false if:
* a subscriber isn't passed through the request
See ``utils.subscriber_has_active_subscription`` for more rules.
"""
try:
return bool(subscriber_has_active_subscription(subscriber_request_callback(request)))
except AttributeError:
return False
```
It's just missing a return, and that solves the problem. We don't strictly need the bool() there; I added it just to follow the same patterns as DRF (also being added to the other project :-))
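Without the `return`, `has_permission()` always falls off the end and yields `None`, which DRF evaluates as falsy, so even subscribers with an active subscription end up denied. A self-contained, plain-Python illustration of why the implicit `None` matters (the stub function below stands in for the real dj-stripe helper):

```python
# Stub standing in for djstripe.utils.subscriber_has_active_subscription.
def subscriber_has_active_subscription(subscriber):
    return True  # pretend the subscriber is active


def has_permission_buggy(request):
    try:
        subscriber_has_active_subscription(request)  # result is discarded
    except AttributeError:
        return False
    # falls through and implicitly returns None


def has_permission_fixed(request):
    try:
        return subscriber_has_active_subscription(request)
    except AttributeError:
        return False


print(bool(has_permission_buggy("user")))  # False -> request wrongly denied
print(bool(has_permission_fixed("user")))  # True  -> request allowed
```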
</issue>
<code>
[start of djstripe/contrib/rest_framework/serializers.py]
1 """
2 .. module:: dj-stripe.contrib.rest_framework.serializers.
3
4 :synopsis: dj-stripe - Serializers to be used with the dj-stripe REST API.
5
6 .. moduleauthor:: Philippe Luickx (@philippeluickx)
7
8 """
9
10 from rest_framework import serializers
11 from rest_framework.serializers import ModelSerializer
12
13 from djstripe.models import Subscription
14
15
16 class SubscriptionSerializer(ModelSerializer):
17 """A serializer used for the Subscription model."""
18
19 class Meta:
20 """Model class options."""
21
22 model = Subscription
23 exclude = ["default_tax_rates"]
24
25
26 class CreateSubscriptionSerializer(serializers.Serializer):
27 """A serializer used to create a Subscription."""
28
29 stripe_token = serializers.CharField(max_length=200)
30 plan = serializers.CharField(max_length=50)
31 charge_immediately = serializers.BooleanField(required=False, allow_null=True, default=None)
32 tax_percent = serializers.DecimalField(
33 required=False, max_digits=5, decimal_places=2
34 )
35
[end of djstripe/contrib/rest_framework/serializers.py]
[start of djstripe/contrib/rest_framework/permissions.py]
1 """
2 .. module:: dj-stripe.contrib.rest_framework.permissions.
3
4 :synopsis: dj-stripe - Permissions to be used with the dj-stripe REST API.
5
6 .. moduleauthor:: @kavdev, @pydanny
7
8 """
9 from rest_framework.permissions import BasePermission
10
11 from ...settings import subscriber_request_callback
12 from ...utils import subscriber_has_active_subscription
13
14
15 class DJStripeSubscriptionPermission(BasePermission):
16 """
17 A permission to be used when wanting to permit users with active subscriptions.
18 """
19
20 def has_permission(self, request, view):
21 """
22 Check if the subscriber has an active subscription.
23
24 Returns false if:
25 * a subscriber isn't passed through the request
26
27 See ``utils.subscriber_has_active_subscription`` for more rules.
28
29 """
30 try:
31 subscriber_has_active_subscription(subscriber_request_callback(request))
32 except AttributeError:
33 return False
34
[end of djstripe/contrib/rest_framework/permissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/djstripe/contrib/rest_framework/permissions.py b/djstripe/contrib/rest_framework/permissions.py
--- a/djstripe/contrib/rest_framework/permissions.py
+++ b/djstripe/contrib/rest_framework/permissions.py
@@ -17,7 +17,7 @@
A permission to be used when wanting to permit users with active subscriptions.
"""
- def has_permission(self, request, view):
+ def has_permission(self, request, view) -> bool:
"""
Check if the subscriber has an active subscription.
@@ -28,6 +28,8 @@
"""
try:
- subscriber_has_active_subscription(subscriber_request_callback(request))
+ return subscriber_has_active_subscription(
+ subscriber_request_callback(request)
+ )
except AttributeError:
return False
diff --git a/djstripe/contrib/rest_framework/serializers.py b/djstripe/contrib/rest_framework/serializers.py
--- a/djstripe/contrib/rest_framework/serializers.py
+++ b/djstripe/contrib/rest_framework/serializers.py
@@ -28,7 +28,9 @@
stripe_token = serializers.CharField(max_length=200)
plan = serializers.CharField(max_length=50)
- charge_immediately = serializers.BooleanField(required=False, allow_null=True, default=None)
+ charge_immediately = serializers.BooleanField(
+ required=False, allow_null=True, default=None
+ )
tax_percent = serializers.DecimalField(
required=False, max_digits=5, decimal_places=2
)
| {"golden_diff": "diff --git a/djstripe/contrib/rest_framework/permissions.py b/djstripe/contrib/rest_framework/permissions.py\n--- a/djstripe/contrib/rest_framework/permissions.py\n+++ b/djstripe/contrib/rest_framework/permissions.py\n@@ -17,7 +17,7 @@\n A permission to be used when wanting to permit users with active subscriptions.\n \"\"\"\n \n- def has_permission(self, request, view):\n+ def has_permission(self, request, view) -> bool:\n \"\"\"\n Check if the subscriber has an active subscription.\n \n@@ -28,6 +28,8 @@\n \n \"\"\"\n try:\n- subscriber_has_active_subscription(subscriber_request_callback(request))\n+ return subscriber_has_active_subscription(\n+ subscriber_request_callback(request)\n+ )\n except AttributeError:\n return False\ndiff --git a/djstripe/contrib/rest_framework/serializers.py b/djstripe/contrib/rest_framework/serializers.py\n--- a/djstripe/contrib/rest_framework/serializers.py\n+++ b/djstripe/contrib/rest_framework/serializers.py\n@@ -28,7 +28,9 @@\n \n stripe_token = serializers.CharField(max_length=200)\n plan = serializers.CharField(max_length=50)\n- charge_immediately = serializers.BooleanField(required=False, allow_null=True, default=None)\n+ charge_immediately = serializers.BooleanField(\n+ required=False, allow_null=True, default=None\n+ )\n tax_percent = serializers.DecimalField(\n required=False, max_digits=5, decimal_places=2\n )\n", "issue": "DJStripeSubscriptionPermission issue returning bool\nThis permission is not returning properly the bool.\r\n\r\n**Current behaviour**\r\n\r\n```python\r\nclass DJStripeSubscriptionPermission(BasePermission):\r\n \"\"\"\r\n A permission to be used when wanting to permit users with active subscriptions.\r\n \"\"\"\r\n\r\n def has_permission(self, request, view):\r\n \"\"\"\r\n Check if the subscriber has an active subscription.\r\n\r\n Returns false if:\r\n * a subscriber isn't passed through the request\r\n\r\n See ``utils.subscriber_has_active_subscription`` for more rules.\r\n\r\n \"\"\"\r\n try:\r\n subscriber_has_active_subscription(subscriber_request_callback(request))\r\n except AttributeError:\r\n return False\r\n```\r\n\r\nHere is not returning True or False except if it falls in the exception.\r\n\r\n\r\n**Expected Behaviour**\r\n\r\n\r\n```python\r\nclass DJStripeSubscriptionPermission(BasePermission):\r\n \"\"\"\r\n A permission to be used when wanting to permit users with active subscriptions.\r\n \"\"\"\r\n\r\n def has_permission(self, request, view):\r\n \"\"\"\r\n Check if the subscriber has an active subscription.\r\n\r\n Returns false if:\r\n * a subscriber isn't passed through the request\r\n\r\n See ``utils.subscriber_has_active_subscription`` for more rules.\r\n\r\n \"\"\"\r\n try:\r\n return bool(subscriber_has_active_subscription(subscriber_request_callback(request)))\r\n except AttributeError:\r\n return False\r\n```\r\n\r\nJust missing a return and it solves the problem. We don't need a bool directly there, I just added just to follow the same patterns as the DRF (also being added to the other project :-))\n", "before_files": [{"content": "\"\"\"\n.. module:: dj-stripe.contrib.rest_framework.serializers.\n\n :synopsis: dj-stripe - Serializers to be used with the dj-stripe REST API.\n\n.. 
moduleauthor:: Philippe Luickx (@philippeluickx)\n\n\"\"\"\n\nfrom rest_framework import serializers\nfrom rest_framework.serializers import ModelSerializer\n\nfrom djstripe.models import Subscription\n\n\nclass SubscriptionSerializer(ModelSerializer):\n \"\"\"A serializer used for the Subscription model.\"\"\"\n\n class Meta:\n \"\"\"Model class options.\"\"\"\n\n model = Subscription\n exclude = [\"default_tax_rates\"]\n\n\nclass CreateSubscriptionSerializer(serializers.Serializer):\n \"\"\"A serializer used to create a Subscription.\"\"\"\n\n stripe_token = serializers.CharField(max_length=200)\n plan = serializers.CharField(max_length=50)\n charge_immediately = serializers.BooleanField(required=False, allow_null=True, default=None)\n tax_percent = serializers.DecimalField(\n required=False, max_digits=5, decimal_places=2\n )\n", "path": "djstripe/contrib/rest_framework/serializers.py"}, {"content": "\"\"\"\n.. module:: dj-stripe.contrib.rest_framework.permissions.\n\n :synopsis: dj-stripe - Permissions to be used with the dj-stripe REST API.\n\n.. moduleauthor:: @kavdev, @pydanny\n\n\"\"\"\nfrom rest_framework.permissions import BasePermission\n\nfrom ...settings import subscriber_request_callback\nfrom ...utils import subscriber_has_active_subscription\n\n\nclass DJStripeSubscriptionPermission(BasePermission):\n \"\"\"\n A permission to be used when wanting to permit users with active subscriptions.\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Check if the subscriber has an active subscription.\n\n Returns false if:\n * a subscriber isn't passed through the request\n\n See ``utils.subscriber_has_active_subscription`` for more rules.\n\n \"\"\"\n try:\n subscriber_has_active_subscription(subscriber_request_callback(request))\n except AttributeError:\n return False\n", "path": "djstripe/contrib/rest_framework/permissions.py"}]} | 1,386 | 335 |
gh_patches_debug_35132 | rasdani/github-patches | git_diff | CTFd__CTFd-1352 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Submission search
Search submissions akin to how users are searched
</issue>
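For context, a minimal self-contained sketch of the LIKE-based column filter such a search typically needs; the `Submission` model, the `field`/`q` parameters, and the in-memory SQLite setup are illustrative assumptions, not CTFd code.

```python
# Sketch only (SQLAlchemy 1.4+): filter a table by a user-chosen column with a LIKE pattern.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Submission(Base):
    __tablename__ = "submissions"
    id = Column(Integer, primary_key=True)
    provided = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Submission(provided="flag{abc}"), Submission(provided="guess")])

def search(field, q):
    # Guard against filtering on attributes that are not real mapped columns.
    if not Submission.__mapper__.has_property(field):
        return []
    pattern = "%{}%".format(q)
    return session.query(Submission).filter(
        getattr(Submission, field).like(pattern)
    ).all()

print([s.provided for s in search("provided", "flag")])  # ['flag{abc}']
```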
<code>
[start of CTFd/admin/submissions.py]
1 from flask import render_template, request
2
3 from CTFd.admin import admin
4 from CTFd.models import Challenges, Submissions
5 from CTFd.utils.decorators import admins_only
6 from CTFd.utils.modes import get_model
7
8
9 @admin.route("/admin/submissions", defaults={"submission_type": None})
10 @admin.route("/admin/submissions/<submission_type>")
11 @admins_only
12 def submissions_listing(submission_type):
13 filters = {}
14 if submission_type:
15 filters["type"] = submission_type
16
17 curr_page = abs(int(request.args.get("page", 1, type=int)))
18 results_per_page = 50
19 page_start = results_per_page * (curr_page - 1)
20 page_end = results_per_page * (curr_page - 1) + results_per_page
21 sub_count = Submissions.query.filter_by(**filters).count()
22 page_count = int(sub_count / results_per_page) + (sub_count % results_per_page > 0)
23
24 Model = get_model()
25
26 submissions = (
27 Submissions.query.add_columns(
28 Submissions.id,
29 Submissions.type,
30 Submissions.challenge_id,
31 Submissions.provided,
32 Submissions.account_id,
33 Submissions.date,
34 Challenges.name.label("challenge_name"),
35 Model.name.label("team_name"),
36 )
37 .filter_by(**filters)
38 .join(Challenges)
39 .join(Model)
40 .order_by(Submissions.date.desc())
41 .slice(page_start, page_end)
42 .all()
43 )
44
45 return render_template(
46 "admin/submissions.html",
47 submissions=submissions,
48 page_count=page_count,
49 curr_page=curr_page,
50 type=submission_type,
51 )
52
[end of CTFd/admin/submissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py
--- a/CTFd/admin/submissions.py
+++ b/CTFd/admin/submissions.py
@@ -1,4 +1,4 @@
-from flask import render_template, request
+from flask import render_template, request, url_for
from CTFd.admin import admin
from CTFd.models import Challenges, Submissions
@@ -10,16 +10,21 @@
@admin.route("/admin/submissions/<submission_type>")
@admins_only
def submissions_listing(submission_type):
- filters = {}
+ filters_by = {}
if submission_type:
- filters["type"] = submission_type
+ filters_by["type"] = submission_type
+ filters = []
- curr_page = abs(int(request.args.get("page", 1, type=int)))
- results_per_page = 50
- page_start = results_per_page * (curr_page - 1)
- page_end = results_per_page * (curr_page - 1) + results_per_page
- sub_count = Submissions.query.filter_by(**filters).count()
- page_count = int(sub_count / results_per_page) + (sub_count % results_per_page > 0)
+ q = request.args.get("q")
+ field = request.args.get("field")
+ page = abs(request.args.get("page", 1, type=int))
+
+ if q:
+ submissions = []
+ if Submissions.__mapper__.has_property(
+ field
+ ): # The field exists as an exposed column
+ filters.append(getattr(Submissions, field).like("%{}%".format(q)))
Model = get_model()
@@ -34,18 +39,27 @@
Challenges.name.label("challenge_name"),
Model.name.label("team_name"),
)
- .filter_by(**filters)
+ .filter_by(**filters_by)
+ .filter(*filters)
.join(Challenges)
.join(Model)
.order_by(Submissions.date.desc())
- .slice(page_start, page_end)
- .all()
+ .paginate(page=page, per_page=50)
)
+ args = dict(request.args)
+ args.pop("page", 1)
+
return render_template(
"admin/submissions.html",
submissions=submissions,
- page_count=page_count,
- curr_page=curr_page,
+ prev_page=url_for(
+ request.endpoint, type=submission_type, page=submissions.prev_num, **args
+ ),
+ next_page=url_for(
+ request.endpoint, type=submission_type, page=submissions.next_num, **args
+ ),
type=submission_type,
+ q=q,
+ field=field,
)
| {"golden_diff": "diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py\n--- a/CTFd/admin/submissions.py\n+++ b/CTFd/admin/submissions.py\n@@ -1,4 +1,4 @@\n-from flask import render_template, request\n+from flask import render_template, request, url_for\n \n from CTFd.admin import admin\n from CTFd.models import Challenges, Submissions\n@@ -10,16 +10,21 @@\n @admin.route(\"/admin/submissions/<submission_type>\")\n @admins_only\n def submissions_listing(submission_type):\n- filters = {}\n+ filters_by = {}\n if submission_type:\n- filters[\"type\"] = submission_type\n+ filters_by[\"type\"] = submission_type\n+ filters = []\n \n- curr_page = abs(int(request.args.get(\"page\", 1, type=int)))\n- results_per_page = 50\n- page_start = results_per_page * (curr_page - 1)\n- page_end = results_per_page * (curr_page - 1) + results_per_page\n- sub_count = Submissions.query.filter_by(**filters).count()\n- page_count = int(sub_count / results_per_page) + (sub_count % results_per_page > 0)\n+ q = request.args.get(\"q\")\n+ field = request.args.get(\"field\")\n+ page = abs(request.args.get(\"page\", 1, type=int))\n+\n+ if q:\n+ submissions = []\n+ if Submissions.__mapper__.has_property(\n+ field\n+ ): # The field exists as an exposed column\n+ filters.append(getattr(Submissions, field).like(\"%{}%\".format(q)))\n \n Model = get_model()\n \n@@ -34,18 +39,27 @@\n Challenges.name.label(\"challenge_name\"),\n Model.name.label(\"team_name\"),\n )\n- .filter_by(**filters)\n+ .filter_by(**filters_by)\n+ .filter(*filters)\n .join(Challenges)\n .join(Model)\n .order_by(Submissions.date.desc())\n- .slice(page_start, page_end)\n- .all()\n+ .paginate(page=page, per_page=50)\n )\n \n+ args = dict(request.args)\n+ args.pop(\"page\", 1)\n+\n return render_template(\n \"admin/submissions.html\",\n submissions=submissions,\n- page_count=page_count,\n- curr_page=curr_page,\n+ prev_page=url_for(\n+ request.endpoint, type=submission_type, page=submissions.prev_num, **args\n+ ),\n+ next_page=url_for(\n+ request.endpoint, type=submission_type, page=submissions.next_num, **args\n+ ),\n type=submission_type,\n+ q=q,\n+ field=field,\n )\n", "issue": "Submission search\nSearch submissions akin to how users are searched\n", "before_files": [{"content": "from flask import render_template, request\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Submissions\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.modes import get_model\n\n\[email protected](\"/admin/submissions\", defaults={\"submission_type\": None})\[email protected](\"/admin/submissions/<submission_type>\")\n@admins_only\ndef submissions_listing(submission_type):\n filters = {}\n if submission_type:\n filters[\"type\"] = submission_type\n\n curr_page = abs(int(request.args.get(\"page\", 1, type=int)))\n results_per_page = 50\n page_start = results_per_page * (curr_page - 1)\n page_end = results_per_page * (curr_page - 1) + results_per_page\n sub_count = Submissions.query.filter_by(**filters).count()\n page_count = int(sub_count / results_per_page) + (sub_count % results_per_page > 0)\n\n Model = get_model()\n\n submissions = (\n Submissions.query.add_columns(\n Submissions.id,\n Submissions.type,\n Submissions.challenge_id,\n Submissions.provided,\n Submissions.account_id,\n Submissions.date,\n Challenges.name.label(\"challenge_name\"),\n Model.name.label(\"team_name\"),\n )\n .filter_by(**filters)\n .join(Challenges)\n .join(Model)\n .order_by(Submissions.date.desc())\n .slice(page_start, page_end)\n .all()\n )\n\n return 
render_template(\n \"admin/submissions.html\",\n submissions=submissions,\n page_count=page_count,\n curr_page=curr_page,\n type=submission_type,\n )\n", "path": "CTFd/admin/submissions.py"}]} | 1,008 | 616 |
gh_patches_debug_51710 | rasdani/github-patches | git_diff | getsentry__sentry-python-2069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot import appengine
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.18.0
### Steps to Reproduce
Install the SDK within any project that is not pinning urllib3 < 2.0.0
### Expected Result
ability to import appengine
### Actual Result
Cannot import appengine as gaecontrib.
As per the urllib3 2.0.0 release: https://github.com/urllib3/urllib3/tree/2.0.0
Removed urllib3.contrib.appengine.AppEngineManager and support for Google App Engine Standard Environment (https://github.com/urllib3/urllib3/issues/2044).
</issue>
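Two common mitigations, sketched below; the exact version bound and the `HAS_APPENGINE` flag name are assumptions, not the SDK's actual code.

```python
# 1) Pin the transitive dependency until the SDK stops importing the removed module,
#    e.g. in setup.py / requirements:
install_requires = [
    "urllib3<2.0.0",  # urllib3 2.0 dropped urllib3.contrib.appengine
]

# 2) Or guard the import so environments with urllib3 >= 2.0 degrade gracefully:
try:
    from urllib3.contrib.appengine import AppEngineManager  # removed in urllib3 2.0
    HAS_APPENGINE = True
except ImportError:
    AppEngineManager = None
    HAS_APPENGINE = False
```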
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """
4 Sentry-Python - Sentry SDK for Python
5 =====================================
6
7 **Sentry-Python is an SDK for Sentry.** Check out `GitHub
8 <https://github.com/getsentry/sentry-python>`_ to find out more.
9 """
10
11 import os
12 from setuptools import setup, find_packages
13
14 here = os.path.abspath(os.path.dirname(__file__))
15
16
17 def get_file_text(file_name):
18 with open(os.path.join(here, file_name)) as in_file:
19 return in_file.read()
20
21
22 setup(
23 name="sentry-sdk",
24 version="1.21.1",
25 author="Sentry Team and Contributors",
26 author_email="[email protected]",
27 url="https://github.com/getsentry/sentry-python",
28 project_urls={
29 "Documentation": "https://docs.sentry.io/platforms/python/",
30 "Changelog": "https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md",
31 },
32 description="Python client for Sentry (https://sentry.io)",
33 long_description=get_file_text("README.md"),
34 long_description_content_type="text/markdown",
35 packages=find_packages(exclude=("tests", "tests.*")),
36 # PEP 561
37 package_data={"sentry_sdk": ["py.typed"]},
38 zip_safe=False,
39 license="MIT",
40 install_requires=[
41 'urllib3>=1.25.7; python_version<="3.4"',
42 'urllib3>=1.26.9; python_version=="3.5"',
43 'urllib3>=1.26.11; python_version >="3.6"',
44 "certifi",
45 ],
46 extras_require={
47 "flask": ["flask>=0.11", "blinker>=1.1"],
48 "quart": ["quart>=0.16.1", "blinker>=1.1"],
49 "bottle": ["bottle>=0.12.13"],
50 "falcon": ["falcon>=1.4"],
51 "django": ["django>=1.8"],
52 "sanic": ["sanic>=0.8"],
53 "celery": ["celery>=3"],
54 "huey": ["huey>=2"],
55 "beam": ["apache-beam>=2.12"],
56 "arq": ["arq>=0.23"],
57 "rq": ["rq>=0.6"],
58 "aiohttp": ["aiohttp>=3.5"],
59 "tornado": ["tornado>=5"],
60 "sqlalchemy": ["sqlalchemy>=1.2"],
61 "pyspark": ["pyspark>=2.4.4"],
62 "pure_eval": ["pure_eval", "executing", "asttokens"],
63 "chalice": ["chalice>=1.16.0"],
64 "httpx": ["httpx>=0.16.0"],
65 "starlette": ["starlette>=0.19.1"],
66 "starlite": ["starlite>=1.48"],
67 "fastapi": ["fastapi>=0.79.0"],
68 "pymongo": ["pymongo>=3.1"],
69 "opentelemetry": ["opentelemetry-distro>=0.35b0"],
70 "grpcio": ["grpcio>=1.21.1"]
71 },
72 classifiers=[
73 "Development Status :: 5 - Production/Stable",
74 "Environment :: Web Environment",
75 "Intended Audience :: Developers",
76 "License :: OSI Approved :: BSD License",
77 "Operating System :: OS Independent",
78 "Programming Language :: Python",
79 "Programming Language :: Python :: 2",
80 "Programming Language :: Python :: 2.7",
81 "Programming Language :: Python :: 3",
82 "Programming Language :: Python :: 3.4",
83 "Programming Language :: Python :: 3.5",
84 "Programming Language :: Python :: 3.6",
85 "Programming Language :: Python :: 3.7",
86 "Programming Language :: Python :: 3.8",
87 "Programming Language :: Python :: 3.9",
88 "Programming Language :: Python :: 3.10",
89 "Topic :: Software Development :: Libraries :: Python Modules",
90 ],
91 options={"bdist_wheel": {"universal": "1"}},
92 )
93
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@
'urllib3>=1.25.7; python_version<="3.4"',
'urllib3>=1.26.9; python_version=="3.5"',
'urllib3>=1.26.11; python_version >="3.6"',
+ 'urllib3<2.0.0',
"certifi",
],
extras_require={
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,6 +41,7 @@\n 'urllib3>=1.25.7; python_version<=\"3.4\"',\n 'urllib3>=1.26.9; python_version==\"3.5\"',\n 'urllib3>=1.26.11; python_version >=\"3.6\"',\n+ 'urllib3<2.0.0',\n \"certifi\",\n ],\n extras_require={\n", "issue": "Cannot import appengine\n### How do you use Sentry?\n\nSentry Saas (sentry.io)\n\n### Version\n\n1.18.0\n\n### Steps to Reproduce\n\nInstall the SDK within any project that is not pinning urllib3 < 2.0.0\n\n### Expected Result\n\nability to import appengine\n\n### Actual Result\n\nCannot import appengine as gaecontrib.\r\nAs per urllib 2.0.0 release: https://github.com/urllib3/urllib3/tree/2.0.0\r\n\r\nRemoved urllib3.contrib.appengine.AppEngineManager and support for Google App Engine Standard Environment (https://github.com/urllib3/urllib3/issues/2044).\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nSentry-Python - Sentry SDK for Python\n=====================================\n\n**Sentry-Python is an SDK for Sentry.** Check out `GitHub\n<https://github.com/getsentry/sentry-python>`_ to find out more.\n\"\"\"\n\nimport os\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef get_file_text(file_name):\n with open(os.path.join(here, file_name)) as in_file:\n return in_file.read()\n\n\nsetup(\n name=\"sentry-sdk\",\n version=\"1.21.1\",\n author=\"Sentry Team and Contributors\",\n author_email=\"[email protected]\",\n url=\"https://github.com/getsentry/sentry-python\",\n project_urls={\n \"Documentation\": \"https://docs.sentry.io/platforms/python/\",\n \"Changelog\": \"https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md\",\n },\n description=\"Python client for Sentry (https://sentry.io)\",\n long_description=get_file_text(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n # PEP 561\n package_data={\"sentry_sdk\": [\"py.typed\"]},\n zip_safe=False,\n license=\"MIT\",\n install_requires=[\n 'urllib3>=1.25.7; python_version<=\"3.4\"',\n 'urllib3>=1.26.9; python_version==\"3.5\"',\n 'urllib3>=1.26.11; python_version >=\"3.6\"',\n \"certifi\",\n ],\n extras_require={\n \"flask\": [\"flask>=0.11\", \"blinker>=1.1\"],\n \"quart\": [\"quart>=0.16.1\", \"blinker>=1.1\"],\n \"bottle\": [\"bottle>=0.12.13\"],\n \"falcon\": [\"falcon>=1.4\"],\n \"django\": [\"django>=1.8\"],\n \"sanic\": [\"sanic>=0.8\"],\n \"celery\": [\"celery>=3\"],\n \"huey\": [\"huey>=2\"],\n \"beam\": [\"apache-beam>=2.12\"],\n \"arq\": [\"arq>=0.23\"],\n \"rq\": [\"rq>=0.6\"],\n \"aiohttp\": [\"aiohttp>=3.5\"],\n \"tornado\": [\"tornado>=5\"],\n \"sqlalchemy\": [\"sqlalchemy>=1.2\"],\n \"pyspark\": [\"pyspark>=2.4.4\"],\n \"pure_eval\": [\"pure_eval\", \"executing\", \"asttokens\"],\n \"chalice\": [\"chalice>=1.16.0\"],\n \"httpx\": [\"httpx>=0.16.0\"],\n \"starlette\": [\"starlette>=0.19.1\"],\n \"starlite\": [\"starlite>=1.48\"],\n \"fastapi\": [\"fastapi>=0.79.0\"],\n \"pymongo\": [\"pymongo>=3.1\"],\n \"opentelemetry\": [\"opentelemetry-distro>=0.35b0\"],\n \"grpcio\": [\"grpcio>=1.21.1\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n 
\"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n options={\"bdist_wheel\": {\"universal\": \"1\"}},\n)\n", "path": "setup.py"}]} | 1,788 | 120 |
gh_patches_debug_51406 | rasdani/github-patches | git_diff | pytorch__ignite-1016 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyTorch dependency is lacking version constraint
## 🐛 Bug description
<!-- A clear and concise description of what the bug is. -->
PyTorch is a dependency of Ignite and, thus, is specified in `setup.py`
https://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/setup.py#L24-L26
and `conda.recipe/meta.yaml`:
https://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/conda.recipe/meta.yaml#L15-L23
The PyTorch dependency is lacking a version constraint which may work fine right now, but there is no guarantee that Ignite will be compatible with any future major PyTorch release (e.g. PyTorch v2.x).
I suggest to constrain the PyTorch version that Ignite is compatible with, e.g. `>=1.0,<2` or `<2` if any `0.x` and `1.x` version works. If PyTorch has a new major release, even previous Ignite versions can become compatible with the new major PyTorch release (especially if no changes to the code are necessary) by making new bug fix releases with relaxed version constraints to include the new PyTorch version.
In my opinion, it is highly preferable to be conservative about dependency version constraints through a [compatible release constraint](https://www.python.org/dev/peps/pep-0440/#compatible-release) in case the dependency conforms with semantic versioning. It is impossible to guarantee compatibility with a future major release of a dependency as its API can change arbitrarily.
</issue>
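A sketch of the bounded requirement being argued for; the exact lower bound is an assumption.

```python
requirements = [
    "torch>=1.0,<2",   # explicit upper bound excludes a future PyTorch 2.x
    # "torch~=1.0",    # equivalent PEP 440 compatible-release spelling
]
```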
<code>
[start of setup.py]
1 import os
2 import io
3 import re
4 from setuptools import setup, find_packages
5
6
7 def read(*names, **kwargs):
8 with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get("encoding", "utf8")) as fp:
9 return fp.read()
10
11
12 def find_version(*file_paths):
13 version_file = read(*file_paths)
14 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
15 if version_match:
16 return version_match.group(1)
17 raise RuntimeError("Unable to find version string.")
18
19
20 readme = read("README.md")
21
22 VERSION = find_version("ignite", "__init__.py")
23
24 requirements = [
25 "torch",
26 ]
27
28 setup(
29 # Metadata
30 name="pytorch-ignite",
31 version=VERSION,
32 author="PyTorch Core Team",
33 author_email="[email protected]",
34 url="https://github.com/pytorch/ignite",
35 description="A lightweight library to help with training neural networks in PyTorch.",
36 long_description_content_type="text/markdown",
37 long_description=readme,
38 license="BSD",
39 # Package info
40 packages=find_packages(exclude=("tests", "tests.*",)),
41 zip_safe=True,
42 install_requires=requirements,
43 )
44
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,7 @@
VERSION = find_version("ignite", "__init__.py")
requirements = [
- "torch",
+ "torch>=1.0,<2",
]
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,7 +22,7 @@\n VERSION = find_version(\"ignite\", \"__init__.py\")\n \n requirements = [\n- \"torch\",\n+ \"torch>=1.0,<2\",\n ]\n \n setup(\n", "issue": "PyTorch dependency is lacking version constraint\n## \ud83d\udc1b Bug description\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nPyTorch is a dependency of Ignite and, thus, is specified in `setup.py`\r\n\r\nhttps://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/setup.py#L24-L26\r\n\r\nand `conda.recipe/meta.yaml`:\r\n\r\nhttps://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/conda.recipe/meta.yaml#L15-L23\r\n\r\nThe PyTorch dependency is lacking a version constraint which may work fine right now, but there is no guarantee that Ignite will be compatible with any future major PyTorch release (e.g. PyTorch v2.x).\r\n\r\nI suggest to constrain the PyTorch version that Ignite is compatible with, e.g. `>=1.0,<2` or `<2` if any `0.x` and `1.x` version works. If PyTorch has a new major release, even previous Ignite versions can become compatible with the new major PyTorch release (especially if no changes to the code are necessary) by making new bug fix releases with relaxed version constraints to include the new PyTorch version.\r\n\r\nIn my opinion, it is highly preferable to be conservative about dependency version constraints through a [compatible release constraint](https://www.python.org/dev/peps/pep-0440/#compatible-release) in case the dependency conforms with semantic versioning. It is impossible to guarantee compatibility with a future major release of a dependency as its API can change arbitrarily.\n", "before_files": [{"content": "import os\nimport io\nimport re\nfrom setuptools import setup, find_packages\n\n\ndef read(*names, **kwargs):\n with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nreadme = read(\"README.md\")\n\nVERSION = find_version(\"ignite\", \"__init__.py\")\n\nrequirements = [\n \"torch\",\n]\n\nsetup(\n # Metadata\n name=\"pytorch-ignite\",\n version=VERSION,\n author=\"PyTorch Core Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pytorch/ignite\",\n description=\"A lightweight library to help with training neural networks in PyTorch.\",\n long_description_content_type=\"text/markdown\",\n long_description=readme,\n license=\"BSD\",\n # Package info\n packages=find_packages(exclude=(\"tests\", \"tests.*\",)),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]} | 1,286 | 69 |
gh_patches_debug_27460 | rasdani/github-patches | git_diff | googleapis__python-bigquery-442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Too noisy logging about telemetry
Hello,
In the Apache Airflow project, we use the BigQuery library, but recently we've started to see an annoying log message whenever the library is loaded. Simply importing the library is enough to trigger the message every time.
In my opinion, this message should be at a lower level (DEBUG) so that it is not displayed by default, is emitted much less often, or is emitted only when the client is initialized.
```
import logging
logging.basicConfig(level=logging.INFO)
from google.cloud import bigquery
```
Output:
```
INFO:google.cloud.bigquery.opentelemetry_tracing:This service is instrumented using OpenTelemetry. OpenTelemetry could not be imported; please add opentelemetry-api and opentelemetry-instrumentation packages in order to get BigQuery Tracing data.
```
Related issue: https://github.com/apache/airflow/issues/13131
CC: @tswast
</issue>
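A possible downstream workaround (not a fix in the library itself), assuming only that the logger name matches the one shown in the log output above:

```python
# Raise that one logger's level before the import that triggers the message.
import logging

logging.getLogger("google.cloud.bigquery.opentelemetry_tracing").setLevel(logging.WARNING)

from google.cloud import bigquery  # noqa: E402  - imported after logging setup on purpose
```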
<code>
[start of google/cloud/bigquery/opentelemetry_tracing.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 from contextlib import contextmanager
17 from google.api_core.exceptions import GoogleAPICallError
18
19 logger = logging.getLogger(__name__)
20 try:
21 from opentelemetry import trace
22 from opentelemetry.instrumentation.utils import http_status_to_canonical_code
23 from opentelemetry.trace.status import Status
24
25 HAS_OPENTELEMETRY = True
26
27 except ImportError:
28 logger.info(
29 "This service is instrumented using OpenTelemetry. "
30 "OpenTelemetry could not be imported; please "
31 "add opentelemetry-api and opentelemetry-instrumentation "
32 "packages in order to get BigQuery Tracing data."
33 )
34
35 HAS_OPENTELEMETRY = False
36
37 _default_attributes = {
38 "db.system": "BigQuery"
39 } # static, default values assigned to all spans
40
41
42 @contextmanager
43 def create_span(name, attributes=None, client=None, job_ref=None):
44 """Creates a ContextManager for a Span to be exported to the configured exporter.
45 If no configuration exists yields None.
46
47 Args:
48 name (str): Name that will be set for the span being created
49 attributes (Optional[dict]):
50 Additional attributes that pertain to
51 the specific API call (i.e. not a default attribute)
52 client (Optional[google.cloud.bigquery.client.Client]):
53 Pass in a Client object to extract any attributes that may be
54 relevant to it and add them to the created spans.
55 job_ref (Optional[google.cloud.bigquery.job._AsyncJob])
56 Pass in a _AsyncJob object to extract any attributes that may be
57 relevant to it and add them to the created spans.
58
59 Yields:
60 opentelemetry.trace.Span: Yields the newly created Span.
61
62 Raises:
63 google.api_core.exceptions.GoogleAPICallError:
64 Raised if a span could not be yielded or issue with call to
65 OpenTelemetry.
66 """
67 final_attributes = _get_final_span_attributes(attributes, client, job_ref)
68 if not HAS_OPENTELEMETRY:
69 yield None
70 return
71 tracer = trace.get_tracer(__name__)
72
73 # yield new span value
74 with tracer.start_as_current_span(name=name, attributes=final_attributes) as span:
75 try:
76 yield span
77 except GoogleAPICallError as error:
78 if error.code is not None:
79 span.set_status(Status(http_status_to_canonical_code(error.code)))
80 raise
81
82
83 def _get_final_span_attributes(attributes=None, client=None, job_ref=None):
84 final_attributes = {}
85 final_attributes.update(_default_attributes.copy())
86 if client:
87 client_attributes = _set_client_attributes(client)
88 final_attributes.update(client_attributes)
89 if job_ref:
90 job_attributes = _set_job_attributes(job_ref)
91 final_attributes.update(job_attributes)
92 if attributes:
93 final_attributes.update(attributes)
94 return final_attributes
95
96
97 def _set_client_attributes(client):
98 return {"db.name": client.project, "location": client.location}
99
100
101 def _set_job_attributes(job_ref):
102 job_attributes = {
103 "db.name": job_ref.project,
104 "location": job_ref.location,
105 "num_child_jobs": job_ref.num_child_jobs,
106 "job_id": job_ref.job_id,
107 "parent_job_id": job_ref.parent_job_id,
108 "state": job_ref.state,
109 }
110
111 job_attributes["hasErrors"] = job_ref.error_result is not None
112
113 if job_ref.created is not None:
114 job_attributes["timeCreated"] = job_ref.created.isoformat()
115
116 if job_ref.started is not None:
117 job_attributes["timeStarted"] = job_ref.started.isoformat()
118
119 if job_ref.ended is not None:
120 job_attributes["timeEnded"] = job_ref.ended.isoformat()
121
122 return job_attributes
123
[end of google/cloud/bigquery/opentelemetry_tracing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/cloud/bigquery/opentelemetry_tracing.py b/google/cloud/bigquery/opentelemetry_tracing.py
--- a/google/cloud/bigquery/opentelemetry_tracing.py
+++ b/google/cloud/bigquery/opentelemetry_tracing.py
@@ -23,16 +23,11 @@
from opentelemetry.trace.status import Status
HAS_OPENTELEMETRY = True
+ _warned_telemetry = True
except ImportError:
- logger.info(
- "This service is instrumented using OpenTelemetry. "
- "OpenTelemetry could not be imported; please "
- "add opentelemetry-api and opentelemetry-instrumentation "
- "packages in order to get BigQuery Tracing data."
- )
-
HAS_OPENTELEMETRY = False
+ _warned_telemetry = False
_default_attributes = {
"db.system": "BigQuery"
@@ -64,8 +59,18 @@
Raised if a span could not be yielded or issue with call to
OpenTelemetry.
"""
+ global _warned_telemetry
final_attributes = _get_final_span_attributes(attributes, client, job_ref)
if not HAS_OPENTELEMETRY:
+ if not _warned_telemetry:
+ logger.debug(
+ "This service is instrumented using OpenTelemetry. "
+ "OpenTelemetry could not be imported; please "
+ "add opentelemetry-api and opentelemetry-instrumentation "
+ "packages in order to get BigQuery Tracing data."
+ )
+ _warned_telemetry = True
+
yield None
return
tracer = trace.get_tracer(__name__)
| {"golden_diff": "diff --git a/google/cloud/bigquery/opentelemetry_tracing.py b/google/cloud/bigquery/opentelemetry_tracing.py\n--- a/google/cloud/bigquery/opentelemetry_tracing.py\n+++ b/google/cloud/bigquery/opentelemetry_tracing.py\n@@ -23,16 +23,11 @@\n from opentelemetry.trace.status import Status\n \n HAS_OPENTELEMETRY = True\n+ _warned_telemetry = True\n \n except ImportError:\n- logger.info(\n- \"This service is instrumented using OpenTelemetry. \"\n- \"OpenTelemetry could not be imported; please \"\n- \"add opentelemetry-api and opentelemetry-instrumentation \"\n- \"packages in order to get BigQuery Tracing data.\"\n- )\n-\n HAS_OPENTELEMETRY = False\n+ _warned_telemetry = False\n \n _default_attributes = {\n \"db.system\": \"BigQuery\"\n@@ -64,8 +59,18 @@\n Raised if a span could not be yielded or issue with call to\n OpenTelemetry.\n \"\"\"\n+ global _warned_telemetry\n final_attributes = _get_final_span_attributes(attributes, client, job_ref)\n if not HAS_OPENTELEMETRY:\n+ if not _warned_telemetry:\n+ logger.debug(\n+ \"This service is instrumented using OpenTelemetry. \"\n+ \"OpenTelemetry could not be imported; please \"\n+ \"add opentelemetry-api and opentelemetry-instrumentation \"\n+ \"packages in order to get BigQuery Tracing data.\"\n+ )\n+ _warned_telemetry = True\n+\n yield None\n return\n tracer = trace.get_tracer(__name__)\n", "issue": "Too noise logging about telemetry\nHello,\r\n\r\nIn the Apache Airflow project, we use the BigQuery library, but recently we've started to see annoying log message when the library is loaded. It is enough that the library is loaded and there is an message every time. \r\n\r\nIn my opinion, this message should be of a lower level (DEBUG) so that it is not displayed much less often or is displayed only when the client is initialized. \r\n```\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nfrom google.cloud import bigquery\r\n```\r\nOutput: \r\n```\r\nINFO:google.cloud.bigquery.opentelemetry_tracing:This service is instrumented using OpenTelemetry. OpenTelemetry could not be imported; please add opentelemetry-api and opentelemetry-instrumentation packages in order to get BigQuery Tracing data.\r\n```\r\n\r\nRelated issue: https://github.com/apache/airflow/issues/13131\r\n\r\nCC: @tswast \n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom contextlib import contextmanager\nfrom google.api_core.exceptions import GoogleAPICallError\n\nlogger = logging.getLogger(__name__)\ntry:\n from opentelemetry import trace\n from opentelemetry.instrumentation.utils import http_status_to_canonical_code\n from opentelemetry.trace.status import Status\n\n HAS_OPENTELEMETRY = True\n\nexcept ImportError:\n logger.info(\n \"This service is instrumented using OpenTelemetry. 
\"\n \"OpenTelemetry could not be imported; please \"\n \"add opentelemetry-api and opentelemetry-instrumentation \"\n \"packages in order to get BigQuery Tracing data.\"\n )\n\n HAS_OPENTELEMETRY = False\n\n_default_attributes = {\n \"db.system\": \"BigQuery\"\n} # static, default values assigned to all spans\n\n\n@contextmanager\ndef create_span(name, attributes=None, client=None, job_ref=None):\n \"\"\"Creates a ContextManager for a Span to be exported to the configured exporter.\n If no configuration exists yields None.\n\n Args:\n name (str): Name that will be set for the span being created\n attributes (Optional[dict]):\n Additional attributes that pertain to\n the specific API call (i.e. not a default attribute)\n client (Optional[google.cloud.bigquery.client.Client]):\n Pass in a Client object to extract any attributes that may be\n relevant to it and add them to the created spans.\n job_ref (Optional[google.cloud.bigquery.job._AsyncJob])\n Pass in a _AsyncJob object to extract any attributes that may be\n relevant to it and add them to the created spans.\n\n Yields:\n opentelemetry.trace.Span: Yields the newly created Span.\n\n Raises:\n google.api_core.exceptions.GoogleAPICallError:\n Raised if a span could not be yielded or issue with call to\n OpenTelemetry.\n \"\"\"\n final_attributes = _get_final_span_attributes(attributes, client, job_ref)\n if not HAS_OPENTELEMETRY:\n yield None\n return\n tracer = trace.get_tracer(__name__)\n\n # yield new span value\n with tracer.start_as_current_span(name=name, attributes=final_attributes) as span:\n try:\n yield span\n except GoogleAPICallError as error:\n if error.code is not None:\n span.set_status(Status(http_status_to_canonical_code(error.code)))\n raise\n\n\ndef _get_final_span_attributes(attributes=None, client=None, job_ref=None):\n final_attributes = {}\n final_attributes.update(_default_attributes.copy())\n if client:\n client_attributes = _set_client_attributes(client)\n final_attributes.update(client_attributes)\n if job_ref:\n job_attributes = _set_job_attributes(job_ref)\n final_attributes.update(job_attributes)\n if attributes:\n final_attributes.update(attributes)\n return final_attributes\n\n\ndef _set_client_attributes(client):\n return {\"db.name\": client.project, \"location\": client.location}\n\n\ndef _set_job_attributes(job_ref):\n job_attributes = {\n \"db.name\": job_ref.project,\n \"location\": job_ref.location,\n \"num_child_jobs\": job_ref.num_child_jobs,\n \"job_id\": job_ref.job_id,\n \"parent_job_id\": job_ref.parent_job_id,\n \"state\": job_ref.state,\n }\n\n job_attributes[\"hasErrors\"] = job_ref.error_result is not None\n\n if job_ref.created is not None:\n job_attributes[\"timeCreated\"] = job_ref.created.isoformat()\n\n if job_ref.started is not None:\n job_attributes[\"timeStarted\"] = job_ref.started.isoformat()\n\n if job_ref.ended is not None:\n job_attributes[\"timeEnded\"] = job_ref.ended.isoformat()\n\n return job_attributes\n", "path": "google/cloud/bigquery/opentelemetry_tracing.py"}]} | 1,944 | 378 |
gh_patches_debug_35 | rasdani/github-patches | git_diff | StackStorm__st2-5104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add version string to st2tests to make it installable
Prior to this change, this will fail:
cd st2tests/st2tests
pip install .
After this change that command successfully installs the `st2tests` package. This will also work for installing via GitHub as in:
pip install -e git+https://github.com/StackStorm/[email protected]#egg=st2tests&subdirectory=st2tests
The original request in #2574 is to get st2tests onto PyPI, and I'm not sure if this will accomplish that request, but this is a good first step.
</issue>
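A sketch of the usual pattern this implies, assuming `setup.py` derives its version from the package's `__version__`; the version value is illustrative.

```python
# st2tests/st2tests/__init__.py
__version__ = "3.3dev"

# setup.py (simplified, shown as comments to avoid implying the real file looks like this)
# from st2tests import __version__
# setup(name="st2tests", version=__version__, ...)
```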
<code>
[start of st2tests/st2tests/__init__.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 from st2tests.base import EventletTestCase
19 from st2tests.base import DbTestCase
20 from st2tests.base import ExecutionDbTestCase
21 from st2tests.base import DbModelTestCase
22 from st2tests.base import WorkflowTestCase
23
24
25 __all__ = [
26 'EventletTestCase',
27 'DbTestCase',
28 'ExecutionDbTestCase',
29 'DbModelTestCase',
30 'WorkflowTestCase'
31 ]
32
[end of st2tests/st2tests/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2tests/st2tests/__init__.py b/st2tests/st2tests/__init__.py
--- a/st2tests/st2tests/__init__.py
+++ b/st2tests/st2tests/__init__.py
@@ -29,3 +29,5 @@
'DbModelTestCase',
'WorkflowTestCase'
]
+
+__version__ = '3.3dev'
| {"golden_diff": "diff --git a/st2tests/st2tests/__init__.py b/st2tests/st2tests/__init__.py\n--- a/st2tests/st2tests/__init__.py\n+++ b/st2tests/st2tests/__init__.py\n@@ -29,3 +29,5 @@\n 'DbModelTestCase',\n 'WorkflowTestCase'\n ]\n+\n+__version__ = '3.3dev'\n", "issue": "Add version string to st2tests to make it installable\nPrior to this change, this will fail:\r\n\r\n cd st2tests/st2tests\r\n pip install .\r\n\r\nAfter this change that command successfully installs the `st2tests` package. This will also work for installing via GitHub as in:\r\n\r\n pip install -e git+https://github.com/StackStorm/[email protected]#egg=st2tests&subdirectory=st2tests\r\n\r\nThe original request in #2574 is to get st2tests onto PyPI, and I'm not sure if this will accomplish that request, but this is a good first step.\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2tests.base import EventletTestCase\nfrom st2tests.base import DbTestCase\nfrom st2tests.base import ExecutionDbTestCase\nfrom st2tests.base import DbModelTestCase\nfrom st2tests.base import WorkflowTestCase\n\n\n__all__ = [\n 'EventletTestCase',\n 'DbTestCase',\n 'ExecutionDbTestCase',\n 'DbModelTestCase',\n 'WorkflowTestCase'\n]\n", "path": "st2tests/st2tests/__init__.py"}]} | 978 | 89 |
gh_patches_debug_29289 | rasdani/github-patches | git_diff | google__openhtf-186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Attaching binary file using test.attach raises UnicodeDecodeError
If I attach a PNG or AVI, I see the following in OutputTestRecord:
Python2.7/site-packages/openhtf/__init__.py", line 185, in OutputTestRecord
output_cb(test_record)
File "virtualenv/local/lib/python2.7/site-packages/openhtf/**init**.py", line 83, in **call**
f.write(self.encode(as_dict))
File "/usr/lib/python2.7/json/encoder.py", line 209, in encode
chunks = list(chunks)
File "/usr/lib/python2.7/json/encoder.py", line 434, in _iterencode
for chunk in _iterencode_dict(o, _current_indent_level):
File "/usr/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict
for chunk in chunks:
File "/usr/lib/python2.7/json/encoder.py", line 332, in _iterencode_list
for chunk in chunks:
File "/usr/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict
for chunk in chunks:
File "/usr/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict
for chunk in chunks:
File "/usr/lib/python2.7/json/encoder.py", line 390, in _iterencode_dict
yield _encoder(value)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x89 in position 0: invalid start byte
</issue>
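A minimal sketch reproducing the failure and the usual remedy of base64-encoding binary attachment data before JSON serialization; the byte string is just a sample PNG header.

```python
import base64
import json

png_header = b"\x89PNG\r\n\x1a\n"

try:
    json.dumps({"data": png_header})
except (UnicodeDecodeError, TypeError) as err:  # Python 2 raises the former, Python 3 the latter
    print("raw bytes cannot be JSON-encoded:", err)

safe = {"data": base64.standard_b64encode(png_header).decode("ascii")}
print(json.dumps(safe))  # works: the value is now plain ASCII text
```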
<code>
[start of openhtf/io/output/json_factory.py]
1 """Module for outputting test record to JSON-formatted files."""
2
3 from json import JSONEncoder
4
5 from openhtf import util
6 from openhtf.exe import test_state
7
8
9 class OutputToJSON(JSONEncoder):
10 """Return an output callback that writes JSON Test Records.
11
12 An example filename_pattern might be:
13 '/data/test_records/%(dut_id)s.%(start_time_millis)s'
14
15 To use this output mechanism:
16 test = openhtf.Test(PhaseOne, PhaseTwo)
17 test.AddOutputCallback(openhtf.OutputToJson(
18 '/data/test_records/%(dut_id)s.%(start_time_millis)s'))
19
20 Args:
21 filename_pattern: A format string specifying the filename to write to,
22 will be formatted with the Test Record as a dictionary.
23 inline_attachments: Whether attachments should be included inline in the
24 output. Set to False if you expect to have large binary attachments.
25 """
26
27 def __init__(self, filename_pattern=None, inline_attachments=True, **kwargs):
28 super(OutputToJSON, self).__init__(**kwargs)
29 self.filename_pattern = filename_pattern
30 self.inline_attachments = inline_attachments
31
32 def default(self, obj):
33 if isinstance(obj, BaseException):
34 # Just repr exceptions.
35 return repr(obj)
36 return super(OutputToJSON, self).default(obj)
37
38 # pylint: disable=invalid-name
39 def __call__(self, test_record):
40 assert self.filename_pattern, 'filename_pattern required'
41 if self.inline_attachments:
42 as_dict = util.ConvertToBaseTypes(test_record)
43 else:
44 as_dict = util.ConvertToBaseTypes(test_record, ignore_keys='attachments')
45 with open(self.filename_pattern % as_dict, 'w') as f:
46 f.write(self.encode(as_dict))
47 # pylint: enable=invalid-name
48
[end of openhtf/io/output/json_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openhtf/io/output/json_factory.py b/openhtf/io/output/json_factory.py
--- a/openhtf/io/output/json_factory.py
+++ b/openhtf/io/output/json_factory.py
@@ -1,5 +1,6 @@
"""Module for outputting test record to JSON-formatted files."""
+import base64
from json import JSONEncoder
from openhtf import util
@@ -21,7 +22,9 @@
filename_pattern: A format string specifying the filename to write to,
will be formatted with the Test Record as a dictionary.
inline_attachments: Whether attachments should be included inline in the
- output. Set to False if you expect to have large binary attachments.
+ output. Set to False if you expect to have large binary attachments. If
+ True (the default), then attachments are base64 encoded to allow for
+ binary data that's not supported by JSON directly.
"""
def __init__(self, filename_pattern=None, inline_attachments=True, **kwargs):
@@ -40,6 +43,9 @@
assert self.filename_pattern, 'filename_pattern required'
if self.inline_attachments:
as_dict = util.ConvertToBaseTypes(test_record)
+ for phase in as_dict['phases']:
+ for value in phase['attachments'].itervalues():
+ value['data'] = base64.standard_b64encode(value['data'])
else:
as_dict = util.ConvertToBaseTypes(test_record, ignore_keys='attachments')
with open(self.filename_pattern % as_dict, 'w') as f:
| {"golden_diff": "diff --git a/openhtf/io/output/json_factory.py b/openhtf/io/output/json_factory.py\n--- a/openhtf/io/output/json_factory.py\n+++ b/openhtf/io/output/json_factory.py\n@@ -1,5 +1,6 @@\n \"\"\"Module for outputting test record to JSON-formatted files.\"\"\"\n \n+import base64\n from json import JSONEncoder\n \n from openhtf import util\n@@ -21,7 +22,9 @@\n filename_pattern: A format string specifying the filename to write to,\n will be formatted with the Test Record as a dictionary.\n inline_attachments: Whether attachments should be included inline in the\n- output. Set to False if you expect to have large binary attachments.\n+ output. Set to False if you expect to have large binary attachments. If\n+ True (the default), then attachments are base64 encoded to allow for\n+ binary data that's not supported by JSON directly.\n \"\"\"\n \n def __init__(self, filename_pattern=None, inline_attachments=True, **kwargs):\n@@ -40,6 +43,9 @@\n assert self.filename_pattern, 'filename_pattern required'\n if self.inline_attachments:\n as_dict = util.ConvertToBaseTypes(test_record)\n+ for phase in as_dict['phases']:\n+ for value in phase['attachments'].itervalues():\n+ value['data'] = base64.standard_b64encode(value['data'])\n else:\n as_dict = util.ConvertToBaseTypes(test_record, ignore_keys='attachments')\n with open(self.filename_pattern % as_dict, 'w') as f:\n", "issue": "Attaching binary file using test.attach raises UnicodeDecodeError\nIf I attach a png or avi I see the following in OutputTestRecord\n\nPython2.7/site-packages/openhtf/**init**.py\", line 185, in OutputTestRecord\n output_cb(test_record)\n File \"virtualenv/local/lib/python2.7/site-packages/openhtf/**init**.py\", line 83, in **call**\n f.write(self.encode(as_dict))\n File \"/usr/lib/python2.7/json/encoder.py\", line 209, in encode\n chunks = list(chunks)\n File \"/usr/lib/python2.7/json/encoder.py\", line 434, in _iterencode\n for chunk in _iterencode_dict(o, _current_indent_level):\n File \"/usr/lib/python2.7/json/encoder.py\", line 408, in _iterencode_dict\n for chunk in chunks:\n File \"/usr/lib/python2.7/json/encoder.py\", line 332, in _iterencode_list\n for chunk in chunks:\n File \"/usr/lib/python2.7/json/encoder.py\", line 408, in _iterencode_dict\n for chunk in chunks:\n File \"/usr/lib/python2.7/json/encoder.py\", line 408, in _iterencode_dict\n for chunk in chunks:\n File \"/usr/lib/python2.7/json/encoder.py\", line 390, in _iterencode_dict\n yield _encoder(value)\nUnicodeDecodeError: 'utf8' codec can't decode byte 0x89 in position 0: invalid start byte\n\n", "before_files": [{"content": "\"\"\"Module for outputting test record to JSON-formatted files.\"\"\"\n\nfrom json import JSONEncoder\n\nfrom openhtf import util\nfrom openhtf.exe import test_state\n\n\nclass OutputToJSON(JSONEncoder):\n \"\"\"Return an output callback that writes JSON Test Records.\n\n An example filename_pattern might be:\n '/data/test_records/%(dut_id)s.%(start_time_millis)s'\n\n To use this output mechanism:\n test = openhtf.Test(PhaseOne, PhaseTwo)\n test.AddOutputCallback(openhtf.OutputToJson(\n '/data/test_records/%(dut_id)s.%(start_time_millis)s'))\n\n Args:\n filename_pattern: A format string specifying the filename to write to,\n will be formatted with the Test Record as a dictionary.\n inline_attachments: Whether attachments should be included inline in the\n output. 
Set to False if you expect to have large binary attachments.\n \"\"\"\n\n def __init__(self, filename_pattern=None, inline_attachments=True, **kwargs):\n super(OutputToJSON, self).__init__(**kwargs)\n self.filename_pattern = filename_pattern\n self.inline_attachments = inline_attachments\n\n def default(self, obj):\n if isinstance(obj, BaseException):\n # Just repr exceptions.\n return repr(obj)\n return super(OutputToJSON, self).default(obj)\n\n # pylint: disable=invalid-name\n def __call__(self, test_record):\n assert self.filename_pattern, 'filename_pattern required'\n if self.inline_attachments:\n as_dict = util.ConvertToBaseTypes(test_record)\n else:\n as_dict = util.ConvertToBaseTypes(test_record, ignore_keys='attachments')\n with open(self.filename_pattern % as_dict, 'w') as f:\n f.write(self.encode(as_dict))\n # pylint: enable=invalid-name\n", "path": "openhtf/io/output/json_factory.py"}]} | 1,373 | 345 |
gh_patches_debug_24992 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2733 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Staging is currently returning HTML to bodhi CLI requests
I am not sure why this is happening, but it seems that staging Bodhi is currently returning HTML to CLI requests. This also happens to requests with ```http``` or ```curl```.
I recall a problem with the unit tests once we started testing under Python 3: they would sometimes receive HTML when they didn't explicitly send a request header asking for a JSON response. We ended up adjusting the tests to pass that header, since this did not seem to happen when serving Bodhi with ```pserve-3```.
It turns out that there really is some problem that seems related to Python 3 since staging Bodhi started doing this same thing.
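For reference, a minimal client-side check that forces a JSON response (a sketch only; the base URL and endpoint below are placeholders rather than the real staging instance):

```python
import requests

# Placeholder base URL; point this at the staging instance being tested.
BODHI_URL = "https://bodhi.example.org"

# Without an explicit Accept header the server may negotiate HTML, so ask
# for JSON the same way the unit tests were adjusted to do.
response = requests.get(
    f"{BODHI_URL}/releases/",
    headers={"Accept": "application/json"},
    timeout=30,
)
print(response.headers.get("Content-Type"))
print(response.json())
```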
</issue>
<code>
[start of bodhi/server/webapp.py]
1 # -*- coding: utf-8 -*-
2 # Copyright © 2018 Red Hat, Inc.
3 #
4 # This file is part of Bodhi.
5 #
6 # This program is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU General Public License
8 # as published by the Free Software Foundation; either version 2
9 # of the License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19 """
20 Define Bodhi's WSGI application.
21
22 As of the writing of this docblock, this module is a bit misnamed since the webapp is actually
23 defined in bodhi.server.__init__. However, that is an anti-pattern with lots of nasty in-line
24 imports due to circular dependencies, and this module is intended to solve that problem.
25 Unfortunately, it is a backwards-incompatible change to move main() here, so it will remain in
26 __init__ until we make a major Bodhi release. See https://github.com/fedora-infra/bodhi/issues/2294
27 """
28
29 from pyramid.events import NewRequest, subscriber
30
31 from bodhi import server
32
33
34 def _complete_database_session(request):
35 """
36 Commit the database changes if no exceptions occurred.
37
38 This is a post-request hook. It handles rolling back or committing the session based on whether
39 an exception occurred or not. To get a database session that's not tied to the request/response
40 cycle, just use the :data:`Session` scoped session.
41
42 Args:
43 request (pyramid.request.Request): The current web request.
44 """
45 _rollback_or_commit(request)
46 server.Session().close()
47 server.Session.remove()
48
49
50 @subscriber(NewRequest)
51 def _prepare_request(event):
52 """
53 Add callbacks onto every new request.
54
55 This function adds a callback to clean up the database session when the request is finished.
56
57 Args:
58 event (pyramid.events.NewRequest): The new request event.
59 """
60 event.request.add_finished_callback(_complete_database_session)
61
62
63 def _rollback_or_commit(request):
64 """
65 Commit the transaction if there are no exceptions, otherwise rollback.
66
67 Args:
68 request (pyramid.request.Request): The current web request.
69 """
70 if request.exception is not None:
71 server.Session().rollback()
72 else:
73 server.Session().commit()
74
[end of bodhi/server/webapp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/webapp.py b/bodhi/server/webapp.py
--- a/bodhi/server/webapp.py
+++ b/bodhi/server/webapp.py
@@ -50,13 +50,25 @@
@subscriber(NewRequest)
def _prepare_request(event):
"""
- Add callbacks onto every new request.
+ Prepare each incoming request to Bodhi.
- This function adds a callback to clean up the database session when the request is finished.
+ This function does two things:
+ * If requests do not have an Accept header, or if their Accept header is "*/*", it sets the
+ header to application/json. Pyramid has undefined behavior when an ambiguous or missing
+ Accept header is received, and multiple views are defined that handle specific Accept
+ headers. For example, we have a view that returns html or JSON for /composes/, depending
+ on the Accept header, but if a request has no Accept header or has */*, Pyramid will
+ consider both views to be a match for the request and so it is undefined which view will
+ handle the request. Let's force ambibuous requests to receive a JSON response so we have a
+ defined behavior. See https://github.com/fedora-infra/bodhi/issues/2731.
+ * It adds a callback to clean up the database session when the request is finished.
Args:
event (pyramid.events.NewRequest): The new request event.
"""
+ if 'Accept' not in event.request.headers or event.request.headers['Accept'] == '*/*':
+ event.request.headers['Accept'] = 'application/json'
+
event.request.add_finished_callback(_complete_database_session)
| {"golden_diff": "diff --git a/bodhi/server/webapp.py b/bodhi/server/webapp.py\n--- a/bodhi/server/webapp.py\n+++ b/bodhi/server/webapp.py\n@@ -50,13 +50,25 @@\n @subscriber(NewRequest)\n def _prepare_request(event):\n \"\"\"\n- Add callbacks onto every new request.\n+ Prepare each incoming request to Bodhi.\n \n- This function adds a callback to clean up the database session when the request is finished.\n+ This function does two things:\n+ * If requests do not have an Accept header, or if their Accept header is \"*/*\", it sets the\n+ header to application/json. Pyramid has undefined behavior when an ambiguous or missing\n+ Accept header is received, and multiple views are defined that handle specific Accept\n+ headers. For example, we have a view that returns html or JSON for /composes/, depending\n+ on the Accept header, but if a request has no Accept header or has */*, Pyramid will\n+ consider both views to be a match for the request and so it is undefined which view will\n+ handle the request. Let's force ambibuous requests to receive a JSON response so we have a\n+ defined behavior. See https://github.com/fedora-infra/bodhi/issues/2731.\n+ * It adds a callback to clean up the database session when the request is finished.\n \n Args:\n event (pyramid.events.NewRequest): The new request event.\n \"\"\"\n+ if 'Accept' not in event.request.headers or event.request.headers['Accept'] == '*/*':\n+ event.request.headers['Accept'] = 'application/json'\n+\n event.request.add_finished_callback(_complete_database_session)\n", "issue": "Staging is currently returning HTML to bodhi CLI requests\nI am not sure why this is happening, but it seems that staging Bodhi is currently returning HTML to CLI requests. This also happens to requests with ```http``` or ```curl```.\r\n\r\nI recall a problem with the unit tests where they would sometimes receive HTML when they didn't explicitly use a request header to ask for a JSON response once we started testing under Python 3. We ended up adjusting the tests to pass that header since this did not seem to happen when serving Bodhi with ```pserve-3```.\r\n\r\nIt turns out that there really is some problem that seems related to Python 3 since staging Bodhi started doing this same thing.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright \u00a9 2018 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nDefine Bodhi's WSGI application.\n\nAs of the writing of this docblock, this module is a bit misnamed since the webapp is actually\ndefined in bodhi.server.__init__. 
However, that is an anti-pattern with lots of nasty in-line\nimports due to circular dependencies, and this module is intended to solve that problem.\nUnfortunately, it is a backwards-incompatible change to move main() here, so it will remain in\n__init__ until we make a major Bodhi release. See https://github.com/fedora-infra/bodhi/issues/2294\n\"\"\"\n\nfrom pyramid.events import NewRequest, subscriber\n\nfrom bodhi import server\n\n\ndef _complete_database_session(request):\n \"\"\"\n Commit the database changes if no exceptions occurred.\n\n This is a post-request hook. It handles rolling back or committing the session based on whether\n an exception occurred or not. To get a database session that's not tied to the request/response\n cycle, just use the :data:`Session` scoped session.\n\n Args:\n request (pyramid.request.Request): The current web request.\n \"\"\"\n _rollback_or_commit(request)\n server.Session().close()\n server.Session.remove()\n\n\n@subscriber(NewRequest)\ndef _prepare_request(event):\n \"\"\"\n Add callbacks onto every new request.\n\n This function adds a callback to clean up the database session when the request is finished.\n\n Args:\n event (pyramid.events.NewRequest): The new request event.\n \"\"\"\n event.request.add_finished_callback(_complete_database_session)\n\n\ndef _rollback_or_commit(request):\n \"\"\"\n Commit the transaction if there are no exceptions, otherwise rollback.\n\n Args:\n request (pyramid.request.Request): The current web request.\n \"\"\"\n if request.exception is not None:\n server.Session().rollback()\n else:\n server.Session().commit()\n", "path": "bodhi/server/webapp.py"}]} | 1,415 | 373 |
gh_patches_debug_10941 | rasdani/github-patches | git_diff | mesonbuild__meson-8978 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
get_variable with a file object as default value: Argument of type File is not held by an ObjectHolder
**Describe the bug**
After updating Meson, I see this error in a previously working build:
```
build/analysis/vale/meson.build:24:0: ERROR: Argument build/analysis/vale/vale-styleguide/config/documentation.vale.ini of type File is not held by an ObjectHolder.
This is a Meson bug and should be reported!
```
The file is being specified in this manner:
```
# Supply a style file, which will use this file instead of the default .vale.ini
vale_config_file = get_variable('vale_config_file',
files('vale-styleguide/config/documentation.vale.ini'))
```
The default variable option is being used - I'm not overriding it.
The same is happening in a Doxygen module I use:
```
doxyfile_input = get_variable('doxyfile_input', files('Doxyfile.in'))
```
I tried moving the file object into another variable:
```
vale_default_config_file = files('vale-styleguide/config/documentation.vale.ini')
vale_config_file = get_variable('vale_config_file', vale_default_config_file)
```
With the same result - the error is reported on the `get_variable` line.
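For what it's worth, the `_unholder` helper quoted below recurses into lists and dicts without forwarding its `permissive` flag, which would explain why a `files()` list used as a default trips the ObjectHolder check. A stripped-down illustration of that pattern (not the actual Meson source):

```python
class Holdable:
    """Stand-in for a Meson object (such as a File) that needs an ObjectHolder."""

def unhold(obj, permissive=False):
    if isinstance(obj, list):
        # permissive is not forwarded here, so nested Holdable values are
        # rejected even when the caller asked for permissive handling.
        return [unhold(item) for item in obj]
    if isinstance(obj, Holdable):
        if permissive:
            return obj
        raise RuntimeError(f"{obj!r} is not held by an ObjectHolder")
    return obj

unhold([Holdable()], permissive=True)  # raises despite permissive=True
```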
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? **native**
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) **MacOS 10.15.7**
* what Python version are you using e.g. 3.8.0 **Python 3.9.6**
* what `meson --version` **0.59.0.rc1**
</issue>
<code>
[start of mesonbuild/interpreterbase/_unholder.py]
1 # Copyright 2013-2021 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from .baseobjects import InterpreterObject, MesonInterpreterObject, ObjectHolder, TYPE_var
16 from .exceptions import InvalidArguments
17 from ..mesonlib import HoldableObject, MesonBugException
18
19 import typing as T
20
21 def _unholder(obj: T.Union[TYPE_var, InterpreterObject], *, permissive: bool = False) -> TYPE_var:
22 if isinstance(obj, (int, bool, str)):
23 return obj
24 elif isinstance(obj, list):
25 return [_unholder(x) for x in obj]
26 elif isinstance(obj, dict):
27 return {k: _unholder(v) for k, v in obj.items()}
28 elif isinstance(obj, ObjectHolder):
29 assert isinstance(obj.held_object, HoldableObject)
30 return obj.held_object
31 elif isinstance(obj, MesonInterpreterObject):
32 return obj
33 elif isinstance(obj, HoldableObject) and permissive:
34 return obj
35 elif isinstance(obj, HoldableObject):
36 raise MesonBugException(f'Argument {obj} of type {type(obj).__name__} is not held by an ObjectHolder.')
37 elif isinstance(obj, InterpreterObject):
38 raise InvalidArguments(f'Argument {obj} of type {type(obj).__name__} cannot be passed to a method or function')
39 raise MesonBugException(f'Unknown object {obj} of type {type(obj).__name__} in the parameters.')
40
[end of mesonbuild/interpreterbase/_unholder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/interpreterbase/_unholder.py b/mesonbuild/interpreterbase/_unholder.py
--- a/mesonbuild/interpreterbase/_unholder.py
+++ b/mesonbuild/interpreterbase/_unholder.py
@@ -22,9 +22,9 @@
if isinstance(obj, (int, bool, str)):
return obj
elif isinstance(obj, list):
- return [_unholder(x) for x in obj]
+ return [_unholder(x, permissive=permissive) for x in obj]
elif isinstance(obj, dict):
- return {k: _unholder(v) for k, v in obj.items()}
+ return {k: _unholder(v, permissive=permissive) for k, v in obj.items()}
elif isinstance(obj, ObjectHolder):
assert isinstance(obj.held_object, HoldableObject)
return obj.held_object
| {"golden_diff": "diff --git a/mesonbuild/interpreterbase/_unholder.py b/mesonbuild/interpreterbase/_unholder.py\n--- a/mesonbuild/interpreterbase/_unholder.py\n+++ b/mesonbuild/interpreterbase/_unholder.py\n@@ -22,9 +22,9 @@\n if isinstance(obj, (int, bool, str)):\n return obj\n elif isinstance(obj, list):\n- return [_unholder(x) for x in obj]\n+ return [_unholder(x, permissive=permissive) for x in obj]\n elif isinstance(obj, dict):\n- return {k: _unholder(v) for k, v in obj.items()}\n+ return {k: _unholder(v, permissive=permissive) for k, v in obj.items()}\n elif isinstance(obj, ObjectHolder):\n assert isinstance(obj.held_object, HoldableObject)\n return obj.held_object\n", "issue": "get_variable with a file object as default value: Argument of type File is not held by an ObjectHolder\n**Describe the bug**\r\nAfter updating Meson, I see this error in a previously working build:\r\n\r\n```\r\nbuild/analysis/vale/meson.build:24:0: ERROR: Argument build/analysis/vale/vale-styleguide/config/documentation.vale.ini of type File is not held by an ObjectHolder.\r\n\r\n This is a Meson bug and should be reported!\r\n```\r\n\r\nThe file is being specified in this manner:\r\n\r\n```\r\n# Supply a style file, which will use this file instead of the default .vale.ini\r\nvale_config_file = get_variable('vale_config_file',\r\n\tfiles('vale-styleguide/config/documentation.vale.ini'))\r\n```\r\n\r\nThe default variable option is being used - I'm not overriding it.\r\n\r\nThe same is happening in a Doxygen module I use:\r\n\r\n```\r\ndoxyfile_input = get_variable('doxyfile_input', files('Doxyfile.in'))\r\n```\r\n\r\nI tried moving the file object into another variable:\r\n\r\n```\r\nvale_default_config_file = files('vale-styleguide/config/documentation.vale.ini')\r\nvale_config_file = get_variable('vale_config_file', vale_default_config_file)\r\n```\r\n\r\nWith teh same result - the error is reported on the `get_variable` line.\r\n\r\n**system parameters**\r\n* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? **native**\r\n* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) **MacOS 10.15.7**\r\n* what Python version are you using e.g. 
3.8.0 **Python 3.9.6**\r\n* what `meson --version` **0.59.0.rc1**\r\n\n", "before_files": [{"content": "# Copyright 2013-2021 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .baseobjects import InterpreterObject, MesonInterpreterObject, ObjectHolder, TYPE_var\nfrom .exceptions import InvalidArguments\nfrom ..mesonlib import HoldableObject, MesonBugException\n\nimport typing as T\n\ndef _unholder(obj: T.Union[TYPE_var, InterpreterObject], *, permissive: bool = False) -> TYPE_var:\n if isinstance(obj, (int, bool, str)):\n return obj\n elif isinstance(obj, list):\n return [_unholder(x) for x in obj]\n elif isinstance(obj, dict):\n return {k: _unholder(v) for k, v in obj.items()}\n elif isinstance(obj, ObjectHolder):\n assert isinstance(obj.held_object, HoldableObject)\n return obj.held_object\n elif isinstance(obj, MesonInterpreterObject):\n return obj\n elif isinstance(obj, HoldableObject) and permissive:\n return obj\n elif isinstance(obj, HoldableObject):\n raise MesonBugException(f'Argument {obj} of type {type(obj).__name__} is not held by an ObjectHolder.')\n elif isinstance(obj, InterpreterObject):\n raise InvalidArguments(f'Argument {obj} of type {type(obj).__name__} cannot be passed to a method or function')\n raise MesonBugException(f'Unknown object {obj} of type {type(obj).__name__} in the parameters.')\n", "path": "mesonbuild/interpreterbase/_unholder.py"}]} | 1,442 | 197 |
gh_patches_debug_1316 | rasdani/github-patches | git_diff | mozilla__bugbug-3334 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use information on how a bug is filed as a feature
This could be especially useful for the Spam model.
https://bugzilla.mozilla.org/show_bug.cgi?id=1565403
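A rough sketch of the kind of feature this could add (illustrative only; it does not use bugbug's real extractor API and assumes the Bugzilla bug data exposes a `filed_via` field):

```python
def filed_via_feature(bug: dict) -> str:
    # How the bug was filed (e.g. standard form, API) as a categorical value,
    # so the model can learn that unusual filing channels correlate with spam.
    return bug.get("filed_via", "unknown")
```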
</issue>
<code>
[start of bugbug/models/spambug.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from imblearn.over_sampling import BorderlineSMOTE
8 from sklearn.compose import ColumnTransformer
9 from sklearn.feature_extraction import DictVectorizer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import bug_features, bugzilla, feature_cleanup, utils
13 from bugbug.model import BugModel
14
15
16 class SpamBugModel(BugModel):
17 def __init__(self, lemmatization=False):
18 BugModel.__init__(self, lemmatization)
19
20 self.sampler = BorderlineSMOTE(random_state=0)
21 self.calculate_importance = False
22
23 feature_extractors = [
24 bug_features.has_str(),
25 bug_features.has_regression_range(),
26 bug_features.severity(),
27 bug_features.has_crash_signature(),
28 bug_features.has_url(),
29 bug_features.whiteboard(),
30 bug_features.product(),
31 # TODO: We would like to use the component at the time of filing too,
32 # but we can't because the rollback script doesn't support changes to
33 # components yet.
34 # bug_features.component(),
35 bug_features.num_words_title(),
36 bug_features.num_words_comments(),
37 bug_features.keywords(),
38 bug_features.priority(),
39 bug_features.version(),
40 bug_features.target_milestone(),
41 bug_features.has_attachment(),
42 bug_features.platform(),
43 bug_features.op_sys(),
44 ]
45
46 cleanup_functions = [
47 feature_cleanup.fileref(),
48 feature_cleanup.url(),
49 feature_cleanup.synonyms(),
50 ]
51
52 self.extraction_pipeline = Pipeline(
53 [
54 (
55 "bug_extractor",
56 bug_features.BugExtractor(
57 feature_extractors, cleanup_functions, rollback=True
58 ),
59 ),
60 (
61 "union",
62 ColumnTransformer(
63 [
64 ("data", DictVectorizer(), "data"),
65 ("title", self.text_vectorizer(min_df=0.0001), "title"),
66 (
67 "comments",
68 self.text_vectorizer(min_df=0.0001),
69 "comments",
70 ),
71 ]
72 ),
73 ),
74 ]
75 )
76
77 self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())
78 self.clf.set_params(predictor="cpu_predictor")
79
80 def get_labels(self):
81 classes = {}
82
83 for bug_data in bugzilla.get_bugs(include_invalid=True):
84 bug_id = bug_data["id"]
85
86 # Skip bugs filed by Mozillians, since we are sure they are not spam.
87 if "@mozilla" in bug_data["creator"]:
88 continue
89
90 # A bug that was moved out of 'Invalid Bugs' is definitely a legitimate bug.
91 for history in bug_data["history"]:
92 for change in history["changes"]:
93 if (
94 change["field_name"] == "product"
95 and change["removed"] == "Invalid Bugs"
96 ):
97 classes[bug_id] = 0
98
99 # A fixed bug is definitely a legitimate bug.
100 if bug_data["resolution"] == "FIXED":
101 classes[bug_id] = 0
102
103 # A bug in the 'Invalid Bugs' product is definitely a spam bug.
104 elif bug_data["product"] == "Invalid Bugs":
105 classes[bug_id] = 1
106
107 print(
108 "{} bugs are classified as non-spam".format(
109 sum(1 for label in classes.values() if label == 0)
110 )
111 )
112 print(
113 "{} bugs are classified as spam".format(
114 sum(1 for label in classes.values() if label == 1)
115 )
116 )
117
118 return classes, [0, 1]
119
120 def items_gen(self, classes):
121 # Overwriting this method to add include_invalid=True to get_bugs to
122 # include spam bugs.
123 return (
124 (bug, classes[bug["id"]])
125 for bug in bugzilla.get_bugs(include_invalid=True)
126 if bug["id"] in classes
127 )
128
129 def get_feature_names(self):
130 return self.extraction_pipeline.named_steps["union"].get_feature_names_out()
131
132 def overwrite_classes(self, bugs, classes, probabilities):
133 for i, bug in enumerate(bugs):
134 if "@mozilla" in bug["creator"]:
135 if probabilities:
136 classes[i] = [1.0, 0.0]
137 else:
138 classes[i] = 0
139
140 return classes
141
[end of bugbug/models/spambug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/models/spambug.py b/bugbug/models/spambug.py
--- a/bugbug/models/spambug.py
+++ b/bugbug/models/spambug.py
@@ -41,6 +41,7 @@
bug_features.has_attachment(),
bug_features.platform(),
bug_features.op_sys(),
+ bug_features.filed_via(),
]
cleanup_functions = [
| {"golden_diff": "diff --git a/bugbug/models/spambug.py b/bugbug/models/spambug.py\n--- a/bugbug/models/spambug.py\n+++ b/bugbug/models/spambug.py\n@@ -41,6 +41,7 @@\n bug_features.has_attachment(),\n bug_features.platform(),\n bug_features.op_sys(),\n+ bug_features.filed_via(),\n ]\n \n cleanup_functions = [\n", "issue": "Use information on how a bug is filed as a feature\nThis could be especially useful for the Spam model.\r\n\r\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1565403\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.over_sampling import BorderlineSMOTE\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup, utils\nfrom bugbug.model import BugModel\n\n\nclass SpamBugModel(BugModel):\n def __init__(self, lemmatization=False):\n BugModel.__init__(self, lemmatization)\n\n self.sampler = BorderlineSMOTE(random_state=0)\n self.calculate_importance = False\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.whiteboard(),\n bug_features.product(),\n # TODO: We would like to use the component at the time of filing too,\n # but we can't because the rollback script doesn't support changes to\n # components yet.\n # bug_features.component(),\n bug_features.num_words_title(),\n bug_features.num_words_comments(),\n bug_features.keywords(),\n bug_features.priority(),\n bug_features.version(),\n bug_features.target_milestone(),\n bug_features.has_attachment(),\n bug_features.platform(),\n bug_features.op_sys(),\n ]\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"data\", DictVectorizer(), \"data\"),\n (\"title\", self.text_vectorizer(min_df=0.0001), \"title\"),\n (\n \"comments\",\n self.text_vectorizer(min_df=0.0001),\n \"comments\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs(include_invalid=True):\n bug_id = bug_data[\"id\"]\n\n # Skip bugs filed by Mozillians, since we are sure they are not spam.\n if \"@mozilla\" in bug_data[\"creator\"]:\n continue\n\n # A bug that was moved out of 'Invalid Bugs' is definitely a legitimate bug.\n for history in bug_data[\"history\"]:\n for change in history[\"changes\"]:\n if (\n change[\"field_name\"] == \"product\"\n and change[\"removed\"] == \"Invalid Bugs\"\n ):\n classes[bug_id] = 0\n\n # A fixed bug is definitely a legitimate bug.\n if bug_data[\"resolution\"] == \"FIXED\":\n classes[bug_id] = 0\n\n # A bug in the 'Invalid Bugs' product is definitely a spam bug.\n elif bug_data[\"product\"] == \"Invalid Bugs\":\n classes[bug_id] = 1\n\n print(\n \"{} bugs are classified as non-spam\".format(\n sum(1 for label in classes.values() if label == 0)\n )\n )\n 
print(\n \"{} bugs are classified as spam\".format(\n sum(1 for label in classes.values() if label == 1)\n )\n )\n\n return classes, [0, 1]\n\n def items_gen(self, classes):\n # Overwriting this method to add include_invalid=True to get_bugs to\n # include spam bugs.\n return (\n (bug, classes[bug[\"id\"]])\n for bug in bugzilla.get_bugs(include_invalid=True)\n if bug[\"id\"] in classes\n )\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names_out()\n\n def overwrite_classes(self, bugs, classes, probabilities):\n for i, bug in enumerate(bugs):\n if \"@mozilla\" in bug[\"creator\"]:\n if probabilities:\n classes[i] = [1.0, 0.0]\n else:\n classes[i] = 0\n\n return classes\n", "path": "bugbug/models/spambug.py"}]} | 1,866 | 90 |
gh_patches_debug_22536 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1860 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User configurable connection for mongo-based databases
So the pymongo client has a multitude of ways for connecting to different mongo services
For MongoDB Atlas users, for example, the connection string for Python clients is:
`mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/myFirstDatabase`
By making the mongo connection user-configurable we can support different kinds of mongo services instead of only asking for the basic connection arguments (port, username, password),
and we can give users an easier way to connect instead of making assumptions about the type of MongoDB deployment or the kinds of credentials they might have.
As long as the pymongo client accepts the connection and connects the user to the database and the collection they want I think this would be great!
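A sketch of how a config-driven URI could be assembled (option names such as `protocol` are assumptions here, not the connector's current settings):

```python
def build_mongo_uri(config: dict) -> str:
    # "mongodb" for a plain server, "mongodb+srv" for Atlas-style SRV records.
    protocol = config.get("protocol", "mongodb")
    host = config.get("host", "localhost")
    port = config.get("port")
    user = config.get("user")
    password = config.get("password")

    credentials = f"{user}:{password}@" if user and password else ""
    # SRV connection strings must not carry an explicit port.
    host_part = host if protocol == "mongodb+srv" or not port else f"{host}:{port}"
    return f"{protocol}://{credentials}{host_part}"

# build_mongo_uri({"protocol": "mongodb+srv", "host": "cluster0.mongodb.net",
#                  "user": "bot", "password": "secret"})
```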
Thanks again guys!
</issue>
<code>
[start of opsdroid/database/mongo/__init__.py]
1 # -*- coding: utf-8 -*-
2 """A module for opsdroid to allow persist in mongo database."""
3 import logging
4 from contextlib import asynccontextmanager
5 from motor.motor_asyncio import AsyncIOMotorClient
6 from voluptuous import Any
7
8 from opsdroid.database import Database
9
10 _LOGGER = logging.getLogger(__name__)
11 CONFIG_SCHEMA = {
12 "host": str,
13 "port": Any(int, str),
14 "database": str,
15 "user": str,
16 "password": str,
17 "collection": str,
18 }
19
20
21 class DatabaseMongo(Database):
22 """A module for opsdroid to allow memory to persist in a mongo database."""
23
24 def __init__(self, config, opsdroid=None):
25 """Create the connection.
26
27 Set some basic properties from the database config such as the name
28 of this database.
29
30 Args:
31 config (dict): The config for this database specified in the
32 `configuration.yaml` file.
33 opsdroid (OpsDroid): An instance of opsdroid.core.
34
35 """
36 super().__init__(config, opsdroid=opsdroid)
37 _LOGGER.debug("Loaded mongo database connector.")
38 self.name = "mongo"
39 self.config = config
40 self.client = None
41 self.database = None
42 self.collection = config.get("collection", "opsdroid")
43
44 async def connect(self):
45 """Connect to the database."""
46 host = self.config.get("host", "localhost")
47 port = self.config.get("port", "27017")
48 database = self.config.get("database", "opsdroid")
49 user = self.config.get("user")
50 pwd = self.config.get("password")
51 if user and pwd:
52 path = "mongodb://{user}:{pwd}@{host}:{port}".format(
53 user=user, pwd=pwd, host=host, port=port
54 )
55 else:
56 path = "mongodb://{host}:{port}".format(host=host, port=port)
57 self.client = AsyncIOMotorClient(path)
58 self.database = self.client[database]
59 _LOGGER.info("Connected to MongoDB.")
60
61 async def put(self, key, data):
62 """Insert or replace an object into the database for a given key.
63
64 Args:
65 key (str): the key is the document lookup key.
66 data (object): the data to be inserted or replaced
67
68 """
69 _LOGGER.debug("Putting %s into MongoDB collection %s", key, self.collection)
70
71 if isinstance(data, str):
72 data = {"value": data}
73 if "key" not in data:
74 data["key"] = key
75
76 return await self.database[self.collection].update_one(
77 {"key": data["key"]}, {"$set": data}, upsert=True
78 )
79
80 async def get(self, key):
81 """Get a document from the database (key).
82
83 Args:
84 key (str): the key is the document lookup key.
85
86 """
87 _LOGGER.debug("Getting %s from MongoDB collection %s", key, self.collection)
88
89 response = await self.database[self.collection].find_one(
90 {"$query": {"key": key}, "$orderby": {"$natural": -1}}
91 )
92 if response.keys() == {"_id", "key", "value"}:
93 response = response["value"]
94 return response
95
96 async def delete(self, key):
97 """Delete a document from the database (key).
98
99 Args:
100 key (str): the key is the document lookup key.
101
102 """
103 _LOGGER.debug("Deleting %s from MongoDB collection %s.", key, self.collection)
104
105 return await self.database[self.collection].delete_one({"key": key})
106
107 @asynccontextmanager
108 async def memory_in_collection(self, collection):
109 """Use the specified collection rather than the default."""
110 db_copy = DatabaseMongo(self.config, self.opsdroid)
111 try:
112 await db_copy.connect()
113 db_copy.collection = collection
114 yield db_copy
115 finally:
116 if db_copy.client:
117 db_copy.client.close()
118
[end of opsdroid/database/mongo/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/database/mongo/__init__.py b/opsdroid/database/mongo/__init__.py
--- a/opsdroid/database/mongo/__init__.py
+++ b/opsdroid/database/mongo/__init__.py
@@ -44,17 +44,18 @@
async def connect(self):
"""Connect to the database."""
host = self.config.get("host", "localhost")
+ protocol = self.config.get("protocol", "mongodb").replace("://", "")
port = self.config.get("port", "27017")
+ if port != "27017":
+ host = f"{host}:{port}"
database = self.config.get("database", "opsdroid")
user = self.config.get("user")
pwd = self.config.get("password")
if user and pwd:
- path = "mongodb://{user}:{pwd}@{host}:{port}".format(
- user=user, pwd=pwd, host=host, port=port
- )
+ self.db_url = f"{protocol}://{user}:{pwd}@{host}"
else:
- path = "mongodb://{host}:{port}".format(host=host, port=port)
- self.client = AsyncIOMotorClient(path)
+ self.db_url = f"{protocol}://{host}"
+ self.client = AsyncIOMotorClient(self.db_url)
self.database = self.client[database]
_LOGGER.info("Connected to MongoDB.")
| {"golden_diff": "diff --git a/opsdroid/database/mongo/__init__.py b/opsdroid/database/mongo/__init__.py\n--- a/opsdroid/database/mongo/__init__.py\n+++ b/opsdroid/database/mongo/__init__.py\n@@ -44,17 +44,18 @@\n async def connect(self):\n \"\"\"Connect to the database.\"\"\"\n host = self.config.get(\"host\", \"localhost\")\n+ protocol = self.config.get(\"protocol\", \"mongodb\").replace(\"://\", \"\")\n port = self.config.get(\"port\", \"27017\")\n+ if port != \"27017\":\n+ host = f\"{host}:{port}\"\n database = self.config.get(\"database\", \"opsdroid\")\n user = self.config.get(\"user\")\n pwd = self.config.get(\"password\")\n if user and pwd:\n- path = \"mongodb://{user}:{pwd}@{host}:{port}\".format(\n- user=user, pwd=pwd, host=host, port=port\n- )\n+ self.db_url = f\"{protocol}://{user}:{pwd}@{host}\"\n else:\n- path = \"mongodb://{host}:{port}\".format(host=host, port=port)\n- self.client = AsyncIOMotorClient(path)\n+ self.db_url = f\"{protocol}://{host}\"\n+ self.client = AsyncIOMotorClient(self.db_url)\n self.database = self.client[database]\n _LOGGER.info(\"Connected to MongoDB.\")\n", "issue": "User configurable connection for mongo-based databases\nSo the pymongo client has a multitude of ways for connecting to different mongo services\r\n\r\nSo for MongoDB Atlas users the connection string is given as such \r\nfor python connections to the mongo db atlas \r\n\r\n`mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/myFirstDatabase`\r\n\r\nIn making the mongo connection to be user configurable we can specify different types of mongo services versus\r\njust asking for the basic connection arguments like port, user name, pass, and also we can give users an easier way to connect versus making assumptions about the type of mongodb the kinds of credentials they might have. 
\r\n\r\nAs long as the pymongo client accepts the connection and connects the user to the database and the collection they want I think this would be great!\r\n\r\nThanks again guys!\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"A module for opsdroid to allow persist in mongo database.\"\"\"\nimport logging\nfrom contextlib import asynccontextmanager\nfrom motor.motor_asyncio import AsyncIOMotorClient\nfrom voluptuous import Any\n\nfrom opsdroid.database import Database\n\n_LOGGER = logging.getLogger(__name__)\nCONFIG_SCHEMA = {\n \"host\": str,\n \"port\": Any(int, str),\n \"database\": str,\n \"user\": str,\n \"password\": str,\n \"collection\": str,\n}\n\n\nclass DatabaseMongo(Database):\n \"\"\"A module for opsdroid to allow memory to persist in a mongo database.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connection.\n\n Set some basic properties from the database config such as the name\n of this database.\n\n Args:\n config (dict): The config for this database specified in the\n `configuration.yaml` file.\n opsdroid (OpsDroid): An instance of opsdroid.core.\n\n \"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(\"Loaded mongo database connector.\")\n self.name = \"mongo\"\n self.config = config\n self.client = None\n self.database = None\n self.collection = config.get(\"collection\", \"opsdroid\")\n\n async def connect(self):\n \"\"\"Connect to the database.\"\"\"\n host = self.config.get(\"host\", \"localhost\")\n port = self.config.get(\"port\", \"27017\")\n database = self.config.get(\"database\", \"opsdroid\")\n user = self.config.get(\"user\")\n pwd = self.config.get(\"password\")\n if user and pwd:\n path = \"mongodb://{user}:{pwd}@{host}:{port}\".format(\n user=user, pwd=pwd, host=host, port=port\n )\n else:\n path = \"mongodb://{host}:{port}\".format(host=host, port=port)\n self.client = AsyncIOMotorClient(path)\n self.database = self.client[database]\n _LOGGER.info(\"Connected to MongoDB.\")\n\n async def put(self, key, data):\n \"\"\"Insert or replace an object into the database for a given key.\n\n Args:\n key (str): the key is the document lookup key.\n data (object): the data to be inserted or replaced\n\n \"\"\"\n _LOGGER.debug(\"Putting %s into MongoDB collection %s\", key, self.collection)\n\n if isinstance(data, str):\n data = {\"value\": data}\n if \"key\" not in data:\n data[\"key\"] = key\n\n return await self.database[self.collection].update_one(\n {\"key\": data[\"key\"]}, {\"$set\": data}, upsert=True\n )\n\n async def get(self, key):\n \"\"\"Get a document from the database (key).\n\n Args:\n key (str): the key is the document lookup key.\n\n \"\"\"\n _LOGGER.debug(\"Getting %s from MongoDB collection %s\", key, self.collection)\n\n response = await self.database[self.collection].find_one(\n {\"$query\": {\"key\": key}, \"$orderby\": {\"$natural\": -1}}\n )\n if response.keys() == {\"_id\", \"key\", \"value\"}:\n response = response[\"value\"]\n return response\n\n async def delete(self, key):\n \"\"\"Delete a document from the database (key).\n\n Args:\n key (str): the key is the document lookup key.\n\n \"\"\"\n _LOGGER.debug(\"Deleting %s from MongoDB collection %s.\", key, self.collection)\n\n return await self.database[self.collection].delete_one({\"key\": key})\n\n @asynccontextmanager\n async def memory_in_collection(self, collection):\n \"\"\"Use the specified collection rather than the default.\"\"\"\n db_copy = DatabaseMongo(self.config, self.opsdroid)\n try:\n await 
db_copy.connect()\n db_copy.collection = collection\n yield db_copy\n finally:\n if db_copy.client:\n db_copy.client.close()\n", "path": "opsdroid/database/mongo/__init__.py"}]} | 1,832 | 326 |
gh_patches_debug_38160 | rasdani/github-patches | git_diff | archlinux__archinstall-238 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Look into enabling SMART for drives that support it
Something like `smartctl --smart=on --offlineauto=on --saveauto=on /dev/sda` where `archinstall.hardware.detectSmart()` finds drives that support it (to extend drive lifetime if possible).
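A rough sketch of what such a helper could look like (not existing archinstall code; it assumes `smartctl` from smartmontools is available on the ISO, and the string check only covers the typical SATA output):

```python
import glob
import subprocess

def detect_smart_drives():
    """Return block devices whose `smartctl -i` output reports SMART support."""
    drives = []
    for device in glob.glob('/dev/sd?') + glob.glob('/dev/nvme?n1'):
        result = subprocess.run(['smartctl', '-i', device],
                                capture_output=True, text=True)
        if 'SMART support is: Available' in result.stdout:
            drives.append(device)
    return drives

for device in detect_smart_drives():
    # The command suggested above, applied to each drive that supports SMART.
    subprocess.run(['smartctl', '--smart=on', '--offlineauto=on',
                    '--saveauto=on', device])
```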
</issue>
<code>
[start of profiles/desktop.py]
1 # A desktop environment selector.
2
3 import archinstall, os
4
5 is_top_level_profile = True
6
7 def _prep_function(*args, **kwargs):
8 """
9 Magic function called by the importing installer
10 before continuing any further. It also avoids executing any
11 other code in this stage. So it's a safe way to ask the user
12 for more input before any other installer steps start.
13 """
14
15 supported_desktops = ['gnome', 'kde', 'awesome']
16 desktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')
17
18 # Temporarily store the selected desktop profile
19 # in a session-safe location, since this module will get reloaded
20 # the next time it gets executed.
21 archinstall.storage['_desktop_profile'] = desktop
22
23 profile = archinstall.Profile(None, desktop)
24 # Loading the instructions with a custom namespace, ensures that a __name__ comparison is never triggered.
25 with profile.load_instructions(namespace=f"{desktop}.py") as imported:
26 if hasattr(imported, '_prep_function'):
27 return imported._prep_function()
28 else:
29 print(f"Deprecated (??): {desktop} profile has no _prep_function() anymore")
30
31 if __name__ == 'desktop':
32 """
33 This "profile" is a meta-profile.
34 There are no desktop-specific steps, it simply routes
35 the installer to whichever desktop environment/window manager was chosen.
36
37 Maybe in the future, a network manager or similar things *could* be added here.
38 We should honor that Arch Linux does not officially endorse a desktop-setup, nor is
39 it trying to be a turn-key desktop distribution.
40
41 There are plenty of desktop-turn-key-solutions based on Arch Linux,
42 this is therefore just a helper to get started
43 """
44
45 # TODO: Remove magic variable 'installation' and place it
46 # in archinstall.storage or archinstall.session/archinstall.installation
47 installation.install_profile(archinstall.storage['_desktop_profile'])
48
[end of profiles/desktop.py]
[start of profiles/awesome.py]
1 # A desktop environment using "Awesome" window manager.
2
3 import archinstall
4
5 is_top_level_profile = False
6
7 # New way of defining packages for a profile, which is iterable and can be used out side
8 # of the profile to get a list of "what packages will be installed".
9 __packages__ = ['nano', 'nemo', 'gpicview-gtk3', 'openssh', 'sshfs', 'htop', 'scrot', 'wget']
10
11 def _prep_function(*args, **kwargs):
12 """
13 Magic function called by the importing installer
14 before continuing any further. It also avoids executing any
15 other code in this stage. So it's a safe way to ask the user
16 for more input before any other installer steps start.
17 """
18
19 # Awesome WM requires that xorg is installed
20 profile = archinstall.Profile(None, 'xorg')
21 with profile.load_instructions(namespace='xorg.py') as imported:
22 if hasattr(imported, '_prep_function'):
23 return imported._prep_function()
24 else:
25 print('Deprecated (??): xorg profile has no _prep_function() anymore')
26
27
28 # Ensures that this code only gets executed if executed
29 # through importlib.util.spec_from_file_location("awesome", "/somewhere/awesome.py")
30 # or through conventional import awesome
31 if __name__ == 'awesome':
32 # Install the application awesome from the template under /applications/
33 awesome = archinstall.Application(installation, 'awesome')
34 awesome.install()
35
36 # Then setup and configure the desktop environment: awesome
37 editor = "nano"
38 filebrowser = "nemo gpicview-gtk3"
39 utils = "openssh sshfs htop scrot wget"
40
41
42 installation.add_additional_packages(f"{utils} {filebrowser} {editor}")
43
44 alacritty = archinstall.Application(installation, 'alacritty')
45 alacritty.install()
46
47 # TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.
48 with open(f'{installation.mountpoint}/etc/xdg/awesome/rc.lua', 'r') as fh:
49 awesome_lua = fh.read()
50
51 ## Replace xterm with alacritty for a smoother experience.
52 awesome_lua = awesome_lua.replace('"xterm"', '"alacritty"')
53
54 with open(f'{installation.mountpoint}/etc/xdg/awesome/rc.lua', 'w') as fh:
55 fh.write(awesome_lua)
56
57 ## TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)
58
59 ## Remove some interfering nemo settings
60 installation.arch_chroot("gsettings set org.nemo.desktop show-desktop-icons false")
61 installation.arch_chroot("xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-search")
62
[end of profiles/awesome.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/profiles/awesome.py b/profiles/awesome.py
--- a/profiles/awesome.py
+++ b/profiles/awesome.py
@@ -6,7 +6,7 @@
# New way of defining packages for a profile, which is iterable and can be used out side
# of the profile to get a list of "what packages will be installed".
-__packages__ = ['nano', 'nemo', 'gpicview-gtk3', 'openssh', 'sshfs', 'htop', 'scrot', 'wget']
+__packages__ = ['nemo', 'gpicview-gtk3', 'scrot']
def _prep_function(*args, **kwargs):
"""
@@ -33,13 +33,7 @@
awesome = archinstall.Application(installation, 'awesome')
awesome.install()
- # Then setup and configure the desktop environment: awesome
- editor = "nano"
- filebrowser = "nemo gpicview-gtk3"
- utils = "openssh sshfs htop scrot wget"
-
-
- installation.add_additional_packages(f"{utils} {filebrowser} {editor}")
+ installation.add_additional_packages(__packages__)
alacritty = archinstall.Application(installation, 'alacritty')
alacritty.install()
diff --git a/profiles/desktop.py b/profiles/desktop.py
--- a/profiles/desktop.py
+++ b/profiles/desktop.py
@@ -4,6 +4,10 @@
is_top_level_profile = True
+# New way of defining packages for a profile, which is iterable and can be used out side
+# of the profile to get a list of "what packages will be installed".
+__packages__ = ['nano', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']
+
def _prep_function(*args, **kwargs):
"""
Magic function called by the importing installer
@@ -14,7 +18,7 @@
supported_desktops = ['gnome', 'kde', 'awesome']
desktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')
-
+
# Temporarily store the selected desktop profile
# in a session-safe location, since this module will get reloaded
# the next time it gets executed.
@@ -41,7 +45,11 @@
There are plenty of desktop-turn-key-solutions based on Arch Linux,
this is therefore just a helper to get started
"""
+
+ # Install common packages for all desktop environments
+ installation.add_additional_packages(__packages__)
# TODO: Remove magic variable 'installation' and place it
# in archinstall.storage or archinstall.session/archinstall.installation
installation.install_profile(archinstall.storage['_desktop_profile'])
+
| {"golden_diff": "diff --git a/profiles/awesome.py b/profiles/awesome.py\n--- a/profiles/awesome.py\n+++ b/profiles/awesome.py\n@@ -6,7 +6,7 @@\n \n # New way of defining packages for a profile, which is iterable and can be used out side\n # of the profile to get a list of \"what packages will be installed\".\n-__packages__ = ['nano', 'nemo', 'gpicview-gtk3', 'openssh', 'sshfs', 'htop', 'scrot', 'wget']\n+__packages__ = ['nemo', 'gpicview-gtk3', 'scrot']\n \n def _prep_function(*args, **kwargs):\n \t\"\"\"\n@@ -33,13 +33,7 @@\n \tawesome = archinstall.Application(installation, 'awesome')\n \tawesome.install()\n \n-\t# Then setup and configure the desktop environment: awesome\n-\teditor = \"nano\"\n-\tfilebrowser = \"nemo gpicview-gtk3\"\n-\tutils = \"openssh sshfs htop scrot wget\"\n-\n-\n-\tinstallation.add_additional_packages(f\"{utils} {filebrowser} {editor}\")\n+\tinstallation.add_additional_packages(__packages__)\n \n \talacritty = archinstall.Application(installation, 'alacritty')\n \talacritty.install()\ndiff --git a/profiles/desktop.py b/profiles/desktop.py\n--- a/profiles/desktop.py\n+++ b/profiles/desktop.py\n@@ -4,6 +4,10 @@\n \n is_top_level_profile = True\n \n+# New way of defining packages for a profile, which is iterable and can be used out side\n+# of the profile to get a list of \"what packages will be installed\".\n+__packages__ = ['nano', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']\n+\n def _prep_function(*args, **kwargs):\n \t\"\"\"\n \tMagic function called by the importing installer\n@@ -14,7 +18,7 @@\n \n \tsupported_desktops = ['gnome', 'kde', 'awesome']\n \tdesktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')\n-\n+\t\n \t# Temporarily store the selected desktop profile\n \t# in a session-safe location, since this module will get reloaded\n \t# the next time it gets executed.\n@@ -41,7 +45,11 @@\n \tThere are plenty of desktop-turn-key-solutions based on Arch Linux,\n \tthis is therefore just a helper to get started\n \t\"\"\"\n+\t\n+\t# Install common packages for all desktop environments\n+\tinstallation.add_additional_packages(__packages__)\n \n \t# TODO: Remove magic variable 'installation' and place it\n \t# in archinstall.storage or archinstall.session/archinstall.installation\n \tinstallation.install_profile(archinstall.storage['_desktop_profile'])\n+\n", "issue": "Look in to enabling SMART for drives that support it\nSomething like `smartctl --smart=on --offlineauto=on --saveauto=on /dev/sda` where `archinstall.hardware.detectSmart()` finds drives that support it (to extend drive lifetime if possible).\n", "before_files": [{"content": "# A desktop environment selector.\n\nimport archinstall, os\n\nis_top_level_profile = True\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tsupported_desktops = ['gnome', 'kde', 'awesome']\n\tdesktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')\n\n\t# Temporarily store the selected desktop profile\n\t# in a session-safe location, since this module will get reloaded\n\t# the next time it gets executed.\n\tarchinstall.storage['_desktop_profile'] = desktop\n\n\tprofile = archinstall.Profile(None, desktop)\n\t# Loading the instructions with a custom namespace, ensures that a __name__ comparison is never triggered.\n\twith profile.load_instructions(namespace=f\"{desktop}.py\") as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint(f\"Deprecated (??): {desktop} profile has no _prep_function() anymore\")\n\nif __name__ == 'desktop':\n\t\"\"\"\n\tThis \"profile\" is a meta-profile.\n\tThere are no desktop-specific steps, it simply routes\n\tthe installer to whichever desktop environment/window manager was chosen.\n\n\tMaybe in the future, a network manager or similar things *could* be added here.\n\tWe should honor that Arch Linux does not officially endorse a desktop-setup, nor is\n\tit trying to be a turn-key desktop distribution.\n\n\tThere are plenty of desktop-turn-key-solutions based on Arch Linux,\n\tthis is therefore just a helper to get started\n\t\"\"\"\n\n\t# TODO: Remove magic variable 'installation' and place it\n\t# in archinstall.storage or archinstall.session/archinstall.installation\n\tinstallation.install_profile(archinstall.storage['_desktop_profile'])\n", "path": "profiles/desktop.py"}, {"content": "# A desktop environment using \"Awesome\" window manager.\n\nimport archinstall\n\nis_top_level_profile = False\n\n# New way of defining packages for a profile, which is iterable and can be used out side\n# of the profile to get a list of \"what packages will be installed\".\n__packages__ = ['nano', 'nemo', 'gpicview-gtk3', 'openssh', 'sshfs', 'htop', 'scrot', 'wget']\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# Awesome WM requires that xorg is installed\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"awesome\", \"/somewhere/awesome.py\")\n# or through conventional import awesome\nif __name__ == 'awesome':\n\t# Install the application awesome from the template under /applications/\n\tawesome = archinstall.Application(installation, 'awesome')\n\tawesome.install()\n\n\t# Then setup and configure the desktop environment: awesome\n\teditor = \"nano\"\n\tfilebrowser = \"nemo gpicview-gtk3\"\n\tutils = \"openssh sshfs htop scrot wget\"\n\n\n\tinstallation.add_additional_packages(f\"{utils} {filebrowser} {editor}\")\n\n\talacritty = archinstall.Application(installation, 'alacritty')\n\talacritty.install()\n\n\t# TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.\n\twith open(f'{installation.mountpoint}/etc/xdg/awesome/rc.lua', 'r') as fh:\n\t\tawesome_lua = fh.read()\n\n\t## Replace xterm with alacritty for a smoother experience.\n\tawesome_lua = awesome_lua.replace('\"xterm\"', '\"alacritty\"')\n\n\twith open(f'{installation.mountpoint}/etc/xdg/awesome/rc.lua', 'w') as fh:\n\t\tfh.write(awesome_lua)\n\n\t## TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)\n\t\n\t## Remove some interfering nemo settings\n\tinstallation.arch_chroot(\"gsettings set org.nemo.desktop show-desktop-icons false\")\n\tinstallation.arch_chroot(\"xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-search\")\n", "path": "profiles/awesome.py"}]} | 1,874 | 641 |
gh_patches_debug_6465 | rasdani/github-patches | git_diff | feast-dev__feast-3766 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feast ui cannot parse url path
## Expected Behavior
One of example cases:
When a user navigates to localhost:8888/p/order_count_project/feature-view/user_3_and_7_days_order_count, they should see the related feature-view page.
## Current Behavior
One of example cases:
When a user navigates to localhost:8888/p/order_count_project/feature-view/user_3_and_7_days_order_count, they see "Internal Server Error" instead.
## Steps to reproduce
Install feast 0.34.1.
Run `feast ui`.
Navigate to the homepage at localhost:8888.
Navigate to any page (entities, feature views, or data sources; any of them works).
The URL of the page you clicked appears in the browser address bar, e.g. http://localhost:8888/p/order_count_project/data-source.
Refresh the page, or copy the URL and open it in a new tab.
You will see an internal server error.
### Specifications
- Version: 0.34.1
- Platform: macos
- Subsystem:
## Possible Solution
The ui_server.py file was updated recently: the commit switched to the resource-finder library (importlib_resources), which now returns a PosixPath instead of a str.
We should convert it to str (adding the missing "/") or join the path properly inside the "@app.api_route("/p/{path_name:path}", methods=["GET"])" handler. A rough sketch of what I mean is below.
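For illustration, an untested sketch of the kind of change I have in mind, reusing the existing `ui_dir` and `catch_all` names from ui_server.py (the joinpath variant is just one option):

```python
# Inside get_app(), where ui_dir is the PosixPath yielded by
# importlib_resources.as_file(ui_dir_ref):
@app.api_route("/p/{path_name:path}", methods=["GET"])
def catch_all():
    # was: filename = ui_dir + "index.html"  (Path + str raises TypeError -> 500)
    filename = ui_dir.joinpath("index.html")  # or: str(ui_dir) + "/index.html"

    with open(filename) as f:
        content = f.read()

    return Response(content, media_type="text/html")
```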
</issue>
<code>
[start of sdk/python/feast/ui_server.py]
1 import json
2 import threading
3 from typing import Callable, Optional
4
5 import importlib_resources
6 import uvicorn
7 from fastapi import FastAPI, Response
8 from fastapi.middleware.cors import CORSMiddleware
9 from fastapi.staticfiles import StaticFiles
10
11 import feast
12
13
14 def get_app(
15 store: "feast.FeatureStore",
16 project_id: str,
17 registry_ttl_secs: int,
18 root_path: str = "",
19 ):
20 app = FastAPI()
21
22 app.add_middleware(
23 CORSMiddleware,
24 allow_origins=["*"],
25 allow_credentials=True,
26 allow_methods=["*"],
27 allow_headers=["*"],
28 )
29
30 # Asynchronously refresh registry, notifying shutdown and canceling the active timer if the app is shutting down
31 registry_proto = None
32 shutting_down = False
33 active_timer: Optional[threading.Timer] = None
34
35 def async_refresh():
36 store.refresh_registry()
37 nonlocal registry_proto
38 registry_proto = store.registry.proto()
39 if shutting_down:
40 return
41 nonlocal active_timer
42 active_timer = threading.Timer(registry_ttl_secs, async_refresh)
43 active_timer.start()
44
45 @app.on_event("shutdown")
46 def shutdown_event():
47 nonlocal shutting_down
48 shutting_down = True
49 if active_timer:
50 active_timer.cancel()
51
52 async_refresh()
53
54 ui_dir_ref = importlib_resources.files(__name__) / "ui/build/"
55 with importlib_resources.as_file(ui_dir_ref) as ui_dir:
56 # Initialize with the projects-list.json file
57 with ui_dir.joinpath("projects-list.json").open(mode="w") as f:
58 projects_dict = {
59 "projects": [
60 {
61 "name": "Project",
62 "description": "Test project",
63 "id": project_id,
64 "registryPath": f"{root_path}/registry",
65 }
66 ]
67 }
68 f.write(json.dumps(projects_dict))
69
70 @app.get("/registry")
71 def read_registry():
72 return Response(
73 content=registry_proto.SerializeToString(),
74 media_type="application/octet-stream",
75 )
76
77 # For all other paths (such as paths that would otherwise be handled by react router), pass to React
78 @app.api_route("/p/{path_name:path}", methods=["GET"])
79 def catch_all():
80 filename = ui_dir + "index.html"
81
82 with open(filename) as f:
83 content = f.read()
84
85 return Response(content, media_type="text/html")
86
87 app.mount(
88 "/",
89 StaticFiles(directory=ui_dir, html=True),
90 name="site",
91 )
92
93 return app
94
95
96 def start_server(
97 store: "feast.FeatureStore",
98 host: str,
99 port: int,
100 get_registry_dump: Callable,
101 project_id: str,
102 registry_ttl_sec: int,
103 root_path: str = "",
104 ):
105 app = get_app(
106 store,
107 project_id,
108 registry_ttl_sec,
109 root_path,
110 )
111 uvicorn.run(app, host=host, port=port)
112
[end of sdk/python/feast/ui_server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/feast/ui_server.py b/sdk/python/feast/ui_server.py
--- a/sdk/python/feast/ui_server.py
+++ b/sdk/python/feast/ui_server.py
@@ -77,7 +77,7 @@
# For all other paths (such as paths that would otherwise be handled by react router), pass to React
@app.api_route("/p/{path_name:path}", methods=["GET"])
def catch_all():
- filename = ui_dir + "index.html"
+ filename = ui_dir.joinpath("index.html")
with open(filename) as f:
content = f.read()
| {"golden_diff": "diff --git a/sdk/python/feast/ui_server.py b/sdk/python/feast/ui_server.py\n--- a/sdk/python/feast/ui_server.py\n+++ b/sdk/python/feast/ui_server.py\n@@ -77,7 +77,7 @@\n # For all other paths (such as paths that would otherwise be handled by react router), pass to React\n @app.api_route(\"/p/{path_name:path}\", methods=[\"GET\"])\n def catch_all():\n- filename = ui_dir + \"index.html\"\n+ filename = ui_dir.joinpath(\"index.html\")\n \n with open(filename) as f:\n content = f.read()\n", "issue": "Feast ui cannot parse url path\n## Expected Behavior \r\n\r\nOne of example cases:\r\nWhen user navigate localhost:8888/p/order_count_project/feature-view/user_3_and_7_days_order_count should see related feature-view page\r\n\r\n## Current Behavior\r\n\r\nOne of example cases:\r\nWhen user navigate localhost:8888/p/order_count_project/feature-view/user_3_and_7_days_order_count see \"Internal Server Error\"\r\n\r\n## Steps to reproduce\r\n\r\ninstall feast 0.34.1\r\nrun feast ui\r\nnavigate homepage localhost:8888\r\nnavigate any page (entities or feature-view or data sources doesn't matter)\r\nyou will see the page you clicked at browser search bar like http://localhost:8888/p/order_count_project/data-source \r\nthen refresh or copy url open in new tab\r\nyou will see internal server error\r\n\r\n### Specifications\r\n\r\n- Version: 0.34.1\r\n- Platform: macos\r\n- Subsystem: \r\n\r\n## Possible Solution\r\n\r\nui_server.py file updated recently. commit changes resource finder library and then it returns PosixPath. \r\nWe should convert to str and add little \"/\" to \"@app.api_route(\"/p/{path_name:path}\", methods=[\"GET\"])\" function\r\n\r\n\n", "before_files": [{"content": "import json\nimport threading\nfrom typing import Callable, Optional\n\nimport importlib_resources\nimport uvicorn\nfrom fastapi import FastAPI, Response\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom fastapi.staticfiles import StaticFiles\n\nimport feast\n\n\ndef get_app(\n store: \"feast.FeatureStore\",\n project_id: str,\n registry_ttl_secs: int,\n root_path: str = \"\",\n):\n app = FastAPI()\n\n app.add_middleware(\n CORSMiddleware,\n allow_origins=[\"*\"],\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n # Asynchronously refresh registry, notifying shutdown and canceling the active timer if the app is shutting down\n registry_proto = None\n shutting_down = False\n active_timer: Optional[threading.Timer] = None\n\n def async_refresh():\n store.refresh_registry()\n nonlocal registry_proto\n registry_proto = store.registry.proto()\n if shutting_down:\n return\n nonlocal active_timer\n active_timer = threading.Timer(registry_ttl_secs, async_refresh)\n active_timer.start()\n\n @app.on_event(\"shutdown\")\n def shutdown_event():\n nonlocal shutting_down\n shutting_down = True\n if active_timer:\n active_timer.cancel()\n\n async_refresh()\n\n ui_dir_ref = importlib_resources.files(__name__) / \"ui/build/\"\n with importlib_resources.as_file(ui_dir_ref) as ui_dir:\n # Initialize with the projects-list.json file\n with ui_dir.joinpath(\"projects-list.json\").open(mode=\"w\") as f:\n projects_dict = {\n \"projects\": [\n {\n \"name\": \"Project\",\n \"description\": \"Test project\",\n \"id\": project_id,\n \"registryPath\": f\"{root_path}/registry\",\n }\n ]\n }\n f.write(json.dumps(projects_dict))\n\n @app.get(\"/registry\")\n def read_registry():\n return Response(\n content=registry_proto.SerializeToString(),\n media_type=\"application/octet-stream\",\n )\n\n # 
For all other paths (such as paths that would otherwise be handled by react router), pass to React\n @app.api_route(\"/p/{path_name:path}\", methods=[\"GET\"])\n def catch_all():\n filename = ui_dir + \"index.html\"\n\n with open(filename) as f:\n content = f.read()\n\n return Response(content, media_type=\"text/html\")\n\n app.mount(\n \"/\",\n StaticFiles(directory=ui_dir, html=True),\n name=\"site\",\n )\n\n return app\n\n\ndef start_server(\n store: \"feast.FeatureStore\",\n host: str,\n port: int,\n get_registry_dump: Callable,\n project_id: str,\n registry_ttl_sec: int,\n root_path: str = \"\",\n):\n app = get_app(\n store,\n project_id,\n registry_ttl_sec,\n root_path,\n )\n uvicorn.run(app, host=host, port=port)\n", "path": "sdk/python/feast/ui_server.py"}]} | 1,670 | 139 |
gh_patches_debug_37912 | rasdani/github-patches | git_diff | tournesol-app__tournesol-155 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Count ratings appropriately
If a contributor rates A versus B on 9 quality criteria, this should count as 9 ratings.
The home page statistics should reflect this, and not the number of times a contributor rated A versus B :)
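As a purely illustrative, untested sketch (model and field names are taken from backend/backend/api_v2/statistics.py; the exact filter, for example whether zero-weight criteria should count, is an assumption):

```python
from backend.models import ExpertRating
from backend.rating_fields import VIDEO_FIELDS

# Count one rating per (comparison, quality criterion) pair instead of one per
# ExpertRating row: a comparison rated on all 9 criteria contributes 9 here.
total_ratings = 0
for field in VIDEO_FIELDS:
    total_ratings += ExpertRating.objects.filter(
        **{f"{field}__isnull": False}
    ).count()
```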
</issue>
<code>
[start of backend/backend/api_v2/statistics.py]
1 from backend.models import ExpertRating, Video, UserInformation
2 from drf_spectacular.utils import extend_schema
3 from rest_framework import serializers
4 from rest_framework import viewsets
5 from rest_framework.decorators import action
6 from rest_framework.permissions import IsAuthenticatedOrReadOnly
7 from rest_framework.response import Response
8 from backend.rating_fields import VIDEO_FIELDS
9 from django.db.models import Min, Max, F, Q
10 from backend.api_v2.helpers import WithPKOverflowProtection
11 import datetime
12 from django.utils.timezone import make_aware
13
14
15 class StatisticsSerializerV2(serializers.Serializer):
16 """Serialize statistics for the website."""
17 certified_experts = serializers.IntegerField(
18 help_text="Number of experts with certified e-mails")
19 total_experts = serializers.IntegerField(
20 help_text="Number of all experts")
21 pairwise_comparisons = serializers.IntegerField(
22 help_text="Total number of pairwise comparisons")
23 videos = serializers.IntegerField(
24 help_text="Total number of videos in the database")
25 min_score = serializers.FloatField(
26 help_text="Minimal aggregated score over all videos and features")
27 max_score = serializers.FloatField(
28 help_text="Maximal aggregated score over all videos and features")
29 weekly_active_ratings = serializers.IntegerField(
30 help_text="Number of ratings added within a week")
31 n_rated_videos = serializers.IntegerField(
32 help_text="Total number of videos with ratings")
33
34
35 class StatisticsViewSetV2(viewsets.ViewSet, WithPKOverflowProtection):
36 """Show website statistics."""
37 serializer_class = StatisticsSerializerV2
38 permission_classes = [IsAuthenticatedOrReadOnly]
39
40 # need a list, otherwise router will not register this viewset
41 @extend_schema(exclude=True, responses={
42 200: StatisticsSerializerV2(
43 many=True),
44 400: None})
45 def list(self, request):
46 return Response({})
47
48 @extend_schema(
49 responses={
50 200: StatisticsSerializerV2(
51 many=False)},
52 operation_id="view")
53 @action(methods=['GET'], detail=False)
54 def view(self, request):
55 """Get statistics for the website."""
56 minmax_scores = \
57 Video.objects.aggregate(**{'max_' + f: Max(F(f)) for f in VIDEO_FIELDS},
58 **{'min_' + f: Min(F(f)) for f in VIDEO_FIELDS})
59
60 try:
61 min_score = min([v for k, v in minmax_scores.items() if k.startswith('min')])
62 max_score = max([v for k, v in minmax_scores.items() if k.startswith('max')])
63 except Exception:
64 min_score = 0.0
65 max_score = 0.0
66
67 date_week_ago = make_aware(datetime.datetime.now()) - datetime.timedelta(days=7)
68
69 data = {'certified_experts': UserInformation.
70 _annotate_is_certified(UserInformation.objects.all())
71 .filter(_is_certified=1, user__is_active=True).count(),
72 'pairwise_comparisons': ExpertRating.objects.all().count(),
73 'videos': Video.objects.all().count(),
74 'min_score': min_score,
75 'max_score': max_score,
76 'total_experts': UserInformation.objects.filter(is_demo=False).count(),
77 'weekly_active_ratings': ExpertRating.objects.filter(
78 datetime_lastedit__gte=date_week_ago).count(),
79 'n_rated_videos': Video.objects.exclude(Q(expertrating_video_1__id=None) &
80 Q(expertrating_video_2__id=None)
81 ).distinct().count()
82 }
83
84 return Response(StatisticsSerializerV2(data, many=False).data)
85
[end of backend/backend/api_v2/statistics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/backend/api_v2/statistics.py b/backend/backend/api_v2/statistics.py
--- a/backend/backend/api_v2/statistics.py
+++ b/backend/backend/api_v2/statistics.py
@@ -12,24 +12,35 @@
from django.utils.timezone import make_aware
-class StatisticsSerializerV2(serializers.Serializer):
- """Serialize statistics for the website."""
- certified_experts = serializers.IntegerField(
- help_text="Number of experts with certified e-mails")
- total_experts = serializers.IntegerField(
- help_text="Number of all experts")
- pairwise_comparisons = serializers.IntegerField(
- help_text="Total number of pairwise comparisons")
- videos = serializers.IntegerField(
- help_text="Total number of videos in the database")
- min_score = serializers.FloatField(
- help_text="Minimal aggregated score over all videos and features")
- max_score = serializers.FloatField(
- help_text="Maximal aggregated score over all videos and features")
- weekly_active_ratings = serializers.IntegerField(
- help_text="Number of ratings added within a week")
- n_rated_videos = serializers.IntegerField(
- help_text="Total number of videos with ratings")
+StatisticsSerializerV2 = type(
+ 'StatisticsSerializerV2', (serializers.Serializer,),
+ {**dict(
+ __doc__="""Serialize statistics for the website.""",
+ certified_experts=serializers.IntegerField(
+ help_text="Number of experts with certified e-mails"),
+ total_experts=serializers.IntegerField(
+ help_text="Number of all experts"),
+ pairwise_comparisons=serializers.IntegerField(
+ help_text="Total number of pairwise comparisons"),
+ videos=serializers.IntegerField(
+ help_text="Total number of videos in the database"),
+ min_score=serializers.FloatField(
+ help_text="Minimal aggregated score over all videos and features"),
+ max_score=serializers.FloatField(
+ help_text="Maximal aggregated score over all videos and features"),
+ weekly_active_ratings=serializers.IntegerField(
+ help_text="Number of ratings added within a week"),
+ n_rated_videos=serializers.IntegerField(
+ help_text="Total number of videos with ratings"),
+
+ n_sum_comparisons=serializers.IntegerField(
+ help_text="Sum of all numbers of comparisons for all features"),
+ ),
+ **{f"n_{f}_comparisons": serializers.IntegerField(
+ help_text=f"Number of comparisons for {f}")
+ for f in VIDEO_FIELDS}
+ }
+)
class StatisticsViewSetV2(viewsets.ViewSet, WithPKOverflowProtection):
@@ -81,4 +92,13 @@
).distinct().count()
}
+ n_sum_comparisons = 0
+ for f in VIDEO_FIELDS:
+ val = ExpertRating.objects.filter(**{
+ f + '__isnull': False, f + '_weight__gt': 0}).distinct().count()
+ data[f"n_{f}_comparisons"] = val
+ n_sum_comparisons += val
+
+ data["n_sum_comparisons"] = n_sum_comparisons
+
return Response(StatisticsSerializerV2(data, many=False).data)
| {"golden_diff": "diff --git a/backend/backend/api_v2/statistics.py b/backend/backend/api_v2/statistics.py\n--- a/backend/backend/api_v2/statistics.py\n+++ b/backend/backend/api_v2/statistics.py\n@@ -12,24 +12,35 @@\n from django.utils.timezone import make_aware\r\n \r\n \r\n-class StatisticsSerializerV2(serializers.Serializer):\r\n- \"\"\"Serialize statistics for the website.\"\"\"\r\n- certified_experts = serializers.IntegerField(\r\n- help_text=\"Number of experts with certified e-mails\")\r\n- total_experts = serializers.IntegerField(\r\n- help_text=\"Number of all experts\")\r\n- pairwise_comparisons = serializers.IntegerField(\r\n- help_text=\"Total number of pairwise comparisons\")\r\n- videos = serializers.IntegerField(\r\n- help_text=\"Total number of videos in the database\")\r\n- min_score = serializers.FloatField(\r\n- help_text=\"Minimal aggregated score over all videos and features\")\r\n- max_score = serializers.FloatField(\r\n- help_text=\"Maximal aggregated score over all videos and features\")\r\n- weekly_active_ratings = serializers.IntegerField(\r\n- help_text=\"Number of ratings added within a week\")\r\n- n_rated_videos = serializers.IntegerField(\r\n- help_text=\"Total number of videos with ratings\")\r\n+StatisticsSerializerV2 = type(\r\n+ 'StatisticsSerializerV2', (serializers.Serializer,),\r\n+ {**dict(\r\n+ __doc__=\"\"\"Serialize statistics for the website.\"\"\",\r\n+ certified_experts=serializers.IntegerField(\r\n+ help_text=\"Number of experts with certified e-mails\"),\r\n+ total_experts=serializers.IntegerField(\r\n+ help_text=\"Number of all experts\"),\r\n+ pairwise_comparisons=serializers.IntegerField(\r\n+ help_text=\"Total number of pairwise comparisons\"),\r\n+ videos=serializers.IntegerField(\r\n+ help_text=\"Total number of videos in the database\"),\r\n+ min_score=serializers.FloatField(\r\n+ help_text=\"Minimal aggregated score over all videos and features\"),\r\n+ max_score=serializers.FloatField(\r\n+ help_text=\"Maximal aggregated score over all videos and features\"),\r\n+ weekly_active_ratings=serializers.IntegerField(\r\n+ help_text=\"Number of ratings added within a week\"),\r\n+ n_rated_videos=serializers.IntegerField(\r\n+ help_text=\"Total number of videos with ratings\"),\r\n+\r\n+ n_sum_comparisons=serializers.IntegerField(\r\n+ help_text=\"Sum of all numbers of comparisons for all features\"),\r\n+ ),\r\n+ **{f\"n_{f}_comparisons\": serializers.IntegerField(\r\n+ help_text=f\"Number of comparisons for {f}\")\r\n+ for f in VIDEO_FIELDS}\r\n+ }\r\n+)\r\n \r\n \r\n class StatisticsViewSetV2(viewsets.ViewSet, WithPKOverflowProtection):\r\n@@ -81,4 +92,13 @@\n ).distinct().count()\r\n }\r\n \r\n+ n_sum_comparisons = 0\r\n+ for f in VIDEO_FIELDS:\r\n+ val = ExpertRating.objects.filter(**{\r\n+ f + '__isnull': False, f + '_weight__gt': 0}).distinct().count()\r\n+ data[f\"n_{f}_comparisons\"] = val\r\n+ n_sum_comparisons += val\r\n+\r\n+ data[\"n_sum_comparisons\"] = n_sum_comparisons\r\n+\r\n return Response(StatisticsSerializerV2(data, many=False).data)\n", "issue": "Count ratings appropriately\nIf a contributor rates A versus B on 9 quality criteria, this should count as 9 ratings.\r\nThe home page statistics should reflect this, on not the number of times a contributor rated A versus B :)\n", "before_files": [{"content": "from backend.models import ExpertRating, Video, UserInformation\r\nfrom drf_spectacular.utils import extend_schema\r\nfrom rest_framework import serializers\r\nfrom rest_framework import viewsets\r\nfrom 
rest_framework.decorators import action\r\nfrom rest_framework.permissions import IsAuthenticatedOrReadOnly\r\nfrom rest_framework.response import Response\r\nfrom backend.rating_fields import VIDEO_FIELDS\r\nfrom django.db.models import Min, Max, F, Q\r\nfrom backend.api_v2.helpers import WithPKOverflowProtection\r\nimport datetime\r\nfrom django.utils.timezone import make_aware\r\n\r\n\r\nclass StatisticsSerializerV2(serializers.Serializer):\r\n \"\"\"Serialize statistics for the website.\"\"\"\r\n certified_experts = serializers.IntegerField(\r\n help_text=\"Number of experts with certified e-mails\")\r\n total_experts = serializers.IntegerField(\r\n help_text=\"Number of all experts\")\r\n pairwise_comparisons = serializers.IntegerField(\r\n help_text=\"Total number of pairwise comparisons\")\r\n videos = serializers.IntegerField(\r\n help_text=\"Total number of videos in the database\")\r\n min_score = serializers.FloatField(\r\n help_text=\"Minimal aggregated score over all videos and features\")\r\n max_score = serializers.FloatField(\r\n help_text=\"Maximal aggregated score over all videos and features\")\r\n weekly_active_ratings = serializers.IntegerField(\r\n help_text=\"Number of ratings added within a week\")\r\n n_rated_videos = serializers.IntegerField(\r\n help_text=\"Total number of videos with ratings\")\r\n\r\n\r\nclass StatisticsViewSetV2(viewsets.ViewSet, WithPKOverflowProtection):\r\n \"\"\"Show website statistics.\"\"\"\r\n serializer_class = StatisticsSerializerV2\r\n permission_classes = [IsAuthenticatedOrReadOnly]\r\n\r\n # need a list, otherwise router will not register this viewset\r\n @extend_schema(exclude=True, responses={\r\n 200: StatisticsSerializerV2(\r\n many=True),\r\n 400: None})\r\n def list(self, request):\r\n return Response({})\r\n\r\n @extend_schema(\r\n responses={\r\n 200: StatisticsSerializerV2(\r\n many=False)},\r\n operation_id=\"view\")\r\n @action(methods=['GET'], detail=False)\r\n def view(self, request):\r\n \"\"\"Get statistics for the website.\"\"\"\r\n minmax_scores = \\\r\n Video.objects.aggregate(**{'max_' + f: Max(F(f)) for f in VIDEO_FIELDS},\r\n **{'min_' + f: Min(F(f)) for f in VIDEO_FIELDS})\r\n\r\n try:\r\n min_score = min([v for k, v in minmax_scores.items() if k.startswith('min')])\r\n max_score = max([v for k, v in minmax_scores.items() if k.startswith('max')])\r\n except Exception:\r\n min_score = 0.0\r\n max_score = 0.0\r\n\r\n date_week_ago = make_aware(datetime.datetime.now()) - datetime.timedelta(days=7)\r\n\r\n data = {'certified_experts': UserInformation.\r\n _annotate_is_certified(UserInformation.objects.all())\r\n .filter(_is_certified=1, user__is_active=True).count(),\r\n 'pairwise_comparisons': ExpertRating.objects.all().count(),\r\n 'videos': Video.objects.all().count(),\r\n 'min_score': min_score,\r\n 'max_score': max_score,\r\n 'total_experts': UserInformation.objects.filter(is_demo=False).count(),\r\n 'weekly_active_ratings': ExpertRating.objects.filter(\r\n datetime_lastedit__gte=date_week_ago).count(),\r\n 'n_rated_videos': Video.objects.exclude(Q(expertrating_video_1__id=None) &\r\n Q(expertrating_video_2__id=None)\r\n ).distinct().count()\r\n }\r\n\r\n return Response(StatisticsSerializerV2(data, many=False).data)\r\n", "path": "backend/backend/api_v2/statistics.py"}]} | 1,516 | 706 |
gh_patches_debug_41799 | rasdani/github-patches | git_diff | mindee__doctr-369 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[demo] Improve UI for OCR result display
For very dense documents, since the predicted text value is plotted statically, there can be some readability issues. We should try to improve this.
</issue>
<code>
[start of demo/app.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import os
7 import streamlit as st
8 import matplotlib.pyplot as plt
9
10 os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
11
12 import tensorflow as tf
13 import cv2
14
15 gpu_devices = tf.config.experimental.list_physical_devices('GPU')
16 if any(gpu_devices):
17 tf.config.experimental.set_memory_growth(gpu_devices[0], True)
18
19 from doctr.documents import DocumentFile
20 from doctr.models import ocr_predictor
21 from doctr.utils.visualization import synthetize_page, visualize_page
22
23 DET_ARCHS = ["db_resnet50"]
24 RECO_ARCHS = ["crnn_vgg16_bn", "crnn_resnet31", "sar_vgg16_bn", "sar_resnet31"]
25
26
27 def main():
28
29 # Wide mode
30 st.set_page_config(layout="wide")
31
32 # Designing the interface
33 st.title("DocTR: Document Text Recognition")
34 # For newline
35 st.write('\n')
36 # Set the columns
37 cols = st.beta_columns((1, 1))
38 cols[0].subheader("Input document (first page)")
39 cols[1].subheader("Raw heatmap (segmentation task)")
40
41 # Sidebar
42 # File selection
43 st.sidebar.title("Document selection")
44 # Disabling warning
45 st.set_option('deprecation.showfileUploaderEncoding', False)
46 # Choose your own image
47 uploaded_file = st.sidebar.file_uploader("Upload files", type=['pdf', 'png', 'jpeg', 'jpg'])
48 if uploaded_file is not None:
49 if uploaded_file.name.endswith('.pdf'):
50 doc = DocumentFile.from_pdf(uploaded_file.read()).as_images(output_size=(1024, 1024))
51 else:
52 doc = DocumentFile.from_images(uploaded_file.read())
53 cols[0].image(doc[0], width=640)
54
55 # Model selection
56 st.sidebar.title("Model selection")
57 det_arch = st.sidebar.selectbox("Text detection model", DET_ARCHS)
58 reco_arch = st.sidebar.selectbox("Text recognition model", RECO_ARCHS)
59
60 # For newline
61 st.sidebar.write('\n')
62
63 if st.sidebar.button("Analyze document"):
64
65 if uploaded_file is None:
66 st.sidebar.write("Please upload a document")
67
68 else:
69 with st.spinner('Loading model...'):
70 predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)
71
72 with st.spinner('Analyzing...'):
73
74 # Forward the image to the model
75 processed_batches = predictor.det_predictor.pre_processor(doc)
76 out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)
77 seg_map = out["out_map"]
78 seg_map = tf.squeeze(seg_map[0, ...], axis=[2])
79 seg_map = cv2.resize(seg_map.numpy(), (doc[0].shape[1], doc[0].shape[0]),
80 interpolation=cv2.INTER_LINEAR)
81 # Plot the raw heatmap
82 fig, ax = plt.subplots()
83 ax.imshow(seg_map)
84 ax.axis('off')
85 cols[1].pyplot(fig)
86
87 # Plot OCR output
88 out = predictor(doc, training=False)
89 cols[1].subheader("OCR output")
90 fig = visualize_page(out.pages[0].export(), doc[0], interactive=False)
91 cols[1].pyplot(fig)
92
93 # Page reconsitution under input page
94 cols[0].subheader("Page reconstitution from OCR output")
95 img = synthetize_page(out.pages[0].export())
96 cols[0].image(img, clamp=True, width=640)
97
98
99 if __name__ == '__main__':
100 main()
101
[end of demo/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/demo/app.py b/demo/app.py
--- a/demo/app.py
+++ b/demo/app.py
@@ -33,10 +33,14 @@
st.title("DocTR: Document Text Recognition")
# For newline
st.write('\n')
+ # Instructions
+ st.markdown("*Hint: click on the top-right corner of an image to enlarge it!*")
# Set the columns
- cols = st.beta_columns((1, 1))
- cols[0].subheader("Input document (first page)")
- cols[1].subheader("Raw heatmap (segmentation task)")
+ cols = st.beta_columns((1, 1, 1, 1))
+ cols[0].subheader("Input page")
+ cols[1].subheader("Segmentation heatmap")
+ cols[2].subheader("OCR output")
+ cols[3].subheader("Page reconstitution")
# Sidebar
# File selection
@@ -50,7 +54,8 @@
doc = DocumentFile.from_pdf(uploaded_file.read()).as_images(output_size=(1024, 1024))
else:
doc = DocumentFile.from_images(uploaded_file.read())
- cols[0].image(doc[0], width=640)
+ page_idx = st.sidebar.selectbox("Page selection", [idx + 1 for idx in range(len(doc))]) - 1
+ cols[0].image(doc[page_idx])
# Model selection
st.sidebar.title("Model selection")
@@ -60,7 +65,7 @@
# For newline
st.sidebar.write('\n')
- if st.sidebar.button("Analyze document"):
+ if st.sidebar.button("Analyze page"):
if uploaded_file is None:
st.sidebar.write("Please upload a document")
@@ -72,11 +77,11 @@
with st.spinner('Analyzing...'):
# Forward the image to the model
- processed_batches = predictor.det_predictor.pre_processor(doc)
+ processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])
out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)
seg_map = out["out_map"]
seg_map = tf.squeeze(seg_map[0, ...], axis=[2])
- seg_map = cv2.resize(seg_map.numpy(), (doc[0].shape[1], doc[0].shape[0]),
+ seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),
interpolation=cv2.INTER_LINEAR)
# Plot the raw heatmap
fig, ax = plt.subplots()
@@ -85,15 +90,18 @@
cols[1].pyplot(fig)
# Plot OCR output
- out = predictor(doc, training=False)
- cols[1].subheader("OCR output")
- fig = visualize_page(out.pages[0].export(), doc[0], interactive=False)
- cols[1].pyplot(fig)
+ out = predictor([doc[page_idx]], training=False)
+ fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)
+ cols[2].pyplot(fig)
# Page reconsitution under input page
- cols[0].subheader("Page reconstitution from OCR output")
- img = synthetize_page(out.pages[0].export())
- cols[0].image(img, clamp=True, width=640)
+ page_export = out.pages[0].export()
+ img = synthetize_page(page_export)
+ cols[3].image(img, clamp=True)
+
+ # Display JSON
+ st.markdown("\nHere are your analysis results in JSON format:")
+ st.json(page_export)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/demo/app.py b/demo/app.py\n--- a/demo/app.py\n+++ b/demo/app.py\n@@ -33,10 +33,14 @@\n st.title(\"DocTR: Document Text Recognition\")\n # For newline\n st.write('\\n')\n+ # Instructions\n+ st.markdown(\"*Hint: click on the top-right corner of an image to enlarge it!*\")\n # Set the columns\n- cols = st.beta_columns((1, 1))\n- cols[0].subheader(\"Input document (first page)\")\n- cols[1].subheader(\"Raw heatmap (segmentation task)\")\n+ cols = st.beta_columns((1, 1, 1, 1))\n+ cols[0].subheader(\"Input page\")\n+ cols[1].subheader(\"Segmentation heatmap\")\n+ cols[2].subheader(\"OCR output\")\n+ cols[3].subheader(\"Page reconstitution\")\n \n # Sidebar\n # File selection\n@@ -50,7 +54,8 @@\n doc = DocumentFile.from_pdf(uploaded_file.read()).as_images(output_size=(1024, 1024))\n else:\n doc = DocumentFile.from_images(uploaded_file.read())\n- cols[0].image(doc[0], width=640)\n+ page_idx = st.sidebar.selectbox(\"Page selection\", [idx + 1 for idx in range(len(doc))]) - 1\n+ cols[0].image(doc[page_idx])\n \n # Model selection\n st.sidebar.title(\"Model selection\")\n@@ -60,7 +65,7 @@\n # For newline\n st.sidebar.write('\\n')\n \n- if st.sidebar.button(\"Analyze document\"):\n+ if st.sidebar.button(\"Analyze page\"):\n \n if uploaded_file is None:\n st.sidebar.write(\"Please upload a document\")\n@@ -72,11 +77,11 @@\n with st.spinner('Analyzing...'):\n \n # Forward the image to the model\n- processed_batches = predictor.det_predictor.pre_processor(doc)\n+ processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n- seg_map = cv2.resize(seg_map.numpy(), (doc[0].shape[1], doc[0].shape[0]),\n+ seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),\n interpolation=cv2.INTER_LINEAR)\n # Plot the raw heatmap\n fig, ax = plt.subplots()\n@@ -85,15 +90,18 @@\n cols[1].pyplot(fig)\n \n # Plot OCR output\n- out = predictor(doc, training=False)\n- cols[1].subheader(\"OCR output\")\n- fig = visualize_page(out.pages[0].export(), doc[0], interactive=False)\n- cols[1].pyplot(fig)\n+ out = predictor([doc[page_idx]], training=False)\n+ fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)\n+ cols[2].pyplot(fig)\n \n # Page reconsitution under input page\n- cols[0].subheader(\"Page reconstitution from OCR output\")\n- img = synthetize_page(out.pages[0].export())\n- cols[0].image(img, clamp=True, width=640)\n+ page_export = out.pages[0].export()\n+ img = synthetize_page(page_export)\n+ cols[3].image(img, clamp=True)\n+\n+ # Display JSON\n+ st.markdown(\"\\nHere are your analysis results in JSON format:\")\n+ st.json(page_export)\n \n \n if __name__ == '__main__':\n", "issue": "[demo] Improve UI for OCR result display\nFor very dense documents, since the predicted text value is plotted statically, there can be some readability issues. 
We should try to improve this\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\nimport streamlit as st\nimport matplotlib.pyplot as plt\n\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\"\n\nimport tensorflow as tf\nimport cv2\n\ngpu_devices = tf.config.experimental.list_physical_devices('GPU')\nif any(gpu_devices):\n tf.config.experimental.set_memory_growth(gpu_devices[0], True)\n\nfrom doctr.documents import DocumentFile\nfrom doctr.models import ocr_predictor\nfrom doctr.utils.visualization import synthetize_page, visualize_page\n\nDET_ARCHS = [\"db_resnet50\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_resnet31\", \"sar_vgg16_bn\", \"sar_resnet31\"]\n\n\ndef main():\n\n # Wide mode\n st.set_page_config(layout=\"wide\")\n\n # Designing the interface\n st.title(\"DocTR: Document Text Recognition\")\n # For newline\n st.write('\\n')\n # Set the columns\n cols = st.beta_columns((1, 1))\n cols[0].subheader(\"Input document (first page)\")\n cols[1].subheader(\"Raw heatmap (segmentation task)\")\n\n # Sidebar\n # File selection\n st.sidebar.title(\"Document selection\")\n # Disabling warning\n st.set_option('deprecation.showfileUploaderEncoding', False)\n # Choose your own image\n uploaded_file = st.sidebar.file_uploader(\"Upload files\", type=['pdf', 'png', 'jpeg', 'jpg'])\n if uploaded_file is not None:\n if uploaded_file.name.endswith('.pdf'):\n doc = DocumentFile.from_pdf(uploaded_file.read()).as_images(output_size=(1024, 1024))\n else:\n doc = DocumentFile.from_images(uploaded_file.read())\n cols[0].image(doc[0], width=640)\n\n # Model selection\n st.sidebar.title(\"Model selection\")\n det_arch = st.sidebar.selectbox(\"Text detection model\", DET_ARCHS)\n reco_arch = st.sidebar.selectbox(\"Text recognition model\", RECO_ARCHS)\n\n # For newline\n st.sidebar.write('\\n')\n\n if st.sidebar.button(\"Analyze document\"):\n\n if uploaded_file is None:\n st.sidebar.write(\"Please upload a document\")\n\n else:\n with st.spinner('Loading model...'):\n predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)\n\n with st.spinner('Analyzing...'):\n\n # Forward the image to the model\n processed_batches = predictor.det_predictor.pre_processor(doc)\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n seg_map = cv2.resize(seg_map.numpy(), (doc[0].shape[1], doc[0].shape[0]),\n interpolation=cv2.INTER_LINEAR)\n # Plot the raw heatmap\n fig, ax = plt.subplots()\n ax.imshow(seg_map)\n ax.axis('off')\n cols[1].pyplot(fig)\n\n # Plot OCR output\n out = predictor(doc, training=False)\n cols[1].subheader(\"OCR output\")\n fig = visualize_page(out.pages[0].export(), doc[0], interactive=False)\n cols[1].pyplot(fig)\n\n # Page reconsitution under input page\n cols[0].subheader(\"Page reconstitution from OCR output\")\n img = synthetize_page(out.pages[0].export())\n cols[0].image(img, clamp=True, width=640)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/app.py"}]} | 1,611 | 864 |
gh_patches_debug_27227 | rasdani/github-patches | git_diff | searx__searx-2066 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mymemory_translated engine: unexpected crash 'str' object has no attribute 'decode'
mymemory engine does not work.
You can see it in the search engine statistics: https://searx.space/#.
Either: "unexpected crash 'str' object has no attribute 'decode'"
Or: "no result"
My instance is https://searx.hlfh.space (I use antibot-proxy) and I have the first issue.
I am using mymemory with the API key I got from the service.
</issue>
<code>
[start of searx/engines/translated.py]
1 """
2 MyMemory Translated
3
4 @website https://mymemory.translated.net/
5 @provide-api yes (https://mymemory.translated.net/doc/spec.php)
6 @using-api yes
7 @results JSON
8 @stable yes
9 @parse url, title, content
10 """
11 import re
12 from sys import version_info
13 from searx.utils import is_valid_lang
14
15 if version_info[0] == 3:
16 unicode = str
17
18 categories = ['general']
19 url = u'http://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'
20 web_url = u'http://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'
21 weight = 100
22
23 parser_re = re.compile(u'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)
24 api_key = ''
25
26
27 def request(query, params):
28 m = parser_re.match(unicode(query, 'utf8'))
29 if not m:
30 return params
31
32 from_lang, to_lang, query = m.groups()
33
34 from_lang = is_valid_lang(from_lang)
35 to_lang = is_valid_lang(to_lang)
36
37 if not from_lang or not to_lang:
38 return params
39
40 if api_key:
41 key_form = '&key=' + api_key
42 else:
43 key_form = ''
44 params['url'] = url.format(from_lang=from_lang[1],
45 to_lang=to_lang[1],
46 query=query,
47 key=key_form)
48 params['query'] = query
49 params['from_lang'] = from_lang
50 params['to_lang'] = to_lang
51
52 return params
53
54
55 def response(resp):
56 results = []
57 results.append({
58 'url': web_url.format(
59 from_lang=resp.search_params['from_lang'][2],
60 to_lang=resp.search_params['to_lang'][2],
61 query=resp.search_params['query']),
62 'title': '[{0}-{1}] {2}'.format(
63 resp.search_params['from_lang'][1],
64 resp.search_params['to_lang'][1],
65 resp.search_params['query']),
66 'content': resp.json()['responseData']['translatedText']
67 })
68 return results
69
[end of searx/engines/translated.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/translated.py b/searx/engines/translated.py
--- a/searx/engines/translated.py
+++ b/searx/engines/translated.py
@@ -9,23 +9,19 @@
@parse url, title, content
"""
import re
-from sys import version_info
from searx.utils import is_valid_lang
-if version_info[0] == 3:
- unicode = str
-
categories = ['general']
-url = u'http://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'
-web_url = u'http://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'
+url = u'https://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'
+web_url = u'https://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'
weight = 100
-parser_re = re.compile(u'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)
+parser_re = re.compile(b'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)
api_key = ''
def request(query, params):
- m = parser_re.match(unicode(query, 'utf8'))
+ m = parser_re.match(query)
if not m:
return params
@@ -43,9 +39,9 @@
key_form = ''
params['url'] = url.format(from_lang=from_lang[1],
to_lang=to_lang[1],
- query=query,
+ query=query.decode('utf-8'),
key=key_form)
- params['query'] = query
+ params['query'] = query.decode('utf-8')
params['from_lang'] = from_lang
params['to_lang'] = to_lang
| {"golden_diff": "diff --git a/searx/engines/translated.py b/searx/engines/translated.py\n--- a/searx/engines/translated.py\n+++ b/searx/engines/translated.py\n@@ -9,23 +9,19 @@\n @parse url, title, content\n \"\"\"\n import re\n-from sys import version_info\n from searx.utils import is_valid_lang\n \n-if version_info[0] == 3:\n- unicode = str\n-\n categories = ['general']\n-url = u'http://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'\n-web_url = u'http://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'\n+url = u'https://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'\n+web_url = u'https://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'\n weight = 100\n \n-parser_re = re.compile(u'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)\n+parser_re = re.compile(b'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)\n api_key = ''\n \n \n def request(query, params):\n- m = parser_re.match(unicode(query, 'utf8'))\n+ m = parser_re.match(query)\n if not m:\n return params\n \n@@ -43,9 +39,9 @@\n key_form = ''\n params['url'] = url.format(from_lang=from_lang[1],\n to_lang=to_lang[1],\n- query=query,\n+ query=query.decode('utf-8'),\n key=key_form)\n- params['query'] = query\n+ params['query'] = query.decode('utf-8')\n params['from_lang'] = from_lang\n params['to_lang'] = to_lang\n", "issue": "mymemory_translated engine: unexpected crash 'str' object has no attribute 'decode' \nmymemory engine does not work.\r\nYou can see it in the search engine statistics: https://searx.space/#.\r\n\r\nEither: \"unexpected crash 'str' object has no attribute 'decode'\"\r\nOr: \"no result\"\r\n\r\nMy instance is https://searx.hlfh.space (I use antibot-proxy) and I have the first issue.\r\nI am using mymemory with the API key I got from the service.\n", "before_files": [{"content": "\"\"\"\n MyMemory Translated\n\n @website https://mymemory.translated.net/\n @provide-api yes (https://mymemory.translated.net/doc/spec.php)\n @using-api yes\n @results JSON\n @stable yes\n @parse url, title, content\n\"\"\"\nimport re\nfrom sys import version_info\nfrom searx.utils import is_valid_lang\n\nif version_info[0] == 3:\n unicode = str\n\ncategories = ['general']\nurl = u'http://api.mymemory.translated.net/get?q={query}&langpair={from_lang}|{to_lang}{key}'\nweb_url = u'http://mymemory.translated.net/en/{from_lang}/{to_lang}/{query}'\nweight = 100\n\nparser_re = re.compile(u'.*?([a-z]+)-([a-z]+) (.{2,})$', re.I)\napi_key = ''\n\n\ndef request(query, params):\n m = parser_re.match(unicode(query, 'utf8'))\n if not m:\n return params\n\n from_lang, to_lang, query = m.groups()\n\n from_lang = is_valid_lang(from_lang)\n to_lang = is_valid_lang(to_lang)\n\n if not from_lang or not to_lang:\n return params\n\n if api_key:\n key_form = '&key=' + api_key\n else:\n key_form = ''\n params['url'] = url.format(from_lang=from_lang[1],\n to_lang=to_lang[1],\n query=query,\n key=key_form)\n params['query'] = query\n params['from_lang'] = from_lang\n params['to_lang'] = to_lang\n\n return params\n\n\ndef response(resp):\n results = []\n results.append({\n 'url': web_url.format(\n from_lang=resp.search_params['from_lang'][2],\n to_lang=resp.search_params['to_lang'][2],\n query=resp.search_params['query']),\n 'title': '[{0}-{1}] {2}'.format(\n resp.search_params['from_lang'][1],\n resp.search_params['to_lang'][1],\n resp.search_params['query']),\n 'content': resp.json()['responseData']['translatedText']\n })\n return results\n", "path": "searx/engines/translated.py"}]} | 1,272 | 430 |
gh_patches_debug_9958 | rasdani/github-patches | git_diff | ethereum__web3.py-3187 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
web3 import errors in Python 3.12
* Version: 6.13.0
* Python: 3.12, inside a venv
* OS: linux (but is probably applicable to other platforms as well)
* `pip freeze` output:
```
aiohttp==3.9.1
aiosignal==1.3.1
attrs==23.2.0
bitarray==2.9.2
certifi==2023.11.17
charset-normalizer==3.3.2
cytoolz==0.12.2
eth-abi==4.2.1
eth-account==0.10.0
eth-hash==0.5.2
eth-keyfile==0.7.0
eth-keys==0.4.0
eth-rlp==1.0.0
eth-typing==3.5.2
eth-utils==2.3.1
frozenlist==1.4.1
hexbytes==0.3.1
idna==3.6
jsonschema==4.20.0
jsonschema-specifications==2023.12.1
lru-dict==1.2.0
multidict==6.0.4
parsimonious==0.9.0
protobuf==4.25.1
pycryptodome==3.19.1
pyunormalize==15.1.0
referencing==0.32.1
regex==2023.12.25
requests==2.31.0
rlp==4.0.0
rpds-py==0.16.2
toolz==0.12.0
typing_extensions==4.9.0
urllib3==2.1.0
web3==6.13.0
websockets==12.0
yarl==1.9.4
```
### What was wrong?
In certain situations, web3 will raise ImportErrors on python 3.12 if the `setuptools` package is not installed. _In particular, this happens inside a fresh Python 3.12 venv._ The `setuptools` package automatically installs the `pkg_resources` package, which is used in web3 [here](https://github.com/ethereum/web3.py/blob/8f853f5841fd62187bce0c9f17be75627104ca43/web3/__init__.py#L25). This used to work fine in older Python versions. However, according to the [new changes in 3.12](https://docs.python.org/3/whatsnew/3.12.html):
> gh-95299: Do not pre-install setuptools in virtual environments created with venv. This means that distutils, setuptools, pkg_resources, and easy_install will no longer be available by default; to access these run pip install setuptools in the activated virtual environment.
This means that the pkg_resources package is no longer accessible, which causes this error.
Among other things, this scenario can occur inside tox tests for projects that have the `web3` package installed and are configured to test against 3.12. This causes such tests to immediately fail because of the ImportError. The workaround, installing setuptools after the venv is created, causes unnecessarily long test times, adding about 3 minutes to the run time.
### How can it be fixed?
Given that web3's use of setuptools/pkg_resources is limited to just getting the version number, this should be trivial to fix. Why not open the file with built-in functions such as `open()` and parse it for the version number? I don't think that `web3` should continue to depend on setuptools.
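Something along these lines might be enough (untested sketch; it uses the stdlib `importlib.metadata`, available from Python 3.8 onward, rather than parsing a file by hand):

```python
# web3/__init__.py
from importlib.metadata import version

# Read the installed distribution's version from package metadata instead of
# pkg_resources, so web3 no longer needs setuptools at runtime.
__version__ = version("web3")
```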
</issue>
<code>
[start of web3/__init__.py]
1 from eth_account import Account # noqa: E402,
2 import pkg_resources
3
4 from web3.main import (
5 AsyncWeb3,
6 Web3,
7 )
8 from web3.providers.async_rpc import ( # noqa: E402
9 AsyncHTTPProvider,
10 )
11 from web3.providers.eth_tester import ( # noqa: E402
12 EthereumTesterProvider,
13 )
14 from web3.providers.ipc import ( # noqa: E402
15 IPCProvider,
16 )
17 from web3.providers.rpc import ( # noqa: E402
18 HTTPProvider,
19 )
20 from web3.providers.websocket import ( # noqa: E402
21 WebsocketProvider,
22 WebsocketProviderV2,
23 )
24
25 __version__ = pkg_resources.get_distribution("web3").version
26
27 __all__ = [
28 "__version__",
29 "AsyncWeb3",
30 "Web3",
31 "HTTPProvider",
32 "IPCProvider",
33 "WebsocketProvider",
34 "WebsocketProviderV2",
35 "EthereumTesterProvider",
36 "Account",
37 "AsyncHTTPProvider",
38 ]
39
[end of web3/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/web3/__init__.py b/web3/__init__.py
--- a/web3/__init__.py
+++ b/web3/__init__.py
@@ -1,5 +1,15 @@
-from eth_account import Account # noqa: E402,
-import pkg_resources
+from eth_account import Account # noqa: E402
+import sys
+
+if sys.version_info.major == 3 and sys.version_info.minor < 8:
+ import pkg_resources
+
+ __version__ = pkg_resources.get_distribution("web3").version
+else:
+ from importlib.metadata import version
+
+ __version__ = version("web3")
+
from web3.main import (
AsyncWeb3,
@@ -22,7 +32,6 @@
WebsocketProviderV2,
)
-__version__ = pkg_resources.get_distribution("web3").version
__all__ = [
"__version__",
| {"golden_diff": "diff --git a/web3/__init__.py b/web3/__init__.py\n--- a/web3/__init__.py\n+++ b/web3/__init__.py\n@@ -1,5 +1,15 @@\n-from eth_account import Account # noqa: E402,\n-import pkg_resources\n+from eth_account import Account # noqa: E402\n+import sys\n+\n+if sys.version_info.major == 3 and sys.version_info.minor < 8:\n+ import pkg_resources\n+\n+ __version__ = pkg_resources.get_distribution(\"web3\").version\n+else:\n+ from importlib.metadata import version\n+\n+ __version__ = version(\"web3\")\n+\n \n from web3.main import (\n AsyncWeb3,\n@@ -22,7 +32,6 @@\n WebsocketProviderV2,\n )\n \n-__version__ = pkg_resources.get_distribution(\"web3\").version\n \n __all__ = [\n \"__version__\",\n", "issue": "web3 import errors in Python 3.12\n* Version: 6.13.0\r\n* Python: 3.12, inside a venv\r\n* OS: linux (but is probably applicable to other platforms as well)\r\n* `pip freeze` output:\r\n\r\n```\r\naiohttp==3.9.1\r\naiosignal==1.3.1\r\nattrs==23.2.0\r\nbitarray==2.9.2\r\ncertifi==2023.11.17\r\ncharset-normalizer==3.3.2\r\ncytoolz==0.12.2\r\neth-abi==4.2.1\r\neth-account==0.10.0\r\neth-hash==0.5.2\r\neth-keyfile==0.7.0\r\neth-keys==0.4.0\r\neth-rlp==1.0.0\r\neth-typing==3.5.2\r\neth-utils==2.3.1\r\nfrozenlist==1.4.1\r\nhexbytes==0.3.1\r\nidna==3.6\r\njsonschema==4.20.0\r\njsonschema-specifications==2023.12.1\r\nlru-dict==1.2.0\r\nmultidict==6.0.4\r\nparsimonious==0.9.0\r\nprotobuf==4.25.1\r\npycryptodome==3.19.1\r\npyunormalize==15.1.0\r\nreferencing==0.32.1\r\nregex==2023.12.25\r\nrequests==2.31.0\r\nrlp==4.0.0\r\nrpds-py==0.16.2\r\ntoolz==0.12.0\r\ntyping_extensions==4.9.0\r\nurllib3==2.1.0\r\nweb3==6.13.0\r\nwebsockets==12.0\r\nyarl==1.9.4\r\n```\r\n\r\n### What was wrong?\r\n\r\nIn certain situations, web3 will raise ImportErrors on python 3.12 if the `setuptools` package is not installed. _In particular, this happens inside a fresh Python 3.12 venv._ The `setuptools` package automatically installs the `pkg_resources` package, which is used in web3 [here](https://github.com/ethereum/web3.py/blob/8f853f5841fd62187bce0c9f17be75627104ca43/web3/__init__.py#L25). This used to work fine in older Python versions. However, according to the [new changes in 3.12](https://docs.python.org/3/whatsnew/3.12.html):\r\n\r\n> gh-95299: Do not pre-install setuptools in virtual environments created with venv. This means that distutils, setuptools, pkg_resources, and easy_install will no longer available by default; to access these run pip install setuptools in the activated virtual environment.\r\n\r\nThis means that the pkg_resources package is no longer accessible which causes this error.\r\n\r\nAmong other things, this scenario can occur inside tox tests for projects that have the `web3` package installed and are configured to test against 3.12. This causes such tests to immediately fail because of the ImportError. The workaround, installing setuptools after the venv created, causes unnecessarily long test times, adding about 3 minutes to the run time.\r\n\r\n### How can it be fixed?\r\n\r\nGiven that web3's use of setuptools/pkg_resources is limited to just getting the version number, this should be trivial to fix. Why not open the file with built-in functions such as `open()` and parse it for the version number? 
I don't think that `web3` should continue to depend on setuptools.\n", "before_files": [{"content": "from eth_account import Account # noqa: E402,\nimport pkg_resources\n\nfrom web3.main import (\n AsyncWeb3,\n Web3,\n)\nfrom web3.providers.async_rpc import ( # noqa: E402\n AsyncHTTPProvider,\n)\nfrom web3.providers.eth_tester import ( # noqa: E402\n EthereumTesterProvider,\n)\nfrom web3.providers.ipc import ( # noqa: E402\n IPCProvider,\n)\nfrom web3.providers.rpc import ( # noqa: E402\n HTTPProvider,\n)\nfrom web3.providers.websocket import ( # noqa: E402\n WebsocketProvider,\n WebsocketProviderV2,\n)\n\n__version__ = pkg_resources.get_distribution(\"web3\").version\n\n__all__ = [\n \"__version__\",\n \"AsyncWeb3\",\n \"Web3\",\n \"HTTPProvider\",\n \"IPCProvider\",\n \"WebsocketProvider\",\n \"WebsocketProviderV2\",\n \"EthereumTesterProvider\",\n \"Account\",\n \"AsyncHTTPProvider\",\n]\n", "path": "web3/__init__.py"}]} | 1,675 | 211 |
gh_patches_debug_27086 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-8283 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable intersphinx support for hoverxref in our documentation
While writing #8283, I realized that we still do not enable intersphinx support in our sphinx-hoverxref documentation. More info here:
https://blog.readthedocs.com/hoverxref-intersphinx/
I think it would be nice to do so.
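For reference, a minimal sketch of what enabling this could look like in `docs/conf.py` — the specific projects listed below are assumptions, not a final list:

```python
# docs/conf.py — hedged sketch; the chosen intersphinx projects are placeholders
extensions = [
    "sphinx.ext.intersphinx",
    "hoverxref.extension",
]

intersphinx_mapping = {
    "python": ("https://docs.python.org/3/", None),
    "sphinx": ("https://www.sphinx-doc.org/en/master/", None),
}

# sphinx-hoverxref shows tooltips only for the intersphinx mappings listed here
hoverxref_intersphinx = [
    "python",
    "sphinx",
]
```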
</issue>
<code>
[start of docs/conf.py]
1 import os
2 import sys
3 from configparser import RawConfigParser
4
5 import sphinx_rtd_theme
6
7 sys.path.insert(0, os.path.abspath('..'))
8 sys.path.append(os.path.dirname(__file__))
9 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "readthedocs.settings.dev")
10
11 from django.utils import timezone
12
13 import django
14 django.setup()
15
16
17 def get_version():
18 """Return package version from setup.cfg."""
19 config = RawConfigParser()
20 config.read(os.path.join('..', 'setup.cfg'))
21 return config.get('metadata', 'version')
22
23
24 sys.path.append(os.path.abspath('_ext'))
25 extensions = [
26 'sphinx.ext.autosectionlabel',
27 'sphinx.ext.autodoc',
28 'sphinx.ext.intersphinx',
29 'sphinxcontrib.httpdomain',
30 'djangodocs',
31 'doc_extensions',
32 'sphinx_tabs.tabs',
33 'sphinx-prompt',
34 'notfound.extension',
35 'hoverxref.extension',
36 'sphinx_search.extension',
37 'sphinxemoji.sphinxemoji',
38 ]
39
40 templates_path = ['_templates']
41
42 master_doc = 'index'
43 project = 'Read the Docs'
44 copyright = '2010-{}, Read the Docs, Inc & contributors'.format(
45 timezone.now().year
46 )
47 version = get_version()
48 release = version
49 exclude_patterns = ['_build']
50 default_role = 'obj'
51 intersphinx_mapping = {
52 'python': ('https://docs.python.org/3.6/', None),
53 'django': ('https://docs.djangoproject.com/en/1.11/', 'https://docs.djangoproject.com/en/1.11/_objects/'),
54 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),
55 'pip': ('https://pip.pypa.io/en/stable/', None),
56 }
57 htmlhelp_basename = 'ReadTheDocsdoc'
58 latex_documents = [
59 ('index', 'ReadTheDocs.tex', 'Read the Docs Documentation',
60 'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),
61 ]
62 man_pages = [
63 ('index', 'read-the-docs', 'Read the Docs Documentation',
64 ['Eric Holscher, Charlie Leifer, Bobby Grace'], 1)
65 ]
66
67 exclude_patterns = [
68 # 'api' # needed for ``make gettext`` to not die.
69 ]
70
71 language = 'en'
72
73 locale_dirs = [
74 'locale/',
75 ]
76 gettext_compact = False
77
78 html_theme = 'sphinx_rtd_theme'
79 html_static_path = ['_static']
80 html_js_files = ['js/expand_tabs.js']
81 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
82 html_logo = 'img/logo.svg'
83 html_theme_options = {
84 'logo_only': True,
85 'display_version': False,
86 }
87
88 hoverxref_auto_ref = True
89 hoverxref_domains = ['py']
90 hoverxref_roles = [
91 'option',
92 'doc',
93 ]
94 hoverxref_role_types = {
95 'mod': 'modal', # for Python Sphinx Domain
96 'doc': 'modal', # for whole docs
97 'class': 'tooltip', # for Python Sphinx Domain
98 'ref': 'tooltip', # for hoverxref_auto_ref config
99 'confval': 'tooltip', # for custom object
100 }
101
102 rst_epilog = """
103 .. |org_brand| replace:: Read the Docs Community
104 .. |com_brand| replace:: Read the Docs for Business
105 """
106
107 # Activate autosectionlabel plugin
108 autosectionlabel_prefix_document = True
109
110 numfig = True
111
112 # sphinx-notfound-page
113 # https://github.com/readthedocs/sphinx-notfound-page
114 notfound_context = {
115 'title': 'Page Not Found',
116 'body': '''
117 <h1>Page Not Found</h1>
118
119 <p>Sorry, we couldn't find that page.</p>
120
121 <p>Try using the search box or go to the homepage.</p>
122 ''',
123 }
124 linkcheck_ignore = [
125 r'http://127\.0\.0\.1',
126 r'http://localhost',
127 r'http://community\.dev\.readthedocs\.io',
128 r'https://yourproject\.readthedocs\.io',
129 r'https?://docs\.example\.com',
130 r'https://foo\.readthedocs\.io/projects',
131 r'https://github\.com.+?#L\d+',
132 r'https://github\.com/readthedocs/readthedocs\.org/issues',
133 r'https://github\.com/readthedocs/readthedocs\.org/pull',
134 r'https://docs\.readthedocs\.io/\?rtd_search',
135 r'https://readthedocs\.org/search',
136 # This page is under login
137 r'https://readthedocs\.org/accounts/gold',
138 ]
139
140
141 def setup(app):
142 app.add_css_file('css/sphinx_prompt_css.css')
143
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -53,7 +53,23 @@
'django': ('https://docs.djangoproject.com/en/1.11/', 'https://docs.djangoproject.com/en/1.11/_objects/'),
'sphinx': ('https://www.sphinx-doc.org/en/master/', None),
'pip': ('https://pip.pypa.io/en/stable/', None),
+ 'nbsphinx': ('https://nbsphinx.readthedocs.io/en/0.8.6/', None),
+ 'myst-nb': ('https://myst-nb.readthedocs.io/en/v0.12.3/', None),
+ 'ipywidgets': ('https://ipywidgets.readthedocs.io/en/7.6.3/', None),
+ 'jupytext': ('https://jupytext.readthedocs.io/en/stable/', None),
+ 'ipyleaflet': ('https://ipyleaflet.readthedocs.io/en/stable/', None),
+ 'poliastro': ('https://docs.poliastro.space/en/v0.15.2/', None),
+ 'qiskit': ('https://qiskit.org/documentation/', None),
+ 'myst-parser': ('https://myst-parser.readthedocs.io/en/v0.15.1/', None),
}
+hoverxref_intersphinx = [
+ "sphinx",
+ "pip",
+ "nbsphinx",
+ "myst-nb",
+ "ipywidgets",
+ "jupytext",
+]
htmlhelp_basename = 'ReadTheDocsdoc'
latex_documents = [
('index', 'ReadTheDocs.tex', 'Read the Docs Documentation',
@@ -107,8 +123,6 @@
# Activate autosectionlabel plugin
autosectionlabel_prefix_document = True
-numfig = True
-
# sphinx-notfound-page
# https://github.com/readthedocs/sphinx-notfound-page
notfound_context = {
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -53,7 +53,23 @@\n 'django': ('https://docs.djangoproject.com/en/1.11/', 'https://docs.djangoproject.com/en/1.11/_objects/'),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n 'pip': ('https://pip.pypa.io/en/stable/', None),\n+ 'nbsphinx': ('https://nbsphinx.readthedocs.io/en/0.8.6/', None),\n+ 'myst-nb': ('https://myst-nb.readthedocs.io/en/v0.12.3/', None),\n+ 'ipywidgets': ('https://ipywidgets.readthedocs.io/en/7.6.3/', None),\n+ 'jupytext': ('https://jupytext.readthedocs.io/en/stable/', None),\n+ 'ipyleaflet': ('https://ipyleaflet.readthedocs.io/en/stable/', None),\n+ 'poliastro': ('https://docs.poliastro.space/en/v0.15.2/', None),\n+ 'qiskit': ('https://qiskit.org/documentation/', None),\n+ 'myst-parser': ('https://myst-parser.readthedocs.io/en/v0.15.1/', None),\n }\n+hoverxref_intersphinx = [\n+ \"sphinx\",\n+ \"pip\",\n+ \"nbsphinx\",\n+ \"myst-nb\",\n+ \"ipywidgets\",\n+ \"jupytext\",\n+]\n htmlhelp_basename = 'ReadTheDocsdoc'\n latex_documents = [\n ('index', 'ReadTheDocs.tex', 'Read the Docs Documentation',\n@@ -107,8 +123,6 @@\n # Activate autosectionlabel plugin\n autosectionlabel_prefix_document = True\n \n-numfig = True\n-\n # sphinx-notfound-page\n # https://github.com/readthedocs/sphinx-notfound-page\n notfound_context = {\n", "issue": "Enable intersphinx support for hoverxref in our documentation\nWhile writing #8283, I realized that we still do not enable intersphinx support in our sphinx-hoverxref documentation. More info here:\r\n\r\nhttps://blog.readthedocs.com/hoverxref-intersphinx/\r\n\r\nI think it would be nice to do so.\n", "before_files": [{"content": "import os\nimport sys\nfrom configparser import RawConfigParser\n\nimport sphinx_rtd_theme\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.dirname(__file__))\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"readthedocs.settings.dev\")\n\nfrom django.utils import timezone\n\nimport django\ndjango.setup()\n\n\ndef get_version():\n \"\"\"Return package version from setup.cfg.\"\"\"\n config = RawConfigParser()\n config.read(os.path.join('..', 'setup.cfg'))\n return config.get('metadata', 'version')\n\n\nsys.path.append(os.path.abspath('_ext'))\nextensions = [\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinxcontrib.httpdomain',\n 'djangodocs',\n 'doc_extensions',\n 'sphinx_tabs.tabs',\n 'sphinx-prompt',\n 'notfound.extension',\n 'hoverxref.extension',\n 'sphinx_search.extension',\n 'sphinxemoji.sphinxemoji',\n]\n\ntemplates_path = ['_templates']\n\nmaster_doc = 'index'\nproject = 'Read the Docs'\ncopyright = '2010-{}, Read the Docs, Inc & contributors'.format(\n timezone.now().year\n)\nversion = get_version()\nrelease = version\nexclude_patterns = ['_build']\ndefault_role = 'obj'\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3.6/', None),\n 'django': ('https://docs.djangoproject.com/en/1.11/', 'https://docs.djangoproject.com/en/1.11/_objects/'),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n 'pip': ('https://pip.pypa.io/en/stable/', None),\n}\nhtmlhelp_basename = 'ReadTheDocsdoc'\nlatex_documents = [\n ('index', 'ReadTheDocs.tex', 'Read the Docs Documentation',\n 'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),\n]\nman_pages = [\n ('index', 'read-the-docs', 'Read the Docs Documentation',\n ['Eric Holscher, Charlie Leifer, Bobby Grace'], 1)\n]\n\nexclude_patterns = [\n # 'api' # needed for ``make 
gettext`` to not die.\n]\n\nlanguage = 'en'\n\nlocale_dirs = [\n 'locale/',\n]\ngettext_compact = False\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_static_path = ['_static']\nhtml_js_files = ['js/expand_tabs.js']\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_logo = 'img/logo.svg'\nhtml_theme_options = {\n 'logo_only': True,\n 'display_version': False,\n}\n\nhoverxref_auto_ref = True\nhoverxref_domains = ['py']\nhoverxref_roles = [\n 'option',\n 'doc',\n]\nhoverxref_role_types = {\n 'mod': 'modal', # for Python Sphinx Domain\n 'doc': 'modal', # for whole docs\n 'class': 'tooltip', # for Python Sphinx Domain\n 'ref': 'tooltip', # for hoverxref_auto_ref config\n 'confval': 'tooltip', # for custom object\n}\n\nrst_epilog = \"\"\"\n.. |org_brand| replace:: Read the Docs Community\n.. |com_brand| replace:: Read the Docs for Business\n\"\"\"\n\n# Activate autosectionlabel plugin\nautosectionlabel_prefix_document = True\n\nnumfig = True\n\n# sphinx-notfound-page\n# https://github.com/readthedocs/sphinx-notfound-page\nnotfound_context = {\n 'title': 'Page Not Found',\n 'body': '''\n<h1>Page Not Found</h1>\n\n<p>Sorry, we couldn't find that page.</p>\n\n<p>Try using the search box or go to the homepage.</p>\n''',\n}\nlinkcheck_ignore = [\n r'http://127\\.0\\.0\\.1',\n r'http://localhost',\n r'http://community\\.dev\\.readthedocs\\.io',\n r'https://yourproject\\.readthedocs\\.io',\n r'https?://docs\\.example\\.com',\n r'https://foo\\.readthedocs\\.io/projects',\n r'https://github\\.com.+?#L\\d+',\n r'https://github\\.com/readthedocs/readthedocs\\.org/issues',\n r'https://github\\.com/readthedocs/readthedocs\\.org/pull',\n r'https://docs\\.readthedocs\\.io/\\?rtd_search',\n r'https://readthedocs\\.org/search',\n # This page is under login\n r'https://readthedocs\\.org/accounts/gold',\n]\n\n\ndef setup(app):\n app.add_css_file('css/sphinx_prompt_css.css')\n", "path": "docs/conf.py"}]} | 1,960 | 453 |
gh_patches_debug_428 | rasdani/github-patches | git_diff | python__python-docs-es-1762 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translate 'library/os.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/os.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/os.po`:
* Fuzzy: 27
* Percent translated: 94.8%
* Entries: 804 / 848
* Untranslated: 44
Please comment here if you want this file to be assigned to you, and a member will assign it to you as soon as possible so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
</issue>
<code>
[start of scripts/translate.py]
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":rfc:`[^`]+`",
46 ":doc:`[^`]+`",
47 "``[^`]+``",
48 "`[^`]+`__",
49 "`[^`]+`_",
50 "\*\*[^\*]+\*\*", # bold text between **
51 "\*[^\*]+\*", # italic text between *
52 ]
53
54 _exps = [re.compile(e) for e in _patterns]
55
56 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
57 """
58 Parameters:
59 string containing the text to translate
60
61 Returns:
62 dictionary containing all the placeholder text as keys
63 and the correct value.
64 """
65
66 i = 0
67 d: Dict[str, str] = {}
68 for exp in _exps:
69 matches = exp.findall(s)
70 if DEBUG:
71 print(exp, matches)
72 for match in matches:
73 ph = f"XASDF{str(i).zfill(2)}"
74 s = s.replace(match, ph)
75 if ph in d and VERBOSE:
76 print(f"Error: {ph} is already in the dictionary")
77 print("new", match)
78 print("old", d[ph])
79 d[ph] = match
80 i += 1
81 return d, s
82
83
84 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
85 for ph, value in placeholders.items():
86 translated_text = translated_text.replace(ph, value)
87 if DEBUG:
88 print(ph, value)
89 print(translated_text)
90 return translated_text
91
92
93 if __name__ == "__main__":
94 filename = sys.argv[1]
95 if not os.path.isfile(filename):
96 print(f"File not found: '{filename}'")
97 sys.exit(-1)
98
99 po = polib.pofile(filename)
100 translator = GoogleTranslator(source="en", target="es")
101
102 for entry in po:
103 # If the entry has already a translation, skip.
104 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
105 continue
106
107 print("\nEN|", entry.msgid)
108 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
109 if VERBOSE:
110 print(temp_text)
111 print(placeholders)
112
113 # Translate the temporary text without sphinx statements
114 translated_text = translator.translate(temp_text)
115
116 # Recover sphinx statements
117 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
118 print("ES|", real_text)
119
120 # Replace the po file translated entry
121 entry.msgstr = real_text
122
123 # Save the file after all the entries are translated
124 po.save()
125
[end of scripts/translate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -44,6 +44,8 @@
":RFC:`[^`]+`",
":rfc:`[^`]+`",
":doc:`[^`]+`",
+ ":manpage:`[^`]+`",
+ ":sup:`[^`]+`",
"``[^`]+``",
"`[^`]+`__",
"`[^`]+`_",
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -44,6 +44,8 @@\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n+ \":manpage:`[^`]+`\",\n+ \":sup:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n", "issue": "Translate 'library/os.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/os.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/os.po`:\n\n* Fuzzy: 27\n* Percent translated: 94.8%\n* Entries: 804 / 848\n* Untranslated: 44\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*[^\\*]+\\*\\*\", # bold text between **\n \"\\*[^\\*]+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the 
temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,863 | 114 |
gh_patches_debug_7956 | rasdani/github-patches | git_diff | open-mmlab__mmpose-783 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
resource limit bug
**Describe the feature**
**Motivation**
It is inconvenient when we run mmpose on a slurm cluster which may have a larger file-open soft limit than 4096. The resource limit adjustment here [https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/builder.py#L13-L19](url) reduces the base file-open soft limit to 4096. Sometimes this results in 'OSError: [Errno 24] Too many open files' during the training process.
**Additional context**
The code could be modified as below:
```python
if platform.system() != 'Windows':
# https://github.com/pytorch/pytorch/issues/973
import resource
rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
base_soft_limit = rlimit[0]
hard_limit = rlimit[1]
soft_limit = min(max(4096,base_soft_limit), hard_limit)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
```
</issue>
<code>
[start of mmpose/datasets/builder.py]
1 import platform
2 import random
3 from functools import partial
4
5 import numpy as np
6 from mmcv.parallel import collate
7 from mmcv.runner import get_dist_info
8 from mmcv.utils import Registry, build_from_cfg
9 from mmcv.utils.parrots_wrapper import _get_dataloader
10
11 from .samplers import DistributedSampler
12
13 if platform.system() != 'Windows':
14 # https://github.com/pytorch/pytorch/issues/973
15 import resource
16 rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
17 hard_limit = rlimit[1]
18 soft_limit = min(4096, hard_limit)
19 resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
20
21 DATASETS = Registry('dataset')
22 PIPELINES = Registry('pipeline')
23
24
25 def build_dataset(cfg, default_args=None):
26 """Build a dataset from config dict.
27
28 Args:
29 cfg (dict): Config dict. It should at least contain the key "type".
30 default_args (dict, optional): Default initialization arguments.
31 Default: None.
32
33 Returns:
34 Dataset: The constructed dataset.
35 """
36 from .dataset_wrappers import RepeatDataset
37
38 if cfg['type'] == 'RepeatDataset':
39 dataset = RepeatDataset(
40 build_dataset(cfg['dataset'], default_args), cfg['times'])
41 else:
42 dataset = build_from_cfg(cfg, DATASETS, default_args)
43 return dataset
44
45
46 def build_dataloader(dataset,
47 samples_per_gpu,
48 workers_per_gpu,
49 num_gpus=1,
50 dist=True,
51 shuffle=True,
52 seed=None,
53 drop_last=True,
54 pin_memory=True,
55 **kwargs):
56 """Build PyTorch DataLoader.
57
58 In distributed training, each GPU/process has a dataloader.
59 In non-distributed training, there is only one dataloader for all GPUs.
60
61 Args:
62 dataset (Dataset): A PyTorch dataset.
63 samples_per_gpu (int): Number of training samples on each GPU, i.e.,
64 batch size of each GPU.
65 workers_per_gpu (int): How many subprocesses to use for data loading
66 for each GPU.
67 num_gpus (int): Number of GPUs. Only used in non-distributed training.
68 dist (bool): Distributed training/test or not. Default: True.
69 shuffle (bool): Whether to shuffle the data at every epoch.
70 Default: True.
71 drop_last (bool): Whether to drop the last incomplete batch in epoch.
72 Default: True
73 pin_memory (bool): Whether to use pin_memory in DataLoader.
74 Default: True
75 kwargs: any keyword argument to be used to initialize DataLoader
76
77 Returns:
78 DataLoader: A PyTorch dataloader.
79 """
80 rank, world_size = get_dist_info()
81 if dist:
82 sampler = DistributedSampler(
83 dataset, world_size, rank, shuffle=shuffle, seed=seed)
84 shuffle = False
85 batch_size = samples_per_gpu
86 num_workers = workers_per_gpu
87 else:
88 sampler = None
89 batch_size = num_gpus * samples_per_gpu
90 num_workers = num_gpus * workers_per_gpu
91
92 init_fn = partial(
93 worker_init_fn, num_workers=num_workers, rank=rank,
94 seed=seed) if seed is not None else None
95
96 _, DataLoader = _get_dataloader()
97 data_loader = DataLoader(
98 dataset,
99 batch_size=batch_size,
100 sampler=sampler,
101 num_workers=num_workers,
102 collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
103 pin_memory=pin_memory,
104 shuffle=shuffle,
105 worker_init_fn=init_fn,
106 drop_last=drop_last,
107 **kwargs)
108
109 return data_loader
110
111
112 def worker_init_fn(worker_id, num_workers, rank, seed):
113 """Init the random seed for various workers."""
114 # The seed of each worker equals to
115 # num_worker * rank + worker_id + user_seed
116 worker_seed = num_workers * rank + worker_id + seed
117 np.random.seed(worker_seed)
118 random.seed(worker_seed)
119
[end of mmpose/datasets/builder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmpose/datasets/builder.py b/mmpose/datasets/builder.py
--- a/mmpose/datasets/builder.py
+++ b/mmpose/datasets/builder.py
@@ -14,8 +14,9 @@
# https://github.com/pytorch/pytorch/issues/973
import resource
rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
+ base_soft_limit = rlimit[0]
hard_limit = rlimit[1]
- soft_limit = min(4096, hard_limit)
+ soft_limit = min(max(4096, base_soft_limit), hard_limit)
resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
DATASETS = Registry('dataset')
| {"golden_diff": "diff --git a/mmpose/datasets/builder.py b/mmpose/datasets/builder.py\n--- a/mmpose/datasets/builder.py\n+++ b/mmpose/datasets/builder.py\n@@ -14,8 +14,9 @@\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n+ base_soft_limit = rlimit[0]\n hard_limit = rlimit[1]\n- soft_limit = min(4096, hard_limit)\n+ soft_limit = min(max(4096, base_soft_limit), hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n \n DATASETS = Registry('dataset')\n", "issue": "resource limit bug\n**Describe the feature**\r\n\r\n**Motivation**\r\n\r\nIt is inconvenient when we run mmpose on slurm clustre which may has larger file-open's soft limit than 4096. The resource limit adjust here [https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/builder.py#L13-L19](url) will reduce the base file-open's soft limit to 4096. Sometimes it will result in 'OSError: [Error 24] Too many open files' during training process.\r\n\r\n\r\n**Additional context**\r\nthe code maybe can be modified like below:\r\n```python\r\n\r\nif platform.system() != 'Windows':\r\n # https://github.com/pytorch/pytorch/issues/973\r\n import resource\r\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\r\n base_soft_limit = rlimit[0]\r\n hard_limit = rlimit[1]\r\n soft_limit = min(max(4096,base_soft_limit), hard_limit)\r\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom mmcv.utils import Registry, build_from_cfg\nfrom mmcv.utils.parrots_wrapper import _get_dataloader\n\nfrom .samplers import DistributedSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n hard_limit = rlimit[1]\n soft_limit = min(4096, hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n\nDATASETS = Registry('dataset')\nPIPELINES = Registry('pipeline')\n\n\ndef build_dataset(cfg, default_args=None):\n \"\"\"Build a dataset from config dict.\n\n Args:\n cfg (dict): Config dict. It should at least contain the key \"type\".\n default_args (dict, optional): Default initialization arguments.\n Default: None.\n\n Returns:\n Dataset: The constructed dataset.\n \"\"\"\n from .dataset_wrappers import RepeatDataset\n\n if cfg['type'] == 'RepeatDataset':\n dataset = RepeatDataset(\n build_dataset(cfg['dataset'], default_args), cfg['times'])\n else:\n dataset = build_from_cfg(cfg, DATASETS, default_args)\n return dataset\n\n\ndef build_dataloader(dataset,\n samples_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n drop_last=True,\n pin_memory=True,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (Dataset): A PyTorch dataset.\n samples_per_gpu (int): Number of training samples on each GPU, i.e.,\n batch size of each GPU.\n workers_per_gpu (int): How many subprocesses to use for data loading\n for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed training.\n dist (bool): Distributed training/test or not. 
Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n drop_last (bool): Whether to drop the last incomplete batch in epoch.\n Default: True\n pin_memory (bool): Whether to use pin_memory in DataLoader.\n Default: True\n kwargs: any keyword argument to be used to initialize DataLoader\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n if dist:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=shuffle, seed=seed)\n shuffle = False\n batch_size = samples_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = None\n batch_size = num_gpus * samples_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n _, DataLoader = _get_dataloader()\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),\n pin_memory=pin_memory,\n shuffle=shuffle,\n worker_init_fn=init_fn,\n drop_last=drop_last,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n \"\"\"Init the random seed for various workers.\"\"\"\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmpose/datasets/builder.py"}]} | 1,903 | 173 |
gh_patches_debug_30975 | rasdani/github-patches | git_diff | liqd__a4-product-608 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mandatory mB topic selection on bet.in ( US #1775)
All projects need a topic on bet.in now, even existing ones. Can we remove that requirement? We haven't yet thought about how to implement topics on bet.in and they are not shown anywhere, so it would probably be confusing for initiators.
</issue>
<code>
[start of liqd_product/apps/projects/dashboard.py]
1 from django.urls import reverse
2 from django.utils.translation import ugettext_lazy as _
3
4 from adhocracy4.dashboard import DashboardComponent
5 from adhocracy4.dashboard import ProjectFormComponent
6 from adhocracy4.dashboard import components
7
8 from . import forms
9 from . import views
10
11
12 class ParticipantsComponent(DashboardComponent):
13 identifier = 'participants'
14 weight = 30
15 label = _('Participants')
16
17 def is_effective(self, project):
18 return not project.is_draft and project.is_private
19
20 def get_base_url(self, project):
21 return reverse('a4dashboard:dashboard-participants-edit', kwargs={
22 'project_slug': project.slug
23 })
24
25 def get_urls(self):
26 return [(
27 r'^projects/(?P<project_slug>[-\w_]+)/participants/$',
28 views.DashboardProjectParticipantsView.as_view(component=self),
29 'dashboard-participants-edit'
30 )]
31
32
33 class ModeratorsComponent(DashboardComponent):
34 identifier = 'moderators'
35 weight = 32
36 label = _('Moderators')
37
38 def is_effective(self, project):
39 return True
40
41 def get_base_url(self, project):
42 return reverse('a4dashboard:dashboard-moderators-edit', kwargs={
43 'project_slug': project.slug
44 })
45
46 def get_urls(self):
47 return [(
48 r'^projects/(?P<project_slug>[-\w_]+)/moderators/$',
49 views.DashboardProjectModeratorsView.as_view(component=self),
50 'dashboard-moderators-edit'
51 )]
52
53
54 class TopicComponent(ProjectFormComponent):
55 identifier = 'topics'
56 weight = 33
57 label = _('Topics')
58
59 form_title = _('Edit topics')
60 form_class = forms.TopicForm
61 form_template_name = 'liqd_product_projects/project_topics.html'
62
63
64 components.register_project(ModeratorsComponent())
65 components.register_project(ParticipantsComponent())
66 components.register_project(TopicComponent())
67
[end of liqd_product/apps/projects/dashboard.py]
[start of liqd_product/apps/projects/forms.py]
1 from django import forms
2 from django.contrib.auth import get_user_model
3 from django.core.exceptions import ValidationError
4 from django.utils.translation import ugettext_lazy as _
5
6 from adhocracy4.dashboard.forms import ProjectDashboardForm
7 from adhocracy4.projects.models import Project
8 from liqd_product.apps.users import fields as user_fields
9
10 from .models import ModeratorInvite
11 from .models import ParticipantInvite
12
13 User = get_user_model()
14
15
16 class InviteForm(forms.ModelForm):
17 accept = forms.CharField(required=False)
18 reject = forms.CharField(required=False)
19
20 def clean(self):
21 data = self.data
22 if 'accept' not in data and 'reject' not in data:
23 raise ValidationError('Reject or accept')
24 return data
25
26 def is_accepted(self):
27 data = self.data
28 return 'accept' in data and 'reject' not in data
29
30
31 class ParticipantInviteForm(InviteForm):
32
33 class Meta:
34 model = ParticipantInvite
35 fields = ['accept', 'reject']
36
37
38 class ModeratorInviteForm(InviteForm):
39
40 class Meta:
41 model = ModeratorInvite
42 fields = ['accept', 'reject']
43
44
45 class InviteUsersFromEmailForm(forms.Form):
46 add_users = user_fields.CommaSeparatedEmailField(
47 required=False,
48 label=_('Invite users via email')
49 )
50
51 add_users_upload = user_fields.EmailFileField(
52 required=False,
53 label=_('Invite users via file upload'),
54 help_text=_('Upload a csv file containing email addresses.')
55 )
56
57 def __init__(self, *args, **kwargs):
58 labels = kwargs.pop('labels', None)
59 super().__init__(*args, **kwargs)
60
61 if labels:
62 self.fields['add_users'].label = labels[0]
63 self.fields['add_users_upload'].label = labels[1]
64
65 def clean(self):
66 cleaned_data = super().clean()
67 add_users = self.data.get('add_users')
68 add_users_upload = self.files.get('add_users_upload')
69 if not self.errors and not add_users and not add_users_upload:
70 raise ValidationError(
71 _('Please enter email addresses or upload a file'))
72 return cleaned_data
73
74
75 class TopicForm(ProjectDashboardForm):
76
77 class Meta:
78 model = Project
79 fields = ['topics']
80 required_for_project_publish = ['topics']
81
[end of liqd_product/apps/projects/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liqd_product/apps/projects/dashboard.py b/liqd_product/apps/projects/dashboard.py
--- a/liqd_product/apps/projects/dashboard.py
+++ b/liqd_product/apps/projects/dashboard.py
@@ -2,10 +2,8 @@
from django.utils.translation import ugettext_lazy as _
from adhocracy4.dashboard import DashboardComponent
-from adhocracy4.dashboard import ProjectFormComponent
from adhocracy4.dashboard import components
-from . import forms
from . import views
@@ -51,16 +49,5 @@
)]
-class TopicComponent(ProjectFormComponent):
- identifier = 'topics'
- weight = 33
- label = _('Topics')
-
- form_title = _('Edit topics')
- form_class = forms.TopicForm
- form_template_name = 'liqd_product_projects/project_topics.html'
-
-
components.register_project(ModeratorsComponent())
components.register_project(ParticipantsComponent())
-components.register_project(TopicComponent())
diff --git a/liqd_product/apps/projects/forms.py b/liqd_product/apps/projects/forms.py
--- a/liqd_product/apps/projects/forms.py
+++ b/liqd_product/apps/projects/forms.py
@@ -3,8 +3,6 @@
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _
-from adhocracy4.dashboard.forms import ProjectDashboardForm
-from adhocracy4.projects.models import Project
from liqd_product.apps.users import fields as user_fields
from .models import ModeratorInvite
@@ -70,11 +68,3 @@
raise ValidationError(
_('Please enter email addresses or upload a file'))
return cleaned_data
-
-
-class TopicForm(ProjectDashboardForm):
-
- class Meta:
- model = Project
- fields = ['topics']
- required_for_project_publish = ['topics']
| {"golden_diff": "diff --git a/liqd_product/apps/projects/dashboard.py b/liqd_product/apps/projects/dashboard.py\n--- a/liqd_product/apps/projects/dashboard.py\n+++ b/liqd_product/apps/projects/dashboard.py\n@@ -2,10 +2,8 @@\n from django.utils.translation import ugettext_lazy as _\n \n from adhocracy4.dashboard import DashboardComponent\n-from adhocracy4.dashboard import ProjectFormComponent\n from adhocracy4.dashboard import components\n \n-from . import forms\n from . import views\n \n \n@@ -51,16 +49,5 @@\n )]\n \n \n-class TopicComponent(ProjectFormComponent):\n- identifier = 'topics'\n- weight = 33\n- label = _('Topics')\n-\n- form_title = _('Edit topics')\n- form_class = forms.TopicForm\n- form_template_name = 'liqd_product_projects/project_topics.html'\n-\n-\n components.register_project(ModeratorsComponent())\n components.register_project(ParticipantsComponent())\n-components.register_project(TopicComponent())\ndiff --git a/liqd_product/apps/projects/forms.py b/liqd_product/apps/projects/forms.py\n--- a/liqd_product/apps/projects/forms.py\n+++ b/liqd_product/apps/projects/forms.py\n@@ -3,8 +3,6 @@\n from django.core.exceptions import ValidationError\n from django.utils.translation import ugettext_lazy as _\n \n-from adhocracy4.dashboard.forms import ProjectDashboardForm\n-from adhocracy4.projects.models import Project\n from liqd_product.apps.users import fields as user_fields\n \n from .models import ModeratorInvite\n@@ -70,11 +68,3 @@\n raise ValidationError(\n _('Please enter email addresses or upload a file'))\n return cleaned_data\n-\n-\n-class TopicForm(ProjectDashboardForm):\n-\n- class Meta:\n- model = Project\n- fields = ['topics']\n- required_for_project_publish = ['topics']\n", "issue": "Mandatory mB topic selection on bet.in ( US #1775)\nAll projects need a topic on bet.in now, even existing ones. Can we remove that requirement? We haven't yet thought about how to implement topics on bet.in and there are not shown anywhere, so it would probably be confusing for initiators.\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import ProjectFormComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import forms\nfrom . 
import views\n\n\nclass ParticipantsComponent(DashboardComponent):\n identifier = 'participants'\n weight = 30\n label = _('Participants')\n\n def is_effective(self, project):\n return not project.is_draft and project.is_private\n\n def get_base_url(self, project):\n return reverse('a4dashboard:dashboard-participants-edit', kwargs={\n 'project_slug': project.slug\n })\n\n def get_urls(self):\n return [(\n r'^projects/(?P<project_slug>[-\\w_]+)/participants/$',\n views.DashboardProjectParticipantsView.as_view(component=self),\n 'dashboard-participants-edit'\n )]\n\n\nclass ModeratorsComponent(DashboardComponent):\n identifier = 'moderators'\n weight = 32\n label = _('Moderators')\n\n def is_effective(self, project):\n return True\n\n def get_base_url(self, project):\n return reverse('a4dashboard:dashboard-moderators-edit', kwargs={\n 'project_slug': project.slug\n })\n\n def get_urls(self):\n return [(\n r'^projects/(?P<project_slug>[-\\w_]+)/moderators/$',\n views.DashboardProjectModeratorsView.as_view(component=self),\n 'dashboard-moderators-edit'\n )]\n\n\nclass TopicComponent(ProjectFormComponent):\n identifier = 'topics'\n weight = 33\n label = _('Topics')\n\n form_title = _('Edit topics')\n form_class = forms.TopicForm\n form_template_name = 'liqd_product_projects/project_topics.html'\n\n\ncomponents.register_project(ModeratorsComponent())\ncomponents.register_project(ParticipantsComponent())\ncomponents.register_project(TopicComponent())\n", "path": "liqd_product/apps/projects/dashboard.py"}, {"content": "from django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.dashboard.forms import ProjectDashboardForm\nfrom adhocracy4.projects.models import Project\nfrom liqd_product.apps.users import fields as user_fields\n\nfrom .models import ModeratorInvite\nfrom .models import ParticipantInvite\n\nUser = get_user_model()\n\n\nclass InviteForm(forms.ModelForm):\n accept = forms.CharField(required=False)\n reject = forms.CharField(required=False)\n\n def clean(self):\n data = self.data\n if 'accept' not in data and 'reject' not in data:\n raise ValidationError('Reject or accept')\n return data\n\n def is_accepted(self):\n data = self.data\n return 'accept' in data and 'reject' not in data\n\n\nclass ParticipantInviteForm(InviteForm):\n\n class Meta:\n model = ParticipantInvite\n fields = ['accept', 'reject']\n\n\nclass ModeratorInviteForm(InviteForm):\n\n class Meta:\n model = ModeratorInvite\n fields = ['accept', 'reject']\n\n\nclass InviteUsersFromEmailForm(forms.Form):\n add_users = user_fields.CommaSeparatedEmailField(\n required=False,\n label=_('Invite users via email')\n )\n\n add_users_upload = user_fields.EmailFileField(\n required=False,\n label=_('Invite users via file upload'),\n help_text=_('Upload a csv file containing email addresses.')\n )\n\n def __init__(self, *args, **kwargs):\n labels = kwargs.pop('labels', None)\n super().__init__(*args, **kwargs)\n\n if labels:\n self.fields['add_users'].label = labels[0]\n self.fields['add_users_upload'].label = labels[1]\n\n def clean(self):\n cleaned_data = super().clean()\n add_users = self.data.get('add_users')\n add_users_upload = self.files.get('add_users_upload')\n if not self.errors and not add_users and not add_users_upload:\n raise ValidationError(\n _('Please enter email addresses or upload a file'))\n return cleaned_data\n\n\nclass TopicForm(ProjectDashboardForm):\n\n class Meta:\n 
model = Project\n fields = ['topics']\n required_for_project_publish = ['topics']\n", "path": "liqd_product/apps/projects/forms.py"}]} | 1,807 | 388 |
gh_patches_debug_13265 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-569 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `integtest.sh` from all plugin repos
The [integtest.sh](https://github.com/opensearch-project/opensearch-build/blob/main/bundle-workflow/scripts/default/integtest.sh) tool contains the logic to run integration tests for a plugin. This logic is largely common across plugins, so it has been moved to the `opensearch-build` repo and can therefore be removed from the individual plugin repos.
However, if a plugin requires custom logic that the standard tool doesn't provide, it can continue maintaining its own integtest.sh. In that case, when the integration tests are run, a plugin-local integtest.sh takes precedence over the standard default integtest.sh in the `opensearch-build` repo. This precedence order is defined in ScriptFinder [here](https://github.com/opensearch-project/opensearch-build/blob/84f2fa1cf15abe314aee62dbd2cb39bf2c9bb65f/bundle-workflow/src/paths/script_finder.py#L65).
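As a hedged illustration of that precedence (a simplified sketch, not the actual ScriptFinder code), the lookup could be expressed as:

```python
import os

def find_integ_test_script(component_name, git_dir, component_scripts_path, default_scripts_path):
    # Checked in order: the first existing path wins, so a plugin-local
    # integtest.sh overrides the shared default in opensearch-build.
    candidates = [
        os.path.join(git_dir, "integtest.sh"),
        os.path.join(git_dir, "scripts/integtest.sh"),
        os.path.join(component_scripts_path, component_name, "integtest.sh"),
        os.path.join(default_scripts_path, "integtest.sh"),
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    raise FileNotFoundError("integtest.sh not found in any known location")
```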
Action items:
Raise PRs on all plugin repos and remove integtest.sh
- [ ] index-management
- [ ] anomaly-detection,
- [ ] alerting
- [ ] asynchronous-search
- [ ] k-NN
Changes will need to be backported into 1.x branches if such exist, too.
</issue>
<code>
[start of bundle-workflow/src/paths/script_finder.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9
10 class ScriptFinder:
11 class ScriptNotFoundError(Exception):
12 def __init__(self, kind, paths):
13 self.kind = kind
14 self.paths = paths
15 super().__init__(f"Could not find {kind} script. Looked in {paths}.")
16
17 component_scripts_path = os.path.realpath(
18 os.path.join(
19 os.path.dirname(os.path.abspath(__file__)), "../../scripts/components"
20 )
21 )
22
23 default_scripts_path = os.path.realpath(
24 os.path.join(
25 os.path.dirname(os.path.abspath(__file__)), "../../scripts/default"
26 )
27 )
28
29 """
30 ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.
31
32 For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,
33 it will look in the following locations, in order:
34 * Root of the Git repository
35 * /scripts/<script-name> in the Git repository
36 * <component_scripts_path>/<component_name>/<script-name>
37 * <default_scripts_path>/<script-name>
38
39 For install.sh scripts, given a component name, it will look in the following locations, in order:
40 * <component_scripts_path>/<component_name>/<script-name>
41 * <default_scripts_path>/<script-name>
42 """
43
44 @classmethod
45 def __find_script(cls, name, paths):
46 script = next(filter(lambda path: os.path.exists(path), paths), None)
47 if script is None:
48 raise ScriptFinder.ScriptNotFoundError(name, paths)
49 return script
50
51 @classmethod
52 def find_build_script(cls, component_name, git_dir):
53 paths = [
54 os.path.realpath(os.path.join(git_dir, "build.sh")),
55 os.path.realpath(os.path.join(git_dir, "scripts/build.sh")),
56 os.path.realpath(
57 os.path.join(cls.component_scripts_path, component_name, "build.sh")
58 ),
59 os.path.realpath(os.path.join(cls.default_scripts_path, "build.sh")),
60 ]
61
62 return cls.__find_script("build.sh", paths)
63
64 @classmethod
65 def find_integ_test_script(cls, component_name, git_dir):
66 paths = [
67 # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497
68 # os.path.realpath(os.path.join(git_dir, "integtest.sh")),
69 # os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
70 os.path.realpath(
71 os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
72 ),
73 os.path.realpath(os.path.join(cls.default_scripts_path, "integtest.sh")),
74 ]
75
76 return cls.__find_script("integtest.sh", paths)
77
78 @classmethod
79 def find_install_script(cls, component_name):
80 paths = [
81 os.path.realpath(
82 os.path.join(cls.component_scripts_path, component_name, "install.sh")
83 ),
84 os.path.realpath(os.path.join(cls.default_scripts_path, "install.sh")),
85 ]
86
87 return cls.__find_script("install.sh", paths)
88
89 @classmethod
90 def find_bwc_test_script(cls, component_name, git_dir):
91 paths = [
92 os.path.realpath(os.path.join(git_dir, "bwctest.sh")),
93 os.path.realpath(os.path.join(git_dir, "scripts/bwctest.sh")),
94 os.path.realpath(
95 os.path.join(cls.component_scripts_path, component_name, "bwctest.sh")
96 ),
97 os.path.realpath(os.path.join(cls.default_scripts_path, "bwctest.sh")),
98 ]
99
100 return cls.__find_script("bwctest.sh", paths)
101
[end of bundle-workflow/src/paths/script_finder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py
--- a/bundle-workflow/src/paths/script_finder.py
+++ b/bundle-workflow/src/paths/script_finder.py
@@ -64,9 +64,8 @@
@classmethod
def find_integ_test_script(cls, component_name, git_dir):
paths = [
- # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497
- # os.path.realpath(os.path.join(git_dir, "integtest.sh")),
- # os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
+ os.path.realpath(os.path.join(git_dir, "integtest.sh")),
+ os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
os.path.realpath(
os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
),
| {"golden_diff": "diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py\n--- a/bundle-workflow/src/paths/script_finder.py\n+++ b/bundle-workflow/src/paths/script_finder.py\n@@ -64,9 +64,8 @@\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n- # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n- # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n- # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n+ os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n+ os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n", "issue": "Remove `integtest.sh` from all plugin repos\nThe [integtest.sh](https://github.com/opensearch-project/opensearch-build/blob/main/bundle-workflow/scripts/default/integtest.sh) tool contains the logic to run integration tests for a plugin. This logic is mostly common across most plugins, so it has been moved to `opensearch-build` repo. Thus it can be removed from the individual plugin repos.\r\nHowever, if a plugin requires some custom logic to run integtests, which the standard tool doesn't provide, they can continue maintaining this integtest.sh in their own repo. In this case, when the integration tests are run, if a plugin has a integtest.sh tool in their repo, it gets precedence over the standard default integtest.sh in the `opensearch-build` repo. This precedence order logic is defined in ScriptFinder [here](https://github.com/opensearch-project/opensearch-build/blob/84f2fa1cf15abe314aee62dbd2cb39bf2c9bb65f/bundle-workflow/src/paths/script_finder.py#L65) \r\n\r\nAction items:\r\n\r\nRaise PRs on all plugin repos and remove integtest.sh \r\n- [ ] index-management\r\n- [ ] anomaly-detection,\r\n- [ ] alerting\r\n- [ ] asynchronous-search\r\n- [ ] k-NN\r\n\r\nChanges will need to be backported into 1.x branches if such exist, too.\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. 
Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n\n @classmethod\n def find_bwc_test_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"bwctest.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/bwctest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"bwctest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"bwctest.sh\")),\n ]\n\n return cls.__find_script(\"bwctest.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}]} | 1,867 | 215 |
gh_patches_debug_31101 | rasdani/github-patches | git_diff | StackStorm__st2-4592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The api key in the st2api log is not obfuscated
##### SUMMARY
The user found in clean API key in query request (for the load balancer health check)
```GET /api/v1/?st2-api-key=foo HTTP/1.1```
##### ISSUE TYPE
- Bug Report
##### STACKSTORM VERSION
st2 2.10.3, on Python 2.7.12
</issue>
<code>
[start of st2common/st2common/middleware/logging.py]
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17 import time
18 import types
19 import itertools
20
21 from st2common.constants.api import REQUEST_ID_HEADER
22 from st2common import log as logging
23 from st2common.router import Request, NotFoundException
24
25 LOG = logging.getLogger(__name__)
26
27 try:
28 clock = time.perf_counter
29 except AttributeError:
30 clock = time.time
31
32
33 class LoggingMiddleware(object):
34 """
35 Logs all incoming requests and outgoing responses
36 """
37
38 def __init__(self, app, router):
39 self.app = app
40 self.router = router
41
42 def __call__(self, environ, start_response):
43 start_time = clock()
44 status_code = []
45 content_length = []
46
47 request = Request(environ)
48
49 # Log the incoming request
50 values = {
51 'method': request.method,
52 'path': request.path,
53 'remote_addr': request.remote_addr,
54 'query': request.GET.dict_of_lists(),
55 'request_id': request.headers.get(REQUEST_ID_HEADER, None)
56 }
57
58 LOG.info('%(request_id)s - %(method)s %(path)s with query=%(query)s' %
59 values, extra=values)
60
61 def custom_start_response(status, headers, exc_info=None):
62 status_code.append(int(status.split(' ')[0]))
63
64 for name, value in headers:
65 if name.lower() == 'content-length':
66 content_length.append(int(value))
67 break
68
69 return start_response(status, headers, exc_info)
70
71 retval = self.app(environ, custom_start_response)
72
73 try:
74 endpoint, path_vars = self.router.match(request)
75 except NotFoundException:
76 endpoint = {}
77
78 log_result = endpoint.get('x-log-result', True)
79
80 if isinstance(retval, (types.GeneratorType, itertools.chain)):
81 # Note: We don't log the result when return value is a generator, because this would
82 # result in calling str() on the generator and as such, exhausting it
83 content_length = [float('inf')]
84 log_result = False
85
86 # Log the response
87 values = {
88 'method': request.method,
89 'path': request.path,
90 'remote_addr': request.remote_addr,
91 'status': status_code[0],
92 'runtime': float("{0:.3f}".format((clock() - start_time) * 10**3)),
93 'content_length': content_length[0] if content_length else len(b''.join(retval)),
94 'request_id': request.headers.get(REQUEST_ID_HEADER, None)
95 }
96
97 log_msg = '%(request_id)s - %(status)s %(content_length)s %(runtime)sms' % (values)
98 LOG.info(log_msg, extra=values)
99
100 if log_result:
101 values['result'] = retval[0]
102 log_msg = ('%(request_id)s - %(status)s %(content_length)s %(runtime)sms\n%(result)s' %
103 (values))
104 LOG.debug(log_msg, extra=values)
105
106 return retval
107
[end of st2common/st2common/middleware/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2common/st2common/middleware/logging.py b/st2common/st2common/middleware/logging.py
--- a/st2common/st2common/middleware/logging.py
+++ b/st2common/st2common/middleware/logging.py
@@ -14,16 +14,28 @@
# limitations under the License.
from __future__ import absolute_import
+
import time
import types
import itertools
+from oslo_config import cfg
+
from st2common.constants.api import REQUEST_ID_HEADER
+from st2common.constants.auth import QUERY_PARAM_ATTRIBUTE_NAME
+from st2common.constants.auth import QUERY_PARAM_API_KEY_ATTRIBUTE_NAME
+from st2common.constants.secrets import MASKED_ATTRIBUTE_VALUE
+from st2common.constants.secrets import MASKED_ATTRIBUTES_BLACKLIST
from st2common import log as logging
from st2common.router import Request, NotFoundException
LOG = logging.getLogger(__name__)
+SECRET_QUERY_PARAMS = [
+ QUERY_PARAM_ATTRIBUTE_NAME,
+ QUERY_PARAM_API_KEY_ATTRIBUTE_NAME
+] + MASKED_ATTRIBUTES_BLACKLIST
+
try:
clock = time.perf_counter
except AttributeError:
@@ -46,12 +58,20 @@
request = Request(environ)
+ query_params = request.GET.dict_of_lists()
+
+ # Mask secret / sensitive query params
+ secret_query_params = SECRET_QUERY_PARAMS + cfg.CONF.log.mask_secrets_blacklist
+ for param_name in secret_query_params:
+ if param_name in query_params:
+ query_params[param_name] = MASKED_ATTRIBUTE_VALUE
+
# Log the incoming request
values = {
'method': request.method,
'path': request.path,
'remote_addr': request.remote_addr,
- 'query': request.GET.dict_of_lists(),
+ 'query': query_params,
'request_id': request.headers.get(REQUEST_ID_HEADER, None)
}
| {"golden_diff": "diff --git a/st2common/st2common/middleware/logging.py b/st2common/st2common/middleware/logging.py\n--- a/st2common/st2common/middleware/logging.py\n+++ b/st2common/st2common/middleware/logging.py\n@@ -14,16 +14,28 @@\n # limitations under the License.\n \n from __future__ import absolute_import\n+\n import time\n import types\n import itertools\n \n+from oslo_config import cfg\n+\n from st2common.constants.api import REQUEST_ID_HEADER\n+from st2common.constants.auth import QUERY_PARAM_ATTRIBUTE_NAME\n+from st2common.constants.auth import QUERY_PARAM_API_KEY_ATTRIBUTE_NAME\n+from st2common.constants.secrets import MASKED_ATTRIBUTE_VALUE\n+from st2common.constants.secrets import MASKED_ATTRIBUTES_BLACKLIST\n from st2common import log as logging\n from st2common.router import Request, NotFoundException\n \n LOG = logging.getLogger(__name__)\n \n+SECRET_QUERY_PARAMS = [\n+ QUERY_PARAM_ATTRIBUTE_NAME,\n+ QUERY_PARAM_API_KEY_ATTRIBUTE_NAME\n+] + MASKED_ATTRIBUTES_BLACKLIST\n+\n try:\n clock = time.perf_counter\n except AttributeError:\n@@ -46,12 +58,20 @@\n \n request = Request(environ)\n \n+ query_params = request.GET.dict_of_lists()\n+\n+ # Mask secret / sensitive query params\n+ secret_query_params = SECRET_QUERY_PARAMS + cfg.CONF.log.mask_secrets_blacklist\n+ for param_name in secret_query_params:\n+ if param_name in query_params:\n+ query_params[param_name] = MASKED_ATTRIBUTE_VALUE\n+\n # Log the incoming request\n values = {\n 'method': request.method,\n 'path': request.path,\n 'remote_addr': request.remote_addr,\n- 'query': request.GET.dict_of_lists(),\n+ 'query': query_params,\n 'request_id': request.headers.get(REQUEST_ID_HEADER, None)\n }\n", "issue": "The api key in the st2api log is not obfuscated\n##### SUMMARY\r\nThe user found in clean API key in query request (for the load balancer health check)\r\n```GET /api/v1/?st2-api-key=foo HTTP/1.1```\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n \r\n##### STACKSTORM VERSION\r\nst2 2.10.3, on Python 2.7.12\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport time\nimport types\nimport itertools\n\nfrom st2common.constants.api import REQUEST_ID_HEADER\nfrom st2common import log as logging\nfrom st2common.router import Request, NotFoundException\n\nLOG = logging.getLogger(__name__)\n\ntry:\n clock = time.perf_counter\nexcept AttributeError:\n clock = time.time\n\n\nclass LoggingMiddleware(object):\n \"\"\"\n Logs all incoming requests and outgoing responses\n \"\"\"\n\n def __init__(self, app, router):\n self.app = app\n self.router = router\n\n def __call__(self, environ, start_response):\n start_time = clock()\n status_code = []\n content_length = []\n\n request = Request(environ)\n\n # Log the incoming request\n values = {\n 'method': request.method,\n 'path': request.path,\n 'remote_addr': request.remote_addr,\n 'query': request.GET.dict_of_lists(),\n 'request_id': request.headers.get(REQUEST_ID_HEADER, None)\n }\n\n LOG.info('%(request_id)s - %(method)s %(path)s with query=%(query)s' %\n values, extra=values)\n\n def custom_start_response(status, headers, exc_info=None):\n status_code.append(int(status.split(' ')[0]))\n\n for name, value in headers:\n if name.lower() == 'content-length':\n content_length.append(int(value))\n break\n\n return start_response(status, headers, exc_info)\n\n retval = self.app(environ, custom_start_response)\n\n try:\n endpoint, path_vars = self.router.match(request)\n except NotFoundException:\n endpoint = {}\n\n log_result = endpoint.get('x-log-result', True)\n\n if isinstance(retval, (types.GeneratorType, itertools.chain)):\n # Note: We don't log the result when return value is a generator, because this would\n # result in calling str() on the generator and as such, exhausting it\n content_length = [float('inf')]\n log_result = False\n\n # Log the response\n values = {\n 'method': request.method,\n 'path': request.path,\n 'remote_addr': request.remote_addr,\n 'status': status_code[0],\n 'runtime': float(\"{0:.3f}\".format((clock() - start_time) * 10**3)),\n 'content_length': content_length[0] if content_length else len(b''.join(retval)),\n 'request_id': request.headers.get(REQUEST_ID_HEADER, None)\n }\n\n log_msg = '%(request_id)s - %(status)s %(content_length)s %(runtime)sms' % (values)\n LOG.info(log_msg, extra=values)\n\n if log_result:\n values['result'] = retval[0]\n log_msg = ('%(request_id)s - %(status)s %(content_length)s %(runtime)sms\\n%(result)s' %\n (values))\n LOG.debug(log_msg, extra=values)\n\n return retval\n", "path": "st2common/st2common/middleware/logging.py"}]} | 1,658 | 407 |
gh_patches_debug_14129 | rasdani/github-patches | git_diff | freedomofpress__securedrop-237 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Possible path confusion / traversal via imprecise store.verify()
The method `store.verify()` checks file paths provided via URL and other ways and raises an exception if they are not matching the validation criteria.
A problem with this validation process was spotted: `os.path.commonprefix()` is not sufficient to check if the path is inside the configured store path. It only compares character by character. Thus allows to navigate into another folder when they share the same start string.
```
Example: config.STORE_DIR = '/opt/store'
PoC: store.verify('/opt/store_backup')
```
Mitigation has to make sure, that the path is inside the configured store folder. A mitigation could be to add another check in `store.verify()` with `os.path.relpath(p, config.STORE_DIR)`. If the absolute path p is not inside the store directory, `os.path.relpath()` will return a string starting with '../'.
Example:
```
os.path.relpath('/opt/store_backup', config.STORE_DIR) == '../store_backup'
```
**Reported as part of the cure53 audit of 0.2 as: SD-01-006**
</issue>
<code>
[start of securedrop/store.py]
1 # -*- coding: utf-8 -*-
2 import os
3 import re
4 import config
5 import zipfile
6 import crypto_util
7 import uuid
8 import tempfile
9
10 VALIDATE_FILENAME = re.compile(
11 "^(reply-)?[a-f0-9-]+(_msg|_doc\.zip|)\.gpg$").match
12
13
14 class PathException(Exception):
15
16 '''An exception raised by `store.verify` when it encounters a bad path. A path
17 can be bad when it is not absolute, not normalized, not within
18 `config.STORE_DIR`, or doesn't match the filename format.
19 '''
20 pass
21
22
23 def verify(p):
24 '''Assert that the path is absolute, normalized, inside `config.STORE_DIR`, and
25 matches the filename format.
26 '''
27 if not os.path.isabs(config.STORE_DIR):
28 raise PathException("config.STORE_DIR(%s) is not absolute" % (
29 config.STORE_DIR, ))
30
31 # os.path.abspath makes the path absolute and normalizes '/foo/../bar' to
32 # '/bar', etc. We have to check that the path is normalized before checking
33 # that it starts with the `config.STORE_DIR` or else a malicious actor could
34 # append a bunch of '../../..' to access files outside of the store.
35 if not p == os.path.abspath(p):
36 raise PathException("The path is not absolute and/or normalized")
37
38 if os.path.commonprefix([config.STORE_DIR, p]) != config.STORE_DIR:
39 raise PathException("Invalid directory %s" % (p, ))
40
41 filename = os.path.basename(p)
42 ext = os.path.splitext(filename)[-1]
43
44 if os.path.isfile(p):
45 if filename == '_FLAG':
46 return True
47 if ext != '.gpg':
48 # if there's an extension, verify it's a GPG
49 raise PathException("Invalid file extension %s" % (ext, ))
50 if not VALIDATE_FILENAME(filename):
51 raise PathException("Invalid filename %s" % (filename, ))
52
53
54 def path(*s):
55 '''Get the normalized, absolute file path, within `config.STORE_DIR`.'''
56 joined = os.path.join(os.path.abspath(config.STORE_DIR), *s)
57 absolute = os.path.abspath(joined)
58 verify(absolute)
59 return absolute
60
61
62 def get_bulk_archive(filenames):
63 zip_file = tempfile.NamedTemporaryFile(prefix='tmp_securedrop_bulk_dl_')
64 with zipfile.ZipFile(zip_file, 'w') as zip:
65 for filename in filenames:
66 verify(filename)
67 zip.write(filename, arcname=os.path.basename(filename))
68 return zip_file
69
70
71 def log(msg):
72 file(path('NOTES'), 'a').write(msg)
73
[end of securedrop/store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/store.py b/securedrop/store.py
--- a/securedrop/store.py
+++ b/securedrop/store.py
@@ -35,13 +35,13 @@
if not p == os.path.abspath(p):
raise PathException("The path is not absolute and/or normalized")
- if os.path.commonprefix([config.STORE_DIR, p]) != config.STORE_DIR:
+ # Check that the path p is in config.STORE_DIR
+ if os.path.relpath(p, config.STORE_DIR).startswith('..'):
raise PathException("Invalid directory %s" % (p, ))
- filename = os.path.basename(p)
- ext = os.path.splitext(filename)[-1]
-
if os.path.isfile(p):
+ filename = os.path.basename(p)
+ ext = os.path.splitext(filename)[-1]
if filename == '_FLAG':
return True
if ext != '.gpg':
| {"golden_diff": "diff --git a/securedrop/store.py b/securedrop/store.py\n--- a/securedrop/store.py\n+++ b/securedrop/store.py\n@@ -35,13 +35,13 @@\n if not p == os.path.abspath(p):\n raise PathException(\"The path is not absolute and/or normalized\")\n \n- if os.path.commonprefix([config.STORE_DIR, p]) != config.STORE_DIR:\n+ # Check that the path p is in config.STORE_DIR\n+ if os.path.relpath(p, config.STORE_DIR).startswith('..'):\n raise PathException(\"Invalid directory %s\" % (p, ))\n \n- filename = os.path.basename(p)\n- ext = os.path.splitext(filename)[-1]\n-\n if os.path.isfile(p):\n+ filename = os.path.basename(p)\n+ ext = os.path.splitext(filename)[-1]\n if filename == '_FLAG':\n return True\n if ext != '.gpg':\n", "issue": "Possible path confusion / traversal via imprecise store.verify()\nThe method `store.verify()` checks file paths provided via URL and other ways and raises an exception if they are not matching the validation criteria.\n\nA problem with this validation process was spotted: `os.path.commonprefix()` is not sufficient to check if the path is inside the configured store path. It only compares character by character. Thus allows to navigate into another folder when they share the same start string.\n\n```\nExample: config.STORE_DIR = '/opt/store'\nPoC: store.verify('/opt/store_backup')\n```\n\nMitigation has to make sure, that the path is inside the configured store folder. A mitigation could be to add another check in `store.verify()` with `os.path.relpath(p, config.STORE_DIR)`. If the absolute path p is not inside the store directory, `os.path.relpath()` will return a string starting with '../'.\n\nExample:\n\n```\nos.path.relpath('/opt/store_backup', config.STORE_DIR) == '../store_backup'\n```\n\n**Reported as part of the cure53 audit of 0.2 as: SD-01-006**\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport re\nimport config\nimport zipfile\nimport crypto_util\nimport uuid\nimport tempfile\n\nVALIDATE_FILENAME = re.compile(\n \"^(reply-)?[a-f0-9-]+(_msg|_doc\\.zip|)\\.gpg$\").match\n\n\nclass PathException(Exception):\n\n '''An exception raised by `store.verify` when it encounters a bad path. A path\n can be bad when it is not absolute, not normalized, not within\n `config.STORE_DIR`, or doesn't match the filename format.\n '''\n pass\n\n\ndef verify(p):\n '''Assert that the path is absolute, normalized, inside `config.STORE_DIR`, and\n matches the filename format.\n '''\n if not os.path.isabs(config.STORE_DIR):\n raise PathException(\"config.STORE_DIR(%s) is not absolute\" % (\n config.STORE_DIR, ))\n\n # os.path.abspath makes the path absolute and normalizes '/foo/../bar' to\n # '/bar', etc. We have to check that the path is normalized before checking\n # that it starts with the `config.STORE_DIR` or else a malicious actor could\n # append a bunch of '../../..' 
to access files outside of the store.\n if not p == os.path.abspath(p):\n raise PathException(\"The path is not absolute and/or normalized\")\n\n if os.path.commonprefix([config.STORE_DIR, p]) != config.STORE_DIR:\n raise PathException(\"Invalid directory %s\" % (p, ))\n\n filename = os.path.basename(p)\n ext = os.path.splitext(filename)[-1]\n\n if os.path.isfile(p):\n if filename == '_FLAG':\n return True\n if ext != '.gpg':\n # if there's an extension, verify it's a GPG\n raise PathException(\"Invalid file extension %s\" % (ext, ))\n if not VALIDATE_FILENAME(filename):\n raise PathException(\"Invalid filename %s\" % (filename, ))\n\n\ndef path(*s):\n '''Get the normalized, absolute file path, within `config.STORE_DIR`.'''\n joined = os.path.join(os.path.abspath(config.STORE_DIR), *s)\n absolute = os.path.abspath(joined)\n verify(absolute)\n return absolute\n\n\ndef get_bulk_archive(filenames):\n zip_file = tempfile.NamedTemporaryFile(prefix='tmp_securedrop_bulk_dl_')\n with zipfile.ZipFile(zip_file, 'w') as zip:\n for filename in filenames:\n verify(filename)\n zip.write(filename, arcname=os.path.basename(filename))\n return zip_file\n\n\ndef log(msg):\n file(path('NOTES'), 'a').write(msg)\n", "path": "securedrop/store.py"}]} | 1,498 | 209 |
gh_patches_debug_26522 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3741 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Assigning group members: memberlist batch navigation is broken.
## groupmembers listing batch and `showAll` link is broken
### What I did:
Assign members to a group:
- click on "show all" in the user filter.
- if you have lots of users the list is batched
- click on the next batch page
### What I expect to happen:
the next user batch list is shown
### What actually happened:
the user list is empty
### What version of Plone/ Addons I am using:
Plone 6.0.2
### Additional
The "toggle all" checkboxes do not work. This can be solved with `pat-checklist` ...
</issue>
<code>
[start of Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py]
1 from Products.CMFCore.utils import getToolByName
2 from Products.CMFPlone import PloneMessageFactory as _
3 from Products.CMFPlone.controlpanel.browser.usergroups import (
4 UsersGroupsControlPanelView,
5 )
6 from Products.CMFPlone.utils import normalizeString
7 from zExceptions import Forbidden
8
9
10 class GroupMembershipControlPanel(UsersGroupsControlPanelView):
11
12 def update(self):
13 self.groupname = getattr(self.request, 'groupname')
14 self.gtool = getToolByName(self, 'portal_groups')
15 self.mtool = getToolByName(self, 'portal_membership')
16 self.group = self.gtool.getGroupById(self.groupname)
17 if self.group is None:
18 return
19
20 self.grouptitle = self.group.getGroupTitleOrName() or self.groupname
21
22 self.request.set('grouproles', self.group.getRoles()
23 if self.group else [])
24 self.canAddUsers = True
25 if 'Manager' in self.request.get('grouproles') and not self.is_zope_manager:
26 self.canAddUsers = False
27
28 self.groupquery = self.makeQuery(groupname=self.groupname)
29 self.groupkeyquery = self.makeQuery(key=self.groupname)
30
31 form = self.request.form
32 submitted = form.get('form.submitted', False)
33
34 self.searchResults = []
35 self.searchString = ''
36 self.newSearch = False
37
38 if submitted:
39 # add/delete before we search so we don't show stale results
40 toAdd = form.get('add', [])
41 if toAdd:
42 if not self.canAddUsers:
43 raise Forbidden
44
45 for u in toAdd:
46 self.gtool.addPrincipalToGroup(
47 u, self.groupname, self.request)
48 self.context.plone_utils.addPortalMessage(_('Changes made.'))
49
50 toDelete = form.get('delete', [])
51 if toDelete:
52 for u in toDelete:
53 self.gtool.removePrincipalFromGroup(
54 u, self.groupname, self.request)
55 self.context.plone_utils.addPortalMessage(_('Changes made.'))
56
57 search = form.get('form.button.Search', None) is not None
58 edit = form.get('form.button.Edit', None) is not None and toDelete
59 add = form.get('form.button.Add', None) is not None and toAdd
60 findAll = form.get('form.button.FindAll', None) is not None and \
61 not self.many_users
62 # The search string should be cleared when one of the
63 # non-search buttons has been clicked.
64 if findAll or edit or add:
65 form['searchstring'] = ''
66 self.searchString = form.get('searchstring', '')
67 if findAll or bool(self.searchString):
68 self.searchResults = self.getPotentialMembers(
69 self.searchString)
70
71 if search or findAll:
72 self.newSearch = True
73
74 self.groupMembers = self.getMembers()
75
76 def __call__(self):
77 self.update()
78 return self.index()
79
80 def isGroup(self, itemName):
81 return self.gtool.isGroup(itemName)
82
83 def getMembers(self):
84 searchResults = self.gtool.getGroupMembers(self.groupname)
85
86 groupResults = []
87 userResults = []
88 for principal_id in searchResults:
89 principal = self.gtool.getGroupById(principal_id)
90 if principal is not None:
91 groupResults.append(principal)
92 continue
93 principal = self.mtool.getMemberById(principal_id)
94 if principal is not None:
95 userResults.append(principal)
96
97 groupResults.sort(key=lambda x: normalizeString(x.getGroupTitleOrName()))
98 userResults.sort(key=lambda x: normalizeString(x.getProperty('fullname') or ''))
99
100 return groupResults + userResults
101
102 def getPotentialMembers(self, searchString):
103 ignoredUsersGroups = [
104 x.id for x in self.getMembers() + [self.group, ] if x is not None]
105 return self.membershipSearch(searchString, ignore=ignoredUsersGroups)
106
[end of Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
--- a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
+++ b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
@@ -57,14 +57,21 @@
search = form.get('form.button.Search', None) is not None
edit = form.get('form.button.Edit', None) is not None and toDelete
add = form.get('form.button.Add', None) is not None and toAdd
- findAll = form.get('form.button.FindAll', None) is not None and \
- not self.many_users
+ isBatched = form.get("b_start", None) is not None
+ findAll = (
+ form.get('form.button.FindAll', None) is not None
+ and not self.many_users
+ )
+ unbatchedAll = (
+ form.get("showAll", "") == "y"
+ and not self.many_users
+ )
# The search string should be cleared when one of the
# non-search buttons has been clicked.
- if findAll or edit or add:
+ if findAll or unbatchedAll or edit or add:
form['searchstring'] = ''
self.searchString = form.get('searchstring', '')
- if findAll or bool(self.searchString):
+ if findAll or isBatched or unbatchedAll or bool(self.searchString):
self.searchResults = self.getPotentialMembers(
self.searchString)
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n--- a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n+++ b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n@@ -57,14 +57,21 @@\n search = form.get('form.button.Search', None) is not None\n edit = form.get('form.button.Edit', None) is not None and toDelete\n add = form.get('form.button.Add', None) is not None and toAdd\n- findAll = form.get('form.button.FindAll', None) is not None and \\\n- not self.many_users\n+ isBatched = form.get(\"b_start\", None) is not None\n+ findAll = (\n+ form.get('form.button.FindAll', None) is not None\n+ and not self.many_users\n+ )\n+ unbatchedAll = (\n+ form.get(\"showAll\", \"\") == \"y\"\n+ and not self.many_users\n+ )\n # The search string should be cleared when one of the\n # non-search buttons has been clicked.\n- if findAll or edit or add:\n+ if findAll or unbatchedAll or edit or add:\n form['searchstring'] = ''\n self.searchString = form.get('searchstring', '')\n- if findAll or bool(self.searchString):\n+ if findAll or isBatched or unbatchedAll or bool(self.searchString):\n self.searchResults = self.getPotentialMembers(\n self.searchString)\n", "issue": "Assigning group members: memberlist batch navigation is broken.\n## groupmembers listing batch and `showAll` link is broken\r\n\r\n### What I did:\r\n\r\nAssign members to a group:\r\n\r\n- click on \"show all\" in the user filter.\r\n- if you have lots of users the list is batched\r\n- click on the next batch page\r\n\r\n### What I expect to happen:\r\n\r\nthe next user batch list is shown\r\n\r\n### What actually happened:\r\n\r\nthe user list is empty\r\n\r\n### What version of Plone/ Addons I am using:\r\n\r\nPlone 6.0.2\r\n\r\n\r\n### Additional\r\n\r\nThe \"toggle all\" checkboxes do not work. 
This can be solved with `pat-checklist` ...\n", "before_files": [{"content": "from Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.controlpanel.browser.usergroups import (\n UsersGroupsControlPanelView,\n)\nfrom Products.CMFPlone.utils import normalizeString\nfrom zExceptions import Forbidden\n\n\nclass GroupMembershipControlPanel(UsersGroupsControlPanelView):\n\n def update(self):\n self.groupname = getattr(self.request, 'groupname')\n self.gtool = getToolByName(self, 'portal_groups')\n self.mtool = getToolByName(self, 'portal_membership')\n self.group = self.gtool.getGroupById(self.groupname)\n if self.group is None:\n return\n\n self.grouptitle = self.group.getGroupTitleOrName() or self.groupname\n\n self.request.set('grouproles', self.group.getRoles()\n if self.group else [])\n self.canAddUsers = True\n if 'Manager' in self.request.get('grouproles') and not self.is_zope_manager:\n self.canAddUsers = False\n\n self.groupquery = self.makeQuery(groupname=self.groupname)\n self.groupkeyquery = self.makeQuery(key=self.groupname)\n\n form = self.request.form\n submitted = form.get('form.submitted', False)\n\n self.searchResults = []\n self.searchString = ''\n self.newSearch = False\n\n if submitted:\n # add/delete before we search so we don't show stale results\n toAdd = form.get('add', [])\n if toAdd:\n if not self.canAddUsers:\n raise Forbidden\n\n for u in toAdd:\n self.gtool.addPrincipalToGroup(\n u, self.groupname, self.request)\n self.context.plone_utils.addPortalMessage(_('Changes made.'))\n\n toDelete = form.get('delete', [])\n if toDelete:\n for u in toDelete:\n self.gtool.removePrincipalFromGroup(\n u, self.groupname, self.request)\n self.context.plone_utils.addPortalMessage(_('Changes made.'))\n\n search = form.get('form.button.Search', None) is not None\n edit = form.get('form.button.Edit', None) is not None and toDelete\n add = form.get('form.button.Add', None) is not None and toAdd\n findAll = form.get('form.button.FindAll', None) is not None and \\\n not self.many_users\n # The search string should be cleared when one of the\n # non-search buttons has been clicked.\n if findAll or edit or add:\n form['searchstring'] = ''\n self.searchString = form.get('searchstring', '')\n if findAll or bool(self.searchString):\n self.searchResults = self.getPotentialMembers(\n self.searchString)\n\n if search or findAll:\n self.newSearch = True\n\n self.groupMembers = self.getMembers()\n\n def __call__(self):\n self.update()\n return self.index()\n\n def isGroup(self, itemName):\n return self.gtool.isGroup(itemName)\n\n def getMembers(self):\n searchResults = self.gtool.getGroupMembers(self.groupname)\n\n groupResults = []\n userResults = []\n for principal_id in searchResults:\n principal = self.gtool.getGroupById(principal_id)\n if principal is not None:\n groupResults.append(principal)\n continue\n principal = self.mtool.getMemberById(principal_id)\n if principal is not None:\n userResults.append(principal)\n\n groupResults.sort(key=lambda x: normalizeString(x.getGroupTitleOrName()))\n userResults.sort(key=lambda x: normalizeString(x.getProperty('fullname') or ''))\n\n return groupResults + userResults\n\n def getPotentialMembers(self, searchString):\n ignoredUsersGroups = [\n x.id for x in self.getMembers() + [self.group, ] if x is not None]\n return self.membershipSearch(searchString, ignore=ignoredUsersGroups)\n", "path": "Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py"}]} | 1,757 | 361 |
gh_patches_debug_3110 | rasdani/github-patches | git_diff | kserve__kserve-2018 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KServe 0.8 release tracking
/kind feature
**Describe the solution you'd like**
KServe 0.8 release tracking:
RC release Date: 12/30/2021
Release Date: 1/14/2021
KServe Model Serving:
- [x] torchserve v2 protocol
- https://github.com/kserve/kserve/pull/1870 @jagadeeshi2i
- [X] Transformer -> Predictor gRPC support
- https://github.com/kserve/kserve/pull/1933
- [X] MLServer 0.5 update
- https://github.com/kserve/kserve/pull/1853 @adriangonz
- [X] Scikit-Learn 1.0.1 and XGBoost 1.5.0 upgrade
- https://github.com/kserve/kserve/pull/1954 @yuzisun
- [X] Introduce ServingRuntime to single model serving @pvaneck @Suresh-Nakkeran
- https://github.com/kserve/kserve/pull/1901
- https://github.com/kserve/kserve/pull/1926
- [ ] Introduce new storage spec @Tomcli
- https://github.com/kserve/kserve/pull/1899
- [X] Storage initializer fixes
- https://github.com/kserve/kserve/pull/1883
- https://github.com/kserve/kserve/pull/1940
- [X] Helm chart for KServe and ModelMesh @yuzisun
- https://github.com/kserve/kserve/pull/1878
- [X] KServe SDK features and fixes
- https://github.com/kserve/kserve/pull/1949 @markwinter
- https://github.com/kserve/kserve/pull/1934 @markwinter
- https://github.com/kserve/kserve/pull/1918 @markwinter
ModelMesh:
- [X] Multi-namespace support for ModelMesh
- [X] Improve rest proxy support
- https://github.com/kserve/rest-proxy/pull/6
Models UI:
- [ ] Models Web App KServe migration @kimwnasptd
Website:
- [ ] Website doc update
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
</issue>
<code>
[start of python/kserve/setup.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 TESTS_REQUIRES = [
18 'pytest',
19 'pytest-xdist',
20 'pytest-cov',
21 'pytest-asyncio',
22 'pytest-tornasync',
23 'mypy'
24 ]
25
26 with open('requirements.txt') as f:
27 REQUIRES = f.readlines()
28
29 setuptools.setup(
30 name='kserve',
31 version='0.8.0rc0',
32 author="The KServe Authors",
33 author_email='[email protected], [email protected], [email protected]',
34 license="Apache License Version 2.0",
35 url="https://github.com/kserve/kserve/tree/master/python/kserve",
36 description="KServe Python SDK",
37 long_description="Python SDK for KServe Server and Client.",
38 python_requires='>=3.6',
39 packages=[
40 'kserve',
41 'kserve.api',
42 'kserve.constants',
43 'kserve.models',
44 'kserve.handlers',
45 'kserve.utils',
46 ],
47 package_data={'': ['requirements.txt']},
48 include_package_data=True,
49 zip_safe=False,
50 classifiers=[
51 'Intended Audience :: Developers',
52 'Intended Audience :: Education',
53 'Intended Audience :: Science/Research',
54 'Programming Language :: Python :: 3',
55 'Programming Language :: Python :: 3.6',
56 'Programming Language :: Python :: 3.7',
57 "License :: OSI Approved :: Apache Software License",
58 "Operating System :: OS Independent",
59 'Topic :: Scientific/Engineering',
60 'Topic :: Scientific/Engineering :: Artificial Intelligence',
61 'Topic :: Software Development',
62 'Topic :: Software Development :: Libraries',
63 'Topic :: Software Development :: Libraries :: Python Modules',
64 ],
65 install_requires=REQUIRES,
66 tests_require=TESTS_REQUIRES,
67 extras_require={'test': TESTS_REQUIRES}
68 )
69
[end of python/kserve/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kserve/setup.py b/python/kserve/setup.py
--- a/python/kserve/setup.py
+++ b/python/kserve/setup.py
@@ -28,7 +28,7 @@
setuptools.setup(
name='kserve',
- version='0.8.0rc0',
+ version='0.8.0',
author="The KServe Authors",
author_email='[email protected], [email protected], [email protected]',
license="Apache License Version 2.0",
| {"golden_diff": "diff --git a/python/kserve/setup.py b/python/kserve/setup.py\n--- a/python/kserve/setup.py\n+++ b/python/kserve/setup.py\n@@ -28,7 +28,7 @@\n \n setuptools.setup(\n name='kserve',\n- version='0.8.0rc0',\n+ version='0.8.0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n", "issue": "KServe 0.8 release tracking\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\nKServe 0.8 release tracking:\r\nRC release Date: 12/30/2021\r\nRelease Date: 1/14/2021\r\n\r\nKServe Model Serving:\r\n- [x] torchserve v2 protocol\r\n - https://github.com/kserve/kserve/pull/1870 @jagadeeshi2i \r\n- [X] Transformer -> Predictor gRPC support\r\n - https://github.com/kserve/kserve/pull/1933\r\n- [X] MLServer 0.5 update\r\n - https://github.com/kserve/kserve/pull/1853 @adriangonz \r\n- [X] Scikit-Learn 1.0.1 and XGBoost 1.5.0 upgrade\r\n - https://github.com/kserve/kserve/pull/1954 @yuzisun \r\n- [X] Introduce ServingRuntime to single model serving @pvaneck @Suresh-Nakkeran \r\n - https://github.com/kserve/kserve/pull/1901\r\n - https://github.com/kserve/kserve/pull/1926\r\n- [ ] Introduce new storage spec @Tomcli \r\n - https://github.com/kserve/kserve/pull/1899\r\n- [X] Storage initializer fixes\r\n - https://github.com/kserve/kserve/pull/1883\r\n - https://github.com/kserve/kserve/pull/1940\r\n- [X] Helm chart for KServe and ModelMesh @yuzisun \r\n - https://github.com/kserve/kserve/pull/1878\r\n- [X] KServe SDK features and fixes\r\n - https://github.com/kserve/kserve/pull/1949 @markwinter \r\n - https://github.com/kserve/kserve/pull/1934 @markwinter \r\n - https://github.com/kserve/kserve/pull/1918 @markwinter \r\n\r\nModelMesh:\r\n- [X] Multi-namespace support for ModelMesh\r\n- [X] Improve rest proxy support\r\n - https://github.com/kserve/rest-proxy/pull/6\r\n\r\nModels UI:\r\n- [ ] Models Web App KServe migration @kimwnasptd \r\n \r\n \r\nWebsite: \r\n- [ ] Website doc update\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.8.0rc0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.6',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': 
['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}]} | 1,749 | 125 |
gh_patches_debug_22124 | rasdani/github-patches | git_diff | fossasia__open-event-server-5566 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Session Export CSV does not include all data
The Session Export should export all data sets that are available e.g. including:
* Submission time
* All speakers
* Proposed length
* Type (Workshop, Talk)
* Level (e.g. Intermediate)
* Status (e.g. pending, accepted etc.)

</issue>
<code>
[start of app/api/helpers/csv_jobs_util.py]
1 from app.models.helpers.versioning import strip_tags
2
3
4 def export_orders_csv(orders):
5 headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',
6 'Discount Code', 'First Name', 'Last Name', 'Email']
7
8 rows = [headers]
9 for order in orders:
10 if order.status != "deleted":
11 column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',
12 str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',
13 str(order.amount) if order.amount else '', str(order.tickets_count),
14 str(order.discount_code.code) if order.discount_code else '',
15 str(order.user.first_name)
16 if order.user and order.user.first_name else '',
17 str(order.user.last_name)
18 if order.user and order.user.last_name else '',
19 str(order.user.email) if order.user and order.user.email else '']
20 rows.append(column)
21
22 return rows
23
24
25 def export_attendees_csv(attendees):
26 headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name', 'Email',
27 'Country', 'Payment Type', 'Ticket Name', 'Ticket Price', 'Ticket Type']
28
29 rows = [headers]
30 for attendee in attendees:
31 column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',
32 str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',
33 str(attendee.order.status) if attendee.order and attendee.order.status else '-',
34 str(attendee.firstname) if attendee.firstname else '',
35 str(attendee.lastname) if attendee.lastname else '',
36 str(attendee.email) if attendee.email else '',
37 str(attendee.country) if attendee.country else '',
38 str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',
39 str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',
40 str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',
41 str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']
42
43 rows.append(column)
44
45 return rows
46
47
48 def export_sessions_csv(sessions):
49 headers = ['Session Title', 'Session Speakers',
50 'Session Track', 'Session Abstract', 'Created At', 'Email Sent']
51 rows = [headers]
52 for session in sessions:
53 if not session.deleted_at:
54 column = [session.title + ' (' + session.state + ')' if session.title else '']
55 if session.speakers:
56 in_session = ''
57 for speaker in session.speakers:
58 if speaker.name:
59 in_session += (speaker.name + '; ')
60 column.append(in_session[:-2])
61 else:
62 column.append('')
63 column.append(session.track.name if session.track and session.track.name else '')
64 column.append(strip_tags(session.short_abstract) if session.short_abstract else '')
65 column.append(session.created_at if session.created_at else '')
66 column.append('Yes' if session.is_mail_sent else 'No')
67 rows.append(column)
68
69 return rows
70
71
72 def export_speakers_csv(speakers):
73 headers = ['Speaker Name', 'Speaker Email', 'Speaker Session(s)',
74 'Speaker Mobile', 'Speaker Bio', 'Speaker Organisation', 'Speaker Position']
75 rows = [headers]
76 for speaker in speakers:
77 column = [speaker.name if speaker.name else '', speaker.email if speaker.email else '']
78 if speaker.sessions:
79 session_details = ''
80 for session in speaker.sessions:
81 if not session.deleted_at:
82 session_details += session.title + ' (' + session.state + '); '
83 column.append(session_details[:-2])
84 else:
85 column.append('')
86 column.append(speaker.mobile if speaker.mobile else '')
87 column.append(speaker.short_biography if speaker.short_biography else '')
88 column.append(speaker.organisation if speaker.organisation else '')
89 column.append(speaker.position if speaker.position else '')
90 rows.append(column)
91
92 return rows
93
[end of app/api/helpers/csv_jobs_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/helpers/csv_jobs_util.py b/app/api/helpers/csv_jobs_util.py
--- a/app/api/helpers/csv_jobs_util.py
+++ b/app/api/helpers/csv_jobs_util.py
@@ -47,7 +47,8 @@
def export_sessions_csv(sessions):
headers = ['Session Title', 'Session Speakers',
- 'Session Track', 'Session Abstract', 'Created At', 'Email Sent']
+ 'Session Track', 'Session Abstract', 'Created At', 'Email Sent',
+ 'Level', 'Status', 'Session Type', 'Talk Length']
rows = [headers]
for session in sessions:
if not session.deleted_at:
@@ -64,6 +65,10 @@
column.append(strip_tags(session.short_abstract) if session.short_abstract else '')
column.append(session.created_at if session.created_at else '')
column.append('Yes' if session.is_mail_sent else 'No')
+ column.append(session.level)
+ column.append(session.state)
+ column.append(session.type)
+ column.append(len(session.long_abstract))
rows.append(column)
return rows
| {"golden_diff": "diff --git a/app/api/helpers/csv_jobs_util.py b/app/api/helpers/csv_jobs_util.py\n--- a/app/api/helpers/csv_jobs_util.py\n+++ b/app/api/helpers/csv_jobs_util.py\n@@ -47,7 +47,8 @@\n \n def export_sessions_csv(sessions):\n headers = ['Session Title', 'Session Speakers',\n- 'Session Track', 'Session Abstract', 'Created At', 'Email Sent']\n+ 'Session Track', 'Session Abstract', 'Created At', 'Email Sent',\n+ 'Level', 'Status', 'Session Type', 'Talk Length']\n rows = [headers]\n for session in sessions:\n if not session.deleted_at:\n@@ -64,6 +65,10 @@\n column.append(strip_tags(session.short_abstract) if session.short_abstract else '')\n column.append(session.created_at if session.created_at else '')\n column.append('Yes' if session.is_mail_sent else 'No')\n+ column.append(session.level)\n+ column.append(session.state)\n+ column.append(session.type)\n+ column.append(len(session.long_abstract))\n rows.append(column)\n \n return rows\n", "issue": "Session Export CSV does not include all data \nThe Session Export should export all data sets that are available e.g. including:\r\n* Submission time\r\n* All speakers\r\n* Proposed length\r\n* Type (Workshop, Talk)\r\n* Level (e.g. Intermediate)\r\n* Status (e.g. pending, accepted etc.)\r\n\r\n\n", "before_files": [{"content": "from app.models.helpers.versioning import strip_tags\n\n\ndef export_orders_csv(orders):\n headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',\n 'Discount Code', 'First Name', 'Last Name', 'Email']\n\n rows = [headers]\n for order in orders:\n if order.status != \"deleted\":\n column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',\n str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',\n str(order.amount) if order.amount else '', str(order.tickets_count),\n str(order.discount_code.code) if order.discount_code else '',\n str(order.user.first_name)\n if order.user and order.user.first_name else '',\n str(order.user.last_name)\n if order.user and order.user.last_name else '',\n str(order.user.email) if order.user and order.user.email else '']\n rows.append(column)\n\n return rows\n\n\ndef export_attendees_csv(attendees):\n headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name', 'Email',\n 'Country', 'Payment Type', 'Ticket Name', 'Ticket Price', 'Ticket Type']\n\n rows = [headers]\n for attendee in attendees:\n column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',\n str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',\n str(attendee.order.status) if attendee.order and attendee.order.status else '-',\n str(attendee.firstname) if attendee.firstname else '',\n str(attendee.lastname) if attendee.lastname else '',\n str(attendee.email) if attendee.email else '',\n str(attendee.country) if attendee.country else '',\n str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',\n str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',\n str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',\n str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']\n\n rows.append(column)\n\n return rows\n\n\ndef export_sessions_csv(sessions):\n headers = ['Session Title', 'Session Speakers',\n 'Session Track', 'Session Abstract', 'Created At', 'Email Sent']\n rows = [headers]\n for session in sessions:\n if not session.deleted_at:\n column = 
[session.title + ' (' + session.state + ')' if session.title else '']\n if session.speakers:\n in_session = ''\n for speaker in session.speakers:\n if speaker.name:\n in_session += (speaker.name + '; ')\n column.append(in_session[:-2])\n else:\n column.append('')\n column.append(session.track.name if session.track and session.track.name else '')\n column.append(strip_tags(session.short_abstract) if session.short_abstract else '')\n column.append(session.created_at if session.created_at else '')\n column.append('Yes' if session.is_mail_sent else 'No')\n rows.append(column)\n\n return rows\n\n\ndef export_speakers_csv(speakers):\n headers = ['Speaker Name', 'Speaker Email', 'Speaker Session(s)',\n 'Speaker Mobile', 'Speaker Bio', 'Speaker Organisation', 'Speaker Position']\n rows = [headers]\n for speaker in speakers:\n column = [speaker.name if speaker.name else '', speaker.email if speaker.email else '']\n if speaker.sessions:\n session_details = ''\n for session in speaker.sessions:\n if not session.deleted_at:\n session_details += session.title + ' (' + session.state + '); '\n column.append(session_details[:-2])\n else:\n column.append('')\n column.append(speaker.mobile if speaker.mobile else '')\n column.append(speaker.short_biography if speaker.short_biography else '')\n column.append(speaker.organisation if speaker.organisation else '')\n column.append(speaker.position if speaker.position else '')\n rows.append(column)\n\n return rows\n", "path": "app/api/helpers/csv_jobs_util.py"}]} | 1,723 | 238 |
gh_patches_debug_10825 | rasdani/github-patches | git_diff | chainer__chainer-601 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
chainer.functions.Parameter cannot accept cupy.ndarray
```
In [1]: import numpy, chainer, cupy
In [2]: p = chainer.functions.Parameter(numpy.arange(12, dtype=numpy.float32))
In [3]: p = chainer.functions.Parameter(cupy.arange(12, dtype=numpy.float32))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-3bee41ef9fca> in <module>()
----> 1 p = chainer.functions.Parameter(cupy.arange(12, dtype=numpy.float32))
/home/delta/dev/chainer2/chainer/functions/connection/parameter.py in __init__(self, array)
21 def __init__(self, array):
22 self.W = array
---> 23 self.gW = numpy.full_like(array, numpy.nan)
24
25 def __call__(self, volatile=False):
/home/delta/.pyenv/versions/pyenv-2.7.9/lib/python2.7/site-packages/numpy/core/numeric.pyc in full_like(a, fill_value, dtype, order, subok)
344
345 """
--> 346 res = empty_like(a, dtype=dtype, order=order, subok=subok)
347 multiarray.copyto(res, fill_value, casting='unsafe')
348 return res
ValueError: object __array__ method not producing an array
```
</issue>
<code>
[start of chainer/functions/connection/parameter.py]
1 import numpy
2
3 from chainer import function
4 from chainer.utils import type_check
5
6
7 class Parameter(function.Function):
8
9 """Function that outputs its weight array.
10
11 This is a parameterized function that takes no input and returns a variable
12 holding a shallow copy of the parameter array.
13
14 Args:
15 array: Initial parameter array.
16
17 """
18 parameter_names = 'W',
19 gradient_names = 'gW',
20
21 def __init__(self, array):
22 self.W = array
23 self.gW = numpy.full_like(array, numpy.nan)
24
25 def __call__(self, volatile=False):
26 ret = super(Parameter, self).__call__()
27 if volatile:
28 ret.unchain_backward()
29 ret.volatile = volatile
30 return ret
31
32 def check_type_forward(self, in_types):
33 type_check.expect(in_types.size() == 0)
34
35 def forward(self, x):
36 return self.W,
37
38 def backward(self, x, gy):
39 self.gW += gy[0]
40 return ()
41
[end of chainer/functions/connection/parameter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/connection/parameter.py b/chainer/functions/connection/parameter.py
--- a/chainer/functions/connection/parameter.py
+++ b/chainer/functions/connection/parameter.py
@@ -1,5 +1,6 @@
import numpy
+from chainer import cuda
from chainer import function
from chainer.utils import type_check
@@ -20,7 +21,8 @@
def __init__(self, array):
self.W = array
- self.gW = numpy.full_like(array, numpy.nan)
+ xp = cuda.get_array_module(array)
+ self.gW = xp.full_like(self.W, numpy.nan)
def __call__(self, volatile=False):
ret = super(Parameter, self).__call__()
| {"golden_diff": "diff --git a/chainer/functions/connection/parameter.py b/chainer/functions/connection/parameter.py\n--- a/chainer/functions/connection/parameter.py\n+++ b/chainer/functions/connection/parameter.py\n@@ -1,5 +1,6 @@\n import numpy\n \n+from chainer import cuda\n from chainer import function\n from chainer.utils import type_check\n \n@@ -20,7 +21,8 @@\n \n def __init__(self, array):\n self.W = array\n- self.gW = numpy.full_like(array, numpy.nan)\n+ xp = cuda.get_array_module(array)\n+ self.gW = xp.full_like(self.W, numpy.nan)\n \n def __call__(self, volatile=False):\n ret = super(Parameter, self).__call__()\n", "issue": "chainer.functions.Parameter cannot accept cupy.ndarray\n```\nIn [1]: import numpy, chainer, cupy\nIn [2]: p = chainer.functions.Parameter(numpy.arange(12, dtype=numpy.float32))\nIn [3]: p = chainer.functions.Parameter(cupy.arange(12, dtype=numpy.float32))\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-3-3bee41ef9fca> in <module>()\n----> 1 p = chainer.functions.Parameter(cupy.arange(12, dtype=numpy.float32))\n\n/home/delta/dev/chainer2/chainer/functions/connection/parameter.py in __init__(self, array)\n 21 def __init__(self, array):\n 22 self.W = array\n---> 23 self.gW = numpy.full_like(array, numpy.nan)\n 24 \n 25 def __call__(self, volatile=False):\n\n/home/delta/.pyenv/versions/pyenv-2.7.9/lib/python2.7/site-packages/numpy/core/numeric.pyc in full_like(a, fill_value, dtype, order, subok)\n 344 \n 345 \"\"\"\n--> 346 res = empty_like(a, dtype=dtype, order=order, subok=subok)\n 347 multiarray.copyto(res, fill_value, casting='unsafe')\n 348 return res\n\nValueError: object __array__ method not producing an array\n```\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass Parameter(function.Function):\n\n \"\"\"Function that outputs its weight array.\n\n This is a parameterized function that takes no input and returns a variable\n holding a shallow copy of the parameter array.\n\n Args:\n array: Initial parameter array.\n\n \"\"\"\n parameter_names = 'W',\n gradient_names = 'gW',\n\n def __init__(self, array):\n self.W = array\n self.gW = numpy.full_like(array, numpy.nan)\n\n def __call__(self, volatile=False):\n ret = super(Parameter, self).__call__()\n if volatile:\n ret.unchain_backward()\n ret.volatile = volatile\n return ret\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 0)\n\n def forward(self, x):\n return self.W,\n\n def backward(self, x, gy):\n self.gW += gy[0]\n return ()\n", "path": "chainer/functions/connection/parameter.py"}]} | 1,171 | 163 |
gh_patches_debug_24551 | rasdani/github-patches | git_diff | opsdroid__opsdroid-41 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Generate default config
It should be possible to generate some basic config with a command line flag to opsdroid. It should cause opsdroid to print out the config so that it can be piped into a file.
e.g.
```
opsdroid --gen-config > configuration.yaml
```
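A rough sketch of how such a flag could be wired in with argparse (the example-config path and the surrounding function layout are assumptions, not the project's actual structure):

```python
import argparse
import sys


def parse_args(args):
    parser = argparse.ArgumentParser(description="Run opsdroid.")
    parser.add_argument("--gen-config", action="store_true",
                        help="print an example configuration file and exit")
    return parser.parse_args(args)


args = parse_args(sys.argv[1:])
if args.gen_config:
    with open("example_configuration.yaml") as conf:  # path is an assumption
        print(conf.read())
    sys.exit(0)
```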
</issue>
<code>
[start of opsdroid/__main__.py]
1 """Starts opsdroid."""
2
3 import logging
4
5 from opsdroid.loader import Loader
6 from opsdroid.core import OpsDroid
7 from opsdroid.helper import set_logging_level
8 from opsdroid.const import LOG_FILENAME
9
10
11 def main():
12 """The main function."""
13 logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
14 logging.info("="*40)
15 logging.info("Stated application")
16 with OpsDroid() as opsdroid:
17 loader = Loader(opsdroid)
18 opsdroid.config = loader.load_config_file([
19 "./configuration.yaml",
20 "~/.opsdroid/configuration.yaml",
21 "/etc/opsdroid/configuration.yaml"
22 ])
23 if "logging" in opsdroid.config:
24 set_logging_level(opsdroid.config['logging'])
25 loader.load_config(opsdroid.config)
26 opsdroid.exit()
27
28 if __name__ == "__main__":
29 main()
30
[end of opsdroid/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py
--- a/opsdroid/__main__.py
+++ b/opsdroid/__main__.py
@@ -1,6 +1,9 @@
"""Starts opsdroid."""
+import sys
+import os
import logging
+import argparse
from opsdroid.loader import Loader
from opsdroid.core import OpsDroid
@@ -8,11 +11,30 @@
from opsdroid.const import LOG_FILENAME
+def parse_args(args):
+ """Parse command line arguments."""
+ parser = argparse.ArgumentParser(description='Run opsdroid.')
+ parser.add_argument('--gen-config', action="store_true",
+ help='prints out an example configuration file')
+ return parser.parse_args(args)
+
+
def main():
"""The main function."""
logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
logging.info("="*40)
logging.info("Stated application")
+
+ args = parse_args(sys.argv[1:])
+
+ if args.gen_config:
+ path = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)),
+ "configuration/example_configuration.yaml")
+ with open(path, 'r') as conf:
+ print(conf.read())
+ sys.exit(0)
+
with OpsDroid() as opsdroid:
loader = Loader(opsdroid)
opsdroid.config = loader.load_config_file([
| {"golden_diff": "diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py\n--- a/opsdroid/__main__.py\n+++ b/opsdroid/__main__.py\n@@ -1,6 +1,9 @@\n \"\"\"Starts opsdroid.\"\"\"\n \n+import sys\n+import os\n import logging\n+import argparse\n \n from opsdroid.loader import Loader\n from opsdroid.core import OpsDroid\n@@ -8,11 +11,30 @@\n from opsdroid.const import LOG_FILENAME\n \n \n+def parse_args(args):\n+ \"\"\"Parse command line arguments.\"\"\"\n+ parser = argparse.ArgumentParser(description='Run opsdroid.')\n+ parser.add_argument('--gen-config', action=\"store_true\",\n+ help='prints out an example configuration file')\n+ return parser.parse_args(args)\n+\n+\n def main():\n \"\"\"The main function.\"\"\"\n logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)\n logging.info(\"=\"*40)\n logging.info(\"Stated application\")\n+\n+ args = parse_args(sys.argv[1:])\n+\n+ if args.gen_config:\n+ path = os.path.join(\n+ os.path.dirname(os.path.abspath(__file__)),\n+ \"configuration/example_configuration.yaml\")\n+ with open(path, 'r') as conf:\n+ print(conf.read())\n+ sys.exit(0)\n+\n with OpsDroid() as opsdroid:\n loader = Loader(opsdroid)\n opsdroid.config = loader.load_config_file([\n", "issue": "Generate default config\nIt should be possible to generate some basic config with a command line flag to opsdroid. It should cause opsdroid to print out the config so that is can be piped into a file.\n\ne.g\n\n```\nopsdroid --gen-config > configuration.yaml\n```\n\n", "before_files": [{"content": "\"\"\"Starts opsdroid.\"\"\"\n\nimport logging\n\nfrom opsdroid.loader import Loader\nfrom opsdroid.core import OpsDroid\nfrom opsdroid.helper import set_logging_level\nfrom opsdroid.const import LOG_FILENAME\n\n\ndef main():\n \"\"\"The main function.\"\"\"\n logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)\n logging.info(\"=\"*40)\n logging.info(\"Stated application\")\n with OpsDroid() as opsdroid:\n loader = Loader(opsdroid)\n opsdroid.config = loader.load_config_file([\n \"./configuration.yaml\",\n \"~/.opsdroid/configuration.yaml\",\n \"/etc/opsdroid/configuration.yaml\"\n ])\n if \"logging\" in opsdroid.config:\n set_logging_level(opsdroid.config['logging'])\n loader.load_config(opsdroid.config)\n opsdroid.exit()\n\nif __name__ == \"__main__\":\n main()\n", "path": "opsdroid/__main__.py"}]} | 849 | 323 |
gh_patches_debug_35552 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-385 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Profiler: Automatically fill `tag` in `record`?
Maybe we can use the caller's function name (`inspect.stack()`) if tag is not given.
TODO: Need to measure overhead.
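For reference, a minimal sketch of the idea (the stack depth and the tag format are assumptions, and the cost of `inspect.stack()` is exactly what the TODO above would need to benchmark):

```python
import inspect


def _caller_tag(depth: int = 2) -> str:
    # depth=2 skips this helper and record() itself -- an assumption.
    frame = inspect.stack()[depth]
    return f"{frame.function}:{frame.lineno}"
```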
</issue>
<code>
[start of pytorch_pfn_extras/profiler/_record.py]
1 from contextlib import contextmanager
2 from typing import Any, Callable, Generator, Iterable, Optional, TypeVar
3
4 import torch
5
6 from pytorch_pfn_extras.profiler._time_summary import time_summary, _ReportNotification
7
8
9 @contextmanager
10 def record(
11 tag: str,
12 metric: Optional[str] = None,
13 use_cuda: bool = False,
14 ) -> Generator[_ReportNotification, None, None]:
15 if metric is None:
16 metric = tag
17
18 if use_cuda:
19 torch.cuda.nvtx.range_push(tag) # type: ignore[no-untyped-call]
20 try:
21 with torch.autograd.profiler.record_function(tag):
22 with time_summary.report(metric, use_cuda) as ntf:
23 yield ntf
24 finally:
25 if use_cuda:
26 torch.cuda.nvtx.range_pop() # type: ignore[no-untyped-call]
27
28
29 _T = TypeVar('_T')
30
31
32 def record_function(
33 tag: str,
34 use_cuda: bool = False,
35 ) -> Callable[[Callable[..., _T]], Callable[..., _T]]:
36 def wrapper(f: Callable[..., _T]) -> Callable[..., _T]:
37 def wrapped(*args: Any, **kwargs: Any) -> _T:
38 with record(tag, use_cuda=use_cuda):
39 return f(*args, **kwargs)
40
41 return wrapped
42
43 return wrapper
44
45
46 def record_iterable(
47 tag: str,
48 iter: Iterable[_T],
49 divide_metric: bool = False,
50 use_cuda: bool = False,
51 ) -> Iterable[_T]:
52 def wrapped() -> Iterable[_T]:
53 for i, x in enumerate(iter):
54 name = f"{tag}-{i}"
55 metric = name if divide_metric else tag
56 with record(name, metric, use_cuda=use_cuda):
57 yield x
58
59 return wrapped()
60
[end of pytorch_pfn_extras/profiler/_record.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_pfn_extras/profiler/_record.py b/pytorch_pfn_extras/profiler/_record.py
--- a/pytorch_pfn_extras/profiler/_record.py
+++ b/pytorch_pfn_extras/profiler/_record.py
@@ -1,17 +1,35 @@
from contextlib import contextmanager
+import inspect
from typing import Any, Callable, Generator, Iterable, Optional, TypeVar
+import types
import torch
from pytorch_pfn_extras.profiler._time_summary import time_summary, _ReportNotification
+def _infer_tag_name(frame: Optional[types.FrameType], depth: int) -> str:
+ for _ in range(depth):
+ assert frame is not None
+ frame = frame.f_back
+ assert frame is not None
+ frame_info = inspect.getframeinfo(frame, context=0)
+ return '{}:{}:{}'.format(
+ inspect.getmodulename(frame_info.filename),
+ frame_info.lineno,
+ frame_info.function,
+ )
+
+
@contextmanager
def record(
- tag: str,
+ tag: Optional[str],
metric: Optional[str] = None,
use_cuda: bool = False,
) -> Generator[_ReportNotification, None, None]:
+ if tag is None:
+ tag = _infer_tag_name(inspect.currentframe(), depth=2)
+
if metric is None:
metric = tag
@@ -30,12 +48,12 @@
def record_function(
- tag: str,
+ tag: Optional[str],
use_cuda: bool = False,
) -> Callable[[Callable[..., _T]], Callable[..., _T]]:
def wrapper(f: Callable[..., _T]) -> Callable[..., _T]:
def wrapped(*args: Any, **kwargs: Any) -> _T:
- with record(tag, use_cuda=use_cuda):
+ with record(tag or f.__name__, use_cuda=use_cuda):
return f(*args, **kwargs)
return wrapped
@@ -44,11 +62,14 @@
def record_iterable(
- tag: str,
- iter: Iterable[_T],
- divide_metric: bool = False,
- use_cuda: bool = False,
+ tag: Optional[str],
+ iter: Iterable[_T],
+ divide_metric: bool = False,
+ use_cuda: bool = False,
) -> Iterable[_T]:
+ if tag is None:
+ tag = _infer_tag_name(inspect.currentframe(), depth=1)
+
def wrapped() -> Iterable[_T]:
for i, x in enumerate(iter):
name = f"{tag}-{i}"
| {"golden_diff": "diff --git a/pytorch_pfn_extras/profiler/_record.py b/pytorch_pfn_extras/profiler/_record.py\n--- a/pytorch_pfn_extras/profiler/_record.py\n+++ b/pytorch_pfn_extras/profiler/_record.py\n@@ -1,17 +1,35 @@\n from contextlib import contextmanager\n+import inspect\n from typing import Any, Callable, Generator, Iterable, Optional, TypeVar\n+import types\n \n import torch\n \n from pytorch_pfn_extras.profiler._time_summary import time_summary, _ReportNotification\n \n \n+def _infer_tag_name(frame: Optional[types.FrameType], depth: int) -> str:\n+ for _ in range(depth):\n+ assert frame is not None\n+ frame = frame.f_back\n+ assert frame is not None\n+ frame_info = inspect.getframeinfo(frame, context=0)\n+ return '{}:{}:{}'.format(\n+ inspect.getmodulename(frame_info.filename),\n+ frame_info.lineno,\n+ frame_info.function,\n+ )\n+\n+\n @contextmanager\n def record(\n- tag: str,\n+ tag: Optional[str],\n metric: Optional[str] = None,\n use_cuda: bool = False,\n ) -> Generator[_ReportNotification, None, None]:\n+ if tag is None:\n+ tag = _infer_tag_name(inspect.currentframe(), depth=2)\n+\n if metric is None:\n metric = tag\n \n@@ -30,12 +48,12 @@\n \n \n def record_function(\n- tag: str,\n+ tag: Optional[str],\n use_cuda: bool = False,\n ) -> Callable[[Callable[..., _T]], Callable[..., _T]]:\n def wrapper(f: Callable[..., _T]) -> Callable[..., _T]:\n def wrapped(*args: Any, **kwargs: Any) -> _T:\n- with record(tag, use_cuda=use_cuda):\n+ with record(tag or f.__name__, use_cuda=use_cuda):\n return f(*args, **kwargs)\n \n return wrapped\n@@ -44,11 +62,14 @@\n \n \n def record_iterable(\n- tag: str,\n- iter: Iterable[_T],\n- divide_metric: bool = False,\n- use_cuda: bool = False,\n+ tag: Optional[str],\n+ iter: Iterable[_T],\n+ divide_metric: bool = False,\n+ use_cuda: bool = False,\n ) -> Iterable[_T]:\n+ if tag is None:\n+ tag = _infer_tag_name(inspect.currentframe(), depth=1)\n+\n def wrapped() -> Iterable[_T]:\n for i, x in enumerate(iter):\n name = f\"{tag}-{i}\"\n", "issue": "Profiler: Automatically fill `tag` in `record`?\nMaybe we can use the caller's function name (`inspect.stack()`) if tag is not given.\r\n\r\nTODO: Need to measure overhead.\n", "before_files": [{"content": "from contextlib import contextmanager\nfrom typing import Any, Callable, Generator, Iterable, Optional, TypeVar\n\nimport torch\n\nfrom pytorch_pfn_extras.profiler._time_summary import time_summary, _ReportNotification\n\n\n@contextmanager\ndef record(\n tag: str,\n metric: Optional[str] = None,\n use_cuda: bool = False,\n) -> Generator[_ReportNotification, None, None]:\n if metric is None:\n metric = tag\n\n if use_cuda:\n torch.cuda.nvtx.range_push(tag) # type: ignore[no-untyped-call]\n try:\n with torch.autograd.profiler.record_function(tag):\n with time_summary.report(metric, use_cuda) as ntf:\n yield ntf\n finally:\n if use_cuda:\n torch.cuda.nvtx.range_pop() # type: ignore[no-untyped-call]\n\n\n_T = TypeVar('_T')\n\n\ndef record_function(\n tag: str,\n use_cuda: bool = False,\n) -> Callable[[Callable[..., _T]], Callable[..., _T]]:\n def wrapper(f: Callable[..., _T]) -> Callable[..., _T]:\n def wrapped(*args: Any, **kwargs: Any) -> _T:\n with record(tag, use_cuda=use_cuda):\n return f(*args, **kwargs)\n\n return wrapped\n\n return wrapper\n\n\ndef record_iterable(\n tag: str,\n iter: Iterable[_T],\n divide_metric: bool = False,\n use_cuda: bool = False,\n) -> Iterable[_T]:\n def wrapped() -> Iterable[_T]:\n for i, x in enumerate(iter):\n name = f\"{tag}-{i}\"\n metric = name if divide_metric else 
tag\n with record(name, metric, use_cuda=use_cuda):\n yield x\n\n return wrapped()\n", "path": "pytorch_pfn_extras/profiler/_record.py"}]} | 1,093 | 586 |
gh_patches_debug_35093 | rasdani/github-patches | git_diff | hydroshare__hydroshare-4819 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rewrite author order test
**Description of the bug**
This test fails occasionally. Rewrite it removing 2 assertions:
[https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_c[…]ore/tests/api/native/test_reorder_authors_management_command.py](https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/tests/api/native/test_reorder_authors_management_command.py#L180)
[https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_c[…]ore/tests/api/native/test_reorder_authors_management_command.py](https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/tests/api/native/test_reorder_authors_management_command.py#L152)
Also: rewrite this management command so that it takes a res ID as a param:
https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/management/commands/reorder_authors.py#L24
Steps to reproduce the bug:
http://ci.hydroshare.org:8080/job/hydroshare-pull-requests/5750/testReport/junit/hs_core.tests.api.native.test_reorder_authors_management_command/TestReorderAuthorsCommand/test_command_fixes_triplicate_authors/
**Expected behavior**
The test should not depend on Django's .get() ordering
</issue>
<code>
[start of hs_core/management/commands/reorder_authors.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 Fix duplicate author "order" values
5
6 Related to https://github.com/hydroshare/hydroshare/issues/4695
7 """
8
9 from django.core.management.base import BaseCommand
10 from hs_core.models import BaseResource
11 from hs_core.hydroshare.utils import set_dirty_bag_flag
12
13
14 class Command(BaseCommand):
15 help = "Fix duplicate author 'order' values"
16
17 def handle(self, *args, **options):
18 resources = BaseResource.objects.filter(raccess__published=False).only('object_id', 'short_id')
19 for res in resources:
20 if res.metadata is not None:
21 creators = res.metadata.creators.all()
22 is_dirty = False
23 for index, creator in enumerate(creators, start=1):
24 if creator.order != index:
25 print("*" * 100)
26 print(f"Author out of order.\nR:{res.short_id}"
27 f"\nExpected: {index}, got: {creator.order}")
28 creator.order = index
29 creator.save()
30 is_dirty = True
31 if is_dirty:
32 set_dirty_bag_flag(res)
33
[end of hs_core/management/commands/reorder_authors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hs_core/management/commands/reorder_authors.py b/hs_core/management/commands/reorder_authors.py
--- a/hs_core/management/commands/reorder_authors.py
+++ b/hs_core/management/commands/reorder_authors.py
@@ -6,7 +6,7 @@
Related to https://github.com/hydroshare/hydroshare/issues/4695
"""
-from django.core.management.base import BaseCommand
+from django.core.management.base import BaseCommand, CommandError
from hs_core.models import BaseResource
from hs_core.hydroshare.utils import set_dirty_bag_flag
@@ -14,19 +14,30 @@
class Command(BaseCommand):
help = "Fix duplicate author 'order' values"
+ def add_arguments(self, parser):
+ # ID of a resource for which users should be re-ordered
+ parser.add_argument('--resource_id', type=str, help=('Required. The id (short_id) of'
+ ' the resource'))
+
def handle(self, *args, **options):
- resources = BaseResource.objects.filter(raccess__published=False).only('object_id', 'short_id')
- for res in resources:
- if res.metadata is not None:
- creators = res.metadata.creators.all()
- is_dirty = False
- for index, creator in enumerate(creators, start=1):
- if creator.order != index:
- print("*" * 100)
- print(f"Author out of order.\nR:{res.short_id}"
- f"\nExpected: {index}, got: {creator.order}")
- creator.order = index
- creator.save()
- is_dirty = True
- if is_dirty:
- set_dirty_bag_flag(res)
+ if not options['resource_id']:
+ raise CommandError('resource_id argument is required')
+ res_id = options['resource_id']
+ res = BaseResource.objects.filter(short_id=res_id).first()
+ if not res:
+ raise CommandError('No resource found for the provided resource_id')
+ if res.raccess.published:
+ raise CommandError(f"Resource id: {res_id} is already published--can't update author order.")
+ if res.metadata is not None:
+ creators = res.metadata.creators.all()
+ is_dirty = False
+ for index, creator in enumerate(creators, start=1):
+ if creator.order != index:
+ print("*" * 100)
+ print(f"Author out of order.\nR:{res.short_id}"
+ f"\nExpected: {index}, got: {creator.order}")
+ creator.order = index
+ creator.save()
+ is_dirty = True
+ if is_dirty:
+ set_dirty_bag_flag(res)
| {"golden_diff": "diff --git a/hs_core/management/commands/reorder_authors.py b/hs_core/management/commands/reorder_authors.py\n--- a/hs_core/management/commands/reorder_authors.py\n+++ b/hs_core/management/commands/reorder_authors.py\n@@ -6,7 +6,7 @@\n Related to https://github.com/hydroshare/hydroshare/issues/4695\n \"\"\"\n \n-from django.core.management.base import BaseCommand\n+from django.core.management.base import BaseCommand, CommandError\n from hs_core.models import BaseResource\n from hs_core.hydroshare.utils import set_dirty_bag_flag\n \n@@ -14,19 +14,30 @@\n class Command(BaseCommand):\n help = \"Fix duplicate author 'order' values\"\n \n+ def add_arguments(self, parser):\n+ # ID of a resource for which users should be re-ordered\n+ parser.add_argument('--resource_id', type=str, help=('Required. The id (short_id) of'\n+ ' the resource'))\n+\n def handle(self, *args, **options):\n- resources = BaseResource.objects.filter(raccess__published=False).only('object_id', 'short_id')\n- for res in resources:\n- if res.metadata is not None:\n- creators = res.metadata.creators.all()\n- is_dirty = False\n- for index, creator in enumerate(creators, start=1):\n- if creator.order != index:\n- print(\"*\" * 100)\n- print(f\"Author out of order.\\nR:{res.short_id}\"\n- f\"\\nExpected: {index}, got: {creator.order}\")\n- creator.order = index\n- creator.save()\n- is_dirty = True\n- if is_dirty:\n- set_dirty_bag_flag(res)\n+ if not options['resource_id']:\n+ raise CommandError('resource_id argument is required')\n+ res_id = options['resource_id']\n+ res = BaseResource.objects.filter(short_id=res_id).first()\n+ if not res:\n+ raise CommandError('No resource found for the provided resource_id')\n+ if res.raccess.published:\n+ raise CommandError(f\"Resource id: {res_id} is already published--can't update author order.\")\n+ if res.metadata is not None:\n+ creators = res.metadata.creators.all()\n+ is_dirty = False\n+ for index, creator in enumerate(creators, start=1):\n+ if creator.order != index:\n+ print(\"*\" * 100)\n+ print(f\"Author out of order.\\nR:{res.short_id}\"\n+ f\"\\nExpected: {index}, got: {creator.order}\")\n+ creator.order = index\n+ creator.save()\n+ is_dirty = True\n+ if is_dirty:\n+ set_dirty_bag_flag(res)\n", "issue": "rewrite author order test\n**Description of the bug**\r\nThis test fails occasionally. 
Rewrite it removing 2 assertions:\r\n[https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_c[\u2026]ore/tests/api/native/test_reorder_authors_management_command.py](https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/tests/api/native/test_reorder_authors_management_command.py#L180)\r\n\r\n[https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_c[\u2026]ore/tests/api/native/test_reorder_authors_management_command.py](https://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/tests/api/native/test_reorder_authors_management_command.py#L152)\r\n\r\nAlso: rewrite this management command so that it takes a res ID as a param:\r\nhttps://github.com/hydroshare/hydroshare/blob/4372-communities-and-groups-2.0/hs_core/management/commands/reorder_authors.py#L24\r\n\r\nSteps to reproduce the bug:\r\nhttp://ci.hydroshare.org:8080/job/hydroshare-pull-requests/5750/testReport/junit/hs_core.tests.api.native.test_reorder_authors_management_command/TestReorderAuthorsCommand/test_command_fixes_triplicate_authors/\r\n\r\n**Expected behavior**\r\nTest should not be dependent on django .get() order\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nFix duplicate author \"order\" values\n\nRelated to https://github.com/hydroshare/hydroshare/issues/4695\n\"\"\"\n\nfrom django.core.management.base import BaseCommand\nfrom hs_core.models import BaseResource\nfrom hs_core.hydroshare.utils import set_dirty_bag_flag\n\n\nclass Command(BaseCommand):\n help = \"Fix duplicate author 'order' values\"\n\n def handle(self, *args, **options):\n resources = BaseResource.objects.filter(raccess__published=False).only('object_id', 'short_id')\n for res in resources:\n if res.metadata is not None:\n creators = res.metadata.creators.all()\n is_dirty = False\n for index, creator in enumerate(creators, start=1):\n if creator.order != index:\n print(\"*\" * 100)\n print(f\"Author out of order.\\nR:{res.short_id}\"\n f\"\\nExpected: {index}, got: {creator.order}\")\n creator.order = index\n creator.save()\n is_dirty = True\n if is_dirty:\n set_dirty_bag_flag(res)\n", "path": "hs_core/management/commands/reorder_authors.py"}]} | 1,182 | 619 |
gh_patches_debug_3501 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1068 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mail controlpanel: doesn't keep password field
Saving the mail settings in the controlpanel doesn't keep the password field value, as the stored value is (for obvious reasons) never shown in the ESMTP password field.
Steps to reproduce:
1. Fill in ESMTP username and ESMTP password. Save settings. They are correctly stored.
2. Apply save settings again. ESMTP password is incorrectly stored as None.
</issue>
<code>
[start of Products/CMFPlone/controlpanel/browser/mail.py]
1 from Products.CMFCore.utils import getToolByName
2 from Products.CMFPlone import PloneMessageFactory as _
3 from Products.CMFPlone.interfaces.controlpanel import IMailSchema
4 from Products.MailHost.MailHost import MailHostError
5 from Products.statusmessages.interfaces import IStatusMessage
6 from logging import getLogger
7 from plone.app.registry.browser import controlpanel
8 from plone.registry.interfaces import IRegistry
9 from z3c.form import button
10 from zope.component import getUtility
11
12 import smtplib
13 import socket
14 import sys
15
16 log = getLogger('Plone')
17
18
19 class MailControlPanelForm(controlpanel.RegistryEditForm):
20
21 id = "MailControlPanel"
22 label = _(u"Mail Settings")
23 schema = IMailSchema
24 schema_prefix = "plone"
25
26 @button.buttonAndHandler(_('Save'), name=None)
27 def handleSave(self, action):
28 self.save()
29
30 @button.buttonAndHandler(_('Cancel'), name='cancel')
31 def handleCancel(self, action):
32 super(MailControlPanelForm, self).handleCancel(self, action)
33
34 def save(self):
35 data, errors = self.extractData()
36 if errors:
37 self.status = self.formErrorsMessage
38 return False
39 self.applyChanges(data)
40 return True
41
42 @button.buttonAndHandler(
43 _('label_smtp_test', default='Save and send test e-mail'),
44 name='test')
45 def handle_test_action(self, action):
46 # Save data first
47 if not self.save():
48 return
49 mailhost = getToolByName(self.context, 'MailHost')
50
51 registry = getUtility(IRegistry)
52 mail_settings = registry.forInterface(IMailSchema, prefix='plone')
53 fromaddr = mail_settings.email_from_address
54 fromname = mail_settings.email_from_name
55
56 message = ("Hi,\n\nThis is a test message sent from the Plone "
57 "'Mail settings' control panel. Your receipt of this "
58 "message (at the address specified in the Site 'From' "
59 "address field) indicates that your e-mail server is "
60 "working!\n\n"
61 "Have a nice day.\n\n"
62 "Love,\n\nPlone")
63 email_charset = mail_settings.email_charset
64 subject = "Test e-mail from Plone"
65
66 # Make the timeout incredibly short. This is enough time for most mail
67 # servers, wherever they may be in the world, to respond to the
68 # connection request. Make sure we save the current value
69 # and restore it afterward.
70 timeout = socket.getdefaulttimeout()
71 try:
72 socket.setdefaulttimeout(3)
73 try:
74 mailhost.send(message,
75 mto=fromaddr,
76 mfrom=fromaddr,
77 subject=subject,
78 charset=email_charset,
79 immediate=True)
80
81 except (socket.error, MailHostError, smtplib.SMTPException):
82 # Connection refused or timeout.
83 log.exception('Unable to send test e-mail.')
84 value = sys.exc_info()[1]
85 msg = _(u'Unable to send test e-mail ${error}.',
86 mapping={'error': unicode(value)})
87 IStatusMessage(self.request).addStatusMessage(
88 msg, type='error')
89 else:
90 IStatusMessage(self.request).addStatusMessage(
91 _(u'Success! Check your mailbox for the test message.'),
92 type='info')
93 finally:
94 # Restore timeout to default value
95 socket.setdefaulttimeout(timeout)
96
97
98 class MailControlPanel(controlpanel.ControlPanelFormWrapper):
99 form = MailControlPanelForm
100
[end of Products/CMFPlone/controlpanel/browser/mail.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/controlpanel/browser/mail.py b/Products/CMFPlone/controlpanel/browser/mail.py
--- a/Products/CMFPlone/controlpanel/browser/mail.py
+++ b/Products/CMFPlone/controlpanel/browser/mail.py
@@ -36,6 +36,10 @@
if errors:
self.status = self.formErrorsMessage
return False
+ #keep password field
+ if (data.get('smtp_userid') is not None
+ and data.get('smtp_pass') is None):
+ del data['smtp_pass']
self.applyChanges(data)
return True
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/mail.py b/Products/CMFPlone/controlpanel/browser/mail.py\n--- a/Products/CMFPlone/controlpanel/browser/mail.py\n+++ b/Products/CMFPlone/controlpanel/browser/mail.py\n@@ -36,6 +36,10 @@\n if errors:\n self.status = self.formErrorsMessage\n return False\n+ #keep password field\n+ if (data.get('smtp_userid') is not None\n+ and data.get('smtp_pass') is None):\n+ del data['smtp_pass']\n self.applyChanges(data)\n return True\n", "issue": "mail controlpanel: doesn't keep password field\nSaving the mail settings in the controlpanel doesn't keep the password field value, as it is obviously never shown in ESMTP password.\n\nSteps to reproduce:\n1. Fill in ESMTP username and ESMTP password. Save settings. They are correctly stored.\n2. Apply save settings again. ESMTP password is incorrectly stored as None.\n\n", "before_files": [{"content": "from Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces.controlpanel import IMailSchema\nfrom Products.MailHost.MailHost import MailHostError\nfrom Products.statusmessages.interfaces import IStatusMessage\nfrom logging import getLogger\nfrom plone.app.registry.browser import controlpanel\nfrom plone.registry.interfaces import IRegistry\nfrom z3c.form import button\nfrom zope.component import getUtility\n\nimport smtplib\nimport socket\nimport sys\n\nlog = getLogger('Plone')\n\n\nclass MailControlPanelForm(controlpanel.RegistryEditForm):\n\n id = \"MailControlPanel\"\n label = _(u\"Mail Settings\")\n schema = IMailSchema\n schema_prefix = \"plone\"\n\n @button.buttonAndHandler(_('Save'), name=None)\n def handleSave(self, action):\n self.save()\n\n @button.buttonAndHandler(_('Cancel'), name='cancel')\n def handleCancel(self, action):\n super(MailControlPanelForm, self).handleCancel(self, action)\n\n def save(self):\n data, errors = self.extractData()\n if errors:\n self.status = self.formErrorsMessage\n return False\n self.applyChanges(data)\n return True\n\n @button.buttonAndHandler(\n _('label_smtp_test', default='Save and send test e-mail'),\n name='test')\n def handle_test_action(self, action):\n # Save data first\n if not self.save():\n return\n mailhost = getToolByName(self.context, 'MailHost')\n\n registry = getUtility(IRegistry)\n mail_settings = registry.forInterface(IMailSchema, prefix='plone')\n fromaddr = mail_settings.email_from_address\n fromname = mail_settings.email_from_name\n\n message = (\"Hi,\\n\\nThis is a test message sent from the Plone \"\n \"'Mail settings' control panel. Your receipt of this \"\n \"message (at the address specified in the Site 'From' \"\n \"address field) indicates that your e-mail server is \"\n \"working!\\n\\n\"\n \"Have a nice day.\\n\\n\"\n \"Love,\\n\\nPlone\")\n email_charset = mail_settings.email_charset\n subject = \"Test e-mail from Plone\"\n\n # Make the timeout incredibly short. This is enough time for most mail\n # servers, wherever they may be in the world, to respond to the\n # connection request. 
Make sure we save the current value\n # and restore it afterward.\n timeout = socket.getdefaulttimeout()\n try:\n socket.setdefaulttimeout(3)\n try:\n mailhost.send(message,\n mto=fromaddr,\n mfrom=fromaddr,\n subject=subject,\n charset=email_charset,\n immediate=True)\n\n except (socket.error, MailHostError, smtplib.SMTPException):\n # Connection refused or timeout.\n log.exception('Unable to send test e-mail.')\n value = sys.exc_info()[1]\n msg = _(u'Unable to send test e-mail ${error}.',\n mapping={'error': unicode(value)})\n IStatusMessage(self.request).addStatusMessage(\n msg, type='error')\n else:\n IStatusMessage(self.request).addStatusMessage(\n _(u'Success! Check your mailbox for the test message.'),\n type='info')\n finally:\n # Restore timeout to default value\n socket.setdefaulttimeout(timeout)\n\n\nclass MailControlPanel(controlpanel.ControlPanelFormWrapper):\n form = MailControlPanelForm\n", "path": "Products/CMFPlone/controlpanel/browser/mail.py"}]} | 1,571 | 142 |
gh_patches_debug_32239 | rasdani/github-patches | git_diff | Textualize__textual-2112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`)` cannot appear as part of the parameter passed to an action
Reproduction:
```python
from textual.app import App
class ActionBugApp(App):
BINDINGS = [("a", "test(')')", "Test")]
def action_test(self, _: str) -> None:
pass
if __name__ == '__main__':
app = ActionBugApp()
app.run()
```
Omitting the full stack trace (since it's fairly easy to reproduce), the key error message is:
```
ActionError: unable to parse "(')" in action "test(')')"
```
Seems that [this regex](https://github.com/Textualize/textual/blob/2a6368754a8b3a11f1772b52298b5d3b50ceebaa/src/textual/actions.py#L20) is not general enough.
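The failure mode is easy to see in isolation: the lazy `.*?` stops at the first closing parenthesis, so the captured parameter string is truncated and `ast.literal_eval` fails.

```python
import re

re_action_params = re.compile(r"([\w\.]+)(\(.*?\))")
print(re_action_params.match("test(')')").groups())
# ('test', "(')")  <- truncated at the first ')'
```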
</issue>
<code>
[start of src/textual/actions.py]
1 from __future__ import annotations
2
3 import ast
4 import re
5
6 from typing_extensions import Any, TypeAlias
7
8 ActionParseResult: TypeAlias = "tuple[str, tuple[Any, ...]]"
9 """An action is its name and the arbitrary tuple of its parameters."""
10
11
12 class SkipAction(Exception):
13 """Raise in an action to skip the action (and allow any parent bindings to run)."""
14
15
16 class ActionError(Exception):
17 pass
18
19
20 re_action_params = re.compile(r"([\w\.]+)(\(.*?\))")
21
22
23 def parse(action: str) -> ActionParseResult:
24 """Parses an action string.
25
26 Args:
27 action: String containing action.
28
29 Raises:
30 ActionError: If the action has invalid syntax.
31
32 Returns:
33 Action name and parameters
34 """
35 params_match = re_action_params.match(action)
36 if params_match is not None:
37 action_name, action_params_str = params_match.groups()
38 try:
39 action_params = ast.literal_eval(action_params_str)
40 except Exception:
41 raise ActionError(
42 f"unable to parse {action_params_str!r} in action {action!r}"
43 )
44 else:
45 action_name = action
46 action_params = ()
47
48 return (
49 action_name,
50 action_params if isinstance(action_params, tuple) else (action_params,),
51 )
52
[end of src/textual/actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/textual/actions.py b/src/textual/actions.py
--- a/src/textual/actions.py
+++ b/src/textual/actions.py
@@ -6,7 +6,7 @@
from typing_extensions import Any, TypeAlias
ActionParseResult: TypeAlias = "tuple[str, tuple[Any, ...]]"
-"""An action is its name and the arbitrary tuple of its parameters."""
+"""An action is its name and the arbitrary tuple of its arguments."""
class SkipAction(Exception):
@@ -17,7 +17,7 @@
pass
-re_action_params = re.compile(r"([\w\.]+)(\(.*?\))")
+re_action_args = re.compile(r"([\w\.]+)\((.*)\)")
def parse(action: str) -> ActionParseResult:
@@ -30,22 +30,25 @@
ActionError: If the action has invalid syntax.
Returns:
- Action name and parameters
+ Action name and arguments.
"""
- params_match = re_action_params.match(action)
- if params_match is not None:
- action_name, action_params_str = params_match.groups()
- try:
- action_params = ast.literal_eval(action_params_str)
- except Exception:
- raise ActionError(
- f"unable to parse {action_params_str!r} in action {action!r}"
- )
+ args_match = re_action_args.match(action)
+ if args_match is not None:
+ action_name, action_args_str = args_match.groups()
+ if action_args_str:
+ try:
+ # We wrap `action_args_str` to be able to disambiguate the cases where
+ # the list of arguments is a comma-separated list of values from the
+ # case where the argument is a single tuple.
+ action_args: tuple[Any, ...] = ast.literal_eval(f"({action_args_str},)")
+ except Exception:
+ raise ActionError(
+ f"unable to parse {action_args_str!r} in action {action!r}"
+ )
+ else:
+ action_args = ()
else:
action_name = action
- action_params = ()
+ action_args = ()
- return (
- action_name,
- action_params if isinstance(action_params, tuple) else (action_params,),
- )
+ return action_name, action_args
| {"golden_diff": "diff --git a/src/textual/actions.py b/src/textual/actions.py\n--- a/src/textual/actions.py\n+++ b/src/textual/actions.py\n@@ -6,7 +6,7 @@\n from typing_extensions import Any, TypeAlias\n \n ActionParseResult: TypeAlias = \"tuple[str, tuple[Any, ...]]\"\n-\"\"\"An action is its name and the arbitrary tuple of its parameters.\"\"\"\n+\"\"\"An action is its name and the arbitrary tuple of its arguments.\"\"\"\n \n \n class SkipAction(Exception):\n@@ -17,7 +17,7 @@\n pass\n \n \n-re_action_params = re.compile(r\"([\\w\\.]+)(\\(.*?\\))\")\n+re_action_args = re.compile(r\"([\\w\\.]+)\\((.*)\\)\")\n \n \n def parse(action: str) -> ActionParseResult:\n@@ -30,22 +30,25 @@\n ActionError: If the action has invalid syntax.\n \n Returns:\n- Action name and parameters\n+ Action name and arguments.\n \"\"\"\n- params_match = re_action_params.match(action)\n- if params_match is not None:\n- action_name, action_params_str = params_match.groups()\n- try:\n- action_params = ast.literal_eval(action_params_str)\n- except Exception:\n- raise ActionError(\n- f\"unable to parse {action_params_str!r} in action {action!r}\"\n- )\n+ args_match = re_action_args.match(action)\n+ if args_match is not None:\n+ action_name, action_args_str = args_match.groups()\n+ if action_args_str:\n+ try:\n+ # We wrap `action_args_str` to be able to disambiguate the cases where\n+ # the list of arguments is a comma-separated list of values from the\n+ # case where the argument is a single tuple.\n+ action_args: tuple[Any, ...] = ast.literal_eval(f\"({action_args_str},)\")\n+ except Exception:\n+ raise ActionError(\n+ f\"unable to parse {action_args_str!r} in action {action!r}\"\n+ )\n+ else:\n+ action_args = ()\n else:\n action_name = action\n- action_params = ()\n+ action_args = ()\n \n- return (\n- action_name,\n- action_params if isinstance(action_params, tuple) else (action_params,),\n- )\n+ return action_name, action_args\n", "issue": "`)` cannot appear as part of the parameter passed to an action\nReproduction:\r\n\r\n```python\r\nfrom textual.app import App\r\n\r\n\r\nclass ActionBugApp(App):\r\n BINDINGS = [(\"a\", \"test(')')\", \"Test\")]\r\n\r\n def action_test(self, _: str) -> None:\r\n pass\r\n\r\n\r\nif __name__ == '__main__':\r\n app = ActionBugApp()\r\n app.run()\r\n```\r\n\r\nOmitting the full stack trace (since it's fairly easy to reproduce), the key error message is:\r\n\r\n```\r\nActionError: unable to parse \"(')\" in action \"test(')')\"\r\n```\r\n\r\nSeems that [this regex](https://github.com/Textualize/textual/blob/2a6368754a8b3a11f1772b52298b5d3b50ceebaa/src/textual/actions.py#L20) is not general enough.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport ast\nimport re\n\nfrom typing_extensions import Any, TypeAlias\n\nActionParseResult: TypeAlias = \"tuple[str, tuple[Any, ...]]\"\n\"\"\"An action is its name and the arbitrary tuple of its parameters.\"\"\"\n\n\nclass SkipAction(Exception):\n \"\"\"Raise in an action to skip the action (and allow any parent bindings to run).\"\"\"\n\n\nclass ActionError(Exception):\n pass\n\n\nre_action_params = re.compile(r\"([\\w\\.]+)(\\(.*?\\))\")\n\n\ndef parse(action: str) -> ActionParseResult:\n \"\"\"Parses an action string.\n\n Args:\n action: String containing action.\n\n Raises:\n ActionError: If the action has invalid syntax.\n\n Returns:\n Action name and parameters\n \"\"\"\n params_match = re_action_params.match(action)\n if params_match is not None:\n action_name, action_params_str = params_match.groups()\n try:\n 
action_params = ast.literal_eval(action_params_str)\n except Exception:\n raise ActionError(\n f\"unable to parse {action_params_str!r} in action {action!r}\"\n )\n else:\n action_name = action\n action_params = ()\n\n return (\n action_name,\n action_params if isinstance(action_params, tuple) else (action_params,),\n )\n", "path": "src/textual/actions.py"}]} | 1,116 | 519 |
gh_patches_debug_7276 | rasdani/github-patches | git_diff | pyodide__pyodide-3013 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Relative URLs in pyodide.loadPackage
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
The documentation states that [pyodide.loadPackage](https://pyodide.org/en/stable/usage/api/js-api.html#pyodide.loadPackage) supports relative URLs. I'm trying to load an out-of-tree wheel from my local webserver, but this doesn't seem to work out well.
### To Reproduce
<!-- Minimal code example to reproduce the bug. -->
```js
await pyodide.loadPackage("dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
or
```js
await pyodide.loadPackage("./dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
Pyodide tries to load the wheel from `https://cdn.jsdelivr.net/pyodide/v0.21.1/full/dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl`.
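In other words, the relative path appears to be resolved against the Pyodide CDN base rather than the page URL. For illustration only, in terms of standard URL-joining semantics:

```python
from urllib.parse import urljoin

wheel = "dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl"
page_base = "https://example.com/my-app/"  # assumption: where the hosting page lives
cdn_base = "https://cdn.jsdelivr.net/pyodide/v0.21.1/full/"

print(urljoin(page_base, wheel))  # what the report expects
print(urljoin(cdn_base, wheel))   # the URL that is actually requested
```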
### Expected behavior
<!-- FILL IN -->
Load the wheel from the relative URL.
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.21.1
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: Firefox ESR 91.12.0, Chromium 104.0.5112.101
- Any other relevant information:
<!-- If you are building Pyodide by yourself, please also include these information: -->
<!--
- Commit hash of Pyodide git repository:
- Build environment<!--(e.g. Ubuntu 18.04, pyodide/pyodide-env:19 docker)-->:
-->
### Additional context
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of packages/micropip/src/micropip/_compat_in_pyodide.py]
1 from io import BytesIO
2 from typing import IO
3 from urllib.parse import urlparse
4
5 from pyodide._core import IN_BROWSER
6 from pyodide.http import pyfetch
7
8 try:
9 import pyodide_js
10 from pyodide_js import loadedPackages, loadPackage
11 from pyodide_js._api import loadBinaryFile, loadDynlib # type: ignore[import]
12
13 REPODATA_PACKAGES = pyodide_js._api.repodata_packages.to_py()
14 REPODATA_INFO = pyodide_js._api.repodata_info.to_py()
15 except ImportError:
16 if IN_BROWSER:
17 raise
18 # Otherwise, this is pytest test collection so let it go.
19
20
21 async def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:
22 parsed_url = urlparse(url)
23 if parsed_url.scheme == "emfs":
24 return open(parsed_url.path, "rb")
25 if parsed_url.scheme == "file":
26 result_bytes = (await loadBinaryFile("", parsed_url.path)).to_bytes()
27 else:
28 result_bytes = await (await pyfetch(url, **kwargs)).bytes()
29 return BytesIO(result_bytes)
30
31
32 async def fetch_string(url: str, kwargs: dict[str, str]) -> str:
33 return await (await pyfetch(url, **kwargs)).string()
34
35
36 __all__ = [
37 "fetch_bytes",
38 "fetch_string",
39 "REPODATA_INFO",
40 "REPODATA_PACKAGES",
41 "loadedPackages",
42 "loadDynlib",
43 "loadPackage",
44 ]
45
[end of packages/micropip/src/micropip/_compat_in_pyodide.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packages/micropip/src/micropip/_compat_in_pyodide.py b/packages/micropip/src/micropip/_compat_in_pyodide.py
--- a/packages/micropip/src/micropip/_compat_in_pyodide.py
+++ b/packages/micropip/src/micropip/_compat_in_pyodide.py
@@ -23,7 +23,7 @@
if parsed_url.scheme == "emfs":
return open(parsed_url.path, "rb")
if parsed_url.scheme == "file":
- result_bytes = (await loadBinaryFile("", parsed_url.path)).to_bytes()
+ result_bytes = (await loadBinaryFile(parsed_url.path)).to_bytes()
else:
result_bytes = await (await pyfetch(url, **kwargs)).bytes()
return BytesIO(result_bytes)
| {"golden_diff": "diff --git a/packages/micropip/src/micropip/_compat_in_pyodide.py b/packages/micropip/src/micropip/_compat_in_pyodide.py\n--- a/packages/micropip/src/micropip/_compat_in_pyodide.py\n+++ b/packages/micropip/src/micropip/_compat_in_pyodide.py\n@@ -23,7 +23,7 @@\n if parsed_url.scheme == \"emfs\":\n return open(parsed_url.path, \"rb\")\n if parsed_url.scheme == \"file\":\n- result_bytes = (await loadBinaryFile(\"\", parsed_url.path)).to_bytes()\n+ result_bytes = (await loadBinaryFile(parsed_url.path)).to_bytes()\n else:\n result_bytes = await (await pyfetch(url, **kwargs)).bytes()\n return BytesIO(result_bytes)\n", "issue": "Relative URLs in pyodide.loadPackage\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe documentation states that [pyodide.loadPackage](https://pyodide.org/en/stable/usage/api/js-api.html#pyodide.loadPackage) supports relative URLs. I'm trying to load an out-of-tree wheel from my local webserver, but this doesn't seem to work out well.\r\n\r\n### To Reproduce\r\n\r\n<!-- Minimal code example to reproduce the bug. -->\r\n```js\r\nawait pyodide.loadPackage(\"dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl\");\r\n```\r\nor\r\n```js\r\nawait pyodide.loadPackage(\"./dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl\");\r\n```\r\nPyodide tries to load the wheel from `https://cdn.jsdelivr.net/pyodide/v0.21.1/full/dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl`.\r\n\r\n### Expected behavior\r\n\r\n<!-- FILL IN -->\r\nLoad the wheel from the relative URL.\r\n\r\n### Environment\r\n\r\n- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.21.1\r\n- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: Firefox ESR 91.12.0, Chromium 104.0.5112.101\r\n- Any other relevant information:\r\n\r\n<!-- If you are building Pyodide by yourself, please also include these information: -->\r\n\r\n<!--\r\n- Commit hash of Pyodide git repository:\r\n- Build environment<!--(e.g. Ubuntu 18.04, pyodide/pyodide-env:19 docker)- ->:\r\n-->\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "from io import BytesIO\nfrom typing import IO\nfrom urllib.parse import urlparse\n\nfrom pyodide._core import IN_BROWSER\nfrom pyodide.http import pyfetch\n\ntry:\n import pyodide_js\n from pyodide_js import loadedPackages, loadPackage\n from pyodide_js._api import loadBinaryFile, loadDynlib # type: ignore[import]\n\n REPODATA_PACKAGES = pyodide_js._api.repodata_packages.to_py()\n REPODATA_INFO = pyodide_js._api.repodata_info.to_py()\nexcept ImportError:\n if IN_BROWSER:\n raise\n # Otherwise, this is pytest test collection so let it go.\n\n\nasync def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:\n parsed_url = urlparse(url)\n if parsed_url.scheme == \"emfs\":\n return open(parsed_url.path, \"rb\")\n if parsed_url.scheme == \"file\":\n result_bytes = (await loadBinaryFile(\"\", parsed_url.path)).to_bytes()\n else:\n result_bytes = await (await pyfetch(url, **kwargs)).bytes()\n return BytesIO(result_bytes)\n\n\nasync def fetch_string(url: str, kwargs: dict[str, str]) -> str:\n return await (await pyfetch(url, **kwargs)).string()\n\n\n__all__ = [\n \"fetch_bytes\",\n \"fetch_string\",\n \"REPODATA_INFO\",\n \"REPODATA_PACKAGES\",\n \"loadedPackages\",\n \"loadDynlib\",\n \"loadPackage\",\n]\n", "path": "packages/micropip/src/micropip/_compat_in_pyodide.py"}]} | 1,430 | 183 |
gh_patches_debug_35885 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-264 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[master] Use tf.ResourceVariable to store model
Currently we store the model as a <string, ndarray> map. When using tf.optimizer.apply_gradient() to update the model, we need to convert the map to ResourceVariable and back. It is better to change the model to a <string, ResourceVariable> map to avoid the copy and conversion.
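For illustration, the round trip described above looks roughly like this (names and the eager-mode TF 2.x API are assumptions):

```python
import numpy as np
import tensorflow as tf

model = {"w": np.zeros((2, 2), dtype=np.float32)}  # current <string, ndarray> map

var = tf.Variable(model["w"])     # copy: ndarray -> ResourceVariable
# optimizer.apply_gradients([(grad, var)]) would update var in place here
model["w"] = var.numpy()          # copy: ResourceVariable -> ndarray
```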
</issue>
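
For context, a minimal sketch of the change this issue asks for: keep each parameter as a resource variable so gradients can be applied directly, without converting the map back and forth. The sketch assumes TensorFlow 2.x eager execution and uses illustrative names; the accepted patch below keeps TF 1.x compatibility by passing `use_resource=True` to `tf.Variable` instead.

```python
import numpy as np
import tensorflow as tf

model = {}  # <string, tf.Variable> instead of <string, ndarray>

def set_model_var(name, value):
    # Store each parameter once as a (resource) variable.
    model[name] = tf.Variable(value, name=name, dtype=tf.float32)

set_model_var("dense_kernel", np.zeros((4, 2), dtype=np.float32))

optimizer = tf.optimizers.SGD(0.1)
grads = {"dense_kernel": tf.ones((4, 2))}
# Gradients are applied directly to the stored variables; no map conversion needed.
optimizer.apply_gradients([(grads[k], v) for k, v in model.items()])
```
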
<code>
[start of elasticdl/master/servicer.py]
1 import threading
2
3 from proto import master_pb2
4 from proto import master_pb2_grpc
5 from util.converter import NdarrayToTensor, TensorToNdarray
6
7
8 class MasterServicer(master_pb2_grpc.MasterServicer):
9 """Master service implementation"""
10
11 def __init__(self, logger, grads_to_wait):
12 self.logger = logger
13 self._lock = threading.Lock()
14 # TODO: random initialization
15 self._model = {}
16 self._version = 0
17 self._gradient_sum = {}
18 self._grad_to_wait = grads_to_wait
19 self._grad_n = 0
20
21 def GetTask(self, request, context):
22 # TODO: implent task queues. Return an empty task for now.
23 res = master_pb2.Task()
24 res.shard_file_name = ""
25 res.model_version = self._version
26 return res
27
28 def GetModel(self, request, context):
29 if request.min_version > self._version:
30 err_msg = (
31 "Requested version %d not available yet, current version: %d"
32 % (request.min_version, self._version)
33 )
34 self.logger.warning(err_msg)
35 raise ValueError(err_msg)
36
37 res = master_pb2.Model()
38 with self._lock:
39 res.version = self._version
40 for k, v in self._model.items():
41 res.param[k].CopyFrom(NdarrayToTensor(v))
42 return res
43
44 def ReportTaskResult(self, request, context):
45 if request.model_version > self._version:
46 err_msg = "Model version %d out of range, current version: %d" % (
47 request.model_version,
48 self._version,
49 )
50 self.logger.warning(err_msg)
51 raise ValueError(err_msg)
52
53 res = master_pb2.ReportTaskResultReply()
54 if request.model_version < self._version:
55 self.logger.warning(
56 "Task result for outdated version %d dropped",
57 request.model_version,
58 )
59 res.accepted = False
60 res.model_version = self._version
61 return res
62
63 if request.err_message:
64 self.logger.warning("Worker error: %s" % request.err_message)
65 res.accepted = False
66 res.model_version = self._version
67 return res
68
69 # TODO: Update task queue with task_id
70 with self._lock:
71 tmp = {}
72 # Do sanity check before accumulating gradients.
73 for k, v in request.gradient.items():
74 if k not in self._model:
75 raise ValueError(
76 "Gradient key: %s is not part of model", k
77 )
78 arr = TensorToNdarray(v)
79 if arr.shape != self._model[k].shape:
80 raise ValueError(
81 "Gradient key: %s has incompatible dimension", k
82 )
83 tmp[k] = arr
84
85 for k, v in tmp.items():
86 if k in self._gradient_sum:
87 self._gradient_sum[k] = self._gradient_sum[k] + v
88 else:
89 self._gradient_sum[k] = v
90
91 self._grad_n += 1
92 if self._grad_n >= self._grad_to_wait:
93 # TODO: update model
94 self._version += 1
95 self._gradient_sum.clear()
96 self._grad_n = 0
97 res.accepted = True
98 res.model_version = self._version
99 return res
100
[end of elasticdl/master/servicer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/master/servicer.py b/elasticdl/master/servicer.py
--- a/elasticdl/master/servicer.py
+++ b/elasticdl/master/servicer.py
@@ -1,5 +1,7 @@
import threading
+import numpy as np
+import tensorflow as tf
from proto import master_pb2
from proto import master_pb2_grpc
from util.converter import NdarrayToTensor, TensorToNdarray
@@ -12,12 +14,21 @@
self.logger = logger
self._lock = threading.Lock()
# TODO: random initialization
+ # A <string, tf.ResourceVariable> map. We use tf.ResourceVariable
+ # instead ndarray to avoid copying and conversion when calling
+ # optimizer's apply_gradients() function.
self._model = {}
self._version = 0
self._gradient_sum = {}
self._grad_to_wait = grads_to_wait
self._grad_n = 0
+ def _set_model_var(self, name, value):
+ """Add or set model variable. Value should be a float32 ndarray"""
+ if value.dtype != np.float32:
+ raise ValueError("Value should be a float32 numpy array")
+ self._model[name] = tf.Variable(value, name=name, use_resource=True)
+
def GetTask(self, request, context):
# TODO: implent task queues. Return an empty task for now.
res = master_pb2.Task()
@@ -38,7 +49,7 @@
with self._lock:
res.version = self._version
for k, v in self._model.items():
- res.param[k].CopyFrom(NdarrayToTensor(v))
+ res.param[k].CopyFrom(NdarrayToTensor(v.numpy()))
return res
def ReportTaskResult(self, request, context):
@@ -76,7 +87,7 @@
"Gradient key: %s is not part of model", k
)
arr = TensorToNdarray(v)
- if arr.shape != self._model[k].shape:
+ if arr.shape != self._model[k].numpy().shape:
raise ValueError(
"Gradient key: %s has incompatible dimension", k
)
| {"golden_diff": "diff --git a/elasticdl/master/servicer.py b/elasticdl/master/servicer.py\n--- a/elasticdl/master/servicer.py\n+++ b/elasticdl/master/servicer.py\n@@ -1,5 +1,7 @@\n import threading\n+import numpy as np\n \n+import tensorflow as tf\n from proto import master_pb2\n from proto import master_pb2_grpc\n from util.converter import NdarrayToTensor, TensorToNdarray\n@@ -12,12 +14,21 @@\n self.logger = logger\n self._lock = threading.Lock()\n # TODO: random initialization\n+ # A <string, tf.ResourceVariable> map. We use tf.ResourceVariable\n+ # instead ndarray to avoid copying and conversion when calling\n+ # optimizer's apply_gradients() function.\n self._model = {}\n self._version = 0\n self._gradient_sum = {}\n self._grad_to_wait = grads_to_wait\n self._grad_n = 0\n \n+ def _set_model_var(self, name, value):\n+ \"\"\"Add or set model variable. Value should be a float32 ndarray\"\"\"\n+ if value.dtype != np.float32:\n+ raise ValueError(\"Value should be a float32 numpy array\")\n+ self._model[name] = tf.Variable(value, name=name, use_resource=True)\n+\n def GetTask(self, request, context):\n # TODO: implent task queues. Return an empty task for now.\n res = master_pb2.Task()\n@@ -38,7 +49,7 @@\n with self._lock:\n res.version = self._version\n for k, v in self._model.items():\n- res.param[k].CopyFrom(NdarrayToTensor(v))\n+ res.param[k].CopyFrom(NdarrayToTensor(v.numpy()))\n return res\n \n def ReportTaskResult(self, request, context):\n@@ -76,7 +87,7 @@\n \"Gradient key: %s is not part of model\", k\n )\n arr = TensorToNdarray(v)\n- if arr.shape != self._model[k].shape:\n+ if arr.shape != self._model[k].numpy().shape:\n raise ValueError(\n \"Gradient key: %s has incompatible dimension\", k\n )\n", "issue": "[master]Use tf.ResourceVariable to store model\nCurrently we store model as a <string, ndarray> map. when using tf.optimizer.apply_gradient() to update model, we need to convert the map to ResourceVariable and back. It is better to change model to a <string, ResourceVariable> map to avoid copy and conversion.\n", "before_files": [{"content": "import threading\n\nfrom proto import master_pb2\nfrom proto import master_pb2_grpc\nfrom util.converter import NdarrayToTensor, TensorToNdarray\n\n\nclass MasterServicer(master_pb2_grpc.MasterServicer):\n \"\"\"Master service implementation\"\"\"\n\n def __init__(self, logger, grads_to_wait):\n self.logger = logger\n self._lock = threading.Lock()\n # TODO: random initialization\n self._model = {}\n self._version = 0\n self._gradient_sum = {}\n self._grad_to_wait = grads_to_wait\n self._grad_n = 0\n\n def GetTask(self, request, context):\n # TODO: implent task queues. 
Return an empty task for now.\n res = master_pb2.Task()\n res.shard_file_name = \"\"\n res.model_version = self._version\n return res\n\n def GetModel(self, request, context):\n if request.min_version > self._version:\n err_msg = (\n \"Requested version %d not available yet, current version: %d\"\n % (request.min_version, self._version)\n )\n self.logger.warning(err_msg)\n raise ValueError(err_msg)\n\n res = master_pb2.Model()\n with self._lock:\n res.version = self._version\n for k, v in self._model.items():\n res.param[k].CopyFrom(NdarrayToTensor(v))\n return res\n\n def ReportTaskResult(self, request, context):\n if request.model_version > self._version:\n err_msg = \"Model version %d out of range, current version: %d\" % (\n request.model_version,\n self._version,\n )\n self.logger.warning(err_msg)\n raise ValueError(err_msg)\n\n res = master_pb2.ReportTaskResultReply()\n if request.model_version < self._version:\n self.logger.warning(\n \"Task result for outdated version %d dropped\",\n request.model_version,\n )\n res.accepted = False\n res.model_version = self._version\n return res\n\n if request.err_message:\n self.logger.warning(\"Worker error: %s\" % request.err_message)\n res.accepted = False\n res.model_version = self._version\n return res\n\n # TODO: Update task queue with task_id\n with self._lock:\n tmp = {}\n # Do sanity check before accumulating gradients.\n for k, v in request.gradient.items():\n if k not in self._model:\n raise ValueError(\n \"Gradient key: %s is not part of model\", k\n )\n arr = TensorToNdarray(v)\n if arr.shape != self._model[k].shape:\n raise ValueError(\n \"Gradient key: %s has incompatible dimension\", k\n )\n tmp[k] = arr\n\n for k, v in tmp.items():\n if k in self._gradient_sum:\n self._gradient_sum[k] = self._gradient_sum[k] + v\n else:\n self._gradient_sum[k] = v\n\n self._grad_n += 1\n if self._grad_n >= self._grad_to_wait:\n # TODO: update model\n self._version += 1\n self._gradient_sum.clear()\n self._grad_n = 0\n res.accepted = True\n res.model_version = self._version\n return res\n", "path": "elasticdl/master/servicer.py"}]} | 1,524 | 497 |
gh_patches_debug_27358 | rasdani/github-patches | git_diff | modoboa__modoboa-759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passwords complexity
We must ensure that passwords meet a minimum complexity requirement.
See https://github.com/modoboa/modoboa-admin/issues/27
</issue>
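
A minimal sketch of the kind of rule this implies, using only the standard library; it is illustrative, not the project's implementation. The accepted patch below instead switches the form fields to `PasswordField` from the django-passwords package, which enforces similar rules through settings.

```python
import re

def validate_password_complexity(password, min_length=8):
    """Raise ValueError unless the password mixes cases, digits and symbols."""
    checks = [
        (len(password) >= min_length, "password is too short"),
        (re.search(r"[a-z]", password), "needs a lowercase letter"),
        (re.search(r"[A-Z]", password), "needs an uppercase letter"),
        (re.search(r"\d", password), "needs a digit"),
        (re.search(r"[^\w\s]", password), "needs a punctuation character"),
    ]
    for ok, message in checks:
        if not ok:
            raise ValueError(message)
```
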
<code>
[start of modoboa/core/forms.py]
1 # coding: utf-8
2
3 """Core forms."""
4
5 from django import forms
6 from django.utils.translation import ugettext as _, ugettext_lazy
7
8 from modoboa.core.models import User
9 from modoboa.lib import parameters
10
11
12 class LoginForm(forms.Form):
13 username = forms.CharField(
14 label=ugettext_lazy("Username"),
15 widget=forms.TextInput(attrs={"class": "form-control"})
16 )
17 password = forms.CharField(
18 label=ugettext_lazy("Password"),
19 widget=forms.PasswordInput(attrs={"class": "form-control"})
20 )
21 rememberme = forms.BooleanField(
22 initial=False,
23 required=False
24 )
25
26
27 class ProfileForm(forms.ModelForm):
28 oldpassword = forms.CharField(
29 label=ugettext_lazy("Old password"), required=False,
30 widget=forms.PasswordInput(attrs={"class": "form-control"})
31 )
32 newpassword = forms.CharField(
33 label=ugettext_lazy("New password"), required=False,
34 widget=forms.PasswordInput(attrs={"class": "form-control"})
35 )
36 confirmation = forms.CharField(
37 label=ugettext_lazy("Confirmation"), required=False,
38 widget=forms.PasswordInput(attrs={"class": "form-control"})
39 )
40
41 class Meta:
42 model = User
43 fields = ("first_name", "last_name")
44 widgets = {
45 'first_name': forms.TextInput(attrs={'class': 'form-control'}),
46 'last_name': forms.TextInput(attrs={'class': 'form-control'})
47 }
48
49 def __init__(self, update_password, *args, **kwargs):
50 super(ProfileForm, self).__init__(*args, **kwargs)
51 if not update_password:
52 del self.fields["oldpassword"]
53 del self.fields["newpassword"]
54 del self.fields["confirmation"]
55
56 def clean_oldpassword(self):
57 if self.cleaned_data["oldpassword"] == "":
58 return self.cleaned_data["oldpassword"]
59
60 if parameters.get_admin("AUTHENTICATION_TYPE") != "local":
61 return self.cleaned_data["oldpassword"]
62
63 if not self.instance.check_password(self.cleaned_data["oldpassword"]):
64 raise forms.ValidationError(_("Old password mismatchs"))
65 return self.cleaned_data["oldpassword"]
66
67 def clean_confirmation(self):
68 newpassword = self.cleaned_data["newpassword"]
69 confirmation = self.cleaned_data["confirmation"]
70 if newpassword != confirmation:
71 raise forms.ValidationError(_("Passwords mismatch"))
72 return self.cleaned_data["confirmation"]
73
74 def save(self, commit=True):
75 user = super(ProfileForm, self).save(commit=False)
76 if commit:
77 if self.cleaned_data.get("confirmation", "") != "":
78 user.set_password(
79 self.cleaned_data["confirmation"],
80 self.cleaned_data["oldpassword"]
81 )
82 user.save()
83 return user
84
[end of modoboa/core/forms.py]
[start of modoboa/core/dev_settings.py]
1 # Development settings
2 import os
3
4 BOWER_COMPONENTS_ROOT = os.path.join(
5 os.path.dirname(__file__), ".."
6 )
7
8 BOWER_INSTALLED_APPS = (
9 "jquery#1.9",
10 "jquery-ui#1.11",
11 "bootstrap#3.3.1",
12 "bootstrap-select#1.6",
13 "d3#3.5.0",
14 "eonasdan-bootstrap-datetimepicker#3.1.3",
15 "font-awesome#4.2.0",
16 "c3#0.4.10",
17 )
18
[end of modoboa/core/dev_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modoboa/core/dev_settings.py b/modoboa/core/dev_settings.py
--- a/modoboa/core/dev_settings.py
+++ b/modoboa/core/dev_settings.py
@@ -8,7 +8,7 @@
BOWER_INSTALLED_APPS = (
"jquery#1.9",
"jquery-ui#1.11",
- "bootstrap#3.3.1",
+ "bootstrap#3.3.5",
"bootstrap-select#1.6",
"d3#3.5.0",
"eonasdan-bootstrap-datetimepicker#3.1.3",
diff --git a/modoboa/core/forms.py b/modoboa/core/forms.py
--- a/modoboa/core/forms.py
+++ b/modoboa/core/forms.py
@@ -5,6 +5,8 @@
from django import forms
from django.utils.translation import ugettext as _, ugettext_lazy
+from passwords.fields import PasswordField
+
from modoboa.core.models import User
from modoboa.lib import parameters
@@ -29,11 +31,11 @@
label=ugettext_lazy("Old password"), required=False,
widget=forms.PasswordInput(attrs={"class": "form-control"})
)
- newpassword = forms.CharField(
+ newpassword = PasswordField(
label=ugettext_lazy("New password"), required=False,
widget=forms.PasswordInput(attrs={"class": "form-control"})
)
- confirmation = forms.CharField(
+ confirmation = PasswordField(
label=ugettext_lazy("Confirmation"), required=False,
widget=forms.PasswordInput(attrs={"class": "form-control"})
)
| {"golden_diff": "diff --git a/modoboa/core/dev_settings.py b/modoboa/core/dev_settings.py\n--- a/modoboa/core/dev_settings.py\n+++ b/modoboa/core/dev_settings.py\n@@ -8,7 +8,7 @@\n BOWER_INSTALLED_APPS = (\n \"jquery#1.9\",\n \"jquery-ui#1.11\",\n- \"bootstrap#3.3.1\",\n+ \"bootstrap#3.3.5\",\n \"bootstrap-select#1.6\",\n \"d3#3.5.0\",\n \"eonasdan-bootstrap-datetimepicker#3.1.3\",\ndiff --git a/modoboa/core/forms.py b/modoboa/core/forms.py\n--- a/modoboa/core/forms.py\n+++ b/modoboa/core/forms.py\n@@ -5,6 +5,8 @@\n from django import forms\n from django.utils.translation import ugettext as _, ugettext_lazy\n \n+from passwords.fields import PasswordField\n+\n from modoboa.core.models import User\n from modoboa.lib import parameters\n \n@@ -29,11 +31,11 @@\n label=ugettext_lazy(\"Old password\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n- newpassword = forms.CharField(\n+ newpassword = PasswordField(\n label=ugettext_lazy(\"New password\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n- confirmation = forms.CharField(\n+ confirmation = PasswordField(\n label=ugettext_lazy(\"Confirmation\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n", "issue": "Passwords complexity\nWe must ensure passwords respect a minimum complexity.\n\nSee https://github.com/modoboa/modoboa-admin/issues/27\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"Core forms.\"\"\"\n\nfrom django import forms\nfrom django.utils.translation import ugettext as _, ugettext_lazy\n\nfrom modoboa.core.models import User\nfrom modoboa.lib import parameters\n\n\nclass LoginForm(forms.Form):\n username = forms.CharField(\n label=ugettext_lazy(\"Username\"),\n widget=forms.TextInput(attrs={\"class\": \"form-control\"})\n )\n password = forms.CharField(\n label=ugettext_lazy(\"Password\"),\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n rememberme = forms.BooleanField(\n initial=False,\n required=False\n )\n\n\nclass ProfileForm(forms.ModelForm):\n oldpassword = forms.CharField(\n label=ugettext_lazy(\"Old password\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n newpassword = forms.CharField(\n label=ugettext_lazy(\"New password\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n confirmation = forms.CharField(\n label=ugettext_lazy(\"Confirmation\"), required=False,\n widget=forms.PasswordInput(attrs={\"class\": \"form-control\"})\n )\n\n class Meta:\n model = User\n fields = (\"first_name\", \"last_name\")\n widgets = {\n 'first_name': forms.TextInput(attrs={'class': 'form-control'}),\n 'last_name': forms.TextInput(attrs={'class': 'form-control'})\n }\n\n def __init__(self, update_password, *args, **kwargs):\n super(ProfileForm, self).__init__(*args, **kwargs)\n if not update_password:\n del self.fields[\"oldpassword\"]\n del self.fields[\"newpassword\"]\n del self.fields[\"confirmation\"]\n\n def clean_oldpassword(self):\n if self.cleaned_data[\"oldpassword\"] == \"\":\n return self.cleaned_data[\"oldpassword\"]\n\n if parameters.get_admin(\"AUTHENTICATION_TYPE\") != \"local\":\n return self.cleaned_data[\"oldpassword\"]\n\n if not self.instance.check_password(self.cleaned_data[\"oldpassword\"]):\n raise forms.ValidationError(_(\"Old password mismatchs\"))\n return self.cleaned_data[\"oldpassword\"]\n\n def clean_confirmation(self):\n newpassword = self.cleaned_data[\"newpassword\"]\n 
confirmation = self.cleaned_data[\"confirmation\"]\n if newpassword != confirmation:\n raise forms.ValidationError(_(\"Passwords mismatch\"))\n return self.cleaned_data[\"confirmation\"]\n\n def save(self, commit=True):\n user = super(ProfileForm, self).save(commit=False)\n if commit:\n if self.cleaned_data.get(\"confirmation\", \"\") != \"\":\n user.set_password(\n self.cleaned_data[\"confirmation\"],\n self.cleaned_data[\"oldpassword\"]\n )\n user.save()\n return user\n", "path": "modoboa/core/forms.py"}, {"content": "# Development settings\nimport os\n\nBOWER_COMPONENTS_ROOT = os.path.join(\n os.path.dirname(__file__), \"..\"\n)\n\nBOWER_INSTALLED_APPS = (\n \"jquery#1.9\",\n \"jquery-ui#1.11\",\n \"bootstrap#3.3.1\",\n \"bootstrap-select#1.6\",\n \"d3#3.5.0\",\n \"eonasdan-bootstrap-datetimepicker#3.1.3\",\n \"font-awesome#4.2.0\",\n \"c3#0.4.10\",\n)\n", "path": "modoboa/core/dev_settings.py"}]} | 1,458 | 348 |
gh_patches_debug_29775 | rasdani/github-patches | git_diff | liqd__adhocracy4-476 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
adding multiple answer text to answer page
</issue>
<code>
[start of adhocracy4/comments_async/serializers.py]
1 from django.conf import settings
2 from django.utils.translation import ugettext as _
3 from easy_thumbnails.files import get_thumbnailer
4 from rest_framework import serializers
5
6 from adhocracy4.comments.models import Comment
7
8
9 class CommentSerializer(serializers.ModelSerializer):
10 """Default Serializer for the comments."""
11
12 user_name = serializers.SerializerMethodField()
13 user_pk = serializers.SerializerMethodField()
14 user_profile_url = serializers.SerializerMethodField()
15 user_image = serializers.SerializerMethodField()
16 is_deleted = serializers.SerializerMethodField()
17 ratings = serializers.SerializerMethodField()
18 is_moderator = serializers.SerializerMethodField()
19
20 class Meta:
21 model = Comment
22 read_only_fields = ('modified', 'created', 'id',
23 'user_name', 'user_pk', 'user_image',
24 'ratings', 'content_type', 'object_pk')
25 exclude = ('creator', 'is_censored', 'is_removed')
26
27 def to_representation(self, instance):
28 """
29 Create a dictionary form categories.
30
31 Gets the categories and adds them along with their values
32 to a dictionary.
33 """
34 ret = super().to_representation(instance)
35 categories = {}
36 if ret['comment_categories']:
37 category_choices = getattr(settings,
38 'A4_COMMENT_CATEGORIES', '')
39 if category_choices:
40 category_choices = dict((x, str(y)) for x, y
41 in category_choices)
42 category_list = ret['comment_categories'].strip('[]').split(',')
43 for category in category_list:
44 if category in category_choices:
45 categories[category] = category_choices[category]
46 else:
47 categories[category] = category
48 ret['comment_categories'] = categories
49 return ret
50
51 def to_internal_value(self, data):
52 data = super().to_internal_value(data)
53 if 'comment_categories' in data:
54 value = data.get('comment_categories')
55 if value == '' or value == '[]':
56 raise serializers.ValidationError({
57 'comment_categories': _('Please choose a category')
58 })
59 return data
60
61 def get_user_pk(self, obj):
62 if (obj.is_censored or obj.is_removed):
63 return -1
64 return str(obj.creator.id)
65
66 def get_user_profile_url(self, obj):
67 if obj.is_censored or obj.is_removed:
68 return ''
69 try:
70 return obj.creator.get_absolute_url()
71 except AttributeError:
72 return ''
73
74 def get_user_name(self, obj):
75 """Don't show username if comment is marked removed or censored."""
76 if(obj.is_censored or obj.is_removed):
77 return _('unknown user')
78 return obj.creator.get_short_name()
79
80 def get_user_image(self, obj):
81 """Load small thumbnail images for user images."""
82 if(obj.is_censored or obj.is_removed):
83 return None
84 try:
85 if obj.creator.avatar:
86 avatar = get_thumbnailer(obj.creator.avatar)['avatar']
87 return avatar.url
88 except AttributeError:
89 pass
90 return None
91
92 def get_is_moderator(self, obj):
93 return obj.project.has_moderator(obj.creator)
94
95 def get_is_deleted(self, obj):
96 """Return true if one of the flags is set."""
97 return (obj.is_censored or obj.is_removed)
98
99 def get_ratings(self, comment):
100 """
101 Get positive and negative rating count.
102
103 As well as info on the request users rating
104 """
105 user = self.context['request'].user
106 positive_ratings = comment.ratings.filter(value=1).count()
107 negative_ratings = comment.ratings.filter(value=-1).count()
108
109 if user.is_authenticated:
110 user_rating = comment.ratings.filter(creator=user).first()
111 else:
112 user_rating = None
113
114 if user_rating:
115 user_rating_value = user_rating.value
116 user_rating_id = user_rating.pk
117 else:
118 user_rating_value = None
119 user_rating_id = None
120
121 result = {
122 'positive_ratings': positive_ratings,
123 'negative_ratings': negative_ratings,
124 'current_user_rating_value': user_rating_value,
125 'current_user_rating_id': user_rating_id
126 }
127
128 return result
129
130
131 class CommentListSerializer(CommentSerializer):
132 """Serializer for the comments to be used when viewed as list."""
133
134 comment = serializers.SerializerMethodField()
135
136 def get_comment(self, obj):
137 if obj.is_removed:
138 return _('deleted by creator')
139 if obj.is_censored:
140 return _('deleted by moderator')
141 return obj.comment
142
143
144 class ThreadSerializer(CommentSerializer):
145 """Serializes a comment including child comment (replies)."""
146
147 child_comments = CommentSerializer(many=True, read_only=True)
148
149
150 class ThreadListSerializer(CommentListSerializer):
151 """
152 Serializes comments when viewed.
153
154 As list including child comment (replies).
155 """
156
157 child_comments = CommentListSerializer(many=True, read_only=True)
158
[end of adhocracy4/comments_async/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/adhocracy4/comments_async/serializers.py b/adhocracy4/comments_async/serializers.py
--- a/adhocracy4/comments_async/serializers.py
+++ b/adhocracy4/comments_async/serializers.py
@@ -21,7 +21,8 @@
model = Comment
read_only_fields = ('modified', 'created', 'id',
'user_name', 'user_pk', 'user_image',
- 'ratings', 'content_type', 'object_pk')
+ 'user_image_fallback', 'ratings',
+ 'content_type', 'object_pk')
exclude = ('creator', 'is_censored', 'is_removed')
def to_representation(self, instance):
@@ -77,6 +78,17 @@
return _('unknown user')
return obj.creator.get_short_name()
+ def get_user_image_fallback(self, obj):
+ """Load small thumbnail images for default user images."""
+ if(obj.is_censored or obj.is_removed):
+ return None
+ try:
+ if obj.creator.avatar_fallback:
+ return obj.creator.avatar_fallback
+ except AttributeError:
+ pass
+ return None
+
def get_user_image(self, obj):
"""Load small thumbnail images for user images."""
if(obj.is_censored or obj.is_removed):
@@ -87,7 +99,7 @@
return avatar.url
except AttributeError:
pass
- return None
+ return self.get_user_image_fallback(obj)
def get_is_moderator(self, obj):
return obj.project.has_moderator(obj.creator)
| {"golden_diff": "diff --git a/adhocracy4/comments_async/serializers.py b/adhocracy4/comments_async/serializers.py\n--- a/adhocracy4/comments_async/serializers.py\n+++ b/adhocracy4/comments_async/serializers.py\n@@ -21,7 +21,8 @@\n model = Comment\n read_only_fields = ('modified', 'created', 'id',\n 'user_name', 'user_pk', 'user_image',\n- 'ratings', 'content_type', 'object_pk')\n+ 'user_image_fallback', 'ratings',\n+ 'content_type', 'object_pk')\n exclude = ('creator', 'is_censored', 'is_removed')\n \n def to_representation(self, instance):\n@@ -77,6 +78,17 @@\n return _('unknown user')\n return obj.creator.get_short_name()\n \n+ def get_user_image_fallback(self, obj):\n+ \"\"\"Load small thumbnail images for default user images.\"\"\"\n+ if(obj.is_censored or obj.is_removed):\n+ return None\n+ try:\n+ if obj.creator.avatar_fallback:\n+ return obj.creator.avatar_fallback\n+ except AttributeError:\n+ pass\n+ return None\n+\n def get_user_image(self, obj):\n \"\"\"Load small thumbnail images for user images.\"\"\"\n if(obj.is_censored or obj.is_removed):\n@@ -87,7 +99,7 @@\n return avatar.url\n except AttributeError:\n pass\n- return None\n+ return self.get_user_image_fallback(obj)\n \n def get_is_moderator(self, obj):\n return obj.project.has_moderator(obj.creator)\n", "issue": "adding multiple answer text to answer page\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.utils.translation import ugettext as _\nfrom easy_thumbnails.files import get_thumbnailer\nfrom rest_framework import serializers\n\nfrom adhocracy4.comments.models import Comment\n\n\nclass CommentSerializer(serializers.ModelSerializer):\n \"\"\"Default Serializer for the comments.\"\"\"\n\n user_name = serializers.SerializerMethodField()\n user_pk = serializers.SerializerMethodField()\n user_profile_url = serializers.SerializerMethodField()\n user_image = serializers.SerializerMethodField()\n is_deleted = serializers.SerializerMethodField()\n ratings = serializers.SerializerMethodField()\n is_moderator = serializers.SerializerMethodField()\n\n class Meta:\n model = Comment\n read_only_fields = ('modified', 'created', 'id',\n 'user_name', 'user_pk', 'user_image',\n 'ratings', 'content_type', 'object_pk')\n exclude = ('creator', 'is_censored', 'is_removed')\n\n def to_representation(self, instance):\n \"\"\"\n Create a dictionary form categories.\n\n Gets the categories and adds them along with their values\n to a dictionary.\n \"\"\"\n ret = super().to_representation(instance)\n categories = {}\n if ret['comment_categories']:\n category_choices = getattr(settings,\n 'A4_COMMENT_CATEGORIES', '')\n if category_choices:\n category_choices = dict((x, str(y)) for x, y\n in category_choices)\n category_list = ret['comment_categories'].strip('[]').split(',')\n for category in category_list:\n if category in category_choices:\n categories[category] = category_choices[category]\n else:\n categories[category] = category\n ret['comment_categories'] = categories\n return ret\n\n def to_internal_value(self, data):\n data = super().to_internal_value(data)\n if 'comment_categories' in data:\n value = data.get('comment_categories')\n if value == '' or value == '[]':\n raise serializers.ValidationError({\n 'comment_categories': _('Please choose a category')\n })\n return data\n\n def get_user_pk(self, obj):\n if (obj.is_censored or obj.is_removed):\n return -1\n return str(obj.creator.id)\n\n def get_user_profile_url(self, obj):\n if obj.is_censored or obj.is_removed:\n return ''\n try:\n return 
obj.creator.get_absolute_url()\n except AttributeError:\n return ''\n\n def get_user_name(self, obj):\n \"\"\"Don't show username if comment is marked removed or censored.\"\"\"\n if(obj.is_censored or obj.is_removed):\n return _('unknown user')\n return obj.creator.get_short_name()\n\n def get_user_image(self, obj):\n \"\"\"Load small thumbnail images for user images.\"\"\"\n if(obj.is_censored or obj.is_removed):\n return None\n try:\n if obj.creator.avatar:\n avatar = get_thumbnailer(obj.creator.avatar)['avatar']\n return avatar.url\n except AttributeError:\n pass\n return None\n\n def get_is_moderator(self, obj):\n return obj.project.has_moderator(obj.creator)\n\n def get_is_deleted(self, obj):\n \"\"\"Return true if one of the flags is set.\"\"\"\n return (obj.is_censored or obj.is_removed)\n\n def get_ratings(self, comment):\n \"\"\"\n Get positive and negative rating count.\n\n As well as info on the request users rating\n \"\"\"\n user = self.context['request'].user\n positive_ratings = comment.ratings.filter(value=1).count()\n negative_ratings = comment.ratings.filter(value=-1).count()\n\n if user.is_authenticated:\n user_rating = comment.ratings.filter(creator=user).first()\n else:\n user_rating = None\n\n if user_rating:\n user_rating_value = user_rating.value\n user_rating_id = user_rating.pk\n else:\n user_rating_value = None\n user_rating_id = None\n\n result = {\n 'positive_ratings': positive_ratings,\n 'negative_ratings': negative_ratings,\n 'current_user_rating_value': user_rating_value,\n 'current_user_rating_id': user_rating_id\n }\n\n return result\n\n\nclass CommentListSerializer(CommentSerializer):\n \"\"\"Serializer for the comments to be used when viewed as list.\"\"\"\n\n comment = serializers.SerializerMethodField()\n\n def get_comment(self, obj):\n if obj.is_removed:\n return _('deleted by creator')\n if obj.is_censored:\n return _('deleted by moderator')\n return obj.comment\n\n\nclass ThreadSerializer(CommentSerializer):\n \"\"\"Serializes a comment including child comment (replies).\"\"\"\n\n child_comments = CommentSerializer(many=True, read_only=True)\n\n\nclass ThreadListSerializer(CommentListSerializer):\n \"\"\"\n Serializes comments when viewed.\n\n As list including child comment (replies).\n \"\"\"\n\n child_comments = CommentListSerializer(many=True, read_only=True)\n", "path": "adhocracy4/comments_async/serializers.py"}]} | 1,952 | 354 |
gh_patches_debug_75 | rasdani/github-patches | git_diff | kedro-org__kedro-2092 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release Kedro `0.18.4`
### Depends on:
- Dataset issues
- Spaceflights tutorial documentation
- Open PRs related to datasets:
- [x] https://github.com/kedro-org/kedro/pull/2082
- [x] https://github.com/kedro-org/kedro/pull/1746
- [x] https://github.com/kedro-org/kedro/pull/1992
- [x] https://github.com/kedro-org/kedro/pull/1865
- [x] https://github.com/kedro-org/kedro/pull/1312
- [x] https://github.com/kedro-org/kedro/pull/1844
- [x] https://github.com/kedro-org/kedro/pull/1962
- [x] https://github.com/kedro-org/kedro/pull/1964
- [x] https://github.com/kedro-org/kedro/pull/1931
- [x] https://github.com/kedro-org/kedro/pull/1587
For the above PRs: if it's nearly finished, but the author isn't responding, we as a team can take over and finish the PR. If the PR still needs a lot of work and the author isn't responding, I suggest we close it and ask them to re-open in the new `kedro-datasets` repo.
</issue>
<code>
[start of kedro/__init__.py]
1 """Kedro is a framework that makes it easy to build robust and scalable
2 data pipelines by providing uniform project templates, data abstraction,
3 configuration and pipeline assembly.
4 """
5
6 __version__ = "0.18.3"
7
8
9 import logging
10
11 logging.getLogger(__name__).addHandler(logging.NullHandler())
12
[end of kedro/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kedro/__init__.py b/kedro/__init__.py
--- a/kedro/__init__.py
+++ b/kedro/__init__.py
@@ -3,7 +3,7 @@
configuration and pipeline assembly.
"""
-__version__ = "0.18.3"
+__version__ = "0.18.4"
import logging
| {"golden_diff": "diff --git a/kedro/__init__.py b/kedro/__init__.py\n--- a/kedro/__init__.py\n+++ b/kedro/__init__.py\n@@ -3,7 +3,7 @@\n configuration and pipeline assembly.\n \"\"\"\n \n-__version__ = \"0.18.3\"\n+__version__ = \"0.18.4\"\n \n \n import logging\n", "issue": "Release Kedro `0.18.4`\n### Depends on:\n- Dataset issues\n- Spaceflights tutorial documentation\n- Open PRs related to datasets:\n - [x] https://github.com/kedro-org/kedro/pull/2082\n - [x] https://github.com/kedro-org/kedro/pull/1746\n - [x] https://github.com/kedro-org/kedro/pull/1992\n - [x] https://github.com/kedro-org/kedro/pull/1865\n - [x] https://github.com/kedro-org/kedro/pull/1312\n - [x] https://github.com/kedro-org/kedro/pull/1844\n - [x] https://github.com/kedro-org/kedro/pull/1962\n - [x] https://github.com/kedro-org/kedro/pull/1964\n - [x] https://github.com/kedro-org/kedro/pull/1931\n - [x] https://github.com/kedro-org/kedro/pull/1587\n\nFor the above PRs: if it's nearly finished, but the author isn't responding, we as a team can take over and finish the PR. If the PR still needs a lot of work and the author isn't responding, I suggest we close it and ask them to re-open in the new `kedro-datasets` repo. \n\n", "before_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\n__version__ = \"0.18.3\"\n\n\nimport logging\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "kedro/__init__.py"}]} | 962 | 88 |
gh_patches_debug_62231 | rasdani/github-patches | git_diff | obspy__obspy-1673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Parsing SEED: 'Date is required.' Warning
Hi,
Each time I want to read a dataless SEED file covering different periods of time, I get this annoying warning message:
```
from obspy.io.xseed import Parser
from obspy import UTCDateTime
Parser('http://geoscope.ipgp.fr/metadata/G/dataless.G.CAN.seed')
/Users/bonaime/git/obspy/obspy/io/xseed/fields.py:374: UserWarning: Date is required. warnings.warn('Date is required.', UserWarning)
```
Is there a nice way to avoid this warning? I tried the following, but it is not working:
```python
from obspy.io.xseed import Parser
from obspy import UTCDateTime
Parser('http://geoscope.ipgp.fr/metadata/G/dataless.G.CAN.seed').get_paz('G.CAN.00.BHZ', datetime=UTCDateTime())
```
and the result is
```
/Users/bonaime/git/obspy/obspy/io/xseed/fields.py:374: UserWarning: Date is required.
warnings.warn('Date is required.', UserWarning)
Out[1]:
{u'digitizer_gain': 1677720.0,
u'gain': 1.24658e+17,
u'poles': [(-0.0120768+0.011706j),
(-0.0120768-0.011706j),
(-36.4684+66.8452j),
(-36.4684-66.8452j),
(-29.8656+380.54j),
(-29.8656-380.54j),
(-12145.6+0j),
(-12145.6+0j)],
u'seismometer_gain': 3450.0,
u'sensitivity': 5788280000.0,
u'zeros': [0j, 0j]}
```
</issue>
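
Until the parser itself is fixed (the accepted patch below lets the blockette 51 end-effective-time field be empty), the warning can be silenced locally with the standard warnings machinery. This is only a workaround sketch, assuming the obspy version from the report:

```python
import warnings

from obspy.io.xseed import Parser

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="Date is required.", category=UserWarning)
    parser = Parser("http://geoscope.ipgp.fr/metadata/G/dataless.G.CAN.seed")
```
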
<code>
[start of obspy/io/xseed/blockette/blockette051.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import (absolute_import, division, print_function,
3 unicode_literals)
4 from future.builtins import * # NOQA
5
6 from .blockette import Blockette
7 from ..fields import Integer, VariableString
8
9
10 class Blockette051(Blockette):
11 """
12 Blockette 051: Station Comment Blockette.
13
14 Sample:
15 05100351992,001~1992,002~0740000000
16 """
17
18 id = 51
19 name = "Station Comment"
20 fields = [
21 VariableString(3, "Beginning effective time", 1, 22, 'T'),
22 VariableString(4, "End effective time", 1, 22, 'T', optional=True),
23 Integer(5, "Comment code key", 4, xpath=31),
24 Integer(6, "Comment level", 6, ignore=True)
25 ]
26
[end of obspy/io/xseed/blockette/blockette051.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/obspy/io/xseed/blockette/blockette051.py b/obspy/io/xseed/blockette/blockette051.py
--- a/obspy/io/xseed/blockette/blockette051.py
+++ b/obspy/io/xseed/blockette/blockette051.py
@@ -19,7 +19,7 @@
name = "Station Comment"
fields = [
VariableString(3, "Beginning effective time", 1, 22, 'T'),
- VariableString(4, "End effective time", 1, 22, 'T', optional=True),
+ VariableString(4, "End effective time", 0, 22, 'T', optional=True),
Integer(5, "Comment code key", 4, xpath=31),
Integer(6, "Comment level", 6, ignore=True)
]
| {"golden_diff": "diff --git a/obspy/io/xseed/blockette/blockette051.py b/obspy/io/xseed/blockette/blockette051.py\n--- a/obspy/io/xseed/blockette/blockette051.py\n+++ b/obspy/io/xseed/blockette/blockette051.py\n@@ -19,7 +19,7 @@\n name = \"Station Comment\"\n fields = [\n VariableString(3, \"Beginning effective time\", 1, 22, 'T'),\n- VariableString(4, \"End effective time\", 1, 22, 'T', optional=True),\n+ VariableString(4, \"End effective time\", 0, 22, 'T', optional=True),\n Integer(5, \"Comment code key\", 4, xpath=31),\n Integer(6, \"Comment level\", 6, ignore=True)\n ]\n", "issue": "Parsing SEED: 'Date is required.' Warning\nHi,\n\nEach time I want to read a dataless with different periods of time, I have this annoying warning message:\n\n```\nfrom obspy.io.xseed import Parser\nfrom obspy import UTCDateTime\nParser('http://geoscope.ipgp.fr/metadata/G/dataless.G.CAN.seed')\n/Users/bonaime/git/obspy/obspy/io/xseed/fields.py:374: UserWarning: Date is required. warnings.warn('Date is required.', UserWarning)\n```\n\nIs there a nice way to avoid this warning ? I try that but it is not working\n\n``` code\nfrom obspy.io.xseed import Parser\nfrom obspy import UTCDateTime\nParser('http://geoscope.ipgp.fr/metadata/G/dataless.G.CAN.seed').get_paz('G.CAN.00.BHZ', datetime=UTCDateTime())\n\n```\n\nand the result is\n\n```\n/Users/bonaime/git/obspy/obspy/io/xseed/fields.py:374: UserWarning: Date is required.\n warnings.warn('Date is required.', UserWarning)\nOut[1]:\n{u'digitizer_gain': 1677720.0,\n u'gain': 1.24658e+17,\n u'poles': [(-0.0120768+0.011706j),\n (-0.0120768-0.011706j),\n (-36.4684+66.8452j),\n (-36.4684-66.8452j),\n (-29.8656+380.54j),\n (-29.8656-380.54j),\n (-12145.6+0j),\n (-12145.6+0j)],\n u'seismometer_gain': 3450.0,\n u'sensitivity': 5788280000.0,\n u'zeros': [0j, 0j]}\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\nfrom .blockette import Blockette\nfrom ..fields import Integer, VariableString\n\n\nclass Blockette051(Blockette):\n \"\"\"\n Blockette 051: Station Comment Blockette.\n\n Sample:\n 05100351992,001~1992,002~0740000000\n \"\"\"\n\n id = 51\n name = \"Station Comment\"\n fields = [\n VariableString(3, \"Beginning effective time\", 1, 22, 'T'),\n VariableString(4, \"End effective time\", 1, 22, 'T', optional=True),\n Integer(5, \"Comment code key\", 4, xpath=31),\n Integer(6, \"Comment level\", 6, ignore=True)\n ]\n", "path": "obspy/io/xseed/blockette/blockette051.py"}]} | 1,292 | 198 |
gh_patches_debug_12780 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2017 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E7003 Errors when using Fn::Transform inside a Mapping
*cfn-lint version: 0.49.2*
*Description of issue.*
#2006 tightened what is considered valid for use in a Mapping. This causes it to reject what otherwise appears to be a valid use of `Fn::Transform` as the body of a Mapping.
For example, this snippet is valid CFN:
```yaml
Mappings:
AwsAgentPlatformMap:
Fn::Transform:
Name: AWS::Include
Parameters:
Location: s3://my-bucket-name/version/3.0.1/amazonlinux2/a-json-file.json
```
This usage trips the newly enhanced regex:
```
E7003 Mapping key (Fn::Transform) has invalid name. Name has to be alphanumeric, '-' or '.'
```
</issue>
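
The fix that was eventually accepted (shown below) simply exempts the intrinsic transform key from the strict pattern. A standalone sketch of that check, with the regex taken from the rule:

```python
import re

VALID_MAPPING_KEY = re.compile(r"^[a-zA-Z0-9.-]{1,255}$")

def is_valid_mapping_key(key):
    # `Fn::Transform` is valid CloudFormation inside a Mapping, so allow it explicitly.
    return key == "Fn::Transform" or bool(VALID_MAPPING_KEY.match(key))
```
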
<code>
[start of src/cfnlint/rules/mappings/KeyName.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9 from cfnlint.helpers import REGEX_ALPHANUMERIC
10
11
12 class KeyName(CloudFormationLintRule):
13 """Check if Mapping Keys are type string"""
14 id = 'E7003'
15 shortdesc = 'Mapping keys are strings and alphanumeric'
16 description = 'Check if Mappings keys are properly typed as strings and alphanumeric'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
18 tags = ['mappings']
19
20 def check_attribute(self, key, path):
21 """ Check the key name for string and alphanumeric"""
22 matches = []
23 if not isinstance(key, six.string_types):
24 message = 'Mapping attribute ({0}) has to be a string.'
25 matches.append(RuleMatch(path[:], message.format(key)))
26 elif not re.match(REGEX_ALPHANUMERIC, key):
27 message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'
28 matches.append(RuleMatch(path[:], message.format(key)))
29
30 return matches
31
32 def check_key(self, key, path):
33 """ Check the key name for string and alphanumeric"""
34 matches = []
35 if not isinstance(key, six.string_types):
36 message = 'Mapping key ({0}) has to be a string.'
37 matches.append(RuleMatch(path[:], message.format(key)))
38 elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):
39 message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \'-\' or \'.\''
40 matches.append(RuleMatch(path[:], message.format(key)))
41
42 return matches
43
44 def match(self, cfn):
45 matches = []
46
47 mappings = cfn.template.get('Mappings', {})
48 for mapping_name, mapping_value in mappings.items():
49 if isinstance(mapping_value, dict):
50 for key_name, key_value in mapping_value.items():
51 matches.extend(self.check_key(
52 key_name, ['Mappings', mapping_name, key_name]))
53 if isinstance(key_value, dict):
54 for sub_key_name, _ in key_value.items():
55 matches.extend(
56 self.check_attribute(
57 sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
58
59 return matches
60
[end of src/cfnlint/rules/mappings/KeyName.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py
--- a/src/cfnlint/rules/mappings/KeyName.py
+++ b/src/cfnlint/rules/mappings/KeyName.py
@@ -35,7 +35,7 @@
if not isinstance(key, six.string_types):
message = 'Mapping key ({0}) has to be a string.'
matches.append(RuleMatch(path[:], message.format(key)))
- elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):
+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key) and key != 'Fn::Transform':
message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \'-\' or \'.\''
matches.append(RuleMatch(path[:], message.format(key)))
| {"golden_diff": "diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py\n--- a/src/cfnlint/rules/mappings/KeyName.py\n+++ b/src/cfnlint/rules/mappings/KeyName.py\n@@ -35,7 +35,7 @@\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n- elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key) and key != 'Fn::Transform':\n message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n", "issue": "E7003 Errors when using Fn::Transform inside a Mapping\n*cfn-lint version: 0.49.2*\r\n\r\n*Description of issue.*\r\n#2006 tightened what is considered valid for use in a Mapping. This causes it to reject what otherwise appears to be a valid use of `Fn::Transform` as the body of a Mapping.\r\n\r\nFor example, this snippet is valid CFN:\r\n\r\n```yaml\r\nMappings:\r\n AwsAgentPlatformMap:\r\n Fn::Transform:\r\n Name: AWS::Include\r\n Parameters:\r\n Location: s3://my-bucket-name/version/3.0.1/amazonlinux2/a-json-file.json\r\n```\r\n\r\nThis usage trips the newly enhanced regex:\r\n\r\n```\r\nE7003 Mapping key (Fn::Transform) has invalid name. Name has to be alphanumeric, '-' or '.'\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_attribute(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping attribute ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key):\n message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n message = 'Mapping key ({0}) has invalid name. 
Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}]} | 1,371 | 201 |
gh_patches_debug_28855 | rasdani/github-patches | git_diff | ultrabug__py3status-2101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
external_script modifies numeric output
The external_script module converts numeric values to a numeric type. This removes the original formatting of the input and is undesired.
To reproduce, create an external script that simply echoes "0.123000"; the output in the bar will be "0.123".
</issue>
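
A short illustration of the report, using nothing beyond the standard library: round-tripping the script output through float() is what discards the original formatting. The accepted patch below adds a `convert_numbers` option so this conversion can be turned off.

```python
raw = "0.123000"        # what the external script printed
converted = float(raw)  # what the module keeps after the implicit conversion
print(raw, "->", converted)  # 0.123000 -> 0.123, trailing zeros are lost
```
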
<code>
[start of py3status/modules/external_script.py]
1 """
2 Display output of a given script.
3
4 Display output of any executable script set by `script_path`. Only the first
5 two lines of output will be used. The first line is used as the displayed
6 text. If the output has two or more lines, the second line is set as the text
7 color (and should hence be a valid hex color code such as #FF0000 for red).
8 The script should not have any parameters, but it could work.
9
10 Configuration parameters:
11 button_show_notification: button to show notification with full output
12 (default None)
13 cache_timeout: how often we refresh this module in seconds
14 (default 15)
15 format: see placeholders below (default '{output}')
16 localize: should script output be localized (if available)
17 (default True)
18 script_path: script you want to show output of (compulsory)
19 (default None)
20 strip_output: shall we strip leading and trailing spaces from output
21 (default False)
22
23 Format placeholders:
24 {lines} number of lines in the output
25 {output} output of script given by "script_path"
26
27 Examples:
28 ```
29 external_script {
30 format = "my name is {output}"
31 script_path = "/usr/bin/whoami"
32 }
33 ```
34
35 @author frimdo [email protected]
36
37 SAMPLE OUTPUT
38 {'full_text': 'script output'}
39
40 example
41 {'full_text': 'It is now: Wed Feb 22 22:24:13'}
42 """
43
44 import re
45
46 STRING_ERROR = "missing script_path"
47
48
49 class Py3status:
50 """
51 """
52
53 # available configuration parameters
54 button_show_notification = None
55 cache_timeout = 15
56 format = "{output}"
57 localize = True
58 script_path = None
59 strip_output = False
60
61 def post_config_hook(self):
62 if not self.script_path:
63 raise Exception(STRING_ERROR)
64
65 def external_script(self):
66 output_lines = None
67 response = {}
68 response["cached_until"] = self.py3.time_in(self.cache_timeout)
69 try:
70 self.output = self.py3.command_output(
71 self.script_path, shell=True, localized=self.localize
72 )
73 output_lines = self.output.splitlines()
74 if len(output_lines) > 1:
75 output_color = output_lines[1]
76 if re.search(r"^#[0-9a-fA-F]{6}$", output_color):
77 response["color"] = output_color
78 except self.py3.CommandError as e:
79 # something went wrong show error to user
80 output = e.output or e.error
81 self.py3.error(output)
82
83 if output_lines:
84 output = output_lines[0]
85 if self.strip_output:
86 output = output.strip()
87 # If we get something that looks numeric then we convert it
88 # to a numeric type because this can be helpful. for example:
89 #
90 # external_script {
91 # format = "file is [\?if=output>10 big|small]"
92 # script_path = "cat /tmp/my_file | wc -l"
93 # }
94 try:
95 output = int(output)
96 except ValueError:
97 try:
98 output = float(output)
99 except ValueError:
100 pass
101 else:
102 output = ""
103
104 response["full_text"] = self.py3.safe_format(
105 self.format, {"output": output, "lines": len(output_lines)}
106 )
107 return response
108
109 def on_click(self, event):
110 button = event["button"]
111 if button == self.button_show_notification:
112 self.py3.notify_user(self.output)
113 self.py3.prevent_refresh()
114
115
116 if __name__ == "__main__":
117 """
118 Run module in test mode.
119 """
120 from py3status.module_test import module_test
121
122 module_test(Py3status)
123
[end of py3status/modules/external_script.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py3status/modules/external_script.py b/py3status/modules/external_script.py
--- a/py3status/modules/external_script.py
+++ b/py3status/modules/external_script.py
@@ -12,6 +12,8 @@
(default None)
cache_timeout: how often we refresh this module in seconds
(default 15)
+ convert_numbers: convert decimal numbers to a numeric type
+ (default True)
format: see placeholders below (default '{output}')
localize: should script output be localized (if available)
(default True)
@@ -53,6 +55,7 @@
# available configuration parameters
button_show_notification = None
cache_timeout = 15
+ convert_numbers = True
format = "{output}"
localize = True
script_path = None
@@ -91,13 +94,14 @@
# format = "file is [\?if=output>10 big|small]"
# script_path = "cat /tmp/my_file | wc -l"
# }
- try:
- output = int(output)
- except ValueError:
+ if self.convert_numbers is True:
try:
- output = float(output)
+ output = int(output)
except ValueError:
- pass
+ try:
+ output = float(output)
+ except ValueError:
+ pass
else:
output = ""
| {"golden_diff": "diff --git a/py3status/modules/external_script.py b/py3status/modules/external_script.py\n--- a/py3status/modules/external_script.py\n+++ b/py3status/modules/external_script.py\n@@ -12,6 +12,8 @@\n (default None)\n cache_timeout: how often we refresh this module in seconds\n (default 15)\n+ convert_numbers: convert decimal numbers to a numeric type\n+ (default True)\n format: see placeholders below (default '{output}')\n localize: should script output be localized (if available)\n (default True)\n@@ -53,6 +55,7 @@\n # available configuration parameters\n button_show_notification = None\n cache_timeout = 15\n+ convert_numbers = True\n format = \"{output}\"\n localize = True\n script_path = None\n@@ -91,13 +94,14 @@\n # format = \"file is [\\?if=output>10 big|small]\"\n # script_path = \"cat /tmp/my_file | wc -l\"\n # }\n- try:\n- output = int(output)\n- except ValueError:\n+ if self.convert_numbers is True:\n try:\n- output = float(output)\n+ output = int(output)\n except ValueError:\n- pass\n+ try:\n+ output = float(output)\n+ except ValueError:\n+ pass\n else:\n output = \"\"\n", "issue": "external_script modifies numeric output\nThe external_script module converts numeric values to a numeric type. This removes the original formatting of the input and is undesired.\r\n\r\nTo reproduce create an external script and simply echo \"0.123000\", the output in the bar will be \"0.123\".\n", "before_files": [{"content": "\"\"\"\nDisplay output of a given script.\n\nDisplay output of any executable script set by `script_path`. Only the first\ntwo lines of output will be used. The first line is used as the displayed\ntext. If the output has two or more lines, the second line is set as the text\ncolor (and should hence be a valid hex color code such as #FF0000 for red).\nThe script should not have any parameters, but it could work.\n\nConfiguration parameters:\n button_show_notification: button to show notification with full output\n (default None)\n cache_timeout: how often we refresh this module in seconds\n (default 15)\n format: see placeholders below (default '{output}')\n localize: should script output be localized (if available)\n (default True)\n script_path: script you want to show output of (compulsory)\n (default None)\n strip_output: shall we strip leading and trailing spaces from output\n (default False)\n\nFormat placeholders:\n {lines} number of lines in the output\n {output} output of script given by \"script_path\"\n\nExamples:\n```\nexternal_script {\n format = \"my name is {output}\"\n script_path = \"/usr/bin/whoami\"\n}\n```\n\n@author frimdo [email protected]\n\nSAMPLE OUTPUT\n{'full_text': 'script output'}\n\nexample\n{'full_text': 'It is now: Wed Feb 22 22:24:13'}\n\"\"\"\n\nimport re\n\nSTRING_ERROR = \"missing script_path\"\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n\n # available configuration parameters\n button_show_notification = None\n cache_timeout = 15\n format = \"{output}\"\n localize = True\n script_path = None\n strip_output = False\n\n def post_config_hook(self):\n if not self.script_path:\n raise Exception(STRING_ERROR)\n\n def external_script(self):\n output_lines = None\n response = {}\n response[\"cached_until\"] = self.py3.time_in(self.cache_timeout)\n try:\n self.output = self.py3.command_output(\n self.script_path, shell=True, localized=self.localize\n )\n output_lines = self.output.splitlines()\n if len(output_lines) > 1:\n output_color = output_lines[1]\n if re.search(r\"^#[0-9a-fA-F]{6}$\", output_color):\n response[\"color\"] = 
output_color\n except self.py3.CommandError as e:\n # something went wrong show error to user\n output = e.output or e.error\n self.py3.error(output)\n\n if output_lines:\n output = output_lines[0]\n if self.strip_output:\n output = output.strip()\n # If we get something that looks numeric then we convert it\n # to a numeric type because this can be helpful. for example:\n #\n # external_script {\n # format = \"file is [\\?if=output>10 big|small]\"\n # script_path = \"cat /tmp/my_file | wc -l\"\n # }\n try:\n output = int(output)\n except ValueError:\n try:\n output = float(output)\n except ValueError:\n pass\n else:\n output = \"\"\n\n response[\"full_text\"] = self.py3.safe_format(\n self.format, {\"output\": output, \"lines\": len(output_lines)}\n )\n return response\n\n def on_click(self, event):\n button = event[\"button\"]\n if button == self.button_show_notification:\n self.py3.notify_user(self.output)\n self.py3.prevent_refresh()\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n module_test(Py3status)\n", "path": "py3status/modules/external_script.py"}]} | 1,688 | 316 |
gh_patches_debug_6294 | rasdani/github-patches | git_diff | e-valuation__EvaP-1353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Importing a backup made by update_production.sh does not work flawlessly.
Last week we wanted to do a production update. The JSON dump file created during that update could not be imported without issues:
- The dump does not contain the cronjob user, but it does contain foreign key references to that user, so it cannot be imported.
- The dump contains data that Django includes by default (auth, permission, ...). These need to be excluded when importing.
There should be some documentation on what needs to be executed to import this dump back into the database. We should also add a test (which could probably just run on Travis) that ensures this always works (dump, flush database, migrate, load dump).
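A rough sketch of such a round-trip test, written against Django's management commands (the exact options are assumptions and would need to match the project's real dump settings):
```python
from django.core.management import call_command

def test_dump_can_be_reimported(tmp_path):
    dump_file = tmp_path / "dump.json"
    # Dump with natural keys and without Django's default-only tables,
    # then wipe the database and load the dump back in.
    call_command(
        "dumpdata",
        use_natural_foreign_keys=True,
        use_natural_primary_keys=True,
        exclude=["contenttypes", "auth.permission"],
        output=str(dump_file),
    )
    call_command("flush", interactive=False)
    call_command("migrate", interactive=False)
    call_command("loaddata", str(dump_file))
```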
</issue>
<code>
[start of evap/evaluation/management/commands/dump_testdata.py]
1 import os
2
3 from django.conf import settings
4 from django.core.management.base import BaseCommand
5 from django.core.management import call_command
6
7
8 class Command(BaseCommand):
9 args = ''
10 help = 'Dumps all relevant contents of the database into test_data.json.'
11 requires_migrations_checks = True
12
13 def handle(self, *args, **options):
14 outfile_name = os.path.join(settings.BASE_DIR, "evaluation", "fixtures", "test_data.json")
15 call_command("dumpdata", "auth.group", "evaluation", "rewards", "grades", indent=2, output=outfile_name)
16
[end of evap/evaluation/management/commands/dump_testdata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/management/commands/dump_testdata.py b/evap/evaluation/management/commands/dump_testdata.py
--- a/evap/evaluation/management/commands/dump_testdata.py
+++ b/evap/evaluation/management/commands/dump_testdata.py
@@ -12,4 +12,6 @@
def handle(self, *args, **options):
outfile_name = os.path.join(settings.BASE_DIR, "evaluation", "fixtures", "test_data.json")
- call_command("dumpdata", "auth.group", "evaluation", "rewards", "grades", indent=2, output=outfile_name)
+ call_command(
+ "dumpdata", "auth.group", "evaluation", "rewards", "grades", indent=2,
+ output=outfile_name, natural_foreign=True, natural_primary=True)
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/dump_testdata.py b/evap/evaluation/management/commands/dump_testdata.py\n--- a/evap/evaluation/management/commands/dump_testdata.py\n+++ b/evap/evaluation/management/commands/dump_testdata.py\n@@ -12,4 +12,6 @@\n \n def handle(self, *args, **options):\n outfile_name = os.path.join(settings.BASE_DIR, \"evaluation\", \"fixtures\", \"test_data.json\")\n- call_command(\"dumpdata\", \"auth.group\", \"evaluation\", \"rewards\", \"grades\", indent=2, output=outfile_name)\n+ call_command(\n+ \"dumpdata\", \"auth.group\", \"evaluation\", \"rewards\", \"grades\", indent=2,\n+ output=outfile_name, natural_foreign=True, natural_primary=True)\n", "issue": "Importing a backup made by update_production.sh does not work flawlessly.\nLast week we wanted to do a production update. The json dump file created during that update could not be imported without issues:\r\n- The dump does not contain the cronjob user, but foreign key references to it. This can not be imported\r\n- The dump contains data included by django by default (auth, permission, ...). These need to be excluded when importing.\r\n\r\nThere should be some kind of documentation on what needs to be executed to import this dump back into the database. We should also add some test (could probably just run on travis) that ensures this always works (dump, flush database, migrate, load dump).\n", "before_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.core.management import call_command\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Dumps all relevant contents of the database into test_data.json.'\n requires_migrations_checks = True\n\n def handle(self, *args, **options):\n outfile_name = os.path.join(settings.BASE_DIR, \"evaluation\", \"fixtures\", \"test_data.json\")\n call_command(\"dumpdata\", \"auth.group\", \"evaluation\", \"rewards\", \"grades\", indent=2, output=outfile_name)\n", "path": "evap/evaluation/management/commands/dump_testdata.py"}]} | 844 | 190 |
gh_patches_debug_1061 | rasdani/github-patches | git_diff | kymatio__kymatio-352 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH+TST find a way of testing GPU code
With not too much investment in 💲 💰 it should be possible to set up a `jenkins` testing suite on Amazon AWS: the idea is to have a micro machine that costs 1c/h run the Jenkins server. When tests need to run, it would spawn a couple of GPU machines with different GPUs, ideally as spot instances, run the tests, and then shut them down again.
I looked into this at the very beginning of `kymatio`, but I don't really know how to set this up yet. If anybody has experience with this, feel free to try! :)
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import csv
5 import importlib
6 import os
7 import shutil
8 import sys
9 from setuptools import setup, find_packages
10
11 # Constants
12 DISTNAME = 'kymatio'
13 DESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'
14 URL = 'https://www.kymat.io'
15 LICENSE = 'BSD-3-Clause'
16
17
18 # Parse description
19 with open('README.md') as f:
20 README = f.read().split('\n')
21 LONG_DESCRIPTION = '\n'.join([x for x in README if not x[:3]=='[!['])
22
23
24 # Parse version.py
25 kymatio_version_spec = importlib.util.spec_from_file_location(
26 'kymatio_version', 'kymatio/version.py')
27 kymatio_version_module = importlib.util.module_from_spec(kymatio_version_spec)
28 kymatio_version_spec.loader.exec_module(kymatio_version_module)
29 VERSION = kymatio_version_module.version
30
31
32 # Parse requirements.txt
33 with open('requirements.txt', 'r') as f:
34 REQUIREMENTS = f.read().split('\n')
35
36
37 setup_info = dict(
38 # Metadata
39 name=DISTNAME,
40 version=VERSION,
41 author=('Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, '
42 'Michael Eickenberg, Mathieu Andreux, Georgios Exarchakis, '
43 'Louis Thiry, Vincent Lostanlen, Joakim Andén, '
44 'Tomás Angles, Gabriel Huang, Roberto Leonarduzzi'),
45 author_email=('[email protected], [email protected], '
46 '[email protected], [email protected], '
47 '[email protected], [email protected], '
48 '[email protected], [email protected], [email protected], '
49 '[email protected], [email protected], [email protected]'),
50 url=URL,
51 download_url='https://github.com/kymatio/kymatio/releases',
52 project_urls={
53 'Documentation': 'https://www.kymat.io/codereference.html',
54 'Source': 'https://github.com/kymatio/kymatio/',
55 'Tracker': 'https://github.com/kymatio/kymatio/issues',
56 'Authors': 'https://github.com/kymatio/kymatio/blob/master/AUTHORS.md'
57 },
58 classifiers=['Intended Audience :: Education',
59 'Intended Audience :: Science/Research',
60 'License :: OSI Approved :: BSD License',
61 'Natural Language :: English',
62 'Operating System :: MacOS',
63 'Operating System :: POSIX :: Linux',
64 'Programming Language :: Python :: 3.5',
65 'Programming Language :: Python :: 3.6',
66 'Programming Language :: Python :: 3.7',
67 'Programming Language :: Python :: 3.8',
68 'Topic :: Multimedia :: Graphics :: 3D Modeling',
69 'Topic :: Multimedia :: Sound/Audio :: Analysis',
70 'Topic :: Scientific/Engineering :: Artificial Intelligence',
71 'Topic :: Scientific/Engineering :: Chemistry',
72 'Topic :: Scientific/Engineering :: Image Recognition',
73 'Topic :: Scientific/Engineering :: Information Analysis',
74 'Topic :: Scientific/Engineering :: Mathematics',
75 'Topic :: Scientific/Engineering :: Physics',
76 'Topic :: Software Development :: Libraries :: Python Modules',
77 ],
78 description=DESCRIPTION,
79 long_description=LONG_DESCRIPTION,
80 long_description_content_type='text/markdown',
81 python_requires='>=3.5',
82 license=LICENSE,
83 packages=find_packages(exclude=('test',)),
84 install_requires=REQUIREMENTS,
85 zip_safe=True,
86 )
87
88 setup(**setup_info)
89
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
# Parse description
-with open('README.md') as f:
+with open('README.md', encoding='utf8') as f:
README = f.read().split('\n')
LONG_DESCRIPTION = '\n'.join([x for x in README if not x[:3]=='[!['])
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n \n \n # Parse description\n-with open('README.md') as f:\n+with open('README.md', encoding='utf8') as f:\n README = f.read().split('\\n')\n LONG_DESCRIPTION = '\\n'.join([x for x in README if not x[:3]=='[!['])\n", "issue": "ENH+TST find a way of testing GPU code\nWith not too much investment in \ud83d\udcb2 \ud83d\udcb0 it should be possible to set up a `jenkins` testing suite on amazon aws: The idea is to have a micro machine that costs 1c/h run the jenkins server. When tests should be run, this should somehow spawn a couple of GPU machines with different GPUs, ideally as spot instances, run the tests and then shut them down again.\r\nI looked into this at the very beginning of `kymatio`, but I don't really know how to set this up yet. If anybody has experience with this, feel free to try! :)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport csv\nimport importlib\nimport os\nimport shutil\nimport sys\nfrom setuptools import setup, find_packages\n\n# Constants\nDISTNAME = 'kymatio'\nDESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'\nURL = 'https://www.kymat.io'\nLICENSE = 'BSD-3-Clause'\n\n\n# Parse description\nwith open('README.md') as f:\n README = f.read().split('\\n')\n LONG_DESCRIPTION = '\\n'.join([x for x in README if not x[:3]=='[!['])\n\n\n# Parse version.py\nkymatio_version_spec = importlib.util.spec_from_file_location(\n 'kymatio_version', 'kymatio/version.py')\nkymatio_version_module = importlib.util.module_from_spec(kymatio_version_spec)\nkymatio_version_spec.loader.exec_module(kymatio_version_module)\nVERSION = kymatio_version_module.version\n\n\n# Parse requirements.txt\nwith open('requirements.txt', 'r') as f:\n REQUIREMENTS = f.read().split('\\n')\n\n\nsetup_info = dict(\n # Metadata\n name=DISTNAME,\n version=VERSION,\n author=('Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, '\n 'Michael Eickenberg, Mathieu Andreux, Georgios Exarchakis, '\n 'Louis Thiry, Vincent Lostanlen, Joakim And\u00e9n, '\n 'Tom\u00e1s Angles, Gabriel Huang, Roberto Leonarduzzi'),\n author_email=('[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], [email protected], '\n '[email protected], [email protected], [email protected]'),\n url=URL,\n download_url='https://github.com/kymatio/kymatio/releases',\n project_urls={\n 'Documentation': 'https://www.kymat.io/codereference.html',\n 'Source': 'https://github.com/kymatio/kymatio/',\n 'Tracker': 'https://github.com/kymatio/kymatio/issues',\n 'Authors': 'https://github.com/kymatio/kymatio/blob/master/AUTHORS.md'\n },\n classifiers=['Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Topic :: Multimedia :: Graphics :: 3D Modeling',\n 'Topic :: Multimedia :: Sound/Audio :: Analysis',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Chemistry',\n 'Topic :: Scientific/Engineering :: Image Recognition',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: 
Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n python_requires='>=3.5',\n license=LICENSE,\n packages=find_packages(exclude=('test',)),\n install_requires=REQUIREMENTS,\n zip_safe=True,\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}]} | 1,671 | 95 |
gh_patches_debug_6301 | rasdani/github-patches | git_diff | azavea__raster-vision-1235 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Predictor does not reset the scene's aoi_geometries and the raster source's extent_crop
Currently, the `Predictor` re-uses a `SceneConfig` from the pipeline config in the bundle (instead of creating a new one) and resets its `label_source` and `aoi_uris`.
https://github.com/azavea/raster-vision/blob/master/rastervision_core/rastervision/core/predictor.py#L70-L71
However, it should also do this for `raster_source.extent_crop` (#1030) and `aoi_geometries` (#1033). In general, it should be done for every field that cannot be safely assumed to be the same for the input scene.
Instead of having to add to this every time something new is added to the `SceneConfig` or any of its member classes, it might be better to create a new scene in the predictor with options from the command line.
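A minimal sketch of the field-clearing approach (field names are taken from this issue; building a fresh scene from command-line options would avoid this list having to grow):
```python
def reset_scene_for_prediction(scene):
    # Clear fields that belong to the original training/validation scene
    # and must not leak into predictions on new imagery.
    scene.label_source = None
    scene.aoi_uris = None
    scene.aoi_geometries = None
    scene.raster_source.extent_crop = None
    return scene
```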
</issue>
<code>
[start of rastervision_core/rastervision/core/predictor.py]
1 from os.path import join
2 import zipfile
3 import logging
4
5 from rastervision.pipeline import rv_config
6 from rastervision.pipeline.config import (build_config, upgrade_config)
7 from rastervision.pipeline.file_system.utils import (download_if_needed,
8 make_dir, file_to_json)
9 from rastervision.core.data.raster_source import ChannelOrderError
10 from rastervision.core.analyzer import StatsAnalyzerConfig
11
12 log = logging.getLogger(__name__)
13
14
15 class Predictor():
16 """Class for making predictions based off of a model bundle."""
17
18 def __init__(self,
19 model_bundle_uri,
20 tmp_dir,
21 update_stats=False,
22 channel_order=None):
23 """Creates a new Predictor.
24
25 Args:
26 model_bundle_uri: URI of the model bundle to use. Can be any
27 type of URI that Raster Vision can read.
28 tmp_dir: Temporary directory in which to store files that are used
29 by the Predictor. This directory is not cleaned up by this
30 class.
31 channel_order: Option for a new channel order to use for the
32 imagery being predicted against. If not present, the
33 channel_order from the original configuration in the predict
34 package will be used.
35 """
36 self.tmp_dir = tmp_dir
37 self.update_stats = update_stats
38 self.model_loaded = False
39
40 bundle_path = download_if_needed(model_bundle_uri, tmp_dir)
41 bundle_dir = join(tmp_dir, 'bundle')
42 make_dir(bundle_dir)
43 with zipfile.ZipFile(bundle_path, 'r') as bundle_zip:
44 bundle_zip.extractall(path=bundle_dir)
45
46 config_path = join(bundle_dir, 'pipeline-config.json')
47 config_dict = file_to_json(config_path)
48 rv_config.set_everett_config(
49 config_overrides=config_dict.get('rv_config'))
50 config_dict = upgrade_config(config_dict)
51 self.config = build_config(config_dict)
52 self.scene = self.config.dataset.validation_scenes[0]
53
54 if not hasattr(self.scene.raster_source, 'uris'):
55 raise Exception(
56 'raster_source in model bundle must have uris as field')
57
58 if not hasattr(self.scene.label_store, 'uri'):
59 raise Exception(
60 'label_store in model bundle must have uri as field')
61
62 for t in self.scene.raster_source.transformers:
63 t.update_root(bundle_dir)
64
65 if self.update_stats:
66 stats_analyzer = StatsAnalyzerConfig(
67 output_uri=join(bundle_dir, 'stats.json'))
68 self.config.analyzers = [stats_analyzer]
69
70 self.scene.label_source = None
71 self.scene.aoi_uris = None
72 self.config.dataset.train_scenes = [self.scene]
73 self.config.dataset.validation_scenes = [self.scene]
74 self.config.dataset.test_scenes = []
75 self.config.train_uri = bundle_dir
76
77 if channel_order is not None:
78 self.scene.raster_source.channel_order = channel_order
79
80 self.pipeline = None
81
82 def predict(self, image_uris, label_uri, vector_label_uri=None):
83 """Generate predictions for the given image.
84
85 Args:
86 image_uris: URIs of the images to make predictions against.
87 This can be any type of URI readable by Raster Vision
88 FileSystems.
89 label_uri: URI to save labels off into
90 vector_label_uri: URI to save vectorized labels for semantic segmentation
91 model bundles that support it
92 """
93 if self.pipeline is None:
94 self.scene.raster_source.uris = image_uris
95 self.pipeline = self.config.build(self.tmp_dir)
96 if not hasattr(self.pipeline, 'predict'):
97 raise Exception(
98 'pipeline in model bundle must have predict method')
99
100 try:
101 self.scene.raster_source.uris = image_uris
102 self.scene.label_store.uri = label_uri
103 if (hasattr(self.scene.label_store, 'vector_output')
104 and self.scene.label_store.vector_output):
105 if vector_label_uri:
106 for vo in self.scene.label_store.vector_output:
107 vo.uri = join(
108 vector_label_uri, '{}-{}.json'.format(
109 vo.class_id, vo.get_mode()))
110 else:
111 self.scene.label_store.vector_output = []
112 elif vector_label_uri:
113 log.warn(
114 'vector_label_uri was supplied but this model bundle does not '
115 'generate vector labels.')
116
117 if self.update_stats:
118 self.pipeline.analyze()
119 self.pipeline.predict()
120 except ChannelOrderError:
121 raise ValueError(
122 'The predict package is using a channel_order '
123 'with channels unavailable in the imagery.\nTo set a new '
124 'channel_order that only uses channels available in the '
125 'imagery, use the --channel-order option.')
126
[end of rastervision_core/rastervision/core/predictor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision_core/rastervision/core/predictor.py b/rastervision_core/rastervision/core/predictor.py
--- a/rastervision_core/rastervision/core/predictor.py
+++ b/rastervision_core/rastervision/core/predictor.py
@@ -69,6 +69,9 @@
self.scene.label_source = None
self.scene.aoi_uris = None
+ self.scene.aoi_geometries = None
+ self.scene.raster_source.extent_crop = None
+
self.config.dataset.train_scenes = [self.scene]
self.config.dataset.validation_scenes = [self.scene]
self.config.dataset.test_scenes = []
| {"golden_diff": "diff --git a/rastervision_core/rastervision/core/predictor.py b/rastervision_core/rastervision/core/predictor.py\n--- a/rastervision_core/rastervision/core/predictor.py\n+++ b/rastervision_core/rastervision/core/predictor.py\n@@ -69,6 +69,9 @@\n \n self.scene.label_source = None\n self.scene.aoi_uris = None\n+ self.scene.aoi_geometries = None\n+ self.scene.raster_source.extent_crop = None\n+\n self.config.dataset.train_scenes = [self.scene]\n self.config.dataset.validation_scenes = [self.scene]\n self.config.dataset.test_scenes = []\n", "issue": "Predictor does not reset the scene's aoi_geometries and the raster source's extent_crop\nCurrently, the `Predictor` re-uses a `SceneConfig` from the pipeline config in the bundle (instead of creating a new one) and resets its `label_source` and `aoi_uris`.\r\nhttps://github.com/azavea/raster-vision/blob/master/rastervision_core/rastervision/core/predictor.py#L70-L71\r\n\r\nHowever, it should also do this for `raster_source.extent_crop` (#1030) and `aoi_geometries` (#1033). In general, it should be done for every field that cannot be safely assumed to be the same for the input scene.\r\n\r\nInstead of having to add to this every time something new is added to the `SceneConfig` or any of its member classes, it might be better to create a new scene in the predictor with options from the command line.\n", "before_files": [{"content": "from os.path import join\nimport zipfile\nimport logging\n\nfrom rastervision.pipeline import rv_config\nfrom rastervision.pipeline.config import (build_config, upgrade_config)\nfrom rastervision.pipeline.file_system.utils import (download_if_needed,\n make_dir, file_to_json)\nfrom rastervision.core.data.raster_source import ChannelOrderError\nfrom rastervision.core.analyzer import StatsAnalyzerConfig\n\nlog = logging.getLogger(__name__)\n\n\nclass Predictor():\n \"\"\"Class for making predictions based off of a model bundle.\"\"\"\n\n def __init__(self,\n model_bundle_uri,\n tmp_dir,\n update_stats=False,\n channel_order=None):\n \"\"\"Creates a new Predictor.\n\n Args:\n model_bundle_uri: URI of the model bundle to use. Can be any\n type of URI that Raster Vision can read.\n tmp_dir: Temporary directory in which to store files that are used\n by the Predictor. This directory is not cleaned up by this\n class.\n channel_order: Option for a new channel order to use for the\n imagery being predicted against. 
If not present, the\n channel_order from the original configuration in the predict\n package will be used.\n \"\"\"\n self.tmp_dir = tmp_dir\n self.update_stats = update_stats\n self.model_loaded = False\n\n bundle_path = download_if_needed(model_bundle_uri, tmp_dir)\n bundle_dir = join(tmp_dir, 'bundle')\n make_dir(bundle_dir)\n with zipfile.ZipFile(bundle_path, 'r') as bundle_zip:\n bundle_zip.extractall(path=bundle_dir)\n\n config_path = join(bundle_dir, 'pipeline-config.json')\n config_dict = file_to_json(config_path)\n rv_config.set_everett_config(\n config_overrides=config_dict.get('rv_config'))\n config_dict = upgrade_config(config_dict)\n self.config = build_config(config_dict)\n self.scene = self.config.dataset.validation_scenes[0]\n\n if not hasattr(self.scene.raster_source, 'uris'):\n raise Exception(\n 'raster_source in model bundle must have uris as field')\n\n if not hasattr(self.scene.label_store, 'uri'):\n raise Exception(\n 'label_store in model bundle must have uri as field')\n\n for t in self.scene.raster_source.transformers:\n t.update_root(bundle_dir)\n\n if self.update_stats:\n stats_analyzer = StatsAnalyzerConfig(\n output_uri=join(bundle_dir, 'stats.json'))\n self.config.analyzers = [stats_analyzer]\n\n self.scene.label_source = None\n self.scene.aoi_uris = None\n self.config.dataset.train_scenes = [self.scene]\n self.config.dataset.validation_scenes = [self.scene]\n self.config.dataset.test_scenes = []\n self.config.train_uri = bundle_dir\n\n if channel_order is not None:\n self.scene.raster_source.channel_order = channel_order\n\n self.pipeline = None\n\n def predict(self, image_uris, label_uri, vector_label_uri=None):\n \"\"\"Generate predictions for the given image.\n\n Args:\n image_uris: URIs of the images to make predictions against.\n This can be any type of URI readable by Raster Vision\n FileSystems.\n label_uri: URI to save labels off into\n vector_label_uri: URI to save vectorized labels for semantic segmentation\n model bundles that support it\n \"\"\"\n if self.pipeline is None:\n self.scene.raster_source.uris = image_uris\n self.pipeline = self.config.build(self.tmp_dir)\n if not hasattr(self.pipeline, 'predict'):\n raise Exception(\n 'pipeline in model bundle must have predict method')\n\n try:\n self.scene.raster_source.uris = image_uris\n self.scene.label_store.uri = label_uri\n if (hasattr(self.scene.label_store, 'vector_output')\n and self.scene.label_store.vector_output):\n if vector_label_uri:\n for vo in self.scene.label_store.vector_output:\n vo.uri = join(\n vector_label_uri, '{}-{}.json'.format(\n vo.class_id, vo.get_mode()))\n else:\n self.scene.label_store.vector_output = []\n elif vector_label_uri:\n log.warn(\n 'vector_label_uri was supplied but this model bundle does not '\n 'generate vector labels.')\n\n if self.update_stats:\n self.pipeline.analyze()\n self.pipeline.predict()\n except ChannelOrderError:\n raise ValueError(\n 'The predict package is using a channel_order '\n 'with channels unavailable in the imagery.\\nTo set a new '\n 'channel_order that only uses channels available in the '\n 'imagery, use the --channel-order option.')\n", "path": "rastervision_core/rastervision/core/predictor.py"}]} | 2,027 | 160 |
gh_patches_debug_17813 | rasdani/github-patches | git_diff | translate__pootle-4679 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Snippet caching is not cleared between tests
Currently, if you run a test that saves data in the exports cache, the data is still there in the next test.
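One possible shape for the fix is an autouse fixture that flushes the cache between tests (this assumes the test site uses a single django-redis backed cache; the actual backend may differ):
```python
import pytest

@pytest.fixture(autouse=True)
def clear_cache():
    # Flush the redis-backed cache so no exports/snippet data leaks
    # from one test into the next.
    from django_redis import get_redis_connection

    get_redis_connection("default").flushdb()
```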
</issue>
<code>
[start of pytest_pootle/fixtures/site.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import tempfile
10
11 import pytest
12
13 from pytest_pootle.env import PootleTestEnv
14
15
16 @pytest.fixture(autouse=True, scope='session')
17 def setup_db_if_needed(request):
18 """Sets up the site DB only if tests requested to use the DB (autouse)."""
19 is_db_marker_set = [
20 item for item in request.node.items
21 if item.get_marker('django_db')
22 ]
23 if is_db_marker_set:
24 return request.getfuncargvalue('post_db_setup')
25
26 return None
27
28
29 @pytest.fixture(scope='session')
30 def post_db_setup(translations_directory, _django_db_setup,
31 _django_cursor_wrapper, request):
32 """Sets up the site DB for the test session."""
33 with _django_cursor_wrapper:
34 PootleTestEnv(request).setup()
35
36
37 @pytest.fixture
38 def no_projects():
39 from pootle_project.models import Project
40
41 Project.objects.all().delete()
42
43
44 @pytest.fixture
45 def no_permissions():
46 from django.contrib.auth.models import Permission
47
48 Permission.objects.all().delete()
49
50
51 @pytest.fixture
52 def no_permission_sets():
53 from pootle_app.models import PermissionSet
54
55 PermissionSet.objects.all().delete()
56
57
58 @pytest.fixture
59 def no_submissions():
60 from pootle_statistics.models import Submission
61
62 Submission.objects.all().delete()
63
64
65 @pytest.fixture
66 def no_users():
67 from django.contrib.auth import get_user_model
68
69 User = get_user_model()
70 User.objects.all().delete()
71
72
73 @pytest.fixture
74 def no_extra_users():
75 from django.contrib.auth import get_user_model
76
77 User = get_user_model()
78 User.objects.exclude(
79 username__in=["system", "default", "nobody"]).delete()
80
81
82 @pytest.fixture(autouse=True, scope="session")
83 def translations_directory(request):
84 """used by PootleEnv"""
85 from django.conf import settings
86 settings.POOTLE_TRANSLATION_DIRECTORY = tempfile.mkdtemp()
87
[end of pytest_pootle/fixtures/site.py]
[start of pytest_pootle/fixtures/revision.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import pytest
11
12
13 @pytest.fixture(autouse=True)
14 def revision():
15 """Sets up the revision counter for each test call."""
16 from pootle.core.models import Revision
17
18 Revision.initialize()
19
[end of pytest_pootle/fixtures/revision.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytest_pootle/fixtures/revision.py b/pytest_pootle/fixtures/revision.py
--- a/pytest_pootle/fixtures/revision.py
+++ b/pytest_pootle/fixtures/revision.py
@@ -11,8 +11,12 @@
@pytest.fixture(autouse=True)
-def revision():
- """Sets up the revision counter for each test call."""
+def revision(request, clear_cache):
+ """Sets up the cached revision counter for each test call."""
from pootle.core.models import Revision
+ from pootle_store.models import Unit
- Revision.initialize()
+ if request.node.get_marker("django_db"):
+ Revision.set(Unit.max_revision())
+ else:
+ Revision.initialize()
diff --git a/pytest_pootle/fixtures/site.py b/pytest_pootle/fixtures/site.py
--- a/pytest_pootle/fixtures/site.py
+++ b/pytest_pootle/fixtures/site.py
@@ -84,3 +84,13 @@
"""used by PootleEnv"""
from django.conf import settings
settings.POOTLE_TRANSLATION_DIRECTORY = tempfile.mkdtemp()
+
+
[email protected](autouse=True)
+def clear_cache(request):
+ """Currently tests only use one cache so this clears all"""
+
+ from django_redis import get_redis_connection
+
+ r_con = get_redis_connection('default')
+ r_con.flushdb()
| {"golden_diff": "diff --git a/pytest_pootle/fixtures/revision.py b/pytest_pootle/fixtures/revision.py\n--- a/pytest_pootle/fixtures/revision.py\n+++ b/pytest_pootle/fixtures/revision.py\n@@ -11,8 +11,12 @@\n \n \n @pytest.fixture(autouse=True)\n-def revision():\n- \"\"\"Sets up the revision counter for each test call.\"\"\"\n+def revision(request, clear_cache):\n+ \"\"\"Sets up the cached revision counter for each test call.\"\"\"\n from pootle.core.models import Revision\n+ from pootle_store.models import Unit\n \n- Revision.initialize()\n+ if request.node.get_marker(\"django_db\"):\n+ Revision.set(Unit.max_revision())\n+ else:\n+ Revision.initialize()\ndiff --git a/pytest_pootle/fixtures/site.py b/pytest_pootle/fixtures/site.py\n--- a/pytest_pootle/fixtures/site.py\n+++ b/pytest_pootle/fixtures/site.py\n@@ -84,3 +84,13 @@\n \"\"\"used by PootleEnv\"\"\"\n from django.conf import settings\n settings.POOTLE_TRANSLATION_DIRECTORY = tempfile.mkdtemp()\n+\n+\[email protected](autouse=True)\n+def clear_cache(request):\n+ \"\"\"Currently tests only use one cache so this clears all\"\"\"\n+\n+ from django_redis import get_redis_connection\n+\n+ r_con = get_redis_connection('default')\n+ r_con.flushdb()\n", "issue": "Snippet caching is not cleared between tests\nCurrently if you run a test that saves data in the exports cache, the data is still there in the next test\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport tempfile\n\nimport pytest\n\nfrom pytest_pootle.env import PootleTestEnv\n\n\[email protected](autouse=True, scope='session')\ndef setup_db_if_needed(request):\n \"\"\"Sets up the site DB only if tests requested to use the DB (autouse).\"\"\"\n is_db_marker_set = [\n item for item in request.node.items\n if item.get_marker('django_db')\n ]\n if is_db_marker_set:\n return request.getfuncargvalue('post_db_setup')\n\n return None\n\n\[email protected](scope='session')\ndef post_db_setup(translations_directory, _django_db_setup,\n _django_cursor_wrapper, request):\n \"\"\"Sets up the site DB for the test session.\"\"\"\n with _django_cursor_wrapper:\n PootleTestEnv(request).setup()\n\n\[email protected]\ndef no_projects():\n from pootle_project.models import Project\n\n Project.objects.all().delete()\n\n\[email protected]\ndef no_permissions():\n from django.contrib.auth.models import Permission\n\n Permission.objects.all().delete()\n\n\[email protected]\ndef no_permission_sets():\n from pootle_app.models import PermissionSet\n\n PermissionSet.objects.all().delete()\n\n\[email protected]\ndef no_submissions():\n from pootle_statistics.models import Submission\n\n Submission.objects.all().delete()\n\n\[email protected]\ndef no_users():\n from django.contrib.auth import get_user_model\n\n User = get_user_model()\n User.objects.all().delete()\n\n\[email protected]\ndef no_extra_users():\n from django.contrib.auth import get_user_model\n\n User = get_user_model()\n User.objects.exclude(\n username__in=[\"system\", \"default\", \"nobody\"]).delete()\n\n\[email protected](autouse=True, scope=\"session\")\ndef translations_directory(request):\n \"\"\"used by PootleEnv\"\"\"\n from django.conf import settings\n settings.POOTLE_TRANSLATION_DIRECTORY = tempfile.mkdtemp()\n", "path": "pytest_pootle/fixtures/site.py"}, 
{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport pytest\n\n\[email protected](autouse=True)\ndef revision():\n \"\"\"Sets up the revision counter for each test call.\"\"\"\n from pootle.core.models import Revision\n\n Revision.initialize()\n", "path": "pytest_pootle/fixtures/revision.py"}]} | 1,400 | 311 |
gh_patches_debug_18334 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1295 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Modify noxfile to build and test the package
Versions `2.0.0` and `2.0.1` were yanked from PyPI last week due to an issue where discovery documents were not included in the published package, causing `discovery.build()` to fail (#1214). A basic check could be added to verify that the package works correctly, using the steps in #1214. Ideally it should run on every PR and push to master so the issue can be caught before the package is published.
Use these steps from #1214 to reproduce the issue with versions `2.0.0` and `2.0.1`:
1. Start with a clean clone of `google-api-python-client`
2. Checkout version `2.0.0` or `2.0.1`, using `git checkout 2.0.0`
3. Run `python setup.py sdist`
4. Run `pip install dist/google-api-python-client-<version>.tar.gz`
5. Run
```
$ python3
Python 3.8.7 (default, Jan 27 2021, 18:44:05)
[GCC 10.2.1 20201224] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from googleapiclient import discovery
>>> client = discovery.build("cloudprofiler", "v2")
...
```
Before closing this issue, we should ensure that we have checks in place so that a PR will fail if `package_data` [here](https://github.com/googleapis/google-api-python-client/blob/master/setup.py#L78) is empty.
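A hypothetical smoke test along those lines (the directory layout is an assumption about where the static discovery documents end up in the installed package):
```python
import pathlib
import googleapiclient

def test_static_discovery_documents_are_packaged():
    # Fails if the built/installed package ships without its bundled
    # discovery documents, which is what broke 2.0.0 and 2.0.1.
    docs = pathlib.Path(googleapiclient.__file__).parent / "discovery_cache" / "documents"
    assert any(docs.glob("*.json")), "no static discovery documents were packaged"
```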
</issue>
<code>
[start of noxfile.py]
1
2 # Copyright 2020 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17
18 import nox
19
20 test_dependencies = [
21 "django>=2.0.0",
22 "google-auth",
23 "google-auth-httplib2",
24 "mox",
25 "parameterized",
26 "pyopenssl",
27 "pytest",
28 "pytest-cov",
29 "webtest",
30 "coverage",
31 "unittest2",
32 "mock",
33 ]
34
35
36 @nox.session(python=["3.7"])
37 def lint(session):
38 session.install("flake8")
39 session.run(
40 "flake8",
41 "googleapiclient",
42 "tests",
43 "--count",
44 "--select=E9,F63,F7,F82",
45 "--show-source",
46 "--statistics",
47 )
48
49
50 @nox.session(python=["3.6", "3.7", "3.8", "3.9"])
51 @nox.parametrize(
52 "oauth2client",
53 [
54 "oauth2client<2dev",
55 "oauth2client>=2,<=3dev",
56 "oauth2client>=3,<=4dev",
57 "oauth2client>=4,<=5dev",
58 ],
59 )
60 def unit(session, oauth2client):
61 session.install(*test_dependencies)
62 session.install(oauth2client)
63 session.install('.')
64
65 # Run py.test against the unit tests.
66 session.run(
67 "py.test",
68 "--quiet",
69 "--cov=googleapiclient",
70 "--cov=tests",
71 "--cov-append",
72 "--cov-config=.coveragerc",
73 "--cov-report=",
74 "--cov-fail-under=85",
75 "tests",
76 *session.posargs,
77 )
78
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -16,6 +16,8 @@
import sys
import nox
+import os
+import shutil
test_dependencies = [
"django>=2.0.0",
@@ -58,9 +60,22 @@
],
)
def unit(session, oauth2client):
+ # Clean up dist and build folders
+ shutil.rmtree('dist', ignore_errors=True)
+ shutil.rmtree('build', ignore_errors=True)
+
session.install(*test_dependencies)
session.install(oauth2client)
- session.install('.')
+
+ # Create and install wheels
+ session.run('python3', 'setup.py', 'bdist_wheel')
+ session.install(os.path.join('dist', os.listdir('dist').pop()))
+
+ # Run tests from a different directory to test the package artifacts
+ root_dir = os.path.dirname(os.path.realpath(__file__))
+ temp_dir = session.create_tmp()
+ session.chdir(temp_dir)
+ shutil.copytree(os.path.join(root_dir, 'tests'), 'tests')
# Run py.test against the unit tests.
session.run(
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -16,6 +16,8 @@\n import sys\n \n import nox\n+import os\n+import shutil\n \n test_dependencies = [\n \"django>=2.0.0\",\n@@ -58,9 +60,22 @@\n ],\n )\n def unit(session, oauth2client):\n+ # Clean up dist and build folders\n+ shutil.rmtree('dist', ignore_errors=True)\n+ shutil.rmtree('build', ignore_errors=True)\n+\n session.install(*test_dependencies)\n session.install(oauth2client)\n- session.install('.')\n+\n+ # Create and install wheels\n+ session.run('python3', 'setup.py', 'bdist_wheel')\n+ session.install(os.path.join('dist', os.listdir('dist').pop()))\n+\n+ # Run tests from a different directory to test the package artifacts\n+ root_dir = os.path.dirname(os.path.realpath(__file__))\n+ temp_dir = session.create_tmp()\n+ session.chdir(temp_dir)\n+ shutil.copytree(os.path.join(root_dir, 'tests'), 'tests')\n \n # Run py.test against the unit tests.\n session.run(\n", "issue": "Modify noxfile to build and test the package\nVersions `2.0.0` and `2.0.1` were yanked from PyPI last week due to an issue where discovery documents were not included in the published package causing `discovery.build()` to fail(#1214). A basic check could be added to verify the package works correctly using the steps in #1214. Ideally it should be done on every PR and push to master so the issue can be caught before the package is published. \r\n\r\nUse these steps from #1214 to re-produce the issue with version `2.0.0` and `2.0.1`:\r\n1. Start with a clean clone of `google-api-python-client`\r\n2. Checkout version `2.0.0` or `2.0.1`, using `git checkout 2.0.0`\r\n3. Run `python setup.py sdist`\r\n4. Run `pip install dist/google-api-python-client-<version>.tar.gz`\r\n5. Run \r\n```\r\n$ python3\r\nPython 3.8.7 (default, Jan 27 2021, 18:44:05) \r\n[GCC 10.2.1 20201224] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from googleapiclient import discovery\r\n>>> client = discovery.build(\"cloudprofiler\", \"v2\")\r\n...\r\n```\r\n\r\nBefore closing this issue, we should ensure that we have checks in place so that a PR will fail if `package_data` [here](https://github.com/googleapis/google-api-python-client/blob/master/setup.py#L78) is empty.\r\n\n", "before_files": [{"content": "\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\nimport nox\n\ntest_dependencies = [\n \"django>=2.0.0\",\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n \"parameterized\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"webtest\",\n \"coverage\",\n \"unittest2\",\n \"mock\",\n]\n\n\[email protected](python=[\"3.7\"])\ndef lint(session):\n session.install(\"flake8\")\n session.run(\n \"flake8\",\n \"googleapiclient\",\n \"tests\",\n \"--count\",\n \"--select=E9,F63,F7,F82\",\n \"--show-source\",\n \"--statistics\",\n )\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\"])\[email protected](\n \"oauth2client\",\n [\n 
\"oauth2client<2dev\",\n \"oauth2client>=2,<=3dev\",\n \"oauth2client>=3,<=4dev\",\n \"oauth2client>=4,<=5dev\",\n ],\n)\ndef unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n session.install('.')\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=googleapiclient\",\n \"--cov=tests\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=85\",\n \"tests\",\n *session.posargs,\n )\n", "path": "noxfile.py"}]} | 1,536 | 268 |
gh_patches_debug_24814 | rasdani/github-patches | git_diff | coala__coala-bears-1276 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The bear HaskellLintBear raised an exception
I've used HaskellLintBear to lint https://github.com/wisn/elm-reactor/
Here is the log
https://travis-ci.org/wisn/elm-reactor/builds/180417562
The build result is green, but the bear HaskellLintBear raised an exception.
It seems HaskellLintBear has a problem:
```
[WARNING][14:56:00] Bear HaskellLintBear failed to run. Take a look at debug messages (`-V`) for further information.
```
I've collected the traceback information:
```
Traceback (most recent call last):
File "/coala-bears/bears/haskell/HaskellLintBear.py", line 41, in process_output
assert issue['startLine'] == issue['endLine']
AssertionError
File "/coala-bears/bears/haskell/HaskellLintBear.py", line 45, in process_output
newline = line_to_change.replace(issue['from'], issue['to'])
TypeError: Can't convert 'NoneType' object to str implicitly
```
I think `TypeError: Can't convert 'NoneType' object to str implicitly` is the main problem, followed by the `AssertionError`.
Unfortunately, I can't trace it manually with `hlint` because my PC freezes when compiling (installing) it. I hope this information is helpful. Thanks, and sorry for my bad English...
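A sketch of the kind of guard that would avoid the crash (the idea that hlint can report a missing replacement is inferred from the `TypeError`, not verified against hlint's JSON output):
```python
def build_replacement(line_to_change, issue):
    # If hlint gives no concrete replacement there is nothing to patch
    # automatically, so skip the diff instead of crashing on
    # str.replace(None, ...).
    if issue.get("from") is None or issue.get("to") is None:
        return None
    return line_to_change.replace(issue["from"], issue["to"])
```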
</issue>
<code>
[start of bears/haskell/HaskellLintBear.py]
1 import json
2
3 from coalib.bearlib.abstractions.Linter import linter
4 from dependency_management.requirements.DistributionRequirement import (
5 DistributionRequirement)
6 from coalib.results.Diff import Diff
7 from coalib.results.Result import Result
8 from coalib.results.RESULT_SEVERITY import RESULT_SEVERITY
9
10
11 @linter(executable='hlint')
12 class HaskellLintBear:
13 """
14 Check Haskell code for possible problems. This bear can propose patches for
15 using alternative functions, simplifying code and removing redundancies.
16
17 See <http://community.haskell.org/~ndm/darcs/hlint/hlint.htm> for more
18 information.
19 """
20
21 LANGUAGES = {'Haskell'}
22 REQUIREMENTS = {DistributionRequirement(apt_get='hlint')}
23 AUTHORS = {'The coala developers'}
24 AUTHORS_EMAILS = {'[email protected]'}
25 LICENSE = 'AGPL-3.0'
26 CAN_DETECT = {'Duplication'}
27 CAN_FIX = {'Unused Code', 'Code Simplification'}
28
29 severity_map = {'Error': RESULT_SEVERITY.MAJOR,
30 'Warning': RESULT_SEVERITY.NORMAL,
31 'Suggestion': RESULT_SEVERITY.INFO}
32
33 @staticmethod
34 def create_arguments(filename, file, config_file):
35 return '--json', filename
36
37 def process_output(self, output, filename, file):
38 output = json.loads(output)
39
40 for issue in output:
41 assert issue['startLine'] == issue['endLine']
42 diff = Diff(file)
43 line_nr = issue['startLine']
44 line_to_change = file[line_nr-1]
45 newline = line_to_change.replace(issue['from'], issue['to'])
46 diff.change_line(line_nr, line_to_change, newline)
47
48 yield Result.from_values(
49 origin=self,
50 message=issue['hint'],
51 file=filename,
52 severity=self.severity_map[issue['severity']],
53 line=issue['startLine'],
54 diffs={filename: diff})
55
[end of bears/haskell/HaskellLintBear.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bears/haskell/HaskellLintBear.py b/bears/haskell/HaskellLintBear.py
--- a/bears/haskell/HaskellLintBear.py
+++ b/bears/haskell/HaskellLintBear.py
@@ -38,11 +38,15 @@
output = json.loads(output)
for issue in output:
- assert issue['startLine'] == issue['endLine']
diff = Diff(file)
+ from_lines = issue['from'].splitlines()
+ to_lines = issue['to'].splitlines()
+ assert len(from_lines) == len(to_lines)
+ for other_lines in range(1, len(from_lines)):
+ assert from_lines[other_lines] == to_lines[other_lines]
line_nr = issue['startLine']
line_to_change = file[line_nr-1]
- newline = line_to_change.replace(issue['from'], issue['to'])
+ newline = line_to_change.replace(from_lines[0], to_lines[0])
diff.change_line(line_nr, line_to_change, newline)
yield Result.from_values(
@@ -51,4 +55,7 @@
file=filename,
severity=self.severity_map[issue['severity']],
line=issue['startLine'],
+ column=issue['startColumn'],
+ end_line=issue['endLine'],
+ end_column=issue['endColumn'],
diffs={filename: diff})
| {"golden_diff": "diff --git a/bears/haskell/HaskellLintBear.py b/bears/haskell/HaskellLintBear.py\n--- a/bears/haskell/HaskellLintBear.py\n+++ b/bears/haskell/HaskellLintBear.py\n@@ -38,11 +38,15 @@\n output = json.loads(output)\n \n for issue in output:\n- assert issue['startLine'] == issue['endLine']\n diff = Diff(file)\n+ from_lines = issue['from'].splitlines()\n+ to_lines = issue['to'].splitlines()\n+ assert len(from_lines) == len(to_lines)\n+ for other_lines in range(1, len(from_lines)):\n+ assert from_lines[other_lines] == to_lines[other_lines]\n line_nr = issue['startLine']\n line_to_change = file[line_nr-1]\n- newline = line_to_change.replace(issue['from'], issue['to'])\n+ newline = line_to_change.replace(from_lines[0], to_lines[0])\n diff.change_line(line_nr, line_to_change, newline)\n \n yield Result.from_values(\n@@ -51,4 +55,7 @@\n file=filename,\n severity=self.severity_map[issue['severity']],\n line=issue['startLine'],\n+ column=issue['startColumn'],\n+ end_line=issue['endLine'],\n+ end_column=issue['endColumn'],\n diffs={filename: diff})\n", "issue": "The bear HaskellLintBear raised an exception\nI've used HaskellLintBear to linting https://github.com/wisn/elm-reactor/\r\n\r\nHere is the log\r\nhttps://travis-ci.org/wisn/elm-reactor/builds/180417562\r\n\r\nThe build result is green, but the bear HaskellLintBear raised an exception.\r\n\r\nIt seems HaskellLintBear have a problem\r\n```\r\n[WARNING][14:56:00] Bear HaskellLintBear failed to run. Take a look at debug messages (`-V`) for further information.\r\n```\r\n\r\nI've collected the traceback information:\r\n```\r\nTraceback (most recent call last):\r\n File \"/coala-bears/bears/haskell/HaskellLintBear.py\", line 41, in process_output\r\n assert issue['startLine'] == issue['endLine']\r\n AssertionError\r\n\r\n File \"/coala-bears/bears/haskell/HaskellLintBear.py\", line 45, in process_output\r\n newline = line_to_change.replace(issue['from'], issue['to'])\r\n TypeError: Can't convert 'NoneType' object to str implicitly\r\n```\r\n\r\nI think `TypeError: Can't convert 'NoneType' object to str implicitly` is the main problem.\r\nThen, followed by `AssertionError`.\r\n\r\nUnfortunately, I can't trace manually with `hlint` because my PC freezes when compiling (in installing) it. Hope this information will be helpful. Thanks and sorry for my bad English...\n", "before_files": [{"content": "import json\n\nfrom coalib.bearlib.abstractions.Linter import linter\nfrom dependency_management.requirements.DistributionRequirement import (\n DistributionRequirement)\nfrom coalib.results.Diff import Diff\nfrom coalib.results.Result import Result\nfrom coalib.results.RESULT_SEVERITY import RESULT_SEVERITY\n\n\n@linter(executable='hlint')\nclass HaskellLintBear:\n \"\"\"\n Check Haskell code for possible problems. 
This bear can propose patches for\n using alternative functions, simplifying code and removing redundancies.\n\n See <http://community.haskell.org/~ndm/darcs/hlint/hlint.htm> for more\n information.\n \"\"\"\n\n LANGUAGES = {'Haskell'}\n REQUIREMENTS = {DistributionRequirement(apt_get='hlint')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Duplication'}\n CAN_FIX = {'Unused Code', 'Code Simplification'}\n\n severity_map = {'Error': RESULT_SEVERITY.MAJOR,\n 'Warning': RESULT_SEVERITY.NORMAL,\n 'Suggestion': RESULT_SEVERITY.INFO}\n\n @staticmethod\n def create_arguments(filename, file, config_file):\n return '--json', filename\n\n def process_output(self, output, filename, file):\n output = json.loads(output)\n\n for issue in output:\n assert issue['startLine'] == issue['endLine']\n diff = Diff(file)\n line_nr = issue['startLine']\n line_to_change = file[line_nr-1]\n newline = line_to_change.replace(issue['from'], issue['to'])\n diff.change_line(line_nr, line_to_change, newline)\n\n yield Result.from_values(\n origin=self,\n message=issue['hint'],\n file=filename,\n severity=self.severity_map[issue['severity']],\n line=issue['startLine'],\n diffs={filename: diff})\n", "path": "bears/haskell/HaskellLintBear.py"}]} | 1,374 | 309 |
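A note on the fix above: hlint's JSON output can contain suggestions whose `from`/`to` fields span several lines, and, as the `TypeError` in the issue shows, `to` can even be null for removal-only hints, so the original single-line `assert` and `str.replace` both fail. The golden diff rewrites only the first line of such a suggestion and asserts that the remaining lines are unchanged. Below is a minimal, self-contained Python sketch of that first-line replacement; `apply_first_line_hint` and the sample strings are illustrative and are not part of coala or hlint.

```python
# Sketch of the patched behaviour, assuming both hint fields are strings.
def apply_first_line_hint(file_lines, start_line, hint_from, hint_to):
    """Apply an hlint suggestion whose 'from'/'to' differ only on their first line."""
    from_lines = hint_from.splitlines()
    to_lines = hint_to.splitlines()
    assert len(from_lines) == len(to_lines)   # same number of lines on both sides
    assert from_lines[1:] == to_lines[1:]     # only the first line may change
    fixed = list(file_lines)
    fixed[start_line - 1] = fixed[start_line - 1].replace(from_lines[0], to_lines[0])
    return fixed


if __name__ == "__main__":
    source = ["main = putStrLn (show x)\n"]
    print(apply_first_line_hint(source, 1, "putStrLn (show x)", "print x"))
    # -> ['main = print x\n']
```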
gh_patches_debug_1616 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-3193 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Switching editions changes "shelved" date
**Describe the bug**
When switching editions of a book already on your "To Read" list, the "shelved" date is changed to today's date.
**To Reproduce**
Steps to reproduce the behavior:
1. Pick any book on your "To read" list with more than one edition
2. Pick another edition and switch to it
3. Observe that the book's shelved date is now today
**Expected behavior**
This shouldn't change the shelved date
**Instance**
https://books.theunseen.city
---
**Desktop (please complete the following information):**
- OS: MacOS 14.1
- Browser: Firefox
- Version: 20.0 (64-bit)
</issue>
<code>
[start of bookwyrm/views/books/editions.py]
1 """ the good stuff! the books! """
2 from functools import reduce
3 import operator
4
5 from django.contrib.auth.decorators import login_required
6 from django.core.paginator import Paginator
7 from django.db import transaction
8 from django.db.models import Q
9 from django.shortcuts import get_object_or_404, redirect
10 from django.template.response import TemplateResponse
11 from django.views import View
12 from django.views.decorators.http import require_POST
13
14 from bookwyrm import forms, models
15 from bookwyrm.activitypub import ActivitypubResponse
16 from bookwyrm.settings import PAGE_LENGTH
17 from bookwyrm.views.helpers import is_api_request
18
19
20 # pylint: disable=no-self-use
21 class Editions(View):
22 """list of editions"""
23
24 def get(self, request, book_id):
25 """list of editions of a book"""
26 work = get_object_or_404(models.Work, id=book_id)
27
28 if is_api_request(request):
29 return ActivitypubResponse(work.to_edition_list(**request.GET))
30 filters = {}
31
32 if request.GET.get("language"):
33 filters["languages__contains"] = [request.GET.get("language")]
34 if request.GET.get("format"):
35 filters["physical_format__iexact"] = request.GET.get("format")
36
37 editions = work.editions.order_by("-edition_rank")
38 languages = set(sum(editions.values_list("languages", flat=True), []))
39
40 editions = editions.filter(**filters)
41
42 query = request.GET.get("q")
43 if query:
44 searchable_array_fields = ["languages", "publishers"]
45 searchable_fields = [
46 "title",
47 "physical_format",
48 "isbn_10",
49 "isbn_13",
50 "oclc_number",
51 "asin",
52 "aasin",
53 "isfdb",
54 ]
55 search_filter_entries = [
56 {f"{f}__icontains": query} for f in searchable_fields
57 ] + [{f"{f}__iexact": query} for f in searchable_array_fields]
58 editions = editions.filter(
59 reduce(operator.or_, (Q(**f) for f in search_filter_entries))
60 )
61
62 paginated = Paginator(editions, PAGE_LENGTH)
63 page = paginated.get_page(request.GET.get("page"))
64 data = {
65 "editions": page,
66 "page_range": paginated.get_elided_page_range(
67 page.number, on_each_side=2, on_ends=1
68 ),
69 "work": work,
70 "work_form": forms.EditionFromWorkForm(instance=work),
71 "languages": languages,
72 "formats": set(
73 e.physical_format.lower() for e in editions if e.physical_format
74 ),
75 }
76 return TemplateResponse(request, "book/editions/editions.html", data)
77
78
79 @login_required
80 @require_POST
81 @transaction.atomic
82 def switch_edition(request):
83 """switch your copy of a book to a different edition"""
84 edition_id = request.POST.get("edition")
85 new_edition = get_object_or_404(models.Edition, id=edition_id)
86 shelfbooks = models.ShelfBook.objects.filter(
87 book__parent_work=new_edition.parent_work, shelf__user=request.user
88 )
89 for shelfbook in shelfbooks.all():
90 with transaction.atomic():
91 models.ShelfBook.objects.create(
92 created_date=shelfbook.created_date,
93 user=shelfbook.user,
94 shelf=shelfbook.shelf,
95 book=new_edition,
96 )
97 shelfbook.delete()
98
99 readthroughs = models.ReadThrough.objects.filter(
100 book__parent_work=new_edition.parent_work, user=request.user
101 )
102 for readthrough in readthroughs.all():
103 readthrough.book = new_edition
104 readthrough.save()
105
106 return redirect(f"/book/{new_edition.id}")
107
[end of bookwyrm/views/books/editions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/books/editions.py b/bookwyrm/views/books/editions.py
--- a/bookwyrm/views/books/editions.py
+++ b/bookwyrm/views/books/editions.py
@@ -93,6 +93,7 @@
user=shelfbook.user,
shelf=shelfbook.shelf,
book=new_edition,
+ shelved_date=shelfbook.shelved_date,
)
shelfbook.delete()
| {"golden_diff": "diff --git a/bookwyrm/views/books/editions.py b/bookwyrm/views/books/editions.py\n--- a/bookwyrm/views/books/editions.py\n+++ b/bookwyrm/views/books/editions.py\n@@ -93,6 +93,7 @@\n user=shelfbook.user,\n shelf=shelfbook.shelf,\n book=new_edition,\n+ shelved_date=shelfbook.shelved_date,\n )\n shelfbook.delete()\n", "issue": "Switching editions changes \"shelved\" date\n**Describe the bug**\r\nWhen switching editions of a book already on your \"To Read\" list, the \"shelved\" date is changed to today's date.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Pick any book on your \"To read\" list with more than one edition\r\n2. Pick another edition and switch to this\r\n3. Observe that the book's shelved date is now today\r\n\r\n**Expected behavior**\r\nThis shouldn't changed the shelved date\r\n\r\n**Instance**\r\nhttps://books.theunseen.city\r\n\r\n---\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS 14.1\r\n - Browser: Firefox\r\n - Version: 20.0 (64-bit)\r\n\n", "before_files": [{"content": "\"\"\" the good stuff! the books! \"\"\"\nfrom functools import reduce\nimport operator\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db import transaction\nfrom django.db.models import Q\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.views.helpers import is_api_request\n\n\n# pylint: disable=no-self-use\nclass Editions(View):\n \"\"\"list of editions\"\"\"\n\n def get(self, request, book_id):\n \"\"\"list of editions of a book\"\"\"\n work = get_object_or_404(models.Work, id=book_id)\n\n if is_api_request(request):\n return ActivitypubResponse(work.to_edition_list(**request.GET))\n filters = {}\n\n if request.GET.get(\"language\"):\n filters[\"languages__contains\"] = [request.GET.get(\"language\")]\n if request.GET.get(\"format\"):\n filters[\"physical_format__iexact\"] = request.GET.get(\"format\")\n\n editions = work.editions.order_by(\"-edition_rank\")\n languages = set(sum(editions.values_list(\"languages\", flat=True), []))\n\n editions = editions.filter(**filters)\n\n query = request.GET.get(\"q\")\n if query:\n searchable_array_fields = [\"languages\", \"publishers\"]\n searchable_fields = [\n \"title\",\n \"physical_format\",\n \"isbn_10\",\n \"isbn_13\",\n \"oclc_number\",\n \"asin\",\n \"aasin\",\n \"isfdb\",\n ]\n search_filter_entries = [\n {f\"{f}__icontains\": query} for f in searchable_fields\n ] + [{f\"{f}__iexact\": query} for f in searchable_array_fields]\n editions = editions.filter(\n reduce(operator.or_, (Q(**f) for f in search_filter_entries))\n )\n\n paginated = Paginator(editions, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"editions\": page,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n \"work\": work,\n \"work_form\": forms.EditionFromWorkForm(instance=work),\n \"languages\": languages,\n \"formats\": set(\n e.physical_format.lower() for e in editions if e.physical_format\n ),\n }\n return TemplateResponse(request, \"book/editions/editions.html\", data)\n\n\n@login_required\n@require_POST\[email protected]\ndef switch_edition(request):\n \"\"\"switch 
your copy of a book to a different edition\"\"\"\n edition_id = request.POST.get(\"edition\")\n new_edition = get_object_or_404(models.Edition, id=edition_id)\n shelfbooks = models.ShelfBook.objects.filter(\n book__parent_work=new_edition.parent_work, shelf__user=request.user\n )\n for shelfbook in shelfbooks.all():\n with transaction.atomic():\n models.ShelfBook.objects.create(\n created_date=shelfbook.created_date,\n user=shelfbook.user,\n shelf=shelfbook.shelf,\n book=new_edition,\n )\n shelfbook.delete()\n\n readthroughs = models.ReadThrough.objects.filter(\n book__parent_work=new_edition.parent_work, user=request.user\n )\n for readthrough in readthroughs.all():\n readthrough.book = new_edition\n readthrough.save()\n\n return redirect(f\"/book/{new_edition.id}\")\n", "path": "bookwyrm/views/books/editions.py"}]} | 1,724 | 102 |
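A note on the fix above: the new `ShelfBook` row is built field by field, so any field that is not copied explicitly (here `shelved_date`) silently falls back to its default, which is the current date. The plain-Python sketch below shows the general pattern of copying everything and overriding only what changes, which avoids this class of bug; `ShelfEntry` and `switch_edition` are illustrative stand-ins, not BookWyrm models.

```python
# Sketch only: dataclasses.replace() copies every field except the ones overridden.
from dataclasses import dataclass, replace
from datetime import date


@dataclass(frozen=True)
class ShelfEntry:
    book_id: int
    shelf: str
    shelved_date: date


def switch_edition(entry: ShelfEntry, new_book_id: int) -> ShelfEntry:
    return replace(entry, book_id=new_book_id)   # shelved_date is carried over


if __name__ == "__main__":
    old = ShelfEntry(book_id=1, shelf="to-read", shelved_date=date(2023, 5, 1))
    print(switch_edition(old, new_book_id=2))
    # ShelfEntry(book_id=2, shelf='to-read', shelved_date=datetime.date(2023, 5, 1))
```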
gh_patches_debug_20183 | rasdani/github-patches | git_diff | saleor__saleor-2826 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django 2.1 compatibility
We should switch our supported Django versions to the following list:
* Django 1.11 (current LTS)
* Django 2.1 (latest stable)
Current blockers:
* [x] `graphene-django` depends on an old version of `django-filters` (https://github.com/graphql-python/graphene-django/pull/492)
* [x] WeightInput passes floats to its base class which is a DecimalField
* [x] Some form widgets pass `renderer` to functions that don't expect it
</issue>
<code>
[start of saleor/core/weight.py]
1 """In Saleor we are using 'weight' instead of a 'mass'.
2
3 For those of us who are earth-bound, weight is what we usually experience.
4 Mass is a theoretical construct.
5 Unless we are dealing with inertia and momentum, we are encountering
6 the attractive force between ourselves and the earth,
7 the isolated effects of mass alone being a little more esoteric.
8
9 So even though mass is more fundamental, most people think
10 in terms of weight.
11
12 In the end, it does not really matter unless you travel between
13 different planets.
14 """
15 from decimal import Decimal
16 from enum import Enum
17
18 from django import forms
19 from django.contrib.sites.models import Site
20 from django.core.validators import MinValueValidator
21 from django.template.loader import render_to_string
22 from django.utils.translation import pgettext_lazy
23 from measurement.measures import Weight
24
25
26 class WeightUnits:
27 KILOGRAM = 'kg'
28 POUND = 'lb'
29 OUNCE = 'oz'
30 GRAM = 'g'
31
32 CHOICES = [
33 (KILOGRAM, pgettext_lazy('Kilogram weight unit symbol', 'kg')),
34 (POUND, pgettext_lazy('Pound weight unit symbol', 'lb')),
35 (OUNCE, pgettext_lazy('Ounce weight unit symbol', 'oz')),
36 (GRAM, pgettext_lazy('Gram weight unit symbol', 'g'))]
37
38
39 WeightUnitsEnum = Enum(
40 'WeightUnitsEnum',
41 {unit: unit for unit in WeightUnits.CHOICES})
42
43
44 def zero_weight():
45 """Function used as a model's default."""
46 return Weight(kg=0)
47
48
49 def convert_weight(weight, unit):
50 # Weight amount from the Weight instance can be retrived in serveral units
51 # via its properties. eg. Weight(lb=10).kg
52 converted_weight = getattr(weight, unit)
53 return Weight(**{unit: converted_weight})
54
55
56 def get_default_weight_unit():
57 site = Site.objects.get_current()
58 return site.settings.default_weight_unit
59
60
61 class WeightInput(forms.TextInput):
62 template = 'dashboard/shipping/weight_widget.html'
63 input_type = 'number'
64
65 def format_value(self, value):
66 if isinstance(value, Weight):
67 unit = get_default_weight_unit()
68 if value.unit != unit:
69 value = convert_weight(value, unit)
70 return value.value
71 return value
72
73 def render(self, name, value, attrs=None):
74 widget = super().render(name, value, attrs=attrs)
75 unit = get_default_weight_unit()
76 translated_unit = dict(WeightUnits.CHOICES)[unit]
77 return render_to_string(
78 self.template,
79 {'widget': widget, 'value': value, 'unit': translated_unit})
80
81
82 class WeightField(forms.DecimalField):
83 def __init__(self, *args, widget=WeightInput, min_value=0, **kwargs):
84 if isinstance(widget, type):
85 widget = widget(attrs={'type': 'number', 'step': 'any'})
86 super().__init__(*args, widget=widget, **kwargs)
87 if min_value is not None:
88 self.validators.append(MinValueValidator(min_value))
89
90 def to_python(self, value):
91 value = super().to_python(value)
92 if value is None:
93 return value
94 unit = get_default_weight_unit()
95 return Weight(**{unit: value})
96
97 def validate(self, weight):
98 if weight is None or weight in self.empty_values:
99 super().validate(weight)
100 else:
101 unit = get_default_weight_unit()
102 if not isinstance(weight, Weight):
103 raise Exception(
104 '%r is not a valid weight.' % (weight,))
105 if weight.unit != unit:
106 raise forms.ValidationError(
107 'Invalid unit: %r (expected %r).' % (
108 weight.unit, unit))
109 super().validate(weight.value)
110
111 def clean(self, value):
112 value = value_to_be_validated = self.to_python(value)
113 self.validate(value_to_be_validated)
114 if isinstance(value, Weight):
115 value_to_be_validated = Decimal(value.value)
116 # default decimal validators can be used for Weight's value only
117 self.run_validators(value_to_be_validated)
118 return value
119
[end of saleor/core/weight.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/core/weight.py b/saleor/core/weight.py
--- a/saleor/core/weight.py
+++ b/saleor/core/weight.py
@@ -70,8 +70,8 @@
return value.value
return value
- def render(self, name, value, attrs=None):
- widget = super().render(name, value, attrs=attrs)
+ def render(self, name, value, attrs=None, renderer=None):
+ widget = super().render(name, value, attrs=attrs, renderer=renderer)
unit = get_default_weight_unit()
translated_unit = dict(WeightUnits.CHOICES)[unit]
return render_to_string(
@@ -79,7 +79,7 @@
{'widget': widget, 'value': value, 'unit': translated_unit})
-class WeightField(forms.DecimalField):
+class WeightField(forms.FloatField):
def __init__(self, *args, widget=WeightInput, min_value=0, **kwargs):
if isinstance(widget, type):
widget = widget(attrs={'type': 'number', 'step': 'any'})
| {"golden_diff": "diff --git a/saleor/core/weight.py b/saleor/core/weight.py\n--- a/saleor/core/weight.py\n+++ b/saleor/core/weight.py\n@@ -70,8 +70,8 @@\n return value.value\n return value\n \n- def render(self, name, value, attrs=None):\n- widget = super().render(name, value, attrs=attrs)\n+ def render(self, name, value, attrs=None, renderer=None):\n+ widget = super().render(name, value, attrs=attrs, renderer=renderer)\n unit = get_default_weight_unit()\n translated_unit = dict(WeightUnits.CHOICES)[unit]\n return render_to_string(\n@@ -79,7 +79,7 @@\n {'widget': widget, 'value': value, 'unit': translated_unit})\n \n \n-class WeightField(forms.DecimalField):\n+class WeightField(forms.FloatField):\n def __init__(self, *args, widget=WeightInput, min_value=0, **kwargs):\n if isinstance(widget, type):\n widget = widget(attrs={'type': 'number', 'step': 'any'})\n", "issue": "Django 2.1 compatibility\nWe should switch our supported Django version to the following list:\r\n* Django 1.11 (current LTS)\r\n* Django 2.1 (latest stable)\r\n\r\nCurrent blockers:\r\n* [x] `graphene-django` depends on an old version of `django-filters` (https://github.com/graphql-python/graphene-django/pull/492)\r\n* [x] WeightInput passes floats to its base class which is a DecimalField\r\n* [x] Some form widgets pass `renderer` to functions that don't expect it\n", "before_files": [{"content": "\"\"\"In Saleor we are using 'weight' instead of a 'mass'.\n\nFor those of us who are earth-bound, weight is what we usually experience.\nMass is a theoretical construct.\nUnless we are dealing with inertia and momentum, we are encountering\nthe attractive force between ourselves and the earth,\nthe isolated effects of mass alone being a little more esoteric.\n\nSo even though mass is more fundamental, most people think\nin terms of weight.\n\nIn the end, it does not really matter unless you travel between\ndifferent planets.\n\"\"\"\nfrom decimal import Decimal\nfrom enum import Enum\n\nfrom django import forms\nfrom django.contrib.sites.models import Site\nfrom django.core.validators import MinValueValidator\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import pgettext_lazy\nfrom measurement.measures import Weight\n\n\nclass WeightUnits:\n KILOGRAM = 'kg'\n POUND = 'lb'\n OUNCE = 'oz'\n GRAM = 'g'\n\n CHOICES = [\n (KILOGRAM, pgettext_lazy('Kilogram weight unit symbol', 'kg')),\n (POUND, pgettext_lazy('Pound weight unit symbol', 'lb')),\n (OUNCE, pgettext_lazy('Ounce weight unit symbol', 'oz')),\n (GRAM, pgettext_lazy('Gram weight unit symbol', 'g'))]\n\n\nWeightUnitsEnum = Enum(\n 'WeightUnitsEnum',\n {unit: unit for unit in WeightUnits.CHOICES})\n\n\ndef zero_weight():\n \"\"\"Function used as a model's default.\"\"\"\n return Weight(kg=0)\n\n\ndef convert_weight(weight, unit):\n # Weight amount from the Weight instance can be retrived in serveral units\n # via its properties. eg. 
Weight(lb=10).kg\n converted_weight = getattr(weight, unit)\n return Weight(**{unit: converted_weight})\n\n\ndef get_default_weight_unit():\n site = Site.objects.get_current()\n return site.settings.default_weight_unit\n\n\nclass WeightInput(forms.TextInput):\n template = 'dashboard/shipping/weight_widget.html'\n input_type = 'number'\n\n def format_value(self, value):\n if isinstance(value, Weight):\n unit = get_default_weight_unit()\n if value.unit != unit:\n value = convert_weight(value, unit)\n return value.value\n return value\n\n def render(self, name, value, attrs=None):\n widget = super().render(name, value, attrs=attrs)\n unit = get_default_weight_unit()\n translated_unit = dict(WeightUnits.CHOICES)[unit]\n return render_to_string(\n self.template,\n {'widget': widget, 'value': value, 'unit': translated_unit})\n\n\nclass WeightField(forms.DecimalField):\n def __init__(self, *args, widget=WeightInput, min_value=0, **kwargs):\n if isinstance(widget, type):\n widget = widget(attrs={'type': 'number', 'step': 'any'})\n super().__init__(*args, widget=widget, **kwargs)\n if min_value is not None:\n self.validators.append(MinValueValidator(min_value))\n\n def to_python(self, value):\n value = super().to_python(value)\n if value is None:\n return value\n unit = get_default_weight_unit()\n return Weight(**{unit: value})\n\n def validate(self, weight):\n if weight is None or weight in self.empty_values:\n super().validate(weight)\n else:\n unit = get_default_weight_unit()\n if not isinstance(weight, Weight):\n raise Exception(\n '%r is not a valid weight.' % (weight,))\n if weight.unit != unit:\n raise forms.ValidationError(\n 'Invalid unit: %r (expected %r).' % (\n weight.unit, unit))\n super().validate(weight.value)\n\n def clean(self, value):\n value = value_to_be_validated = self.to_python(value)\n self.validate(value_to_be_validated)\n if isinstance(value, Weight):\n value_to_be_validated = Decimal(value.value)\n # default decimal validators can be used for Weight's value only\n self.run_validators(value_to_be_validated)\n return value\n", "path": "saleor/core/weight.py"}]} | 1,794 | 243 |
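A note on one of the two changes above: Django 2.1 requires `Widget.render()` implementations to accept the `renderer` argument that the form machinery now passes, so an override with the old three-argument signature raises a `TypeError` when the form is drawn. That is why the diff adds `renderer=None` to `WeightInput.render()` and forwards it. The sketch below reproduces the failure mode without Django; `BaseWidget` and its subclasses are illustrative stand-ins.

```python
# Sketch only: why an old-style render() signature breaks once the caller passes renderer=.
class BaseWidget:
    def render(self, name, value, attrs=None, renderer=None):
        return f"<input name='{name}' value='{value}'>"


class OldStyleWidget(BaseWidget):
    def render(self, name, value, attrs=None):              # pre-Django-2.1 signature
        return super().render(name, value, attrs=attrs)


class NewStyleWidget(BaseWidget):
    def render(self, name, value, attrs=None, renderer=None):
        return super().render(name, value, attrs=attrs, renderer=renderer)


if __name__ == "__main__":
    print(NewStyleWidget().render("weight", 10, renderer=None))   # works
    try:
        OldStyleWidget().render("weight", 10, renderer=None)
    except TypeError as exc:
        print("old signature fails:", exc)
```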
gh_patches_debug_61923 | rasdani/github-patches | git_diff | ray-project__ray-3109 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ship Modin with Ray
### Describe the problem
<!-- Describe the problem clearly here. -->
I think it makes sense to ship Modin with Ray. I suggest doing this similarly to how pyarrow is shipped with Ray.
We don't need to rely on the dependencies of Modin, but some of the Modin source will have to be updated to make sure that the pandas version is correct.
</issue>
<code>
[start of python/ray/__init__.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import sys
7
8 if "pyarrow" in sys.modules:
9 raise ImportError("Ray must be imported before pyarrow because Ray "
10 "requires a specific version of pyarrow (which is "
11 "packaged along with Ray).")
12
13 # Add the directory containing pyarrow to the Python path so that we find the
14 # pyarrow version packaged with ray and not a pre-existing pyarrow.
15 pyarrow_path = os.path.join(
16 os.path.abspath(os.path.dirname(__file__)), "pyarrow_files")
17 sys.path.insert(0, pyarrow_path)
18
19 # See https://github.com/ray-project/ray/issues/131.
20 helpful_message = """
21
22 If you are using Anaconda, try fixing this problem by running:
23
24 conda install libgcc
25 """
26
27 try:
28 import pyarrow # noqa: F401
29 except ImportError as e:
30 if ((hasattr(e, "msg") and isinstance(e.msg, str)
31 and ("libstdc++" in e.msg or "CXX" in e.msg))):
32 # This code path should be taken with Python 3.
33 e.msg += helpful_message
34 elif (hasattr(e, "message") and isinstance(e.message, str)
35 and ("libstdc++" in e.message or "CXX" in e.message)):
36 # This code path should be taken with Python 2.
37 condition = (hasattr(e, "args") and isinstance(e.args, tuple)
38 and len(e.args) == 1 and isinstance(e.args[0], str))
39 if condition:
40 e.args = (e.args[0] + helpful_message, )
41 else:
42 if not hasattr(e, "args"):
43 e.args = ()
44 elif not isinstance(e.args, tuple):
45 e.args = (e.args, )
46 e.args += (helpful_message, )
47 raise
48
49 from ray.raylet import ObjectID, _config # noqa: E402
50 from ray.profiling import profile # noqa: E402
51 from ray.worker import (error_info, init, connect, disconnect, get, put, wait,
52 remote, get_gpu_ids, get_resource_ids, get_webui_url,
53 register_custom_serializer, shutdown,
54 is_initialized) # noqa: E402
55 from ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,
56 PYTHON_MODE) # noqa: E402
57 from ray.worker import global_state # noqa: E402
58 import ray.internal # noqa: E402
59 # We import ray.actor because some code is run in actor.py which initializes
60 # some functions in the worker.
61 import ray.actor # noqa: F401
62 from ray.actor import method # noqa: E402
63
64 # Ray version string.
65 __version__ = "0.5.3"
66
67 __all__ = [
68 "error_info", "init", "connect", "disconnect", "get", "put", "wait",
69 "remote", "profile", "actor", "method", "get_gpu_ids", "get_resource_ids",
70 "get_webui_url", "register_custom_serializer", "shutdown",
71 "is_initialized", "SCRIPT_MODE", "WORKER_MODE", "LOCAL_MODE",
72 "PYTHON_MODE", "global_state", "ObjectID", "_config", "__version__",
73 "internal"
74 ]
75
76 import ctypes # noqa: E402
77 # Windows only
78 if hasattr(ctypes, "windll"):
79 # Makes sure that all child processes die when we die. Also makes sure that
80 # fatal crashes result in process termination rather than an error dialog
81 # (the latter is annoying since we have a lot of processes). This is done
82 # by associating all child processes with a "job" object that imposes this
83 # behavior.
84 (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, "\0" * 17 + chr(0x8 | 0x4 | 0x20) + "\0" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501
85
[end of python/ray/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/ray/__init__.py b/python/ray/__init__.py
--- a/python/ray/__init__.py
+++ b/python/ray/__init__.py
@@ -46,6 +46,9 @@
e.args += (helpful_message, )
raise
+modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
+sys.path.insert(0, modin_path)
+
from ray.raylet import ObjectID, _config # noqa: E402
from ray.profiling import profile # noqa: E402
from ray.worker import (error_info, init, connect, disconnect, get, put, wait,
| {"golden_diff": "diff --git a/python/ray/__init__.py b/python/ray/__init__.py\n--- a/python/ray/__init__.py\n+++ b/python/ray/__init__.py\n@@ -46,6 +46,9 @@\n e.args += (helpful_message, )\n raise\n \n+modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"modin\")\n+sys.path.insert(0, modin_path)\n+\n from ray.raylet import ObjectID, _config # noqa: E402\n from ray.profiling import profile # noqa: E402\n from ray.worker import (error_info, init, connect, disconnect, get, put, wait,\n", "issue": "Ship Modin with Ray\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nI think it makes sense to ship Modin with Ray. I suggest doing this similar to how pyarrow is shipped with Ray.\r\n\r\nWe don't need to rely on the dependencies of Modin, but some of the Modin source will have to be updated to make sure that the pandas version is correct.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nif \"pyarrow\" in sys.modules:\n raise ImportError(\"Ray must be imported before pyarrow because Ray \"\n \"requires a specific version of pyarrow (which is \"\n \"packaged along with Ray).\")\n\n# Add the directory containing pyarrow to the Python path so that we find the\n# pyarrow version packaged with ray and not a pre-existing pyarrow.\npyarrow_path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)), \"pyarrow_files\")\nsys.path.insert(0, pyarrow_path)\n\n# See https://github.com/ray-project/ray/issues/131.\nhelpful_message = \"\"\"\n\nIf you are using Anaconda, try fixing this problem by running:\n\n conda install libgcc\n\"\"\"\n\ntry:\n import pyarrow # noqa: F401\nexcept ImportError as e:\n if ((hasattr(e, \"msg\") and isinstance(e.msg, str)\n and (\"libstdc++\" in e.msg or \"CXX\" in e.msg))):\n # This code path should be taken with Python 3.\n e.msg += helpful_message\n elif (hasattr(e, \"message\") and isinstance(e.message, str)\n and (\"libstdc++\" in e.message or \"CXX\" in e.message)):\n # This code path should be taken with Python 2.\n condition = (hasattr(e, \"args\") and isinstance(e.args, tuple)\n and len(e.args) == 1 and isinstance(e.args[0], str))\n if condition:\n e.args = (e.args[0] + helpful_message, )\n else:\n if not hasattr(e, \"args\"):\n e.args = ()\n elif not isinstance(e.args, tuple):\n e.args = (e.args, )\n e.args += (helpful_message, )\n raise\n\nfrom ray.raylet import ObjectID, _config # noqa: E402\nfrom ray.profiling import profile # noqa: E402\nfrom ray.worker import (error_info, init, connect, disconnect, get, put, wait,\n remote, get_gpu_ids, get_resource_ids, get_webui_url,\n register_custom_serializer, shutdown,\n is_initialized) # noqa: E402\nfrom ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,\n PYTHON_MODE) # noqa: E402\nfrom ray.worker import global_state # noqa: E402\nimport ray.internal # noqa: E402\n# We import ray.actor because some code is run in actor.py which initializes\n# some functions in the worker.\nimport ray.actor # noqa: F401\nfrom ray.actor import method # noqa: E402\n\n# Ray version string.\n__version__ = \"0.5.3\"\n\n__all__ = [\n \"error_info\", \"init\", \"connect\", \"disconnect\", \"get\", \"put\", \"wait\",\n \"remote\", \"profile\", \"actor\", \"method\", \"get_gpu_ids\", \"get_resource_ids\",\n \"get_webui_url\", \"register_custom_serializer\", \"shutdown\",\n \"is_initialized\", \"SCRIPT_MODE\", \"WORKER_MODE\", \"LOCAL_MODE\",\n \"PYTHON_MODE\", 
\"global_state\", \"ObjectID\", \"_config\", \"__version__\",\n \"internal\"\n]\n\nimport ctypes # noqa: E402\n# Windows only\nif hasattr(ctypes, \"windll\"):\n # Makes sure that all child processes die when we die. Also makes sure that\n # fatal crashes result in process termination rather than an error dialog\n # (the latter is annoying since we have a lot of processes). This is done\n # by associating all child processes with a \"job\" object that imposes this\n # behavior.\n (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, \"\\0\" * 17 + chr(0x8 | 0x4 | 0x20) + \"\\0\" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501\n", "path": "python/ray/__init__.py"}]} | 1,785 | 155 |
gh_patches_debug_15080 | rasdani/github-patches | git_diff | pulp__pulpcore-5190 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix import in wsgi preventing startup
**Version**
Confirmed with Katello folks using the 3.49 branch.
**Describe the bug**
We're getting an error during the startup stage:
```python
Starting Pulp API Server...
Traceback (most recent call last):
File "/usr/bin/pulpcore-api", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.49.1', 'console_scripts', 'pulpcore-api')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py", line 140, in main
PulpcoreApiApplication(options).run()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/gunicorn/arbiter.py", line 58, in __init__
self.setup(app)
File "/usr/lib/python3.11/site-packages/gunicorn/arbiter.py", line 118, in setup
self.app.wsgi()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py", line 95, in load
import pulpcore.app.wsgi
File "/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py", line 14, in <module>
from pulpcore.app.util import init_domain_metrics_exporter
File "/usr/lib/python3.11/site-packages/pulpcore/app/util.py", line 24, in <module>
from pulpcore.app import models
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/__init__.py", line 4, in <module>
from .base import (
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/base.py", line 3, in <module>
from django.contrib.contenttypes.fields import GenericRelation
File "/usr/lib/python3.11/site-packages/django/contrib/contenttypes/fields.py", line 7, in <module>
from django.contrib.contenttypes.models import ContentType
File "/usr/lib/python3.11/site-packages/django/contrib/contenttypes/models.py", line 139, in <module>
class ContentType(models.Model):
File "/usr/lib/python3.11/site-packages/django/db/models/base.py", line 129, in __new__
app_config = apps.get_containing_app_config(module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/apps/registry.py", line 260, in get_containing_app_config
```
and what caught our eye was this line:
```python
File "/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py", line 14, in <module>
from pulpcore.app.util import init_domain_metrics_exporter
```
Also, there's already a fix for this in the main branch #5178
**To Reproduce**
Installing using pip and rpm packages.
**Expected behavior**
The application should start without issues
</issue>
<code>
[start of pulpcore/app/wsgi.py]
1 """
2 WSGI config for pulp project.
3
4 It exposes the WSGI callable as a module-level variable named ``application``.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/
8 """
9
10 from django.core.wsgi import get_wsgi_application
11 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
12
13 from pulpcore.app.entrypoint import using_pulp_api_worker
14 from pulpcore.app.util import init_domain_metrics_exporter
15
16 if not using_pulp_api_worker.get(False):
17 raise RuntimeError("This app must be executed using pulpcore-api entrypoint.")
18
19 application = get_wsgi_application()
20 application = OpenTelemetryMiddleware(application)
21
22 init_domain_metrics_exporter()
23
[end of pulpcore/app/wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/wsgi.py b/pulpcore/app/wsgi.py
--- a/pulpcore/app/wsgi.py
+++ b/pulpcore/app/wsgi.py
@@ -11,7 +11,6 @@
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
from pulpcore.app.entrypoint import using_pulp_api_worker
-from pulpcore.app.util import init_domain_metrics_exporter
if not using_pulp_api_worker.get(False):
raise RuntimeError("This app must be executed using pulpcore-api entrypoint.")
@@ -19,4 +18,6 @@
application = get_wsgi_application()
application = OpenTelemetryMiddleware(application)
+from pulpcore.app.util import init_domain_metrics_exporter # noqa: E402
+
init_domain_metrics_exporter()
| {"golden_diff": "diff --git a/pulpcore/app/wsgi.py b/pulpcore/app/wsgi.py\n--- a/pulpcore/app/wsgi.py\n+++ b/pulpcore/app/wsgi.py\n@@ -11,7 +11,6 @@\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n \n from pulpcore.app.entrypoint import using_pulp_api_worker\n-from pulpcore.app.util import init_domain_metrics_exporter\n \n if not using_pulp_api_worker.get(False):\n raise RuntimeError(\"This app must be executed using pulpcore-api entrypoint.\")\n@@ -19,4 +18,6 @@\n application = get_wsgi_application()\n application = OpenTelemetryMiddleware(application)\n \n+from pulpcore.app.util import init_domain_metrics_exporter # noqa: E402\n+\n init_domain_metrics_exporter()\n", "issue": "Fix import in wsgi preventing startup\n**Version**\r\nConfirmed with Katello folks using 3.49 branch.\r\n\r\n**Describe the bug**\r\nWe're getting an error during the startup stage:\r\n```python\r\nStarting Pulp API Server...\r\nTraceback (most recent call last):\r\n File \"/usr/bin/pulpcore-api\", line 33, in <module>\r\n sys.exit(load_entry_point('pulpcore==3.49.1', 'console_scripts', 'pulpcore-api')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/click/core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/click/core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/click/core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py\", line 140, in main\r\n PulpcoreApiApplication(options).run()\r\n File \"/usr/lib/python3.11/site-packages/gunicorn/app/base.py\", line 231, in run\r\n super().run()\r\n File \"/usr/lib/python3.11/site-packages/gunicorn/app/base.py\", line 72, in run\r\n Arbiter(self).run()\r\n ^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/gunicorn/arbiter.py\", line 58, in __init__\r\n self.setup(app)\r\n File \"/usr/lib/python3.11/site-packages/gunicorn/arbiter.py\", line 118, in setup\r\n self.app.wsgi()\r\n File \"/usr/lib/python3.11/site-packages/gunicorn/app/base.py\", line 67, in wsgi\r\n self.callable = self.load()\r\n ^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py\", line 95, in load\r\n import pulpcore.app.wsgi\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py\", line 14, in <module>\r\n from pulpcore.app.util import init_domain_metrics_exporter\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/util.py\", line 24, in <module>\r\n from pulpcore.app import models\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/models/__init__.py\", line 4, in <module>\r\n from .base import (\r\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/models/base.py\", line 3, in <module>\r\n from django.contrib.contenttypes.fields import GenericRelation\r\n File \"/usr/lib/python3.11/site-packages/django/contrib/contenttypes/fields.py\", line 7, in <module>\r\n from django.contrib.contenttypes.models import ContentType\r\n File \"/usr/lib/python3.11/site-packages/django/contrib/contenttypes/models.py\", line 139, in <module>\r\n class ContentType(models.Model):\r\n File 
\"/usr/lib/python3.11/site-packages/django/db/models/base.py\", line 129, in __new__\r\n app_config = apps.get_containing_app_config(module)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/django/apps/registry.py\", line 260, in get_containing_app_config\r\n```\r\n\r\nand what got our eye was this line:\r\n```python\r\nFile \"/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py\", line 14, in <module>\r\n from pulpcore.app.util import init_domain_metrics_exporter\r\n```\r\n\r\nAlso, there's already a fix for this in the main branch #5178\r\n\r\n**To Reproduce**\r\nInstalling using pip and rpm packages.\r\n\r\n**Expected behavior**\r\nThe application should start without issues\r\n\n", "before_files": [{"content": "\"\"\"\nWSGI config for pulp project.\n\nIt exposes the WSGI callable as a module-level variable named ``application``.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/\n\"\"\"\n\nfrom django.core.wsgi import get_wsgi_application\nfrom opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\nfrom pulpcore.app.entrypoint import using_pulp_api_worker\nfrom pulpcore.app.util import init_domain_metrics_exporter\n\nif not using_pulp_api_worker.get(False):\n raise RuntimeError(\"This app must be executed using pulpcore-api entrypoint.\")\n\napplication = get_wsgi_application()\napplication = OpenTelemetryMiddleware(application)\n\ninit_domain_metrics_exporter()\n", "path": "pulpcore/app/wsgi.py"}]} | 1,716 | 174 |
gh_patches_debug_24448 | rasdani/github-patches | git_diff | conan-io__conan-center-index-9862 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Package]OpenSUSE Tumbleweed fix problem - glu/system
https://github.com/conan-io/conan-center-index/blob/8658ae021ce225d889fa4ee38d30cb80877a7c75/recipes/glu/all/conanfile.py#L17-L32
This fixes the problem in openSUSE Tumbleweed:
```
elif tools.os_info.with_zypper:
packages = ["glu-devel"]
```
</issue>
<code>
[start of recipes/glu/all/conanfile.py]
1 from conans import ConanFile, tools
2 from conans.errors import ConanException
3 import os
4
5
6 class SysConfigGLUConan(ConanFile):
7 name = "glu"
8 version = "system"
9 description = "cross-platform virtual conan package for the GLU support"
10 topics = ("conan", "opengl", "glu")
11 url = "https://github.com/conan-io/conan-center-index"
12 homepage = "https://cgit.freedesktop.org/mesa/glu/"
13 license = "SGI-B-2.0"
14 settings = "os"
15 requires = "opengl/system"
16
17 def system_requirements(self):
18 packages = []
19 if tools.os_info.is_linux and self.settings.os == "Linux":
20 if tools.os_info.with_yum or tools.os_info.with_dnf:
21 packages = ["mesa-libGLU-devel"]
22 elif tools.os_info.with_apt:
23 packages = ["libglu1-mesa-dev"]
24 elif tools.os_info.with_pacman:
25 packages = ["glu"]
26 elif tools.os_info.with_zypper:
27 packages = ["Mesa-libGLU-devel"]
28 else:
29 self.output.warn("Don't know how to install GLU for your distro")
30 if tools.os_info.is_freebsd and self.settings.os == "FreeBSD":
31 packages = ["libGLU"]
32 if packages:
33 package_tool = tools.SystemPackageTool(conanfile=self, default_mode='verify')
34 for p in packages:
35 package_tool.install(update=True, packages=p)
36
37 def _fill_cppinfo_from_pkgconfig(self, name):
38 pkg_config = tools.PkgConfig(name)
39 if not pkg_config.provides:
40 raise ConanException("GLU development files aren't available, giving up")
41 libs = [lib[2:] for lib in pkg_config.libs_only_l]
42 lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
43 ldflags = [flag for flag in pkg_config.libs_only_other]
44 include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
45 cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
46 defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
47
48 self.cpp_info.system_libs.extend(libs)
49 self.cpp_info.libdirs.extend(lib_dirs)
50 self.cpp_info.sharedlinkflags.extend(ldflags)
51 self.cpp_info.exelinkflags.extend(ldflags)
52 self.cpp_info.defines.extend(defines)
53 self.cpp_info.includedirs.extend(include_dirs)
54 self.cpp_info.cflags.extend(cflags)
55 self.cpp_info.cxxflags.extend(cflags)
56
57 def package_info(self):
58 self.cpp_info.includedirs = []
59 self.cpp_info.libdirs = []
60
61 if self.settings.os == "Windows":
62 self.cpp_info.system_libs = ["Glu32"]
63 elif self.settings.os in ["Linux", "FreeBSD"]:
64 self._fill_cppinfo_from_pkgconfig("glu")
65
66 def package_id(self):
67 self.info.header_only()
68
[end of recipes/glu/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/glu/all/conanfile.py b/recipes/glu/all/conanfile.py
--- a/recipes/glu/all/conanfile.py
+++ b/recipes/glu/all/conanfile.py
@@ -1,13 +1,12 @@
from conans import ConanFile, tools
from conans.errors import ConanException
-import os
class SysConfigGLUConan(ConanFile):
name = "glu"
version = "system"
description = "cross-platform virtual conan package for the GLU support"
- topics = ("conan", "opengl", "glu")
+ topics = ("opengl", "glu")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://cgit.freedesktop.org/mesa/glu/"
license = "SGI-B-2.0"
@@ -24,7 +23,7 @@
elif tools.os_info.with_pacman:
packages = ["glu"]
elif tools.os_info.with_zypper:
- packages = ["Mesa-libGLU-devel"]
+ packages = ["glu-devel"]
else:
self.output.warn("Don't know how to install GLU for your distro")
if tools.os_info.is_freebsd and self.settings.os == "FreeBSD":
| {"golden_diff": "diff --git a/recipes/glu/all/conanfile.py b/recipes/glu/all/conanfile.py\n--- a/recipes/glu/all/conanfile.py\n+++ b/recipes/glu/all/conanfile.py\n@@ -1,13 +1,12 @@\n from conans import ConanFile, tools\n from conans.errors import ConanException\n-import os\n \n \n class SysConfigGLUConan(ConanFile):\n name = \"glu\"\n version = \"system\"\n description = \"cross-platform virtual conan package for the GLU support\"\n- topics = (\"conan\", \"opengl\", \"glu\")\n+ topics = (\"opengl\", \"glu\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://cgit.freedesktop.org/mesa/glu/\"\n license = \"SGI-B-2.0\"\n@@ -24,7 +23,7 @@\n elif tools.os_info.with_pacman:\n packages = [\"glu\"]\n elif tools.os_info.with_zypper:\n- packages = [\"Mesa-libGLU-devel\"]\n+ packages = [\"glu-devel\"]\n else:\n self.output.warn(\"Don't know how to install GLU for your distro\")\n if tools.os_info.is_freebsd and self.settings.os == \"FreeBSD\":\n", "issue": "[Package]OpenSUSE Tumbleweed fix problem - glu/system\nhttps://github.com/conan-io/conan-center-index/blob/8658ae021ce225d889fa4ee38d30cb80877a7c75/recipes/glu/all/conanfile.py#L17-L32\r\n\r\nThis fix the problem in openSUSE Tumbleweed:\r\n```\r\nelif tools.os_info.with_zypper:\r\n packages = [\"glu-devel\"]\r\n```\n", "before_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanException\nimport os\n\n\nclass SysConfigGLUConan(ConanFile):\n name = \"glu\"\n version = \"system\"\n description = \"cross-platform virtual conan package for the GLU support\"\n topics = (\"conan\", \"opengl\", \"glu\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://cgit.freedesktop.org/mesa/glu/\"\n license = \"SGI-B-2.0\"\n settings = \"os\"\n requires = \"opengl/system\"\n\n def system_requirements(self):\n packages = []\n if tools.os_info.is_linux and self.settings.os == \"Linux\":\n if tools.os_info.with_yum or tools.os_info.with_dnf:\n packages = [\"mesa-libGLU-devel\"]\n elif tools.os_info.with_apt:\n packages = [\"libglu1-mesa-dev\"]\n elif tools.os_info.with_pacman:\n packages = [\"glu\"]\n elif tools.os_info.with_zypper:\n packages = [\"Mesa-libGLU-devel\"]\n else:\n self.output.warn(\"Don't know how to install GLU for your distro\")\n if tools.os_info.is_freebsd and self.settings.os == \"FreeBSD\":\n packages = [\"libGLU\"]\n if packages:\n package_tool = tools.SystemPackageTool(conanfile=self, default_mode='verify')\n for p in packages:\n package_tool.install(update=True, packages=p)\n\n def _fill_cppinfo_from_pkgconfig(self, name):\n pkg_config = tools.PkgConfig(name)\n if not pkg_config.provides:\n raise ConanException(\"GLU development files aren't available, giving up\")\n libs = [lib[2:] for lib in pkg_config.libs_only_l]\n lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n ldflags = [flag for flag in pkg_config.libs_only_other]\n include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n\n self.cpp_info.system_libs.extend(libs)\n self.cpp_info.libdirs.extend(lib_dirs)\n self.cpp_info.sharedlinkflags.extend(ldflags)\n self.cpp_info.exelinkflags.extend(ldflags)\n self.cpp_info.defines.extend(defines)\n self.cpp_info.includedirs.extend(include_dirs)\n self.cpp_info.cflags.extend(cflags)\n self.cpp_info.cxxflags.extend(cflags)\n\n def package_info(self):\n 
self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs = [\"Glu32\"]\n elif self.settings.os in [\"Linux\", \"FreeBSD\"]:\n self._fill_cppinfo_from_pkgconfig(\"glu\")\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/glu/all/conanfile.py"}]} | 1,450 | 293 |
gh_patches_debug_15440 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3372 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Google maps API requests should use an API key
`For development purposes only` watermark is being shown on our maps as Google has made it mandatory to use an API key to talk to the maps API.
</issue>
<code>
[start of akvo/rsr/context_processors.py]
1 # -*- coding: utf-8 -*-
2 """
3 Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please see
7 < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 import re
11 import django
12
13 from django.conf import settings
14 from django.core.exceptions import DisallowedHost
15 from django.contrib.sites.models import get_current_site
16
17
18 def extra_context(request, protocol="http"):
19 """Add information to the request context."""
20 try:
21 current_site = get_current_site(request)
22 except DisallowedHost:
23 current_site = None
24
25 django_version = django.get_version()
26 debug = getattr(settings, 'DEBUG', False)
27 deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')
28 deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')
29 deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')
30 deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')
31 sentry_dsn = get_sentry_dsn(settings)
32
33 return dict(
34 current_site=current_site,
35 django_version=django_version,
36 debug=debug,
37 deploy_tag=deploy_tag,
38 deploy_branch=deploy_branch,
39 deploy_commit_id=deploy_commit_id,
40 deploy_commit_full_id=deploy_commit_full_id,
41 sentry_dsn=sentry_dsn,
42 )
43
44
45 def get_sentry_dsn(settings):
46 sentry_dsn = getattr(settings, 'RAVEN_CONFIG', {}).get('dsn', '')
47 sentry_dsn = re.sub('(:\w*?)@', '@', sentry_dsn)
48 # Always use https!
49 sentry_dsn = sentry_dsn.replace('http://', 'https://')
50 return sentry_dsn
51
52
53 def get_current_path_without_lang(request):
54 """Return current path without lang."""
55 path = request.get_full_path()
56 path_bits = path.split('/')
57 path = '/'.join(path_bits[2:])
58 return {'current_path_without_lang': path}
59
60
61 def extra_pages_context(request):
62 """Add context information of an RSR Page."""
63 if request.rsr_page:
64 page = request.rsr_page
65 return {
66 'rsr_page': page,
67 'favicon': page.favicon,
68 'logo': page.logo,
69 'organisation': page.organisation,
70 'return_url': page.return_url,
71 'return_url_text': page.custom_return_url_text,
72 'page_stylesheet': page.stylesheet,
73 'akvoapp_root_url': '//{}'.format(settings.AKVOAPP_DOMAIN),
74 'domain_url': '//{}'.format(settings.RSR_DOMAIN),
75 'no_facebook': not page.facebook_button,
76 'facebook_app_id': page.facebook_app_id,
77 'no_twitter': not page.twitter_button,
78 }
79
80 return {}
81
[end of akvo/rsr/context_processors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/context_processors.py b/akvo/rsr/context_processors.py
--- a/akvo/rsr/context_processors.py
+++ b/akvo/rsr/context_processors.py
@@ -29,6 +29,7 @@
deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')
deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')
sentry_dsn = get_sentry_dsn(settings)
+ gmaps_api_key = getattr(settings, 'GOOGLE_MAPS_API_KEY', 'NO_API_KEY')
return dict(
current_site=current_site,
@@ -39,6 +40,7 @@
deploy_commit_id=deploy_commit_id,
deploy_commit_full_id=deploy_commit_full_id,
sentry_dsn=sentry_dsn,
+ gmaps_api_key=gmaps_api_key,
)
| {"golden_diff": "diff --git a/akvo/rsr/context_processors.py b/akvo/rsr/context_processors.py\n--- a/akvo/rsr/context_processors.py\n+++ b/akvo/rsr/context_processors.py\n@@ -29,6 +29,7 @@\n deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')\n deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')\n sentry_dsn = get_sentry_dsn(settings)\n+ gmaps_api_key = getattr(settings, 'GOOGLE_MAPS_API_KEY', 'NO_API_KEY')\n \n return dict(\n current_site=current_site,\n@@ -39,6 +40,7 @@\n deploy_commit_id=deploy_commit_id,\n deploy_commit_full_id=deploy_commit_full_id,\n sentry_dsn=sentry_dsn,\n+ gmaps_api_key=gmaps_api_key,\n )\n", "issue": "Google maps API requests should use an API key\n`For development purposes only` watermark is being shown on our maps as Google has made it mandatory to use an API key to talk to the maps API. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAkvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please see\n< http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport re\nimport django\n\nfrom django.conf import settings\nfrom django.core.exceptions import DisallowedHost\nfrom django.contrib.sites.models import get_current_site\n\n\ndef extra_context(request, protocol=\"http\"):\n \"\"\"Add information to the request context.\"\"\"\n try:\n current_site = get_current_site(request)\n except DisallowedHost:\n current_site = None\n\n django_version = django.get_version()\n debug = getattr(settings, 'DEBUG', False)\n deploy_tag = getattr(settings, 'DEPLOY_TAG', 'Unknown')\n deploy_branch = getattr(settings, 'DEPLOY_BRANCH', 'Unknown')\n deploy_commit_id = getattr(settings, 'DEPLOY_COMMIT_ID', 'Unknown')\n deploy_commit_full_id = getattr(settings, 'DEPLOY_COMMIT_FULL_ID', 'Unknown')\n sentry_dsn = get_sentry_dsn(settings)\n\n return dict(\n current_site=current_site,\n django_version=django_version,\n debug=debug,\n deploy_tag=deploy_tag,\n deploy_branch=deploy_branch,\n deploy_commit_id=deploy_commit_id,\n deploy_commit_full_id=deploy_commit_full_id,\n sentry_dsn=sentry_dsn,\n )\n\n\ndef get_sentry_dsn(settings):\n sentry_dsn = getattr(settings, 'RAVEN_CONFIG', {}).get('dsn', '')\n sentry_dsn = re.sub('(:\\w*?)@', '@', sentry_dsn)\n # Always use https!\n sentry_dsn = sentry_dsn.replace('http://', 'https://')\n return sentry_dsn\n\n\ndef get_current_path_without_lang(request):\n \"\"\"Return current path without lang.\"\"\"\n path = request.get_full_path()\n path_bits = path.split('/')\n path = '/'.join(path_bits[2:])\n return {'current_path_without_lang': path}\n\n\ndef extra_pages_context(request):\n \"\"\"Add context information of an RSR Page.\"\"\"\n if request.rsr_page:\n page = request.rsr_page\n return {\n 'rsr_page': page,\n 'favicon': page.favicon,\n 'logo': page.logo,\n 'organisation': page.organisation,\n 'return_url': page.return_url,\n 'return_url_text': page.custom_return_url_text,\n 'page_stylesheet': page.stylesheet,\n 'akvoapp_root_url': '//{}'.format(settings.AKVOAPP_DOMAIN),\n 'domain_url': '//{}'.format(settings.RSR_DOMAIN),\n 'no_facebook': not page.facebook_button,\n 'facebook_app_id': page.facebook_app_id,\n 'no_twitter': not page.twitter_button,\n }\n\n return {}\n", "path": "akvo/rsr/context_processors.py"}]} | 1,355 | 195 |
gh_patches_debug_30947 | rasdani/github-patches | git_diff | allegro__ralph-1541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu package
We should build ubuntu packages for ralph (without much of scan plugins) to be able to install easily if you're reluctant to use docker.
- all js and components integrated into the package
- /etc/ralph for system configuration
- only ubuntu supported
</issue>
<code>
[start of src/ralph/__main__.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5
6 def main(settings_module='ralph.settings'):
7 os.environ.setdefault('DJANGO_SETTINGS_MODULE', settings_module)
8
9 from django.core.management import execute_from_command_line
10
11 execute_from_command_line(sys.argv)
12
13
14 def dev():
15 main('ralph.settings.dev')
16
17
18 def test():
19 main('ralph.settings.test')
20
21
22 if __name__ == '__main__':
23 main()
24
[end of src/ralph/__main__.py]
[start of setup.py]
1 # -*- encoding: utf-8 -*-
2
3 import os
4 import sys
5 from setuptools import setup, find_packages
6
7 assert sys.version_info >= (3, 3), 'Python 3.3+ required.'
8
9
10 def read(fname):
11 return open(os.path.join(os.path.dirname(__file__), fname)).read()
12
13 setup(
14 name='ralph',
15 version='3.0.0', # TODO: import from ralph
16 author='Grupa Allegro Sp. z o.o. and Contributors',
17 author_email='[email protected]',
18 description="Advanced Asset Management and DCIM system for data center and back office.",
19 long_description='\n'.join([read('README.md'), read('CHANGES')]),
20 url='http://ralph.allegrogroup.com/',
21 keywords='',
22 platforms=['any'],
23 license='Apache Software License v2.0',
24 packages=find_packages('src'), # TODO: remove src intermediate directory
25 include_package_data=True,
26 package_dir={'': 'src'},
27 zip_safe=False, # because templates are loaded from file path
28 entry_points={
29 'console_scripts': [
30 'ralph = ralph.__main__:main',
31 'dev_ralph = ralph.__main__:dev',
32 'test_ralph = ralph.__main__:test',
33 ],
34 },
35 classifiers=[
36 'Development Status :: 4 - Beta',
37 'Framework :: Django',
38 'Intended Audience :: System Administrators',
39 'Intended Audience :: Information Technology',
40 'License :: OSI Approved :: Apache Software License',
41 'Natural Language :: English',
42 'Operating System :: POSIX',
43 'Operating System :: MacOS :: MacOS X',
44 'Operating System :: Microsoft :: Windows :: Windows NT/2000',
45 'Programming Language :: Python',
46 'Programming Language :: Python :: 3',
47 'Programming Language :: Python :: 3.4',
48 'Topic :: Internet :: WWW/HTTP',
49 ]
50 )
51
[end of setup.py]
[start of src/ralph/settings/prod.py]
1 from ralph.settings import * # noqa
2
3 STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage' # noqa
4 STATIC_ROOT = os.path.join(BASE_DIR, 'var', 'static')
5
6 LDAP_SERVER_OBJECT_USER_CLASS = 'user' # possible values: "user, person
7
[end of src/ralph/settings/prod.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,7 @@
setup(
name='ralph',
- version='3.0.0', # TODO: import from ralph
+ version=read('./VERSION').strip(),
author='Grupa Allegro Sp. z o.o. and Contributors',
author_email='[email protected]',
description="Advanced Asset Management and DCIM system for data center and back office.",
@@ -21,13 +21,13 @@
keywords='',
platforms=['any'],
license='Apache Software License v2.0',
- packages=find_packages('src'), # TODO: remove src intermediate directory
+ packages=find_packages('src'),
include_package_data=True,
package_dir={'': 'src'},
zip_safe=False, # because templates are loaded from file path
entry_points={
'console_scripts': [
- 'ralph = ralph.__main__:main',
+ 'ralph = ralph.__main__:prod',
'dev_ralph = ralph.__main__:dev',
'test_ralph = ralph.__main__:test',
],
diff --git a/src/ralph/__main__.py b/src/ralph/__main__.py
--- a/src/ralph/__main__.py
+++ b/src/ralph/__main__.py
@@ -19,5 +19,9 @@
main('ralph.settings.test')
+def prod():
+ main('ralph.settings.prod')
+
+
if __name__ == '__main__':
- main()
+ main('ralph.settings.prod')
diff --git a/src/ralph/settings/prod.py b/src/ralph/settings/prod.py
--- a/src/ralph/settings/prod.py
+++ b/src/ralph/settings/prod.py
@@ -4,3 +4,7 @@
STATIC_ROOT = os.path.join(BASE_DIR, 'var', 'static')
LDAP_SERVER_OBJECT_USER_CLASS = 'user' # possible values: "user, person
+
+# FIXME: when going for full production, change it to False
+
+DEBUG = True
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,7 +12,7 @@\n \n setup(\n name='ralph',\n- version='3.0.0', # TODO: import from ralph\n+ version=read('./VERSION').strip(),\n author='Grupa Allegro Sp. z o.o. and Contributors',\n author_email='[email protected]',\n description=\"Advanced Asset Management and DCIM system for data center and back office.\",\n@@ -21,13 +21,13 @@\n keywords='',\n platforms=['any'],\n license='Apache Software License v2.0',\n- packages=find_packages('src'), # TODO: remove src intermediate directory\n+ packages=find_packages('src'),\n include_package_data=True,\n package_dir={'': 'src'},\n zip_safe=False, # because templates are loaded from file path\n entry_points={\n 'console_scripts': [\n- 'ralph = ralph.__main__:main',\n+ 'ralph = ralph.__main__:prod',\n 'dev_ralph = ralph.__main__:dev',\n 'test_ralph = ralph.__main__:test',\n ],\ndiff --git a/src/ralph/__main__.py b/src/ralph/__main__.py\n--- a/src/ralph/__main__.py\n+++ b/src/ralph/__main__.py\n@@ -19,5 +19,9 @@\n main('ralph.settings.test')\n \n \n+def prod():\n+ main('ralph.settings.prod')\n+\n+\n if __name__ == '__main__':\n- main()\n+ main('ralph.settings.prod')\ndiff --git a/src/ralph/settings/prod.py b/src/ralph/settings/prod.py\n--- a/src/ralph/settings/prod.py\n+++ b/src/ralph/settings/prod.py\n@@ -4,3 +4,7 @@\n STATIC_ROOT = os.path.join(BASE_DIR, 'var', 'static')\n \n LDAP_SERVER_OBJECT_USER_CLASS = 'user' # possible values: \"user, person\n+\n+# FIXME: when going for full production, change it to False\n+\n+DEBUG = True\n", "issue": "Ubuntu package\nWe should build ubuntu packages for ralph (without much of scan plugins) to be able to install easily if you're reluctant to use docker.\n- all js and components integrated into the package\n- /etc/ralph for system configuration\n- only ubuntu supported\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\n\ndef main(settings_module='ralph.settings'):\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', settings_module)\n\n from django.core.management import execute_from_command_line\n\n execute_from_command_line(sys.argv)\n\n\ndef dev():\n main('ralph.settings.dev')\n\n\ndef test():\n main('ralph.settings.test')\n\n\nif __name__ == '__main__':\n main()\n", "path": "src/ralph/__main__.py"}, {"content": "# -*- encoding: utf-8 -*-\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nassert sys.version_info >= (3, 3), 'Python 3.3+ required.'\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\nsetup(\n name='ralph',\n version='3.0.0', # TODO: import from ralph\n author='Grupa Allegro Sp. z o.o. 
and Contributors',\n author_email='[email protected]',\n description=\"Advanced Asset Management and DCIM system for data center and back office.\",\n long_description='\\n'.join([read('README.md'), read('CHANGES')]),\n url='http://ralph.allegrogroup.com/',\n keywords='',\n platforms=['any'],\n license='Apache Software License v2.0',\n packages=find_packages('src'), # TODO: remove src intermediate directory\n include_package_data=True,\n package_dir={'': 'src'},\n zip_safe=False, # because templates are loaded from file path\n entry_points={\n 'console_scripts': [\n 'ralph = ralph.__main__:main',\n 'dev_ralph = ralph.__main__:dev',\n 'test_ralph = ralph.__main__:test',\n ],\n },\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Framework :: Django',\n 'Intended Audience :: System Administrators',\n 'Intended Audience :: Information Technology',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows :: Windows NT/2000',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Internet :: WWW/HTTP',\n ]\n)\n", "path": "setup.py"}, {"content": "from ralph.settings import * # noqa\n\nSTATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage' # noqa\nSTATIC_ROOT = os.path.join(BASE_DIR, 'var', 'static')\n\nLDAP_SERVER_OBJECT_USER_CLASS = 'user' # possible values: \"user, person\n", "path": "src/ralph/settings/prod.py"}]} | 1,352 | 477 |
gh_patches_debug_39660 | rasdani/github-patches | git_diff | streamlink__streamlink-141 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Euronews plugin broken
I dig up EuroNews plugin which is broken since December 2014.
https://github.com/chrippa/livestreamer/issues/626
</issue>
<code>
[start of src/streamlink/plugins/euronews.py]
1 import re
2
3 from itertools import chain
4
5 from streamlink.compat import urlparse
6 from streamlink.plugin import Plugin
7 from streamlink.plugin.api import http
8 from streamlink.stream import HLSStream, HTTPStream
9
10 from streamlink.plugin.api.support_plugin import common_jwplayer as jwplayer
11
12 _url_re = re.compile("http(s)?://(\w+\.)?euronews.com")
13
14
15 class Euronews(Plugin):
16 @classmethod
17 def can_handle_url(self, url):
18 return _url_re.match(url)
19
20 def _create_stream(self, source):
21 url = source["file"]
22
23 if urlparse(url).path.endswith("m3u8"):
24 streams = HLSStream.parse_variant_playlist(self.session, url)
25
26 # TODO: Replace with "yield from" when dropping Python 2.
27 for stream in streams.items():
28 yield stream
29 else:
30 name = source.get("label", "vod")
31 yield name, HTTPStream(self.session, url)
32
33 def _get_streams(self):
34 res = http.get(self.url)
35 playlist = jwplayer.parse_playlist(res)
36 if not playlist:
37 return
38
39 for item in playlist:
40 streams = map(self._create_stream, item["sources"])
41
42 # TODO: Replace with "yield from" when dropping Python 2.
43 for stream in chain.from_iterable(streams):
44 yield stream
45
46 __plugin__ = Euronews
47
[end of src/streamlink/plugins/euronews.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/euronews.py b/src/streamlink/plugins/euronews.py
--- a/src/streamlink/plugins/euronews.py
+++ b/src/streamlink/plugins/euronews.py
@@ -1,46 +1,77 @@
import re
-from itertools import chain
-
-from streamlink.compat import urlparse
from streamlink.plugin import Plugin
from streamlink.plugin.api import http
+from streamlink.plugin.api import validate
from streamlink.stream import HLSStream, HTTPStream
-from streamlink.plugin.api.support_plugin import common_jwplayer as jwplayer
-
-_url_re = re.compile("http(s)?://(\w+\.)?euronews.com")
-
class Euronews(Plugin):
- @classmethod
- def can_handle_url(self, url):
- return _url_re.match(url)
+ _url_re = re.compile("http(?:s)?://(\w+)\.?euronews.com/(live|.*)")
+ _re_vod = re.compile(r'<meta\s+property="og:video"\s+content="(http.*?)"\s*/>')
+ _live_api_url = "http://fr.euronews.com/api/watchlive.json"
+ _live_schema = validate.Schema({
+ u"url": validate.url()
+ })
+ _stream_api_schema = validate.Schema({
+ u'status': u'ok',
+ u'primary': {
+ validate.text: {
+ validate.optional(u'hls'): validate.url(),
+ validate.optional(u'rtsp'): validate.url(scheme="rtsp")
+ }
+ },
+ validate.optional(u'backup'): {
+ validate.text: {
+ validate.optional(u'hls'): validate.url(),
+ validate.optional(u'rtsp'): validate.url(scheme="rtsp")
+ }
+ }
+ })
- def _create_stream(self, source):
- url = source["file"]
+ @classmethod
+ def can_handle_url(cls, url):
+ return cls._url_re.match(url)
- if urlparse(url).path.endswith("m3u8"):
- streams = HLSStream.parse_variant_playlist(self.session, url)
+ def _get_vod_stream(self):
+ """
+ Find the VOD video url
+ :return: video url
+ """
+ res = http.get(self.url)
+ video_urls = self._re_vod.findall(res.text)
+ if len(video_urls):
+ return dict(vod=HTTPStream(self.session, video_urls[0]))
- # TODO: Replace with "yield from" when dropping Python 2.
- for stream in streams.items():
- yield stream
- else:
- name = source.get("label", "vod")
- yield name, HTTPStream(self.session, url)
+ def _get_live_streams(self, language):
+ """
+ Get the live stream in a particular language
+ :param language:
+ :return:
+ """
+ res = http.get(self._live_api_url)
+ live_res = http.json(res, schema=self._live_schema)
+ api_res = http.get(live_res[u"url"])
+ stream_data = http.json(api_res, schema=self._stream_api_schema)
+ # find the stream in the requested language
+ if language in stream_data[u'primary']:
+ playlist_url = stream_data[u'primary'][language][u"hls"]
+ return HLSStream.parse_variant_playlist(self.session, playlist_url)
def _get_streams(self):
- res = http.get(self.url)
- playlist = jwplayer.parse_playlist(res)
- if not playlist:
- return
+ """
+ Find the streams for euronews
+ :return:
+ """
+ match = self._url_re.match(self.url)
+ language, path = match.groups()
- for item in playlist:
- streams = map(self._create_stream, item["sources"])
+ # remap domain to language (default to english)
+ language = {"www": "en", "": "en", "arabic": "ar"}.get(language, language)
- # TODO: Replace with "yield from" when dropping Python 2.
- for stream in chain.from_iterable(streams):
- yield stream
+ if path == "live":
+ return self._get_live_streams(language)
+ else:
+ return self._get_vod_stream()
__plugin__ = Euronews
| {"golden_diff": "diff --git a/src/streamlink/plugins/euronews.py b/src/streamlink/plugins/euronews.py\n--- a/src/streamlink/plugins/euronews.py\n+++ b/src/streamlink/plugins/euronews.py\n@@ -1,46 +1,77 @@\n import re\n \n-from itertools import chain\n-\n-from streamlink.compat import urlparse\n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http\n+from streamlink.plugin.api import validate\n from streamlink.stream import HLSStream, HTTPStream\n \n-from streamlink.plugin.api.support_plugin import common_jwplayer as jwplayer\n-\n-_url_re = re.compile(\"http(s)?://(\\w+\\.)?euronews.com\")\n-\n \n class Euronews(Plugin):\n- @classmethod\n- def can_handle_url(self, url):\n- return _url_re.match(url)\n+ _url_re = re.compile(\"http(?:s)?://(\\w+)\\.?euronews.com/(live|.*)\")\n+ _re_vod = re.compile(r'<meta\\s+property=\"og:video\"\\s+content=\"(http.*?)\"\\s*/>')\n+ _live_api_url = \"http://fr.euronews.com/api/watchlive.json\"\n+ _live_schema = validate.Schema({\n+ u\"url\": validate.url()\n+ })\n+ _stream_api_schema = validate.Schema({\n+ u'status': u'ok',\n+ u'primary': {\n+ validate.text: {\n+ validate.optional(u'hls'): validate.url(),\n+ validate.optional(u'rtsp'): validate.url(scheme=\"rtsp\")\n+ }\n+ },\n+ validate.optional(u'backup'): {\n+ validate.text: {\n+ validate.optional(u'hls'): validate.url(),\n+ validate.optional(u'rtsp'): validate.url(scheme=\"rtsp\")\n+ }\n+ }\n+ })\n \n- def _create_stream(self, source):\n- url = source[\"file\"]\n+ @classmethod\n+ def can_handle_url(cls, url):\n+ return cls._url_re.match(url)\n \n- if urlparse(url).path.endswith(\"m3u8\"):\n- streams = HLSStream.parse_variant_playlist(self.session, url)\n+ def _get_vod_stream(self):\n+ \"\"\"\n+ Find the VOD video url\n+ :return: video url\n+ \"\"\"\n+ res = http.get(self.url)\n+ video_urls = self._re_vod.findall(res.text)\n+ if len(video_urls):\n+ return dict(vod=HTTPStream(self.session, video_urls[0]))\n \n- # TODO: Replace with \"yield from\" when dropping Python 2.\n- for stream in streams.items():\n- yield stream\n- else:\n- name = source.get(\"label\", \"vod\")\n- yield name, HTTPStream(self.session, url)\n+ def _get_live_streams(self, language):\n+ \"\"\"\n+ Get the live stream in a particular language\n+ :param language:\n+ :return:\n+ \"\"\"\n+ res = http.get(self._live_api_url)\n+ live_res = http.json(res, schema=self._live_schema)\n+ api_res = http.get(live_res[u\"url\"])\n+ stream_data = http.json(api_res, schema=self._stream_api_schema)\n+ # find the stream in the requested language\n+ if language in stream_data[u'primary']:\n+ playlist_url = stream_data[u'primary'][language][u\"hls\"]\n+ return HLSStream.parse_variant_playlist(self.session, playlist_url)\n \n def _get_streams(self):\n- res = http.get(self.url)\n- playlist = jwplayer.parse_playlist(res)\n- if not playlist:\n- return\n+ \"\"\"\n+ Find the streams for euronews\n+ :return:\n+ \"\"\"\n+ match = self._url_re.match(self.url)\n+ language, path = match.groups()\n \n- for item in playlist:\n- streams = map(self._create_stream, item[\"sources\"])\n+ # remap domain to language (default to english)\n+ language = {\"www\": \"en\", \"\": \"en\", \"arabic\": \"ar\"}.get(language, language)\n \n- # TODO: Replace with \"yield from\" when dropping Python 2.\n- for stream in chain.from_iterable(streams):\n- yield stream\n+ if path == \"live\":\n+ return self._get_live_streams(language)\n+ else:\n+ return self._get_vod_stream()\n \n __plugin__ = Euronews\n", "issue": "Euronews plugin broken\nI dig up EuroNews plugin which 
is broken since December 2014.\r\n\r\nhttps://github.com/chrippa/livestreamer/issues/626\n", "before_files": [{"content": "import re\n\nfrom itertools import chain\n\nfrom streamlink.compat import urlparse\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream, HTTPStream\n\nfrom streamlink.plugin.api.support_plugin import common_jwplayer as jwplayer\n\n_url_re = re.compile(\"http(s)?://(\\w+\\.)?euronews.com\")\n\n\nclass Euronews(Plugin):\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _create_stream(self, source):\n url = source[\"file\"]\n\n if urlparse(url).path.endswith(\"m3u8\"):\n streams = HLSStream.parse_variant_playlist(self.session, url)\n\n # TODO: Replace with \"yield from\" when dropping Python 2.\n for stream in streams.items():\n yield stream\n else:\n name = source.get(\"label\", \"vod\")\n yield name, HTTPStream(self.session, url)\n\n def _get_streams(self):\n res = http.get(self.url)\n playlist = jwplayer.parse_playlist(res)\n if not playlist:\n return\n\n for item in playlist:\n streams = map(self._create_stream, item[\"sources\"])\n\n # TODO: Replace with \"yield from\" when dropping Python 2.\n for stream in chain.from_iterable(streams):\n yield stream\n\n__plugin__ = Euronews\n", "path": "src/streamlink/plugins/euronews.py"}]} | 975 | 982 |
gh_patches_debug_3261 | rasdani/github-patches | git_diff | Kinto__kinto-476 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error while trying to generate a configuration file without subfolder with CLI.
```
$ kinto --ini kinto.ini
Traceback (most recent call last):
File "~/.virtualenvs/kinto/bin/kinto", line 9, in <module>
load_entry_point('kinto', 'console_scripts', 'kinto')()
File "~/mozilla/kinto/kinto/__main__.py", line 72, in main
init(config_file, backend)
File "~/mozilla/kinto/kinto/config/__init__.py", line 50, in init
render_template("kinto.tpl", config_file, **values)
File "~/mozilla/kinto/kinto/config/__init__.py", line 14, in render_template
os.makedirs(folder)
File "~/.virtualenvs/kinto/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 2] No such file or directory: ''
```
</issue>
<code>
[start of kinto/config/__init__.py]
1 import os
2 import codecs
3
4 from cliquet import utils as cliquet_utils
5
6 from kinto import logger
7
8 HERE = os.path.abspath(os.path.dirname(__file__))
9
10
11 def render_template(template, destination, **kwargs):
12 template = os.path.join(HERE, template)
13 folder = os.path.dirname(destination)
14
15 if not os.path.exists(folder):
16 os.makedirs(folder)
17
18 logger.info("Created config {}".format(os.path.abspath(destination)))
19
20 with codecs.open(template, 'r', encoding='utf-8') as f:
21 raw_template = f.read()
22 rendered = raw_template.format(**kwargs)
23 with codecs.open(destination, 'w+', encoding='utf-8') as output:
24 output.write(rendered)
25
26
27 def init(config_file, backend):
28 values = {}
29
30 values['secret'] = cliquet_utils.random_bytes_hex(32)
31
32 values['storage_backend'] = "cliquet.storage.%s" % backend
33 values['cache_backend'] = "cliquet.cache.%s" % backend
34 values['permission_backend'] = "cliquet.permission.%s" % backend
35
36 if backend == 'postgresql':
37 postgresql_url = "postgres://postgres:postgres@localhost/postgres"
38 values['storage_url'] = postgresql_url
39 values['cache_url'] = postgresql_url
40 values['permission_url'] = postgresql_url
41
42 elif backend == 'redis':
43 redis_url = "redis://localhost:6379"
44 values['storage_url'] = redis_url + "/1"
45 values['cache_url'] = redis_url + "/2"
46 values['permission_url'] = redis_url + "/3"
47
48 else:
49 values['storage_url'] = ''
50 values['cache_url'] = ''
51 values['permission_url'] = ''
52
53 render_template("kinto.tpl", config_file, **values)
54
[end of kinto/config/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py
--- a/kinto/config/__init__.py
+++ b/kinto/config/__init__.py
@@ -12,7 +12,7 @@
template = os.path.join(HERE, template)
folder = os.path.dirname(destination)
- if not os.path.exists(folder):
+ if folder and not os.path.exists(folder):
os.makedirs(folder)
logger.info("Created config {}".format(os.path.abspath(destination)))
| {"golden_diff": "diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py\n--- a/kinto/config/__init__.py\n+++ b/kinto/config/__init__.py\n@@ -12,7 +12,7 @@\n template = os.path.join(HERE, template)\n folder = os.path.dirname(destination)\n \n- if not os.path.exists(folder):\n+ if folder and not os.path.exists(folder):\n os.makedirs(folder)\n \n logger.info(\"Created config {}\".format(os.path.abspath(destination)))\n", "issue": "Error while trying to generate a configuration file without subfolder with CLI.\n```\n$ kinto --ini kinto.ini\n\nTraceback (most recent call last):\n File \"~/.virtualenvs/kinto/bin/kinto\", line 9, in <module>\n load_entry_point('kinto', 'console_scripts', 'kinto')()\n File \"~/mozilla/kinto/kinto/__main__.py\", line 72, in main\n init(config_file, backend)\n File \"~/mozilla/kinto/kinto/config/__init__.py\", line 50, in init\n render_template(\"kinto.tpl\", config_file, **values)\n File \"~/mozilla/kinto/kinto/config/__init__.py\", line 14, in render_template\n os.makedirs(folder)\n File \"~/.virtualenvs/kinto/lib/python2.7/os.py\", line 157, in makedirs\n mkdir(name, mode)\nOSError: [Errno 2] No such file or directory: ''\n```\n\n", "before_files": [{"content": "import os\nimport codecs\n\nfrom cliquet import utils as cliquet_utils\n\nfrom kinto import logger\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef render_template(template, destination, **kwargs):\n template = os.path.join(HERE, template)\n folder = os.path.dirname(destination)\n\n if not os.path.exists(folder):\n os.makedirs(folder)\n\n logger.info(\"Created config {}\".format(os.path.abspath(destination)))\n\n with codecs.open(template, 'r', encoding='utf-8') as f:\n raw_template = f.read()\n rendered = raw_template.format(**kwargs)\n with codecs.open(destination, 'w+', encoding='utf-8') as output:\n output.write(rendered)\n\n\ndef init(config_file, backend):\n values = {}\n\n values['secret'] = cliquet_utils.random_bytes_hex(32)\n\n values['storage_backend'] = \"cliquet.storage.%s\" % backend\n values['cache_backend'] = \"cliquet.cache.%s\" % backend\n values['permission_backend'] = \"cliquet.permission.%s\" % backend\n\n if backend == 'postgresql':\n postgresql_url = \"postgres://postgres:postgres@localhost/postgres\"\n values['storage_url'] = postgresql_url\n values['cache_url'] = postgresql_url\n values['permission_url'] = postgresql_url\n\n elif backend == 'redis':\n redis_url = \"redis://localhost:6379\"\n values['storage_url'] = redis_url + \"/1\"\n values['cache_url'] = redis_url + \"/2\"\n values['permission_url'] = redis_url + \"/3\"\n\n else:\n values['storage_url'] = ''\n values['cache_url'] = ''\n values['permission_url'] = ''\n\n render_template(\"kinto.tpl\", config_file, **values)\n", "path": "kinto/config/__init__.py"}]} | 1,253 | 112 |
gh_patches_debug_2172 | rasdani/github-patches | git_diff | liqd__a4-opin-1799 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changing the Organisation Details is not possible
**URL:**
https://opin.me/en/dashboard/organisations/liquid-democracy/settings/
**user:**
Initiators, who try to fill in the Organisations details & as an admin too.
**expected behaviour:**
If I fill in Organisation details, save them and it is there
**behaviour:**
I fill in the Organisation details, press save and it reloads, but do not save.
**important screensize:**
**device & browser:**
Firefox 73.0.1 (64-Bit)
**Comment/Question:**
Screenshot?
</issue>
<code>
[start of euth/dashboard/forms.py]
1
2 import parler
3 from django import forms
4 from django.conf import settings
5 from django.core.exceptions import ValidationError
6 from django.utils.translation import ugettext_lazy as _
7
8 from euth.organisations.models import Organisation
9
10
11 class OrganisationForm(forms.ModelForm):
12 translated_fields = [
13 ('description_why', forms.CharField, {
14 'label': _('description why'),
15 'widget': forms.Textarea,
16 }),
17 ('description_how', forms.CharField, {
18 'widget': forms.Textarea,
19 'label': _('description how')
20 }),
21 ('description', forms.CharField, {
22 'label': _('description'),
23 'help_text': _(
24 'More info about the organisation / '
25 'Short text for organisation overview'),
26 'widget': forms.Textarea,
27 })
28 ]
29 languages = [lang_code for lang_code, lang in settings.LANGUAGES]
30
31 class Meta:
32 model = Organisation
33 fields = [
34 'name', 'image', 'logo', 'twitter_handle', 'facebook_handle',
35 'instagram_handle', 'webpage', 'country', 'place'
36 ]
37 help_texts = {
38 'name': _('The title of your organisation'),
39 }
40
41 def _get_identifier(self, language, fieldname):
42 return '{}__{}'.format(language, fieldname)
43
44 def __init__(self, *args, **kwargs):
45 super().__init__(*args, **kwargs)
46
47 # inject additional form fields for translated model fields
48 for lang_code in self.languages:
49 for name, field_cls, kwargs in self.translated_fields:
50 self.instance.set_current_language(lang_code)
51 field = field_cls(**kwargs)
52 identifier = self._get_identifier(
53 lang_code, name)
54 field.required = False
55
56 try:
57 translation = self.instance.get_translation(lang_code)
58 initial = getattr(translation, name)
59 except parler.models.TranslationDoesNotExist:
60 initial = ''
61
62 field.initial = initial
63 self.fields[identifier] = field
64
65 def translated(self):
66 """
67 Return translated fields as list of tuples (language code, fields).
68 """
69
70 from itertools import groupby
71 fields = [(field.html_name.split('__')[0], field) for field in self
72 if '__' in field.html_name]
73 groups = groupby(fields, lambda x: x[0])
74 values = [(lang, list(map(lambda x: x[1], group)))
75 for lang, group in groups]
76 return values
77
78 def untranslated(self):
79 """
80 Return untranslated fields as flat list.
81 """
82 return [field for field in self if '__' not in field.html_name]
83
84 def prefiled_languages(self):
85 """
86 Return languages tabs that need to be displayed.
87 """
88 languages = [lang for lang in self.languages
89 if lang in self.data
90 or self.instance.has_translation(lang)]
91 # always provide english
92 if 'en' not in languages:
93 languages.insert(0, 'en')
94 return languages
95
96 def save(self, commit=True):
97 instance = super().save(commit=commit)
98 if commit is True:
99 for lang_code in self.languages:
100 if lang_code in self.data:
101 instance.set_current_language(lang_code)
102 for fieldname, _cls, _kwargs in self.translated_fields:
103 identifier = '{}__{}'.format(lang_code, fieldname)
104 setattr(instance, fieldname,
105 self.cleaned_data.get(identifier))
106 instance.save()
107 elif instance.has_translation(lang_code):
108 instance.delete_translation(lang_code)
109 return instance
110
111 def clean(self):
112 for lang_code in self.languages:
113 if lang_code in self.data:
114 for fieldname in self.translated_fields:
115 identifier = self._get_identifier(lang_code, fieldname[0])
116 data = self.cleaned_data
117 if identifier not in data or not data[identifier]:
118 msg = 'This field is required'
119 raise ValidationError((identifier, msg))
120
121 return self.cleaned_data
122
[end of euth/dashboard/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/dashboard/forms.py b/euth/dashboard/forms.py
--- a/euth/dashboard/forms.py
+++ b/euth/dashboard/forms.py
@@ -81,7 +81,7 @@
"""
return [field for field in self if '__' not in field.html_name]
- def prefiled_languages(self):
+ def prefilled_languages(self):
"""
Return languages tabs that need to be displayed.
"""
| {"golden_diff": "diff --git a/euth/dashboard/forms.py b/euth/dashboard/forms.py\n--- a/euth/dashboard/forms.py\n+++ b/euth/dashboard/forms.py\n@@ -81,7 +81,7 @@\n \"\"\"\n return [field for field in self if '__' not in field.html_name]\n \n- def prefiled_languages(self):\n+ def prefilled_languages(self):\n \"\"\"\n Return languages tabs that need to be displayed.\n \"\"\"\n", "issue": "Changing the Organisation Details is not possible\n**URL:** \r\nhttps://opin.me/en/dashboard/organisations/liquid-democracy/settings/\r\n**user:** \r\nInitiators, who try to fill in the Organisations details & as an admin too.\r\n**expected behaviour:** \r\nIf I fill in Organisation details, save them and it is there\r\n**behaviour:** \r\nI fill in the Organisation details, press save and it reloads, but do not save.\r\n**important screensize:**\r\n\r\n**device & browser:** \r\nFirefox 73.0.1 (64-Bit)\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "\nimport parler\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom euth.organisations.models import Organisation\n\n\nclass OrganisationForm(forms.ModelForm):\n translated_fields = [\n ('description_why', forms.CharField, {\n 'label': _('description why'),\n 'widget': forms.Textarea,\n }),\n ('description_how', forms.CharField, {\n 'widget': forms.Textarea,\n 'label': _('description how')\n }),\n ('description', forms.CharField, {\n 'label': _('description'),\n 'help_text': _(\n 'More info about the organisation / '\n 'Short text for organisation overview'),\n 'widget': forms.Textarea,\n })\n ]\n languages = [lang_code for lang_code, lang in settings.LANGUAGES]\n\n class Meta:\n model = Organisation\n fields = [\n 'name', 'image', 'logo', 'twitter_handle', 'facebook_handle',\n 'instagram_handle', 'webpage', 'country', 'place'\n ]\n help_texts = {\n 'name': _('The title of your organisation'),\n }\n\n def _get_identifier(self, language, fieldname):\n return '{}__{}'.format(language, fieldname)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # inject additional form fields for translated model fields\n for lang_code in self.languages:\n for name, field_cls, kwargs in self.translated_fields:\n self.instance.set_current_language(lang_code)\n field = field_cls(**kwargs)\n identifier = self._get_identifier(\n lang_code, name)\n field.required = False\n\n try:\n translation = self.instance.get_translation(lang_code)\n initial = getattr(translation, name)\n except parler.models.TranslationDoesNotExist:\n initial = ''\n\n field.initial = initial\n self.fields[identifier] = field\n\n def translated(self):\n \"\"\"\n Return translated fields as list of tuples (language code, fields).\n \"\"\"\n\n from itertools import groupby\n fields = [(field.html_name.split('__')[0], field) for field in self\n if '__' in field.html_name]\n groups = groupby(fields, lambda x: x[0])\n values = [(lang, list(map(lambda x: x[1], group)))\n for lang, group in groups]\n return values\n\n def untranslated(self):\n \"\"\"\n Return untranslated fields as flat list.\n \"\"\"\n return [field for field in self if '__' not in field.html_name]\n\n def prefiled_languages(self):\n \"\"\"\n Return languages tabs that need to be displayed.\n \"\"\"\n languages = [lang for lang in self.languages\n if lang in self.data\n or self.instance.has_translation(lang)]\n # always provide english\n if 'en' not in languages:\n 
languages.insert(0, 'en')\n return languages\n\n def save(self, commit=True):\n instance = super().save(commit=commit)\n if commit is True:\n for lang_code in self.languages:\n if lang_code in self.data:\n instance.set_current_language(lang_code)\n for fieldname, _cls, _kwargs in self.translated_fields:\n identifier = '{}__{}'.format(lang_code, fieldname)\n setattr(instance, fieldname,\n self.cleaned_data.get(identifier))\n instance.save()\n elif instance.has_translation(lang_code):\n instance.delete_translation(lang_code)\n return instance\n\n def clean(self):\n for lang_code in self.languages:\n if lang_code in self.data:\n for fieldname in self.translated_fields:\n identifier = self._get_identifier(lang_code, fieldname[0])\n data = self.cleaned_data\n if identifier not in data or not data[identifier]:\n msg = 'This field is required'\n raise ValidationError((identifier, msg))\n\n return self.cleaned_data\n", "path": "euth/dashboard/forms.py"}]} | 1,758 | 96 |
gh_patches_debug_16640 | rasdani/github-patches | git_diff | PrefectHQ__prefect-9724 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PrefectHTTPStatusError: Client error '429 Too Many Requests' for url
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
While using `prefect` with `prefect-dask` I encountered a rate limit error. this shouldn't be happening as prefect client base should retry on those. I'm not sure why this is happening but this has risen at `2.10.10` and did not exist before
### Reproduction
```python3
Any Flow with prefect-dask
```
### Error
```python3
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/distributed/client.py", line 1697, in _close
await self.scheduler_comm.close()
asyncio.exceptions.CancelledError
01:00:08.452 | ERROR | Flow run 'psi5-alastria-x' - Crash detected! Execution was interrupted by an unexpected exception: PrefectHTTPStatusError: Client error '429 Too Many Requests' for url 'https://cloud-url/task_runs/'
Response: {'detail': 'Orchestration API rate limit reached'}
For more information check: https://httpstatuses.com/429
```
### Versions
```Text
Version: 2.10.10
API version: 0.8.4
Python version: 3.11.2
Git commit: 8159450b
Built: Thu, May 18, 2023 3:43 PM
OS/Arch: linux/x86_64
Profile: default
Server type: server
```
### Additional context
_No response_
</issue>
<code>
[start of src/prefect/client/cloud.py]
1 import re
2 from typing import Any, Dict, List, Optional
3
4 import anyio
5 import httpx
6 import pydantic
7 from fastapi import status
8
9 import prefect.context
10 import prefect.settings
11 from prefect.client.schemas import Workspace
12 from prefect.exceptions import PrefectException
13 from prefect.settings import PREFECT_API_KEY, PREFECT_CLOUD_API_URL
14
15
16 def get_cloud_client(
17 host: Optional[str] = None,
18 api_key: Optional[str] = None,
19 httpx_settings: Optional[dict] = None,
20 infer_cloud_url: bool = False,
21 ) -> "CloudClient":
22 """
23 Needs a docstring.
24 """
25 if httpx_settings is not None:
26 httpx_settings = httpx_settings.copy()
27
28 if infer_cloud_url is False:
29 host = host or PREFECT_CLOUD_API_URL.value()
30 else:
31 configured_url = prefect.settings.PREFECT_API_URL.value()
32 host = re.sub(r"accounts/.{36}/workspaces/.{36}\Z", "", configured_url)
33
34 return CloudClient(
35 host=host,
36 api_key=api_key or PREFECT_API_KEY.value(),
37 httpx_settings=httpx_settings,
38 )
39
40
41 class CloudUnauthorizedError(PrefectException):
42 """
43 Raised when the CloudClient receives a 401 or 403 from the Cloud API.
44 """
45
46
47 class CloudClient:
48 def __init__(
49 self,
50 host: str,
51 api_key: str,
52 httpx_settings: dict = None,
53 ) -> None:
54 httpx_settings = httpx_settings or dict()
55 httpx_settings.setdefault("headers", dict())
56 httpx_settings["headers"].setdefault("Authorization", f"Bearer {api_key}")
57
58 httpx_settings.setdefault("base_url", host)
59 self._client = httpx.AsyncClient(**httpx_settings)
60
61 async def api_healthcheck(self):
62 """
63 Attempts to connect to the Cloud API and raises the encountered exception if not
64 successful.
65
66 If successful, returns `None`.
67 """
68 with anyio.fail_after(10):
69 await self.read_workspaces()
70
71 async def read_workspaces(self) -> List[Workspace]:
72 return pydantic.parse_obj_as(List[Workspace], await self.get("/me/workspaces"))
73
74 async def read_worker_metadata(self) -> Dict[str, Any]:
75 return await self.get("collections/views/aggregate-worker-metadata")
76
77 async def __aenter__(self):
78 await self._client.__aenter__()
79 return self
80
81 async def __aexit__(self, *exc_info):
82 return await self._client.__aexit__(*exc_info)
83
84 def __enter__(self):
85 raise RuntimeError(
86 "The `CloudClient` must be entered with an async context. Use 'async "
87 "with CloudClient(...)' not 'with CloudClient(...)'"
88 )
89
90 def __exit__(self, *_):
91 assert False, "This should never be called but must be defined for __enter__"
92
93 async def get(self, route, **kwargs):
94 try:
95 res = await self._client.get(route, **kwargs)
96 res.raise_for_status()
97 except httpx.HTTPStatusError as exc:
98 if exc.response.status_code in (
99 status.HTTP_401_UNAUTHORIZED,
100 status.HTTP_403_FORBIDDEN,
101 ):
102 raise CloudUnauthorizedError
103 else:
104 raise exc
105
106 return res.json()
107
[end of src/prefect/client/cloud.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/client/cloud.py b/src/prefect/client/cloud.py
--- a/src/prefect/client/cloud.py
+++ b/src/prefect/client/cloud.py
@@ -8,6 +8,7 @@
import prefect.context
import prefect.settings
+from prefect.client.base import PrefectHttpxClient
from prefect.client.schemas import Workspace
from prefect.exceptions import PrefectException
from prefect.settings import PREFECT_API_KEY, PREFECT_CLOUD_API_URL
@@ -56,7 +57,7 @@
httpx_settings["headers"].setdefault("Authorization", f"Bearer {api_key}")
httpx_settings.setdefault("base_url", host)
- self._client = httpx.AsyncClient(**httpx_settings)
+ self._client = PrefectHttpxClient(**httpx_settings)
async def api_healthcheck(self):
"""
| {"golden_diff": "diff --git a/src/prefect/client/cloud.py b/src/prefect/client/cloud.py\n--- a/src/prefect/client/cloud.py\n+++ b/src/prefect/client/cloud.py\n@@ -8,6 +8,7 @@\n \n import prefect.context\n import prefect.settings\n+from prefect.client.base import PrefectHttpxClient\n from prefect.client.schemas import Workspace\n from prefect.exceptions import PrefectException\n from prefect.settings import PREFECT_API_KEY, PREFECT_CLOUD_API_URL\n@@ -56,7 +57,7 @@\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n \n httpx_settings.setdefault(\"base_url\", host)\n- self._client = httpx.AsyncClient(**httpx_settings)\n+ self._client = PrefectHttpxClient(**httpx_settings)\n \n async def api_healthcheck(self):\n \"\"\"\n", "issue": "PrefectHTTPStatusError: Client error '429 Too Many Requests' for url\n### First check\r\n\r\n- [X] I added a descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the Prefect documentation for this issue.\r\n- [X] I checked that this issue is related to Prefect and not one of its dependencies.\r\n\r\n### Bug summary\r\n\r\nWhile using `prefect` with `prefect-dask` I encountered a rate limit error. this shouldn't be happening as prefect client base should retry on those. I'm not sure why this is happening but this has risen at `2.10.10` and did not exist before\r\n\r\n### Reproduction\r\n\r\n```python3\r\nAny Flow with prefect-dask\r\n```\r\n\r\n\r\n### Error\r\n\r\n```python3\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.11/dist-packages/distributed/client.py\", line 1697, in _close\r\n await self.scheduler_comm.close()\r\nasyncio.exceptions.CancelledError\r\n01:00:08.452 | ERROR | Flow run 'psi5-alastria-x' - Crash detected! 
Execution was interrupted by an unexpected exception: PrefectHTTPStatusError: Client error '429 Too Many Requests' for url 'https://cloud-url/task_runs/'\r\nResponse: {'detail': 'Orchestration API rate limit reached'}\r\nFor more information check: https://httpstatuses.com/429\r\n```\r\n\r\n\r\n### Versions\r\n\r\n```Text\r\nVersion: 2.10.10\r\nAPI version: 0.8.4\r\nPython version: 3.11.2\r\nGit commit: 8159450b\r\nBuilt: Thu, May 18, 2023 3:43 PM\r\nOS/Arch: linux/x86_64\r\nProfile: default\r\nServer type: server\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "import re\nfrom typing import Any, Dict, List, Optional\n\nimport anyio\nimport httpx\nimport pydantic\nfrom fastapi import status\n\nimport prefect.context\nimport prefect.settings\nfrom prefect.client.schemas import Workspace\nfrom prefect.exceptions import PrefectException\nfrom prefect.settings import PREFECT_API_KEY, PREFECT_CLOUD_API_URL\n\n\ndef get_cloud_client(\n host: Optional[str] = None,\n api_key: Optional[str] = None,\n httpx_settings: Optional[dict] = None,\n infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n \"\"\"\n Needs a docstring.\n \"\"\"\n if httpx_settings is not None:\n httpx_settings = httpx_settings.copy()\n\n if infer_cloud_url is False:\n host = host or PREFECT_CLOUD_API_URL.value()\n else:\n configured_url = prefect.settings.PREFECT_API_URL.value()\n host = re.sub(r\"accounts/.{36}/workspaces/.{36}\\Z\", \"\", configured_url)\n\n return CloudClient(\n host=host,\n api_key=api_key or PREFECT_API_KEY.value(),\n httpx_settings=httpx_settings,\n )\n\n\nclass CloudUnauthorizedError(PrefectException):\n \"\"\"\n Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n \"\"\"\n\n\nclass CloudClient:\n def __init__(\n self,\n host: str,\n api_key: str,\n httpx_settings: dict = None,\n ) -> None:\n httpx_settings = httpx_settings or dict()\n httpx_settings.setdefault(\"headers\", dict())\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n httpx_settings.setdefault(\"base_url\", host)\n self._client = httpx.AsyncClient(**httpx_settings)\n\n async def api_healthcheck(self):\n \"\"\"\n Attempts to connect to the Cloud API and raises the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n with anyio.fail_after(10):\n await self.read_workspaces()\n\n async def read_workspaces(self) -> List[Workspace]:\n return pydantic.parse_obj_as(List[Workspace], await self.get(\"/me/workspaces\"))\n\n async def read_worker_metadata(self) -> Dict[str, Any]:\n return await self.get(\"collections/views/aggregate-worker-metadata\")\n\n async def __aenter__(self):\n await self._client.__aenter__()\n return self\n\n async def __aexit__(self, *exc_info):\n return await self._client.__aexit__(*exc_info)\n\n def __enter__(self):\n raise RuntimeError(\n \"The `CloudClient` must be entered with an async context. Use 'async \"\n \"with CloudClient(...)' not 'with CloudClient(...)'\"\n )\n\n def __exit__(self, *_):\n assert False, \"This should never be called but must be defined for __enter__\"\n\n async def get(self, route, **kwargs):\n try:\n res = await self._client.get(route, **kwargs)\n res.raise_for_status()\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code in (\n status.HTTP_401_UNAUTHORIZED,\n status.HTTP_403_FORBIDDEN,\n ):\n raise CloudUnauthorizedError\n else:\n raise exc\n\n return res.json()\n", "path": "src/prefect/client/cloud.py"}]} | 1,930 | 189 |
gh_patches_debug_11073 | rasdani/github-patches | git_diff | mozmeao__snippets-service-1398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Re-generate all bundles with Distribution changes after Timestamp
</issue>
<code>
[start of snippets/base/management/commands/generate_bundles.py]
1 import os
2 import json
3 import itertools
4 from datetime import datetime
5
6 import brotli
7 from product_details import product_details
8
9 from django.conf import settings
10 from django.core.files.base import ContentFile
11 from django.core.management.base import BaseCommand
12 from django.db.models import Q
13 from django.core.files.storage import default_storage
14
15 from snippets.base.models import DistributionBundle, Job
16
17
18 class Command(BaseCommand):
19 args = '(no args)'
20 help = 'Generate bundles'
21
22 def add_arguments(self, parser):
23 # Named (optional) arguments
24 parser.add_argument(
25 '--timestamp',
26 help='Parse Jobs last modified after <timestamp>',
27 )
28
29 def handle(self, *args, **options):
30 if not options['timestamp']:
31 self.stdout.write('Generating all bundles.')
32 total_jobs = Job.objects.all()
33 else:
34 self.stdout.write(
35 'Generating bundles with Jobs modified on or after {}'.format(options['timestamp'])
36 )
37 total_jobs = Job.objects.filter(snippet__modified__gte=options['timestamp'])
38
39 if not total_jobs:
40 self.stdout.write('Nothing to do…')
41 return
42
43 self.stdout.write('Processing bundles…')
44
45 combinations_to_process = set(
46 itertools.chain.from_iterable(
47 itertools.product(
48 job.channels,
49 job.snippet.locale.code.strip(',').split(',')
50 )
51 for job in total_jobs
52 )
53 )
54 distribution_bundles_to_process = DistributionBundle.objects.filter(
55 distributions__jobs__in=total_jobs
56 ).distinct().order_by('id')
57
58 for distribution_bundle in distribution_bundles_to_process:
59 distributions = distribution_bundle.distributions.all()
60
61 for channel, locale in combinations_to_process:
62 additional_jobs = []
63 if channel == 'nightly' and settings.NIGHTLY_INCLUDES_RELEASE:
64 additional_jobs = Job.objects.filter(
65 status=Job.PUBLISHED).filter(**{
66 'targets__on_release': True,
67 'distribution__in': distributions,
68 })
69
70 channel_jobs = Job.objects.filter(
71 status=Job.PUBLISHED).filter(
72 Q(**{
73 'targets__on_{}'.format(channel): True,
74 'distribution__in': distributions,
75 }))
76
77 all_jobs = Job.objects.filter(
78 Q(id__in=additional_jobs) | Q(id__in=channel_jobs)
79 )
80
81 locales_to_process = [
82 key.lower() for key in product_details.languages.keys()
83 if key.lower().startswith(locale)
84 ]
85
86 for locale_to_process in locales_to_process:
87 filename = 'Firefox/{channel}/{locale}/{distribution}.json'.format(
88 channel=channel,
89 locale=locale_to_process,
90 distribution=distribution_bundle.code_name,
91 )
92 filename = os.path.join(settings.MEDIA_BUNDLES_PREGEN_ROOT, filename)
93 full_locale = ',{},'.format(locale_to_process.lower())
94 splitted_locale = ',{},'.format(locale_to_process.lower().split('-', 1)[0])
95 bundle_jobs = all_jobs.filter(
96 Q(snippet__locale__code__contains=splitted_locale) |
97 Q(snippet__locale__code__contains=full_locale)).distinct()
98
99 # If DistributionBundle is not enabled, or if there are no
100 # Published Jobs for the channel / locale / distribution
101 # combination, delete the current bundle file if it exists.
102 if not distribution_bundle.enabled or not bundle_jobs.exists():
103 if default_storage.exists(filename):
104 self.stdout.write('Removing {}'.format(filename))
105 default_storage.delete(filename)
106 continue
107
108 data = []
109 channel_job_ids = list(channel_jobs.values_list('id', flat=True))
110 for job in bundle_jobs:
111 if job.id in channel_job_ids:
112 render = job.render()
113 else:
114 render = job.render(always_eval_to_false=True)
115 data.append(render)
116
117 bundle_content = json.dumps({
118 'messages': data,
119 'metadata': {
120 'generated_at': datetime.utcnow().isoformat(),
121 'number_of_snippets': len(data),
122 'channel': channel,
123 }
124 })
125
126 # Convert str to bytes.
127 if isinstance(bundle_content, str):
128 bundle_content = bundle_content.encode('utf-8')
129
130 if settings.BUNDLE_BROTLI_COMPRESS:
131 content_file = ContentFile(brotli.compress(bundle_content))
132 content_file.content_encoding = 'br'
133 else:
134 content_file = ContentFile(bundle_content)
135
136 default_storage.save(filename, content_file)
137 self.stdout.write(self.style.SUCCESS('Writing bundle {}'.format(filename)))
138
[end of snippets/base/management/commands/generate_bundles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/snippets/base/management/commands/generate_bundles.py b/snippets/base/management/commands/generate_bundles.py
--- a/snippets/base/management/commands/generate_bundles.py
+++ b/snippets/base/management/commands/generate_bundles.py
@@ -34,7 +34,10 @@
self.stdout.write(
'Generating bundles with Jobs modified on or after {}'.format(options['timestamp'])
)
- total_jobs = Job.objects.filter(snippet__modified__gte=options['timestamp'])
+ total_jobs = Job.objects.filter(
+ Q(snippet__modified__gte=options['timestamp']) |
+ Q(distribution__distributionbundle__modified__gte=options['timestamp'])
+ ).distinct()
if not total_jobs:
self.stdout.write('Nothing to do…')
| {"golden_diff": "diff --git a/snippets/base/management/commands/generate_bundles.py b/snippets/base/management/commands/generate_bundles.py\n--- a/snippets/base/management/commands/generate_bundles.py\n+++ b/snippets/base/management/commands/generate_bundles.py\n@@ -34,7 +34,10 @@\n self.stdout.write(\n 'Generating bundles with Jobs modified on or after {}'.format(options['timestamp'])\n )\n- total_jobs = Job.objects.filter(snippet__modified__gte=options['timestamp'])\n+ total_jobs = Job.objects.filter(\n+ Q(snippet__modified__gte=options['timestamp']) |\n+ Q(distribution__distributionbundle__modified__gte=options['timestamp'])\n+ ).distinct()\n \n if not total_jobs:\n self.stdout.write('Nothing to do\u2026')\n", "issue": "Re-generate all bundles with Distribution changes after Timestamp\n\n", "before_files": [{"content": "import os\nimport json\nimport itertools\nfrom datetime import datetime\n\nimport brotli\nfrom product_details import product_details\n\nfrom django.conf import settings\nfrom django.core.files.base import ContentFile\nfrom django.core.management.base import BaseCommand\nfrom django.db.models import Q\nfrom django.core.files.storage import default_storage\n\nfrom snippets.base.models import DistributionBundle, Job\n\n\nclass Command(BaseCommand):\n args = '(no args)'\n help = 'Generate bundles'\n\n def add_arguments(self, parser):\n # Named (optional) arguments\n parser.add_argument(\n '--timestamp',\n help='Parse Jobs last modified after <timestamp>',\n )\n\n def handle(self, *args, **options):\n if not options['timestamp']:\n self.stdout.write('Generating all bundles.')\n total_jobs = Job.objects.all()\n else:\n self.stdout.write(\n 'Generating bundles with Jobs modified on or after {}'.format(options['timestamp'])\n )\n total_jobs = Job.objects.filter(snippet__modified__gte=options['timestamp'])\n\n if not total_jobs:\n self.stdout.write('Nothing to do\u2026')\n return\n\n self.stdout.write('Processing bundles\u2026')\n\n combinations_to_process = set(\n itertools.chain.from_iterable(\n itertools.product(\n job.channels,\n job.snippet.locale.code.strip(',').split(',')\n )\n for job in total_jobs\n )\n )\n distribution_bundles_to_process = DistributionBundle.objects.filter(\n distributions__jobs__in=total_jobs\n ).distinct().order_by('id')\n\n for distribution_bundle in distribution_bundles_to_process:\n distributions = distribution_bundle.distributions.all()\n\n for channel, locale in combinations_to_process:\n additional_jobs = []\n if channel == 'nightly' and settings.NIGHTLY_INCLUDES_RELEASE:\n additional_jobs = Job.objects.filter(\n status=Job.PUBLISHED).filter(**{\n 'targets__on_release': True,\n 'distribution__in': distributions,\n })\n\n channel_jobs = Job.objects.filter(\n status=Job.PUBLISHED).filter(\n Q(**{\n 'targets__on_{}'.format(channel): True,\n 'distribution__in': distributions,\n }))\n\n all_jobs = Job.objects.filter(\n Q(id__in=additional_jobs) | Q(id__in=channel_jobs)\n )\n\n locales_to_process = [\n key.lower() for key in product_details.languages.keys()\n if key.lower().startswith(locale)\n ]\n\n for locale_to_process in locales_to_process:\n filename = 'Firefox/{channel}/{locale}/{distribution}.json'.format(\n channel=channel,\n locale=locale_to_process,\n distribution=distribution_bundle.code_name,\n )\n filename = os.path.join(settings.MEDIA_BUNDLES_PREGEN_ROOT, filename)\n full_locale = ',{},'.format(locale_to_process.lower())\n splitted_locale = ',{},'.format(locale_to_process.lower().split('-', 1)[0])\n bundle_jobs = all_jobs.filter(\n 
Q(snippet__locale__code__contains=splitted_locale) |\n Q(snippet__locale__code__contains=full_locale)).distinct()\n\n # If DistributionBundle is not enabled, or if there are no\n # Published Jobs for the channel / locale / distribution\n # combination, delete the current bundle file if it exists.\n if not distribution_bundle.enabled or not bundle_jobs.exists():\n if default_storage.exists(filename):\n self.stdout.write('Removing {}'.format(filename))\n default_storage.delete(filename)\n continue\n\n data = []\n channel_job_ids = list(channel_jobs.values_list('id', flat=True))\n for job in bundle_jobs:\n if job.id in channel_job_ids:\n render = job.render()\n else:\n render = job.render(always_eval_to_false=True)\n data.append(render)\n\n bundle_content = json.dumps({\n 'messages': data,\n 'metadata': {\n 'generated_at': datetime.utcnow().isoformat(),\n 'number_of_snippets': len(data),\n 'channel': channel,\n }\n })\n\n # Convert str to bytes.\n if isinstance(bundle_content, str):\n bundle_content = bundle_content.encode('utf-8')\n\n if settings.BUNDLE_BROTLI_COMPRESS:\n content_file = ContentFile(brotli.compress(bundle_content))\n content_file.content_encoding = 'br'\n else:\n content_file = ContentFile(bundle_content)\n\n default_storage.save(filename, content_file)\n self.stdout.write(self.style.SUCCESS('Writing bundle {}'.format(filename)))\n", "path": "snippets/base/management/commands/generate_bundles.py"}]} | 1,815 | 176 |
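The patch above widens the `--timestamp` filter so a Job is rebuilt when either its snippet or its distribution bundle was modified, deduplicated with `.distinct()`. A dependency-free sketch of that selection logic, using hypothetical dataclasses in place of the Django models:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class JobRecord:
    job_id: int
    snippet_modified: datetime
    bundle_modified: datetime


def jobs_to_rebuild(jobs: list[JobRecord], since: datetime) -> list[JobRecord]:
    """Select jobs whose snippet OR distribution bundle changed on/after `since`."""
    seen: set[int] = set()
    selected: list[JobRecord] = []
    for job in jobs:
        if job.snippet_modified >= since or job.bundle_modified >= since:
            if job.job_id not in seen:  # mirrors .distinct() in the ORM query
                seen.add(job.job_id)
                selected.append(job)
    return selected
```

Filtering only on `snippet_modified`, as the original command did, would miss jobs whose bundle definition changed while the snippet itself stayed untouched.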
gh_patches_debug_147 | rasdani/github-patches | git_diff | encode__httpx-868 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
0.12.0 PyPI wheel contains both public- and private-name modules
The following works in httpx 0.11.1:
```python
In [1]: import httpx
...: from httpx.exceptions import InvalidURL
In [2]: try:
...: httpx.get("foo.bar")
...: except InvalidURL:
...: pass
...:
```
In 0.12.0 the exception isn't caught:
```python
In [1]: import httpx
...: from httpx.exceptions import InvalidURL
In [2]: try:
...: httpx.get("foo.bar")
...: except InvalidURL:
...: pass
...:
---------------------------------------------------------------------------
InvalidURL Traceback (most recent call last)
<ipython-input-2-87135a63c42c> in <module>
1 try:
----> 2 httpx.get("foo.bar")
3 except InvalidURL:
4 pass
5
~/.venv/lib/python3.7/site-packages/httpx/_api.py in get(url, params, headers, cookies, auth, allow_redirects, cert, verify, timeout, trust_env)
166 verify=verify,
167 timeout=timeout,
--> 168 trust_env=trust_env,
169 )
170
~/.venv/lib/python3.7/site-packages/httpx/_api.py in request(method, url, params, data, files, json, headers, cookies, auth, timeout, allow_redirects, verify, cert, trust_env)
92 cookies=cookies,
93 auth=auth,
---> 94 allow_redirects=allow_redirects,
95 )
96
~/.venv/lib/python3.7/site-packages/httpx/_client.py in request(self, method, url, data, files, json, params, headers, cookies, auth, allow_redirects, timeout)
566 params=params,
567 headers=headers,
--> 568 cookies=cookies,
569 )
570 return self.send(
~/.venv/lib/python3.7/site-packages/httpx/_client.py in build_request(self, method, url, data, files, json, params, headers, cookies)
196 Build and return a request instance.
197 """
--> 198 url = self.merge_url(url)
199 headers = self.merge_headers(headers)
200 cookies = self.merge_cookies(cookies)
~/.venv/lib/python3.7/site-packages/httpx/_client.py in merge_url(self, url)
216 to create the URL used for the outgoing request.
217 """
--> 218 url = self.base_url.join(relative_url=url)
219 if url.scheme == "http" and hstspreload.in_hsts_preload(url.host):
220 port = None if url.port == 80 else url.port
~/.venv/lib/python3.7/site-packages/httpx/_models.py in join(self, relative_url)
227 """
228 if self.is_relative_url:
--> 229 return URL(relative_url)
230
231 # We drop any fragment portion, because RFC 3986 strictly
~/.venv/lib/python3.7/site-packages/httpx/_models.py in __init__(self, url, allow_relative, params)
104 if not allow_relative:
105 if not self.scheme:
--> 106 raise InvalidURL("No scheme included in URL.")
107 if not self.host:
108 raise InvalidURL("No host included in URL.")
InvalidURL: No scheme included in URL.
```
This works though:
```python
In [3]: import httpx
...: from httpx._exceptions import InvalidURL
In [4]: try:
...: httpx.get("foo.bar")
...: except InvalidURL:
...: pass
...:
```
</issue>
<code>
[start of httpx/__version__.py]
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.12.0"
4
[end of httpx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.12.0"
+__version__ = "0.12.1"
| {"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.12.0\"\n+__version__ = \"0.12.1\"\n", "issue": "0.12.0 PyPI wheel contains both public- and private-name modules\nThe following works in httpx 0.11.1:\r\n\r\n```python\r\nIn [1]: import httpx \r\n ...: from httpx.exceptions import InvalidURL \r\n\r\nIn [2]: try: \r\n ...: httpx.get(\"foo.bar\") \r\n ...: except InvalidURL: \r\n ...: pass \r\n ...: \r\n```\r\n\r\nIn 0.12.0 the exception isn't caught:\r\n\r\n```python\r\nIn [1]: import httpx \r\n ...: from httpx.exceptions import InvalidURL \r\n\r\nIn [2]: try: \r\n ...: httpx.get(\"foo.bar\") \r\n ...: except InvalidURL: \r\n ...: pass \r\n ...: \r\n---------------------------------------------------------------------------\r\nInvalidURL Traceback (most recent call last)\r\n<ipython-input-2-87135a63c42c> in <module>\r\n 1 try:\r\n----> 2 httpx.get(\"foo.bar\")\r\n 3 except InvalidURL:\r\n 4 pass\r\n 5 \r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_api.py in get(url, params, headers, cookies, auth, allow_redirects, cert, verify, timeout, trust_env)\r\n 166 verify=verify,\r\n 167 timeout=timeout,\r\n--> 168 trust_env=trust_env,\r\n 169 )\r\n 170 \r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_api.py in request(method, url, params, data, files, json, headers, cookies, auth, timeout, allow_redirects, verify, cert, trust_env)\r\n 92 cookies=cookies,\r\n 93 auth=auth,\r\n---> 94 allow_redirects=allow_redirects,\r\n 95 )\r\n 96 \r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_client.py in request(self, method, url, data, files, json, params, headers, cookies, auth, allow_redirects, timeout)\r\n 566 params=params,\r\n 567 headers=headers,\r\n--> 568 cookies=cookies,\r\n 569 )\r\n 570 return self.send(\r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_client.py in build_request(self, method, url, data, files, json, params, headers, cookies)\r\n 196 Build and return a request instance.\r\n 197 \"\"\"\r\n--> 198 url = self.merge_url(url)\r\n 199 headers = self.merge_headers(headers)\r\n 200 cookies = self.merge_cookies(cookies)\r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_client.py in merge_url(self, url)\r\n 216 to create the URL used for the outgoing request.\r\n 217 \"\"\"\r\n--> 218 url = self.base_url.join(relative_url=url)\r\n 219 if url.scheme == \"http\" and hstspreload.in_hsts_preload(url.host):\r\n 220 port = None if url.port == 80 else url.port\r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_models.py in join(self, relative_url)\r\n 227 \"\"\"\r\n 228 if self.is_relative_url:\r\n--> 229 return URL(relative_url)\r\n 230 \r\n 231 # We drop any fragment portion, because RFC 3986 strictly\r\n\r\n~/.venv/lib/python3.7/site-packages/httpx/_models.py in __init__(self, url, allow_relative, params)\r\n 104 if not allow_relative:\r\n 105 if not self.scheme:\r\n--> 106 raise InvalidURL(\"No scheme included in URL.\")\r\n 107 if not self.host:\r\n 108 raise InvalidURL(\"No host included in URL.\")\r\n\r\nInvalidURL: No scheme included in URL.\r\n```\r\n\r\nThis works though:\r\n\r\n```python\r\nIn [3]: import httpx \r\n ...: from httpx._exceptions import InvalidURL \r\n\r\nIn [4]: try: \r\n ...: httpx.get(\"foo.bar\") \r\n ...: except InvalidURL: \r\n ...: pass \r\n ...: \r\n```\n", "before_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP 
client, for Python 3.\"\n__version__ = \"0.12.0\"\n", "path": "httpx/__version__.py"}]} | 1,550 | 97 |
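The underlying problem in this record was a wheel that shipped both the public and the underscore-prefixed module (for example `httpx/exceptions.py` alongside `httpx/_exceptions.py`), so the class imported from the public path was a different object from the one raised internally and the `except` clause never matched. A small check one could run against a built wheel to catch such pairs — a hypothetical helper, not part of httpx:

```python
import zipfile


def duplicated_public_private_modules(wheel_path: str) -> list[tuple[str, str]]:
    """Return (public, private) module pairs that both exist inside a wheel."""
    with zipfile.ZipFile(wheel_path) as wheel:
        names = set(wheel.namelist())
    pairs = []
    for name in sorted(names):
        package, _, module = name.rpartition("/")
        if module.endswith(".py") and not module.startswith("_"):
            private = f"{package}/_{module}"
            if private in names:
                pairs.append((name, private))
    return pairs
```

An empty result means the wheel exposes each module under a single path, which is what the 0.12.1 rebuild restored.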
gh_patches_debug_6543 | rasdani/github-patches | git_diff | saleor__saleor-10987 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to filter customers with 0 orders
### **Steps to reproduce the problem:**
```graphql
query Customers{
customers(filter: {numberOfOrders: {lte: 0, gte: 0}}, first: 10){
edges{
node{
id
email
orders{
totalCount
}
}
}
totalCount
}
}
```
### **Current result:**
Backend returns all customers instead of those with 0 orders
### **Expected result:**
Return all customers with 0 orders
### **Screenshots:**
### **System information:**
### **Environment:**
master.staging core v3.8.0-a
### **Additional info/links:**
https://master.staging.saleor.cloud/dashboard/customers/?asc=true&sort=name&numberOfOrdersFrom=0&numberOfOrdersTo=0
</issue>
<code>
[start of saleor/graphql/utils/filters.py]
1 from django.utils import timezone
2
3 from ..core.enums import ReportingPeriod
4
5
6 def reporting_period_to_date(period):
7 now = timezone.now()
8 if period == ReportingPeriod.TODAY:
9 start_date = now.replace(hour=0, minute=0, second=0, microsecond=0)
10 elif period == ReportingPeriod.THIS_MONTH:
11 start_date = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
12 else:
13 raise ValueError("Unknown period: %s" % period)
14 return start_date
15
16
17 def filter_by_period(queryset, period, field_name):
18 start_date = reporting_period_to_date(period)
19 return queryset.filter(**{"%s__gte" % field_name: start_date})
20
21
22 def filter_range_field(qs, field, value):
23 gte, lte = value.get("gte"), value.get("lte")
24 if gte:
25 lookup = {f"{field}__gte": gte}
26 qs = qs.filter(**lookup)
27 if lte:
28 lookup = {f"{field}__lte": lte}
29 qs = qs.filter(**lookup)
30 return qs
31
32
33 def filter_by_id(object_type):
34 from . import resolve_global_ids_to_primary_keys
35
36 def inner(qs, _, value):
37 if not value:
38 return qs
39 _, obj_pks = resolve_global_ids_to_primary_keys(value, object_type)
40 return qs.filter(id__in=obj_pks)
41
42 return inner
43
[end of saleor/graphql/utils/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/graphql/utils/filters.py b/saleor/graphql/utils/filters.py
--- a/saleor/graphql/utils/filters.py
+++ b/saleor/graphql/utils/filters.py
@@ -21,10 +21,10 @@
def filter_range_field(qs, field, value):
gte, lte = value.get("gte"), value.get("lte")
- if gte:
+ if gte is not None:
lookup = {f"{field}__gte": gte}
qs = qs.filter(**lookup)
- if lte:
+ if lte is not None:
lookup = {f"{field}__lte": lte}
qs = qs.filter(**lookup)
return qs
| {"golden_diff": "diff --git a/saleor/graphql/utils/filters.py b/saleor/graphql/utils/filters.py\n--- a/saleor/graphql/utils/filters.py\n+++ b/saleor/graphql/utils/filters.py\n@@ -21,10 +21,10 @@\n \n def filter_range_field(qs, field, value):\n gte, lte = value.get(\"gte\"), value.get(\"lte\")\n- if gte:\n+ if gte is not None:\n lookup = {f\"{field}__gte\": gte}\n qs = qs.filter(**lookup)\n- if lte:\n+ if lte is not None:\n lookup = {f\"{field}__lte\": lte}\n qs = qs.filter(**lookup)\n return qs\n", "issue": "Unable to filter customers with 0 orders\n### **Steps to reproduce the problem:**\n```graphql\nquery Customers{\n customers(filter: {numberOfOrders: {lte: 0, gte: 0}}, first: 10){\n edges{\n node{\n id\n email\n orders{\n totalCount\n }\n }\n }\n totalCount\n }\n}\n```\n\n### **Current result:**\nBackend returns all customers instead of those with 0 orders\n\n### **Expected result:**\nReturn all customers with 0 orders\n\n### **Screenshots:**\n\n### **System information:**\n\n### **Environment:**\nmaster.staging core v3.8.0-a\n\n### **Additional info/links:**\nhttps://master.staging.saleor.cloud/dashboard/customers/?asc=true&sort=name&numberOfOrdersFrom=0&numberOfOrdersTo=0\n", "before_files": [{"content": "from django.utils import timezone\n\nfrom ..core.enums import ReportingPeriod\n\n\ndef reporting_period_to_date(period):\n now = timezone.now()\n if period == ReportingPeriod.TODAY:\n start_date = now.replace(hour=0, minute=0, second=0, microsecond=0)\n elif period == ReportingPeriod.THIS_MONTH:\n start_date = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)\n else:\n raise ValueError(\"Unknown period: %s\" % period)\n return start_date\n\n\ndef filter_by_period(queryset, period, field_name):\n start_date = reporting_period_to_date(period)\n return queryset.filter(**{\"%s__gte\" % field_name: start_date})\n\n\ndef filter_range_field(qs, field, value):\n gte, lte = value.get(\"gte\"), value.get(\"lte\")\n if gte:\n lookup = {f\"{field}__gte\": gte}\n qs = qs.filter(**lookup)\n if lte:\n lookup = {f\"{field}__lte\": lte}\n qs = qs.filter(**lookup)\n return qs\n\n\ndef filter_by_id(object_type):\n from . import resolve_global_ids_to_primary_keys\n\n def inner(qs, _, value):\n if not value:\n return qs\n _, obj_pks = resolve_global_ids_to_primary_keys(value, object_type)\n return qs.filter(id__in=obj_pks)\n\n return inner\n", "path": "saleor/graphql/utils/filters.py"}]} | 1,122 | 166 |
gh_patches_debug_29715 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-4090 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Detecting and alerting of duplication keys/components/entries in YAML file
### Is your feature request related to a problem? Please describe
it was found in release 1.3.11 , a PR to update [manifest](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.11/opensearch-1.3.11.yml) has duplicated components name.
It would cause the resource wasted on CI to rebuild the duplicated components
### Describe the solution you'd like
We want to have a check to detect if there is any duplication entries based on keys/components/names and probably fail the GitHub check
### Describe alternatives you've considered
Manually check for duplicate values
### Acceptance Criteria
* The manifest check should fail at CI level for components with duplicate components.name values in opensearch and opensearch-dashboard as well as test manifests. See what are [manifests](https://github.com/opensearch-project/opensearch-build/wiki/Building-an-OpenSearch-and-OpenSearch-Dashboards-Distribution#what-are-manifests)
</issue>
<code>
[start of src/ci_workflow/ci_manifests.py]
1 # Copyright OpenSearch Contributors
2 # SPDX-License-Identifier: Apache-2.0
3 #
4 # The OpenSearch Contributors require contributions made to
5 # this file be licensed under the Apache-2.0 license or a
6 # compatible open source license.
7
8
9 import re
10 from io import TextIOWrapper
11 from typing import Type, Union
12
13 from ci_workflow.ci_args import CiArgs
14 from ci_workflow.ci_input_manifest import CiInputManifest
15 from ci_workflow.ci_test_manifest import CiTestManifest
16
17
18 class CiManifests:
19 @staticmethod
20 def __klass(filename: str) -> Union[Type[CiTestManifest], Type[CiInputManifest]]:
21 if re.search("-test.yml$", filename):
22 return CiTestManifest
23 else:
24 return CiInputManifest
25
26 @classmethod
27 def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:
28 return cls.__klass(file.name)(file, args)
29
[end of src/ci_workflow/ci_manifests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ci_workflow/ci_manifests.py b/src/ci_workflow/ci_manifests.py
--- a/src/ci_workflow/ci_manifests.py
+++ b/src/ci_workflow/ci_manifests.py
@@ -7,9 +7,12 @@
import re
+from collections import Counter
from io import TextIOWrapper
from typing import Type, Union
+import yaml
+
from ci_workflow.ci_args import CiArgs
from ci_workflow.ci_input_manifest import CiInputManifest
from ci_workflow.ci_test_manifest import CiTestManifest
@@ -23,6 +26,29 @@
else:
return CiInputManifest
+ @staticmethod
+ def __get_duplicate_component_names(count_component_names: Counter) -> list:
+ duplicate_component_names = []
+ for component_name, count in count_component_names.items():
+ if count > 1:
+ duplicate_component_names.append(component_name)
+ return duplicate_component_names
+
+ @staticmethod
+ def __check_duplicate_component_names(file: TextIOWrapper) -> None:
+ yaml_dict = yaml.safe_load(file)
+ component_names = []
+ for component in yaml_dict['components']:
+ component_names.append(component['name'])
+ count_component_names = Counter(component_names)
+
+ if set(count_component_names.values()) != set([1]):
+ duplicate_component_names = CiManifests.__get_duplicate_component_names(count_component_names)
+ duplicate_component_names_string = ', '.join(duplicate_component_names)
+ raise ValueError(f"Found {duplicate_component_names_string} as a duplicate component(s) in manifest {file.name}. ")
+ file.seek(0)
+
@classmethod
def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:
+ cls.__check_duplicate_component_names(file)
return cls.__klass(file.name)(file, args)
| {"golden_diff": "diff --git a/src/ci_workflow/ci_manifests.py b/src/ci_workflow/ci_manifests.py\n--- a/src/ci_workflow/ci_manifests.py\n+++ b/src/ci_workflow/ci_manifests.py\n@@ -7,9 +7,12 @@\n \n \n import re\n+from collections import Counter\n from io import TextIOWrapper\n from typing import Type, Union\n \n+import yaml\n+\n from ci_workflow.ci_args import CiArgs\n from ci_workflow.ci_input_manifest import CiInputManifest\n from ci_workflow.ci_test_manifest import CiTestManifest\n@@ -23,6 +26,29 @@\n else:\n return CiInputManifest\n \n+ @staticmethod\n+ def __get_duplicate_component_names(count_component_names: Counter) -> list:\n+ duplicate_component_names = []\n+ for component_name, count in count_component_names.items():\n+ if count > 1:\n+ duplicate_component_names.append(component_name)\n+ return duplicate_component_names\n+\n+ @staticmethod\n+ def __check_duplicate_component_names(file: TextIOWrapper) -> None:\n+ yaml_dict = yaml.safe_load(file)\n+ component_names = []\n+ for component in yaml_dict['components']:\n+ component_names.append(component['name'])\n+ count_component_names = Counter(component_names)\n+\n+ if set(count_component_names.values()) != set([1]):\n+ duplicate_component_names = CiManifests.__get_duplicate_component_names(count_component_names)\n+ duplicate_component_names_string = ', '.join(duplicate_component_names)\n+ raise ValueError(f\"Found {duplicate_component_names_string} as a duplicate component(s) in manifest {file.name}. \")\n+ file.seek(0)\n+\n @classmethod\n def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:\n+ cls.__check_duplicate_component_names(file)\n return cls.__klass(file.name)(file, args)\n", "issue": "Detecting and alerting of duplication keys/components/entries in YAML file\n### Is your feature request related to a problem? Please describe\r\n\r\nit was found in release 1.3.11 , a PR to update [manifest](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.11/opensearch-1.3.11.yml) has duplicated components name.\r\nIt would cause the resource wasted on CI to rebuild the duplicated components \r\n\r\n### Describe the solution you'd like\r\n\r\nWe want to have a check to detect if there is any duplication entries based on keys/components/names and probably fail the GitHub check\r\n\r\n### Describe alternatives you've considered\r\n\r\nManually check for duplicate values\r\n\r\n### Acceptance Criteria\r\n* The manifest check should fail at CI level for components with duplicate components.name values in opensearch and opensearch-dashboard as well as test manifests. 
See what are [manifests](https://github.com/opensearch-project/opensearch-build/wiki/Building-an-OpenSearch-and-OpenSearch-Dashboards-Distribution#what-are-manifests)\n", "before_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\n\nimport re\nfrom io import TextIOWrapper\nfrom typing import Type, Union\n\nfrom ci_workflow.ci_args import CiArgs\nfrom ci_workflow.ci_input_manifest import CiInputManifest\nfrom ci_workflow.ci_test_manifest import CiTestManifest\n\n\nclass CiManifests:\n @staticmethod\n def __klass(filename: str) -> Union[Type[CiTestManifest], Type[CiInputManifest]]:\n if re.search(\"-test.yml$\", filename):\n return CiTestManifest\n else:\n return CiInputManifest\n\n @classmethod\n def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:\n return cls.__klass(file.name)(file, args)\n", "path": "src/ci_workflow/ci_manifests.py"}]} | 1,031 | 415 |
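The core of the fix above is counting component names with `collections.Counter` and failing when any name appears more than once. A self-contained sketch of that check, assuming PyYAML is installed (the sample manifest string is made up for illustration):

```python
from collections import Counter

import yaml  # assumes PyYAML is available

MANIFEST = """
components:
  - name: OpenSearch
  - name: common-utils
  - name: common-utils
"""


def duplicate_component_names(manifest_text: str) -> list[str]:
    """Return component names that appear more than once in a manifest."""
    manifest = yaml.safe_load(manifest_text)
    counts = Counter(component["name"] for component in manifest["components"])
    return [name for name, count in counts.items() if count > 1]


assert duplicate_component_names(MANIFEST) == ["common-utils"]
```

Raising on a non-empty result at CI time, as the patch does, stops the build before any duplicated component is rebuilt.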
gh_patches_debug_47463 | rasdani/github-patches | git_diff | bokeh__bokeh-5968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Transform docstring ends abruptly
```
Bases: bokeh.model.Model
Base class for Transform models that represent a computation to be carried out on the client-side.
JavaScript implementations should implement the following methods:
```
<img width="879" alt="screen shot 2017-02-17 at 2 43 31 am" src="https://cloud.githubusercontent.com/assets/1796208/23058499/e52042e8-f4ba-11e6-8f8a-596498e00084.png">
Should add the methods that need to be implemented.
</issue>
<code>
[start of bokeh/models/transforms.py]
1 '''
2
3 '''
4 from __future__ import absolute_import
5
6 from ..core.enums import StepMode, JitterRandomDistribution
7 from ..core.has_props import abstract
8 from ..core.properties import Bool, Either, Enum, Float, Instance, Seq, String
9 from ..model import Model
10
11 from .sources import ColumnarDataSource
12
13 @abstract
14 class Transform(Model):
15 ''' Base class for ``Transform`` models that represent a computation
16 to be carried out on the client-side.
17
18 JavaScript implementations should implement the following methods:
19
20 .. code-block: coffeescript
21
22 compute: (x) ->
23 # compute the transform of a single value
24
25 v_compute: (xs) ->
26 # compute the transform of an array of values
27
28 '''
29 pass
30
31
32 class Jitter(Transform):
33 ''' Apply either a uniform or normally sampled random jitter to data.
34
35 '''
36
37
38 mean = Float(default=0, help="""
39 The central value for the random sample
40 """)
41
42 width = Float(default=1, help="""
43 The width (absolute for uniform distribution and sigma for the normal distribution) of the random sample.
44 """)
45
46 distribution = Enum(JitterRandomDistribution, default='uniform', help="""
47 The random distribution upon which to pull the random scatter
48 """)
49
50 @abstract
51 class Interpolator(Transform):
52 ''' Base class for interpolator transforms.
53
54 Interpolators return the value of a function which has been evaluated
55 between specified (x, y) pairs of data. As an example, if two control
56 point pairs were provided to the interpolator, a linear interpolaction
57 at a specific value of 'x' would result in the value of 'y' which existed
58 on the line conneting the two control points.
59
60 The control point pairs for the interpolators can be specified through either
61
62 * A literal sequence of values:
63
64 .. code-block: python
65
66 interp = Interpolator(x=[1, 2, 3, 4, 5], y=[2, 5, 10, 12, 16])
67
68 * or a pair of columns defined in a `ColumnDataSource` object:
69
70 .. code-block: python
71
72 interp = Interpolator(x="year", y="earnings", data=jewlery_prices))
73
74
75 This is the base class and is not intended to end use. Please see the
76 documentation for the final derived classes (Jitter, LineraInterpolator,
77 StepInterpolator) for mor information on their specific methods of
78 interpolation.
79
80 '''
81 x = Either(String, Seq(Float), help="""
82 Independant coordiante denoting the location of a point.
83 """)
84
85 y = Either(String, Seq(Float), help="""
86 Dependant coordinate denoting the value of a point at a location.
87 """)
88
89 data = Instance(ColumnarDataSource, help="""
90 Data which defines the source for the named columns if a string is passed to either the ``x`` or ``y`` parameters.
91 """)
92
93 clip = Bool(True, help="""
94 Determine if the interpolation should clip the result to include only values inside its predefined range.
95 If this is set to False, it will return the most value of the closest point.
96 """)
97
98 # Define an initialization routine to do some cross checking of input values
99 def __init__(self, **kwargs):
100 super(Interpolator, self).__init__(**kwargs)
101
102
103 class LinearInterpolator(Interpolator):
104 ''' Compute a linear interpolation between the control points provided through
105 the ``x``, ``y``, and ``data`` parameters.
106
107 '''
108 pass
109
110
111 class StepInterpolator(Interpolator):
112 ''' Compute a step-wise interpolation between the points provided through
113 the ``x``, ``y``, and ``data`` parameters.
114
115 '''
116
117 mode = Enum(StepMode, default="after", help="""
118 Adjust the behavior of the returned value in relation to the control points. The parameter can assume one of three values:
119
120 * ``after`` (default): Assume the y-value associated with the nearest x-value which is less than or equal to the point to transform.
121 * ``before``: Assume the y-value associated with the nearest x-value which is greater than the point to transform.
122 * ``center``: Assume the y-value associated with the nearest x-value to the point to transform.
123 """)
124
[end of bokeh/models/transforms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/models/transforms.py b/bokeh/models/transforms.py
--- a/bokeh/models/transforms.py
+++ b/bokeh/models/transforms.py
@@ -19,11 +19,11 @@
.. code-block: coffeescript
- compute: (x) ->
- # compute the transform of a single value
+ compute: (x) ->
+ # compute the transform of a single value
- v_compute: (xs) ->
- # compute the transform of an array of values
+ v_compute: (xs) ->
+ # compute the transform of an array of values
'''
pass
| {"golden_diff": "diff --git a/bokeh/models/transforms.py b/bokeh/models/transforms.py\n--- a/bokeh/models/transforms.py\n+++ b/bokeh/models/transforms.py\n@@ -19,11 +19,11 @@\n \n .. code-block: coffeescript\n \n- compute: (x) ->\n- # compute the transform of a single value\n+ compute: (x) ->\n+ # compute the transform of a single value\n \n- v_compute: (xs) ->\n- # compute the transform of an array of values\n+ v_compute: (xs) ->\n+ # compute the transform of an array of values\n \n '''\n pass\n", "issue": "Transform docstring ends abruptly\n```\r\n Bases: bokeh.model.Model\r\n Base class for Transform models that represent a computation to be carried out on the client-side.\r\n JavaScript implementations should implement the following methods:\r\n```\r\n<img width=\"879\" alt=\"screen shot 2017-02-17 at 2 43 31 am\" src=\"https://cloud.githubusercontent.com/assets/1796208/23058499/e52042e8-f4ba-11e6-8f8a-596498e00084.png\">\r\n\r\nShould add the methods that need to be implemented.\r\n\n", "before_files": [{"content": "'''\n\n'''\nfrom __future__ import absolute_import\n\nfrom ..core.enums import StepMode, JitterRandomDistribution\nfrom ..core.has_props import abstract\nfrom ..core.properties import Bool, Either, Enum, Float, Instance, Seq, String\nfrom ..model import Model\n\nfrom .sources import ColumnarDataSource\n\n@abstract\nclass Transform(Model):\n ''' Base class for ``Transform`` models that represent a computation\n to be carried out on the client-side.\n\n JavaScript implementations should implement the following methods:\n\n .. code-block: coffeescript\n\n compute: (x) ->\n # compute the transform of a single value\n\n v_compute: (xs) ->\n # compute the transform of an array of values\n\n '''\n pass\n\n\nclass Jitter(Transform):\n ''' Apply either a uniform or normally sampled random jitter to data.\n\n '''\n\n\n mean = Float(default=0, help=\"\"\"\n The central value for the random sample\n \"\"\")\n\n width = Float(default=1, help=\"\"\"\n The width (absolute for uniform distribution and sigma for the normal distribution) of the random sample.\n \"\"\")\n\n distribution = Enum(JitterRandomDistribution, default='uniform', help=\"\"\"\n The random distribution upon which to pull the random scatter\n \"\"\")\n\n@abstract\nclass Interpolator(Transform):\n ''' Base class for interpolator transforms.\n\n Interpolators return the value of a function which has been evaluated\n between specified (x, y) pairs of data. As an example, if two control\n point pairs were provided to the interpolator, a linear interpolaction\n at a specific value of 'x' would result in the value of 'y' which existed\n on the line conneting the two control points.\n\n The control point pairs for the interpolators can be specified through either\n\n * A literal sequence of values:\n\n .. code-block: python\n\n interp = Interpolator(x=[1, 2, 3, 4, 5], y=[2, 5, 10, 12, 16])\n\n * or a pair of columns defined in a `ColumnDataSource` object:\n\n .. code-block: python\n\n interp = Interpolator(x=\"year\", y=\"earnings\", data=jewlery_prices))\n\n\n This is the base class and is not intended to end use. 
Please see the\n documentation for the final derived classes (Jitter, LineraInterpolator,\n StepInterpolator) for mor information on their specific methods of\n interpolation.\n\n '''\n x = Either(String, Seq(Float), help=\"\"\"\n Independant coordiante denoting the location of a point.\n \"\"\")\n\n y = Either(String, Seq(Float), help=\"\"\"\n Dependant coordinate denoting the value of a point at a location.\n \"\"\")\n\n data = Instance(ColumnarDataSource, help=\"\"\"\n Data which defines the source for the named columns if a string is passed to either the ``x`` or ``y`` parameters.\n \"\"\")\n\n clip = Bool(True, help=\"\"\"\n Determine if the interpolation should clip the result to include only values inside its predefined range.\n If this is set to False, it will return the most value of the closest point.\n \"\"\")\n\n # Define an initialization routine to do some cross checking of input values\n def __init__(self, **kwargs):\n super(Interpolator, self).__init__(**kwargs)\n\n\nclass LinearInterpolator(Interpolator):\n ''' Compute a linear interpolation between the control points provided through\n the ``x``, ``y``, and ``data`` parameters.\n\n '''\n pass\n\n\nclass StepInterpolator(Interpolator):\n ''' Compute a step-wise interpolation between the points provided through\n the ``x``, ``y``, and ``data`` parameters.\n\n '''\n\n mode = Enum(StepMode, default=\"after\", help=\"\"\"\n Adjust the behavior of the returned value in relation to the control points. The parameter can assume one of three values:\n\n * ``after`` (default): Assume the y-value associated with the nearest x-value which is less than or equal to the point to transform.\n * ``before``: Assume the y-value associated with the nearest x-value which is greater than the point to transform.\n * ``center``: Assume the y-value associated with the nearest x-value to the point to transform.\n \"\"\")\n", "path": "bokeh/models/transforms.py"}]} | 1,903 | 150 |
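The symptom in this record comes from how Sphinx renders reStructuredText inside a docstring: a `code-block` directive only keeps the lines that stay consistently indented beneath it, so inconsistent indentation ends the literal block early and the rendered text stops after "should implement the following methods:". A generic sketch of a well-formed directive in a docstring (illustrative layout under standard RST rules, not the actual bokeh source):

```python
class Example:
    """Summary line.

    JavaScript implementations should implement the following methods:

    .. code-block:: coffeescript

        compute: (x) ->
            # compute the transform of a single value

        v_compute: (xs) ->
            # compute the transform of an array of values
    """
```

The directive line takes two colons and the code lines must all sit at the same deeper indentation level; any line that dedents back to the directive's level terminates the block.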
gh_patches_debug_14281 | rasdani/github-patches | git_diff | litestar-org__litestar-3179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: CORS Middleware not setting all headers as per spec
### Description
Right now, there's only a handful of headers that are only being set for the preflight request. They must be set for both the preflight and actual request.
https://fetch.spec.whatwg.org/#http-responses
Only `Access-Control-Allow-Origin` is being set here.
https://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/middleware/cors.py#L61-L73
Only `Access-Control-Allow-Credentials` and `Access-Control-Expose-Headers` get set here, and this is what the above code uses to update headers
https://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/config/cors.py#L123-L136
This still doesn't account for:
- Access-Control-Allow-Methods
- Access-Control-Allow-Headers
which are only set on preflight, but should also be set to the actual request.
### Litestar Version
2.2.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/3178">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of litestar/middleware/cors.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from litestar.datastructures import Headers, MutableScopeHeaders
6 from litestar.enums import ScopeType
7 from litestar.middleware.base import AbstractMiddleware
8
9 __all__ = ("CORSMiddleware",)
10
11
12 if TYPE_CHECKING:
13 from litestar.config.cors import CORSConfig
14 from litestar.types import ASGIApp, Message, Receive, Scope, Send
15
16
17 class CORSMiddleware(AbstractMiddleware):
18 """CORS Middleware."""
19
20 __slots__ = ("config",)
21
22 def __init__(self, app: ASGIApp, config: CORSConfig) -> None:
23 """Middleware that adds CORS validation to the application.
24
25 Args:
26 app: The ``next`` ASGI app to call.
27 config: An instance of :class:`CORSConfig <litestar.config.cors.CORSConfig>`
28 """
29 super().__init__(app=app, scopes={ScopeType.HTTP})
30 self.config = config
31
32 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
33 """ASGI callable.
34
35 Args:
36 scope: The ASGI connection scope.
37 receive: The ASGI receive function.
38 send: The ASGI send function.
39
40 Returns:
41 None
42 """
43 headers = Headers.from_scope(scope=scope)
44 if origin := headers.get("origin"):
45 await self.app(scope, receive, self.send_wrapper(send=send, origin=origin, has_cookie="cookie" in headers))
46 else:
47 await self.app(scope, receive, send)
48
49 def send_wrapper(self, send: Send, origin: str, has_cookie: bool) -> Send:
50 """Wrap ``send`` to ensure that state is not disconnected.
51
52 Args:
53 has_cookie: Boolean flag dictating if the connection has a cookie set.
54 origin: The value of the ``Origin`` header.
55 send: The ASGI send function.
56
57 Returns:
58 An ASGI send function.
59 """
60
61 async def wrapped_send(message: Message) -> None:
62 if message["type"] == "http.response.start":
63 message.setdefault("headers", [])
64 headers = MutableScopeHeaders.from_message(message=message)
65 headers.update(self.config.simple_headers)
66
67 if (self.config.is_allow_all_origins and has_cookie) or (
68 not self.config.is_allow_all_origins and self.config.is_origin_allowed(origin=origin)
69 ):
70 headers["Access-Control-Allow-Origin"] = origin
71 headers["Vary"] = "Origin"
72
73 await send(message)
74
75 return wrapped_send
76
[end of litestar/middleware/cors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/middleware/cors.py b/litestar/middleware/cors.py
--- a/litestar/middleware/cors.py
+++ b/litestar/middleware/cors.py
@@ -70,6 +70,15 @@
headers["Access-Control-Allow-Origin"] = origin
headers["Vary"] = "Origin"
+ # We don't want to overwrite this for preflight requests.
+ allow_headers = headers.get("Access-Control-Allow-Headers")
+ if not allow_headers and self.config.allow_headers:
+ headers["Access-Control-Allow-Headers"] = ", ".join(sorted(set(self.config.allow_headers)))
+
+ allow_methods = headers.get("Access-Control-Allow-Methods")
+ if not allow_methods and self.config.allow_methods:
+ headers["Access-Control-Allow-Methods"] = ", ".join(sorted(set(self.config.allow_methods)))
+
await send(message)
return wrapped_send
| {"golden_diff": "diff --git a/litestar/middleware/cors.py b/litestar/middleware/cors.py\n--- a/litestar/middleware/cors.py\n+++ b/litestar/middleware/cors.py\n@@ -70,6 +70,15 @@\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers[\"Vary\"] = \"Origin\"\n \n+ # We don't want to overwrite this for preflight requests.\n+ allow_headers = headers.get(\"Access-Control-Allow-Headers\")\n+ if not allow_headers and self.config.allow_headers:\n+ headers[\"Access-Control-Allow-Headers\"] = \", \".join(sorted(set(self.config.allow_headers)))\n+\n+ allow_methods = headers.get(\"Access-Control-Allow-Methods\")\n+ if not allow_methods and self.config.allow_methods:\n+ headers[\"Access-Control-Allow-Methods\"] = \", \".join(sorted(set(self.config.allow_methods)))\n+\n await send(message)\n \n return wrapped_send\n", "issue": "Bug: CORS Middleware not setting all headers as per spec\n### Description\r\n\r\nRight now, there's only a handful of headers that are only being set for the preflight request. They must be set for both the preflight and actual request. \r\nhttps://fetch.spec.whatwg.org/#http-responses\r\n\r\nOnly `Access-Control-Allow-Origin` is being set here.\r\nhttps://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/middleware/cors.py#L61-L73\r\n\r\nOnly `Access-Control-Allow-Credentials` and `Access-Control-Expose-Headers` get set here, and this is what the above code uses to update headers\r\nhttps://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/config/cors.py#L123-L136\r\n\r\nThis still doesn't account for:\r\n- Access-Control-Allow-Methods\r\n- Access-Control-Allow-Headers\r\n\r\nwhich are only set on preflight, but should also be set to the actual request.\r\n\r\n### Litestar Version\r\n\r\n2.2.1\r\n\r\n### Platform\r\n\r\n- [X] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/3178\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom litestar.datastructures import Headers, MutableScopeHeaders\nfrom litestar.enums import ScopeType\nfrom litestar.middleware.base import AbstractMiddleware\n\n__all__ = (\"CORSMiddleware\",)\n\n\nif TYPE_CHECKING:\n from litestar.config.cors import CORSConfig\n from litestar.types import ASGIApp, Message, Receive, Scope, Send\n\n\nclass CORSMiddleware(AbstractMiddleware):\n \"\"\"CORS 
Middleware.\"\"\"\n\n __slots__ = (\"config\",)\n\n def __init__(self, app: ASGIApp, config: CORSConfig) -> None:\n \"\"\"Middleware that adds CORS validation to the application.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: An instance of :class:`CORSConfig <litestar.config.cors.CORSConfig>`\n \"\"\"\n super().__init__(app=app, scopes={ScopeType.HTTP})\n self.config = config\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n headers = Headers.from_scope(scope=scope)\n if origin := headers.get(\"origin\"):\n await self.app(scope, receive, self.send_wrapper(send=send, origin=origin, has_cookie=\"cookie\" in headers))\n else:\n await self.app(scope, receive, send)\n\n def send_wrapper(self, send: Send, origin: str, has_cookie: bool) -> Send:\n \"\"\"Wrap ``send`` to ensure that state is not disconnected.\n\n Args:\n has_cookie: Boolean flag dictating if the connection has a cookie set.\n origin: The value of the ``Origin`` header.\n send: The ASGI send function.\n\n Returns:\n An ASGI send function.\n \"\"\"\n\n async def wrapped_send(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n message.setdefault(\"headers\", [])\n headers = MutableScopeHeaders.from_message(message=message)\n headers.update(self.config.simple_headers)\n\n if (self.config.is_allow_all_origins and has_cookie) or (\n not self.config.is_allow_all_origins and self.config.is_origin_allowed(origin=origin)\n ):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers[\"Vary\"] = \"Origin\"\n\n await send(message)\n\n return wrapped_send\n", "path": "litestar/middleware/cors.py"}]} | 1,837 | 198 |
gh_patches_debug_30165 | rasdani/github-patches | git_diff | pytorch__ignite-281 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] More general metrics
I find the metrics to be a bit limited, as one might want to pass additional options (even tensors) to the loss.
For instance, in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.
The mask is necessary in the loss to know which outputs to use in the final averaging / loss.
</issue>
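For context, here is a minimal sketch (not part of the original issue or repository) of the kind of mask-aware loss the request describes. It is written in plain PyTorch rather than against ignite's API, and the function name, tensor shapes, and sample data are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def masked_nll_loss(log_probs, targets, mask):
    """Average NLL over real (unpadded) time steps only.

    log_probs: (batch, time, vocab) log-probabilities
    targets:   (batch, time) class indices
    mask:      (batch, time) with 1.0 for real steps, 0.0 for padding
    """
    per_step = F.nll_loss(
        log_probs.reshape(-1, log_probs.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)
    # Padded positions contribute nothing; normalise by the number of real steps.
    return (per_step * mask).sum() / mask.sum()

# Two sequences padded to length 3; the second has one padded step.
log_probs = torch.log_softmax(torch.randn(2, 3, 5), dim=-1)
targets = torch.randint(0, 5, (2, 3))
mask = torch.tensor([[1.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
print(masked_nll_loss(log_probs, targets, mask))
```

Forwarding such a mask from the engine output to the loss is exactly what a more general `update` contract, as in the patch shown further below, makes possible.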
<code>
[start of ignite/metrics/loss.py]
1 from __future__ import division
2
3 from ignite.exceptions import NotComputableError
4 from ignite.metrics.metric import Metric
5
6
7 class Loss(Metric):
8 """
9 Calculates the average loss according to the passed loss_fn.
10
11 - `loss_fn` must return the average loss over all observations in the batch.
12 - `update` must receive output of the form `(y_pred, y)`.
13 """
14 def __init__(self, loss_fn, output_transform=lambda x: x):
15 super(Loss, self).__init__(output_transform)
16 self._loss_fn = loss_fn
17
18 def reset(self):
19 self._sum = 0
20 self._num_examples = 0
21
22 def update(self, output):
23 y_pred, y = output
24 average_loss = self._loss_fn(y_pred, y)
25 assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
26 self._sum += average_loss.item() * y.shape[0]
27 self._num_examples += y.shape[0]
28
29 def compute(self):
30 if self._num_examples == 0:
31 raise NotComputableError(
32 'Loss must have at least one example before it can be computed')
33 return self._sum / self._num_examples
34
[end of ignite/metrics/loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py
--- a/ignite/metrics/loss.py
+++ b/ignite/metrics/loss.py
@@ -8,9 +8,21 @@
"""
Calculates the average loss according to the passed loss_fn.
- - `loss_fn` must return the average loss over all observations in the batch.
- - `update` must receive output of the form `(y_pred, y)`.
+ Args:
+ loss_fn (callable): a callable taking a prediction tensor, a target
+ tensor, optionally other arguments, and returns the average loss
+ over all observations in the batch.
+ output_transform (callable): a callable that is used to transform the
+ :class:`ignite.engine.Engine`'s `process_function`'s output into the
+ form expected by the metric.
+ This can be useful if, for example, you have a multi-output model and
+ you want to compute the metric with respect to one of the outputs.
+ The output is is expected to be a tuple (prediction, target) or
+ (prediction, target, kwargs) where kwargs is a dictionary of extra
+ keywords arguments.
+
"""
+
def __init__(self, loss_fn, output_transform=lambda x: x):
super(Loss, self).__init__(output_transform)
self._loss_fn = loss_fn
@@ -20,8 +32,12 @@
self._num_examples = 0
def update(self, output):
- y_pred, y = output
- average_loss = self._loss_fn(y_pred, y)
+ if len(output) == 2:
+ y_pred, y = output
+ kwargs = {}
+ else:
+ y_pred, y, kwargs = output
+ average_loss = self._loss_fn(y_pred, y, **kwargs)
assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
self._sum += average_loss.item() * y.shape[0]
self._num_examples += y.shape[0]
| {"golden_diff": "diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py\n--- a/ignite/metrics/loss.py\n+++ b/ignite/metrics/loss.py\n@@ -8,9 +8,21 @@\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n \n- - `loss_fn` must return the average loss over all observations in the batch.\n- - `update` must receive output of the form `(y_pred, y)`.\n+ Args:\n+ loss_fn (callable): a callable taking a prediction tensor, a target\n+ tensor, optionally other arguments, and returns the average loss\n+ over all observations in the batch.\n+ output_transform (callable): a callable that is used to transform the\n+ :class:`ignite.engine.Engine`'s `process_function`'s output into the\n+ form expected by the metric.\n+ This can be useful if, for example, you have a multi-output model and\n+ you want to compute the metric with respect to one of the outputs.\n+ The output is is expected to be a tuple (prediction, target) or\n+ (prediction, target, kwargs) where kwargs is a dictionary of extra\n+ keywords arguments.\n+\n \"\"\"\n+\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n@@ -20,8 +32,12 @@\n self._num_examples = 0\n \n def update(self, output):\n- y_pred, y = output\n- average_loss = self._loss_fn(y_pred, y)\n+ if len(output) == 2:\n+ y_pred, y = output\n+ kwargs = {}\n+ else:\n+ y_pred, y, kwargs = output\n+ average_loss = self._loss_fn(y_pred, y, **kwargs)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n", "issue": "[Feature Request] More general metrics\nI find the metrics to be a bit limited as one might want to pass additional options (even tensors) to the loss.\r\nFor instance in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.\r\nThe mask is necessary in the loss to know which outputs to use in the final averaging/ loss.\n[Feature Request] More general metrics\nI find the metrics to be a bit limited as one might want to pass additional options (even tensors) to the loss.\r\nFor instance in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.\r\nThe mask is necessary in the loss to know which outputs to use in the final averaging/ loss.\n", "before_files": [{"content": "from __future__ import division\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n - `loss_fn` must return the average loss over all observations in the batch.\n - `update` must receive output of the form `(y_pred, y)`.\n \"\"\"\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n\n def reset(self):\n self._sum = 0\n self._num_examples = 0\n\n def update(self, output):\n y_pred, y = output\n average_loss = self._loss_fn(y_pred, y)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\n 'Loss must have at least one example before it can be computed')\n return self._sum / self._num_examples\n", "path": "ignite/metrics/loss.py"}]} | 1,023 | 468 |
gh_patches_debug_12175 | rasdani/github-patches | git_diff | liqd__a4-opin-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wording change for accept page
The page for accepting invites to private projects currently is a bit too straightforward. ;) Let’s add some information.
The headline should be changed to: Do you want to join “<project name>”?
Then there should be another line underneath the headline, set in our standard paragraph text style:
You were invited by the initiator of the project. If you accept, you will be able to participate in the project. If you decline the invitation, you can also ask for membership at a later time.
The English label for the reject button should be changed to “decline”.
The reject button looks strange. I think the button should be styled as a regular red button. Or is the small font-size on purpose?

</issue>
<code>
[start of euth/projects/rules.py]
1 import rules
2 from rules.predicates import is_superuser
3
4 from euth.organisations.predicates import is_initiator
5
6 from .predicates import is_live, is_member, is_public
7
8 rules.add_perm('euth_projects.edit_project',
9 is_superuser | is_initiator)
10
11
12 rules.add_perm('projects.view_project',
13 is_superuser | is_initiator |
14 ((is_public | is_member) & is_live))
15
[end of euth/projects/rules.py]
[start of euth/projects/views.py]
1 from django.shortcuts import redirect
2 from django.views import generic
3 from rules.contrib import views as rules_views
4
5 from . import mixins, models
6
7
8 class ProjectDetailView(rules_views.PermissionRequiredMixin,
9 mixins.PhaseDispatchMixin,
10 generic.DetailView):
11
12 model = models.Project
13 permission_required = 'projects.view_project'
14
15 @property
16 def raise_exception(self):
17 return self.request.user.is_authenticated()
18
19 def handle_no_permission(self):
20 """
21 Check if user clould join
22 """
23 membership_impossible = (
24 not self.request.user.is_authenticated()
25 or self.project.is_draft
26 or self.project.has_member(self.request.user)
27 )
28
29 if membership_impossible:
30 return super().handle_no_permission()
31 else:
32 return self._redirect_membership_request()
33
34 def _redirect_membership_request(self):
35 return redirect('memberships-request',
36 project_slug=self.project.slug)
37
38 @property
39 def project(self):
40 """
41 Emulate ProjectMixin interface for template sharing.
42 """
43 return self.get_object()
44
[end of euth/projects/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/projects/rules.py b/euth/projects/rules.py
--- a/euth/projects/rules.py
+++ b/euth/projects/rules.py
@@ -9,6 +9,6 @@
is_superuser | is_initiator)
-rules.add_perm('projects.view_project',
+rules.add_perm('euth_projects.view_project',
is_superuser | is_initiator |
((is_public | is_member) & is_live))
diff --git a/euth/projects/views.py b/euth/projects/views.py
--- a/euth/projects/views.py
+++ b/euth/projects/views.py
@@ -10,7 +10,7 @@
generic.DetailView):
model = models.Project
- permission_required = 'projects.view_project'
+ permission_required = 'euth_projects.view_project'
@property
def raise_exception(self):
| {"golden_diff": "diff --git a/euth/projects/rules.py b/euth/projects/rules.py\n--- a/euth/projects/rules.py\n+++ b/euth/projects/rules.py\n@@ -9,6 +9,6 @@\n is_superuser | is_initiator)\n \n \n-rules.add_perm('projects.view_project',\n+rules.add_perm('euth_projects.view_project',\n is_superuser | is_initiator |\n ((is_public | is_member) & is_live))\ndiff --git a/euth/projects/views.py b/euth/projects/views.py\n--- a/euth/projects/views.py\n+++ b/euth/projects/views.py\n@@ -10,7 +10,7 @@\n generic.DetailView):\n \n model = models.Project\n- permission_required = 'projects.view_project'\n+ permission_required = 'euth_projects.view_project'\n \n @property\n def raise_exception(self):\n", "issue": "Wording change for accept page\nThe page for accepting invites to private projects currently is a bit too straight-forward. ;) Let\u2019s add some information\r\n\r\nThe headline should be changed to: Do you want to join \u201c<project name>\u201d ?\r\nThen there should be another line underneath the headline set in our standard paragraph text style:\r\nYou were invited by the initiator of the project. If you accept you will be able to participate in the project. If you decline the invitation, you can also ask for membership at a later time.\r\n\r\nThe English label for the reject button should be changed to \u201cdecline\u201d\r\nThe reject button looks strange. I think the button should be styled as a regular red button. Or is the small font-size on purpose?\r\n\r\n\r\n\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom euth.organisations.predicates import is_initiator\n\nfrom .predicates import is_live, is_member, is_public\n\nrules.add_perm('euth_projects.edit_project',\n is_superuser | is_initiator)\n\n\nrules.add_perm('projects.view_project',\n is_superuser | is_initiator |\n ((is_public | is_member) & is_live))\n", "path": "euth/projects/rules.py"}, {"content": "from django.shortcuts import redirect\nfrom django.views import generic\nfrom rules.contrib import views as rules_views\n\nfrom . import mixins, models\n\n\nclass ProjectDetailView(rules_views.PermissionRequiredMixin,\n mixins.PhaseDispatchMixin,\n generic.DetailView):\n\n model = models.Project\n permission_required = 'projects.view_project'\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def handle_no_permission(self):\n \"\"\"\n Check if user clould join\n \"\"\"\n membership_impossible = (\n not self.request.user.is_authenticated()\n or self.project.is_draft\n or self.project.has_member(self.request.user)\n )\n\n if membership_impossible:\n return super().handle_no_permission()\n else:\n return self._redirect_membership_request()\n\n def _redirect_membership_request(self):\n return redirect('memberships-request',\n project_slug=self.project.slug)\n\n @property\n def project(self):\n \"\"\"\n Emulate ProjectMixin interface for template sharing.\n \"\"\"\n return self.get_object()\n", "path": "euth/projects/views.py"}]} | 1,196 | 181 |
gh_patches_debug_1300 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-368 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better check for codec names
Currently, the codec name argument is not checked. A typo would result in the worker misinterpreting encoded data.
</issue>
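As a side note, a common way to reject a mistyped codec name at argument-parsing time is argparse's `choices` option. A minimal, self-contained sketch follows; the list of valid codec names is an assumption for illustration:

```python
import argparse

parser = argparse.ArgumentParser(description="codec-name validation sketch")
parser.add_argument(
    "--codec-type",
    default=None,
    choices=["tf_example"],  # assumed set of supported codecs
    help="Type of codec (tf_example or None)",
)

print(parser.parse_args(["--codec-type", "tf_example"]))
# parser.parse_args(["--codec-type", "tf_exmaple"]) would exit with an error like:
#   argument --codec-type: invalid choice: 'tf_exmaple' (choose from 'tf_example')
```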
<code>
[start of elasticdl/master/main.py]
1 import logging
2 import time
3 import argparse
4 import os
5
6 import grpc
7 import tensorflow as tf
8
9 tf.enable_eager_execution()
10
11 from concurrent import futures
12 from recordio import File
13 from elasticdl.proto import master_pb2_grpc
14 from elasticdl.master.servicer import MasterServicer
15 from elasticdl.master.task_queue import _TaskQueue
16 from elasticdl.master.k8s_worker_manager import WorkerManager
17 from elasticdl.common.model_helper import load_user_model, build_model
18
19
20 def _make_task_queue(data_dir, record_per_task, num_epoch):
21 f_records = {}
22 for f in os.listdir(data_dir):
23 p = os.path.join(data_dir, f)
24 with File(p, "r") as rio:
25 f_records[p] = rio.count()
26 return _TaskQueue(f_records, record_per_task, num_epoch)
27
28
29 def _parse_args():
30 parser = argparse.ArgumentParser(description="ElasticDL Master")
31 parser.add_argument(
32 "--model_file",
33 help="Full file path of user defined neural model",
34 required=True,
35 )
36 parser.add_argument(
37 "--train_data_dir",
38 help="Training data directory. Files should be in RecordIO format",
39 required=True,
40 )
41 parser.add_argument("--record_per_task", type=int, required=True)
42 parser.add_argument("--num_epoch", type=int, required=True)
43 parser.add_argument(
44 "--grads_to_wait",
45 type=int,
46 help="Number of gradients to wait before updating model",
47 required=True,
48 )
49 parser.add_argument(
50 "--minibatch_size",
51 type=int,
52 help="Minibatch size used by workers to compute gradients",
53 required=True,
54 )
55 parser.add_argument(
56 "--num_worker",
57 type=int,
58 help="the number of workers used in training",
59 default=0,
60 )
61 parser.add_argument(
62 "--worker_image", help="docker image for worker", default=None
63 )
64 parser.add_argument("--job_name", help="job name", required=True)
65 parser.add_argument(
66 "--codec-type",
67 default=None,
68 help="Type of codec(tf_example or None)",
69 )
70 return parser.parse_args()
71
72
73 def main():
74 # TODO: pass port via flags.
75 PORT = 50001
76 logger = logging.getLogger("master")
77 args = _parse_args()
78 task_q = _make_task_queue(
79 args.train_data_dir, args.record_per_task, args.num_epoch
80 )
81 model_module = load_user_model(args.model_file)
82 model_inst = model_module.model
83 build_model(model_inst, model_module.feature_columns())
84 optimizer = model_module.optimizer()
85
86 server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))
87 master_pb2_grpc.add_MasterServicer_to_server(
88 MasterServicer(
89 logger,
90 args.grads_to_wait,
91 args.minibatch_size,
92 optimizer,
93 task_q,
94 init_var=model_inst.trainable_variables,
95 ),
96 server,
97 )
98 server.add_insecure_port("[::]:{}".format(PORT))
99 server.start()
100 logger.warning("Server started at port: %d", PORT)
101
102 if args.num_worker:
103 master_addr = "%s:%d" % (os.getenv("MY_POD_IP", "localhost"), PORT)
104 worker_command = ["python"]
105 worker_args = [
106 "-m",
107 "elasticdl.worker.main",
108 "--model_file",
109 args.model_file,
110 "--master_addr",
111 master_addr,
112 "--codec-type",
113 args.codec_type
114 ]
115
116 worker_manager = WorkerManager(
117 job_name=args.job_name,
118 worker_image=args.worker_image,
119 command=worker_command,
120 args=worker_args,
121 namespace="default",
122 num_worker=args.num_worker,
123 )
124 worker_manager.start_workers(restart_policy="Never")
125
126 try:
127 while True:
128 if task_q.finished():
129 break
130 time.sleep(30)
131 except KeyboardInterrupt:
132 logger.warning("Server stopping")
133
134 if args.num_worker:
135 # TODO: worker_manager.remove_workers supports synchronized call
136 worker_manager.remove_workers()
137 # wait for worker pod to be deleted
138 max_check_num = 10
139 for _ in range(max_check_num):
140 time.sleep(3)
141 counters = worker_manager.get_counters()
142 if not counters:
143 break
144 server.stop(0)
145
146
147 if __name__ == "__main__":
148 logging.basicConfig()
149 main()
150
[end of elasticdl/master/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/master/main.py b/elasticdl/master/main.py
--- a/elasticdl/master/main.py
+++ b/elasticdl/master/main.py
@@ -65,6 +65,7 @@
parser.add_argument(
"--codec-type",
default=None,
+ choices=["tf_example"],
help="Type of codec(tf_example or None)",
)
return parser.parse_args()
| {"golden_diff": "diff --git a/elasticdl/master/main.py b/elasticdl/master/main.py\n--- a/elasticdl/master/main.py\n+++ b/elasticdl/master/main.py\n@@ -65,6 +65,7 @@\n parser.add_argument(\n \"--codec-type\",\n default=None,\n+ choices=[\"tf_example\"],\n help=\"Type of codec(tf_example or None)\",\n )\n return parser.parse_args()\n", "issue": "Better check for codec names\ncurrently, codec name argument is not checked. A typo would result in worker interpreting encoded data.\n", "before_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec-type\",\n default=None,\n help=\"Type of codec(tf_example or None)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec-type\",\n 
args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n # wait for worker pod to be deleted\n max_check_num = 10\n for _ in range(max_check_num):\n time.sleep(3)\n counters = worker_manager.get_counters()\n if not counters:\n break\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/master/main.py"}]} | 1,846 | 89 |
gh_patches_debug_17319 | rasdani/github-patches | git_diff | elastic__elasticsearch-py-210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for custom authentication objects for requests module
Hi,
Several transport classes are available; one of them is "requests".
Requests supports basic authentication, but also far more than that ([0](http://docs.python-requests.org/en/latest/user/advanced/#custom-authentication)). In order to support this, a few lines would need to be changed to allow providing an authentication object ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)).
I have the code ready ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)) for this and am actively using it.
Would you be willing to accept this contribution?
</issue>
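For reference, this is roughly what such a custom authentication object looks like on the requests side; the header name and token value below are illustrative only:

```python
import requests
from requests.auth import AuthBase

class TokenAuth(AuthBase):
    """Attach a custom token header to every outgoing request."""

    def __init__(self, token):
        self.token = token

    def __call__(self, request):
        request.headers["X-Auth-Token"] = self.token
        return request

session = requests.Session()
# Anything implementing the AuthBase protocol can be assigned here, not just a
# (user, password) tuple -- the flexibility this issue asks the client to expose.
session.auth = TokenAuth("s3cr3t")
```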
<code>
[start of elasticsearch/connection/http_requests.py]
1 import time
2 import warnings
3 try:
4 import requests
5 REQUESTS_AVAILABLE = True
6 except ImportError:
7 REQUESTS_AVAILABLE = False
8
9 from .base import Connection
10 from ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout, SSLError
11 from ..compat import urlencode
12
13 class RequestsHttpConnection(Connection):
14 """
15 Connection using the `requests` library.
16
17 :arg http_auth: optional http auth information as either ':' separated
18 string or a tuple
19 :arg use_ssl: use ssl for the connection if `True`
20 :arg verify_certs: whether to verify SSL certificates
21 :arg ca_certs: optional path to CA bundle. By default standard requests'
22 bundle will be used.
23 :arg client_cert: path to the file containing the private key and the
24 certificate
25 """
26 def __init__(self, host='localhost', port=9200, http_auth=None,
27 use_ssl=False, verify_certs=False, ca_certs=None, client_cert=None,
28 **kwargs):
29 if not REQUESTS_AVAILABLE:
30 raise ImproperlyConfigured("Please install requests to use RequestsHttpConnection.")
31
32 super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)
33 self.session = requests.session()
34 if http_auth is not None:
35 if not isinstance(http_auth, (tuple, list)):
36 http_auth = http_auth.split(':', 1)
37 http_auth = tuple(http_auth)
38 self.session.auth = http_auth
39 self.base_url = 'http%s://%s:%d%s' % (
40 's' if use_ssl else '',
41 host, port, self.url_prefix
42 )
43 self.session.verify = verify_certs
44 self.session.cert = client_cert
45 if ca_certs:
46 if not verify_certs:
47 raise ImproperlyConfigured("You cannot pass CA certificates when verify SSL is off.")
48 self.session.verify = ca_certs
49
50 if use_ssl and not verify_certs:
51 warnings.warn(
52 'Connecting to %s using SSL with verify_certs=False is insecure.' % self.base_url)
53
54 def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):
55 url = self.base_url + url
56 if params:
57 url = '%s?%s' % (url, urlencode(params or {}))
58
59 start = time.time()
60 try:
61 response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)
62 duration = time.time() - start
63 raw_data = response.text
64 except requests.exceptions.SSLError as e:
65 self.log_request_fail(method, url, body, time.time() - start, exception=e)
66 raise SSLError('N/A', str(e), e)
67 except requests.Timeout as e:
68 self.log_request_fail(method, url, body, time.time() - start, exception=e)
69 raise ConnectionTimeout('TIMEOUT', str(e), e)
70 except requests.ConnectionError as e:
71 self.log_request_fail(method, url, body, time.time() - start, exception=e)
72 raise ConnectionError('N/A', str(e), e)
73
74 # raise errors based on http status codes, let the client handle those if needed
75 if not (200 <= response.status_code < 300) and response.status_code not in ignore:
76 self.log_request_fail(method, url, body, duration, response.status_code)
77 self._raise_error(response.status_code, raw_data)
78
79 self.log_request_success(method, url, response.request.path_url, body, response.status_code, raw_data, duration)
80
81 return response.status_code, response.headers, raw_data
82
[end of elasticsearch/connection/http_requests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticsearch/connection/http_requests.py b/elasticsearch/connection/http_requests.py
--- a/elasticsearch/connection/http_requests.py
+++ b/elasticsearch/connection/http_requests.py
@@ -1,3 +1,4 @@
+import six
import time
import warnings
try:
@@ -32,8 +33,10 @@
super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)
self.session = requests.session()
if http_auth is not None:
- if not isinstance(http_auth, (tuple, list)):
- http_auth = http_auth.split(':', 1)
+ if isinstance(http_auth, (tuple, list)):
+ http_auth = tuple(http_auth)
+ elif isinstance(http_auth, six.string_types):
+ http_auth = tuple(http_auth.split(':', 1))
http_auth = tuple(http_auth)
self.session.auth = http_auth
self.base_url = 'http%s://%s:%d%s' % (
| {"golden_diff": "diff --git a/elasticsearch/connection/http_requests.py b/elasticsearch/connection/http_requests.py\n--- a/elasticsearch/connection/http_requests.py\n+++ b/elasticsearch/connection/http_requests.py\n@@ -1,3 +1,4 @@\n+import six\n import time\n import warnings\n try:\n@@ -32,8 +33,10 @@\n super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)\n self.session = requests.session()\n if http_auth is not None:\n- if not isinstance(http_auth, (tuple, list)):\n- http_auth = http_auth.split(':', 1)\n+ if isinstance(http_auth, (tuple, list)):\n+ http_auth = tuple(http_auth)\n+ elif isinstance(http_auth, six.string_types):\n+ http_auth = tuple(http_auth.split(':', 1))\n http_auth = tuple(http_auth)\n self.session.auth = http_auth\n self.base_url = 'http%s://%s:%d%s' % (\n", "issue": "Support for custom authentication objects for requests module\nHi,\n\nSeveral transport classes are available, one of them is \"requests\".\nRequests supports basic-authentication but far more than that ([0](http://docs.python-requests.org/en/latest/user/advanced/#custom-authentication)). In order to support this a few lines would need to be changed to allow for providing an authentication object ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)).\n\nI have the code ready ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)) for this and am actively using it.\n\nWould you be willing to accept this contribution ?\n\n", "before_files": [{"content": "import time\nimport warnings\ntry:\n import requests\n REQUESTS_AVAILABLE = True\nexcept ImportError:\n REQUESTS_AVAILABLE = False\n\nfrom .base import Connection\nfrom ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout, SSLError\nfrom ..compat import urlencode\n\nclass RequestsHttpConnection(Connection):\n \"\"\"\n Connection using the `requests` library.\n\n :arg http_auth: optional http auth information as either ':' separated\n string or a tuple\n :arg use_ssl: use ssl for the connection if `True`\n :arg verify_certs: whether to verify SSL certificates\n :arg ca_certs: optional path to CA bundle. By default standard requests'\n bundle will be used.\n :arg client_cert: path to the file containing the private key and the\n certificate\n \"\"\"\n def __init__(self, host='localhost', port=9200, http_auth=None,\n use_ssl=False, verify_certs=False, ca_certs=None, client_cert=None,\n **kwargs):\n if not REQUESTS_AVAILABLE:\n raise ImproperlyConfigured(\"Please install requests to use RequestsHttpConnection.\")\n\n super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)\n self.session = requests.session()\n if http_auth is not None:\n if not isinstance(http_auth, (tuple, list)):\n http_auth = http_auth.split(':', 1)\n http_auth = tuple(http_auth)\n self.session.auth = http_auth\n self.base_url = 'http%s://%s:%d%s' % (\n 's' if use_ssl else '',\n host, port, self.url_prefix\n )\n self.session.verify = verify_certs\n self.session.cert = client_cert\n if ca_certs:\n if not verify_certs:\n raise ImproperlyConfigured(\"You cannot pass CA certificates when verify SSL is off.\")\n self.session.verify = ca_certs\n\n if use_ssl and not verify_certs:\n warnings.warn(\n 'Connecting to %s using SSL with verify_certs=False is insecure.' 
% self.base_url)\n\n def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):\n url = self.base_url + url\n if params:\n url = '%s?%s' % (url, urlencode(params or {}))\n\n start = time.time()\n try:\n response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)\n duration = time.time() - start\n raw_data = response.text\n except requests.exceptions.SSLError as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise SSLError('N/A', str(e), e)\n except requests.Timeout as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionTimeout('TIMEOUT', str(e), e)\n except requests.ConnectionError as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionError('N/A', str(e), e)\n\n # raise errors based on http status codes, let the client handle those if needed\n if not (200 <= response.status_code < 300) and response.status_code not in ignore:\n self.log_request_fail(method, url, body, duration, response.status_code)\n self._raise_error(response.status_code, raw_data)\n\n self.log_request_success(method, url, response.request.path_url, body, response.status_code, raw_data, duration)\n\n return response.status_code, response.headers, raw_data\n", "path": "elasticsearch/connection/http_requests.py"}]} | 1,629 | 211 |
gh_patches_debug_37274 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2964 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider tiffany is broken
During the global build at 2021-05-26-14-42-23, spider **tiffany** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tiffany.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson))
Tiffany
http://www.tiffany.com/jewelry-stores/store-list/united-states
</issue>
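As background, store-locator spiders of this kind often fall back to the page's embedded JSON-LD when the markup changes. A minimal sketch of filtering such blocks is below; the sample data and the `Store` type check are assumptions about the page, not verified against the live site:

```python
import json

def stores_from_ldjson(ldjson_texts):
    """Yield JSON-LD blocks that describe a single store."""
    for text in ldjson_texts:
        data = json.loads(text)
        if data.get("@type") == "Store":
            yield data

sample = ['{"@type": "Store", "name": "Example Store", "telephone": "+1-555-0100"}']
for store in stores_from_ldjson(sample):
    print(store["name"], store["telephone"])
```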
<code>
[start of locations/spiders/tiffany.py]
1 import scrapy
2 import re
3 import json
4 from locations.items import GeojsonPointItem
5
6 class TiffanySpider(scrapy.Spider):
7
8 name = "tiffany"
9 item_attributes = { 'brand': "Tiffany" }
10 allowed_domains = ["www.tiffany.com"]
11 download_delay = 0.5
12 start_urls = (
13 'http://www.tiffany.com/jewelry-stores/store-list/united-states',
14 )
15
16 def parse_day(self, day):
17 if re.search('-', day):
18 days = day.split('-')
19 osm_days = []
20 if len(days) == 2:
21 for day in days:
22 osm_day = day.strip()[:2]
23 osm_days.append(osm_day)
24 return "-".join(osm_days)
25 return day.strip()[:2]
26
27 def parse_times(self, times):
28 if times.strip() == 'CLOSED':
29 return 'Closed'
30 hours_to = [x.strip() for x in times.split('-')]
31 cleaned_times = []
32 for hour in hours_to:
33 if re.search('PM$', hour):
34 hour = re.sub('PM', '', hour).strip()
35 hour_min = hour.split(":")
36 if int(hour_min[0]) < 12:
37 hour_min[0] = str(12 + int(hour_min[0]))
38 cleaned_times.append(":".join(hour_min))
39
40 if re.search('AM$', hour):
41 hour = re.sub('AM', '', hour).strip()
42 hour_min = hour.split(":")
43 if len(hour_min[0]) <2:
44 hour_min[0] = hour_min[0].zfill(2)
45 else:
46 hour_min[0] = str(int(hour_min[0]))
47
48 cleaned_times.append(":".join(hour_min))
49 return "-".join(cleaned_times)
50
51 def parse_hours(self, lis):
52 hours = []
53 for li in lis:
54 if re.search(r"([0-9]{1,2}):([0-9]{1,2})([APM]{2})|CLOSED" , li):
55 day = li.split(':')[0]
56 times = li.replace(day+':','')
57 if times and day:
58 parsed_time = self.parse_times(times)
59 parsed_day = self.parse_day(day)
60 hours.append(parsed_day + ' ' + parsed_time)
61
62 return "; ".join(hours)
63
64 def parse_stores(self, response):
65 data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
66 properties = {
67 'addr_full': data['address']['streetAddress'],
68 'phone': data['telephone'],
69 'name': data['name'],
70 'city': data['address']['addressLocality'],
71 'state': data['address']['addressRegion'],
72 'postcode': data['address']['postalCode'],
73 'ref': data['name'].replace(' ','_'),
74 'website': response.url,
75 'lat': float(data['geo']['latitude']),
76 'lon': float(data['geo']['longitude']),
77 }
78
79 hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
80 if hours:
81 properties['opening_hours'] = hours
82 yield GeojsonPointItem(**properties)
83
84 def parse(self, response):
85 urls = response.xpath('//a[contains(text(),"View on Map")]/@href').extract()
86 for path in urls:
87 yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)
88
[end of locations/spiders/tiffany.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/tiffany.py b/locations/spiders/tiffany.py
--- a/locations/spiders/tiffany.py
+++ b/locations/spiders/tiffany.py
@@ -6,11 +6,11 @@
class TiffanySpider(scrapy.Spider):
name = "tiffany"
- item_attributes = { 'brand': "Tiffany" }
+ item_attributes = { 'brand': "Tiffany", 'brand_wikidata': "Q1066858" }
allowed_domains = ["www.tiffany.com"]
download_delay = 0.5
start_urls = (
- 'http://www.tiffany.com/jewelry-stores/store-list/united-states',
+ 'https://www.tiffany.com/jewelry-stores/store-list/',
)
def parse_day(self, day):
@@ -61,27 +61,31 @@
return "; ".join(hours)
- def parse_stores(self, response):
- data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
- properties = {
- 'addr_full': data['address']['streetAddress'],
- 'phone': data['telephone'],
- 'name': data['name'],
- 'city': data['address']['addressLocality'],
- 'state': data['address']['addressRegion'],
- 'postcode': data['address']['postalCode'],
- 'ref': data['name'].replace(' ','_'),
- 'website': response.url,
- 'lat': float(data['geo']['latitude']),
- 'lon': float(data['geo']['longitude']),
- }
+ def parse(self, response):
+ for href in response.xpath('//@href[contains(., "/jewelry-stores/")]').extract():
+ yield scrapy.Request(response.urljoin(href))
- hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
- if hours:
- properties['opening_hours'] = hours
- yield GeojsonPointItem(**properties)
+ for ldjson in response.xpath('//script[@type="application/ld+json"]/text()').extract():
+ data = json.loads(ldjson)
+ if data["@type"] != "Store":
+ continue
+
+ properties = {
+ 'name': data['name'],
+ 'phone': data['telephone'],
+ 'addr_full': data['address']['streetAddress'],
+ 'city': data['address']['addressLocality'],
+ 'state': data['address']['addressRegion'],
+ 'postcode': data['address']['postalCode'],
+ 'country': data['address']['addressCountry'],
+ 'ref': data['name'].replace(' ','_'),
+ 'website': response.url,
+ 'lat': response.xpath('//tiffany-maps/@markeratlat').extract_first(),
+ 'lon': response.xpath('//tiffany-maps/@markeratlng').extract_first(),
+ }
+
+ hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
+ if hours:
+ properties['opening_hours'] = hours
+ yield GeojsonPointItem(**properties)
- def parse(self, response):
- urls = response.xpath('//a[contains(text(),"View on Map")]/@href').extract()
- for path in urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)
| {"golden_diff": "diff --git a/locations/spiders/tiffany.py b/locations/spiders/tiffany.py\n--- a/locations/spiders/tiffany.py\n+++ b/locations/spiders/tiffany.py\n@@ -6,11 +6,11 @@\n class TiffanySpider(scrapy.Spider):\n \n name = \"tiffany\"\n- item_attributes = { 'brand': \"Tiffany\" }\n+ item_attributes = { 'brand': \"Tiffany\", 'brand_wikidata': \"Q1066858\" }\n allowed_domains = [\"www.tiffany.com\"]\n download_delay = 0.5\n start_urls = (\n- 'http://www.tiffany.com/jewelry-stores/store-list/united-states',\n+ 'https://www.tiffany.com/jewelry-stores/store-list/',\n )\n \n def parse_day(self, day):\n@@ -61,27 +61,31 @@\n \n return \"; \".join(hours)\n \n- def parse_stores(self, response):\n- data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n- properties = {\n- 'addr_full': data['address']['streetAddress'],\n- 'phone': data['telephone'],\n- 'name': data['name'],\n- 'city': data['address']['addressLocality'],\n- 'state': data['address']['addressRegion'],\n- 'postcode': data['address']['postalCode'],\n- 'ref': data['name'].replace(' ','_'),\n- 'website': response.url,\n- 'lat': float(data['geo']['latitude']),\n- 'lon': float(data['geo']['longitude']),\n- }\n+ def parse(self, response):\n+ for href in response.xpath('//@href[contains(., \"/jewelry-stores/\")]').extract():\n+ yield scrapy.Request(response.urljoin(href))\n \n- hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n- if hours:\n- properties['opening_hours'] = hours\n- yield GeojsonPointItem(**properties)\n+ for ldjson in response.xpath('//script[@type=\"application/ld+json\"]/text()').extract():\n+ data = json.loads(ldjson)\n+ if data[\"@type\"] != \"Store\":\n+ continue\n+\n+ properties = {\n+ 'name': data['name'],\n+ 'phone': data['telephone'],\n+ 'addr_full': data['address']['streetAddress'],\n+ 'city': data['address']['addressLocality'],\n+ 'state': data['address']['addressRegion'],\n+ 'postcode': data['address']['postalCode'],\n+ 'country': data['address']['addressCountry'],\n+ 'ref': data['name'].replace(' ','_'),\n+ 'website': response.url,\n+ 'lat': response.xpath('//tiffany-maps/@markeratlat').extract_first(),\n+ 'lon': response.xpath('//tiffany-maps/@markeratlng').extract_first(),\n+ }\n+\n+ hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n+ if hours:\n+ properties['opening_hours'] = hours\n+ yield GeojsonPointItem(**properties)\n \n- def parse(self, response):\n- urls = response.xpath('//a[contains(text(),\"View on Map\")]/@href').extract()\n- for path in urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)\n", "issue": "Spider tiffany is broken\nDuring the global build at 2021-05-26-14-42-23, spider **tiffany** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tiffany.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson))\nTiffany\nhttp://www.tiffany.com/jewelry-stores/store-list/united-states\n", "before_files": [{"content": "import scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\n\nclass TiffanySpider(scrapy.Spider):\n\n name = \"tiffany\"\n item_attributes = { 'brand': \"Tiffany\" }\n allowed_domains = [\"www.tiffany.com\"]\n download_delay = 0.5\n start_urls = (\n 
'http://www.tiffany.com/jewelry-stores/store-list/united-states',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n osm_day = day.strip()[:2]\n osm_days.append(osm_day)\n return \"-\".join(osm_days)\n return day.strip()[:2]\n\n def parse_times(self, times):\n if times.strip() == 'CLOSED':\n return 'Closed'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n for hour in hours_to:\n if re.search('PM$', hour):\n hour = re.sub('PM', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('AM$', hour):\n hour = re.sub('AM', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n if re.search(r\"([0-9]{1,2}):([0-9]{1,2})([APM]{2})|CLOSED\" , li):\n day = li.split(':')[0]\n times = li.replace(day+':','')\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n hours.append(parsed_day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse_stores(self, response):\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n properties = {\n 'addr_full': data['address']['streetAddress'],\n 'phone': data['telephone'],\n 'name': data['name'],\n 'city': data['address']['addressLocality'],\n 'state': data['address']['addressRegion'],\n 'postcode': data['address']['postalCode'],\n 'ref': data['name'].replace(' ','_'),\n 'website': response.url,\n 'lat': float(data['geo']['latitude']),\n 'lon': float(data['geo']['longitude']),\n }\n\n hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n if hours:\n properties['opening_hours'] = hours\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//a[contains(text(),\"View on Map\")]/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)\n", "path": "locations/spiders/tiffany.py"}]} | 1,653 | 757 |
gh_patches_debug_644 | rasdani/github-patches | git_diff | pex-tool__pex-1864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.101
On the docket:
+ [x] Pex fails to find RECORD for python-certifi-win32 1.6.1 #1861
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.100"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.100"
+__version__ = "2.1.101"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.100\"\n+__version__ = \"2.1.101\"\n", "issue": "Release 2.1.101\nOn the docket:\r\n+ [x] Pex fails to find RECORD for python-certifi-win32 1.6.1 #1861\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.100\"\n", "path": "pex/version.py"}]} | 628 | 99 |
gh_patches_debug_38106 | rasdani/github-patches | git_diff | TOMToolkit__tom_base-825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ATLAS forced photometry data processor should correctly interpret limiting data points.
See the [ATLAS Forced Photometry Output Description](https://fallingstar-data.com/forcedphot/resultdesc/)
for how to interpret the data.
</issue>
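To illustrate the intent, here is a minimal sketch of turning one ATLAS forced-photometry row into either a detection or a limiting magnitude. The 3-sigma cutoff and the use of |m| as the limit are assumptions for illustration; consult the linked output description for the exact rule:

```python
def interpret_atlas_row(uJy, duJy, m, snr_cutoff=3.0):
    """Return ("magnitude", value) for a detection, ("limit", value) otherwise."""
    snr = abs(float(uJy)) / abs(float(duJy))
    if snr <= snr_cutoff:
        # Signal is consistent with noise: report a limiting magnitude instead.
        return "limit", abs(float(m))
    return "magnitude", abs(float(m))

print(interpret_atlas_row(uJy=250.0, duJy=30.0, m=18.2))   # clear detection
print(interpret_atlas_row(uJy=-12.0, duJy=35.0, m=-19.5))  # noise -> limiting magnitude
```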
<code>
[start of tom_dataproducts/processors/atlas_processor.py]
1 import mimetypes
2
3 from astropy import units
4 import astropy.io.ascii
5 from astropy.time import Time, TimezoneInfo
6
7 from tom_dataproducts.data_processor import DataProcessor
8 from tom_dataproducts.exceptions import InvalidFileFormatException
9
10
11 class AtlasProcessor(DataProcessor):
12
13 def data_type_override(self):
14 return 'photometry'
15
16 def process_data(self, data_product):
17 """
18 Routes a atlas processing call to a method specific to a file-format.
19
20 :param data_product: Photometric DataProduct which will be processed into the specified format for database
21 ingestion
22 :type data_product: DataProduct
23
24 :returns: python list of 2-tuples, each with a timestamp and corresponding data
25 :rtype: list
26 """
27
28 mimetype = mimetypes.guess_type(data_product.data.path)[0]
29 if mimetype in self.PLAINTEXT_MIMETYPES:
30 photometry = self._process_photometry_from_plaintext(data_product)
31 return [(datum.pop('timestamp'), datum, datum.pop('source', 'ATLAS')) for datum in photometry]
32 else:
33 raise InvalidFileFormatException('Unsupported file type')
34
35 def _process_photometry_from_plaintext(self, data_product):
36 """
37 Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as
38 specified in the below documentation. The file is expected to be a multi-column delimited space delimited
39 text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot
40
41 The header looks like this:
42 ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs
43
44 :param data_product: ATLAS Photometric DataProduct which will be processed into a list of dicts
45 :type data_product: DataProduct
46
47 :returns: python list containing the photometric data from the DataProduct
48 :rtype: list
49 """
50 photometry = []
51
52 data = astropy.io.ascii.read(data_product.data.path)
53 if len(data) < 1:
54 raise InvalidFileFormatException('Empty table or invalid file type')
55
56 try:
57 for datum in data:
58 time = Time(float(datum['##MJD']), format='mjd')
59 utc = TimezoneInfo(utc_offset=0*units.hour)
60 time.format = 'datetime'
61 value = {
62 'timestamp': time.to_datetime(timezone=utc),
63 'magnitude': float(datum['m']),
64 'magnitude_error': float(datum['dm']),
65 'filter': str(datum['F'])
66 }
67 photometry.append(value)
68 except Exception as e:
69 raise InvalidFileFormatException(e)
70
71 return photometry
72
[end of tom_dataproducts/processors/atlas_processor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tom_dataproducts/processors/atlas_processor.py b/tom_dataproducts/processors/atlas_processor.py
--- a/tom_dataproducts/processors/atlas_processor.py
+++ b/tom_dataproducts/processors/atlas_processor.py
@@ -21,7 +21,7 @@
ingestion
:type data_product: DataProduct
- :returns: python list of 2-tuples, each with a timestamp and corresponding data
+ :returns: python list of 3-tuples, each with a timestamp and corresponding data, and source
:rtype: list
"""
@@ -37,6 +37,7 @@
Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as
specified in the below documentation. The file is expected to be a multi-column delimited space delimited
text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot
+ See https://fallingstar-data.com/forcedphot/resultdesc/ for a description of the output format.
The header looks like this:
###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs
@@ -48,6 +49,7 @@
:rtype: list
"""
photometry = []
+ signal_to_noise_cutoff = 3.0 # cutoff to turn magnitudes into non-detection limits
data = astropy.io.ascii.read(data_product.data.path)
if len(data) < 1:
@@ -60,10 +62,19 @@
time.format = 'datetime'
value = {
'timestamp': time.to_datetime(timezone=utc),
- 'magnitude': float(datum['m']),
- 'magnitude_error': float(datum['dm']),
- 'filter': str(datum['F'])
+ 'filter': str(datum['F']),
+ 'error': float(datum['dm']),
+ 'telescope': 'ATLAS',
}
+ # If the signal is in the noise, set the non-detection limit to the
+ # absolute value of the reported magnitude.
+ # see https://fallingstar-data.com/forcedphot/resultdesc/
+ signal_to_noise = abs(float(datum['uJy']))/abs(float(datum['duJy']))
+ if signal_to_noise <= signal_to_noise_cutoff:
+ value['limit'] = abs(float(datum['m']))
+ else:
+ value['magnitude'] = abs(float(datum['m']))
+
photometry.append(value)
except Exception as e:
raise InvalidFileFormatException(e)
| {"golden_diff": "diff --git a/tom_dataproducts/processors/atlas_processor.py b/tom_dataproducts/processors/atlas_processor.py\n--- a/tom_dataproducts/processors/atlas_processor.py\n+++ b/tom_dataproducts/processors/atlas_processor.py\n@@ -21,7 +21,7 @@\n ingestion\n :type data_product: DataProduct\n \n- :returns: python list of 2-tuples, each with a timestamp and corresponding data\n+ :returns: python list of 3-tuples, each with a timestamp and corresponding data, and source\n :rtype: list\n \"\"\"\n \n@@ -37,6 +37,7 @@\n Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as\n specified in the below documentation. The file is expected to be a multi-column delimited space delimited\n text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot\n+ See https://fallingstar-data.com/forcedphot/resultdesc/ for a description of the output format.\n \n The header looks like this:\n ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs\n@@ -48,6 +49,7 @@\n :rtype: list\n \"\"\"\n photometry = []\n+ signal_to_noise_cutoff = 3.0 # cutoff to turn magnitudes into non-detection limits\n \n data = astropy.io.ascii.read(data_product.data.path)\n if len(data) < 1:\n@@ -60,10 +62,19 @@\n time.format = 'datetime'\n value = {\n 'timestamp': time.to_datetime(timezone=utc),\n- 'magnitude': float(datum['m']),\n- 'magnitude_error': float(datum['dm']),\n- 'filter': str(datum['F'])\n+ 'filter': str(datum['F']),\n+ 'error': float(datum['dm']),\n+ 'telescope': 'ATLAS',\n }\n+ # If the signal is in the noise, set the non-detection limit to the\n+ # absolute value of the reported magnitude.\n+ # see https://fallingstar-data.com/forcedphot/resultdesc/\n+ signal_to_noise = abs(float(datum['uJy']))/abs(float(datum['duJy']))\n+ if signal_to_noise <= signal_to_noise_cutoff:\n+ value['limit'] = abs(float(datum['m']))\n+ else:\n+ value['magnitude'] = abs(float(datum['m']))\n+\n photometry.append(value)\n except Exception as e:\n raise InvalidFileFormatException(e)\n", "issue": "ATLAS forced photometry data processor should correctly interpret limiting data points.\nsee [ATLAS Forced Photemetry Output Description](https://fallingstar-data.com/forcedphot/resultdesc/)\nfor how to interpret data.\n", "before_files": [{"content": "import mimetypes\n\nfrom astropy import units\nimport astropy.io.ascii\nfrom astropy.time import Time, TimezoneInfo\n\nfrom tom_dataproducts.data_processor import DataProcessor\nfrom tom_dataproducts.exceptions import InvalidFileFormatException\n\n\nclass AtlasProcessor(DataProcessor):\n\n def data_type_override(self):\n return 'photometry'\n\n def process_data(self, data_product):\n \"\"\"\n Routes a atlas processing call to a method specific to a file-format.\n\n :param data_product: Photometric DataProduct which will be processed into the specified format for database\n ingestion\n :type data_product: DataProduct\n\n :returns: python list of 2-tuples, each with a timestamp and corresponding data\n :rtype: list\n \"\"\"\n\n mimetype = mimetypes.guess_type(data_product.data.path)[0]\n if mimetype in self.PLAINTEXT_MIMETYPES:\n photometry = self._process_photometry_from_plaintext(data_product)\n return [(datum.pop('timestamp'), datum, datum.pop('source', 'ATLAS')) for datum in photometry]\n else:\n raise InvalidFileFormatException('Unsupported file type')\n\n def _process_photometry_from_plaintext(self, data_product):\n \"\"\"\n Processes the photometric data from a plaintext file into 
a list of dicts. File is read using astropy as\n specified in the below documentation. The file is expected to be a multi-column delimited space delimited\n text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot\n\n The header looks like this:\n ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs\n\n :param data_product: ATLAS Photometric DataProduct which will be processed into a list of dicts\n :type data_product: DataProduct\n\n :returns: python list containing the photometric data from the DataProduct\n :rtype: list\n \"\"\"\n photometry = []\n\n data = astropy.io.ascii.read(data_product.data.path)\n if len(data) < 1:\n raise InvalidFileFormatException('Empty table or invalid file type')\n\n try:\n for datum in data:\n time = Time(float(datum['##MJD']), format='mjd')\n utc = TimezoneInfo(utc_offset=0*units.hour)\n time.format = 'datetime'\n value = {\n 'timestamp': time.to_datetime(timezone=utc),\n 'magnitude': float(datum['m']),\n 'magnitude_error': float(datum['dm']),\n 'filter': str(datum['F'])\n }\n photometry.append(value)\n except Exception as e:\n raise InvalidFileFormatException(e)\n\n return photometry\n", "path": "tom_dataproducts/processors/atlas_processor.py"}]} | 1,337 | 609 |