| Column | Type | Range / Values |
|---|---|---|
| instance_id | string | lengths 10–57 |
| base_commit | string | lengths 40–40 |
| created_at | date string | 2014-04-30 14:58:36 – 2025-04-30 20:14:11 |
| environment_setup_commit | string | lengths 40–40 |
| hints_text | string | lengths 0–273k |
| patch | string | lengths 251–7.06M |
| problem_statement | string | lengths 11–52.5k |
| repo | string | lengths 7–53 |
| test_patch | string | lengths 231–997k |
| meta | dict | |
| version | string (categorical) | 851 classes |
| install_config | dict | |
| requirements | string (nullable ⌀) | lengths 93–34.2k |
| environment | string (nullable ⌀) | lengths 760–20.5k |
| FAIL_TO_PASS | list | lengths 1–9.39k |
| FAIL_TO_FAIL | list | lengths 0–2.69k |
| PASS_TO_PASS | list | lengths 0–7.87k |
| PASS_TO_FAIL | list | lengths 0–192 |
| license_name | string (categorical) | 55 classes |
| __index_level_0__ | int64 | 0–21.4k |
| before_filepaths | list | lengths 1–105 |
| after_filepaths | list | lengths 1–105 |
python-trio__outcome-11 | 9df51224a3e0efa3fffa0e950df1d4e680fa9c3e | 2018-04-16 21:56:28 | 9df51224a3e0efa3fffa0e950df1d4e680fa9c3e | codecov[bot]: # [Codecov](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=h1) Report
> Merging [#11](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=desc) into [master](https://codecov.io/gh/python-trio/outcome/commit/9df51224a3e0efa3fffa0e950df1d4e680fa9c3e?src=pr&el=desc) will **decrease** coverage by `3.37%`.
> The diff coverage is `85.96%`.
```diff
@@ Coverage Diff @@
## master #11 +/- ##
==========================================
- Coverage 100% 96.62% -3.38%
==========================================
Files 8 8
Lines 196 237 +41
Branches 10 16 +6
==========================================
+ Hits 196 229 +33
- Misses 0 4 +4
- Partials 0 4 +4
```
| [Impacted Files](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [tests/test\_sync.py](https://codecov.io/gh/python-trio/outcome/pull/11/diff?src=pr&el=tree#diff-dGVzdHMvdGVzdF9zeW5jLnB5) | `100% <100%> (ø)` | :arrow_up: |
| [src/outcome/\_util.py](https://codecov.io/gh/python-trio/outcome/pull/11/diff?src=pr&el=tree#diff-c3JjL291dGNvbWUvX3V0aWwucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [src/outcome/\_\_init\_\_.py](https://codecov.io/gh/python-trio/outcome/pull/11/diff?src=pr&el=tree#diff-c3JjL291dGNvbWUvX19pbml0X18ucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [src/outcome/\_async.py](https://codecov.io/gh/python-trio/outcome/pull/11/diff?src=pr&el=tree#diff-c3JjL291dGNvbWUvX2FzeW5jLnB5) | `86.66% <55.55%> (-13.34%)` | :arrow_down: |
| [src/outcome/\_sync.py](https://codecov.io/gh/python-trio/outcome/pull/11/diff?src=pr&el=tree#diff-c3JjL291dGNvbWUvX3N5bmMucHk=) | `92.85% <88.88%> (-7.15%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=footer). Last update [9df5122...4b5a233](https://codecov.io/gh/python-trio/outcome/pull/11?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
smurfix: Ah. I didn't know that you could split frozen and not-frozen simply by subclassing.
New commit prepared. | diff --git a/docs/source/api.rst b/docs/source/api.rst
index 0e06fdf..2e7f5cd 100644
--- a/docs/source/api.rst
+++ b/docs/source/api.rst
@@ -21,3 +21,5 @@ API Reference
.. autoclass:: Error
:members:
:inherited-members:
+
+.. autoclass:: AlreadyUsedError
diff --git a/docs/source/tutorial.rst b/docs/source/tutorial.rst
index bd61a13..220d6d4 100644
--- a/docs/source/tutorial.rst
+++ b/docs/source/tutorial.rst
@@ -25,4 +25,7 @@ which, like before, is the same as::
x = await f(*args, **kwargs)
+An Outcome object may not be unwrapped twice. Attempting to do so will
+raise an :class:`AlreadyUsedError`.
+
See the :ref:`api-reference` for the types involved.
diff --git a/newsfragments/7.feature.rst b/newsfragments/7.feature.rst
new file mode 100644
index 0000000..c6d3dc4
--- /dev/null
+++ b/newsfragments/7.feature.rst
@@ -0,0 +1,3 @@
+An Outcome may only be unwrapped or sent once.
+
+Attempting to do so a second time will raise an :class:`AlreadyUsedError`.
diff --git a/src/outcome/__init__.py b/src/outcome/__init__.py
index a90c3e9..4fe8b0e 100644
--- a/src/outcome/__init__.py
+++ b/src/outcome/__init__.py
@@ -8,12 +8,12 @@ import sys
if sys.version_info >= (3, 5):
from ._async import Error, Outcome, Value, acapture, capture
- __all__ = ('Error', 'Outcome', 'Value', 'acapture', 'capture')
+ __all__ = ('Error', 'Outcome', 'Value', 'acapture', 'capture', 'AlreadyUsedError')
else:
from ._sync import Error, Outcome, Value, capture
- __all__ = ('Error', 'Outcome', 'Value', 'capture')
+ __all__ = ('Error', 'Outcome', 'Value', 'capture', 'AlreadyUsedError')
-from ._util import fixup_module_metadata
+from ._util import fixup_module_metadata, AlreadyUsedError
fixup_module_metadata(__name__, globals())
del fixup_module_metadata
diff --git a/src/outcome/_async.py b/src/outcome/_async.py
index 293e5c0..a9ec1ba 100644
--- a/src/outcome/_async.py
+++ b/src/outcome/_async.py
@@ -4,6 +4,7 @@ from ._sync import (
Error as ErrorBase, Outcome as OutcomeBase, Value as ValueBase
)
+from ._util import AlreadyUsedError
__all__ = ['Error', 'Outcome', 'Value', 'acapture', 'capture']
@@ -49,11 +50,13 @@ class Outcome(OutcomeBase):
class Value(ValueBase):
async def asend(self, agen):
+ self._set_unwrapped()
return await agen.asend(self.value)
class Error(ErrorBase):
async def asend(self, agen):
+ self._set_unwrapped()
return await agen.athrow(self.error)
diff --git a/src/outcome/_sync.py b/src/outcome/_sync.py
index a74bbc3..02a1c09 100644
--- a/src/outcome/_sync.py
+++ b/src/outcome/_sync.py
@@ -4,7 +4,7 @@ from __future__ import absolute_import, division, print_function
import abc
import attr
-from ._util import ABC
+from ._util import ABC, AlreadyUsedError
__all__ = ['Error', 'Outcome', 'Value', 'capture']
@@ -21,7 +21,7 @@ def capture(sync_fn, *args, **kwargs):
except BaseException as exc:
return Error(exc)
-
[email protected](repr=False, init=False, slots=True)
class Outcome(ABC):
"""An abstract class representing the result of a Python computation.
@@ -37,7 +37,13 @@ class Outcome(ABC):
hashable.
"""
- __slots__ = ()
+ _unwrapped = attr.ib(default=False, cmp=False, init=False)
+
+ def _set_unwrapped(self):
+ if self._unwrapped:
+ raise AlreadyUsedError
+ object.__setattr__(self, '_unwrapped', True)
+
@abc.abstractmethod
def unwrap(self):
@@ -62,7 +68,7 @@ class Outcome(ABC):
"""
[email protected](frozen=True, repr=False)
[email protected](frozen=True, repr=False, slots=True)
class Value(Outcome):
"""Concrete :class:`Outcome` subclass representing a regular value.
@@ -75,13 +81,15 @@ class Value(Outcome):
return 'Value({!r})'.format(self.value)
def unwrap(self):
+ self._set_unwrapped()
return self.value
def send(self, gen):
+ self._set_unwrapped()
return gen.send(self.value)
[email protected](frozen=True, repr=False)
[email protected](frozen=True, repr=False, slots=True)
class Error(Outcome):
"""Concrete :class:`Outcome` subclass representing a raised exception.
@@ -94,7 +102,10 @@ class Error(Outcome):
return 'Error({!r})'.format(self.error)
def unwrap(self):
+ self._set_unwrapped()
raise self.error
def send(self, it):
+ self._set_unwrapped()
return it.throw(self.error)
+
diff --git a/src/outcome/_util.py b/src/outcome/_util.py
index a9ca4f5..ec7ce3d 100644
--- a/src/outcome/_util.py
+++ b/src/outcome/_util.py
@@ -6,6 +6,11 @@ import abc
import sys
+class AlreadyUsedError(RuntimeError):
+ """An Outcome may not be unwrapped twice."""
+ pass
+
+
def fixup_module_metadata(module_name, namespace):
def fix_one(obj):
mod = getattr(obj, "__module__", None)
| Should Outcome objects consume themselves when unwrapped?
From python-trio/trio#466:
@njsmith:
> It's generally a mistake to unwrap the same `Result` object twice, because if it's an exception you'll end up with a corrupted traceback as the same exception object gets raised in two unrelated call stacks.
>
> We should at least document this, and maybe we should make `unwrap` "consume" the object so that if you call it twice then the second time it raises an error. | python-trio/outcome | diff --git a/tests/test_async.py b/tests/test_async.py
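The consume-once design discussed in the problem statement can be sketched in a few lines. This is a simplified, hypothetical model for illustration only: the class names mirror the library, but the real patch builds on `attr`-based frozen classes and abstract base classes, and sets the flag via `object.__setattr__`.

```python
class AlreadyUsedError(RuntimeError):
    """An Outcome may not be unwrapped twice."""


class Outcome:
    def __init__(self):
        self._unwrapped = False

    def _set_unwrapped(self):
        # Guard shared by unwrap()/send(): the second use fails loudly
        # instead of re-raising an exception object whose traceback
        # would then span two unrelated call stacks.
        if self._unwrapped:
            raise AlreadyUsedError
        self._unwrapped = True


class Value(Outcome):
    def __init__(self, value):
        super().__init__()
        self.value = value

    def unwrap(self):
        self._set_unwrapped()
        return self.value


class Error(Outcome):
    def __init__(self, error):
        super().__init__()
        self.error = error

    def unwrap(self):
        self._set_unwrapped()
        raise self.error


v = Value(1)
print(v.unwrap())  # 1
try:
    v.unwrap()
except AlreadyUsedError:
    print("already used")
```

Note that because the exception is stored, an `Error` outcome has the same hazard: its first `unwrap()` raises the stored exception, and any later unwrap attempt hits the guard instead of corrupting the traceback.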
index 9dba615..606aa1c 100644
--- a/tests/test_async.py
+++ b/tests/test_async.py
@@ -5,7 +5,7 @@ import trio
from async_generator import async_generator, yield_
import outcome
-from outcome import Error, Value
+from outcome import Error, Value, AlreadyUsedError
pytestmark = pytest.mark.trio
@@ -38,8 +38,15 @@ async def test_asend():
my_agen = my_agen_func().__aiter__()
if sys.version_info < (3, 5, 2):
my_agen = await my_agen
+ v = Value("value")
+ e = Error(KeyError())
assert (await my_agen.asend(None)) == 1
- assert (await Value("value").asend(my_agen)) == 2
- assert (await Error(KeyError()).asend(my_agen)) == 3
+ assert (await v.asend(my_agen)) == 2
+ with pytest.raises(AlreadyUsedError):
+ await v.asend(my_agen)
+
+ assert (await e.asend(my_agen)) == 3
+ with pytest.raises(AlreadyUsedError):
+ await e.asend(my_agen)
with pytest.raises(StopAsyncIteration):
await my_agen.asend(None)
diff --git a/tests/test_sync.py b/tests/test_sync.py
index 73900bb..192ac21 100644
--- a/tests/test_sync.py
+++ b/tests/test_sync.py
@@ -5,7 +5,7 @@ import sys
import pytest
import outcome
-from outcome import Error, Value
+from outcome import Error, Value, AlreadyUsedError
def test_Outcome():
@@ -14,13 +14,21 @@ def test_Outcome():
assert v.unwrap() == 1
assert repr(v) == "Value(1)"
+ with pytest.raises(AlreadyUsedError):
+ v.unwrap()
+
+ v = Value(1)
+
exc = RuntimeError("oops")
e = Error(exc)
assert e.error is exc
with pytest.raises(RuntimeError):
e.unwrap()
+ with pytest.raises(AlreadyUsedError):
+ e.unwrap()
assert repr(e) == "Error({!r})".format(exc)
+ e = Error(exc)
with pytest.raises(TypeError):
Error("hello")
with pytest.raises(TypeError):
@@ -33,6 +41,8 @@ def test_Outcome():
it = iter(expect_1())
next(it)
assert v.send(it) == "ok"
+ with pytest.raises(AlreadyUsedError):
+ v.send(it)
def expect_RuntimeError():
with pytest.raises(RuntimeError):
@@ -42,6 +52,8 @@ def test_Outcome():
it = iter(expect_RuntimeError())
next(it)
assert e.send(it) == "ok"
+ with pytest.raises(AlreadyUsedError):
+ e.send(it)
def test_Outcome_eq_hash():
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 6
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"trio",
"pytest-trio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | async-generator==1.10
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
contextvars==2.4
coverage==6.2
idna==3.10
immutables==0.19
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
-e git+https://github.com/python-trio/outcome.git@9df51224a3e0efa3fffa0e950df1d4e680fa9c3e#egg=outcome
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
pytest-trio==0.7.0
sniffio==1.2.0
sortedcontainers==2.4.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
trio==0.19.0
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: outcome
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- async-generator==1.10
- contextvars==2.4
- coverage==6.2
- idna==3.10
- immutables==0.19
- pytest-cov==4.0.0
- pytest-trio==0.7.0
- sniffio==1.2.0
- sortedcontainers==2.4.0
- tomli==1.2.3
- trio==0.19.0
prefix: /opt/conda/envs/outcome
| [
"tests/test_async.py::test_asend",
"tests/test_sync.py::test_Outcome",
"tests/test_sync.py::test_Outcome_eq_hash",
"tests/test_sync.py::test_Value_compare",
"tests/test_sync.py::test_capture"
]
| [
"tests/test_async.py::test_acapture"
]
| []
| []
| MIT/Apache-2.0 Dual License | 2,416 | [
"src/outcome/__init__.py",
"src/outcome/_sync.py",
"docs/source/api.rst",
"docs/source/tutorial.rst",
"src/outcome/_util.py",
"src/outcome/_async.py",
"newsfragments/7.feature.rst"
]
| [
"src/outcome/__init__.py",
"src/outcome/_sync.py",
"docs/source/api.rst",
"docs/source/tutorial.rst",
"src/outcome/_util.py",
"src/outcome/_async.py",
"newsfragments/7.feature.rst"
]
|
F5Networks__f5-common-python-1427 | 7220c3db2f3a0b004968a524e6c1f4b91ca4f787 | 2018-04-16 23:38:46 | a97a2ef2abb9114a2152453671dee0ac2ae70d1f | diff --git a/f5-sdk-dist/scripts/build_exceptions.py b/f5-sdk-dist/scripts/build_exceptions.py
index f471647..e1b5e24 100644
--- a/f5-sdk-dist/scripts/build_exceptions.py
+++ b/f5-sdk-dist/scripts/build_exceptions.py
@@ -108,7 +108,7 @@ Raised when there is an issue producing the .rpm package for Redhat builds.
super(RedhatError, self).__init__(*args, **kargs)
-class TestError(BuildError):
+class ErrorInTest(BuildError):
"""TestError
An Error occurred during testing...
@@ -117,7 +117,7 @@ An Error occurred during testing...
def __init__(self, *args, **kargs):
# exception-specific logic here...
- super(TestError, self).__init__(*args, **kargs)
+ super(ErrorInTest, self).__init__(*args, **kargs)
# vim: set fileencoding=utf-8
diff --git a/f5/bigip/cm/system.py b/f5/bigip/cm/system.py
index dd9102f..7115d33 100644
--- a/f5/bigip/cm/system.py
+++ b/f5/bigip/cm/system.py
@@ -47,9 +47,9 @@ class Providers(OrganizingCollection):
class Tmos_s(Collection):
- def __init__(self, providers):
- super(Tmos_s, self).__init__(providers)
- if (self._meta_data['bigip']._meta_data['tmos_version'] < '13.1.0'):
+ def __init__(self, container):
+ super(Tmos_s, self).__init__(container)
+ if container._meta_data['bigip']._meta_data['tmos_version'] < '13.1.0':
# Starting from bigip v13.1.0 tmos resources 'kind' was changed
# from 'mcpremoteproviderstate' to 'authproviderstate'
# and tmos collection 'kind'
@@ -67,9 +67,9 @@ class Tmos_s(Collection):
class Tmos(Resource):
- def __init__(self, tokens):
- super(Tmos, self).__init__(tokens)
- if (self._meta_data['bigip']._meta_data['tmos_version'] < '13.1.0'):
+ def __init__(self, container):
+ super(Tmos, self).__init__(container)
+ if container._meta_data['bigip']._meta_data['tmos_version'] < '13.1.0':
# Starting from bigip v13.1.0 tmos resource 'kind' was changed
# from 'mcpremoteproviderstate' to 'authproviderstate'
self._meta_data['required_json_kind'] = 'cm:system:authn:providers:tmos:mcpremoteproviderstate'
diff --git a/f5/bigip/tm/security/__init__.py b/f5/bigip/tm/security/__init__.py
index 8dbd746..5b18b9a 100644
--- a/f5/bigip/tm/security/__init__.py
+++ b/f5/bigip/tm/security/__init__.py
@@ -31,6 +31,7 @@ from f5.bigip.resource import OrganizingCollection
from f5.bigip.tm.security.analytics import Analytics
from f5.bigip.tm.security.dos import Dos
from f5.bigip.tm.security.firewall import Firewall
+from f5.bigip.tm.security.log import Log
from f5.bigip.tm.security.protocol_inspection import Protocol_Inspection
@@ -39,4 +40,10 @@ class Security(OrganizingCollection):
def __init__(self, tm):
super(Security, self).__init__(tm)
- self._meta_data['allowed_lazy_attributes'] = [Dos, Firewall, Analytics, Protocol_Inspection]
+ self._meta_data['allowed_lazy_attributes'] = [
+ Analytics,
+ Dos,
+ Firewall,
+ Log,
+ Protocol_Inspection,
+ ]
diff --git a/f5/bigip/tm/security/log.py b/f5/bigip/tm/security/log.py
new file mode 100644
index 0000000..4e3ca72
--- /dev/null
+++ b/f5/bigip/tm/security/log.py
@@ -0,0 +1,354 @@
+# coding=utf-8
+#
+# Copyright 2018 F5 Networks Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from distutils.version import LooseVersion
+from f5.bigip.mixins import CheckExistenceMixin
+from f5.bigip.resource import Collection
+from f5.bigip.resource import OrganizingCollection
+from f5.bigip.resource import Resource
+from requests.exceptions import HTTPError
+from requests import Response
+
+
+class Log(OrganizingCollection):
+ def __init__(self, security):
+ super(Log, self).__init__(security)
+ self._meta_data['allowed_lazy_attributes'] = [
+ Profiles,
+ ]
+
+
+class Profiles(Collection):
+ def __init__(self, log):
+ super(Profiles, self).__init__(log)
+ self._meta_data['allowed_lazy_attributes'] = [Profile]
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:profilestate': Profile
+ }
+
+
+class Profile(Resource):
+ def __init__(self, profiles):
+ super(Profile, self).__init__(profiles)
+ self._meta_data['required_creation_parameters'].update(('partition',))
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:profilestate'
+ self._meta_data['allowed_lazy_attributes'] = []
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:application:applicationcollectionstate': Applications,
+ 'tm:security:log:profile:network:networkcollectionstate': Networks,
+ 'tm:security:log:profile:protocol-dns:protocol-dnscollectionstate': Protocol_Dns_s,
+ 'tm:security:log:profile:protocol-sip:protocol-sipcollectionstate': Protocol_Sips,
+ }
+
+
+class Applications(Collection):
+ def __init__(self, profile):
+ super(Applications, self).__init__(profile)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:application:applicationcollectionstate'
+ self._meta_data['allowed_lazy_attributes'] = [Application]
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:application:applicationstate': Application
+ }
+
+
+class Application(Resource, CheckExistenceMixin):
+ def __init__(self, applications):
+ super(Application, self).__init__(applications)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:application:applicationstate'
+ self.tmos_ver = self._meta_data['bigip'].tmos_version
+
+ def load(self, **kwargs):
+ """Custom load method to address issue in 11.6.0 Final,
+
+ where non existing objects would be True.
+ """
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._load_11_6(**kwargs)
+ else:
+ return super(Application, self)._load(**kwargs)
+
+ def _load_11_6(self, **kwargs):
+ """Must check if rule actually exists before proceeding with load."""
+ if self._check_existence_by_collection(self._meta_data['container'], kwargs['name']):
+ return super(Application, self)._load(**kwargs)
+ msg = 'The application resource named, {}, does not exist on the device.'.format(kwargs['name'])
+ resp = Response()
+ resp.status_code = 404
+ ex = HTTPError(msg, response=resp)
+ raise ex
+
+ def exists(self, **kwargs):
+ """Some objects when deleted still return when called by their
+
+ direct URI, this is a known issue in 11.6.0.
+ """
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._exists_11_6(**kwargs)
+ else:
+ return super(Application, self)._load(**kwargs)
+
+ def _exists_11_6(self, **kwargs):
+ """Check rule existence on device."""
+
+ return self._check_existence_by_collection(self._meta_data['container'], kwargs['name'])
+
+ def modify(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._modify(**kwargs)
+
+ def update(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._update(**kwargs)
+
+ def delete(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._delete(**kwargs)
+
+ def create(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._create(**kwargs)
+
+ def _mutate_name(self, kwargs):
+ partition = kwargs.pop('partition', None)
+ if partition is not None:
+ kwargs['name'] = '/{0}/{1}'.format(partition, kwargs['name'])
+ return kwargs
+
+
+class Networks(Collection):
+ def __init__(self, profile):
+ super(Networks, self).__init__(profile)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:application:applicationcollectionstate'
+ self._meta_data['allowed_lazy_attributes'] = [Network]
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:network:networkstate': Network
+ }
+
+
+class Network(Resource, CheckExistenceMixin):
+ def __init__(self, networks):
+ super(Network, self).__init__(networks)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:network:networkstate'
+ self.tmos_ver = self._meta_data['bigip'].tmos_version
+
+ def load(self, **kwargs):
+ """Custom load method to address issue in 11.6.0 Final,
+
+ where non existing objects would be True.
+ """
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._load_11_6(**kwargs)
+ else:
+ return super(Network, self)._load(**kwargs)
+
+ def _load_11_6(self, **kwargs):
+ """Must check if rule actually exists before proceeding with load."""
+ if self._check_existence_by_collection(self._meta_data['container'], kwargs['name']):
+ return super(Network, self)._load(**kwargs)
+ msg = 'The application resource named, {}, does not exist on the device.'.format(kwargs['name'])
+ resp = Response()
+ resp.status_code = 404
+ ex = HTTPError(msg, response=resp)
+ raise ex
+
+ def exists(self, **kwargs):
+ """Some objects when deleted still return when called by their
+
+ direct URI, this is a known issue in 11.6.0.
+ """
+
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._exists_11_6(**kwargs)
+ else:
+ return super(Network, self)._load(**kwargs)
+
+ def _exists_11_6(self, **kwargs):
+ """Check rule existence on device."""
+
+ return self._check_existence_by_collection(self._meta_data['container'], kwargs['name'])
+
+ def modify(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._modify(**kwargs)
+
+ def update(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._update(**kwargs)
+
+ def delete(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._delete(**kwargs)
+
+ def create(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._create(**kwargs)
+
+ def _mutate_name(self, kwargs):
+ partition = kwargs.pop('partition', None)
+ if partition is not None:
+ kwargs['name'] = '/{0}/{1}'.format(partition, kwargs['name'])
+ return kwargs
+
+
+class Protocol_Dns_s(Collection):
+ def __init__(self, profile):
+ super(Protocol_Dns_s, self).__init__(profile)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:protocol-dns:protocol-dnscollectionstate'
+ self._meta_data['allowed_lazy_attributes'] = [Protocol_Dns]
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:protocol-dns:protocol-dnsstate': Protocol_Dns
+ }
+
+
+class Protocol_Dns(Resource, CheckExistenceMixin):
+ def __init__(self, protocol_dns_s):
+ super(Protocol_Dns, self).__init__(protocol_dns_s)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:protocol-dns:protocol-dnsstate'
+ self.tmos_ver = self._meta_data['bigip'].tmos_version
+
+ def load(self, **kwargs):
+ """Custom load method to address issue in 11.6.0 Final,
+
+ where non existing objects would be True.
+ """
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._load_11_6(**kwargs)
+ else:
+ return super(Protocol_Dns, self)._load(**kwargs)
+
+ def _load_11_6(self, **kwargs):
+ """Must check if rule actually exists before proceeding with load."""
+ if self._check_existence_by_collection(self._meta_data['container'], kwargs['name']):
+ return super(Protocol_Dns, self)._load(**kwargs)
+ msg = 'The application resource named, {}, does not exist on the device.'.format(kwargs['name'])
+ resp = Response()
+ resp.status_code = 404
+ ex = HTTPError(msg, response=resp)
+ raise ex
+
+ def exists(self, **kwargs):
+ """Some objects when deleted still return when called by their
+
+ direct URI, this is a known issue in 11.6.0.
+ """
+
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._exists_11_6(**kwargs)
+ else:
+ return super(Protocol_Dns, self)._load(**kwargs)
+
+ def _exists_11_6(self, **kwargs):
+ """Check rule existence on device."""
+
+ return self._check_existence_by_collection(self._meta_data['container'], kwargs['name'])
+
+ def modify(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._modify(**kwargs)
+
+ def update(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._update(**kwargs)
+
+ def delete(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._delete(**kwargs)
+
+ def create(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._create(**kwargs)
+
+ def _mutate_name(self, kwargs):
+ partition = kwargs.pop('partition', None)
+ if partition is not None:
+ kwargs['name'] = '/{0}/{1}'.format(partition, kwargs['name'])
+ return kwargs
+
+
+class Protocol_Sips(Collection):
+ def __init__(self, profile):
+ super(Protocol_Sips, self).__init__(profile)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:protocol-sip:protocol-sipcollectionstate'
+ self._meta_data['allowed_lazy_attributes'] = [Protocol_Sip]
+ self._meta_data['attribute_registry'] = {
+ 'tm:security:log:profile:protocol-sip:protocol-sipstate': Protocol_Sip
+ }
+
+
+class Protocol_Sip(Resource, CheckExistenceMixin):
+ def __init__(self, protocol_sips):
+ super(Protocol_Sip, self).__init__(protocol_sips)
+ self._meta_data['required_json_kind'] = 'tm:security:log:profile:protocol-sip:protocol-sipstate'
+ self.tmos_ver = self._meta_data['bigip'].tmos_version
+
+ def load(self, **kwargs):
+ """Custom load method to address issue in 11.6.0 Final,
+
+ where non existing objects would be True.
+ """
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._load_11_6(**kwargs)
+ else:
+ return super(Protocol_Sip, self)._load(**kwargs)
+
+ def _load_11_6(self, **kwargs):
+ """Must check if rule actually exists before proceeding with load."""
+ if self._check_existence_by_collection(self._meta_data['container'], kwargs['name']):
+ return super(Protocol_Sip, self)._load(**kwargs)
+ msg = 'The application resource named, {}, does not exist on the device.'.format(kwargs['name'])
+ resp = Response()
+ resp.status_code = 404
+ ex = HTTPError(msg, response=resp)
+ raise ex
+
+ def exists(self, **kwargs):
+ """Some objects when deleted still return when called by their
+
+ direct URI, this is a known issue in 11.6.0.
+ """
+
+ if LooseVersion(self.tmos_ver) == LooseVersion('11.6.0'):
+ return self._exists_11_6(**kwargs)
+ else:
+ return super(Protocol_Sip, self)._load(**kwargs)
+
+ def _exists_11_6(self, **kwargs):
+ """Check rule existence on device."""
+
+ return self._check_existence_by_collection(self._meta_data['container'], kwargs['name'])
+
+ def modify(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._modify(**kwargs)
+
+ def update(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._update(**kwargs)
+
+ def delete(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._delete(**kwargs)
+
+ def create(self, **kwargs):
+ kwargs = self._mutate_name(kwargs)
+ return self._create(**kwargs)
+
+ def _mutate_name(self, kwargs):
+ partition = kwargs.pop('partition', None)
+ if partition is not None:
+ kwargs['name'] = '/{0}/{1}'.format(partition, kwargs['name'])
+ return kwargs
diff --git a/tox.ini b/tox.ini
index 9eac500..75a83b6 100644
--- a/tox.ini
+++ b/tox.ini
@@ -20,7 +20,7 @@ basepython =
pycodestyle: python
coveralls: python
docs: python
-passenv = COVERALLS_REPO_TOKEN
+passenv = *
setenv =
PYTHONDONTWRITEBYTECODE = 1
deps =
@@ -51,7 +51,7 @@ commands =
py{27,35,36}-iwf-v2.1.0: py.test --bigip localhost --port 10443 -s -vv --release 2.1.0 {posargs:f5/iworkflow}
# Misc tests
- unit: pytest -k "not /functional/" -vv --cov {posargs:f5/}
+ unit: py.test -x -k "not /functional/" -s -vv --cov {posargs:f5}
pycodestyle: pycodestyle {posargs:f5}
flake: flake8 {posargs:f5}
coveralls: coveralls
| add security log profile endpoint
needed for ansible | F5Networks/f5-common-python | diff --git a/f5-sdk-dist/scripts/install_test.py b/f5-sdk-dist/scripts/install_test.py
index 37f7809..fd62ff3 100644
--- a/f5-sdk-dist/scripts/install_test.py
+++ b/f5-sdk-dist/scripts/install_test.py
@@ -44,7 +44,7 @@ from inspect import getframeinfo as gfi
from . import build_expectations
from . terminal import terminal
-from . build_exceptions import TestError
+from . build_exceptions import ErrorInTest
# Globals:
builds = build_expectations.Builds()
@@ -128,7 +128,7 @@ This object attr is immutable.
if opts:
if not isinstance(opts, tuple) or \
not isinstance(opts[0][0], OperationSettings):
- raise TestError(msg=err_msg, frame=gfi(cf()),
+ raise ErrorInTest(msg=err_msg, frame=gfi(cf()),
errnum=errno.ESPIPE)
self.__expected_opts = opts
@@ -180,7 +180,7 @@ tests that are lined up.
pkg_search = dist + "/rpms/build/*.rpm"
pkg_re = re.compile('el(\d+)\.noarch')
else:
- raise TestError(opt.os_type, frame=gfi(cf()),
+ raise ErrorInTest(opt.os_type, frame=gfi(cf()),
errnum=errno.ESPIPE,
msg='opt.os_type(%s) is not recognized!')
entropy = glob.glob(pkg_search)
@@ -208,7 +208,7 @@ tests that are lined up.
break
if not found:
self._set_failure_reason = \
- TestError(opt.os_type, opt.version,
+ ErrorInTest(opt.os_type, opt.version,
frame=gfi(cf()), errno=errno.ESPIPE,
msg='No pkg found built for')
print(str(self.failure_reason))
@@ -266,7 +266,7 @@ Outside:
msg = "Failed to build docker container to test with"
if frame:
self._set_failure_reason = \
- TestError(pkg.pkg, msg=msg, frame=frame, errnum=errno.ESPIPE)
+ ErrorInTest(pkg.pkg, msg=msg, frame=frame, errnum=errno.ESPIPE)
raise self.failure_reason
@@ -301,13 +301,13 @@ on its own.
try:
tests = InstallTest()
tests.execute_tests()
- except TestError as Error:
+ except ErrorInTest as Error:
print(str(Error))
exit(1)
traceback.print_exc()
except Exception as Error:
print(str(Error))
- TestError(Error, frame=gfi(cf()), errno=-1)
+ ErrorInTest(Error, frame=gfi(cf()), errno=-1)
print(str(Error))
traceback.print_exc()
diff --git a/f5/bigip/cm/test/unit/test_system.py b/f5/bigip/cm/test/unit/test_system.py
index 78fdd23..ad43de5 100644
--- a/f5/bigip/cm/test/unit/test_system.py
+++ b/f5/bigip/cm/test/unit/test_system.py
@@ -23,7 +23,13 @@ import pytest
@pytest.fixture
def FakeTmos():
mo = mock.MagicMock()
+ r = {'tmos_version': '11.6.0'}
+ m = mock.MagicMock()
+ m.__getitem__.side_effect = r.__getitem__
+ m.__iter__.side_effect = r.__iter__
+ mo._meta_data['bigip']._meta_data = m
resource = Tmos(mo)
+
return resource
diff --git a/f5/bigip/tm/security/test/functional/test_log.py b/f5/bigip/tm/security/test/functional/test_log.py
new file mode 100644
index 0000000..004aca8
--- /dev/null
+++ b/f5/bigip/tm/security/test/functional/test_log.py
@@ -0,0 +1,404 @@
+# Copyright 2018 F5 Networks Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import os
+import pytest
+import tempfile
+
+from f5.bigip.tm.security.log import Application
+from f5.bigip.tm.security.log import Network
+from f5.bigip.tm.security.log import Profile
+from f5.bigip.tm.security.log import Protocol_Dns
+from f5.bigip.tm.security.log import Protocol_Sip
+from requests.exceptions import HTTPError
+
+
[email protected](scope='function')
+def profile(mgmt_root):
+ file = tempfile.NamedTemporaryFile()
+ name = os.path.basename(file.name)
+ r1 = mgmt_root.tm.security.log.profiles.profile.create(
+ name=name, partition='Common'
+ )
+ yield r1
+ r1.delete()
+
+
[email protected](scope='function')
+def application_profile(mgmt_root):
+ file = tempfile.NamedTemporaryFile()
+ name = os.path.basename(file.name)
+ r1 = mgmt_root.tm.security.log.profiles.profile.create(
+ name=name, partition='Common'
+ )
+ r2 = r1.applications.application.create(name=name, partition='Common')
+ yield r2
+ r1.delete()
+
+
[email protected](scope='function')
+def network_profile(mgmt_root):
+ file = tempfile.NamedTemporaryFile()
+ name = os.path.basename(file.name)
+ r1 = mgmt_root.tm.security.log.profiles.profile.create(
+ name=name, partition='Common'
+ )
+ r2 = r1.networks.network.create(name=name, partition='Common')
+ yield r2
+ r1.delete()
+
+
[email protected](scope='function')
+def protocol_dns_profile(mgmt_root):
+ file = tempfile.NamedTemporaryFile()
+ name = os.path.basename(file.name)
+ r1 = mgmt_root.tm.security.log.profiles.profile.create(
+ name=name, partition='Common'
+ )
+ r2 = r1.protocol_dns_s.protocol_dns.create(name=name, partition='Common')
+ yield r2
+ r1.delete()
+
+
[email protected](scope='function')
+def protocol_sip_profile(mgmt_root):
+ file = tempfile.NamedTemporaryFile()
+ name = os.path.basename(file.name)
+ r1 = mgmt_root.tm.security.log.profiles.profile.create(
+ name=name, partition='Common'
+ )
+ r2 = r1.protocol_sips.protocol_sip.create(name=name, partition='Common')
+ yield r2
+ r1.delete()
+
+
+class TestProfileGeneral(object):
+ def test_create_req_args(self, profile):
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert profile.partition == 'Common'
+ assert profile.selfLink.startswith(URI)
+
+ def test_refresh(self, mgmt_root, profile):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = profile
+ r2 = rc.profile.load(name=profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.kind == r2.kind
+ assert r1.selfLink == r2.selfLink
+ assert not hasattr(r1, 'description')
+ assert not hasattr(r2, 'description')
+ r2.modify(description='my description')
+ assert hasattr(r2, 'description')
+ assert r2.description == 'my description'
+ r1.refresh()
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r1, 'description')
+ assert r1.description == r2.description
+
+ def test_delete(self, mgmt_root):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = rc.profile.create(name='delete_me', partition='Common')
+ r1.delete()
+ with pytest.raises(HTTPError) as err:
+ rc.profile.load(name='delete_me', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_no_object(self, mgmt_root):
+ rc = mgmt_root.tm.security.log.profiles
+ with pytest.raises(HTTPError) as err:
+ rc.profile.load(name='not_exists', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_and_update(self, mgmt_root, profile):
+ r1 = profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+ assert not hasattr(r1, 'description')
+ r1.description = 'my description'
+ r1.update()
+ assert hasattr(r1, 'description')
+ assert r1.description == 'my description'
+ rc = mgmt_root.tm.security.log.profiles
+ r2 = rc.profile.load(name=profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.partition == r2.partition
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r2, 'description')
+ assert r1.description == r2.description
+
+ def test_profiles_collection(self, mgmt_root, profile):
+ r1 = profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+
+ rc = mgmt_root.tm.security.log.profiles.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+ assert isinstance(rc[0], Profile)
+
+
+class TestProfileApplication(object):
+ def test_refresh(self, mgmt_root, application_profile):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = application_profile
+ r2 = rc.profile.load(name=application_profile.name, partition='Common')
+ r2 = r2.applications.application.load(name=application_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.kind == r2.kind
+ assert r1.selfLink == r2.selfLink
+ assert r1.guaranteeLogging == r2.guaranteeLogging
+ r2.modify(guaranteeLogging='disabled')
+ assert r2.guaranteeLogging == 'disabled'
+ r1.refresh()
+ assert r1.selfLink == r2.selfLink
+ assert r1.guaranteeLogging == r2.guaranteeLogging
+
+ def test_delete(self, profile):
+ r1 = profile.applications.application.create(name=profile.name, partition='Common')
+ r1.delete()
+ with pytest.raises(HTTPError) as err:
+ profile.applications.application.load(name=profile.name, partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_no_object(self, profile):
+ with pytest.raises(HTTPError) as err:
+ profile.applications.application.load(name='not_exists', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_and_update(self, mgmt_root, application_profile):
+ r1 = application_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+ r1.guaranteeLogging = 'disabled'
+ r1.update()
+ assert hasattr(r1, 'guaranteeLogging')
+ assert r1.guaranteeLogging == 'disabled'
+ rc = mgmt_root.tm.security.log.profiles.profile.load(name=application_profile.name, partition='Common')
+ r2 = rc.applications.application.load(name=application_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.partition == r2.partition
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r2, 'guaranteeLogging')
+ assert r1.guaranteeLogging == r2.guaranteeLogging
+
+ def test_profiles_collection(self, mgmt_root, application_profile):
+ r1 = application_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+
+ rc = mgmt_root.tm.security.log.profiles.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+
+ resource = next((x for x in rc if x.name == application_profile.name), None)
+
+ rc2 = resource.applications.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+
+ assert isinstance(rc2[0], Application)
+
+
+class TestProfileNetwork(object):
+ def test_refresh(self, mgmt_root, network_profile):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = network_profile
+ r2 = rc.profile.load(name=network_profile.name, partition='Common')
+ r2 = r2.networks.network.load(name=network_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.kind == r2.kind
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logIpErrors'] == r2.filter['logIpErrors']
+
+ r2.modify(filter={'logIpErrors': 'enabled'})
+ assert r2.filter['logIpErrors'] == 'enabled'
+ r1.refresh()
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logIpErrors'] == r2.filter['logIpErrors']
+
+ def test_delete(self, profile):
+ r1 = profile.networks.network.create(name=profile.name, partition='Common')
+ r1.delete()
+ with pytest.raises(HTTPError) as err:
+ profile.networks.network.load(name=profile.name, partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_no_object(self, profile):
+ with pytest.raises(HTTPError) as err:
+ profile.networks.network.load(name='not_exists', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_and_update(self, mgmt_root, network_profile):
+ r1 = network_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+ r1.filter['logIpErrors'] = 'enabled'
+ r1.update()
+ assert r1.filter['logIpErrors'] == 'enabled'
+ rc = mgmt_root.tm.security.log.profiles.profile.load(name=network_profile.name, partition='Common')
+ r2 = rc.networks.network.load(name=network_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.partition == r2.partition
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r2, 'filter')
+ assert r1.filter['logIpErrors'] == r2.filter['logIpErrors']
+
+ def test_profiles_collection(self, mgmt_root, network_profile):
+ r1 = network_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+
+ rc = mgmt_root.tm.security.log.profiles.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+
+ resource = next((x for x in rc if x.name == network_profile.name), None)
+
+ rc2 = resource.networks.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+ assert isinstance(rc2[0], Network)
+
+
+class TestProfileProtocolDns(object):
+ def test_refresh(self, mgmt_root, protocol_dns_profile):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = protocol_dns_profile
+ r2 = rc.profile.load(name=protocol_dns_profile.name, partition='Common')
+ r2 = r2.protocol_dns_s.protocol_dns.load(name=protocol_dns_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.kind == r2.kind
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logDnsDrop'] == r2.filter['logDnsDrop']
+
+ r2.modify(filter={'logDnsDrop': 'enabled'})
+ assert r2.filter['logDnsDrop'] == 'enabled'
+ r1.refresh()
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logDnsDrop'] == r2.filter['logDnsDrop']
+
+ def test_delete(self, profile):
+ r1 = profile.protocol_dns_s.protocol_dns.create(name=profile.name, partition='Common')
+ r1.delete()
+ with pytest.raises(HTTPError) as err:
+ profile.protocol_dns_s.protocol_dns.load(name=profile.name, partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_no_object(self, profile):
+ with pytest.raises(HTTPError) as err:
+ profile.protocol_dns_s.protocol_dns.load(name='not_exists', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_and_update(self, mgmt_root, protocol_dns_profile):
+ r1 = protocol_dns_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+ r1.filter['logDnsDrop'] = 'enabled'
+ r1.update()
+ assert r1.filter['logDnsDrop'] == 'enabled'
+ rc = mgmt_root.tm.security.log.profiles.profile.load(name=protocol_dns_profile.name, partition='Common')
+ r2 = rc.protocol_dns_s.protocol_dns.load(name=protocol_dns_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.partition == r2.partition
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r2, 'filter')
+ assert r1.filter['logDnsDrop'] == r2.filter['logDnsDrop']
+
+ def test_profiles_collection(self, mgmt_root, protocol_dns_profile):
+ r1 = protocol_dns_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+
+ rc = mgmt_root.tm.security.log.profiles.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+
+ resource = next((x for x in rc if x.name == protocol_dns_profile.name), None)
+
+ rc2 = resource.protocol_dns_s.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+ assert isinstance(rc2[0], Protocol_Dns)
+
+
+class TestProfileProtocolSip(object):
+ def test_refresh(self, mgmt_root, protocol_sip_profile):
+ rc = mgmt_root.tm.security.log.profiles
+ r1 = protocol_sip_profile
+ r2 = rc.profile.load(name=protocol_sip_profile.name, partition='Common')
+ r2 = r2.protocol_sips.protocol_sip.load(name=protocol_sip_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.kind == r2.kind
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logSipDrop'] == r2.filter['logSipDrop']
+
+ r2.modify(filter={'logSipDrop': 'enabled'})
+ assert r2.filter['logSipDrop'] == 'enabled'
+ r1.refresh()
+ assert r1.selfLink == r2.selfLink
+ assert r1.filter['logSipDrop'] == r2.filter['logSipDrop']
+
+ def test_delete(self, profile):
+ r1 = profile.protocol_sips.protocol_sip.create(name=profile.name, partition='Common')
+ r1.delete()
+ with pytest.raises(HTTPError) as err:
+ profile.protocol_sips.protocol_sip.load(name=profile.name, partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_no_object(self, profile):
+ with pytest.raises(HTTPError) as err:
+ profile.protocol_sips.protocol_sip.load(name='not_exists', partition='Common')
+ assert err.value.response.status_code == 404
+
+ def test_load_and_update(self, mgmt_root, protocol_sip_profile):
+ r1 = protocol_sip_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+ r1.filter['logSipDrop'] = 'enabled'
+ r1.update()
+ assert r1.filter['logSipDrop'] == 'enabled'
+ rc = mgmt_root.tm.security.log.profiles.profile.load(name=protocol_sip_profile.name, partition='Common')
+ r2 = rc.protocol_sips.protocol_sip.load(name=protocol_sip_profile.name, partition='Common')
+ assert r1.name == r2.name
+ assert r1.partition == r2.partition
+ assert r1.selfLink == r2.selfLink
+ assert hasattr(r2, 'filter')
+ assert r1.filter['logSipDrop'] == r2.filter['logSipDrop']
+
+ def test_profiles_collection(self, mgmt_root, protocol_sip_profile):
+ r1 = protocol_sip_profile
+ URI = 'https://localhost/mgmt/tm/security/log/profile/~Common~'
+ assert r1.partition == 'Common'
+ assert r1.selfLink.startswith(URI)
+
+ rc = mgmt_root.tm.security.log.profiles.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+
+ resource = next((x for x in rc if x.name == protocol_sip_profile.name), None)
+
+ rc2 = resource.protocol_sips.get_collection()
+ assert isinstance(rc, list)
+ assert len(rc)
+ assert isinstance(rc2[0], Protocol_Sip)
diff --git a/f5/bigip/tm/security/test/unit/test_log.py b/f5/bigip/tm/security/test/unit/test_log.py
new file mode 100644
index 0000000..98ed33c
--- /dev/null
+++ b/f5/bigip/tm/security/test/unit/test_log.py
@@ -0,0 +1,46 @@
+# Copyright 2018 F5 Networks Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import mock
+import pytest
+
+
+from f5.bigip import ManagementRoot
+from f5.bigip.tm.security.log import Profile
+from f5.sdk_exception import MissingRequiredCreationParameter
+
+
[email protected]
+def FakeProfile():
+ fake_col = mock.MagicMock()
+ fake_profile = Profile(fake_col)
+ return fake_profile
+
+
+class TestProfile(object):
+ def test_create_two(self, fakeicontrolsession):
+ b = ManagementRoot('192.168.1.1', 'admin', 'admin')
+ r1 = b.tm.security.log.profiles.profile
+ r2 = b.tm.security.log.profiles.profile
+ assert r1 is not r2
+
+ def test_create_no_args(self, FakeProfile):
+ with pytest.raises(MissingRequiredCreationParameter):
+ FakeProfile.create()
+
+ def test_create_mandatory_args_missing(self, fakeicontrolsession):
+ b = ManagementRoot('192.168.1.1', 'admin', 'admin')
+ with pytest.raises(MissingRequiredCreationParameter):
+ b.tm.security.log.profiles.profile.create(name='destined_to_fail')
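The functional tests in this patch repeatedly assert that resource selfLinks begin with `https://localhost/mgmt/tm/security/log/profile/~Common~`. A minimal, self-contained sketch of that URI scheme (this helper is hypothetical and not part of the SDK; it only mirrors the prefix the tests assert on):

```python
def log_profile_uri(partition, name):
    # Builds the selfLink prefix the functional tests assert on:
    # https://localhost/mgmt/tm/security/log/profile/~<partition>~<name>
    return ('https://localhost/mgmt/tm/security/log/profile/'
            '~{0}~{1}'.format(partition, name))


print(log_profile_uri('Common', 'my_log_profile'))
# https://localhost/mgmt/tm/security/log/profile/~Common~my_log_profile
```

The same `~partition~name` encoding applies to the sub-collection resources (applications, networks, protocol-dns, protocol-sip) nested under a profile.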
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 4
} | 3.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio",
"pytest-bdd",
"pytest-benchmark",
"pytest-randomly",
"responses",
"mock",
"hypothesis",
"freezegun",
"trustme",
"requests-mock",
"requests",
"tomlkit"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==25.3.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
coverage==7.8.0
cryptography==44.0.2
exceptiongroup==1.2.2
execnet==2.1.1
f5-icontrol-rest==1.3.13
-e git+https://github.com/F5Networks/f5-common-python.git@7220c3db2f3a0b004968a524e6c1f4b91ca4f787#egg=f5_sdk
freezegun==1.5.1
gherkin-official==29.0.0
hypothesis==6.130.5
idna==3.10
importlib_metadata==8.6.1
iniconfig==2.1.0
Mako==1.3.9
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
parse==1.20.2
parse_type==0.6.4
pluggy==1.5.0
py-cpuinfo==9.0.0
pycparser==2.22
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-bdd==8.1.0
pytest-benchmark==5.1.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-randomly==3.16.0
pytest-xdist==3.6.1
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requests==2.32.3
requests-mock==1.12.1
responses==0.25.7
six==1.17.0
sortedcontainers==2.4.0
tomli==2.2.1
tomlkit==0.13.2
trustme==1.2.1
typing_extensions==4.13.0
urllib3==2.3.0
zipp==3.21.0
| name: f5-common-python
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==25.3.0
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- coverage==7.8.0
- cryptography==44.0.2
- exceptiongroup==1.2.2
- execnet==2.1.1
- f5-icontrol-rest==1.3.13
- freezegun==1.5.1
- gherkin-official==29.0.0
- hypothesis==6.130.5
- idna==3.10
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- mako==1.3.9
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- parse==1.20.2
- parse-type==0.6.4
- pluggy==1.5.0
- py-cpuinfo==9.0.0
- pycparser==2.22
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-bdd==8.1.0
- pytest-benchmark==5.1.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-randomly==3.16.0
- pytest-xdist==3.6.1
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- requests==2.32.3
- requests-mock==1.12.1
- responses==0.25.7
- six==1.17.0
- sortedcontainers==2.4.0
- tomli==2.2.1
- tomlkit==0.13.2
- trustme==1.2.1
- typing-extensions==4.13.0
- urllib3==2.3.0
- zipp==3.21.0
prefix: /opt/conda/envs/f5-common-python
| [
"f5/bigip/cm/test/unit/test_system.py::TestTmos::test_update",
"f5/bigip/cm/test/unit/test_system.py::TestTmos::test_delete",
"f5/bigip/cm/test/unit/test_system.py::TestTmos::test_create",
"f5/bigip/cm/test/unit/test_system.py::TestTmos::test_modify",
"f5/bigip/tm/security/test/unit/test_log.py::TestProfile::test_create_no_args",
"f5/bigip/tm/security/test/unit/test_log.py::TestProfile::test_create_two",
"f5/bigip/tm/security/test/unit/test_log.py::TestProfile::test_create_mandatory_args_missing"
]
| []
| []
| []
| Apache License 2.0 | 2,417 | [
"f5/bigip/tm/security/__init__.py",
"f5-sdk-dist/scripts/build_exceptions.py",
"f5/bigip/cm/system.py",
"f5/bigip/tm/security/log.py",
"tox.ini"
]
| [
"f5/bigip/tm/security/__init__.py",
"f5-sdk-dist/scripts/build_exceptions.py",
"f5/bigip/cm/system.py",
"f5/bigip/tm/security/log.py",
"tox.ini"
]
|
|
TheFriendlyCoder__friendlypins-56 | 671c3a7d0546b2996f4b5c248621cc2899bad727 | 2018-04-17 01:48:23 | 671c3a7d0546b2996f4b5c248621cc2899bad727 | diff --git a/src/friendlypins/api.py b/src/friendlypins/api.py
index cff46b6..63b0360 100644
--- a/src/friendlypins/api.py
+++ b/src/friendlypins/api.py
@@ -17,20 +17,13 @@ class API(object): # pylint: disable=too-few-public-methods
self._log = logging.getLogger(__name__)
self._io = RestIO(personal_access_token)
- def get_user(self, username=None):
- """Gets all primitives associated with a particular Pinterest user
-
- :param str username:
- Optional name of a user to look up
- If not provided, the currently authentcated user will be returned
-
- :returns: Pinterest user with the given name
+ @property
+ def user(self):
+ """Gets all primitives associated with the authenticated user
+ :returns: currently authenticated pinterest user
:rtype: :class:`friendlypins.user.User`
"""
self._log.debug("Getting authenticated user details...")
- if username:
- raise NotImplementedError(
- "Querying arbitrary Pinerest users is not yet supported.")
fields = "id,username,first_name,last_name,bio,created_at,counts,image"
result = self._io.get("me", {"fields": fields})
diff --git a/src/friendlypins/utils/console_actions.py b/src/friendlypins/utils/console_actions.py
index 8789490..761d5b9 100644
--- a/src/friendlypins/utils/console_actions.py
+++ b/src/friendlypins/utils/console_actions.py
@@ -60,7 +60,7 @@ def download_thumbnails(api_token, board_name, output_folder, delete):
"""
log = logging.getLogger(__name__)
obj = API(api_token)
- user = obj.get_user()
+ user = obj.user
selected_board = None
for cur_board in user.boards:
@@ -97,7 +97,7 @@ def delete_board(api_token, board_name):
"""
log = logging.getLogger(__name__)
obj = API(api_token)
- user = obj.get_user()
+ user = obj.user
selected_board = None
for cur_board in user.boards:
| rename get_user to current_user
Since we currently only support retrieving data for the currently authenticated user, we should rename get_user to current_user and make it a property with no method parameters. | TheFriendlyCoder/friendlypins | diff --git a/unit_tests/test_api.py b/unit_tests/test_api.py
index b013c9e..8332b3f 100644
--- a/unit_tests/test_api.py
+++ b/unit_tests/test_api.py
@@ -21,7 +21,7 @@ def test_get_user():
mock_io.return_value = mock_obj
obj = API('abcd1234')
- result = obj.get_user()
+ result = obj.user
assert expected_url == result.url
assert expected_firstname == result.first_name
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
astroid==2.6.6
attrs==22.2.0
Babel==2.11.0
bleach==4.1.0
cachetools==4.2.4
certifi==2021.5.30
chardet==5.0.0
charset-normalizer==2.0.12
colorama==0.4.5
coverage==6.2
dateutils==0.6.12
distlib==0.3.9
docutils==0.18.1
filelock==3.4.1
-e git+https://github.com/TheFriendlyCoder/friendlypins.git@671c3a7d0546b2996f4b5c248621cc2899bad727#egg=friendlypins
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
isort==5.10.1
Jinja2==3.0.3
lazy-object-proxy==1.7.1
mando==0.7.1
MarkupSafe==2.0.1
mccabe==0.6.1
mock==5.2.0
packaging==21.3
Pillow==8.4.0
pkginfo==1.10.0
platformdirs==2.4.0
pluggy==1.0.0
py==1.11.0
Pygments==2.14.0
pylint==3.0.0a4
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
pytz==2025.2
radon==6.0.1
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
six==1.17.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml==0.10.2
tomli==1.2.3
tox==4.0.0a9
tqdm==4.64.1
twine==1.15.0
typed-ast==1.4.3
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.17.1
webencodings==0.5.1
wrapt==1.12.1
zipp==3.6.0
| name: friendlypins
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- astroid==2.6.6
- attrs==22.2.0
- babel==2.11.0
- bleach==4.1.0
- cachetools==4.2.4
- chardet==5.0.0
- charset-normalizer==2.0.12
- colorama==0.4.5
- coverage==6.2
- dateutils==0.6.12
- distlib==0.3.9
- docutils==0.18.1
- filelock==3.4.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- isort==5.10.1
- jinja2==3.0.3
- lazy-object-proxy==1.7.1
- mando==0.7.1
- markupsafe==2.0.1
- mccabe==0.6.1
- mock==5.2.0
- packaging==21.3
- pillow==8.4.0
- pkginfo==1.10.0
- platformdirs==2.4.0
- pluggy==1.0.0
- py==1.11.0
- pygments==2.14.0
- pylint==3.0.0a4
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- radon==6.0.1
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- toml==0.10.2
- tomli==1.2.3
- tox==4.0.0a9
- tqdm==4.64.1
- twine==1.15.0
- typed-ast==1.4.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.17.1
- webencodings==0.5.1
- wrapt==1.12.1
- zipp==3.6.0
prefix: /opt/conda/envs/friendlypins
| [
"unit_tests/test_api.py::test_get_user"
]
| []
| []
| []
| Apache License 2.0 | 2,418 | [
"src/friendlypins/api.py",
"src/friendlypins/utils/console_actions.py"
]
| [
"src/friendlypins/api.py",
"src/friendlypins/utils/console_actions.py"
]
|
|
google__mobly-437 | d8e4c34b46d4bd0f2aa328823e543162933d76c0 | 2018-04-17 08:43:24 | 95286a01a566e056d44acfa9577a45bc7f37f51d | dthkao:
Review status: 0 of 2 files reviewed at latest revision, all discussions resolved.
---
*[mobly/controllers/android_device_lib/adb.py, line 279 at r1](https://beta.reviewable.io/reviews/google/mobly/437#-LAIsFf7-05sU6C6Hx5K:-LAIsFf7-05sU6C6Hx5L:b1zt8o6) ([raw file](https://github.com/google/mobly/blob/93853c8ba9cc06560e500141a68f3fa9039824ab/mobly/controllers/android_device_lib/adb.py#L279)):*
> ```Python
>
> def __getattr__(self, name):
> def adb_call(args=None, shell=False, timeout=None, return_all=False):
> ```
I'm not a huge fan of signatures changing based on a flag. Is there a way we can make this an init-level setting instead of per-call?
---
*Comments from [Reviewable](https://beta.reviewable.io/reviews/google/mobly/437)*
<!-- Sent from Reviewable.io -->
xpconanfan:
Review status: 0 of 2 files reviewed at latest revision, 1 unresolved discussion.
---
*[mobly/controllers/android_device_lib/adb.py, line 279 at r1](https://beta.reviewable.io/reviews/google/mobly/437#-LAIsFf7-05sU6C6Hx5K:-LAIv4JN3Pnm3xAhzfbd:bwk8scm) ([raw file](https://github.com/google/mobly/blob/93853c8ba9cc06560e500141a68f3fa9039824ab/mobly/controllers/android_device_lib/adb.py#L279)):*
<details><summary><i>Previously, dthkao (David T.H. Kao) wrote…</i></summary><blockquote>
I'm not a huge fan of signatures changing based on a flag. Is there a way we can make this an init-level setting instead of per-call?
</blockquote></details>
I thought of the same thing initially.
However, that wouldn't work, since it would break any util that makes adb calls and expects a single output, and there are quite a lot of those.
---
| diff --git a/mobly/controllers/android_device_lib/adb.py b/mobly/controllers/android_device_lib/adb.py
index 432f08e..12c14bd 100644
--- a/mobly/controllers/android_device_lib/adb.py
+++ b/mobly/controllers/android_device_lib/adb.py
@@ -138,7 +138,7 @@ class AdbProxy(object):
def __init__(self, serial=''):
self.serial = serial
- def _exec_cmd(self, args, shell, timeout):
+ def _exec_cmd(self, args, shell, timeout, stderr):
"""Executes adb commands.
Args:
@@ -148,6 +148,8 @@ class AdbProxy(object):
False to invoke it directly. See subprocess.Popen() docs.
timeout: float, the number of seconds to wait before timing out.
If not specified, no timeout takes effect.
+ stderr: a Byte stream, like io.BytesIO, stderr of the command will
+ be written to this object if provided.
Returns:
The output of the adb command run if exit code is 0.
@@ -169,6 +171,8 @@ class AdbProxy(object):
raise AdbTimeoutError(cmd=args, timeout=timeout)
(out, err) = proc.communicate()
+ if stderr:
+ stderr.write(err)
ret = proc.returncode
logging.debug('cmd: %s, stdout: %s, stderr: %s, ret: %s',
cli_cmd_to_string(args), out, err, ret)
@@ -177,7 +181,7 @@ class AdbProxy(object):
else:
raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)
- def _exec_adb_cmd(self, name, args, shell, timeout):
+ def _exec_adb_cmd(self, name, args, shell, timeout, stderr):
if shell:
# Add quotes around "adb" in case the ADB path contains spaces. This
# is pretty common on Windows (e.g. Program Files).
@@ -195,7 +199,9 @@ class AdbProxy(object):
adb_cmd.append(args)
else:
adb_cmd.extend(args)
- return self._exec_cmd(adb_cmd, shell=shell, timeout=timeout)
+ out = self._exec_cmd(
+ adb_cmd, shell=shell, timeout=timeout, stderr=stderr)
+ return out
def getprop(self, prop_name):
"""Get a property of the device.
@@ -273,7 +279,7 @@ class AdbProxy(object):
return self.shell(instrumentation_command)
def __getattr__(self, name):
- def adb_call(args=None, shell=False, timeout=None):
+ def adb_call(args=None, shell=False, timeout=None, stderr=None):
"""Wrapper for an ADB command.
Args:
@@ -283,6 +289,8 @@ class AdbProxy(object):
False to invoke it directly. See subprocess.Proc() docs.
timeout: float, the number of seconds to wait before timing out.
If not specified, no timeout takes effect.
+ stderr: a Byte stream, like io.BytesIO, stderr of the command
+ will be written to this object if provided.
Returns:
The output of the adb command run if exit code is 0.
@@ -290,6 +298,6 @@ class AdbProxy(object):
args = args or ''
clean_name = name.replace('_', '-')
return self._exec_adb_cmd(
- clean_name, args, shell=shell, timeout=timeout)
+ clean_name, args, shell=shell, timeout=timeout, stderr=stderr)
return adb_call
| Propagate stderr from adb commands
The current Mobly adb proxy does not propagate stderr when the return code is zero.
We thought this was OK, since Android fixed its return-code issues in M.
But it turns out many Chinese manufacturers did not fix this on devices sold in China.
To better support those devices, and any others with the same return-code problem, we need to surface stderr. | google/mobly | diff --git a/mobly/base_instrumentation_test.py b/mobly/base_instrumentation_test.py
index 4966cd4..bb72075 100644
--- a/mobly/base_instrumentation_test.py
+++ b/mobly/base_instrumentation_test.py
@@ -927,7 +927,7 @@ class BaseInstrumentationTestClass(base_test.BaseTestClass):
package=package,
options=options,
runner=runner,
- )
+ ).decode('utf-8')
logging.info('Outputting instrumentation test log...')
logging.info(instrumentation_output)
@@ -935,5 +935,5 @@ class BaseInstrumentationTestClass(base_test.BaseTestClass):
instrumentation_block = _InstrumentationBlock(prefix=prefix)
for line in instrumentation_output.splitlines():
instrumentation_block = self._parse_line(instrumentation_block,
- line.decode('utf-8'))
+ line)
return self._finish_parsing(instrumentation_block)
diff --git a/mobly/base_test.py b/mobly/base_test.py
index 8b761fa..e4e047b 100644
--- a/mobly/base_test.py
+++ b/mobly/base_test.py
@@ -26,6 +26,7 @@ from mobly import expects
from mobly import records
from mobly import signals
from mobly import runtime_test_info
+from mobly import utils
# Macro strings for test result reporting
TEST_CASE_TOKEN = '[Test]'
@@ -351,7 +352,7 @@ class BaseTestClass(object):
content: dict, the data to add to summary file.
"""
if 'timestamp' not in content:
- content['timestamp'] = time.time()
+ content['timestamp'] = utils.get_current_epoch_time()
self.summary_writer.dump(content,
records.TestSummaryEntryType.USER_DATA)
diff --git a/tests/mobly/base_instrumentation_test_test.py b/tests/mobly/base_instrumentation_test_test.py
index 2256475..3908015 100755
--- a/tests/mobly/base_instrumentation_test_test.py
+++ b/tests/mobly/base_instrumentation_test_test.py
@@ -34,6 +34,17 @@ MOCK_PREFIX = 'my_prefix'
# A mock name for the instrumentation test subclass.
MOCK_INSTRUMENTATION_TEST_CLASS_NAME = 'MockInstrumentationTest'
+MOCK_EMPTY_INSTRUMENTATION_TEST = """\
+INSTRUMENTATION_RESULT: stream=
+
+Time: 0.001
+
+OK (0 tests)
+
+
+INSTRUMENTATION_CODE: -1
+"""
+
class MockInstrumentationTest(BaseInstrumentationTestClass):
def __init__(self, tmp_dir, user_params={}):
@@ -229,18 +240,21 @@ INSTRUMENTATION_STATUS_CODE: -1
instrumentation_output, expected_has_error=True)
def test_run_instrumentation_test_with_no_tests(self):
- instrumentation_output = """\
-INSTRUMENTATION_RESULT: stream=
-
-Time: 0.001
-
-OK (0 tests)
-
+ instrumentation_output = MOCK_EMPTY_INSTRUMENTATION_TEST
+ self.assert_run_instrumentation_test(
+ instrumentation_output, expected_completed_and_passed=True)
-INSTRUMENTATION_CODE: -1
-"""
+ @unittest.skipUnless(
+ sys.version_info >= (3, 0),
+ 'Only python3 displays different string types differently.')
+ @mock.patch('logging.info')
+ def test_run_instrumentation_test_logs_correctly(self, mock_info_logger):
+ instrumentation_output = MOCK_EMPTY_INSTRUMENTATION_TEST
self.assert_run_instrumentation_test(
instrumentation_output, expected_completed_and_passed=True)
+ for mock_call in mock_info_logger.mock_calls:
+ logged_format = mock_call[1][0]
+ self.assertIsInstance(logged_format, str)
def test_run_instrumentation_test_with_passing_test(self):
instrumentation_output = """\
diff --git a/tests/mobly/controllers/android_device_lib/adb_test.py b/tests/mobly/controllers/android_device_lib/adb_test.py
index 9eb3ab8..7bf61ab 100755
--- a/tests/mobly/controllers/android_device_lib/adb_test.py
+++ b/tests/mobly/controllers/android_device_lib/adb_test.py
@@ -12,11 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import io
import mock
+import subprocess
from collections import OrderedDict
from future.tests.base import unittest
-
from mobly.controllers.android_device_lib import adb
# Mock parameters for instrumentation.
@@ -42,6 +43,9 @@ MOCK_OPTIONS_INSTRUMENTATION_COMMAND = ('am instrument -r -w -e option1 value1'
# Mock Shell Command
MOCK_SHELL_COMMAND = 'ls'
MOCK_COMMAND_OUTPUT = '/system/bin/ls'.encode('utf-8')
+MOCK_DEFAULT_STDOUT = 'out'
+MOCK_DEFAULT_STDERR = 'err'
+MOCK_DEFAULT_COMMAND_OUTPUT = MOCK_DEFAULT_STDOUT.encode('utf-8')
MOCK_ADB_SHELL_COMMAND_CHECK = 'adb shell command -v ls'
@@ -58,7 +62,8 @@ class AdbTest(unittest.TestCase):
mock_psutil_process.return_value = mock.Mock()
mock_proc.communicate = mock.Mock(
- return_value=('out'.encode('utf-8'), 'err'.encode('utf-8')))
+ return_value=(MOCK_DEFAULT_STDOUT.encode('utf-8'),
+ MOCK_DEFAULT_STDERR.encode('utf-8')))
mock_proc.returncode = 0
return (mock_psutil_process, mock_popen)
@@ -68,9 +73,9 @@ class AdbTest(unittest.TestCase):
mock_Popen):
self._mock_process(mock_psutil_process, mock_Popen)
- reply = adb.AdbProxy()._exec_cmd(
- ['fake_cmd'], shell=False, timeout=None)
- self.assertEqual('out', reply.decode('utf-8'))
+ out = adb.AdbProxy()._exec_cmd(
+ ['fake_cmd'], shell=False, timeout=None, stderr=None)
+ self.assertEqual(MOCK_DEFAULT_STDOUT, out.decode('utf-8'))
@mock.patch('mobly.controllers.android_device_lib.adb.subprocess.Popen')
@mock.patch('mobly.controllers.android_device_lib.adb.psutil.Process')
@@ -81,7 +86,8 @@ class AdbTest(unittest.TestCase):
with self.assertRaisesRegex(adb.AdbError,
'Error executing adb cmd .*'):
- adb.AdbProxy()._exec_cmd(['fake_cmd'], shell=False, timeout=None)
+ adb.AdbProxy()._exec_cmd(
+ ['fake_cmd'], shell=False, timeout=None, stderr=None)
@mock.patch('mobly.controllers.android_device_lib.adb.subprocess.Popen')
@mock.patch('mobly.controllers.android_device_lib.adb.psutil.Process')
@@ -89,8 +95,9 @@ class AdbTest(unittest.TestCase):
mock_popen):
self._mock_process(mock_psutil_process, mock_popen)
- reply = adb.AdbProxy()._exec_cmd(['fake_cmd'], shell=False, timeout=1)
- self.assertEqual('out', reply.decode('utf-8'))
+ out = adb.AdbProxy()._exec_cmd(
+ ['fake_cmd'], shell=False, timeout=1, stderr=None)
+ self.assertEqual(MOCK_DEFAULT_STDOUT, out.decode('utf-8'))
@mock.patch('mobly.controllers.android_device_lib.adb.subprocess.Popen')
@mock.patch('mobly.controllers.android_device_lib.adb.psutil.Process')
@@ -104,7 +111,8 @@ class AdbTest(unittest.TestCase):
with self.assertRaisesRegex(adb.AdbTimeoutError,
'Timed out executing command "fake_cmd" '
'after 0.1s.'):
- adb.AdbProxy()._exec_cmd(['fake_cmd'], shell=False, timeout=0.1)
+ adb.AdbProxy()._exec_cmd(
+ ['fake_cmd'], shell=False, timeout=0.1, stderr=None)
@mock.patch('mobly.controllers.android_device_lib.adb.subprocess.Popen')
@mock.patch('mobly.controllers.android_device_lib.adb.psutil.Process')
@@ -113,66 +121,100 @@ class AdbTest(unittest.TestCase):
self._mock_process(mock_psutil_process, mock_popen)
with self.assertRaisesRegex(adb.Error,
'Timeout is not a positive value: -1'):
- adb.AdbProxy()._exec_cmd(['fake_cmd'], shell=False, timeout=-1)
+ adb.AdbProxy()._exec_cmd(
+ ['fake_cmd'], shell=False, timeout=-1, stderr=None)
def test_exec_adb_cmd(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().shell(['arg1', 'arg2'])
mock_exec_cmd.assert_called_once_with(
- ['adb', 'shell', 'arg1', 'arg2'], shell=False, timeout=None)
+ ['adb', 'shell', 'arg1', 'arg2'],
+ shell=False,
+ timeout=None,
+ stderr=None)
+
+ def test_exec_adb_cmd_with_serial(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy('12345').shell(['arg1', 'arg2'])
mock_exec_cmd.assert_called_once_with(
['adb', '-s', '12345', 'shell', 'arg1', 'arg2'],
shell=False,
- timeout=None)
+ timeout=None,
+ stderr=None)
def test_exec_adb_cmd_with_shell_true(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().shell('arg1 arg2', shell=True)
mock_exec_cmd.assert_called_once_with(
- '"adb" shell arg1 arg2', shell=True, timeout=None)
+ '"adb" shell arg1 arg2', shell=True, timeout=None, stderr=None)
+
+ def test_exec_adb_cmd_with_shell_true_with_serial(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy('12345').shell('arg1 arg2', shell=True)
mock_exec_cmd.assert_called_once_with(
- '"adb" -s "12345" shell arg1 arg2', shell=True, timeout=None)
+ '"adb" -s "12345" shell arg1 arg2',
+ shell=True,
+ timeout=None,
+ stderr=None)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.subprocess.Popen')
+ @mock.patch('mobly.controllers.android_device_lib.adb.psutil.Process')
+ def test_exec_adb_cmd_with_stderr_pipe(self, mock_psutil_process,
+ mock_popen):
+ self._mock_process(mock_psutil_process, mock_popen)
+ stderr_redirect = io.BytesIO()
+ out = adb.AdbProxy().shell(
+ 'arg1 arg2', shell=True, stderr=stderr_redirect)
+ self.assertEqual(MOCK_DEFAULT_STDOUT, out.decode('utf-8'))
+ self.assertEqual(MOCK_DEFAULT_STDERR,
+ stderr_redirect.getvalue().decode('utf-8'))
def test_instrument_without_parameters(self):
"""Verifies the AndroidDevice object's instrument command is correct in
the basic case.
"""
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().instrument(MOCK_INSTRUMENTATION_PACKAGE)
mock_exec_cmd.assert_called_once_with(
['adb', 'shell', MOCK_BASIC_INSTRUMENTATION_COMMAND],
shell=False,
- timeout=None)
+ timeout=None,
+ stderr=None)
def test_instrument_with_runner(self):
"""Verifies the AndroidDevice object's instrument command is correct
with a runner specified.
"""
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().instrument(
MOCK_INSTRUMENTATION_PACKAGE,
runner=MOCK_INSTRUMENTATION_RUNNER)
mock_exec_cmd.assert_called_once_with(
['adb', 'shell', MOCK_RUNNER_INSTRUMENTATION_COMMAND],
shell=False,
- timeout=None)
+ timeout=None,
+ stderr=None)
def test_instrument_with_options(self):
"""Verifies the AndroidDevice object's instrument command is correct
with options.
"""
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().instrument(
MOCK_INSTRUMENTATION_PACKAGE,
options=MOCK_INSTRUMENTATION_OPTIONS)
mock_exec_cmd.assert_called_once_with(
['adb', 'shell', MOCK_OPTIONS_INSTRUMENTATION_COMMAND],
shell=False,
- timeout=None)
+ timeout=None,
+ stderr=None)
def test_cli_cmd_to_string(self):
cmd = ['"adb"', 'a b', 'c//']
@@ -182,11 +224,13 @@ class AdbTest(unittest.TestCase):
def test_has_shell_command_called_correctly(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
adb.AdbProxy().has_shell_command(MOCK_SHELL_COMMAND)
mock_exec_cmd.assert_called_once_with(
['adb', 'shell', 'command', '-v', MOCK_SHELL_COMMAND],
shell=False,
- timeout=None)
+ timeout=None,
+ stderr=None)
def test_has_shell_command_with_existing_command(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
@@ -196,6 +240,7 @@ class AdbTest(unittest.TestCase):
def test_has_shell_command_with_missing_command_on_older_devices(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
mock_exec_cmd.side_effect = adb.AdbError(
MOCK_ADB_SHELL_COMMAND_CHECK, '', '', 0)
self.assertFalse(
@@ -203,6 +248,7 @@ class AdbTest(unittest.TestCase):
def test_has_shell_command_with_missing_command_on_newer_devices(self):
with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ mock_exec_cmd.return_value = MOCK_DEFAULT_COMMAND_OUTPUT
mock_exec_cmd.side_effect = adb.AdbError(
MOCK_ADB_SHELL_COMMAND_CHECK, '', '', 1)
self.assertFalse(
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 1.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.5",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
coverage==6.2
execnet==1.9.0
future==1.0.0
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/google/mobly.git@d8e4c34b46d4bd0f2aa328823e543162933d76c0#egg=mobly
mock==1.0.1
packaging==21.3
pluggy==1.0.0
portpicker==1.6.0
psutil==7.0.0
py==1.11.0
pyparsing==3.1.4
pyserial==3.5
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
pytz==2025.2
PyYAML==6.0.1
timeout-decorator==0.5.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: mobly
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- coverage==6.2
- execnet==1.9.0
- future==1.0.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- mock==1.0.1
- packaging==21.3
- pluggy==1.0.0
- portpicker==1.6.0
- psutil==7.0.0
- py==1.11.0
- pyparsing==3.1.4
- pyserial==3.5
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- pytz==2025.2
- pyyaml==6.0.1
- timeout-decorator==0.5.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/mobly
| [
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_stderr_pipe",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_error_no_timeout",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_no_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_timed_out",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_negative_timeout_value",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_called_correctly",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_options",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_runner",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_without_parameters"
]
| []
| [
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test__Instrumentation_block_set_key_on_multiple_equals_sign",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_parse_instrumentation_options_with_mixed_user_params",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_parse_instrumentation_options_with_no_instrumentation_params",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_parse_instrumentation_options_with_no_user_params",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_parse_instrumentation_options_with_only_instrumentation_params",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_logs_correctly",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_assumption_failure_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_crashed_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_crashing_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_failing_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_ignored_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_invalid_syntax",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_missing_runner",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_missing_test_package",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_multiple_tests",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_no_output",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_no_tests",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_passing_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_prefix_test",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_random_whitespace",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_runner_setup_crash",
"tests/mobly/base_instrumentation_test_test.py::BaseInstrumentationTestTest::test_run_instrumentation_test_with_runner_teardown_crash",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_cli_cmd_to_string",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_existing_command",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_newer_devices",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_older_devices"
]
| []
| Apache License 2.0 | 2,419 | [
"mobly/controllers/android_device_lib/adb.py"
]
| [
"mobly/controllers/android_device_lib/adb.py"
]
|
python-trio__trio-502 | 94d49f95ffba3634197c173b771dca80ebc70b08 | 2018-04-17 09:40:50 | 72dba90e31604c083a177978c40c4dd8570aee21 | diff --git a/trio/_path.py b/trio/_path.py
index 4b1bee16..7f777936 100644
--- a/trio/_path.py
+++ b/trio/_path.py
@@ -128,6 +128,28 @@ class Path(metaclass=AsyncAutoWrapperType):
self._wrapped = pathlib.Path(*args)
+ async def iterdir(self):
+ """
+ Like :meth:`pathlib.Path.iterdir`, but async.
+
+ This is an async method that returns a synchronous iterator, so you
+ use it like::
+
+ for subpath in await mypath.iterdir():
+ ...
+
+ Note that it actually loads the whole directory list into memory
+ immediately, during the initial call. (See `issue #501
+ <https://github.com/python-trio/trio/issues/501>`__ for discussion.)
+
+ """
+
+ def _load_items():
+ return list(self._wrapped.iterdir())
+
+ items = await trio.run_sync_in_worker_thread(_load_items)
+ return (Path(item) for item in items)
+
def __getattr__(self, name):
if name in self._forward:
value = getattr(self._wrapped, name)
| trio.Path.iterdir wrapping is broken
Since `pathlib.Path.iterdir` returns a generator that performs IO on each iteration, `trio.Path.iterdir` is currently broken: it only creates the generator asynchronously, which is pointless given that no IO happens at generator creation.
The solution would be to modify `trio.Path.iterdir` to return an async generator; however, this means creating a special case, since the current implementation is only an async wrapper around `pathlib.Path.iterdir`. | python-trio/trio | diff --git a/trio/tests/test_path.py b/trio/tests/test_path.py
index 6b9d1c15..1289cfa2 100644
--- a/trio/tests/test_path.py
+++ b/trio/tests/test_path.py
@@ -198,3 +198,17 @@ async def test_path_nonpath():
async def test_open_file_can_open_path(path):
async with await trio.open_file(path, 'w') as f:
assert f.name == fspath(path)
+
+
+async def test_iterdir(path):
+ # Populate a directory
+ await path.mkdir()
+ await (path / 'foo').mkdir()
+ await (path / 'bar.txt').write_bytes(b'')
+
+ entries = set()
+ for entry in await path.iterdir():
+ assert isinstance(entry, trio.Path)
+ entries.add(entry.name)
+
+ assert entries == {'bar.txt', 'foo'}
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"ipython",
"pyOpenSSL",
"trustme",
"pytest-faulthandler"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asttokens==3.0.0
async-generator==1.10
attrs==25.3.0
cffi==1.17.1
coverage==7.8.0
cryptography==44.0.2
decorator==5.2.1
exceptiongroup==1.2.2
executing==2.2.0
idna==3.10
iniconfig==2.1.0
ipython==8.18.1
jedi==0.19.2
matplotlib-inline==0.1.7
packaging==24.2
parso==0.8.4
pexpect==4.9.0
pluggy==1.5.0
prompt_toolkit==3.0.50
ptyprocess==0.7.0
pure_eval==0.2.3
pycparser==2.22
Pygments==2.19.1
pyOpenSSL==25.0.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-faulthandler==2.0.1
sortedcontainers==2.4.0
stack-data==0.6.3
tomli==2.2.1
traitlets==5.14.3
-e git+https://github.com/python-trio/trio.git@94d49f95ffba3634197c173b771dca80ebc70b08#egg=trio
trustme==1.2.1
typing_extensions==4.13.0
wcwidth==0.2.13
| name: trio
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asttokens==3.0.0
- async-generator==1.10
- attrs==25.3.0
- cffi==1.17.1
- coverage==7.8.0
- cryptography==44.0.2
- decorator==5.2.1
- exceptiongroup==1.2.2
- executing==2.2.0
- idna==3.10
- iniconfig==2.1.0
- ipython==8.18.1
- jedi==0.19.2
- matplotlib-inline==0.1.7
- packaging==24.2
- parso==0.8.4
- pexpect==4.9.0
- pluggy==1.5.0
- prompt-toolkit==3.0.50
- ptyprocess==0.7.0
- pure-eval==0.2.3
- pycparser==2.22
- pygments==2.19.1
- pyopenssl==25.0.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-faulthandler==2.0.1
- sortedcontainers==2.4.0
- stack-data==0.6.3
- tomli==2.2.1
- traitlets==5.14.3
- trio==0.4.0+dev
- trustme==1.2.1
- typing-extensions==4.13.0
- wcwidth==0.2.13
prefix: /opt/conda/envs/trio
| [
"trio/tests/test_path.py::test_iterdir"
]
| []
| [
"trio/tests/test_path.py::test_open_is_async_context_manager",
"trio/tests/test_path.py::test_magic",
"trio/tests/test_path.py::test_cmp_magic[Path-Path0]",
"trio/tests/test_path.py::test_cmp_magic[Path-Path1]",
"trio/tests/test_path.py::test_cmp_magic[Path-Path2]",
"trio/tests/test_path.py::test_div_magic[Path-Path0]",
"trio/tests/test_path.py::test_div_magic[Path-Path1]",
"trio/tests/test_path.py::test_div_magic[Path-str]",
"trio/tests/test_path.py::test_div_magic[str-Path]",
"trio/tests/test_path.py::test_forwarded_properties",
"trio/tests/test_path.py::test_async_method_signature",
"trio/tests/test_path.py::test_compare_async_stat_methods[is_dir]",
"trio/tests/test_path.py::test_compare_async_stat_methods[is_file]",
"trio/tests/test_path.py::test_invalid_name_not_wrapped",
"trio/tests/test_path.py::test_async_methods_rewrap[absolute]",
"trio/tests/test_path.py::test_async_methods_rewrap[resolve]",
"trio/tests/test_path.py::test_forward_methods_rewrap",
"trio/tests/test_path.py::test_forward_properties_rewrap",
"trio/tests/test_path.py::test_forward_methods_without_rewrap",
"trio/tests/test_path.py::test_repr",
"trio/tests/test_path.py::test_type_forwards_unsupported",
"trio/tests/test_path.py::test_type_wraps_unsupported",
"trio/tests/test_path.py::test_type_forwards_private",
"trio/tests/test_path.py::test_type_wraps_private",
"trio/tests/test_path.py::test_path_wraps_path[__init__]",
"trio/tests/test_path.py::test_path_wraps_path[joinpath]",
"trio/tests/test_path.py::test_path_nonpath",
"trio/tests/test_path.py::test_open_file_can_open_path"
]
| []
| MIT/Apache-2.0 Dual License | 2,420 | [
"trio/_path.py"
]
| [
"trio/_path.py"
]
|
|
TheFriendlyCoder__friendlypins-59 | eed1f246c388b9c1c92755d2c6dd77b5133a686c | 2018-04-18 00:31:38 | eed1f246c388b9c1c92755d2c6dd77b5133a686c | diff --git a/src/friendlypins/api.py b/src/friendlypins/api.py
index f8b7255..4b014c9 100644
--- a/src/friendlypins/api.py
+++ b/src/friendlypins/api.py
@@ -25,7 +25,18 @@ class API(object): # pylint: disable=too-few-public-methods
"""
self._log.debug("Getting authenticated user details...")
- fields = "id,username,first_name,last_name,bio,created_at,counts,image"
+ fields = ",".join([
+ "id",
+ "username",
+ "first_name",
+ "last_name",
+ "bio",
+ "created_at",
+ "counts",
+ "image",
+ "account_type",
+ "url"
+ ])
result = self._io.get("me", {"fields": fields})
assert 'data' in result
diff --git a/src/friendlypins/board.py b/src/friendlypins/board.py
index d8626f6..4118157 100644
--- a/src/friendlypins/board.py
+++ b/src/friendlypins/board.py
@@ -47,6 +47,14 @@ class Board(object):
"""
return self._data['name']
+ @property
+ def description(self):
+ """Gets the descriptive text associated with this board
+
+ :rtype: :class:`str`
+ """
+ return self._data['description']
+
@property
def url(self):
"""Web address for the UI associated with the dashboard
diff --git a/src/friendlypins/user.py b/src/friendlypins/user.py
index 42367b6..2230b65 100644
--- a/src/friendlypins/user.py
+++ b/src/friendlypins/user.py
@@ -109,7 +109,9 @@ class User(object):
"creator",
"created_at",
"counts",
- "image"
+ "image",
+ "reason",
+ "privacy"
])
}
@@ -119,6 +121,35 @@ class User(object):
for cur_item in cur_page['data']:
yield Board(cur_item, self._io)
+ def create_board(self, name, description=None):
+ """Creates a new board for the currently authenticated user
+
+ :param str name: name for the new board
+ :param str description: optional descriptive text for the board
+ :returns: reference to the newly created board
+ :rtype: :class:`friendlypins.board.Board`
+ """
+ properties = {
+ "fields": ','.join([
+ "id",
+ "name",
+ "url",
+ "description",
+ "creator",
+ "created_at",
+ "counts",
+ "image",
+ "reason",
+ "privacy"
+ ])
+ }
+
+ data = {"name": name}
+ if description:
+ data["description"] = description
+
+ result = self._io.post("boards", data, properties)
+ return Board(result['data'], self._io)
if __name__ == "__main__":
pass
diff --git a/src/friendlypins/utils/rest_io.py b/src/friendlypins/utils/rest_io.py
index 20456a5..ed7a77e 100644
--- a/src/friendlypins/utils/rest_io.py
+++ b/src/friendlypins/utils/rest_io.py
@@ -59,12 +59,44 @@ class RestIO(object):
properties["access_token"] = self._token
response = requests.get(temp_url, params=properties)
+
+ self._log.debug("Get response text is %s", response.text)
self._latest_header = Headers(response.headers)
self._log.debug("%s query header: %s", path, self._latest_header)
response.raise_for_status()
return response.json()
+ def post(self, path, data, properties=None):
+ """Posts API data to a given sub-path
+
+ :param str path: sub-path with in the REST API to send data to
+ :param dict data: form data to be posted to the API endpoint
+ :param dict properties:
+ optional set of request properties to append to the API call
+ :returns: json data returned from the API endpoint
+ :rtype: :class:`dict`
+ """
+ self._log.debug(
+ "Posting data from %s with options %s",
+ path,
+ properties
+ )
+ temp_url = "{0}/{1}/".format(self._root_url, path)
+
+ if properties is None:
+ properties = dict()
+ properties["access_token"] = self._token
+
+ response = requests.post(temp_url, data=data, params=properties)
+ self._latest_header = Headers(response.headers)
+ self._log.debug("%s query header: %s", path, self._latest_header)
+ self._log.debug("Post response text is %s", response.text)
+
+ response.raise_for_status()
+
+ return response.json()
+
def get_pages(self, path, properties=None):
"""Generator for iterating over paged results returned from API
| Add code to create new boards
The next logical progression in the API development is to add code for creating new boards for a particular authenticated user. | TheFriendlyCoder/friendlypins | diff --git a/unit_tests/test_rest_io.py b/unit_tests/test_rest_io.py
index b8b4b92..4be8eab 100644
--- a/unit_tests/test_rest_io.py
+++ b/unit_tests/test_rest_io.py
@@ -43,5 +43,30 @@ def test_get_headers(mock_requests):
assert tmp.bytes == expected_bytes
[email protected]("friendlypins.utils.rest_io.requests")
+def test_post(mock_requests):
+ obj = RestIO("1234abcd")
+ expected_path = "me/boards"
+ expected_data = {
+ "name": "My New Board",
+ "description": "Here is my cool description"
+ }
+
+ expected_results = {
+ "testing": "123"
+ }
+ mock_response = mock.MagicMock()
+ mock_requests.post.return_value = mock_response
+ mock_response.json.return_value = expected_results
+
+ res = obj.post(expected_path, expected_data)
+
+ mock_response.json.assert_called_once()
+ mock_requests.post.assert_called_once()
+
+ assert expected_path in mock_requests.post.call_args[0][0]
+ assert "data" in mock_requests.post.call_args[1]
+ assert mock_requests.post.call_args[1]["data"] == expected_data
+
if __name__ == "__main__":
pytest.main([__file__, "-v", "-s"])
diff --git a/unit_tests/test_user.py b/unit_tests/test_user.py
index 6ce221e..75d051e 100644
--- a/unit_tests/test_user.py
+++ b/unit_tests/test_user.py
@@ -63,6 +63,29 @@ def test_get_boards():
assert expected_name == result[0].name
assert expected_id == result[0].unique_id
+def test_create_board():
+ expected_name = "My Board"
+ expected_desc = "My new board is about this stuff..."
+ data = {
+ "id": "1234",
+ "first_name": "Jonh",
+ "last_name": "Doe"
+ }
+ mock_io = mock.MagicMock()
+ mock_io.post.return_value = {
+ "data": {
+ "name": expected_name,
+ "description": expected_desc,
+ "id": "12345"
+ }
+ }
+ obj = User(data, mock_io)
+
+ board = obj.create_board(expected_name, expected_desc)
+ mock_io.post.assert_called_once()
+ assert board is not None
+ assert board.name == expected_name
+ assert board.description == expected_desc
if __name__ == "__main__":
pytest.main([__file__, "-v", "-s"])
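The `call_args` assertions used in these tests follow the standard `unittest.mock` pattern, which can be sketched in isolation (the mock and its arguments below are invented for illustration):

```python
from unittest import mock

# Standalone sketch of the call_args inspection pattern used in test_post;
# call_args unpacks into the positional args tuple and the keyword-args dict.
client = mock.MagicMock()
client.post("me/boards", data={"name": "My New Board"},
            params={"access_token": "1234abcd"})

args, kwargs = client.post.call_args
print(args)            # -> ('me/boards',)
print(kwargs["data"])  # -> {'name': 'My New Board'}
```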
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 4
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"pip install wheel twine tox"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
astroid==2.6.6
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
Babel==2.11.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
colorama==0.4.5
coverage==6.2
cryptography==40.0.2
dateutils==0.6.12
distlib==0.3.9
docutils==0.18.1
filelock==3.4.1
-e git+https://github.com/TheFriendlyCoder/friendlypins.git@eed1f246c388b9c1c92755d2c6dd77b5133a686c#egg=friendlypins
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
isort==5.10.1
jeepney==0.7.1
Jinja2==3.0.3
keyring==23.4.1
lazy-object-proxy==1.7.1
mando==0.7.1
MarkupSafe==2.0.1
mccabe==0.6.1
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==8.4.0
pkginfo==1.10.0
platformdirs==2.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycparser==2.21
Pygments==2.14.0
pylint==3.0.0a4
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
pytz==2025.2
radon==6.0.1
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
rfc3986==1.5.0
SecretStorage==3.3.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
tox==3.28.0
tqdm==4.64.1
twine==1.15.0
typed-ast==1.4.3
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
virtualenv==20.17.1
webencodings==0.5.1
wrapt==1.12.1
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: friendlypins
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- astroid==2.6.6
- babel==2.11.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- colorama==0.4.5
- coverage==6.2
- cryptography==40.0.2
- dateutils==0.6.12
- distlib==0.3.9
- docutils==0.18.1
- filelock==3.4.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- isort==5.10.1
- jeepney==0.7.1
- jinja2==3.0.3
- keyring==23.4.1
- lazy-object-proxy==1.7.1
- mando==0.7.1
- markupsafe==2.0.1
- mccabe==0.6.1
- mock==5.2.0
- pillow==8.4.0
- pkginfo==1.10.0
- platformdirs==2.4.0
- pycparser==2.21
- pygments==2.14.0
- pylint==3.0.0a4
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- radon==6.0.1
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- rfc3986==1.5.0
- secretstorage==3.3.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==1.2.3
- tox==3.28.0
- tqdm==4.64.1
- twine==1.15.0
- typed-ast==1.4.3
- urllib3==1.26.20
- virtualenv==20.17.1
- webencodings==0.5.1
- wrapt==1.12.1
prefix: /opt/conda/envs/friendlypins
| [
"unit_tests/test_rest_io.py::test_post",
"unit_tests/test_user.py::test_create_board"
]
| []
| [
"unit_tests/test_rest_io.py::test_get_method",
"unit_tests/test_rest_io.py::test_get_headers",
"unit_tests/test_user.py::test_user_properties",
"unit_tests/test_user.py::test_get_boards"
]
| []
| Apache License 2.0 | 2,421 | [
"src/friendlypins/board.py",
"src/friendlypins/user.py",
"src/friendlypins/utils/rest_io.py",
"src/friendlypins/api.py"
]
| [
"src/friendlypins/board.py",
"src/friendlypins/user.py",
"src/friendlypins/utils/rest_io.py",
"src/friendlypins/api.py"
]
|
|
dwavesystems__dwave-cloud-client-126 | 31e170e1fc5df92bb7d57a45825379814c1aab84 | 2018-04-18 20:30:37 | 0314a6761ba389bb20ba48ef65476a286d1bf38c | diff --git a/dwave/cloud/cli.py b/dwave/cloud/cli.py
index d2a1e95..b4b37d9 100644
--- a/dwave/cloud/cli.py
+++ b/dwave/cloud/cli.py
@@ -8,9 +8,10 @@ from dwave.cloud import Client
from dwave.cloud.utils import readline_input
from dwave.cloud.package_info import __title__, __version__
from dwave.cloud.exceptions import (
- SolverAuthenticationError, InvalidAPIResponseError, UnsupportedSolverError)
+ SolverAuthenticationError, InvalidAPIResponseError, UnsupportedSolverError,
+ ConfigFileReadError, ConfigFileParseError)
from dwave.cloud.config import (
- load_config_from_files, get_default_config,
+ load_profile_from_files, load_config_from_files, get_default_config,
get_configfile_path, get_default_configfile_path,
get_configfile_paths)
@@ -64,6 +65,29 @@ def list_local_config():
click.echo(path)
+def inspect_config(ctx, param, value):
+ if not value or ctx.resilient_parsing:
+ return
+
+ config_file = ctx.params.get('config_file')
+ profile = ctx.params.get('profile')
+
+ try:
+ section = load_profile_from_files(
+ [config_file] if config_file else None, profile)
+
+ click.echo("Config file: {}".format(config_file if config_file else "auto-detected"))
+ click.echo("Profile: {}".format(profile if profile else "auto-detected"))
+ click.echo("---")
+ for key, val in section.items():
+ click.echo("{} = {}".format(key, val))
+
+ except (ValueError, ConfigFileReadError, ConfigFileParseError) as e:
+ click.echo(e)
+
+ ctx.exit()
+
+
@click.group()
@click.version_option(prog_name=__title__, version=__version__)
def cli():
@@ -71,10 +95,13 @@ def cli():
@cli.command()
-@click.option('--config-file', default=None, help='Config file path',
- type=click.Path(exists=False, dir_okay=False))
-@click.option('--profile', default=None,
+@click.option('--config-file', '-c', default=None, is_eager=True,
+ type=click.Path(exists=False, dir_okay=False),
+ help='Config file path')
+@click.option('--profile', '-p', default=None, is_eager=True,
help='Connection profile name (config section name)')
+@click.option('--inspect', is_flag=True, expose_value=False, callback=inspect_config,
+ help='Only inspect existing config/profile (no update)')
@click.option('--list-config-files', is_flag=True, callback=list_config_files,
expose_value=False, is_eager=True,
help='List paths of all config files detected on this system')
@@ -163,9 +190,9 @@ def configure(config_file, profile):
@cli.command()
-@click.option('--config-file', default=None, help='Config file path',
[email protected]('--config-file', '-c', default=None, help='Config file path',
type=click.Path(exists=True, dir_okay=False))
[email protected]('--profile', default=None, help='Connection profile name')
[email protected]('--profile', '-p', default=None, help='Connection profile name')
def ping(config_file, profile):
"""Ping the QPU by submitting a single-qubit problem."""
diff --git a/dwave/cloud/config.py b/dwave/cloud/config.py
index e61d1b1..ae2e9c1 100644
--- a/dwave/cloud/config.py
+++ b/dwave/cloud/config.py
@@ -18,7 +18,7 @@ def get_configfile_paths(system=True, user=True, local=True, only_existing=True)
Candidates examined depend on the OS, but for Linux possible list is:
``dwave.conf`` in CWD, user-local ``.config/dwave/``, system-wide
- ``/etc/dwave/``. For details, see :func:`load_config_from_file`.
+ ``/etc/dwave/``. For details, see :func:`load_config_from_files`.
Args:
system (boolean, default=True):
@@ -160,6 +160,70 @@ def load_config_from_files(filenames=None):
return config
+def load_profile_from_files(filenames=None, profile=None):
+ """Load config from a list of `filenames`, returning only section
+ defined with `profile`.
+
+ Note:
+ Config files and profile name are **not** read from process environment.
+
+ Args:
+ filenames (list[str], default=None):
+ D-Wave cloud client configuration file locations. Set to ``None`` to
+ auto-detect config files, as described in
+ :func:`load_config_from_files`.
+
+ profile (str, default=None):
+ Name of the profile to return from configuration read from config
+ file(s). Set to ``None`` fallback to ``profile`` key under
+ ``[defaults]`` section, or the first non-defaults section, or the
+ actual ``[defaults]`` section.
+
+ Returns:
+ dict:
+ Mapping of config keys to config values. If no valid config/profile
+ found, returns an empty dict.
+
+ Raises:
+ :exc:`~dwave.cloud.exceptions.ConfigFileReadError`:
+ Config file specified or detected could not be opened or read.
+
+ :exc:`~dwave.cloud.exceptions.ConfigFileParseError`:
+ Config file parse failed.
+
+ :exc:`ValueError`:
+ Profile name not found.
+ """
+
+ # progressively build config from a file, or a list of auto-detected files
+ # raises ConfigFileReadError/ConfigFileParseError on error
+ config = load_config_from_files(filenames)
+
+ # determine profile name fallback:
+ # (1) profile key under [defaults],
+ # (2) first non-[defaults] section
+ # (3) [defaults] section
+ first_section = next(iter(config.sections() + [None]))
+ config_defaults = config.defaults()
+ if not profile:
+ profile = config_defaults.get('profile', first_section)
+
+ if profile:
+ try:
+ section = dict(config[profile])
+ except KeyError:
+ raise ValueError("Config profile {!r} not found".format(profile))
+ else:
+ # as the very last resort (unspecified profile name and
+ # no profiles defined in config), try to use [defaults]
+ if config_defaults:
+ section = config_defaults
+ else:
+ section = {}
+
+ return section
+
+
def get_default_config():
config = configparser.ConfigParser(default_section="defaults")
config.read_string(u"""
@@ -204,7 +268,7 @@ def load_config(config_file=None, profile=None, client=None,
performed (looking first for ``dwave.conf`` in process' current working
directory, then in user-local config directories, and finally in system-wide
config dirs). For details on format and location detection, see
- :func:`load_config_from_file`.
+ :func:`load_config_from_files`.
If location of ``config_file`` is explicitly specified (via arguments or
environment variable), but the file does not exits, or is not readable,
@@ -327,51 +391,22 @@ def load_config(config_file=None, profile=None, client=None,
Config file parse failed.
"""
- def _get_section(filenames, profile):
- """Load config from a list of `filenames`, returning only section
- defined with `profile`."""
-
- # progressively build config from a file, or a list of auto-detected files
- # raises ConfigFileReadError/ConfigFileParseError on error
- config = load_config_from_files(filenames)
-
- # determine profile name fallback:
- # (1) profile key under [defaults],
- # (2) first non-[defaults] section
- first_section = next(iter(config.sections() + [None]))
- config_defaults = config.defaults()
- default_profile = config_defaults.get('profile', first_section)
-
- # select profile from the config
- if profile is None:
- profile = os.getenv("DWAVE_PROFILE", default_profile)
- if profile:
- try:
- section = dict(config[profile])
- except KeyError:
- raise ValueError("Config profile {!r} not found".format(profile))
- else:
- # as the very last resort (unspecified profile name and
- # no profiles defined in config), try to use [defaults]
- if config_defaults:
- section = config_defaults
- else:
- section = {}
-
- return section
+ if profile is None:
+ profile = os.getenv("DWAVE_PROFILE")
if config_file == False:
# skip loading from file altogether
section = {}
elif config_file == True:
# force auto-detection, disregarding DWAVE_CONFIG_FILE
- section = _get_section(None, profile)
+ section = load_profile_from_files(None, profile)
else:
# auto-detect if not specified with arg or env
if config_file is None:
# note: both empty and undefined DWAVE_CONFIG_FILE treated as None
config_file = os.getenv("DWAVE_CONFIG_FILE")
- section = _get_section([config_file] if config_file else None, profile)
+ section = load_profile_from_files(
+ [config_file] if config_file else None, profile)
# override a selected subset of values via env or kwargs,
# pass-through the rest unmodified
@@ -424,7 +459,7 @@ def legacy_load_config(profile=None, endpoint=None, token=None, solver=None,
profile-b|https://two.com,token-two
Assuming the new config file ``dwave.conf`` is not found (in any of the
- standard locations, see :meth:`~dwave.cloud.config.load_config_from_file`
+ standard locations, see :meth:`~dwave.cloud.config.load_config_from_files`
and :meth:`~dwave.cloud.config.load_config`), then:
>>> client = dwave.cloud.Client.from_config()
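The profile-fallback rule documented in `load_profile_from_files` — (1) `profile` key under `[defaults]`, (2) first non-defaults section, (3) the `[defaults]` section itself — can be exercised directly with `configparser`. The sample config below mirrors the one used in the tests:

```python
import configparser

# Sketch of the fallback rule with a sample config like the one in the tests.
config = configparser.ConfigParser(default_section="defaults")
config.read_string("""
[defaults]
endpoint = 1
[a]
endpoint = 2
[b]
token = 3
""")

first_section = next(iter(config.sections() + [None]))
profile = config.defaults().get("profile", first_section)
print(profile)                # no 'profile' key in [defaults] -> first section 'a'
print(dict(config[profile]))  # section values, merged over [defaults]
```

Accessing a section through the parser merges in the `[defaults]` values, which is why profile `b` above reports both `endpoint = 1` and `token = 3`.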
| Add config debug/show feature to dwave CLI | dwavesystems/dwave-cloud-client | diff --git a/tests/test_cli.py b/tests/test_cli.py
index 266ee25..c3522ab 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -76,6 +76,49 @@ class TestCli(unittest.TestCase):
])
self.assertEqual(result.output.strip(), './dwave.conf')
+ def test_configure_inspect(self):
+ runner = CliRunner()
+ with runner.isolated_filesystem():
+ config_file = 'dwave.conf'
+ with open(config_file, 'w') as f:
+ f.write('''
+ [defaults]
+ endpoint = 1
+ [a]
+ endpoint = 2
+ [b]
+ token = 3''')
+
+ # test auto-detected case
+ with mock.patch('dwave.cloud.config.get_configfile_paths',
+ lambda **kw: [config_file]):
+ result = runner.invoke(cli, [
+ 'configure', '--inspect'
+ ])
+ self.assertIn('endpoint = 2', result.output)
+
+ # test explicit config
+ result = runner.invoke(cli, [
+ 'configure', '--inspect', '--config-file', config_file
+ ])
+ self.assertIn('endpoint = 2', result.output)
+
+ # test explicit profile
+ result = runner.invoke(cli, [
+ 'configure', '--inspect', '--config-file', config_file,
+ '--profile', 'b'
+ ])
+ self.assertIn('endpoint = 1', result.output)
+ self.assertIn('token = 3', result.output)
+
+        # test eagerness of config-file and profile
+ result = runner.invoke(cli, [
+ 'configure', '--config-file', config_file,
+ '--profile', 'b', '--inspect'
+ ])
+ self.assertIn('endpoint = 1', result.output)
+ self.assertIn('token = 3', result.output)
+
def test_ping(self):
config_file = 'dwave.conf'
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 2
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"mock",
"requests_mock",
"coverage",
"coveralls"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
coveralls==3.3.1
docopt==0.6.2
-e git+https://github.com/dwavesystems/dwave-cloud-client.git@31e170e1fc5df92bb7d57a45825379814c1aab84#egg=dwave_cloud_client
homebase==1.0.1
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
mock==5.2.0
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pyreadline==2.1
PySocks==1.7.1
pytest==7.0.1
python-dateutil==2.9.0.post0
requests==2.27.1
requests-mock==1.12.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: dwave-cloud-client
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- coveralls==3.3.1
- docopt==0.6.2
- homebase==1.0.1
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- mock==5.2.0
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pyreadline==2.1
- pysocks==1.7.1
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- requests==2.27.1
- requests-mock==1.12.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/dwave-cloud-client
| [
"tests/test_cli.py::TestCli::test_configure_inspect"
]
| []
| [
"tests/test_cli.py::TestCli::test_configure",
"tests/test_cli.py::TestCli::test_configure_list_config",
"tests/test_cli.py::TestCli::test_ping"
]
| []
| Apache License 2.0 | 2,422 | [
"dwave/cloud/cli.py",
"dwave/cloud/config.py"
]
| [
"dwave/cloud/cli.py",
"dwave/cloud/config.py"
]
|
|
TheFriendlyCoder__friendlypins-63 | 4ce39e5b0d0c774670d08116b730bb7cabfcb8dd | 2018-04-18 23:09:26 | 4ce39e5b0d0c774670d08116b730bb7cabfcb8dd | diff --git a/src/friendlypins/scripts/fpins.py b/src/friendlypins/scripts/fpins.py
index 1338bec..f9656c7 100644
--- a/src/friendlypins/scripts/fpins.py
+++ b/src/friendlypins/scripts/fpins.py
@@ -14,7 +14,7 @@ def _download_thumbnails(args):
:returns: zero on success, non-zero on failure
:rtype: :class:`int`
"""
- return download_thumbnails(args.token, args.board, args.path, args.delete)
+ return download_thumbnails(args.token, args.board, args.path)
def _edit_board(args):
"""Callback for manipulating a Pinterest board
@@ -80,11 +80,6 @@ def get_args(args):
required=True,
help="Path to the folder where thumbnails are to be downloaded",
)
- thumbnails_cmd.add_argument(
- '--delete', '-d',
- action="store_true",
- help="Deletes each pin as it's thumbnail is downloaded"
- )
# Board manipulation sub-command
desc = 'Manipulates boards owned by the authenticated user'
diff --git a/src/friendlypins/utils/console_actions.py b/src/friendlypins/utils/console_actions.py
index 30ecd5f..943a7bb 100644
--- a/src/friendlypins/utils/console_actions.py
+++ b/src/friendlypins/utils/console_actions.py
@@ -48,13 +48,12 @@ def _download_pin(pin, folder):
return 0
-def download_thumbnails(api_token, board_name, output_folder, delete):
+def download_thumbnails(api_token, board_name, output_folder):
"""Downloads thumbnails of all pins on a board
:param str api_token: Authentication token for accessing the Pinterest API
:param str board_name: name of the board containing the pins to process
:param str output_folder: path where the thumbnails are to be downloaded
- :param bool delete: flag to delete pins as their thumbnails are downloaded
:returns:
status code describing the result of the action
zero on success, non-zero on failure
@@ -89,8 +88,6 @@ def download_thumbnails(api_token, board_name, output_folder, delete):
retval = _download_pin(cur_pin, output_folder)
if retval:
return retval
- if delete:
- cur_pin.delete()
pbar.update()
return 0
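The `--delete` flag being removed here was a plain argparse boolean switch; its behavior can be sketched in isolation (parser rebuilt from the lines removed in `fpins.py`):

```python
import argparse

# Minimal reconstruction of the boolean flag removed from the fpins parser.
parser = argparse.ArgumentParser()
parser.add_argument('--delete', '-d', action="store_true",
                    help="Deletes each pin as its thumbnail is downloaded")

print(parser.parse_args([]).delete)      # -> False (flag omitted)
print(parser.parse_args(['-d']).delete)  # -> True  (flag given)
```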
| rework dt command to delete boards
Instead of having the '-d' option delete each pin one at a time, we should rework it to delete the board once complete ... either that or remove the option completely and add a new command called "delete_board" to just delete a given board. | TheFriendlyCoder/friendlypins | diff --git a/unit_tests/test_console_actions.py b/unit_tests/test_console_actions.py
index 37ae68c..509de7d 100644
--- a/unit_tests/test_console_actions.py
+++ b/unit_tests/test_console_actions.py
@@ -74,7 +74,7 @@ def test_download_thumbnails(rest_io, action_requests, mock_open, mock_os):
mock_os.path.exists.return_value = False
# Flex our code
- result = download_thumbnails("1234abcd", expected_board_name, "/tmp", False)
+ result = download_thumbnails("1234abcd", expected_board_name, "/tmp")
# Make sure the call was successful, and that our mock APIs
# that must have executed as part of the process were called
@@ -85,86 +85,6 @@ def test_download_thumbnails(rest_io, action_requests, mock_open, mock_os):
mock_open.assert_called()
-@mock.patch("friendlypins.utils.console_actions.os")
-@mock.patch("friendlypins.utils.console_actions.open")
-@mock.patch("friendlypins.utils.console_actions.requests")
-@mock.patch("friendlypins.api.RestIO")
-def test_download_thumbnails_with_delete(rest_io, action_requests, mock_open, mock_os):
-
- # Fake user data for the user authenticating to Pinterest
- expected_user_data = {
- 'data': {
- 'url': 'https://www.pinterest.com/MyUserName/',
- 'first_name': "John",
- 'last_name': "Doe",
- 'id': "12345678"
- }
- }
-
- # Fake board data for the boards owned by the fake authenticated user
- expected_board_name = "MyBoard"
- expected_board_data = {
- "data": [{
- "id": "6789",
- "name": expected_board_name,
- "url": "https://www.pinterest.ca/MyName/MyBoard/",
- "counts": {
- "pins": 1
- }
- }]
- }
-
- # Fake pin data for the fake board, with fake thumbnail metadata
- expected_thumbnail_url = "https://i.pinimg.com/originals/1/2/3/abcd.jpg"
- expected_pin_data = {
- "data": [{
- "id": "1234",
- "url": "https://www.pinterest.ca/MyName/MyPin/",
- "note": "My Pin descriptive text",
- "link": "http://www.mysite.com/target",
- "media": {
- "type": "image"
- },
- "image": {
- "original": {
- "url": expected_thumbnail_url,
- "width": "800",
- "height": "600"
- }
- }
- }],
- "page": {
- "cursor": None
- }
- }
-
- # fake our Pinterest API data to flex our implementation logic
- mock_response = mock.MagicMock()
- mock_response.get.side_effect = [
- expected_user_data,
- ]
- mock_response.get_pages.side_effect = [
- [expected_board_data],
- [expected_pin_data]
- ]
- rest_io.return_value = mock_response
-
- # Make sure the code think's the output file where the
- # thumbnail is to be downloaded doesn't already exist
- mock_os.path.exists.return_value = False
-
- # Flex our code
- result = download_thumbnails("1234abcd", expected_board_name, "/tmp", True)
-
- # Make sure the call was successful, and that our mock APIs
- # that must have executed as part of the process were called
- assert result == 0
- action_requests.get.assert_called_once_with(expected_thumbnail_url, stream=True)
- mock_os.makedirs.assert_called()
- mock_os.path.exists.assert_called()
- mock_open.assert_called()
- mock_response.delete.assert_called_once()
-
@mock.patch("friendlypins.utils.console_actions.os")
@mock.patch("friendlypins.utils.console_actions.open")
@mock.patch("friendlypins.utils.console_actions.requests")
@@ -239,7 +159,7 @@ def test_download_thumbnails_error(rest_io, action_requests, mock_open, mock_os)
action_requests.get.return_value = mock_action_response
# Flex our code
- result = download_thumbnails("1234abcd", expected_board_name, "/tmp", False)
+ result = download_thumbnails("1234abcd", expected_board_name, "/tmp")
# Make sure the call was successful, and that our mock APIs
# that must have executed as part of the process were called
@@ -318,7 +238,7 @@ def test_download_thumbnails_missing_board(rest_io, action_requests, mock_open,
mock_os.path.exists.return_value = False
# Flex our code
- result = download_thumbnails("1234abcd", "FuBar", "/tmp", False)
+ result = download_thumbnails("1234abcd", "FuBar", "/tmp")
# Make sure the call was successful, and that our mock APIs
# that must have executed as part of the process were called
@@ -401,7 +321,7 @@ def test_download_thumbnails_exists(rest_io, action_requests, mock_open, mock_os
# Flex our code
output_folder = "/tmp"
- result = download_thumbnails("1234abcd", expected_board_name, output_folder, False)
+ result = download_thumbnails("1234abcd", expected_board_name, output_folder)
# Make sure the call was successful, and that our mock APIs
# that must have executed as part of the process were called
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"mock",
"pylint"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
astroid==2.11.7
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
Babel==2.11.0
bleach==4.1.0
cachetools==4.2.4
certifi==2021.5.30
chardet==5.0.0
charset-normalizer==2.0.12
colorama==0.4.5
coverage==6.2
dateutils==0.6.12
dill==0.3.4
distlib==0.3.9
docutils==0.18.1
filelock==3.4.1
-e git+https://github.com/TheFriendlyCoder/friendlypins.git@4ce39e5b0d0c774670d08116b730bb7cabfcb8dd#egg=friendlypins
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
isort==5.10.1
Jinja2==3.0.3
lazy-object-proxy==1.7.1
mando==0.7.1
MarkupSafe==2.0.1
mccabe==0.7.0
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==8.4.0
pkginfo==1.10.0
platformdirs==2.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
Pygments==2.14.0
pylint==2.13.9
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
pytz==2025.2
radon==6.0.1
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
six==1.17.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
tox==4.0.0a9
tqdm==4.64.1
twine==1.15.0
typed-ast==1.5.5
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
virtualenv==20.17.1
webencodings==0.5.1
wrapt==1.16.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: friendlypins
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- astroid==2.11.7
- babel==2.11.0
- bleach==4.1.0
- cachetools==4.2.4
- chardet==5.0.0
- charset-normalizer==2.0.12
- colorama==0.4.5
- coverage==6.2
- dateutils==0.6.12
- dill==0.3.4
- distlib==0.3.9
- docutils==0.18.1
- filelock==3.4.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- isort==5.10.1
- jinja2==3.0.3
- lazy-object-proxy==1.7.1
- mando==0.7.1
- markupsafe==2.0.1
- mccabe==0.7.0
- mock==5.2.0
- pillow==8.4.0
- pkginfo==1.10.0
- platformdirs==2.4.0
- pygments==2.14.0
- pylint==2.13.9
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- radon==6.0.1
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==1.2.3
- tox==4.0.0a9
- tqdm==4.64.1
- twine==1.15.0
- typed-ast==1.5.5
- urllib3==1.26.20
- virtualenv==20.17.1
- webencodings==0.5.1
- wrapt==1.16.0
prefix: /opt/conda/envs/friendlypins
| [
"unit_tests/test_console_actions.py::test_download_thumbnails",
"unit_tests/test_console_actions.py::test_download_thumbnails_error",
"unit_tests/test_console_actions.py::test_download_thumbnails_missing_board",
"unit_tests/test_console_actions.py::test_download_thumbnails_exists"
]
| []
| [
"unit_tests/test_console_actions.py::test_delete_board",
"unit_tests/test_console_actions.py::test_delete_missing_board"
]
| []
| Apache License 2.0 | 2,423 | [
"src/friendlypins/scripts/fpins.py",
"src/friendlypins/utils/console_actions.py"
]
| [
"src/friendlypins/scripts/fpins.py",
"src/friendlypins/utils/console_actions.py"
]
|
|
Azure__WALinuxAgent-1120 | dc6db7594f3c0ee24e69fb63b3ad05a7ac3c035d | 2018-04-19 04:27:01 | 6e9b985c1d7d564253a1c344bab01b45093103cd | diff --git a/azurelinuxagent/common/protocol/wire.py b/azurelinuxagent/common/protocol/wire.py
index 67a55203..5df35a98 100644
--- a/azurelinuxagent/common/protocol/wire.py
+++ b/azurelinuxagent/common/protocol/wire.py
@@ -1129,8 +1129,7 @@ class WireClient(object):
host = self.get_host_plugin()
uri, headers = host.get_artifact_request(blob)
- config = self.fetch(uri, headers, use_proxy=False)
- profile = self.decode_config(config)
+ profile = self.fetch(uri, headers, use_proxy=False)
if not textutil.is_str_none_or_whitespace(profile):
logger.verbose("Artifacts profile downloaded")
| Exception retrieving artifacts profile: TypeError: decoding str is not supported
I am tracking an annoying issue in the BVTs with the following signature.
```text
Exception retrieving artifacts profile: TypeError: decoding str is not supported
```
I increased debugging, and found this call stack.
```text
2018/04/18 06:03:51.919139 WARNING Exception retrieving artifacts profile: Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/azurelinuxagent/common/protocol/wire.py", line 1135, in get_artifacts_profile
profile = self.decode_config(config)
File "/usr/local/lib/python3.5/dist-packages/azurelinuxagent/common/protocol/wire.py", line 563, in decode_config
xml_text = ustr(data, encoding='utf-8')
TypeError: decoding str is not supported
```
When I read the code for `get_artifacts_profile`, I noticed that decode_config is called twice. The method `fetch` calls decode_config, and then `get_artifacts_profile` calls decode_config. It appears that you cannot call decode_config twice with the same data.
The method decode_config is confusing because it has variables like xml_text, but the data passed in this case is actually JSON. Azure does not do us any favors because some systems return XML and others return JSON. The agent could or should be smarter about this.
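The failure mode is easy to reproduce in isolation. On Python 3 the agent's `ustr` helper is effectively the built-in `str`, and a value that is already a `str` cannot be decoded a second time (a minimal sketch, independent of the agent code; variable names are assumptions):

```python
# First decode: bytes -> str works. Second decode: str -> str raises TypeError,
# which is exactly the error seen in the traceback above.
data = b'{"onHold": "true"}'
profile = str(data, encoding="utf-8")   # fetch() already did this step
try:
    str(profile, encoding="utf-8")      # get_artifacts_profile() decodes again
except TypeError as err:
    print(err)                          # decoding str is not supported
```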
The bug was introduced with d0b583cc. | Azure/WALinuxAgent | diff --git a/tests/protocol/test_wire.py b/tests/protocol/test_wire.py
index 34b82862..ffe72edf 100644
--- a/tests/protocol/test_wire.py
+++ b/tests/protocol/test_wire.py
@@ -342,7 +342,7 @@ class TestWireProtocol(AgentTestCase):
wire_protocol_client.ext_conf.artifacts_profile_blob = testurl
goal_state = GoalState(WireProtocolData(DATA_FILE).goal_state)
wire_protocol_client.get_goal_state = Mock(return_value=goal_state)
- wire_protocol_client.fetch = Mock(side_effect=[None, '{"onHold": "true"}'.encode('utf-8')])
+ wire_protocol_client.fetch = Mock(side_effect=[None, '{"onHold": "true"}'])
with patch.object(HostPluginProtocol,
"get_artifact_request",
return_value=['dummy_url', {}]) as artifact_request:
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 2.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "",
"pip_packages": [
"pyasn1",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.4",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
importlib-metadata==4.8.3
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyasn1==0.5.1
pyparsing==3.1.4
pytest==7.0.1
tomli==1.2.3
typing_extensions==4.1.1
-e git+https://github.com/Azure/WALinuxAgent.git@dc6db7594f3c0ee24e69fb63b3ad05a7ac3c035d#egg=WALinuxAgent
zipp==3.6.0
| name: WALinuxAgent
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyasn1==0.5.1
- pyparsing==3.1.4
- pytest==7.0.1
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/WALinuxAgent
| [
"tests/protocol/test_wire.py::TestWireProtocol::test_get_in_vm_artifacts_profile_host_ga_plugin"
]
| [
"tests/protocol/test_wire.py::TestWireProtocol::test_getters",
"tests/protocol/test_wire.py::TestWireProtocol::test_getters_ext_no_public",
"tests/protocol/test_wire.py::TestWireProtocol::test_getters_ext_no_settings",
"tests/protocol/test_wire.py::TestWireProtocol::test_getters_no_ext",
"tests/protocol/test_wire.py::TestWireProtocol::test_getters_with_stale_goal_state"
]
| [
"tests/protocol/test_wire.py::TestWireProtocol::test_call_storage_kwargs",
"tests/protocol/test_wire.py::TestWireProtocol::test_download_ext_handler_pkg_fallback",
"tests/protocol/test_wire.py::TestWireProtocol::test_fetch_manifest_fallback",
"tests/protocol/test_wire.py::TestWireProtocol::test_get_host_ga_plugin",
"tests/protocol/test_wire.py::TestWireProtocol::test_get_in_vm_artifacts_profile_blob_not_available",
"tests/protocol/test_wire.py::TestWireProtocol::test_get_in_vm_artifacts_profile_default",
"tests/protocol/test_wire.py::TestWireProtocol::test_get_in_vm_artifacts_profile_response_body_not_valid",
"tests/protocol/test_wire.py::TestWireProtocol::test_report_vm_status",
"tests/protocol/test_wire.py::TestWireProtocol::test_status_blob_parsing",
"tests/protocol/test_wire.py::TestWireProtocol::test_upload_status_blob_default",
"tests/protocol/test_wire.py::TestWireProtocol::test_upload_status_blob_host_ga_plugin",
"tests/protocol/test_wire.py::TestWireProtocol::test_upload_status_blob_reports_prepare_error",
"tests/protocol/test_wire.py::TestWireProtocol::test_upload_status_blob_unknown_type_assumes_block"
]
| []
| Apache License 2.0 | 2,424 | [
"azurelinuxagent/common/protocol/wire.py"
]
| [
"azurelinuxagent/common/protocol/wire.py"
]
|
|
ttu__ruuvitag-sensor-41 | c0d986391149d31d60d9649cfd9f3946db92a50c | 2018-04-19 15:39:25 | c0d986391149d31d60d9649cfd9f3946db92a50c | diff --git a/ruuvitag_sensor/ruuvi.py b/ruuvitag_sensor/ruuvi.py
index ffd6bc6..0dffc62 100644
--- a/ruuvitag_sensor/ruuvi.py
+++ b/ruuvitag_sensor/ruuvi.py
@@ -202,13 +202,12 @@ class RuuviTagSensor(object):
Returns:
string: Sensor data
"""
+ # Search of FF990403 (Manufacturer Specific Data (FF) / Ruuvi Innovations ltd (9904) / Format 3 (03))
try:
- if len(raw) != 54:
+ if "FF990403" not in raw:
return None
- if raw[16:18] != '03':
- return None
-
- return raw[16:]
+ payload_start = raw.index("FF990403") + 6;
+ return raw[payload_start:]
except:
return None
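The replacement logic in the patch above can be sketched against sample broadcast data (the hex string is taken from this project's own test patch; variable names mirror the diff but are otherwise assumptions):

```python
# Locate the Ruuvi data-format-3 payload by its marker rather than by a fixed
# length of 54: FF = Manufacturer Specific Data, 9904 = Ruuvi Innovations ltd,
# 03 = data format 3. This still works when trailing NULLs are trimmed.
raw = "1902010415FF990403291A1ECE1E02DEF94202CA0B53BB"  # FW 1.2.8 style, no padding
if "FF990403" in raw:
    payload_start = raw.index("FF990403") + 6  # skip FF9904, keep the 03 format byte
    payload = raw[payload_start:]
    print(payload[:2])  # "03" -> data format 3
```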
| Bug: incompatible with RuuviFW 1.2.8
The 1.2.8 update to Ruuvi Firmware trims extra NULLs at the end of transmission, which breaks the data format type check. I can fix this and implement #29. | ttu/ruuvitag-sensor | diff --git a/tests/test_decoder.py b/tests/test_decoder.py
index cd92d1d..639b71a 100644
--- a/tests/test_decoder.py
+++ b/tests/test_decoder.py
@@ -51,6 +51,16 @@ class TestDecoder(TestCase):
self.assertNotEqual(data['acceleration_y'], 0)
self.assertNotEqual(data['acceleration_z'], 0)
+ data = decoder.decode_data('03291A1ECE1EFC18F94202CA0B53BB')
+ self.assertEqual(data['temperature'], 26.3)
+ self.assertEqual(data['pressure'], 1027.66)
+ self.assertEqual(data['humidity'], 20.5)
+ self.assertEqual(data['battery'], 2899)
+ self.assertNotEqual(data['acceleration'], 0)
+ self.assertEqual(data['acceleration_x'], -1000)
+ self.assertNotEqual(data['acceleration_y'], 0)
+ self.assertNotEqual(data['acceleration_z'], 0)
+
def test_df3decode_is_valid_max_values(self):
decoder = Df3Decoder()
humidity = 'C8'
diff --git a/tests/test_ruuvitag_sensor.py b/tests/test_ruuvitag_sensor.py
index ac9e3bb..16fcbc0 100644
--- a/tests/test_ruuvitag_sensor.py
+++ b/tests/test_ruuvitag_sensor.py
@@ -47,7 +47,8 @@ class TestRuuviTagSensor(TestCase):
('CC:2C:6A:1E:59:3D', '1E0201060303AAFE1616AAFE10EE037275752E76692F23416A7759414D4663CD'),
('DD:2C:6A:1E:59:3D', '1E0201060303AAFE1616AAFE10EE037275752E76692F23416A7759414D4663CD'),
('EE:2C:6A:1E:59:3D', '1F0201060303AAFE1716AAFE10F9037275752E76692F23416A5558314D417730C3'),
- ('FF:2C:6A:1E:59:3D', '1902010415FF990403291A1ECE1E02DEF94202CA0B5300000000BB')
+ ('FF:2C:6A:1E:59:3D', '1902010415FF990403291A1ECE1E02DEF94202CA0B5300000000BB'),
+ ('00:2C:6A:1E:59:3D', '1902010415FF990403291A1ECE1E02DEF94202CA0B53BB')
]
for data in datas:
@@ -59,7 +60,7 @@ class TestRuuviTagSensor(TestCase):
get_datas)
def test_find_tags(self):
tags = RuuviTagSensor.find_ruuvitags()
- self.assertEqual(5, len(tags))
+ self.assertEqual(6, len(tags))
@patch('ruuvitag_sensor.ble_communication.BleCommunicationDummy.get_datas',
get_datas)
@@ -87,7 +88,7 @@ class TestRuuviTagSensor(TestCase):
def test_get_datas(self):
datas = []
RuuviTagSensor.get_datas(lambda x: datas.append(x))
- self.assertEqual(5, len(datas))
+ self.assertEqual(6, len(datas))
@patch('ruuvitag_sensor.ble_communication.BleCommunicationDummy.get_datas',
get_datas)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_issue_reference"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 0.10 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y bluez bluez-hcidump"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup==1.2.2
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
psutil==7.0.0
ptyprocess==0.7.0
pytest==8.3.5
-e git+https://github.com/ttu/ruuvitag-sensor.git@c0d986391149d31d60d9649cfd9f3946db92a50c#egg=ruuvitag_sensor
Rx==3.2.0
tomli==2.2.1
| name: ruuvitag-sensor
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- psutil==7.0.0
- ptyprocess==0.7.0
- pytest==8.3.5
- rx==3.2.0
- tomli==2.2.1
prefix: /opt/conda/envs/ruuvitag-sensor
| [
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_find_tags",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_get_datas"
]
| []
| [
"tests/test_decoder.py::TestDecoder::test_decode_is_valid",
"tests/test_decoder.py::TestDecoder::test_decode_is_valid_case2",
"tests/test_decoder.py::TestDecoder::test_decode_is_valid_weatherstation_2017_04_12",
"tests/test_decoder.py::TestDecoder::test_df3decode_is_valid",
"tests/test_decoder.py::TestDecoder::test_df3decode_is_valid_max_values",
"tests/test_decoder.py::TestDecoder::test_df3decode_is_valid_min_values",
"tests/test_decoder.py::TestDecoder::test_getcorrectdecoder",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_convert_data_not_valid",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_false_mac_raise_error",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_get_data_for_sensors",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_get_datas_with_macs",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_tag_correct_properties",
"tests/test_ruuvitag_sensor.py::TestRuuviTagSensor::test_tag_update_is_valid"
]
| []
| MIT License | 2,425 | [
"ruuvitag_sensor/ruuvi.py"
]
| [
"ruuvitag_sensor/ruuvi.py"
]
|
|
colinfike__easy-ptvsd-2 | 6ec7940a227939039464fb4e9beb48819470d8b4 | 2018-04-19 21:00:42 | 6ec7940a227939039464fb4e9beb48819470d8b4 | diff --git a/.gitignore b/.gitignore
index cb1663a..473dc19 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,4 @@
__pycache__
*.egg-info
dist
+*.pyc
diff --git a/easy_ptvsd.py b/easy_ptvsd.py
index 6af5619..538747f 100644
--- a/easy_ptvsd.py
+++ b/easy_ptvsd.py
@@ -30,9 +30,11 @@ class wait_and_break:
def __call__(self, function):
"""Run ptvsd code and continue with decorated function."""
+
def wait_and_break_deco(*args, **kwargs):
ptvsd.enable_attach(self.secret, address=self.address)
ptvsd.wait_for_attach()
ptvsd.break_into_debugger()
- function(*args, **kwargs)
+ return function(*args, **kwargs)
+
return wait_and_break_deco
diff --git a/setup.py b/setup.py
index 91fccaf..dac13b1 100644
--- a/setup.py
+++ b/setup.py
@@ -3,7 +3,7 @@ from setuptools import setup
setup(
name="easy_ptvsd",
- version="0.1.0",
+ version="0.1.1",
description="A convenience package for PTVSD.",
long_description=(
"EasyPtvsd is a convenience library that makes it a bit easy to remote"
@@ -16,16 +16,14 @@ setup(
author_email="[email protected]",
license="MIT",
classifiers=[
- 'Development Status :: 3 - Alpha',
- 'Intended Audience :: Developers',
- 'Topic :: Software Development',
- 'Programming Language :: Python :: 3',
+ "Development Status :: 3 - Alpha",
+ "Intended Audience :: Developers",
+ "Topic :: Software Development",
+ "Programming Language :: Python :: 3",
],
keywords="ptvsd easy python remote debugging",
- install_requires=[
- 'ptvsd==3.0.0',
- ],
- python_requires='>=3',
+ install_requires=["ptvsd==3.0.0"],
+ python_requires=">=3",
py_modules=["easy_ptvsd"],
packages=[],
)
| Strange functionality when used with @classmethod decorator
I'm seeing return values not being returned when using `wait_and_break` with the `@classmethod` decorator.
Actually the issue is definitely that I don't return the value the decorated function returns. | colinfike/easy-ptvsd | diff --git a/tests/test_easy_ptvsd.py b/tests/test_easy_ptvsd.py
index 945c629..63587e5 100644
--- a/tests/test_easy_ptvsd.py
+++ b/tests/test_easy_ptvsd.py
@@ -32,15 +32,16 @@ class TestWaitAndBreakClass(unittest.TestCase):
@patch("easy_ptvsd.ptvsd")
def test_decorated_function_wrapper_functionality(self, mock_ptvsd):
"""Test that the function returned by invoking wait_and_break is functional."""
- decorated_func_mock = Mock()
+ decorated_func_mock = Mock(return_value="ret val")
wait_and_break_obj = wait_and_break()
result = wait_and_break_obj(decorated_func_mock)
- result("positional_arg", key_word_arg="keywordarg")
+ return_value = result("positional_arg", key_word_arg="keywordarg")
self.assertTrue(mock_ptvsd.enable_attach.called)
self.assertTrue(mock_ptvsd.wait_for_attach.called)
self.assertTrue(mock_ptvsd.break_into_debugger.called)
+ self.assertEqual(return_value, "ret val")
decorated_func_mock.assert_called_once_with(
"positional_arg", key_word_arg="keywordarg"
)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 3
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.8",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/colinfike/easy-ptvsd.git@6ec7940a227939039464fb4e9beb48819470d8b4#egg=easy_ptvsd
exceptiongroup==1.2.2
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
ptvsd==3.0.0
pytest==8.3.5
python-dotenv==1.0.1
tomli==2.2.1
| name: easy-ptvsd
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=24.2=py38h06a4308_0
- python=3.8.20=he870216_0
- readline=8.2=h5eee18b_0
- setuptools=75.1.0=py38h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.44.0=py38h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- ptvsd==3.0.0
- pytest==8.3.5
- python-dotenv==1.0.1
- tomli==2.2.1
prefix: /opt/conda/envs/easy-ptvsd
| [
"tests/test_easy_ptvsd.py::TestWaitAndBreakClass::test_decorated_function_wrapper_functionality"
]
| []
| [
"tests/test_easy_ptvsd.py::TestWaitAndBreakClass::test_custom_init_parameters",
"tests/test_easy_ptvsd.py::TestWaitAndBreakClass::test_default_init_parameters",
"tests/test_easy_ptvsd.py::TestWaitAndBreakClass::test_invocation_returns_function"
]
| []
| MIT License | 2,426 | [
"setup.py",
".gitignore",
"easy_ptvsd.py"
]
| [
"setup.py",
".gitignore",
"easy_ptvsd.py"
]
|
|
sedders123__phial-29 | b786240e7834e6b5d98d447638144612869027c2 | 2018-04-19 21:06:41 | 8f6c931b420b4ad29fd8ea32164786c9c6d5f4ed | diff --git a/phial/bot.py b/phial/bot.py
index 9f25b14..9d7cc3c 100644
--- a/phial/bot.py
+++ b/phial/bot.py
@@ -1,6 +1,7 @@
from slackclient import SlackClient # type: ignore
import re
from typing import Dict, List, Pattern, Callable, Union, Tuple, Any # noqa
+import logging
from .globals import _command_ctx_stack, command, _global_ctx_stack
from .wrappers import Command, Response, Message, Attachment
@@ -17,12 +18,17 @@ class Phial():
'prefix': '!'
}
- def __init__(self, token: str, config: dict = default_config) -> None:
+ def __init__(self,
+ token: str,
+ config: dict = default_config,
+ logger: logging.Logger = logging.getLogger(__name__)) -> None:
self.slack_client = SlackClient(token)
self.commands = {} # type: Dict
self.middleware_functions = [] # type: List
self.config = config
self.running = False
+ self.logger = logger
+
_global_ctx_stack.push({})
@staticmethod
@@ -69,6 +75,8 @@ class Phial():
case_sensitive)
if command_pattern not in self.commands:
self.commands[command_pattern] = command_func
+ self.logger.debug("Command {0} added"
+ .format(command_pattern_template))
else:
raise ValueError('Command {0} already exists'
.format(command_pattern.split("<")[0]))
@@ -158,7 +166,7 @@ class Phial():
'''
def decorator(f: Callable) -> Callable:
- self.middleware_functions.append(f)
+ self.add_middleware(f)
return f
return decorator
@@ -183,6 +191,10 @@ class Phial():
middleware_func(func): The function to be added to the middleware
pipeline
'''
+ self.logger.debug("Middleware {0} added"
+ .format(getattr(middleware_func,
+ '__name__',
+ repr(middleware_func))))
self.middleware_functions.append(middleware_func)
def alias(self,
@@ -247,6 +259,8 @@ class Phial():
if output_list and len(output_list) > 0:
for output in output_list:
if(output and 'text' in output):
+ self.logger.debug("Message recieved from Slack: {0}"
+ .format(output))
bot_id = None
if 'bot_id' in output:
bot_id = output['bot_id']
@@ -359,7 +373,7 @@ class Phial():
response = self._handle_command(command)
self._execute_response(response)
except ValueError as err:
- print('ValueError: {}'.format(err))
+ self.logger.exception('ValueError: {}'.format(err))
finally:
_command_ctx_stack.pop()
@@ -370,7 +384,7 @@ class Phial():
if not slack_client.rtm_connect():
raise ValueError("Connection failed. Invalid Token or bot ID")
- print("Phial connected and running!")
+ self.logger.info("Phial connected and running!")
while self._is_running():
try:
message = self._parse_slack_output(slack_client
@@ -378,4 +392,4 @@ class Phial():
if message:
self._handle_message(message)
except Exception as e:
- print("Error: {0}".format(e))
+ self.logger.exception("Error: {0}".format(e))
| Add support for logger
Allow a user to pass in a `logger` and log when events of interest happen. | sedders123/phial | diff --git a/tests/test_bot.py b/tests/test_bot.py
index 93faed7..19d905c 100644
--- a/tests/test_bot.py
+++ b/tests/test_bot.py
@@ -4,7 +4,7 @@ from phial import Phial, command, Response, Attachment, g
import phial.wrappers
import phial.globals
import re
-from .helpers import captured_output, MockTrueFunc
+from .helpers import MockTrueFunc
class TestPhialBotIsRunning(unittest.TestCase):
@@ -580,12 +580,13 @@ class TestRun(TestPhialBot):
'user_id',
'timestamp')
self.bot._parse_slack_output = MagicMock(return_value=test_command)
- with captured_output() as (out, err):
- self.bot.run()
- output = out.getvalue().strip()
expected_msg = 'ValueError: Command "test" has not been registered'
- self.assertTrue(expected_msg in output)
+ with self.assertLogs(logger='phial.bot', level='ERROR') as cm:
+ self.bot.run()
+
+ error = cm.output[0]
+ self.assertIn(expected_msg, error)
class TestGlobalContext(unittest.TestCase):
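The `assertLogs` pattern in the test patch above replaces capturing stdout with capturing log records directly. A minimal self-contained illustration (the `DemoTest` class and the "demo" logger name are invented for this sketch, not part of phial):

```python
import logging
import unittest

class DemoTest(unittest.TestCase):
    def test_logs_error(self):
        # assertLogs fails the test unless a record >= ERROR is emitted on "demo",
        # and exposes the formatted records via cm.output.
        with self.assertLogs(logger="demo", level="ERROR") as cm:
            logging.getLogger("demo").error("ValueError: boom")
        self.assertIn("ValueError: boom", cm.output[0])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest))
print(result.wasSuccessful())  # True
```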
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": [],
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
coverage==7.2.7
exceptiongroup==1.2.2
execnet==2.0.2
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
packaging==24.0
-e git+https://github.com/sedders123/phial.git@b786240e7834e6b5d98d447638144612869027c2#egg=phial_slack
pluggy==1.2.0
pytest==7.4.4
pytest-asyncio==0.21.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-xdist==3.5.0
requests==2.31.0
six==1.17.0
slackclient==1.2.1
tomli==2.0.1
typing_extensions==4.7.1
urllib3==2.0.7
websocket-client==0.59.0
Werkzeug==0.12.2
zipp==3.15.0
| name: phial
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- charset-normalizer==3.4.1
- coverage==7.2.7
- exceptiongroup==1.2.2
- execnet==2.0.2
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- packaging==24.0
- pluggy==1.2.0
- pytest==7.4.4
- pytest-asyncio==0.21.2
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-xdist==3.5.0
- requests==2.31.0
- six==1.17.0
- slackclient==1.2.1
- tomli==2.0.1
- typing-extensions==4.7.1
- urllib3==2.0.7
- websocket-client==0.59.0
- werkzeug==0.12.2
- zipp==3.15.0
prefix: /opt/conda/envs/phial
| [
"tests/test_bot.py::TestRun::test_errors_with_invalid_command"
]
| [
"tests/test_bot.py::TestGlobalContext::test_global_context_fails_outside_app_context"
]
| [
"tests/test_bot.py::TestPhialBotIsRunning::test_returns_expected_value",
"tests/test_bot.py::TestCommandDecarator::test_command_decorator_calls_add_command",
"tests/test_bot.py::TestCommandDecarator::test_command_decorator_functionality",
"tests/test_bot.py::TestAliasDecarator::test_command_decorator_calls_add_command_case_insensitive",
"tests/test_bot.py::TestAliasDecarator::test_command_decorator_calls_add_command_case_sensitive",
"tests/test_bot.py::TestAliasDecarator::test_command_decorator_functionality",
"tests/test_bot.py::TestAddCommand::test_add_command_errors_on_duplicate_name",
"tests/test_bot.py::TestAddCommand::test_add_command_functionality",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_multiple_substition_case_sensitive",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_multiple_substition_ignore_case",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_no_substition_case_sensitive",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_no_substition_ignore_case",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_single_substition_case_sensitive",
"tests/test_bot.py::TestBuildCommandPattern::test_build_command_pattern_single_substition_ignore_case",
"tests/test_bot.py::TestGetCommandMatch::test_basic_functionality",
"tests/test_bot.py::TestGetCommandMatch::test_multi_substition_matching",
"tests/test_bot.py::TestGetCommandMatch::test_returns_none_correctly",
"tests/test_bot.py::TestGetCommandMatch::test_single_substition_matching",
"tests/test_bot.py::TestCreateCommand::test_basic_functionality",
"tests/test_bot.py::TestCreateCommand::test_basic_functionality_with_args",
"tests/test_bot.py::TestCreateCommand::test_errors_when_no_command_match",
"tests/test_bot.py::TestHandleCommand::test_handle_command_basic_functionality",
"tests/test_bot.py::TestCommandContextWorksCorrectly::test_command_context_pops_correctly",
"tests/test_bot.py::TestCommandContextWorksCorrectly::test_command_context_works_correctly",
"tests/test_bot.py::TestParseSlackOutput::test_basic_functionality",
"tests/test_bot.py::TestParseSlackOutput::test_returns_message_correctly_for_normal_message",
"tests/test_bot.py::TestParseSlackOutput::test_returns_message_with_bot_id_correctly",
"tests/test_bot.py::TestParseSlackOutput::test_returns_none_correctly_if_no_messages",
"tests/test_bot.py::TestSendMessage::test_send_message",
"tests/test_bot.py::TestSendMessage::test_send_reply",
"tests/test_bot.py::TestSendReaction::test_basic_functionality",
"tests/test_bot.py::TestUploadAttachment::test_basic_functionality",
"tests/test_bot.py::TestExecuteResponse::test_errors_on_invalid_response",
"tests/test_bot.py::TestExecuteResponse::test_errors_with_invalid_attachment",
"tests/test_bot.py::TestExecuteResponse::test_errors_with_reaction_and_reply",
"tests/test_bot.py::TestExecuteResponse::test_send_message",
"tests/test_bot.py::TestExecuteResponse::test_send_reaction",
"tests/test_bot.py::TestExecuteResponse::test_send_reply",
"tests/test_bot.py::TestExecuteResponse::test_send_string",
"tests/test_bot.py::TestExecuteResponse::test_upload_attachment",
"tests/test_bot.py::TestMiddleware::test_add_function",
"tests/test_bot.py::TestMiddleware::test_decarator",
"tests/test_bot.py::TestMiddleware::test_halts_message_when_none_returned",
"tests/test_bot.py::TestMiddleware::test_passes_on_message_correctly",
"tests/test_bot.py::TestHandleMessage::test_bot_message_does_not_trigger_command",
"tests/test_bot.py::TestRun::test_basic_functionality",
"tests/test_bot.py::TestRun::test_errors_with_invalid_token",
"tests/test_bot.py::TestGlobalContext::test_global_context_in_command"
]
| []
| MIT License | 2,427 | [
"phial/bot.py"
]
| [
"phial/bot.py"
]
|
|
uptick__pymyob-10 | 7baef26a62b54be57dd4dfbc80cf6962b04acf74 | 2018-04-20 07:19:04 | 7baef26a62b54be57dd4dfbc80cf6962b04acf74 | diff --git a/myob/managers.py b/myob/managers.py
index a9010ec..1e17411 100644
--- a/myob/managers.py
+++ b/myob/managers.py
@@ -29,75 +29,45 @@ class Manager():
def build_method(self, method, endpoint, hint):
full_endpoint = self.base_url + endpoint
- required_args = re.findall('\[([^\]]*)\]', full_endpoint)
- if method in ('PUT', 'POST'):
- required_args.append('data')
+ url_keys = re.findall('\[([^\]]*)\]', full_endpoint)
template = full_endpoint.replace('[', '{').replace(']', '}')
+ required_kwargs = url_keys.copy()
+ if method in ('PUT', 'POST'):
+ required_kwargs.append('data')
+
def inner(*args, **kwargs):
if args:
raise AttributeError("Unnamed args provided. Only keyword args accepted.")
- # Ensure all required args have been provided.
- missing_args = set(required_args) - set(kwargs.keys())
- if missing_args:
- raise KeyError("Missing args %s. Endpoint requires %s." % (
- list(missing_args), required_args
+ # Ensure all required url kwargs have been provided.
+ missing_kwargs = set(required_kwargs) - set(kwargs.keys())
+ if missing_kwargs:
+ raise KeyError("Missing kwargs %s. Endpoint requires %s." % (
+ list(missing_kwargs), required_kwargs
))
+ # Parse kwargs.
+ url_kwargs = {}
+ request_kwargs_raw = {}
+ for k, v in kwargs.items():
+ if k in url_keys:
+ url_kwargs[k] = v
+ elif k != 'data':
+ request_kwargs_raw[k] = v
+
# Determine request method.
request_method = 'GET' if method == 'ALL' else method
# Build url.
- url = template.format(**kwargs)
-
- request_kwargs = {}
-
- # Build headers.
- request_kwargs['headers'] = {
- 'Authorization': 'Bearer %s' % self.credentials.oauth_token,
- 'x-myobapi-cftoken': self.credentials.userpass,
- 'x-myobapi-key': self.credentials.consumer_key,
- 'x-myobapi-version': 'v2',
- }
-
- # Build query.
- request_kwargs['params'] = {}
- filters = []
- for k, v in kwargs.items():
- if k not in required_args + ['orderby', 'format', 'headers', 'page', 'limit', 'templatename']:
- if isinstance(v, str):
- v = [v]
- filters.append(' or '.join("%s eq '%s'" % (k, v_) for v_ in v))
- if filters:
- request_kwargs['params']['$filter'] = '&'.join(filters)
-
- if 'orderby' in kwargs:
- request_kwargs['params']['$orderby'] = kwargs['orderby']
-
- page_size = DEFAULT_PAGE_SIZE
- if 'limit' in kwargs:
- page_size = int(kwargs['limit'])
- request_kwargs['params']['$top'] = page_size
-
- if 'page' in kwargs:
- request_kwargs['params']['$skip'] = (int(kwargs['page']) - 1) * page_size
+ url = template.format(**url_kwargs)
- if 'format' in kwargs:
- request_kwargs['params']['format'] = kwargs['format']
-
- if 'templatename' in kwargs:
- request_kwargs['params']['templatename'] = kwargs['templatename']
-
- if request_method in ('PUT', 'POST'):
- request_kwargs['params']['returnBody'] = 'true'
-
- if 'headers' in kwargs:
- request_kwargs['headers'].update(kwargs['headers'])
-
- # Build body.
- if 'data' in kwargs:
- request_kwargs['json'] = kwargs['data']
+ # Build request kwargs (header/query/body)
+ request_kwargs = self.build_request_kwargs(
+ request_method,
+ data=kwargs.get('data'),
+ **request_kwargs_raw,
+ )
response = requests.request(request_method, url, **request_kwargs)
@@ -129,11 +99,66 @@ class Manager():
elif hasattr(self, method_name):
method_name = '%s_%s' % (method.lower(), method_name)
self.method_details[method_name] = {
- 'args': required_args,
+ 'kwargs': required_kwargs,
'hint': hint,
}
setattr(self, method_name, inner)
+ def build_request_kwargs(self, method, data=None, **kwargs):
+ request_kwargs = {}
+
+ # Build headers.
+ request_kwargs['headers'] = {
+ 'Authorization': 'Bearer %s' % self.credentials.oauth_token,
+ 'x-myobapi-cftoken': self.credentials.userpass,
+ 'x-myobapi-key': self.credentials.consumer_key,
+ 'x-myobapi-version': 'v2',
+ }
+ if 'headers' in kwargs:
+ request_kwargs['headers'].update(kwargs['headers'])
+
+ # Build query.
+ request_kwargs['params'] = {}
+ filters = []
+ for k, v in kwargs.items():
+ if k not in ['orderby', 'format', 'headers', 'page', 'limit', 'templatename']:
+ if isinstance(v, str):
+ v = [v]
+ operator = 'eq'
+ for op in ['lt', 'gt']:
+ if k.endswith('__%s' % op):
+ k = k[:-4]
+ operator = op
+ filters.append(' or '.join("%s %s '%s'" % (k, operator, v_) for v_ in v))
+ if filters:
+ request_kwargs['params']['$filter'] = ' and '.join(filters)
+
+ if 'orderby' in kwargs:
+ request_kwargs['params']['$orderby'] = kwargs['orderby']
+
+ page_size = DEFAULT_PAGE_SIZE
+ if 'limit' in kwargs:
+ page_size = int(kwargs['limit'])
+ request_kwargs['params']['$top'] = page_size
+
+ if 'page' in kwargs:
+ request_kwargs['params']['$skip'] = (int(kwargs['page']) - 1) * page_size
+
+ if 'format' in kwargs:
+ request_kwargs['params']['format'] = kwargs['format']
+
+ if 'templatename' in kwargs:
+ request_kwargs['params']['templatename'] = kwargs['templatename']
+
+ if method in ('PUT', 'POST'):
+ request_kwargs['params']['returnBody'] = 'true'
+
+ # Build body.
+ if data is not None:
+ request_kwargs['json'] = data
+
+ return request_kwargs
+
def __repr__(self):
def print_method(name, args):
return '%s(%s)' % (name, ', '.join(args))
@@ -144,7 +169,7 @@ class Manager():
)
return '%s%s:\n %s' % (self.name, self.__class__.__name__, '\n '.join(
formatstr % (
- print_method(k, v['args']),
+ print_method(k, v['kwargs']),
v['hint'],
) for k, v in sorted(self.method_details.items())
))
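The filter-building logic this patch adds can be sketched as a standalone function (a sketch only; the name `build_filter` is hypothetical and not part of pymyob — in the patch this lives inside `Manager.build_request_kwargs`):

```python
def build_filter(**kwargs):
    # Hypothetical standalone version of the patch's filter building:
    # a keyword like DisplayID__gt maps to the OData operator 'gt',
    # plain keywords default to 'eq'.
    filters = []
    for key, value in kwargs.items():
        # Single strings are wrapped in a list so multi-value ('or')
        # filters and single-value filters share one code path.
        if isinstance(value, str):
            value = [value]
        operator = 'eq'
        for op in ('lt', 'gt'):
            suffix = '__%s' % op
            if key.endswith(suffix):
                key = key[:-len(suffix)]
                operator = op
        filters.append(' or '.join("%s %s '%s'" % (key, operator, v) for v in value))
    # The patch joins separate filters with 'and'.
    return ' and '.join(filters)
```

For example, `build_filter(DisplayID__gt='5-0000')` yields `"DisplayID gt '5-0000'"`, matching the expectations in the test patch below.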
| Support for `gt` and `lt` filtering.
Hi there,
I can't find anything about this in the documentation, but does pymyob support query strings?
Thanks
Barton | uptick/pymyob | diff --git a/tests/endpoints.py b/tests/endpoints.py
index 156ae96..4d59358 100644
--- a/tests/endpoints.py
+++ b/tests/endpoints.py
@@ -12,12 +12,12 @@ DATA = {'dummy': 'data'}
class EndpointTests(TestCase):
def setUp(self):
- self.cred = PartnerCredentials(
+ cred = PartnerCredentials(
consumer_key='KeyToTheKingdom',
consumer_secret='TellNoOne',
callback_uri='CallOnlyWhenCalledTo',
)
- self.myob = Myob(self.cred)
+ self.myob = Myob(cred)
self.request_headers = {
'Authorization': 'Bearer None',
'x-myobapi-cftoken': None,
diff --git a/tests/managers.py b/tests/managers.py
index e69de29..71dcb10 100644
--- a/tests/managers.py
+++ b/tests/managers.py
@@ -0,0 +1,65 @@
+from unittest import TestCase
+
+from myob.constants import DEFAULT_PAGE_SIZE
+from myob.credentials import PartnerCredentials
+from myob.managers import Manager
+
+
+class QueryParamTests(TestCase):
+ def setUp(self):
+ cred = PartnerCredentials(
+ consumer_key='KeyToTheKingdom',
+ consumer_secret='TellNoOne',
+ callback_uri='CallOnlyWhenCalledTo',
+ )
+ self.manager = Manager('', credentials=cred)
+
+ def assertParamsEqual(self, raw_kwargs, expected_params, method='GET'):
+ self.assertEqual(
+ self.manager.build_request_kwargs(method, {}, **raw_kwargs)['params'],
+ expected_params
+ )
+
+ def test_filter(self):
+ self.assertParamsEqual({'Type': 'Customer'}, {'$filter': "Type eq 'Customer'"})
+ self.assertParamsEqual({'Type': ['Customer', 'Supplier']}, {'$filter': "Type eq 'Customer' or Type eq 'Supplier'"})
+ self.assertParamsEqual({'DisplayID__gt': '5-0000'}, {'$filter': "DisplayID gt '5-0000'"})
+ self.assertParamsEqual({'DateOccurred__lt': '2013-08-30T19:00:59.043'}, {'$filter': "DateOccurred lt '2013-08-30T19:00:59.043'"})
+ self.assertParamsEqual({'Type': ['Customer', 'Supplier'], 'DisplayID__gt': '5-0000'}, {'$filter': "Type eq 'Customer' or Type eq 'Supplier' and DisplayID gt '5-0000'"})
+
+ def test_orderby(self):
+ self.assertParamsEqual({'orderby': 'Date'}, {'$orderby': "Date"})
+
+ def test_pagination(self):
+ self.assertParamsEqual({'page': 7}, {'$skip': 6 * DEFAULT_PAGE_SIZE})
+ self.assertParamsEqual({'limit': 20}, {'$top': 20})
+ self.assertParamsEqual({'limit': 20, 'page': 7}, {'$top': 20, '$skip': 120})
+
+ def test_format(self):
+ self.assertParamsEqual({'format': 'json'}, {'format': 'json'})
+
+ def test_templatename(self):
+ self.assertParamsEqual({'templatename': 'InvoiceTemplate - 7'}, {'templatename': 'InvoiceTemplate - 7'})
+
+ def test_returnBody(self):
+ self.assertParamsEqual({}, {'returnBody': 'true'}, method='PUT')
+ self.assertParamsEqual({}, {'returnBody': 'true'}, method='POST')
+
+ def test_combination(self):
+ self.assertParamsEqual(
+ {
+ 'Type': ['Customer', 'Supplier'],
+ 'DisplayID__gt': '3-0900',
+ 'orderby': 'Date',
+ 'page': 5,
+ 'limit': 13,
+ 'format': 'json',
+ },
+ {
+ '$filter': "Type eq 'Customer' or Type eq 'Supplier' and DisplayID gt '3-0900'",
+ '$orderby': 'Date',
+ '$skip': 52,
+ '$top': 13,
+ 'format': 'json'
+ },
+ )
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 1
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
charset-normalizer==3.4.1
coverage==7.8.0
exceptiongroup==1.2.2
execnet==2.1.1
idna==3.10
iniconfig==2.1.0
oauthlib==3.2.2
packaging==24.2
pluggy==1.5.0
-e git+https://github.com/uptick/pymyob.git@7baef26a62b54be57dd4dfbc80cf6962b04acf74#egg=pymyob
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-xdist==3.6.1
requests==2.32.3
requests-oauthlib==2.0.0
tomli==2.2.1
typing_extensions==4.13.0
urllib3==2.3.0
| name: pymyob
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- charset-normalizer==3.4.1
- coverage==7.8.0
- exceptiongroup==1.2.2
- execnet==2.1.1
- idna==3.10
- iniconfig==2.1.0
- oauthlib==3.2.2
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-xdist==3.6.1
- requests==2.32.3
- requests-oauthlib==2.0.0
- tomli==2.2.1
- typing-extensions==4.13.0
- urllib3==2.3.0
prefix: /opt/conda/envs/pymyob
| [
"tests/managers.py::QueryParamTests::test_combination",
"tests/managers.py::QueryParamTests::test_filter",
"tests/managers.py::QueryParamTests::test_format",
"tests/managers.py::QueryParamTests::test_orderby",
"tests/managers.py::QueryParamTests::test_pagination",
"tests/managers.py::QueryParamTests::test_returnBody",
"tests/managers.py::QueryParamTests::test_templatename"
]
| []
| [
"tests/endpoints.py::EndpointTests::test_companyfiles",
"tests/endpoints.py::EndpointTests::test_contacts",
"tests/endpoints.py::EndpointTests::test_general_ledger",
"tests/endpoints.py::EndpointTests::test_inventory",
"tests/endpoints.py::EndpointTests::test_invoices",
"tests/endpoints.py::EndpointTests::test_purchase_orders"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,428 | [
"myob/managers.py"
]
| [
"myob/managers.py"
]
|
|
elastic__rally-477 | 6ce036c1e92f9badbe839b85102096a99d0e5b83 | 2018-04-20 10:17:00 | ebc10f53af246b3e34f751c1346aec9ed800981e | diff --git a/docs/telemetry.rst b/docs/telemetry.rst
index 15cf7743..39924664 100644
--- a/docs/telemetry.rst
+++ b/docs/telemetry.rst
@@ -16,12 +16,13 @@ You probably want to gain additional insights from a race. Therefore, we have ad
Available telemetry devices:
- Command Name Description
- --------- --------------------- -----------------------------------------------------
- jit JIT Compiler Profiler Enables JIT compiler logs.
- gc GC log Enables GC logs.
- jfr Flight Recorder Enables Java Flight Recorder (requires an Oracle JDK)
- perf perf stat Reads CPU PMU counters (requires Linux and perf)
+ Command Name Description
+ ---------- --------------------- -----------------------------------------------------
+ jit JIT Compiler Profiler Enables JIT compiler logs.
+ gc GC log Enables GC logs.
+ jfr Flight Recorder Enables Java Flight Recorder (requires an Oracle JDK)
+ perf perf stat Reads CPU PMU counters (requires Linux and perf)
+ node-stats Node Stats Regularly samples node stats
Keep in mind that each telemetry device may incur a runtime overhead which can skew results.
@@ -67,4 +68,26 @@ The ``gc`` telemetry device enables GC logs for the benchmark candidate. You can
perf
----
-The ``perf`` telemetry device runs ``perf stat`` on each benchmarked node and writes the output to a log file. It can be used to capture low-level CPU statistics. Note that the perf tool, which is only available on Linux, must be installed before using this telemetry device.
\ No newline at end of file
+The ``perf`` telemetry device runs ``perf stat`` on each benchmarked node and writes the output to a log file. It can be used to capture low-level CPU statistics. Note that the perf tool, which is only available on Linux, must be installed before using this telemetry device.
+
+node-stats
+----------
+
+.. warning::
+
+ This telemetry device will record a lot of metrics and likely skew your measurement results.
+
+The node-stats telemetry device regularly calls the node-stats API and records metrics from the following sections:
+
+* Indices stats (key ``indices`` in the node-stats API)
+* Thread pool stats (key ``thread_pool`` in the node-stats API)
+* JVM buffer pool stats (key ``jvm.buffer_pools`` in the node-stats API)
+* Circuit breaker stats (key ``breakers`` in the node-stats API)
+
+Supported telemetry parameters:
+
+* ``node-stats-sample-interval`` (default: 1): A number greater than zero denoting the sampling interval in seconds.
+* ``node-stats-include-indices`` (default: ``false``): A boolean indicating whether indices stats should be included.
+* ``node-stats-include-thread-pools`` (default: ``true``): A boolean indicating whether thread pool stats should be included.
+* ``node-stats-include-buffer-pools`` (default: ``true``): A boolean indicating whether buffer pool stats should be included.
+* ``node-stats-include-breakers`` (default: ``true``): A boolean indicating whether circuit breaker stats should be included.
\ No newline at end of file
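The parameter resolution and validation documented above can be sketched as follows (a sketch mirroring the defaults in the parameter list; the function name `parse_node_stats_params` is hypothetical, and `ValueError` stands in for Rally's `SystemSetupError`):

```python
def parse_node_stats_params(telemetry_params):
    # Resolve the node-stats telemetry parameters with the documented
    # defaults. A non-positive sampling interval is rejected up front,
    # as in the patch's NodeStatsRecorder constructor.
    interval = telemetry_params.get("node-stats-sample-interval", 1)
    if interval <= 0:
        raise ValueError("'node-stats-sample-interval' must be greater "
                         "than zero but was {}".format(interval))
    return {
        "sample_interval": interval,
        "include_indices": telemetry_params.get("node-stats-include-indices", False),
        "include_thread_pools": telemetry_params.get("node-stats-include-thread-pools", True),
        "include_buffer_pools": telemetry_params.get("node-stats-include-buffer-pools", True),
        "include_breakers": telemetry_params.get("node-stats-include-breakers", True),
    }
```

With an empty parameter map, this yields the defaults listed above: indices stats off, thread pool, buffer pool, and breaker stats on, sampled once per second.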
diff --git a/docs/track.rst b/docs/track.rst
index 75b6858e..a39b6b15 100644
--- a/docs/track.rst
+++ b/docs/track.rst
@@ -313,6 +313,8 @@ With the operation type ``bulk`` you can execute `bulk requests <http://www.elas
* ``batch-size`` (optional): Defines how many documents Rally will read at once. This is an expert setting and only meant to avoid accidental bottlenecks for very small bulk sizes (e.g. if you want to benchmark with a bulk-size of 1, you should set ``batch-size`` higher).
* ``pipeline`` (optional): Defines the name of an (existing) ingest pipeline that should be used (only supported from Elasticsearch 5.0).
* ``conflicts`` (optional): Type of index conflicts to simulate. If not specified, no conflicts will be simulated. Valid values are: 'sequential' (A document id is replaced with a document id with a sequentially increasing id), 'random' (A document id is replaced with a document id with a random other id).
+* ``conflict-probability`` (optional, defaults to 25 percent): A number between (0, 100] that defines how many of the documents will get replaced.
+* ``on-conflict`` (optional, defaults to ``index``): Determines whether Rally should use the action ``index`` or ``update`` on id conflicts.
* ``detailed-results`` (optional, defaults to ``false``): Records more detailed meta-data for bulk requests. As it analyzes the corresponding bulk response in more detail, this might incur additional overhead which can skew measurement results.
Example::
diff --git a/esrally/mechanic/launcher.py b/esrally/mechanic/launcher.py
index 968fbcd7..a946e177 100644
--- a/esrally/mechanic/launcher.py
+++ b/esrally/mechanic/launcher.py
@@ -31,17 +31,43 @@ def wait_for_rest_layer(es, max_attempts=20):
class ClusterLauncher:
- def __init__(self, cfg, metrics_store, client_factory_class=client.EsClientFactory):
+ """
+ The cluster launcher performs cluster-wide tasks that need to be done in the startup / shutdown phase.
+
+ """
+ def __init__(self, cfg, metrics_store, on_post_launch=None, client_factory_class=client.EsClientFactory):
+ """
+
+ Creates a new ClusterLauncher.
+
+ :param cfg: The config object.
+ :param metrics_store: A metrics store that is configured to receive system metrics.
+ :param on_post_launch: An optional function that takes the Elasticsearch client as a parameter. It is invoked after the
+ REST API is available.
+ :param client_factory_class: A factory class that can create an Elasticsearch client.
+ """
self.cfg = cfg
self.metrics_store = metrics_store
+ self.on_post_launch = on_post_launch
self.client_factory = client_factory_class
def start(self):
+ """
+ Performs final startup tasks.
+
+ Precondition: All cluster nodes have been started.
+ Postcondition: The cluster is ready to receive HTTP requests or a ``LaunchError`` is raised.
+
+ :return: A representation of the launched cluster.
+ """
+ enabled_devices = self.cfg.opts("mechanic", "telemetry.devices")
+ telemetry_params = self.cfg.opts("mechanic", "telemetry.params")
hosts = self.cfg.opts("client", "hosts")
client_options = self.cfg.opts("client", "options")
es = self.client_factory(hosts, client_options).create()
- t = telemetry.Telemetry(devices=[
+ t = telemetry.Telemetry(enabled_devices, devices=[
+ telemetry.NodeStats(telemetry_params, es, self.metrics_store),
telemetry.ClusterMetaDataInfo(es),
telemetry.ClusterEnvironmentInfo(es, self.metrics_store),
telemetry.GcTimesSummary(es, self.metrics_store),
@@ -61,10 +87,16 @@ class ClusterLauncher:
logger.error("REST API layer is not yet available. Forcefully terminating cluster.")
self.stop(c)
raise exceptions.LaunchError("Elasticsearch REST API layer is not available. Forcefully terminated cluster.")
-
+ if self.on_post_launch:
+ self.on_post_launch(es)
return c
def stop(self, c):
+ """
+ Performs cleanup tasks. This method should be called before nodes are shut down.
+
+ :param c: The cluster that is about to be stopped.
+ """
c.telemetry.detach_from_cluster(c)
diff --git a/esrally/mechanic/mechanic.py b/esrally/mechanic/mechanic.py
index 08fdd111..53fcf054 100644
--- a/esrally/mechanic/mechanic.py
+++ b/esrally/mechanic/mechanic.py
@@ -10,6 +10,8 @@ from esrally.mechanic import supplier, provisioner, launcher, team
logger = logging.getLogger("rally.mechanic")
+METRIC_FLUSH_INTERVAL_SECONDS = 30
+
##############################
# Public Messages
@@ -214,6 +216,10 @@ def nodes_by_host(ip_port_pairs):
class MechanicActor(actor.RallyActor):
+ WAKEUP_RESET_RELATIVE_TIME = "relative_time"
+
+ WAKEUP_FLUSH_METRICS = "flush_metrics"
+
"""
This actor coordinates all associated mechanics on remote hosts (which do the actual work).
"""
@@ -226,6 +232,7 @@ class MechanicActor(actor.RallyActor):
self.race_control = None
self.cluster_launcher = None
self.cluster = None
+ self.plugins = None
def receiveUnrecognizedMessage(self, msg, sender):
logger.info("MechanicActor#receiveMessage unrecognized(msg = [%s] sender = [%s])" % (str(type(msg)), str(sender)))
@@ -256,6 +263,7 @@ class MechanicActor(actor.RallyActor):
cls = metrics.metrics_store_class(self.cfg)
self.metrics_store = cls(self.cfg)
self.metrics_store.open(ctx=msg.open_metrics_context)
+ _, self.plugins = load_team(self.cfg, msg.external)
# In our startup procedure we first create all mechanics. Only if this succeeds we'll continue.
hosts = self.cfg.opts("client", "hosts")
@@ -311,12 +319,19 @@ class MechanicActor(actor.RallyActor):
@actor.no_retry("mechanic")
def receiveMsg_ResetRelativeTime(self, msg, sender):
if msg.reset_in_seconds > 0:
- self.wakeupAfter(msg.reset_in_seconds)
+ self.wakeupAfter(msg.reset_in_seconds, payload=MechanicActor.WAKEUP_RESET_RELATIVE_TIME)
else:
self.reset_relative_time()
def receiveMsg_WakeupMessage(self, msg, sender):
- self.reset_relative_time()
+ if msg.payload == MechanicActor.WAKEUP_RESET_RELATIVE_TIME:
+ self.reset_relative_time()
+ elif msg.payload == MechanicActor.WAKEUP_FLUSH_METRICS:
+ logger.info("Flushing cluster-wide system metrics store.")
+ self.metrics_store.flush(refresh=False)
+ self.wakeupAfter(METRIC_FLUSH_INTERVAL_SECONDS, payload=MechanicActor.WAKEUP_FLUSH_METRICS)
+ else:
+ raise exceptions.RallyAssertionError("Unknown wakeup reason [{}]".format(msg.payload))
def receiveMsg_BenchmarkFailure(self, msg, sender):
self.send(self.race_control, msg)
@@ -345,7 +360,8 @@ class MechanicActor(actor.RallyActor):
self.transition_when_all_children_responded(sender, msg, "cluster_stopping", "cluster_stopped", self.on_all_nodes_stopped)
def on_all_nodes_started(self):
- self.cluster_launcher = launcher.ClusterLauncher(self.cfg, self.metrics_store)
+ plugin_handler = PostLaunchPluginHandler(self.plugins)
+ self.cluster_launcher = launcher.ClusterLauncher(self.cfg, self.metrics_store, on_post_launch=plugin_handler)
# Workaround because we could raise a LaunchError here and thespian will attempt to retry a failed message.
# In that case, we will get a followup RallyAssertionError because on the second attempt, Rally will check
# the status which is now "nodes_started" but we expected the status to be "nodes_starting" previously.
@@ -366,6 +382,7 @@ class MechanicActor(actor.RallyActor):
self.cluster.source_revision,
self.cluster.distribution_version),
self.metrics_store.meta_info))
+ self.wakeupAfter(METRIC_FLUSH_INTERVAL_SECONDS, payload=MechanicActor.WAKEUP_FLUSH_METRICS)
def on_benchmark_started(self):
self.cluster.on_benchmark_start()
@@ -392,6 +409,21 @@ class MechanicActor(actor.RallyActor):
# do not self-terminate, let the parent actor handle this
+class PostLaunchPluginHandler:
+ def __init__(self, plugins, hook_handler_class=team.PluginBootstrapHookHandler):
+ self.handlers = []
+ if plugins:
+ for plugin in plugins:
+ handler = hook_handler_class(plugin)
+ if handler.can_load():
+ handler.load()
+ self.handlers.append(handler)
+
+ def __call__(self, client):
+ for handler in self.handlers:
+ handler.invoke(team.PluginBootstrapPhase.post_launch.name, client=client)
+
+
@thespian.actors.requireCapability('coordinator')
class Dispatcher(thespian.actors.ActorTypeDispatcher):
def __init__(self):
@@ -476,8 +508,6 @@ class Dispatcher(thespian.actors.ActorTypeDispatcher):
class NodeMechanicActor(actor.RallyActor):
- METRIC_FLUSH_INTERVAL_SECONDS = 30
-
"""
One instance of this actor is run on each target host and coordinates the actual work of starting / stopping all nodes that should run
on this host.
@@ -553,13 +583,13 @@ class NodeMechanicActor(actor.RallyActor):
elif isinstance(msg, OnBenchmarkStart):
self.metrics_store.lap = msg.lap
self.mechanic.on_benchmark_start()
- self.wakeupAfter(NodeMechanicActor.METRIC_FLUSH_INTERVAL_SECONDS)
+ self.wakeupAfter(METRIC_FLUSH_INTERVAL_SECONDS)
self.send(sender, BenchmarkStarted())
elif isinstance(msg, thespian.actors.WakeupMessage):
if self.running:
logger.info("Flushing system metrics store on host [%s]." % self.host)
self.metrics_store.flush(refresh=False)
- self.wakeupAfter(NodeMechanicActor.METRIC_FLUSH_INTERVAL_SECONDS)
+ self.wakeupAfter(METRIC_FLUSH_INTERVAL_SECONDS)
elif isinstance(msg, OnBenchmarkStop):
self.mechanic.on_benchmark_stop()
self.metrics_store.flush(refresh=False)
@@ -590,11 +620,7 @@ class NodeMechanicActor(actor.RallyActor):
# Internal API (only used by the actor and for tests)
#####################################################
-def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=False, build=False, distribution=False, external=False,
- docker=False):
- races_root = paths.races_root(cfg)
- challenge_root_path = paths.race_root(cfg)
- node_ids = cfg.opts("provisioning", "node.ids", mandatory=False)
+def load_team(cfg, external):
# externally provisioned clusters do not support cars / plugins
if external:
car = None
@@ -603,6 +629,15 @@ def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=Fals
team_path = team.team_path(cfg)
car = team.load_car(team_path, cfg.opts("mechanic", "car.names"), cfg.opts("mechanic", "car.params"))
plugins = team.load_plugins(team_path, cfg.opts("mechanic", "car.plugins"), cfg.opts("mechanic", "plugin.params"))
+ return car, plugins
+
+
+def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=False, build=False, distribution=False, external=False,
+ docker=False):
+ races_root = paths.races_root(cfg)
+ challenge_root_path = paths.race_root(cfg)
+ node_ids = cfg.opts("provisioning", "node.ids", mandatory=False)
+ car, plugins = load_team(cfg, external)
if sources or distribution:
s = supplier.create(cfg, sources, distribution, build, challenge_root_path, plugins)
diff --git a/esrally/mechanic/provisioner.py b/esrally/mechanic/provisioner.py
index 4a1c3b38..2942caae 100644
--- a/esrally/mechanic/provisioner.py
+++ b/esrally/mechanic/provisioner.py
@@ -2,12 +2,12 @@ import os
import glob
import shutil
import logging
-from enum import Enum
import jinja2
from esrally import exceptions
-from esrally.utils import io, console, process, modules, versions
+from esrally.mechanic import team
+from esrally.utils import io, console, process, versions
logger = logging.getLogger("rally.provisioner")
@@ -102,21 +102,6 @@ def cleanup(preserve, install_dir, data_paths):
logger.exception("Could not delete [%s]. Skipping..." % install_dir)
-class ProvisioningPhase(Enum):
- post_install = 10
-
- @classmethod
- def valid(cls, name):
- for n in ProvisioningPhase.names():
- if n == name:
- return True
- return False
-
- @classmethod
- def names(cls):
- return [p.name for p in list(ProvisioningPhase)]
-
-
def _apply_config(source_root_path, target_root_path, config_vars):
for root, dirs, files in os.walk(source_root_path):
env = jinja2.Environment(loader=jinja2.FileSystemLoader(root))
@@ -173,7 +158,7 @@ class BareProvisioner:
for installer in self.plugin_installers:
# Never let install hooks modify our original provisioner variables and just provide a copy!
- installer.invoke_install_hook(ProvisioningPhase.post_install, provisioner_vars.copy())
+ installer.invoke_install_hook(team.PluginBootstrapPhase.post_install, provisioner_vars.copy())
return NodeConfiguration(self.es_installer.car, self.es_installer.node_ip, self.es_installer.node_name,
self.es_installer.node_root_dir, self.es_installer.es_home_path, self.es_installer.node_log_dir,
@@ -297,52 +282,8 @@ class ElasticsearchInstaller:
return [os.path.join(self.es_home_path, "data")]
-class InstallHookHandler:
- def __init__(self, plugin, loader_class=modules.ComponentLoader):
- self.plugin = plugin
- # Don't allow the loader to recurse. The subdirectories may contain Elasticsearch specific files which we do not want to add to
- # Rally's Python load path. We may need to define a more advanced strategy in the future.
- self.loader = loader_class(root_path=self.plugin.root_path, component_entry_point="plugin", recurse=False)
- self.hooks = {}
-
- def can_load(self):
- return self.loader.can_load()
-
- def load(self):
- root_module = self.loader.load()
- try:
- # every module needs to have a register() method
- root_module.register(self)
- except exceptions.RallyError:
- # just pass our own exceptions transparently.
- raise
- except BaseException:
- msg = "Could not load install hooks in [%s]" % self.loader.root_path
- logger.exception(msg)
- raise exceptions.SystemSetupError(msg)
-
- def register(self, phase, hook):
- logger.info("Registering install hook [%s] for phase [%s] in plugin [%s]" % (hook.__name__, phase, self.plugin.name))
- if not ProvisioningPhase.valid(phase):
- raise exceptions.SystemSetupError("Provisioning phase [%s] is unknown. Valid phases are: %s." %
- (phase, ProvisioningPhase.names()))
- if phase not in self.hooks:
- self.hooks[phase] = []
- self.hooks[phase].append(hook)
-
- def invoke(self, phase, variables):
- if phase in self.hooks:
- logger.info("Invoking phase [%s] for plugin [%s] in config [%s]" % (phase, self.plugin.name, self.plugin.config))
- for hook in self.hooks[phase]:
- logger.info("Invoking hook [%s]." % hook.__name__)
- # hooks should only take keyword arguments to be forwards compatible with Rally!
- hook(config_names=self.plugin.config, variables=variables)
- else:
- logger.debug("Plugin [%s] in config [%s] has no hook registered for phase [%s]." % (self.plugin.name, self.plugin.config, phase))
-
-
class PluginInstaller:
- def __init__(self, plugin, hook_handler_class=InstallHookHandler):
+ def __init__(self, plugin, hook_handler_class=team.PluginBootstrapHookHandler):
self.plugin = plugin
self.hook_handler = hook_handler_class(self.plugin)
if self.hook_handler.can_load():
@@ -371,7 +312,7 @@ class PluginInstaller:
(self.plugin_name, str(return_code)))
def invoke_install_hook(self, phase, variables):
- self.hook_handler.invoke(phase.name, variables)
+ self.hook_handler.invoke(phase.name, variables=variables)
@property
def variables(self):
diff --git a/esrally/mechanic/team.py b/esrally/mechanic/team.py
index e37e5c3a..37c4a518 100644
--- a/esrally/mechanic/team.py
+++ b/esrally/mechanic/team.py
@@ -1,11 +1,12 @@
import os
import logging
import configparser
+from enum import Enum
import tabulate
from esrally import exceptions, PROGRAM_NAME
-from esrally.utils import console, repo, io
+from esrally.utils import console, repo, io, modules
logger = logging.getLogger("rally.team")
@@ -341,3 +342,62 @@ class PluginDescriptor:
def __eq__(self, other):
return isinstance(other, type(self)) and (self.name, self.config, self.core_plugin) == (other.name, other.config, other.core_plugin)
+
+
+class PluginBootstrapPhase(Enum):
+ post_install = 10
+ post_launch = 20
+
+ @classmethod
+ def valid(cls, name):
+ for n in PluginBootstrapPhase.names():
+ if n == name:
+ return True
+ return False
+
+ @classmethod
+ def names(cls):
+ return [p.name for p in list(PluginBootstrapPhase)]
+
+
+class PluginBootstrapHookHandler:
+ def __init__(self, plugin, loader_class=modules.ComponentLoader):
+ self.plugin = plugin
+ # Don't allow the loader to recurse. The subdirectories may contain Elasticsearch specific files which we do not want to add to
+ # Rally's Python load path. We may need to define a more advanced strategy in the future.
+ self.loader = loader_class(root_path=self.plugin.root_path, component_entry_point="plugin", recurse=False)
+ self.hooks = {}
+
+ def can_load(self):
+ return self.loader.can_load()
+
+ def load(self):
+ root_module = self.loader.load()
+ try:
+ # every module needs to have a register() method
+ root_module.register(self)
+ except exceptions.RallyError:
+ # just pass our own exceptions transparently.
+ raise
+ except BaseException:
+ msg = "Could not load plugin bootstrap hooks in [{}]".format(self.loader.root_path)
+ logger.exception(msg)
+ raise exceptions.SystemSetupError(msg)
+
+ def register(self, phase, hook):
+ logger.info("Registering plugin bootstrap hook [%s] for phase [%s] in plugin [%s]", hook.__name__, phase, self.plugin.name)
+ if not PluginBootstrapPhase.valid(phase):
+ raise exceptions.SystemSetupError("Phase [{}] is unknown. Valid phases are: {}.".format(phase, PluginBootstrapPhase.names()))
+ if phase not in self.hooks:
+ self.hooks[phase] = []
+ self.hooks[phase].append(hook)
+
+ def invoke(self, phase, **kwargs):
+ if phase in self.hooks:
+ logger.info("Invoking phase [%s] for plugin [%s] in config [%s]", phase, self.plugin.name, self.plugin.config)
+ for hook in self.hooks[phase]:
+ logger.info("Invoking hook [%s].", hook.__name__)
+ # hooks should only take keyword arguments to be forwards compatible with Rally!
+ hook(config_names=self.plugin.config, **kwargs)
+ else:
+ logger.debug("Plugin [%s] in config [%s] has no hook registered for phase [%s].", self.plugin.name, self.plugin.config, phase)
\ No newline at end of file
diff --git a/esrally/mechanic/telemetry.py b/esrally/mechanic/telemetry.py
index 7746a5e0..c78387ea 100644
--- a/esrally/mechanic/telemetry.py
+++ b/esrally/mechanic/telemetry.py
@@ -6,7 +6,7 @@ import subprocess
import threading
import tabulate
-from esrally import metrics, time
+from esrally import metrics, time, exceptions
from esrally.utils import io, sysstats, process, console, versions
logger = logging.getLogger("rally.telemetry")
@@ -14,7 +14,7 @@ logger = logging.getLogger("rally.telemetry")
def list_telemetry():
console.println("Available telemetry devices:\n")
- devices = [[device.command, device.human_name, device.help] for device in [JitCompiler, Gc, FlightRecorder, PerfStat]]
+ devices = [[device.command, device.human_name, device.help] for device in [JitCompiler, Gc, FlightRecorder, PerfStat, NodeStats]]
console.println(tabulate.tabulate(devices, ["Command", "Name", "Description"]))
console.println("\nKeep in mind that each telemetry device may incur a runtime overhead which can skew results.")
@@ -116,6 +116,26 @@ class InternalTelemetryDevice(TelemetryDevice):
internal = True
+class SamplerThread(threading.Thread):
+ def __init__(self, recorder):
+ threading.Thread.__init__(self)
+ self.stop = False
+ self.recorder = recorder
+
+ def finish(self):
+ self.stop = True
+ self.join()
+
+ def run(self):
+ # noinspection PyBroadException
+ try:
+ while not self.stop:
+ self.recorder.record()
+ time.sleep(self.recorder.sample_interval)
+ except BaseException as e:
+ logger.exception("Could not determine {}".format(self.recorder))
+
+
class FlightRecorder(TelemetryDevice):
internal = False
command = "jfr"
@@ -256,6 +276,133 @@ class PerfStat(TelemetryDevice):
self.attached = False
+class NodeStats(TelemetryDevice):
+ internal = False
+ command = "node-stats"
+ human_name = "Node Stats"
+ help = "Regularly samples node stats"
+
+ """
+ Gathers different node stats.
+ """
+ def __init__(self, telemetry_params, client, metrics_store):
+ super().__init__()
+ self.telemetry_params = telemetry_params
+ self.client = client
+ self.metrics_store = metrics_store
+ self.sampler = None
+
+ def attach_to_cluster(self, cluster):
+ super().attach_to_cluster(cluster)
+
+ def on_benchmark_start(self):
+ recorder = NodeStatsRecorder(self.telemetry_params, self.client, self.metrics_store)
+ self.sampler = SamplerThread(recorder)
+ self.sampler.setDaemon(True)
+ self.sampler.start()
+
+ def on_benchmark_stop(self):
+ if self.sampler:
+ self.sampler.finish()
+
+
+class NodeStatsRecorder:
+ def __init__(self, telemetry_params, client, metrics_store):
+ self.sample_interval = telemetry_params.get("node-stats-sample-interval", 1)
+ if self.sample_interval <= 0:
+ raise exceptions.SystemSetupError(
+ "The telemetry parameter 'node-stats-sample-interval' must be greater than zero but was {}.".format(self.sample_interval))
+
+ self.include_indices = telemetry_params.get("node-stats-include-indices", False)
+ self.include_thread_pools = telemetry_params.get("node-stats-include-thread-pools", True)
+ self.include_buffer_pools = telemetry_params.get("node-stats-include-buffer-pools", True)
+ self.include_breakers = telemetry_params.get("node-stats-include-breakers", True)
+ self.client = client
+ self.metrics_store = metrics_store
+
+ def __str__(self):
+ return "node stats"
+
+ def record(self):
+ current_sample = self.sample()
+ for node_stats in current_sample:
+ node_name = node_stats["name"]
+ if self.include_indices:
+ self.record_indices_stats(node_name, node_stats,
+ include=["indexing", "search", "merges", "query_cache", "segments", "translog",
+ "request_cache"])
+ if self.include_thread_pools:
+ self.record_thread_pool_stats(node_name, node_stats)
+ if self.include_breakers:
+ self.record_circuit_breaker_stats(node_name, node_stats)
+ if self.include_buffer_pools:
+ self.record_jvm_buffer_pool_stats(node_name, node_stats)
+
+ time.sleep(self.sample_interval)
+
+ def record_indices_stats(self, node_name, node_stats, include):
+ indices_stats = node_stats["indices"]
+ for section in include:
+ if section in indices_stats:
+ for metric_name, metric_value in indices_stats[section].items():
+ self.put_value(node_name,
+ metric_name="indices_{}_{}".format(section, metric_name),
+ node_stats_metric_name=metric_name,
+ metric_value=metric_value)
+
+ def record_thread_pool_stats(self, node_name, node_stats):
+ thread_pool_stats = node_stats["thread_pool"]
+ for pool_name, pool_metrics in thread_pool_stats.items():
+ for metric_name, metric_value in pool_metrics.items():
+ self.put_value(node_name,
+ metric_name="thread_pool_{}_{}".format(pool_name, metric_name),
+ node_stats_metric_name=metric_name,
+ metric_value=metric_value)
+
+ def record_circuit_breaker_stats(self, node_name, node_stats):
+ breaker_stats = node_stats["breakers"]
+ for breaker_name, breaker_metrics in breaker_stats.items():
+ for metric_name, metric_value in breaker_metrics.items():
+ self.put_value(node_name,
+ metric_name="breaker_{}_{}".format(breaker_name, metric_name),
+ node_stats_metric_name=metric_name,
+ metric_value=metric_value)
+
+ def record_jvm_buffer_pool_stats(self, node_name, node_stats):
+ buffer_pool_stats = node_stats["jvm"]["buffer_pools"]
+ for pool_name, pool_metrics in buffer_pool_stats.items():
+ for metric_name, metric_value in pool_metrics.items():
+ self.put_value(node_name,
+ metric_name="jvm_buffer_pool_{}_{}".format(pool_name, metric_name),
+ node_stats_metric_name=metric_name,
+ metric_value=metric_value)
+
+ def put_value(self, node_name, metric_name, node_stats_metric_name, metric_value):
+ if isinstance(metric_value, (int, float)) and not isinstance(metric_value, bool):
+ # auto-recognize metric keys ending with well-known suffixes
+ if node_stats_metric_name.endswith("in_bytes"):
+ self.metrics_store.put_value_node_level(node_name=node_name,
+ name=metric_name,
+ value=metric_value, unit="byte")
+ elif node_stats_metric_name.endswith("in_millis"):
+ self.metrics_store.put_value_node_level(node_name=node_name,
+ name=metric_name,
+ value=metric_value, unit="ms")
+ else:
+ self.metrics_store.put_count_node_level(node_name=node_name,
+ name=metric_name,
+ count=metric_value)
+
+ def sample(self):
+ import elasticsearch
+ try:
+ stats = self.client.nodes.stats(metric="_all")
+ except elasticsearch.TransportError:
+ logger.exception("Could not retrieve node stats.")
+ return {}
+ return stats["nodes"].values()
+
+
class StartupTime(InternalTelemetryDevice):
def __init__(self, metrics_store, stopwatch=time.StopWatch):
self.metrics_store = metrics_store
@@ -400,7 +547,8 @@ class CpuUsage(InternalTelemetryDevice):
def on_benchmark_start(self):
if self.node:
- self.sampler = SampleCpuUsage(self.node, self.metrics_store)
+ recorder = CpuUsageRecorder(self.node, self.metrics_store)
+ self.sampler = SamplerThread(recorder)
self.sampler.setDaemon(True)
self.sampler.start()
@@ -409,30 +557,25 @@ class CpuUsage(InternalTelemetryDevice):
self.sampler.finish()
-class SampleCpuUsage(threading.Thread):
+class CpuUsageRecorder:
def __init__(self, node, metrics_store):
- threading.Thread.__init__(self)
- self.stop = False
self.node = node
self.process = sysstats.setup_process_stats(node.process.pid)
self.metrics_store = metrics_store
+ # the call is blocking already; there is no need for additional waiting in the sampler thread.
+ self.sample_interval = 0
- def finish(self):
- self.stop = True
- self.join()
-
- def run(self):
+ def record(self):
import psutil
- # noinspection PyBroadException
try:
- while not self.stop:
- self.metrics_store.put_value_node_level(node_name=self.node.node_name, name="cpu_utilization_1s",
- value=sysstats.cpu_utilization(self.process), unit="%")
+ self.metrics_store.put_value_node_level(node_name=self.node.node_name, name="cpu_utilization_1s",
+ value=sysstats.cpu_utilization(self.process), unit="%")
# this can happen when the Elasticsearch process has been terminated already and we were not quick enough to stop.
except psutil.NoSuchProcess:
pass
- except BaseException:
- logger.exception("Could not determine CPU utilization")
+
+ def __str__(self):
+ return "cpu utilization"
def store_node_attribute_metadata(metrics_store, nodes_info):
diff --git a/esrally/track/params.py b/esrally/track/params.py
index 37bb2ea0..1287df65 100644
--- a/esrally/track/params.py
+++ b/esrally/track/params.py
@@ -442,6 +442,15 @@ class BulkIndexParamSource(ParamSource):
else:
raise exceptions.InvalidSyntax("Unknown 'conflicts' setting [%s]" % id_conflicts)
+ if self.id_conflicts != IndexIdConflict.NoConflicts:
+ self.conflict_probability = self.float_param(params, name="conflict-probability", default_value=25, min_value=0, max_value=100)
+ self.on_conflict = params.get("on-conflict", "index")
+ if self.on_conflict not in ["index", "update"]:
+ raise exceptions.InvalidSyntax("Unknown 'on-conflict' setting [{}]".format(self.on_conflict))
+ else:
+ self.conflict_probability = None
+ self.on_conflict = None
+
self.corpora = self.used_corpora(track, params)
for corpus in self.corpora:
@@ -473,13 +482,17 @@ class BulkIndexParamSource(ParamSource):
except ValueError:
raise exceptions.InvalidSyntax("'batch-size' must be numeric")
+ self.ingest_percentage = self.float_param(params, name="ingest-percentage", default_value=100, min_value=0, max_value=100)
+
+ def float_param(self, params, name, default_value, min_value, max_value):
try:
- self.ingest_percentage = float(params.get("ingest-percentage", 100.0))
- if self.ingest_percentage <= 0 or self.ingest_percentage > 100.0:
+ value = float(params.get(name, default_value))
+ if value <= min_value or value > max_value:
raise exceptions.InvalidSyntax(
- "'ingest-percentage' must be in the range (0.0, 100.0] but was {:.1f}".format(self.ingest_percentage))
+ "'{}' must be in the range ({:.1f}, {:.1f}] but was {:.1f}".format(name, min_value, max_value, value))
+ return value
except ValueError:
- raise exceptions.InvalidSyntax("'ingest-percentage' must be numeric")
+ raise exceptions.InvalidSyntax("'{}' must be numeric".format(name))
def used_corpora(self, t, params):
corpora = []
@@ -503,7 +516,8 @@ class BulkIndexParamSource(ParamSource):
def partition(self, partition_index, total_partitions):
return PartitionBulkIndexParamSource(self.corpora, partition_index, total_partitions, self.batch_size, self.bulk_size,
- self.ingest_percentage, self.id_conflicts, self.pipeline, self._params)
+ self.ingest_percentage, self.id_conflicts, self.conflict_probability, self.on_conflict,
+ self.pipeline, self._params)
def params(self):
raise exceptions.RallyError("Do not use a BulkIndexParamSource without partitioning")
@@ -513,8 +527,8 @@ class BulkIndexParamSource(ParamSource):
class PartitionBulkIndexParamSource:
- def __init__(self, corpora, partition_index, total_partitions, batch_size, bulk_size, ingest_percentage, id_conflicts=None,
- pipeline=None, original_params=None):
+ def __init__(self, corpora, partition_index, total_partitions, batch_size, bulk_size, ingest_percentage,
+ id_conflicts, conflict_probability, on_conflict, pipeline=None, original_params=None):
"""
:param corpora: Specification of affected document corpora.
@@ -524,6 +538,8 @@ class PartitionBulkIndexParamSource:
:param bulk_size: The size of bulk index operations (number of documents per bulk).
:param ingest_percentage: A number between (0.0, 100.0] that defines how much of the whole corpus should be ingested.
:param id_conflicts: The type of id conflicts.
+ :param conflict_probability: A number between (0.0, 100.0] that defines the probability that a document is replaced by another one.
+ :param on_conflict: A string indicating which action should be taken on id conflicts (either "index" or "update").
:param pipeline: The name of the ingest pipeline to run.
:param original_params: The original dict passed to the parent parameter source.
"""
@@ -536,7 +552,7 @@ class PartitionBulkIndexParamSource:
self.id_conflicts = id_conflicts
self.pipeline = pipeline
self.internal_params = bulk_data_based(total_partitions, partition_index, corpora, batch_size,
- bulk_size, id_conflicts, pipeline, original_params)
+ bulk_size, id_conflicts, conflict_probability, on_conflict, pipeline, original_params)
def partition(self, partition_index, total_partitions):
raise exceptions.RallyError("Cannot partition a PartitionBulkIndexParamSource further")
@@ -593,18 +609,20 @@ def chain(*iterables):
yield element
-def create_default_reader(docs, offset, num_lines, num_docs, batch_size, bulk_size, id_conflicts):
+def create_default_reader(docs, offset, num_lines, num_docs, batch_size, bulk_size, id_conflicts, conflict_probability, on_conflict):
source = Slice(io.FileSource, offset, num_lines)
if docs.includes_action_and_meta_data:
am_handler = SourceActionMetaData(source)
else:
- am_handler = GenerateActionMetaData(docs.target_index, docs.target_type, build_conflicting_ids(id_conflicts, num_docs, offset))
+ am_handler = GenerateActionMetaData(docs.target_index, docs.target_type,
+ build_conflicting_ids(id_conflicts, num_docs, offset), conflict_probability, on_conflict)
return IndexDataReader(docs.document_file, batch_size, bulk_size, source, am_handler, docs.target_index, docs.target_type)
-def create_readers(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, create_reader):
+def create_readers(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, conflict_probability, on_conflict,
+ create_reader):
readers = []
for corpus in corpora:
for docs in corpus.documents:
@@ -613,7 +631,8 @@ def create_readers(num_clients, client_index, corpora, batch_size, bulk_size, id
if num_docs > 0:
logger.info("Task-relative client at index [%d] will bulk index [%d] docs starting from line offset [%d] for [%s/%s] "
"from corpus [%s]." % (client_index, num_docs, offset, docs.target_index, docs.target_type, corpus.name))
- readers.append(create_reader(docs, offset, num_lines, num_docs, batch_size, bulk_size, id_conflicts))
+ readers.append(create_reader(docs, offset, num_lines, num_docs, batch_size, bulk_size, id_conflicts, conflict_probability,
+ on_conflict))
else:
logger.info("Task-relative client at index [%d] skips [%s] (no documents to read)." % (client_index, corpus.name))
return readers
@@ -673,8 +692,8 @@ def bulk_generator(readers, client_index, pipeline, original_params):
yield params
-def bulk_data_based(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, pipeline, original_params,
- create_reader=create_default_reader):
+def bulk_data_based(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, conflict_probability, on_conflict, pipeline,
+ original_params, create_reader=create_default_reader):
"""
Calculates the necessary schedule for bulk operations.
@@ -684,22 +703,31 @@ def bulk_data_based(num_clients, client_index, corpora, batch_size, bulk_size, i
:param batch_size: The number of documents to read in one go.
:param bulk_size: The size of bulk index operations (number of documents per bulk).
:param id_conflicts: The type of id conflicts to simulate.
+ :param conflict_probability: A number between (0.0, 100.0] that defines the probability that a document is replaced by another one.
+ :param on_conflict: A string indicating which action should be taken on id conflicts (either "index" or "update").
:param pipeline: Name of the ingest pipeline to use. May be None.
:param original_params: A dict of original parameters that were passed from the track. They will be merged into the returned parameters.
:param create_reader: A function to create the index reader. By default a file based index reader will be created. This parameter is
intended for testing only.
:return: A generator for the bulk operations of the given client.
"""
- readers = create_readers(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, create_reader)
+ readers = create_readers(num_clients, client_index, corpora, batch_size, bulk_size, id_conflicts, conflict_probability, on_conflict,
+ create_reader)
return bulk_generator(chain(*readers), client_index, pipeline, original_params)
class GenerateActionMetaData:
- def __init__(self, index_name, type_name, conflicting_ids, rand=random.randint):
+ def __init__(self, index_name, type_name, conflicting_ids=None, conflict_probability=None, on_conflict=None,
+ rand=random.random, randint=random.randint):
self.index_name = index_name
self.type_name = type_name
self.conflicting_ids = conflicting_ids
+ self.on_conflict = on_conflict
+ # random() produces numbers between 0 and 1 and the user denotes the probability in percentage between 0 and 100.
+ self.conflict_probability = conflict_probability / 100.0 if conflict_probability else None
+
self.rand = rand
+ self.randint = randint
self.id_up_to = 0
def __iter__(self):
@@ -707,15 +735,22 @@ class GenerateActionMetaData:
def __next__(self):
if self.conflicting_ids is not None:
- # 25% of the time we replace a doc:
- if self.id_up_to > 0 and self.rand(0, 3) == 3:
- doc_id = self.conflicting_ids[self.rand(0, self.id_up_to - 1)]
+ if self.id_up_to > 0 and self.rand() <= self.conflict_probability:
+ doc_id = self.conflicting_ids[self.randint(0, self.id_up_to - 1)]
+ action = self.on_conflict
else:
doc_id = self.conflicting_ids[self.id_up_to]
self.id_up_to += 1
- return '{"index": {"_index": "%s", "_type": "%s", "_id": "%s"}}' % (self.index_name, self.type_name, doc_id)
+ action = "index"
+
+ if action == "index":
+ return "index", '{"index": {"_index": "%s", "_type": "%s", "_id": "%s"}}' % (self.index_name, self.type_name, doc_id)
+ elif action == "update":
+ return "update", '{"update": {"_index": "%s", "_type": "%s", "_id": "%s"}}' % (self.index_name, self.type_name, doc_id)
+ else:
+ raise exceptions.RallyAssertionError("Unknown action [{}]".format(action))
else:
- return '{"index": {"_index": "%s", "_type": "%s"}}' % (self.index_name, self.type_name)
+ return "index", '{"index": {"_index": "%s", "_type": "%s"}}' % (self.index_name, self.type_name)
class SourceActionMetaData:
@@ -726,7 +761,7 @@ class SourceActionMetaData:
return self
def __next__(self):
- return next(self.source)
+ return "source", next(self.source)
class Slice:
@@ -815,11 +850,16 @@ class IndexDataReader:
def read_bulk(self):
docs_in_bulk = 0
current_bulk = []
-
- for action_metadata_line, document in zip(self.action_metadata, self.file_source):
- if action_metadata_line:
+ for action_metadata_item, document in zip(self.action_metadata, self.file_source):
+ if action_metadata_item:
+ action_type, action_metadata_line = action_metadata_item
current_bulk.append(action_metadata_line)
- current_bulk.append(document)
+ if action_type == "update":
+ current_bulk.append("{\"doc\":%s}" % document)
+ else:
+ current_bulk.append(document)
+ else:
+ current_bulk.append(document)
docs_in_bulk += 1
if docs_in_bulk == self.bulk_size:
break
| Mechanism needed for benchmarking update-heavy workflows against create-only or index-only
Currently, rally can be configured to generate metadata that contains conflicting ids for 25% of the documents in a corpus. However, the action specified in the metadata is always `index`. Since the performance characteristics of `update` changed significantly in 5.0+, this seems like a blind spot in the current suite.
While I understand that metadata can be interleaved with the documents in a corpus instead of being generated by rally, side-by-side comparison of performance of the same corpus with conflicting ids would be considerably simpler if the create/update workflow could use generated metadata. | elastic/rally | diff --git a/tests/mechanic/launcher_test.py b/tests/mechanic/launcher_test.py
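The patch above addresses this by replacing the hard-coded 25% replacement rate with the `conflict-probability` and `on-conflict` track parameters, threaded down into `GenerateActionMetaData`. A minimal standalone sketch of that generator logic (simplified into a plain generator function; the function name and defaults here are illustrative, not part of the patch):

```python
import random

def action_meta_data(index_name, type_name, conflicting_ids=None,
                     conflict_probability=25, on_conflict="index",
                     rand=random.random, randint=random.randint):
    """Yield (action, metadata_line) pairs, mirroring the patched logic:
    the first occurrence of each id is always an 'index' action; afterwards,
    with probability conflict_probability (in percent), an already-seen id
    is re-used with the on_conflict action ('index' or 'update')."""
    probability = conflict_probability / 100.0
    id_up_to = 0  # number of ids handed out so far
    while True:
        if conflicting_ids is None:
            yield "index", '{"index": {"_index": "%s", "_type": "%s"}}' % (index_name, type_name)
        elif id_up_to > 0 and rand() <= probability:
            # conflict: re-use one of the ids we already emitted
            doc_id = conflicting_ids[randint(0, id_up_to - 1)]
            yield on_conflict, '{"%s": {"_index": "%s", "_type": "%s", "_id": "%s"}}' % (
                on_conflict, index_name, type_name, doc_id)
        else:
            doc_id = conflicting_ids[id_up_to]
            id_up_to += 1
            yield "index", '{"index": {"_index": "%s", "_type": "%s", "_id": "%s"}}' % (
                index_name, type_name, doc_id)
```

Injecting deterministic `rand`/`randint` doubles makes the conflict behavior testable, which is why the patch adds both as constructor parameters rather than calling `random` directly.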
index 7e9d89bf..8c54bfae 100644
--- a/tests/mechanic/launcher_test.py
+++ b/tests/mechanic/launcher_test.py
@@ -1,6 +1,6 @@
-from unittest import TestCase
+from unittest import TestCase, mock
-from esrally import config
+from esrally import config, exceptions
from esrally.mechanic import launcher
@@ -11,14 +11,15 @@ class MockMetricsStore:
class MockClientFactory:
def __init__(self, hosts, client_options):
- pass
+ self.client_options = client_options
def create(self):
- return MockClient()
+ return MockClient(self.client_options)
class MockClient:
- def __init__(self):
+ def __init__(self, client_options):
+ self.client_options = client_options
self.cluster = SubClient({
"cluster_name": "rally-benchmark-cluster",
"nodes": {
@@ -54,8 +55,14 @@ class MockClient:
}
def info(self):
+ if self.client_options.get("raise-error-on-info", False):
+ import elasticsearch
+ raise elasticsearch.ConnectionError("Unittest error")
return self._info
+ def search(self, *args, **kwargs):
+ return {}
+
class SubClient:
def __init__(self, info):
@@ -73,7 +80,7 @@ class ExternalLauncherTests(TestCase):
cfg = config.Config()
cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
- cfg.add(config.Scope.application, "client", "options", [])
+ cfg.add(config.Scope.application, "client", "options", {})
m = launcher.ExternalLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
m.start()
@@ -85,10 +92,60 @@ class ExternalLauncherTests(TestCase):
cfg = config.Config()
cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
- cfg.add(config.Scope.application, "client", "options", [])
+ cfg.add(config.Scope.application, "client", "options", {})
cfg.add(config.Scope.application, "mechanic", "distribution.version", "2.3.3")
m = launcher.ExternalLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
m.start()
# did not change user defined value
self.assertEqual(cfg.opts("mechanic", "distribution.version"), "2.3.3")
+
+
+class ClusterLauncherTests(TestCase):
+ def test_launches_cluster_with_post_launch_handler(self):
+ on_post_launch = mock.Mock()
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ cfg.add(config.Scope.application, "client", "options", {})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(),
+ on_post_launch=on_post_launch, client_factory_class=MockClientFactory)
+ cluster = cluster_launcher.start()
+
+ self.assertEqual(["10.0.0.10:9200", "10.0.0.11:9200"], cluster.hosts)
+ self.assertIsNotNone(cluster.telemetry)
+ # this requires at least Python 3.6
+ # on_post_launch.assert_called_once()
+ self.assertEqual(1, on_post_launch.call_count)
+
+ def test_launches_cluster_without_post_launch_handler(self):
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ cfg.add(config.Scope.application, "client", "options", {})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
+ cluster = cluster_launcher.start()
+
+ self.assertEqual(["10.0.0.10:9200", "10.0.0.11:9200"], cluster.hosts)
+ self.assertIsNotNone(cluster.telemetry)
+
+ @mock.patch("time.sleep")
+ def test_error_on_cluster_launch(self, sleep):
+ on_post_launch = mock.Mock()
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ # Simulate that the client will raise an error upon startup
+ cfg.add(config.Scope.application, "client", "options", {"raise-error-on-info": True})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(),
+ on_post_launch=on_post_launch, client_factory_class=MockClientFactory)
+ with self.assertRaisesRegex(exceptions.LaunchError,
+ "Elasticsearch REST API layer is not available. Forcefully terminated cluster."):
+ cluster_launcher.start()
+ on_post_launch.assert_not_called()
\ No newline at end of file
diff --git a/tests/mechanic/provisioner_test.py b/tests/mechanic/provisioner_test.py
index 7c29095a..b06ead1f 100644
--- a/tests/mechanic/provisioner_test.py
+++ b/tests/mechanic/provisioner_test.py
@@ -439,60 +439,11 @@ class PluginInstallerTests(TestCase):
installer = provisioner.PluginInstaller(plugin, hook_handler_class=PluginInstallerTests.NoopHookHandler)
self.assertEqual(0, len(installer.hook_handler.hook_calls))
- installer.invoke_install_hook(provisioner.ProvisioningPhase.post_install, {"foo": "bar"})
+ installer.invoke_install_hook(team.PluginBootstrapPhase.post_install, {"foo": "bar"})
self.assertEqual(1, len(installer.hook_handler.hook_calls))
self.assertEqual({"foo": "bar"}, installer.hook_handler.hook_calls["post_install"])
-class InstallHookHandlerTests(TestCase):
- class UnitTestComponentLoader:
- def __init__(self, root_path, component_entry_point, recurse):
- self.root_path = root_path
- self.component_entry_point = component_entry_point
- self.recurse = recurse
- self.registration_function = None
-
- def load(self):
- return self.registration_function
-
- class UnitTestHook:
- def __init__(self, phase="post_install"):
- self.phase = phase
- self.call_counter = 0
-
- def post_install_hook(self, config_names, variables, **kwargs):
- self.call_counter += variables["increment"]
-
- def register(self, handler):
- # we can register multiple hooks here
- handler.register(self.phase, self.post_install_hook)
- handler.register(self.phase, self.post_install_hook)
-
- def test_loads_module(self):
- plugin = team.PluginDescriptor("unittest-plugin")
- hook = InstallHookHandlerTests.UnitTestHook()
- handler = provisioner.InstallHookHandler(plugin, loader_class=InstallHookHandlerTests.UnitTestComponentLoader)
-
- handler.loader.registration_function = hook
- handler.load()
-
- handler.invoke("post_install", {"increment": 4})
-
- # we registered our hook twice. Check that it has been called twice.
- self.assertEqual(hook.call_counter, 2 * 4)
-
- def test_cannot_register_for_unknown_phase(self):
- plugin = team.PluginDescriptor("unittest-plugin")
- hook = InstallHookHandlerTests.UnitTestHook(phase="this_is_an_unknown_install_phase")
- handler = provisioner.InstallHookHandler(plugin, loader_class=InstallHookHandlerTests.UnitTestComponentLoader)
-
- handler.loader.registration_function = hook
- with self.assertRaises(exceptions.SystemSetupError) as ctx:
- handler.load()
- self.assertEqual("Provisioning phase [this_is_an_unknown_install_phase] is unknown. Valid phases are: ['post_install'].",
- ctx.exception.args[0])
-
-
class DockerProvisionerTests(TestCase):
@mock.patch("esrally.utils.sysstats.total_memory")
@mock.patch("uuid.uuid4")
diff --git a/tests/mechanic/team_test.py b/tests/mechanic/team_test.py
index 37f80635..b8a94ecb 100644
--- a/tests/mechanic/team_test.py
+++ b/tests/mechanic/team_test.py
@@ -118,3 +118,52 @@ class PluginLoaderTests(TestCase):
"var": "0",
"hello": "true"
}, plugin.variables)
+
+
+class PluginBootstrapHookHandlerTests(TestCase):
+ class UnitTestComponentLoader:
+ def __init__(self, root_path, component_entry_point, recurse):
+ self.root_path = root_path
+ self.component_entry_point = component_entry_point
+ self.recurse = recurse
+ self.registration_function = None
+
+ def load(self):
+ return self.registration_function
+
+ class UnitTestHook:
+ def __init__(self, phase="post_install"):
+ self.phase = phase
+ self.call_counter = 0
+
+ def post_install_hook(self, config_names, variables, **kwargs):
+ self.call_counter += variables["increment"]
+
+ def register(self, handler):
+ # we can register multiple hooks here
+ handler.register(self.phase, self.post_install_hook)
+ handler.register(self.phase, self.post_install_hook)
+
+ def test_loads_module(self):
+ plugin = team.PluginDescriptor("unittest-plugin")
+ hook = PluginBootstrapHookHandlerTests.UnitTestHook()
+ handler = team.PluginBootstrapHookHandler(plugin, loader_class=PluginBootstrapHookHandlerTests.UnitTestComponentLoader)
+
+ handler.loader.registration_function = hook
+ handler.load()
+
+ handler.invoke("post_install", variables={"increment": 4})
+
+ # we registered our hook twice. Check that it has been called twice.
+ self.assertEqual(hook.call_counter, 2 * 4)
+
+ def test_cannot_register_for_unknown_phase(self):
+ plugin = team.PluginDescriptor("unittest-plugin")
+ hook = PluginBootstrapHookHandlerTests.UnitTestHook(phase="this_is_an_unknown_install_phase")
+ handler = team.PluginBootstrapHookHandler(plugin, loader_class=PluginBootstrapHookHandlerTests.UnitTestComponentLoader)
+
+ handler.loader.registration_function = hook
+ with self.assertRaises(exceptions.SystemSetupError) as ctx:
+ handler.load()
+ self.assertEqual("Phase [this_is_an_unknown_install_phase] is unknown. Valid phases are: ['post_install', 'post_launch'].",
+ ctx.exception.args[0])
diff --git a/tests/mechanic/telemetry_test.py b/tests/mechanic/telemetry_test.py
index a6266c0c..63d9b1d4 100644
--- a/tests/mechanic/telemetry_test.py
+++ b/tests/mechanic/telemetry_test.py
@@ -3,7 +3,7 @@ import collections
import unittest.mock as mock
from unittest import TestCase
-from esrally import config, metrics
+from esrally import config, metrics, exceptions
from esrally.mechanic import telemetry, team, cluster
@@ -200,6 +200,340 @@ class GcTests(TestCase):
env["ES_JAVA_OPTS"])
+class NodeStatsRecorderTests(TestCase):
+ def test_negative_sample_interval_forbidden(self):
+ client = Client()
+ cfg = create_config()
+ metrics_store = metrics.EsMetricsStore(cfg)
+ telemetry_params = {
+ "node-stats-sample-interval": -1 * random.random()
+ }
+ with self.assertRaisesRegex(exceptions.SystemSetupError,
+ "The telemetry parameter 'node-stats-sample-interval' must be greater than zero but was .*\."):
+ telemetry.NodeStatsRecorder(telemetry_params, client, metrics_store=metrics_store)
+
+ @mock.patch("esrally.metrics.EsMetricsStore.put_count_node_level")
+ @mock.patch("esrally.metrics.EsMetricsStore.put_value_node_level")
+ def test_stores_default_nodes_stats(self, metrics_store_put_value, metrics_store_put_count):
+ node_stats_response = {
+ "cluster_name" : "elasticsearch",
+ "nodes" : {
+ "Zbl_e8EyRXmiR47gbHgPfg" : {
+ "timestamp" : 1524379617017,
+ "name" : "rally0",
+ "transport_address" : "127.0.0.1:9300",
+ "host" : "127.0.0.1",
+ "ip" : "127.0.0.1:9300",
+ "roles" : [
+ "master",
+ "data",
+ "ingest"
+ ],
+ "indices" : {
+ "docs" : {
+ "count" : 0,
+ "deleted" : 0
+ },
+ "store" : {
+ "size_in_bytes" : 0
+ },
+ "indexing" : {
+ "is_throttled" : False,
+ "throttle_time_in_millis" : 0
+ },
+ "search" : {
+ "open_contexts" : 0,
+ "query_total" : 0,
+ "query_time_in_millis" : 0
+ },
+ "merges" : {
+ "current" : 0,
+ "current_docs" : 0,
+ "current_size_in_bytes" : 0
+ },
+ "query_cache" : {
+ "memory_size_in_bytes" : 0,
+ "total_count" : 0,
+ "hit_count" : 0,
+ "miss_count" : 0,
+ "cache_size" : 0,
+ "cache_count" : 0,
+ "evictions" : 0
+ },
+ "completion" : {
+ "size_in_bytes" : 0
+ },
+ "segments" : {
+ "count" : 0,
+ "memory_in_bytes" : 0,
+ "max_unsafe_auto_id_timestamp" : -9223372036854775808,
+ "file_sizes" : { }
+ },
+ "translog" : {
+ "operations" : 0,
+ "size_in_bytes" : 0,
+ "uncommitted_operations" : 0,
+ "uncommitted_size_in_bytes" : 0
+ },
+ "request_cache" : {
+ "memory_size_in_bytes" : 0,
+ "evictions" : 0,
+ "hit_count" : 0,
+ "miss_count" : 0
+ },
+ "recovery" : {
+ "current_as_source" : 0,
+ "current_as_target" : 0,
+ "throttle_time_in_millis" : 0
+ }
+ },
+ "jvm" : {
+ "buffer_pools" : {
+ "mapped" : {
+ "count" : 7,
+ "used_in_bytes" : 3120,
+ "total_capacity_in_bytes" : 9999
+ },
+ "direct" : {
+ "count" : 6,
+ "used_in_bytes" : 73868,
+ "total_capacity_in_bytes" : 73867
+ }
+ },
+ "classes" : {
+ "current_loaded_count" : 9992,
+ "total_loaded_count" : 9992,
+ "total_unloaded_count" : 0
+ }
+ },
+ "thread_pool" : {
+ "generic" : {
+ "threads" : 4,
+ "queue" : 0,
+ "active" : 0,
+ "rejected" : 0,
+ "largest" : 4,
+ "completed" : 8
+ }
+ },
+ "breakers" : {
+ "parent" : {
+ "limit_size_in_bytes" : 726571417,
+ "limit_size" : "692.9mb",
+ "estimated_size_in_bytes" : 0,
+ "estimated_size" : "0b",
+ "overhead" : 1.0,
+ "tripped" : 0
+ }
+ }
+ }
+ }
+ }
+
+ client = Client(nodes=SubClient(stats=node_stats_response))
+ cfg = create_config()
+ metrics_store = metrics.EsMetricsStore(cfg)
+ telemetry_params = {}
+ recorder = telemetry.NodeStatsRecorder(telemetry_params, client, metrics_store=metrics_store)
+ recorder.record()
+
+ metrics_store_put_count.assert_has_calls([
+ mock.call(node_name="rally0", name="thread_pool_generic_threads", count=4),
+ mock.call(node_name="rally0", name="thread_pool_generic_queue", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_active", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_rejected", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_largest", count=4),
+ mock.call(node_name="rally0", name="thread_pool_generic_completed", count=8),
+ mock.call(node_name="rally0", name="breaker_parent_overhead", count=1.0),
+ mock.call(node_name="rally0", name="breaker_parent_tripped", count=0),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_count", count=7),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_count", count=6),
+ ], any_order=True)
+
+ metrics_store_put_value.assert_has_calls([
+ mock.call(node_name="rally0", name="breaker_parent_limit_size_in_bytes", value=726571417, unit="byte"),
+ mock.call(node_name="rally0", name="breaker_parent_estimated_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_used_in_bytes", value=3120, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_total_capacity_in_bytes", value=9999, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_used_in_bytes", value=73868, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_total_capacity_in_bytes", value=73867, unit="byte"),
+ ], any_order=True)
+
+ @mock.patch("esrally.metrics.EsMetricsStore.put_count_node_level")
+ @mock.patch("esrally.metrics.EsMetricsStore.put_value_node_level")
+ def test_stores_all_nodes_stats(self, metrics_store_put_value, metrics_store_put_count):
+ node_stats_response = {
+ "cluster_name" : "elasticsearch",
+ "nodes" : {
+ "Zbl_e8EyRXmiR47gbHgPfg" : {
+ "timestamp" : 1524379617017,
+ "name" : "rally0",
+ "transport_address" : "127.0.0.1:9300",
+ "host" : "127.0.0.1",
+ "ip" : "127.0.0.1:9300",
+ "roles" : [
+ "master",
+ "data",
+ "ingest"
+ ],
+ "indices" : {
+ "docs" : {
+ "count" : 0,
+ "deleted" : 0
+ },
+ "store" : {
+ "size_in_bytes" : 0
+ },
+ "indexing" : {
+ "is_throttled" : False,
+ "throttle_time_in_millis" : 0
+ },
+ "search" : {
+ "open_contexts" : 0,
+ "query_total" : 0,
+ "query_time_in_millis" : 0
+ },
+ "merges" : {
+ "current" : 0,
+ "current_docs" : 0,
+ "current_size_in_bytes" : 0
+ },
+ "query_cache" : {
+ "memory_size_in_bytes" : 0,
+ "total_count" : 0,
+ "hit_count" : 0,
+ "miss_count" : 0,
+ "cache_size" : 0,
+ "cache_count" : 0,
+ "evictions" : 0
+ },
+ "completion" : {
+ "size_in_bytes" : 0
+ },
+ "segments" : {
+ "count" : 0,
+ "memory_in_bytes" : 0,
+ "max_unsafe_auto_id_timestamp" : -9223372036854775808,
+ "file_sizes" : { }
+ },
+ "translog" : {
+ "operations" : 0,
+ "size_in_bytes" : 0,
+ "uncommitted_operations" : 0,
+ "uncommitted_size_in_bytes" : 0
+ },
+ "request_cache" : {
+ "memory_size_in_bytes" : 0,
+ "evictions" : 0,
+ "hit_count" : 0,
+ "miss_count" : 0
+ },
+ "recovery" : {
+ "current_as_source" : 0,
+ "current_as_target" : 0,
+ "throttle_time_in_millis" : 0
+ }
+ },
+ "jvm" : {
+ "buffer_pools" : {
+ "mapped" : {
+ "count" : 7,
+ "used_in_bytes" : 3120,
+ "total_capacity_in_bytes" : 9999
+ },
+ "direct" : {
+ "count" : 6,
+ "used_in_bytes" : 73868,
+ "total_capacity_in_bytes" : 73867
+ }
+ },
+ "classes" : {
+ "current_loaded_count" : 9992,
+ "total_loaded_count" : 9992,
+ "total_unloaded_count" : 0
+ }
+ },
+ "thread_pool" : {
+ "generic" : {
+ "threads" : 4,
+ "queue" : 0,
+ "active" : 0,
+ "rejected" : 0,
+ "largest" : 4,
+ "completed" : 8
+ }
+ },
+ "breakers" : {
+ "parent" : {
+ "limit_size_in_bytes" : 726571417,
+ "limit_size" : "692.9mb",
+ "estimated_size_in_bytes" : 0,
+ "estimated_size" : "0b",
+ "overhead" : 1.0,
+ "tripped" : 0
+ }
+ }
+ }
+ }
+ }
+
+ client = Client(nodes=SubClient(stats=node_stats_response))
+ cfg = create_config()
+ metrics_store = metrics.EsMetricsStore(cfg)
+ telemetry_params = {
+ "node-stats-include-indices": True
+ }
+ recorder = telemetry.NodeStatsRecorder(telemetry_params, client, metrics_store=metrics_store)
+ recorder.record()
+
+ metrics_store_put_count.assert_has_calls([
+ mock.call(node_name="rally0", name="indices_search_open_contexts", count=0),
+ mock.call(node_name="rally0", name="indices_search_query_total", count=0),
+ mock.call(node_name="rally0", name="indices_merges_current", count=0),
+ mock.call(node_name="rally0", name="indices_merges_current_docs", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_total_count", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_hit_count", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_miss_count", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_cache_size", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_cache_count", count=0),
+ mock.call(node_name="rally0", name="indices_query_cache_evictions", count=0),
+ mock.call(node_name="rally0", name="indices_segments_count", count=0),
+ mock.call(node_name="rally0", name="indices_segments_max_unsafe_auto_id_timestamp", count=-9223372036854775808),
+ mock.call(node_name="rally0", name="indices_translog_operations", count=0),
+ mock.call(node_name="rally0", name="indices_translog_uncommitted_operations", count=0),
+ mock.call(node_name="rally0", name="indices_request_cache_evictions", count=0),
+ mock.call(node_name="rally0", name="indices_request_cache_hit_count", count=0),
+ mock.call(node_name="rally0", name="indices_request_cache_miss_count", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_threads", count=4),
+ mock.call(node_name="rally0", name="thread_pool_generic_queue", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_active", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_rejected", count=0),
+ mock.call(node_name="rally0", name="thread_pool_generic_largest", count=4),
+ mock.call(node_name="rally0", name="thread_pool_generic_completed", count=8),
+ mock.call(node_name="rally0", name="breaker_parent_overhead", count=1.0),
+ mock.call(node_name="rally0", name="breaker_parent_tripped", count=0),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_count", count=7),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_count", count=6),
+ ], any_order=True)
+
+ metrics_store_put_value.assert_has_calls([
+ mock.call(node_name="rally0", name="indices_indexing_throttle_time_in_millis", value=0, unit="ms"),
+ mock.call(node_name="rally0", name="indices_search_query_time_in_millis", value=0, unit="ms"),
+ mock.call(node_name="rally0", name="indices_merges_current_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="indices_query_cache_memory_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="indices_segments_memory_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="indices_translog_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="indices_translog_uncommitted_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="indices_request_cache_memory_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="breaker_parent_limit_size_in_bytes", value=726571417, unit="byte"),
+ mock.call(node_name="rally0", name="breaker_parent_estimated_size_in_bytes", value=0, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_used_in_bytes", value=3120, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_mapped_total_capacity_in_bytes", value=9999, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_used_in_bytes", value=73868, unit="byte"),
+ mock.call(node_name="rally0", name="jvm_buffer_pool_direct_total_capacity_in_bytes", value=73867, unit="byte"),
+ ], any_order=True)
+
+
class ClusterEnvironmentInfoTests(TestCase):
@mock.patch("esrally.metrics.EsMetricsStore.add_meta_info")
def test_stores_cluster_level_metrics_on_attach(self, metrics_store_add_meta_info):
diff --git a/tests/track/params_test.py b/tests/track/params_test.py
index 18328575..39976959 100644
--- a/tests/track/params_test.py
+++ b/tests/track/params_test.py
@@ -1,3 +1,4 @@
+import random
from unittest import TestCase
from esrally import exceptions
@@ -100,31 +101,53 @@ class ConflictingIdsBuilderTests(TestCase):
class ActionMetaDataTests(TestCase):
def test_generate_action_meta_data_without_id_conflicts(self):
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type"}}',
- next(params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=None)))
+ self.assertEqual(("index", '{"index": {"_index": "test_index", "_type": "test_type"}}'),
+ next(params.GenerateActionMetaData("test_index", "test_type")))
def test_generate_action_meta_data_with_id_conflicts(self):
- pseudo_random_sequence = iter([
- # first column == 3 -> we'll draw a "random" id, second column == "random" id
- 3, 1,
- 3, 3,
- 3, 2,
- 0,
- 3, 0])
-
- generator = params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=[100, 200, 300, 400],
- rand=lambda x, y: next(pseudo_random_sequence))
-
- # first one is always not drawn from a random index
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "100"}}', next(generator))
+ def idx(id):
+ return "index", '{"index": {"_index": "test_index", "_type": "test_type", "_id": "%s"}}' % id
+
+ def conflict(action, id):
+ return action, '{"%s": {"_index": "test_index", "_type": "test_type", "_id": "%s"}}' % (action, id)
+
+ pseudo_random_conflicts = iter([
+ # if this value is <= our chosen threshold of 0.25 (see conflict_probability) we produce a conflict.
+ 0.2,
+ 0.25,
+ 0.2,
+ # no conflict
+ 0.3,
+ # conflict again
+ 0.0
+ ])
+
+ chosen_index_of_conflicting_ids = iter([
+ # the "random" index of the id in the array `conflicting_ids` that will produce a conflict
+ 1,
+ 3,
+ 2,
+ 0])
+
+ conflict_action = random.choice(["index", "update"])
+
+ generator = params.GenerateActionMetaData("test_index", "test_type",
+ conflicting_ids=[100, 200, 300, 400],
+ conflict_probability=25,
+ on_conflict=conflict_action,
+ rand=lambda: next(pseudo_random_conflicts),
+ randint=lambda x, y: next(chosen_index_of_conflicting_ids))
+
+ # first one is always *not* drawn from a random index
+ self.assertEqual(idx("100"), next(generator))
# now we start using random ids, i.e. look in the first line of the pseudo-random sequence
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "200"}}', next(generator))
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "400"}}', next(generator))
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "300"}}', next(generator))
- # "random" returns 0 instead of 3 -> we draw the next sequential one, which is 200
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "200"}}', next(generator))
+ self.assertEqual(conflict(conflict_action, "200"), next(generator))
+ self.assertEqual(conflict(conflict_action, "400"), next(generator))
+ self.assertEqual(conflict(conflict_action, "300"), next(generator))
+ # no conflict -> we draw the next sequential one, which is 200
+ self.assertEqual(idx("200"), next(generator))
# and we're back to random
- self.assertEqual('{"index": {"_index": "test_index", "_type": "test_type", "_id": "100"}}', next(generator))
+ self.assertEqual(conflict(conflict_action, "100"), next(generator))
def test_source_file_action_meta_data(self):
source = params.Slice(io.StringAsFileSource, 0, 5)
@@ -139,7 +162,7 @@ class ActionMetaDataTests(TestCase):
]
source.open(data, "r")
- self.assertEqual(data, list(generator))
+ self.assertEqual([("source", doc) for doc in data], list(generator))
source.close()
@@ -155,7 +178,7 @@ class IndexDataReaderTests(TestCase):
bulk_size = 50
source = params.Slice(io.StringAsFileSource, 0, len(data))
- am_handler = params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=None)
+ am_handler = params.GenerateActionMetaData("test_index", "test_type")
reader = params.IndexDataReader(data, batch_size=bulk_size, bulk_size=bulk_size, file_source=source, action_metadata=am_handler,
index_name="test_index", type_name="test_type")
@@ -176,7 +199,7 @@ class IndexDataReaderTests(TestCase):
bulk_size = 50
source = params.Slice(io.StringAsFileSource, 3, len(data))
- am_handler = params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=None)
+ am_handler = params.GenerateActionMetaData("test_index", "test_type")
reader = params.IndexDataReader(data, batch_size=bulk_size, bulk_size=bulk_size, file_source=source, action_metadata=am_handler,
index_name="test_index", type_name="test_type")
@@ -199,7 +222,7 @@ class IndexDataReaderTests(TestCase):
bulk_size = 3
source = params.Slice(io.StringAsFileSource, 0, len(data))
- am_handler = params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=None)
+ am_handler = params.GenerateActionMetaData("test_index", "test_type")
reader = params.IndexDataReader(data, batch_size=bulk_size, bulk_size=bulk_size, file_source=source, action_metadata=am_handler,
index_name="test_index", type_name="test_type")
@@ -223,7 +246,7 @@ class IndexDataReaderTests(TestCase):
# only 5 documents to index for this client
source = params.Slice(io.StringAsFileSource, 0, 5)
- am_handler = params.GenerateActionMetaData("test_index", "test_type", conflicting_ids=None)
+ am_handler = params.GenerateActionMetaData("test_index", "test_type")
reader = params.IndexDataReader(data, batch_size=bulk_size, bulk_size=bulk_size, file_source=source, action_metadata=am_handler,
index_name="test_index", type_name="test_type")
@@ -263,6 +286,69 @@ class IndexDataReaderTests(TestCase):
expected_line_sizes = [6, 6, 2]
self.assert_bulks_sized(reader, expected_bulk_sizes, expected_line_sizes)
+ def test_read_bulk_with_id_conflicts(self):
+ pseudo_random_conflicts = iter([
+ # if this value is <= our chosen threshold of 0.25 (see conflict_probability) we produce a conflict.
+ 0.2,
+ 0.25,
+ 0.2,
+ # no conflict
+ 0.3
+ ])
+
+ chosen_index_of_conflicting_ids = iter([
+ # the "random" index of the id in the array `conflicting_ids` that will produce a conflict
+ 1,
+ 3,
+ 2])
+
+ data = [
+ '{"key": "value1"}',
+ '{"key": "value2"}',
+ '{"key": "value3"}',
+ '{"key": "value4"}',
+ '{"key": "value5"}'
+ ]
+ bulk_size = 2
+
+ source = params.Slice(io.StringAsFileSource, 0, len(data))
+ am_handler = params.GenerateActionMetaData("test_index", "test_type",
+ conflicting_ids=[100, 200, 300, 400],
+ conflict_probability=25,
+ on_conflict="update",
+ rand=lambda: next(pseudo_random_conflicts),
+ randint=lambda x, y: next(chosen_index_of_conflicting_ids))
+
+ reader = params.IndexDataReader(data, batch_size=bulk_size, bulk_size=bulk_size, file_source=source, action_metadata=am_handler,
+ index_name="test_index", type_name="test_type")
+
+ # consume all bulks
+ bulks = []
+ with reader:
+ for index, type, batch in reader:
+ for bulk_size, bulk in batch:
+ bulks.append(bulk)
+
+ self.assertEqual([
+ [
+ '{"index": {"_index": "test_index", "_type": "test_type", "_id": "100"}}',
+ '{"key": "value1"}',
+ '{"update": {"_index": "test_index", "_type": "test_type", "_id": "200"}}',
+ '{"doc":{"key": "value2"}}'
+ ],
+ [
+ '{"update": {"_index": "test_index", "_type": "test_type", "_id": "400"}}',
+ '{"doc":{"key": "value3"}}',
+ '{"update": {"_index": "test_index", "_type": "test_type", "_id": "300"}}',
+ '{"doc":{"key": "value4"}}'
+ ],
+ [
+ '{"index": {"_index": "test_index", "_type": "test_type", "_id": "200"}}',
+ '{"key": "value5"}'
+ ]
+
+ ], bulks)
+
def assert_bulks_sized(self, reader, expected_bulk_sizes, expected_line_sizes):
with reader:
bulk_index = 0
@@ -475,6 +561,15 @@ class BulkIndexParamSourceTests(TestCase):
self.assertEqual("Unknown 'conflicts' setting [crazy]", ctx.exception.args[0])
+ def test_create_with_unknown_on_conflict_setting(self):
+ with self.assertRaises(exceptions.InvalidSyntax) as ctx:
+ params.BulkIndexParamSource(track=track.Track(name="unit-test"), params={
+ "conflicts": "sequential",
+ "on-conflict": "delete"
+ })
+
+ self.assertEqual("Unknown 'on-conflict' setting [delete]", ctx.exception.args[0])
+
def test_create_with_ingest_percentage_too_low(self):
with self.assertRaises(exceptions.InvalidSyntax) as ctx:
params.BulkIndexParamSource(track=track.Track(name="unit-test"), params={
@@ -690,7 +785,9 @@ class BulkDataGeneratorTests(TestCase):
])
bulks = params.bulk_data_based(num_clients=1, client_index=0, corpora=[corpus],
- batch_size=5, bulk_size=5, id_conflicts=params.IndexIdConflict.NoConflicts, pipeline=None,
+ batch_size=5, bulk_size=5,
+ id_conflicts=params.IndexIdConflict.NoConflicts, conflict_probability=None, on_conflict=None,
+ pipeline=None,
original_params={
"my-custom-parameter": "foo",
"my-custom-parameter-2": True
@@ -748,7 +845,9 @@ class BulkDataGeneratorTests(TestCase):
]
bulks = params.bulk_data_based(num_clients=1, client_index=0, corpora=corpora,
- batch_size=5, bulk_size=5, id_conflicts=params.IndexIdConflict.NoConflicts, pipeline=None,
+ batch_size=5, bulk_size=5,
+ id_conflicts=params.IndexIdConflict.NoConflicts, conflict_probability=None, on_conflict=None,
+ pipeline=None,
original_params={
"my-custom-parameter": "foo",
"my-custom-parameter-2": True
@@ -802,7 +901,8 @@ class BulkDataGeneratorTests(TestCase):
])
bulks = params.bulk_data_based(num_clients=1, client_index=0, corpora=[corpus], batch_size=3, bulk_size=3,
- id_conflicts=params.IndexIdConflict.NoConflicts, pipeline=None,
+ id_conflicts=params.IndexIdConflict.NoConflicts, conflict_probability=None, on_conflict=None,
+ pipeline=None,
original_params={
"body": "foo",
"custom-param": "bar"
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 8
} | 0.10 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-benchmark"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
elasticsearch==6.2.0
-e git+https://github.com/elastic/rally.git@6ce036c1e92f9badbe839b85102096a99d0e5b83#egg=esrally
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==2.9.5
jsonschema==2.5.1
MarkupSafe==2.0.1
packaging==21.3
pluggy==1.0.0
psutil==5.4.0
py==1.11.0
py-cpuinfo==3.2.0
pyparsing==3.1.4
pytest==7.0.1
pytest-benchmark==3.4.1
tabulate==0.8.1
thespian==3.9.2
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.22
zipp==3.6.0
| name: rally
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- elasticsearch==6.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==2.9.5
- jsonschema==2.5.1
- markupsafe==2.0.1
- packaging==21.3
- pluggy==1.0.0
- psutil==5.4.0
- py==1.11.0
- py-cpuinfo==3.2.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-benchmark==3.4.1
- tabulate==0.8.1
- thespian==3.9.2
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.22
- zipp==3.6.0
prefix: /opt/conda/envs/rally
| [
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_error_on_cluster_launch",
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_launches_cluster_with_post_launch_handler",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_invokes_hook",
"tests/mechanic/team_test.py::PluginBootstrapHookHandlerTests::test_cannot_register_for_unknown_phase",
"tests/mechanic/team_test.py::PluginBootstrapHookHandlerTests::test_loads_module",
"tests/mechanic/telemetry_test.py::NodStatsRecorderTests::test_negative_sample_interval_forbidden",
"tests/mechanic/telemetry_test.py::NodStatsRecorderTests::test_stores_all_nodes_stats",
"tests/mechanic/telemetry_test.py::NodStatsRecorderTests::test_stores_default_nodes_stats",
"tests/track/params_test.py::ActionMetaDataTests::test_generate_action_meta_data_with_id_conflicts",
"tests/track/params_test.py::ActionMetaDataTests::test_generate_action_meta_data_without_id_conflicts",
"tests/track/params_test.py::ActionMetaDataTests::test_source_file_action_meta_data",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulk_larger_than_number_of_docs",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulk_smaller_than_number_of_docs",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulk_smaller_than_number_of_docs_and_multiple_clients",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulk_with_id_conflicts",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulk_with_offset",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_unknown_on_conflict_setting",
"tests/track/params_test.py::BulkDataGeneratorTests::test_generate_bulks_from_multiple_corpora",
"tests/track/params_test.py::BulkDataGeneratorTests::test_generate_two_bulks",
"tests/track/params_test.py::BulkDataGeneratorTests::test_internal_params_take_precedence"
]
| []
| [
"tests/mechanic/launcher_test.py::ExternalLauncherTests::test_setup_external_cluster_multiple_nodes",
"tests/mechanic/launcher_test.py::ExternalLauncherTests::test_setup_external_cluster_single_node",
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_launches_cluster_without_post_launch_handler",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_distribution_ge_63_with_plugins",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_distribution_lt_63_with_plugins",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_without_plugins",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_cleanup",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_cleanup_nothing_on_preserve",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_prepare_default_data_paths",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_prepare_user_provided_data_path",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_successfully",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_with_io_error",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_with_unknown_error",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_unknown_plugin",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_pass_plugin_properties",
"tests/mechanic/provisioner_test.py::DockerProvisionerTests::test_provisioning",
"tests/mechanic/team_test.py::CarLoaderTests::test_lists_car_names",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_car_with_mixin_multiple_config_bases",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_car_with_mixin_single_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_known_car",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_empty_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_missing_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_unknown_car",
"tests/mechanic/team_test.py::PluginLoaderTests::test_cannot_load_community_plugin_with_missing_config",
"tests/mechanic/team_test.py::PluginLoaderTests::test_cannot_load_plugin_with_missing_config",
"tests/mechanic/team_test.py::PluginLoaderTests::test_lists_plugins",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_community_plugin_without_configuration",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_configured_plugin",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_core_plugin",
"tests/mechanic/telemetry_test.py::TelemetryTests::test_merges_options_set_by_different_devices",
"tests/mechanic/telemetry_test.py::StartupTimeTests::test_store_calculated_metrics",
"tests/mechanic/telemetry_test.py::MergePartsDeviceTests::test_store_calculated_metrics",
"tests/mechanic/telemetry_test.py::MergePartsDeviceTests::test_store_nothing_if_no_metrics_present",
"tests/mechanic/telemetry_test.py::JfrTests::test_sets_options_for_java_9_or_above_custom_recording_template",
"tests/mechanic/telemetry_test.py::JfrTests::test_sets_options_for_java_9_or_above_default_recording_template",
"tests/mechanic/telemetry_test.py::JfrTests::test_sets_options_for_pre_java_9_custom_recording_template",
"tests/mechanic/telemetry_test.py::JfrTests::test_sets_options_for_pre_java_9_default_recording_template",
"tests/mechanic/telemetry_test.py::GcTests::test_sets_options_for_java_9_or_above",
"tests/mechanic/telemetry_test.py::GcTests::test_sets_options_for_pre_java_9",
"tests/mechanic/telemetry_test.py::ClusterEnvironmentInfoTests::test_stores_cluster_level_metrics_on_attach",
"tests/mechanic/telemetry_test.py::NodeEnvironmentInfoTests::test_stores_node_level_metrics_on_attach",
"tests/mechanic/telemetry_test.py::ExternalEnvironmentInfoTests::test_fallback_when_host_not_available",
"tests/mechanic/telemetry_test.py::ExternalEnvironmentInfoTests::test_stores_all_node_metrics_on_attach",
"tests/mechanic/telemetry_test.py::ClusterMetaDataInfoTests::test_enriches_cluster_nodes_for_elasticsearch_1_x",
"tests/mechanic/telemetry_test.py::ClusterMetaDataInfoTests::test_enriches_cluster_nodes_for_elasticsearch_after_1_x",
"tests/mechanic/telemetry_test.py::GcTimesSummaryTests::test_stores_only_diff_of_gc_times",
"tests/mechanic/telemetry_test.py::IndexStatsTests::test_index_stats_are_per_lap",
"tests/mechanic/telemetry_test.py::IndexStatsTests::test_stores_available_index_stats",
"tests/mechanic/telemetry_test.py::IndexSizeTests::test_stores_index_size_for_data_paths",
"tests/mechanic/telemetry_test.py::IndexSizeTests::test_stores_nothing_if_no_data_path",
"tests/track/params_test.py::SliceTests::test_slice_with_slice_larger_than_source",
"tests/track/params_test.py::SliceTests::test_slice_with_source_larger_than_slice",
"tests/track/params_test.py::ConflictingIdsBuilderTests::test_no_id_conflicts",
"tests/track/params_test.py::ConflictingIdsBuilderTests::test_random_conflicts",
"tests/track/params_test.py::ConflictingIdsBuilderTests::test_sequential_conflicts",
"tests/track/params_test.py::IndexDataReaderTests::test_read_bulks_and_assume_metadata_line_in_source_file",
"tests/track/params_test.py::InvocationGeneratorTests::test_build_conflicting_ids",
"tests/track/params_test.py::InvocationGeneratorTests::test_calculate_bounds",
"tests/track/params_test.py::InvocationGeneratorTests::test_calculate_non_multiple_bounds",
"tests/track/params_test.py::InvocationGeneratorTests::test_calculate_number_of_bulks",
"tests/track/params_test.py::InvocationGeneratorTests::test_iterator_chaining_respects_context_manager",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_valid_param_source",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_fraction_larger_batch_size",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_fraction_smaller_batch_size",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_ingest_percentage_not_numeric",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_ingest_percentage_too_high",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_ingest_percentage_too_low",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_metadata_in_source_file_but_conflicts",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_negative_bulk_size",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_non_numeric_bulk_size",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_with_unknown_id_conflicts",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_create_without_params",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_filters_corpora",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_ingests_all_documents_by_default",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_passes_all_corpora_by_default",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_raises_exception_if_no_corpus_matches",
"tests/track/params_test.py::BulkIndexParamSourceTests::test_restricts_number_of_bulks_if_required",
"tests/track/params_test.py::ParamsRegistrationTests::test_can_register_class_as_param_source",
"tests/track/params_test.py::ParamsRegistrationTests::test_can_register_function_as_param_source",
"tests/track/params_test.py::ParamsRegistrationTests::test_can_register_legacy_class_as_param_source",
"tests/track/params_test.py::ParamsRegistrationTests::test_can_register_legacy_function_as_param_source",
"tests/track/params_test.py::CreateIndexParamSourceTests::test_create_index_from_track_with_settings",
"tests/track/params_test.py::CreateIndexParamSourceTests::test_create_index_from_track_without_settings",
"tests/track/params_test.py::CreateIndexParamSourceTests::test_create_index_inline_with_body",
"tests/track/params_test.py::CreateIndexParamSourceTests::test_create_index_inline_without_body",
"tests/track/params_test.py::CreateIndexParamSourceTests::test_filter_index",
"tests/track/params_test.py::DeleteIndexParamSourceTests::test_delete_index_by_name",
"tests/track/params_test.py::DeleteIndexParamSourceTests::test_delete_index_from_track",
"tests/track/params_test.py::DeleteIndexParamSourceTests::test_delete_no_index",
"tests/track/params_test.py::DeleteIndexParamSourceTests::test_filter_index_from_track",
"tests/track/params_test.py::CreateIndexTemplateParamSourceTests::test_create_index_template_from_track",
"tests/track/params_test.py::CreateIndexTemplateParamSourceTests::test_create_index_template_inline",
"tests/track/params_test.py::DeleteIndexTemplateParamSourceTests::test_delete_index_template_by_name",
"tests/track/params_test.py::DeleteIndexTemplateParamSourceTests::test_delete_index_template_by_name_and_matching_indices",
"tests/track/params_test.py::DeleteIndexTemplateParamSourceTests::test_delete_index_template_by_name_and_matching_indices_missing_index_pattern",
"tests/track/params_test.py::DeleteIndexTemplateParamSourceTests::test_delete_index_template_from_track",
"tests/track/params_test.py::SearchParamSourceTests::test_passes_request_parameters",
"tests/track/params_test.py::SearchParamSourceTests::test_replaces_body_params",
"tests/track/params_test.py::SearchParamSourceTests::test_user_specified_overrides_defaults"
]
| []
| Apache License 2.0 | 2,429 | [
"esrally/mechanic/mechanic.py",
"esrally/mechanic/provisioner.py",
"esrally/mechanic/team.py",
"esrally/track/params.py",
"docs/track.rst",
"esrally/mechanic/launcher.py",
"docs/telemetry.rst",
"esrally/mechanic/telemetry.py"
]
| [
"esrally/mechanic/mechanic.py",
"esrally/mechanic/provisioner.py",
"esrally/mechanic/team.py",
"esrally/track/params.py",
"docs/track.rst",
"esrally/mechanic/launcher.py",
"docs/telemetry.rst",
"esrally/mechanic/telemetry.py"
]
|
|
python-cmd2__cmd2-365 | 09b22c56266aad307744372a0dca8b57f43162bd | 2018-04-20 16:21:27 | 8f88f819fae7508066a81a8d961a7115f2ec4bed | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8c0c1601..d382fc75 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
## 0.8.6 (TBD)
* Bug Fixes
- * TBD
+ * Commands using the @with_argparser_and_unknown_args were not correctly recognized when tab completing help
## 0.8.5 (April 15, 2018)
* Bug Fixes
diff --git a/cmd2.py b/cmd2.py
index ec07510e..4c91a6a5 100755
--- a/cmd2.py
+++ b/cmd2.py
@@ -420,8 +420,18 @@ def with_argparser_and_unknown_args(argparser):
# If there are subcommands, store their names in a list to support tab-completion of subcommand names
if argparser._subparsers is not None:
- subcommand_names = argparser._subparsers._group_actions[0]._name_parser_map.keys()
- cmd_wrapper.__dict__['subcommand_names'] = subcommand_names
+ # Key is subcommand name and value is completer function
+ subcommands = collections.OrderedDict()
+
+ # Get all subcommands and check if they have completer functions
+ for name, parser in argparser._subparsers._group_actions[0]._name_parser_map.items():
+ if 'completer' in parser._defaults:
+ completer = parser._defaults['completer']
+ else:
+ completer = None
+ subcommands[name] = completer
+
+ cmd_wrapper.__dict__['subcommands'] = subcommands
return cmd_wrapper
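The patched decorator above relies on two argparse internals: `argparser._subparsers._group_actions[0]._name_parser_map` for the subcommand-name-to-parser mapping, and each subparser's `_defaults` dict for anything registered via `set_defaults(completer=...)`. A standalone sketch of that lookup — the helper name is hypothetical, but the private attributes match what the patch itself uses:

```python
import argparse
import collections

def extract_subcommands(argparser):
    """Map each subcommand name to its completer function, or None."""
    subcommands = collections.OrderedDict()
    if argparser._subparsers is not None:
        name_parser_map = argparser._subparsers._group_actions[0]._name_parser_map
        for name, parser in name_parser_map.items():
            # set_defaults(completer=...) lands in the subparser's _defaults dict
            subcommands[name] = parser._defaults.get('completer')
    return subcommands
```

The help completer can then offer `subcommands.keys()` as candidates and delegate argument completion to the stored function when one exists.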
| Backport bug in help completion handling of commands using argparser_with_unknown_args
During autocompleter development I found a bug in the handling of help completion that affects the 0.8.x line, so I'm going to port those changes back to the python2 branch. | python-cmd2/cmd2 | diff --git a/tests/conftest.py b/tests/conftest.py

index 837e7504..1433d425 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -8,9 +8,24 @@ Released under MIT license, see LICENSE file
import sys
from pytest import fixture
+try:
+ from unittest import mock
+except ImportError:
+ import mock
import cmd2
+# Prefer statically linked gnureadline if available (for macOS compatibility due to issues with libedit)
+try:
+ import gnureadline as readline
+except ImportError:
+ # Try to import readline, but allow failure for convenience in Windows unit testing
+ # Note: If this actually fails, you should install readline on Linux or Mac or pyreadline on Windows
+ try:
+ # noinspection PyUnresolvedReferences
+ import readline
+ except ImportError:
+ pass
# Help text for base cmd2.Cmd application
BASE_HELP = """Documented commands (type help <topic>):
@@ -141,3 +156,38 @@ def base_app():
c = cmd2.Cmd()
c.stdout = StdOut()
return c
+
+
+def complete_tester(text, line, begidx, endidx, app):
+ """
+    This is a convenience function to test cmd2.complete(), since in a
+    unit test environment there is no actual console for readline to
+    monitor. Therefore we use mock to provide readline data to
+    complete().
+
+ :param text: str - the string prefix we are attempting to match
+ :param line: str - the current input line with leading whitespace removed
+ :param begidx: int - the beginning index of the prefix text
+ :param endidx: int - the ending index of the prefix text
+ :param app: the cmd2 app that will run completions
+ :return: The first matched string or None if there are no matches
+ Matches are stored in app.completion_matches
+ These matches also have been sorted by complete()
+ """
+ def get_line():
+ return line
+
+ def get_begidx():
+ return begidx
+
+ def get_endidx():
+ return endidx
+
+ first_match = None
+ with mock.patch.object(readline, 'get_line_buffer', get_line):
+ with mock.patch.object(readline, 'get_begidx', get_begidx):
+ with mock.patch.object(readline, 'get_endidx', get_endidx):
+ # Run the readline tab-completion function with readline mocks in place
+ first_match = app.complete(text, 0)
+
+ return first_match
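The fixture above works by patching the three module-level readline functions that the completer consults. The same `mock.patch.object` pattern can be sketched against a stand-in namespace instead of the real readline module (which may be absent on Windows); `fake_readline` and the toy completer below are illustrative assumptions, not cmd2 API:

```python
import types
from unittest import mock

# Stand-in for the readline module: the three functions complete() reads.
fake_readline = types.SimpleNamespace(
    get_line_buffer=lambda: '',
    get_begidx=lambda: 0,
    get_endidx=lambda: 0,
)

def complete_under_mocks(text, line, begidx, endidx, completer):
    """Run completer(text, 0) with fabricated readline state, the way
    conftest.complete_tester does with the real readline module."""
    with mock.patch.object(fake_readline, 'get_line_buffer', lambda: line), \
         mock.patch.object(fake_readline, 'get_begidx', lambda: begidx), \
         mock.patch.object(fake_readline, 'get_endidx', lambda: endidx):
        # Inside the context the completer sees the fabricated line state.
        assert fake_readline.get_line_buffer() == line
        return completer(text, 0)

first_match = complete_under_mocks(
    'he', 'he', 0, 2,
    lambda text, state: 'help ' if state == 0 else None)
```

On exiting the `with` block, `mock.patch.object` restores the original attributes, so each test sees a clean readline stand-in.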
diff --git a/tests/test_completion.py b/tests/test_completion.py
index b102bc0a..839e1de2 100644
--- a/tests/test_completion.py
+++ b/tests/test_completion.py
@@ -13,21 +13,8 @@ import os
import sys
import cmd2
-import mock
import pytest
-
-# Prefer statically linked gnureadline if available (for macOS compatibility due to issues with libedit)
-try:
- import gnureadline as readline
-except ImportError:
- # Try to import readline, but allow failure for convenience in Windows unit testing
- # Note: If this actually fails, you should install readline on Linux or Mac or pyreadline on Windows
- try:
- # noinspection PyUnresolvedReferences
- import readline
- except ImportError:
- pass
-
+from conftest import complete_tester
# List of strings used with completion functions
food_item_strs = ['Pizza', 'Ham', 'Ham Sandwich', 'Potato']
@@ -87,41 +74,6 @@ def cmd2_app():
return c
-def complete_tester(text, line, begidx, endidx, app):
- """
- This is a convenience function to test cmd2.complete() since
- in a unit test environment there is no actual console readline
- is monitoring. Therefore we use mock to provide readline data
- to complete().
-
- :param text: str - the string prefix we are attempting to match
- :param line: str - the current input line with leading whitespace removed
- :param begidx: int - the beginning index of the prefix text
- :param endidx: int - the ending index of the prefix text
- :param app: the cmd2 app that will run completions
- :return: The first matched string or None if there are no matches
- Matches are stored in app.completion_matches
- These matches also have been sorted by complete()
- """
- def get_line():
- return line
-
- def get_begidx():
- return begidx
-
- def get_endidx():
- return endidx
-
- first_match = None
- with mock.patch.object(readline, 'get_line_buffer', get_line):
- with mock.patch.object(readline, 'get_begidx', get_begidx):
- with mock.patch.object(readline, 'get_endidx', get_endidx):
- # Run the readline tab-completion function with readline mocks in place
- first_match = app.complete(text, 0)
-
- return first_match
-
-
def test_cmd2_command_completion_single(cmd2_app):
text = 'he'
line = text
@@ -911,6 +863,7 @@ def test_subcommand_tab_completion(sc_app):
# It is at end of line, so extra space is present
assert first_match is not None and sc_app.completion_matches == ['Football ']
+
def test_subcommand_tab_completion_with_no_completer(sc_app):
# This tests what happens when a subcommand has no completer
# In this case, the foo subcommand has no completer defined
@@ -922,6 +875,7 @@ def test_subcommand_tab_completion_with_no_completer(sc_app):
first_match = complete_tester(text, line, begidx, endidx, sc_app)
assert first_match is None
+
def test_subcommand_tab_completion_space_in_text(sc_app):
text = 'B'
line = 'base sport "Space {}'.format(text)
@@ -934,6 +888,179 @@ def test_subcommand_tab_completion_space_in_text(sc_app):
sc_app.completion_matches == ['Ball" '] and \
sc_app.display_matches == ['Space Ball']
+####################################################
+
+
+class SubcommandsWithUnknownExample(cmd2.Cmd):
+ """
+    Example cmd2 application where we have a base command with a couple of
+    subcommands, and the "sport" subcommand has tab completion enabled.
+ """
+
+ def __init__(self):
+ cmd2.Cmd.__init__(self)
+
+ # subcommand functions for the base command
+ def base_foo(self, args):
+ """foo subcommand of base command"""
+ self.poutput(args.x * args.y)
+
+ def base_bar(self, args):
+ """bar subcommand of base command"""
+ self.poutput('((%s))' % args.z)
+
+ def base_sport(self, args):
+ """sport subcommand of base command"""
+ self.poutput('Sport is {}'.format(args.sport))
+
+ # noinspection PyUnusedLocal
+ def complete_base_sport(self, text, line, begidx, endidx):
+ """ Adds tab completion to base sport subcommand """
+ index_dict = {1: sport_item_strs}
+ return self.index_based_complete(text, line, begidx, endidx, index_dict)
+
+ # create the top-level parser for the base command
+ base_parser = argparse.ArgumentParser(prog='base')
+ base_subparsers = base_parser.add_subparsers(title='subcommands', help='subcommand help')
+
+ # create the parser for the "foo" subcommand
+ parser_foo = base_subparsers.add_parser('foo', help='foo help')
+ parser_foo.add_argument('-x', type=int, default=1, help='integer')
+ parser_foo.add_argument('y', type=float, help='float')
+ parser_foo.set_defaults(func=base_foo)
+
+ # create the parser for the "bar" subcommand
+ parser_bar = base_subparsers.add_parser('bar', help='bar help')
+ parser_bar.add_argument('z', help='string')
+ parser_bar.set_defaults(func=base_bar)
+
+ # create the parser for the "sport" subcommand
+ parser_sport = base_subparsers.add_parser('sport', help='sport help')
+ parser_sport.add_argument('sport', help='Enter name of a sport')
+
+ # Set both a function and tab completer for the "sport" subcommand
+ parser_sport.set_defaults(func=base_sport, completer=complete_base_sport)
+
+ @cmd2.with_argparser_and_unknown_args(base_parser)
+ def do_base(self, args):
+ """Base command help"""
+ func = getattr(args, 'func', None)
+ if func is not None:
+ # Call whatever subcommand function was selected
+ func(self, args)
+ else:
+ # No subcommand was provided, so call help
+ self.do_help('base')
+
+ # Enable tab completion of base to make sure the subcommands' completers get called.
+ complete_base = cmd2.Cmd.cmd_with_subs_completer
+
+
[email protected]
+def scu_app():
+ """Declare test fixture for with_argparser_and_unknown_args"""
+ app = SubcommandsWithUnknownExample()
+ return app
+
+
+def test_cmd2_subcmd_with_unknown_completion_single_end(scu_app):
+ text = 'f'
+ line = 'base {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+
+ # It is at end of line, so extra space is present
+ assert first_match is not None and scu_app.completion_matches == ['foo ']
+
+
+def test_cmd2_subcmd_with_unknown_completion_multiple(scu_app):
+ text = ''
+ line = 'base {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+ assert first_match is not None and scu_app.completion_matches == ['bar', 'foo', 'sport']
+
+
+def test_cmd2_subcmd_with_unknown_completion_nomatch(scu_app):
+ text = 'z'
+ line = 'base {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+ assert first_match is None
+
+
+def test_cmd2_help_subcommand_completion_single(scu_app):
+ text = 'base'
+ line = 'help {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+ assert scu_app.complete_help(text, line, begidx, endidx) == ['base']
+
+
+def test_cmd2_help_subcommand_completion_multiple(scu_app):
+ text = ''
+ line = 'help base {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ matches = sorted(scu_app.complete_help(text, line, begidx, endidx))
+ assert matches == ['bar', 'foo', 'sport']
+
+
+def test_cmd2_help_subcommand_completion_nomatch(scu_app):
+ text = 'z'
+ line = 'help base {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+ assert scu_app.complete_help(text, line, begidx, endidx) == []
+
+
+def test_subcommand_tab_completion(scu_app):
+ # This makes sure the correct completer for the sport subcommand is called
+ text = 'Foot'
+ line = 'base sport {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+
+ # It is at end of line, so extra space is present
+ assert first_match is not None and scu_app.completion_matches == ['Football ']
+
+
+def test_subcommand_tab_completion_with_no_completer(scu_app):
+ # This tests what happens when a subcommand has no completer
+ # In this case, the foo subcommand has no completer defined
+ text = 'Foot'
+ line = 'base foo {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+ assert first_match is None
+
+
+def test_subcommand_tab_completion_space_in_text(scu_app):
+ text = 'B'
+ line = 'base sport "Space {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, scu_app)
+
+ assert first_match is not None and \
+ scu_app.completion_matches == ['Ball" '] and \
+ scu_app.display_matches == ['Space Ball']
+
+####################################################
+
+
class SecondLevel(cmd2.Cmd):
"""To be used as a second level command class. """
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 2
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-xdist"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/python-cmd2/cmd2.git@09b22c56266aad307744372a0dca8b57f43162bd#egg=cmd2
exceptiongroup==1.2.2
execnet==2.1.1
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
pyparsing==3.2.3
pyperclip==1.9.0
pytest==8.3.5
pytest-xdist==3.6.1
six==1.17.0
tomli==2.2.1
wcwidth==0.2.13
| name: cmd2
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- execnet==2.1.1
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pyparsing==3.2.3
- pyperclip==1.9.0
- pytest==8.3.5
- pytest-xdist==3.6.1
- six==1.17.0
- tomli==2.2.1
- wcwidth==0.2.13
prefix: /opt/conda/envs/cmd2
| [
"tests/test_completion.py::test_cmd2_help_subcommand_completion_multiple",
"tests/test_completion.py::test_cmd2_help_subcommand_completion_nomatch",
"tests/test_completion.py::test_subcommand_tab_completion",
"tests/test_completion.py::test_subcommand_tab_completion_space_in_text",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_single_end",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_multiple"
]
| []
| [
"tests/test_completion.py::test_cmd2_command_completion_single",
"tests/test_completion.py::test_complete_command_single",
"tests/test_completion.py::test_complete_empty_arg",
"tests/test_completion.py::test_complete_bogus_command",
"tests/test_completion.py::test_cmd2_command_completion_multiple",
"tests/test_completion.py::test_cmd2_command_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_completion_single",
"tests/test_completion.py::test_cmd2_help_completion_multiple",
"tests/test_completion.py::test_cmd2_help_completion_nomatch",
"tests/test_completion.py::test_shell_command_completion_shortcut",
"tests/test_completion.py::test_shell_command_completion_doesnt_match_wildcards",
"tests/test_completion.py::test_shell_command_completion_multiple",
"tests/test_completion.py::test_shell_command_completion_nomatch",
"tests/test_completion.py::test_shell_command_completion_doesnt_complete_when_just_shell",
"tests/test_completion.py::test_shell_command_completion_does_path_completion_when_after_command",
"tests/test_completion.py::test_path_completion_single_end",
"tests/test_completion.py::test_path_completion_multiple",
"tests/test_completion.py::test_path_completion_nomatch",
"tests/test_completion.py::test_default_to_shell_completion",
"tests/test_completion.py::test_path_completion_cwd",
"tests/test_completion.py::test_path_completion_doesnt_match_wildcards",
"tests/test_completion.py::test_path_completion_expand_user_dir",
"tests/test_completion.py::test_path_completion_user_expansion",
"tests/test_completion.py::test_path_completion_directories_only",
"tests/test_completion.py::test_basic_completion_single",
"tests/test_completion.py::test_basic_completion_multiple",
"tests/test_completion.py::test_basic_completion_nomatch",
"tests/test_completion.py::test_delimiter_completion",
"tests/test_completion.py::test_flag_based_completion_single",
"tests/test_completion.py::test_flag_based_completion_multiple",
"tests/test_completion.py::test_flag_based_completion_nomatch",
"tests/test_completion.py::test_flag_based_default_completer",
"tests/test_completion.py::test_flag_based_callable_completer",
"tests/test_completion.py::test_index_based_completion_single",
"tests/test_completion.py::test_index_based_completion_multiple",
"tests/test_completion.py::test_index_based_completion_nomatch",
"tests/test_completion.py::test_index_based_default_completer",
"tests/test_completion.py::test_index_based_callable_completer",
"tests/test_completion.py::test_tokens_for_completion_quoted",
"tests/test_completion.py::test_tokens_for_completion_unclosed_quote",
"tests/test_completion.py::test_tokens_for_completion_redirect",
"tests/test_completion.py::test_tokens_for_completion_quoted_redirect",
"tests/test_completion.py::test_tokens_for_completion_redirect_off",
"tests/test_completion.py::test_parseline_command_and_args",
"tests/test_completion.py::test_parseline_emptyline",
"tests/test_completion.py::test_parseline_strips_line",
"tests/test_completion.py::test_parseline_expands_alias",
"tests/test_completion.py::test_parseline_expands_shortcuts",
"tests/test_completion.py::test_add_opening_quote_basic_no_text",
"tests/test_completion.py::test_add_opening_quote_basic_nothing_added",
"tests/test_completion.py::test_add_opening_quote_basic_quote_added",
"tests/test_completion.py::test_add_opening_quote_basic_text_is_common_prefix",
"tests/test_completion.py::test_add_opening_quote_delimited_no_text",
"tests/test_completion.py::test_add_opening_quote_delimited_nothing_added",
"tests/test_completion.py::test_add_opening_quote_delimited_quote_added",
"tests/test_completion.py::test_add_opening_quote_delimited_text_is_common_prefix",
"tests/test_completion.py::test_add_opening_quote_delimited_space_in_prefix",
"tests/test_completion.py::test_cmd2_subcommand_completion_single_end",
"tests/test_completion.py::test_cmd2_subcommand_completion_multiple",
"tests/test_completion.py::test_cmd2_subcommand_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_subcommand_completion_single",
"tests/test_completion.py::test_subcommand_tab_completion_with_no_completer",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_nomatch",
"tests/test_completion.py::test_cmd2_submenu_completion_single_end",
"tests/test_completion.py::test_cmd2_submenu_completion_multiple",
"tests/test_completion.py::test_cmd2_submenu_completion_nomatch",
"tests/test_completion.py::test_cmd2_submenu_completion_after_submenu_match",
"tests/test_completion.py::test_cmd2_submenu_completion_after_submenu_nomatch",
"tests/test_completion.py::test_cmd2_help_submenu_completion_multiple",
"tests/test_completion.py::test_cmd2_help_submenu_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_submenu_completion_subcommands"
]
| []
| MIT License | 2,430 | [
"CHANGELOG.md",
"cmd2.py"
]
| [
"CHANGELOG.md",
"cmd2.py"
]
|
|
dask__dask-3429 | a842d448b7dabd48f8ad23cba906f2502e6149a8 | 2018-04-20 17:55:24 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | diff --git a/.gitignore b/.gitignore
index cb1fc67ff..5b3080424 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
*.pyc
*.egg-info
+dask-worker-space/
docs/build
build/
dist/
diff --git a/.travis.yml b/.travis.yml
index 8ffe4f6ed..a1edcbfd9 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -35,7 +35,7 @@ jobs:
- env:
- PYTHON=3.4
- - NUMPY=1.10.4
+ - NUMPY=1.12.1
- PANDAS=0.19.1
- *test_and_lint
- *no_coverage
@@ -45,7 +45,7 @@ jobs:
- env:
- PYTHON=3.5
- - NUMPY=1.12.1
+ - NUMPY=1.11.0
- PANDAS=0.19.2
- *test_and_lint
- *no_coverage
diff --git a/dask/array/__init__.py b/dask/array/__init__.py
index 4e43f29b6..67c697118 100644
--- a/dask/array/__init__.py
+++ b/dask/array/__init__.py
@@ -6,7 +6,7 @@ from .core import (Array, block, concatenate, stack, from_array, store,
from_delayed, asarray, asanyarray,
broadcast_arrays, broadcast_to)
from .routines import (take, choose, argwhere, where, coarsen, insert,
- ravel, roll, unique, squeeze, topk, ptp, diff, ediff1d,
+ ravel, roll, unique, squeeze, ptp, diff, ediff1d,
bincount, digitize, histogram, cov, array, dstack,
vstack, hstack, compress, extract, round, count_nonzero,
flatnonzero, nonzero, around, isin, isnull, notnull,
@@ -37,7 +37,8 @@ from .reductions import (sum, prod, mean, std, var, any, all, min, max, vnorm,
argmin, argmax,
nansum, nanmean, nanstd, nanvar, nanmin,
nanmax, nanargmin, nanargmax,
- cumsum, cumprod)
+ cumsum, cumprod,
+ topk, argtopk)
from .percentile import percentile
with ignoring(ImportError):
from .reductions import nanprod, nancumprod, nancumsum
diff --git a/dask/array/chunk.py b/dask/array/chunk.py
index d38fe660b..2879f38e5 100644
--- a/dask/array/chunk.py
+++ b/dask/array/chunk.py
@@ -61,15 +61,14 @@ nanargmax = keepdims_wrapper(np.nanargmax)
any = keepdims_wrapper(np.any)
all = keepdims_wrapper(np.all)
nansum = keepdims_wrapper(np.nansum)
+nanprod = keepdims_wrapper(np.nanprod)
try:
- from numpy import nanprod, nancumprod, nancumsum
+ from numpy import nancumprod, nancumsum
except ImportError: # pragma: no cover
- nanprod = npcompat.nanprod
nancumprod = npcompat.nancumprod
nancumsum = npcompat.nancumsum
-nanprod = keepdims_wrapper(nanprod)
nancumprod = keepdims_wrapper(nancumprod)
nancumsum = keepdims_wrapper(nancumsum)
@@ -176,16 +175,48 @@ except ImportError: # pragma: no cover
broadcast_to = npcompat.broadcast_to
-def topk(k, x):
- """ Top k elements of an array
-
- >>> topk(2, np.array([5, 1, 3, 6]))
- array([6, 5])
+def topk(a, k, axis, keepdims):
+ """Kernel of topk and argtopk.
+ Extract the k largest elements from a on the given axis.
+ If k is negative, extract the -k smallest elements instead.
+ Note that, unlike in the parent function, the returned elements
+ are not sorted internally.
+ """
+ axis = axis[0]
+ if abs(k) >= a.shape[axis]:
+ return a
+ a = np.partition(a, -k, axis=axis)
+ # return a[-k:] if k>0 else a[:-k], on arbitrary axis
+ return a[[
+ (slice(-k, None) if k > 0 else slice(None, -k))
+ if i == axis else slice(None)
+ for i in range(a.ndim)
+ ]]
+
+
+def topk_postprocess(a, k, axis):
+ """Kernel of topk and argtopk.
+ Post-processes the output of topk, sorting the results internally.
+ """
+ a = np.sort(a, axis=axis)
+ if k > 0:
+ # a = a[::-1] on arbitrary axis
+ a = a[[
+ slice(None, None, -1) if i == axis else slice(None)
+ for i in range(a.ndim)
+ ]]
+ return a
+
+
+def argtopk_preprocess(a, idx):
+ """Kernel of argtopk.
+ Preprocess data, by putting it together with its indexes in a recarray
"""
- # http://stackoverflow.com/a/23734295/616616 by larsmans
- k = np.minimum(k, len(x))
- ind = np.argpartition(x, -k)[-k:]
- return np.sort(x[ind])[::-1]
+ # np.core.records.fromarrays won't work if a and idx don't have the same shape
+ res = np.recarray(a.shape, dtype=[('values', a.dtype), ('idx', idx.dtype)])
+ res.values = a
+ res.idx = idx
+ return res
def arange(start, stop, step, length, dtype):
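The partition-then-sort strategy of the two kernels above can be exercised standalone; the sketch below mirrors `chunk.topk` plus `chunk.topk_postprocess` for the 1-D case only (the dask kernels generalize the slicing to an arbitrary axis), using the same `np.partition(a, -k)` trick where a positive `k` selects the k largest and a negative `k` the -k smallest:

```python
import numpy as np

def topk_1d(a, k):
    """Top-k of a 1-D array. Positive k: k largest, descending.
    Negative k: -k smallest, ascending."""
    if abs(k) < a.shape[0]:
        # np.partition with kth=-k puts the k largest after index -k
        # (for k > 0) and the -k smallest before it (for k < 0).
        a = np.partition(a, -k)
    a = a[-k:] if k > 0 else a[:-k]
    a = np.sort(a)
    return a[::-1] if k > 0 else a

x = np.array([5, 1, 3, 6])
topk_1d(x, 2)    # two largest, descending: [6, 5]
topk_1d(x, -2)   # two smallest, ascending: [1, 3]
```

As in the kernel, the partition step is skipped when `abs(k)` covers the whole chunk, and the sorting is deferred to the post-processing step so that intermediate tree-reduction results stay unsorted.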
diff --git a/dask/array/core.py b/dask/array/core.py
index b2c44a4e0..4f0c58e25 100644
--- a/dask/array/core.py
+++ b/dask/array/core.py
@@ -1401,12 +1401,19 @@ class Array(DaskMethodsMixin):
shape = shape[0]
return reshape(self, shape)
- def topk(self, k):
+ def topk(self, k, axis=-1, split_every=None):
"""The top k elements of an array.
See ``da.topk`` for docstring"""
- from .routines import topk
- return topk(k, self)
+ from .reductions import topk
+ return topk(self, k, axis=axis, split_every=split_every)
+
+ def argtopk(self, k, axis=-1, split_every=None):
+ """The indices of the top k elements of an array.
+
+ See ``da.argtopk`` for docstring"""
+ from .reductions import argtopk
+ return argtopk(self, k, axis=axis, split_every=split_every)
def astype(self, dtype, **kwargs):
"""Copy of the array, cast to a specified type.
diff --git a/dask/array/einsumfuncs.py b/dask/array/einsumfuncs.py
index f896610f3..483335a1d 100644
--- a/dask/array/einsumfuncs.py
+++ b/dask/array/einsumfuncs.py
@@ -193,7 +193,7 @@ def _einsum_kernel(*operands, **kwargs):
return chunk.reshape(chunk.shape + (1,) * ncontract_inds)
-_einsum_can_optimize = LooseVersion(np.__version__) >= LooseVersion("1.12.0")
+einsum_can_optimize = LooseVersion(np.__version__) >= LooseVersion("1.12.0")
@wraps(np.einsum)
@@ -212,7 +212,7 @@ def einsum(*operands, **kwargs):
if optimize is None:
optimize = False
- if _einsum_can_optimize and optimize is not False:
+ if einsum_can_optimize and optimize is not False:
# Avoid computation of dask arrays within np.einsum_path
# by passing in small numpy arrays broadcasted
# up to the right shape
@@ -234,7 +234,7 @@ def einsum(*operands, **kwargs):
kwargs['kernel_dtype'] = einsum_dtype
kwargs['ncontract_inds'] = ncontract_inds
- if _einsum_can_optimize:
+ if einsum_can_optimize:
kwargs['optimize'] = optimize
# Update kwargs with atop parameters
diff --git a/dask/array/ma.py b/dask/array/ma.py
index f656eaf97..67186e1ce 100644
--- a/dask/array/ma.py
+++ b/dask/array/ma.py
@@ -10,8 +10,8 @@ from .core import (concatenate_lookup, tensordot_lookup, map_blocks,
asanyarray, atop)
-if LooseVersion(np.__version__) < '1.11.0':
- raise ImportError("dask.array.ma requires numpy >= 1.11.0")
+if LooseVersion(np.__version__) < '1.11.2':
+ raise ImportError("dask.array.ma requires numpy >= 1.11.2")
@normalize_token.register(np.ma.masked_array)
diff --git a/dask/array/numpy_compat.py b/dask/array/numpy_compat.py
index 8f27e5c91..6e1999e84 100644
--- a/dask/array/numpy_compat.py
+++ b/dask/array/numpy_compat.py
@@ -2,29 +2,9 @@ from __future__ import absolute_import, division, print_function
from distutils.version import LooseVersion
-from ..compatibility import builtins
import numpy as np
import warnings
-try:
- isclose = np.isclose
-except AttributeError:
- def isclose(*args, **kwargs):
- raise RuntimeError("You need numpy version 1.7 or greater to use "
- "isclose.")
-
-try:
- full = np.full
-except AttributeError:
- def full(shape, fill_value, dtype=None, order=None):
- """Our implementation of numpy.full because your numpy is old."""
- if order is not None:
- raise NotImplementedError("`order` kwarg is not supported upgrade "
- "to Numpy 1.8 or greater for support.")
- return np.multiply(fill_value, np.ones(shape, dtype=dtype),
- dtype=dtype)
-
-
# Taken from scikit-learn:
# https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/fixes.py#L84
try:
@@ -54,79 +34,12 @@ except TypeError:
np.ma.core._DomainSafeDivide(),
0, 1)
-# functions copied from numpy
-try:
- from numpy import broadcast_to, nanprod, nancumsum, nancumprod
-except ImportError: # pragma: no cover
- # these functions should arrive in numpy v1.10 to v1.12. Until then,
- # they are duplicated here
+if LooseVersion(np.__version__) < '1.12.0':
+ # These functions were added in numpy 1.12.0. For previous versions they
+ # are duplicated here
# See https://github.com/numpy/numpy/blob/master/LICENSE.txt
# or NUMPY_LICENSE.txt within this directory
- def _maybe_view_as_subclass(original_array, new_array):
- if type(original_array) is not type(new_array):
- # if input was an ndarray subclass and subclasses were OK,
- # then view the result as that subclass.
- new_array = new_array.view(type=type(original_array))
- # Since we have done something akin to a view from original_array, we
- # should let the subclass finalize (if it has it implemented, i.e., is
- # not None).
- if new_array.__array_finalize__:
- new_array.__array_finalize__(original_array)
- return new_array
-
- def _broadcast_to(array, shape, subok, readonly):
- shape = tuple(shape) if np.iterable(shape) else (shape,)
- array = np.array(array, copy=False, subok=subok)
- if not shape and array.shape:
- raise ValueError('cannot broadcast a non-scalar to a scalar array')
- if builtins.any(size < 0 for size in shape):
- raise ValueError('all elements of broadcast shape must be non-'
- 'negative')
- broadcast = np.nditer(
- (array,), flags=['multi_index', 'zerosize_ok', 'refs_ok'],
- op_flags=['readonly'], itershape=shape, order='C').itviews[0]
- result = _maybe_view_as_subclass(array, broadcast)
- if not readonly and array.flags.writeable:
- result.flags.writeable = True
- return result
-
- def broadcast_to(array, shape, subok=False):
- """Broadcast an array to a new shape.
-
- Parameters
- ----------
- array : array_like
- The array to broadcast.
- shape : tuple
- The shape of the desired array.
- subok : bool, optional
- If True, then sub-classes will be passed-through, otherwise
- the returned array will be forced to be a base-class array (default).
-
- Returns
- -------
- broadcast : array
- A readonly view on the original array with the given shape. It is
- typically not contiguous. Furthermore, more than one element of a
- broadcasted array may refer to a single memory location.
-
- Raises
- ------
- ValueError
- If the array is not compatible with the new shape according to NumPy's
- broadcasting rules.
-
- Examples
- --------
- >>> x = np.array([1, 2, 3])
- >>> np.broadcast_to(x, (3, 3)) # doctest: +SKIP
- array([[1, 2, 3],
- [1, 2, 3],
- [1, 2, 3]])
- """
- return _broadcast_to(array, shape, subok=subok, readonly=True)
-
def _replace_nan(a, val):
"""
If `a` is of inexact type, make a copy of `a`, replace NaNs with
@@ -168,75 +81,6 @@ except ImportError: # pragma: no cover
np.copyto(a, val, where=mask)
return a, mask
- def nanprod(a, axis=None, dtype=None, out=None, keepdims=0):
- """
- Return the product of array elements over a given axis treating Not a
- Numbers (NaNs) as zero.
-
- One is returned for slices that are all-NaN or empty.
-
- .. versionadded:: 1.10.0
-
- Parameters
- ----------
- a : array_like
- Array containing numbers whose sum is desired. If `a` is not an
- array, a conversion is attempted.
- axis : int, optional
- Axis along which the product is computed. The default is to compute
- the product of the flattened array.
- dtype : data-type, optional
- The type of the returned array and of the accumulator in which the
- elements are summed. By default, the dtype of `a` is used. An
- exception is when `a` has an integer type with less precision than
- the platform (u)intp. In that case, the default will be either
- (u)int32 or (u)int64 depending on whether the platform is 32 or 64
- bits. For inexact inputs, dtype must be inexact.
- out : ndarray, optional
- Alternate output array in which to place the result. The default
- is ``None``. If provided, it must have the same shape as the
- expected output, but the type will be cast if necessary. See
- `doc.ufuncs` for details. The casting of NaN to integer can yield
- unexpected results.
- keepdims : bool, optional
- If True, the axes which are reduced are left in the result as
- dimensions with size one. With this option, the result will
- broadcast correctly against the original `arr`.
-
- Returns
- -------
- y : ndarray or numpy scalar
-
- See Also
- --------
- :func:`numpy.prod` : Product across array propagating NaNs.
- isnan : Show which elements are NaN.
-
- Notes
- -----
- Numpy integer arithmetic is modular. If the size of a product exceeds
- the size of an integer accumulator, its value will wrap around and the
- result will be incorrect. Specifying ``dtype=double`` can alleviate
- that problem.
-
- Examples
- --------
- >>> np.nanprod(1)
- 1
- >>> np.nanprod([1])
- 1
- >>> np.nanprod([1, np.nan])
- 1.0
- >>> a = np.array([[1, 2], [3, np.nan]])
- >>> np.nanprod(a)
- 6.0
- >>> np.nanprod(a, axis=0)
- array([ 3., 2.])
-
- """
- a, mask = _replace_nan(a, 1)
- return np.prod(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims)
-
def nancumsum(a, axis=None, dtype=None, out=None):
"""
Return the cumulative sum of array elements over a given axis treating Not a
diff --git a/dask/array/reductions.py b/dask/array/reductions.py
index 8598a3519..29e6a9782 100644
--- a/dask/array/reductions.py
+++ b/dask/array/reductions.py
@@ -12,6 +12,7 @@ from toolz import compose, partition_all, get, accumulate, pluck
from . import chunk
from .core import _concatenate2, Array, atop, lol_tuples, handle_out
+from .creation import arange
from .ufunc import sqrt
from .wrap import zeros, ones
from .numpy_compat import ma_divide, divide as np_divide
@@ -722,3 +723,73 @@ def validate_axis(ndim, axis):
return axis + ndim
else:
return axis
+
+
+def topk(a, k, axis=-1, split_every=None):
+ """Extract the k largest elements from a on the given axis,
+ and return them sorted from largest to smallest.
+ If k is negative, extract the -k smallest elements instead,
+ and return them sorted from smallest to largest.
+
+ This assumes that ``k`` is small. All results will be returned in a single
+ chunk along the given axis.
+
+ Examples
+ --------
+ >>> import dask.array as da
+ >>> x = np.array([5, 1, 3, 6])
+ >>> d = da.from_array(x, chunks=2)
+ >>> d.topk(2).compute()
+ array([6, 5])
+ >>> d.topk(-2).compute()
+ array([1, 3])
+ """
+ if isinstance(a, int) and isinstance(k, Array):
+ warnings.warn("DeprecationWarning: topk(k, a) has been replaced with topk(a, k)")
+ a, k = k, a
+
+ axis = validate_axis(a.ndim, axis)
+
+ kernel = partial(chunk.topk, k=k)
+ res = reduction(a, kernel, kernel, axis=axis, keepdims=True,
+ dtype=a.dtype, split_every=split_every)
+ # reduction(keepdims=True) sets shape[axis] to 1. Fix it.
+ chunks = list(res.chunks)
+ chunks[axis] = (abs(k), )
+ res = Array(res.dask, res.name, chunks, res.dtype)
+
+ # Sort result internally
+ return res.map_blocks(chunk.topk_postprocess, k=k, axis=axis, dtype=a.dtype)
+
+
+def argtopk(a, k, axis=-1, split_every=None):
+ """Extract the indices of the k largest elements from a on the given axis,
+ and return them sorted from largest to smallest.
+ If k is negative, extract the indices of the -k smallest elements instead,
+ and return them sorted from smallest to largest.
+
+ This assumes that ``k`` is small. All results will be returned in a single
+ chunk along the given axis.
+
+ Examples
+ --------
+ >>> import dask.array as da
+ >>> x = np.array([5, 1, 3, 6])
+ >>> d = da.from_array(x, chunks=2)
+ >>> d.argtopk(2).compute()
+ array([3, 0])
+ >>> d.argtopk(-2).compute()
+ array([1, 2])
+ """
+ axis = validate_axis(a.ndim, axis)
+
+ # Convert a to a recarray that contains its index
+ idx = arange(a.shape[axis], chunks=a.chunks[axis], dtype=np.int64)
+ idx = idx[tuple(slice(None) if i == axis else np.newaxis for i in range(a.ndim))]
+ a_rec = a.map_blocks(chunk.argtopk_preprocess, idx,
+ dtype=[('a', a.dtype), ('idx', idx.dtype)])
+
+ res = topk(a_rec, k, axis=axis, split_every=split_every)
+
+ # Discard values
+ return res['idx']
diff --git a/dask/array/routines.py b/dask/array/routines.py
index 819daba9e..72e88433a 100644
--- a/dask/array/routines.py
+++ b/dask/array/routines.py
@@ -6,7 +6,6 @@ from collections import Iterable
from distutils.version import LooseVersion
from functools import wraps, partial
from numbers import Integral
-from operator import getitem
import numpy as np
from toolz import concat, sliding_window, interleave
@@ -14,7 +13,7 @@ from toolz import concat, sliding_window, interleave
from .. import sharedict
from ..core import flatten
from ..base import tokenize
-from . import numpy_compat, chunk
+from . import chunk
from .creation import arange
from .utils import safe_wraps
from .wrap import ones
@@ -881,38 +880,6 @@ def squeeze(a, axis=None):
return a[sl]
-def topk(k, x):
- """ The top k elements of an array
-
- Returns the k greatest elements of the array in sorted order. Only works
- on arrays of a single dimension.
-
- This assumes that ``k`` is small. All results will be returned in a single
- chunk.
-
- Examples
- --------
-
- >>> x = np.array([5, 1, 3, 6])
- >>> d = from_array(x, chunks=2)
- >>> d.topk(2).compute()
- array([6, 5])
- """
- if x.ndim != 1:
- raise ValueError("Topk only works on arrays of one dimension")
-
- token = tokenize(k, x)
- name = 'chunk.topk-' + token
- dsk = {(name, i): (chunk.topk, k, key)
- for i, key in enumerate(x.__dask_keys__())}
- name2 = 'topk-' + token
- dsk[(name2, 0)] = (getitem, (np.sort, (np.concatenate, list(dsk))),
- slice(-1, -k - 1, -1))
- chunks = ((k,),)
-
- return Array(sharedict.merge((name2, dsk), x.dask), name2, chunks, dtype=x.dtype)
-
-
@wraps(np.compress)
def compress(condition, a, axis=None):
if axis is None:
@@ -995,9 +962,9 @@ def notnull(values):
return ~isnull(values)
-@wraps(numpy_compat.isclose)
+@wraps(np.isclose)
def isclose(arr1, arr2, rtol=1e-5, atol=1e-8, equal_nan=False):
- func = partial(numpy_compat.isclose, rtol=rtol, atol=atol, equal_nan=equal_nan)
+ func = partial(np.isclose, rtol=rtol, atol=atol, equal_nan=equal_nan)
return elemwise(func, arr1, arr2, dtype='bool')
diff --git a/dask/array/utils.py b/dask/array/utils.py
index 7e35865f7..3ac9b43e2 100644
--- a/dask/array/utils.py
+++ b/dask/array/utils.py
@@ -1,6 +1,5 @@
from __future__ import absolute_import, division, print_function
-from distutils.version import LooseVersion
import difflib
import functools
import math
@@ -14,23 +13,9 @@ from ..local import get_sync
from ..sharedict import ShareDict
-if LooseVersion(np.__version__) >= '1.10.0':
- _allclose = np.allclose
-else:
- def _allclose(a, b, **kwargs):
- if kwargs.pop('equal_nan', False):
- a_nans = np.isnan(a)
- b_nans = np.isnan(b)
- if not (a_nans == b_nans).all():
- return False
- a = a[~a_nans]
- b = b[~b_nans]
- return np.allclose(a, b, **kwargs)
-
-
def allclose(a, b, equal_nan=False, **kwargs):
if getattr(a, 'dtype', None) != 'O':
- return _allclose(a, b, equal_nan=equal_nan, **kwargs)
+ return np.allclose(a, b, equal_nan=equal_nan, **kwargs)
if equal_nan:
return (a.shape == b.shape and
all(np.isnan(b) if np.isnan(a) else a == b
diff --git a/dask/array/wrap.py b/dask/array/wrap.py
index accee3d64..1306c781d 100644
--- a/dask/array/wrap.py
+++ b/dask/array/wrap.py
@@ -12,7 +12,6 @@ except ImportError:
from ..base import tokenize
from .core import Array, normalize_chunks
-from .numpy_compat import full
def wrap_func_shape_as_first_arg(func, *args, **kwargs):
@@ -68,4 +67,4 @@ w = wrap(wrap_func_shape_as_first_arg)
ones = w(np.ones, dtype='f8')
zeros = w(np.zeros, dtype='f8')
empty = w(np.empty, dtype='f8')
-full = w(full)
+full = w(np.full)
diff --git a/docs/source/array-api.rst b/docs/source/array-api.rst
index b253f28ec..f6fa498c4 100644
--- a/docs/source/array-api.rst
+++ b/docs/source/array-api.rst
@@ -22,6 +22,7 @@ Top level user functions:
arctanh
argmax
argmin
+ argtopk
argwhere
around
array
@@ -341,7 +342,6 @@ Other functions
.. autofunction:: from_array
.. autofunction:: from_delayed
.. autofunction:: store
-.. autofunction:: topk
.. autofunction:: coarsen
.. autofunction:: stack
.. autofunction:: concatenate
@@ -362,6 +362,7 @@ Other functions
.. autofunction:: arctanh
.. autofunction:: argmax
.. autofunction:: argmin
+.. autofunction:: argtopk
.. autofunction:: argwhere
.. autofunction:: around
.. autofunction:: array
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index d55f3a042..301db78af 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -12,6 +12,12 @@ Array
- Add ``piecewise`` for Dask Arrays (:pr:`3350`) `John A Kirkham`_
- Fix handling of ``nan`` in ``broadcast_shapes`` (:pr:`3356`) `John A Kirkham`_
- Add ``isin`` for dask arrays (:pr:`3363`). `Stephan Hoyer`_
+- Overhauled ``topk`` for Dask Arrays: faster algorithm, particularly for large k's; added support
+ for multiple axes, recursive aggregation, and an option to pick the bottom k elements instead.
+ (:pr:`3395`) `Guido Imperiale`_
+- The ``topk`` API has changed from topk(k, array) to the more conventional topk(array, k).
+ The legacy API still works but is now deprecated. (:pr:`2965`) `Guido Imperiale`_
+- New function ``argtopk`` for Dask Arrays (:pr:`3396`) `Guido Imperiale`_
DataFrame
+++++++++
@@ -1027,6 +1033,7 @@ Other
- There is also a gitter chat room and a stackoverflow tag
+.. _`Guido Imperiale`: https://github.com/crusaderky
.. _`John A Kirkham`: https://github.com/jakirkham
.. _`Matthew Rocklin`: https://github.com/mrocklin
.. _`Jim Crist`: https://github.com/jcrist
diff --git a/setup.py b/setup.py
index 75ccaa59f..8daeba180 100755
--- a/setup.py
+++ b/setup.py
@@ -8,9 +8,9 @@ import versioneer
# NOTE: These are tested in `continuous_integration/travis/test_imports.sh` If
# you modify these, make sure to change the corresponding line there.
extras_require = {
- 'array': ['numpy >= 1.10.4', 'toolz >= 0.7.3'],
+ 'array': ['numpy >= 1.11.0', 'toolz >= 0.7.3'],
'bag': ['cloudpickle >= 0.2.1', 'toolz >= 0.7.3', 'partd >= 0.3.8'],
- 'dataframe': ['numpy >= 1.10.4', 'pandas >= 0.19.0', 'toolz >= 0.7.3',
+ 'dataframe': ['numpy >= 1.11.0', 'pandas >= 0.19.0', 'toolz >= 0.7.3',
'partd >= 0.3.8', 'cloudpickle >= 0.2.1'],
'distributed': ['distributed >= 1.21'],
'delayed': ['toolz >= 0.7.3'],
| Dropping NumPy 1.10
Increasingly we discover issues with PRs after merging them on NumPy 1.10, which we then either need to fix ourselves or hope the submitter of the PR will kindly fix. While these issues can normally be solved by skipping, adding compat functions, etc., it starts to raise the question: how long do we want to support NumPy 1.10? As of January, NumPy 1.14 was released and has since had 2 patch releases. The list of NumPy 1.15 issues is not too long either (roughly two thirds have already been resolved). To me this is a pretty good case for dropping, but am interested to hear other thoughts.
If we do feel strongly about keeping NumPy 1.10, could we discuss adding it as part of the PR builds to cut down on the strain caused by the issues mentioned above? | dask/dask | diff --git a/dask/array/tests/test_array_core.py b/dask/array/tests/test_array_core.py
index adf24fe13..211a2b849 100644
--- a/dask/array/tests/test_array_core.py
+++ b/dask/array/tests/test_array_core.py
@@ -286,8 +286,6 @@ def test_stack_promote_type():
assert_eq(res, np.stack([i, f]))
[email protected](LooseVersion(np.__version__) < '1.10.0',
- reason="NumPy doesn't yet support stack")
def test_stack_rechunk():
x = da.random.random(10, chunks=5)
y = da.random.random(10, chunks=4)
diff --git a/dask/array/tests/test_linalg.py b/dask/array/tests/test_linalg.py
index eafd86144..18d9e5473 100644
--- a/dask/array/tests/test_linalg.py
+++ b/dask/array/tests/test_linalg.py
@@ -500,10 +500,6 @@ def test_norm_1dim(shape, chunks, axis, norm, keepdims):
a_r = np.linalg.norm(a, ord=norm, axis=axis, keepdims=keepdims)
d_r = da.linalg.norm(d, ord=norm, axis=axis, keepdims=keepdims)
-
- # Fix a type mismatch on NumPy 1.10.
- a_r = a_r.astype(float)
-
assert_eq(a_r, d_r)
diff --git a/dask/array/tests/test_reductions.py b/dask/array/tests/test_reductions.py
index 2dda9e1d5..ed6a2e6a0 100644
--- a/dask/array/tests/test_reductions.py
+++ b/dask/array/tests/test_reductions.py
@@ -1,21 +1,13 @@
from __future__ import absolute_import, division, print_function
import pytest
-pytest.importorskip('numpy')
+np = pytest.importorskip('numpy')
import dask.array as da
from dask.array.utils import assert_eq as _assert_eq, same_keys
from dask.core import get_deps
from dask.context import set_options
-import numpy as np
-# temporary until numpy functions migrated
-try:
- from numpy import nanprod
-except ImportError: # pragma: no cover
- import dask.array.numpy_compat as npcompat
- nanprod = npcompat.nanprod
-
def assert_eq(a, b):
_assert_eq(a, b, equal_nan=True)
@@ -56,7 +48,7 @@ def test_reductions_1D(dtype):
reduction_1d_test(da.all, a, np.all, x, False)
reduction_1d_test(da.nansum, a, np.nansum, x)
- reduction_1d_test(da.nanprod, a, nanprod, x)
+ reduction_1d_test(da.nanprod, a, np.nanprod, x)
reduction_1d_test(da.nanmean, a, np.mean, x)
reduction_1d_test(da.nanvar, a, np.var, x)
reduction_1d_test(da.nanstd, a, np.std, x)
@@ -127,7 +119,7 @@ def test_reductions_2D(dtype):
reduction_2d_test(da.all, a, np.all, x, False)
reduction_2d_test(da.nansum, a, np.nansum, x)
- reduction_2d_test(da.nanprod, a, nanprod, x)
+ reduction_2d_test(da.nanprod, a, np.nanprod, x)
reduction_2d_test(da.nanmean, a, np.mean, x)
reduction_2d_test(da.nanvar, a, np.nanvar, x, False) # Difference in dtype algo
reduction_2d_test(da.nanstd, a, np.nanstd, x, False) # Difference in dtype algo
@@ -207,7 +199,7 @@ def test_reductions_2D_nans():
reduction_2d_test(da.all, a, np.all, x, False, False)
reduction_2d_test(da.nansum, a, np.nansum, x, False, False)
- reduction_2d_test(da.nanprod, a, nanprod, x, False, False)
+ reduction_2d_test(da.nanprod, a, np.nanprod, x, False, False)
reduction_2d_test(da.nanmean, a, np.nanmean, x, False, False)
with pytest.warns(None): # division by 0 warning
reduction_2d_test(da.nanvar, a, np.nanvar, x, False, False)
@@ -287,7 +279,7 @@ def test_nan():
assert_eq(np.nanstd(x, axis=0), da.nanstd(d, axis=0))
assert_eq(np.nanargmin(x, axis=0), da.nanargmin(d, axis=0))
assert_eq(np.nanargmax(x, axis=0), da.nanargmax(d, axis=0))
- assert_eq(nanprod(x), da.nanprod(d))
+ assert_eq(np.nanprod(x), da.nanprod(d))
@pytest.mark.skipif(np.__version__ < '1.13.0', reason='nanmax/nanmin for object dtype')
@@ -422,3 +414,65 @@ def test_array_cumreduction_out(func):
x = da.ones((10, 10), chunks=(4, 4))
func(x, axis=0, out=x)
assert_eq(x, func(np.ones((10, 10)), axis=0))
+
+
[email protected]('npfunc,daskfunc', [
+ (np.sort, da.topk),
+ (np.argsort, da.argtopk),
+])
[email protected]('split_every', [None, 2])
+def test_topk_argtopk1(npfunc, daskfunc, split_every):
+ # Test data
+ k = 5
+ a = da.random.random(1000, chunks=250)
+ b = da.random.random((10, 20, 30), chunks=(4, 8, 8))
+ c = da.from_array(np.array([(1, 'Hello'), (2, 'World')], dtype=[('foo', int), ('bar', '<U5')]),
+ chunks=1)
+
+ # 1-dimensional arrays
+ # top 5 elements, sorted descending
+ assert_eq(npfunc(a)[-k:][::-1],
+ daskfunc(a, k, split_every=split_every))
+ # bottom 5 elements, sorted ascending
+ assert_eq(npfunc(a)[:k],
+ daskfunc(a, -k, split_every=split_every))
+
+ # n-dimensional arrays
+ # also testing when k > chunk
+ # top 5 elements, sorted descending
+ assert_eq(npfunc(b, axis=0)[-k:, :, :][::-1, :, :],
+ daskfunc(b, k, axis=0, split_every=split_every))
+ assert_eq(npfunc(b, axis=1)[:, -k:, :][:, ::-1, :],
+ daskfunc(b, k, axis=1, split_every=split_every))
+ assert_eq(npfunc(b, axis=-1)[:, :, -k:][:, :, ::-1],
+ daskfunc(b, k, axis=-1, split_every=split_every))
+ with pytest.raises(ValueError):
+ daskfunc(b, k, axis=3, split_every=split_every)
+
+ # bottom 5 elements, sorted ascending
+ assert_eq(npfunc(b, axis=0)[:k, :, :],
+ daskfunc(b, -k, axis=0, split_every=split_every))
+ assert_eq(npfunc(b, axis=1)[:, :k, :],
+ daskfunc(b, -k, axis=1, split_every=split_every))
+ assert_eq(npfunc(b, axis=-1)[:, :, :k],
+ daskfunc(b, -k, axis=-1, split_every=split_every))
+ with pytest.raises(ValueError):
+ daskfunc(b, -k, axis=3, split_every=split_every)
+
+ # structured arrays
+ assert_eq(npfunc(c, axis=0)[-1:][::-1],
+ daskfunc(c, 1, split_every=split_every))
+ assert_eq(npfunc(c, axis=0)[:1],
+ daskfunc(c, -1, split_every=split_every))
+
+
+def test_topk_argtopk2():
+ a = da.random.random((10, 20, 30), chunks=(4, 8, 8))
+
+ # Support for deprecated API for topk
+ with pytest.warns(UserWarning):
+ assert_eq(da.topk(a, 5), da.topk(5, a))
+
+ # As Array methods
+ assert_eq(a.topk(5, axis=1, split_every=2), da.topk(a, 5, axis=1, split_every=2))
+ assert_eq(a.argtopk(5, axis=1, split_every=2), da.argtopk(a, 5, axis=1, split_every=2))
diff --git a/dask/array/tests/test_routines.py b/dask/array/tests/test_routines.py
index 882941d35..7b8443644 100644
--- a/dask/array/tests/test_routines.py
+++ b/dask/array/tests/test_routines.py
@@ -11,6 +11,7 @@ np = pytest.importorskip('numpy')
import dask.array as da
from dask.utils import ignoring
from dask.array.utils import assert_eq, same_keys
+from dask.array.einsumfuncs import einsum_can_optimize
def test_array():
@@ -418,27 +419,6 @@ def test_ediff1d(shape, to_end, to_begin):
assert_eq(da.ediff1d(a, to_end, to_begin), np.ediff1d(x, to_end, to_begin))
-def test_topk():
- x = np.array([5, 2, 1, 6])
- d = da.from_array(x, chunks=2)
-
- e = da.topk(2, d)
-
- assert e.chunks == ((2,),)
- assert_eq(e, np.sort(x)[-1:-3:-1])
- assert same_keys(da.topk(2, d), e)
-
-
-def test_topk_k_bigger_than_chunk():
- x = np.array([5, 2, 1, 6])
- d = da.from_array(x, chunks=2)
-
- e = da.topk(3, d)
-
- assert e.chunks == ((3,),)
- assert_eq(e, np.array([6, 5, 2]))
-
-
def test_bincount():
x = np.array([2, 1, 5, 2, 1])
d = da.from_array(x, chunks=2)
@@ -469,8 +449,6 @@ def test_bincount_raises_informative_error_on_missing_minlength_kwarg():
assert False
[email protected](LooseVersion(np.__version__) < '1.10.0',
- reason="NumPy doesn't yet support nd digitize")
def test_digitize():
x = np.array([2, 4, 5, 6, 1])
bins = np.array([1, 2, 3, 4, 5])
@@ -1356,6 +1334,8 @@ def test_einsum(einsum_signature):
da.einsum(einsum_signature, *da_inputs))
[email protected](not einsum_can_optimize,
+ reason="np.einsum(optimize) unavailable")
@pytest.mark.parametrize('optimize_opts', [
(True, False),
('greedy', False),
diff --git a/dask/array/tests/test_sparse.py b/dask/array/tests/test_sparse.py
index dc7519f5a..028bcfef4 100644
--- a/dask/array/tests/test_sparse.py
+++ b/dask/array/tests/test_sparse.py
@@ -9,7 +9,7 @@ from dask.array.utils import assert_eq
sparse = pytest.importorskip('sparse')
-if LooseVersion(np.__version__) < '1.11.0':
+if LooseVersion(np.__version__) < '1.11.2':
pytestmark = pytest.mark.skip
diff --git a/dask/dataframe/tests/test_dataframe.py b/dask/dataframe/tests/test_dataframe.py
index 6c447ecd2..35e290f7c 100644
--- a/dask/dataframe/tests/test_dataframe.py
+++ b/dask/dataframe/tests/test_dataframe.py
@@ -1,6 +1,7 @@
import sys
-from operator import add
+from distutils.version import LooseVersion
from itertools import product
+from operator import add
import pandas as pd
import pandas.util.testing as tm
@@ -331,24 +332,25 @@ def test_cumulative():
assert_eq(ddf.cummin(axis=1), df.cummin(axis=1))
assert_eq(ddf.cummax(axis=1), df.cummax(axis=1))
- # testing out parameter
- np.cumsum(ddf, out=ddf_out)
- assert_eq(ddf_out, df.cumsum())
- np.cumprod(ddf, out=ddf_out)
- assert_eq(ddf_out, df.cumprod())
- ddf.cummin(out=ddf_out)
- assert_eq(ddf_out, df.cummin())
- ddf.cummax(out=ddf_out)
- assert_eq(ddf_out, df.cummax())
-
- np.cumsum(ddf, out=ddf_out, axis=1)
- assert_eq(ddf_out, df.cumsum(axis=1))
- np.cumprod(ddf, out=ddf_out, axis=1)
- assert_eq(ddf_out, df.cumprod(axis=1))
- ddf.cummin(out=ddf_out, axis=1)
- assert_eq(ddf_out, df.cummin(axis=1))
- ddf.cummax(out=ddf_out, axis=1)
- assert_eq(ddf_out, df.cummax(axis=1))
+ # testing out parameter if out parameter supported
+ if LooseVersion(np.__version__) >= '1.13.0':
+ np.cumsum(ddf, out=ddf_out)
+ assert_eq(ddf_out, df.cumsum())
+ np.cumprod(ddf, out=ddf_out)
+ assert_eq(ddf_out, df.cumprod())
+ ddf.cummin(out=ddf_out)
+ assert_eq(ddf_out, df.cummin())
+ ddf.cummax(out=ddf_out)
+ assert_eq(ddf_out, df.cummax())
+
+ np.cumsum(ddf, out=ddf_out, axis=1)
+ assert_eq(ddf_out, df.cumsum(axis=1))
+ np.cumprod(ddf, out=ddf_out, axis=1)
+ assert_eq(ddf_out, df.cumprod(axis=1))
+ ddf.cummin(out=ddf_out, axis=1)
+ assert_eq(ddf_out, df.cummin(axis=1))
+ ddf.cummax(out=ddf_out, axis=1)
+ assert_eq(ddf_out, df.cummax(axis=1))
assert_eq(ddf.a.cumsum(), df.a.cumsum())
assert_eq(ddf.a.cumprod(), df.a.cumprod())
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 15
} | 1.21 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "py.test --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
click==8.0.4
cloudpickle==2.2.1
-e git+https://github.com/dask/dask.git@a842d448b7dabd48f8ad23cba906f2502e6149a8#egg=dask
distributed==1.21.8
HeapDict==1.0.1
importlib-metadata==4.8.3
iniconfig==1.1.1
locket==1.0.0
msgpack==1.0.5
numpy==1.19.5
packaging==21.3
pandas==1.1.5
partd==1.2.0
pluggy==1.0.0
psutil==7.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
tomli==1.2.3
toolz==0.12.0
tornado==6.1
typing_extensions==4.1.1
zict==2.1.0
zipp==3.6.0
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- click==8.0.4
- cloudpickle==2.2.1
- distributed==1.21.8
- heapdict==1.0.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- locket==1.0.0
- msgpack==1.0.5
- numpy==1.19.5
- packaging==21.3
- pandas==1.1.5
- partd==1.2.0
- pluggy==1.0.0
- psutil==7.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- tomli==1.2.3
- toolz==0.12.0
- tornado==6.1
- typing-extensions==4.1.1
- zict==2.1.0
- zipp==3.6.0
prefix: /opt/conda/envs/dask
| [
"dask/array/tests/test_array_core.py::test_getem",
"dask/array/tests/test_array_core.py::test_top",
"dask/array/tests/test_array_core.py::test_top_supports_broadcasting_rules",
"dask/array/tests/test_array_core.py::test_top_literals",
"dask/array/tests/test_array_core.py::test_atop_literals",
"dask/array/tests/test_array_core.py::test_concatenate3_on_scalars",
"dask/array/tests/test_array_core.py::test_chunked_dot_product",
"dask/array/tests/test_array_core.py::test_chunked_transpose_plus_one",
"dask/array/tests/test_array_core.py::test_broadcast_dimensions_works_with_singleton_dimensions",
"dask/array/tests/test_array_core.py::test_broadcast_dimensions",
"dask/array/tests/test_array_core.py::test_Array",
"dask/array/tests/test_array_core.py::test_uneven_chunks",
"dask/array/tests/test_array_core.py::test_numblocks_suppoorts_singleton_block_dims",
"dask/array/tests/test_array_core.py::test_keys",
"dask/array/tests/test_array_core.py::test_Array_computation",
"dask/array/tests/test_array_core.py::test_stack",
"dask/array/tests/test_array_core.py::test_short_stack",
"dask/array/tests/test_array_core.py::test_stack_scalars",
"dask/array/tests/test_array_core.py::test_stack_promote_type",
"dask/array/tests/test_array_core.py::test_stack_rechunk",
"dask/array/tests/test_array_core.py::test_concatenate",
"dask/array/tests/test_array_core.py::test_concatenate_types[dtypes0]",
"dask/array/tests/test_array_core.py::test_concatenate_types[dtypes1]",
"dask/array/tests/test_array_core.py::test_concatenate_unknown_axes",
"dask/array/tests/test_array_core.py::test_concatenate_rechunk",
"dask/array/tests/test_array_core.py::test_concatenate_fixlen_strings",
"dask/array/tests/test_array_core.py::test_block_simple_row_wise",
"dask/array/tests/test_array_core.py::test_block_simple_column_wise",
"dask/array/tests/test_array_core.py::test_block_with_1d_arrays_row_wise",
"dask/array/tests/test_array_core.py::test_block_with_1d_arrays_multiple_rows",
"dask/array/tests/test_array_core.py::test_block_with_1d_arrays_column_wise",
"dask/array/tests/test_array_core.py::test_block_mixed_1d_and_2d",
"dask/array/tests/test_array_core.py::test_block_complicated",
"dask/array/tests/test_array_core.py::test_block_nested",
"dask/array/tests/test_array_core.py::test_block_3d",
"dask/array/tests/test_array_core.py::test_block_with_mismatched_shape",
"dask/array/tests/test_array_core.py::test_block_no_lists",
"dask/array/tests/test_array_core.py::test_block_invalid_nesting",
"dask/array/tests/test_array_core.py::test_block_empty_lists",
"dask/array/tests/test_array_core.py::test_block_tuple",
"dask/array/tests/test_array_core.py::test_binops",
"dask/array/tests/test_array_core.py::test_broadcast_shapes",
"dask/array/tests/test_array_core.py::test_elemwise_on_scalars",
"dask/array/tests/test_array_core.py::test_elemwise_with_ndarrays",
"dask/array/tests/test_array_core.py::test_elemwise_differently_chunked",
"dask/array/tests/test_array_core.py::test_elemwise_dtype",
"dask/array/tests/test_array_core.py::test_operators",
"dask/array/tests/test_array_core.py::test_operator_dtype_promotion",
"dask/array/tests/test_array_core.py::test_T",
"dask/array/tests/test_array_core.py::test_norm",
"dask/array/tests/test_array_core.py::test_broadcast_to",
"dask/array/tests/test_array_core.py::test_broadcast_to_array",
"dask/array/tests/test_array_core.py::test_broadcast_to_scalar",
"dask/array/tests/test_array_core.py::test_broadcast_to_chunks",
"dask/array/tests/test_array_core.py::test_broadcast_arrays",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape0-v_shape0]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape1-v_shape1]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape2-v_shape2]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape3-v_shape3]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape4-v_shape4]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape5-v_shape5]",
"dask/array/tests/test_array_core.py::test_broadcast_operator[u_shape6-v_shape6]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape0-new_shape0-chunks0]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape1-new_shape1-5]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape2-new_shape2-5]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape3-new_shape3-12]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape4-new_shape4-12]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape5-new_shape5-chunks5]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape6-new_shape6-4]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape7-new_shape7-4]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape8-new_shape8-4]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape9-new_shape9-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape10-new_shape10-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape11-new_shape11-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape12-new_shape12-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape13-new_shape13-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape14-new_shape14-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape15-new_shape15-2]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape16-new_shape16-chunks16]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape17-new_shape17-3]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape18-new_shape18-4]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape19-new_shape19-chunks19]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape20-new_shape20-1]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape21-new_shape21-1]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape22-new_shape22-24]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape23-new_shape23-6]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape24-new_shape24-6]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape25-new_shape25-6]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape26-new_shape26-chunks26]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape27-new_shape27-chunks27]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape28-new_shape28-chunks28]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape29-new_shape29-chunks29]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape30-new_shape30-chunks30]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape31-new_shape31-chunks31]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape32-new_shape32-chunks32]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape33-new_shape33-chunks33]",
"dask/array/tests/test_array_core.py::test_reshape[original_shape34-new_shape34-chunks34]",
"dask/array/tests/test_array_core.py::test_reshape_exceptions",
"dask/array/tests/test_array_core.py::test_reshape_splat",
"dask/array/tests/test_array_core.py::test_reshape_fails_for_dask_only",
"dask/array/tests/test_array_core.py::test_reshape_unknown_dimensions",
"dask/array/tests/test_array_core.py::test_full",
"dask/array/tests/test_array_core.py::test_map_blocks",
"dask/array/tests/test_array_core.py::test_map_blocks2",
"dask/array/tests/test_array_core.py::test_map_blocks_with_constants",
"dask/array/tests/test_array_core.py::test_map_blocks_with_kwargs",
"dask/array/tests/test_array_core.py::test_map_blocks_with_chunks",
"dask/array/tests/test_array_core.py::test_map_blocks_dtype_inference",
"dask/array/tests/test_array_core.py::test_from_function_requires_block_args",
"dask/array/tests/test_array_core.py::test_repr",
"dask/array/tests/test_array_core.py::test_slicing_with_ellipsis",
"dask/array/tests/test_array_core.py::test_slicing_with_ndarray",
"dask/array/tests/test_array_core.py::test_dtype",
"dask/array/tests/test_array_core.py::test_blockdims_from_blockshape",
"dask/array/tests/test_array_core.py::test_coerce",
"dask/array/tests/test_array_core.py::test_bool",
"dask/array/tests/test_array_core.py::test_store_kwargs",
"dask/array/tests/test_array_core.py::test_store_delayed_target",
"dask/array/tests/test_array_core.py::test_store",
"dask/array/tests/test_array_core.py::test_store_regions",
"dask/array/tests/test_array_core.py::test_store_compute_false",
"dask/array/tests/test_array_core.py::test_store_locks",
"dask/array/tests/test_array_core.py::test_store_method_return",
"dask/array/tests/test_array_core.py::test_to_dask_dataframe",
"dask/array/tests/test_array_core.py::test_np_array_with_zero_dimensions",
"dask/array/tests/test_array_core.py::test_dtype_complex",
"dask/array/tests/test_array_core.py::test_astype",
"dask/array/tests/test_array_core.py::test_arithmetic",
"dask/array/tests/test_array_core.py::test_elemwise_consistent_names",
"dask/array/tests/test_array_core.py::test_optimize",
"dask/array/tests/test_array_core.py::test_slicing_with_non_ndarrays",
"dask/array/tests/test_array_core.py::test_getter",
"dask/array/tests/test_array_core.py::test_size",
"dask/array/tests/test_array_core.py::test_nbytes",
"dask/array/tests/test_array_core.py::test_itemsize",
"dask/array/tests/test_array_core.py::test_Array_normalizes_dtype",
"dask/array/tests/test_array_core.py::test_from_array_with_lock",
"dask/array/tests/test_array_core.py::test_from_array_tasks_always_call_getter",
"dask/array/tests/test_array_core.py::test_from_array_no_asarray",
"dask/array/tests/test_array_core.py::test_from_array_getitem",
"dask/array/tests/test_array_core.py::test_from_array_minus_one",
"dask/array/tests/test_array_core.py::test_asarray",
"dask/array/tests/test_array_core.py::test_asanyarray",
"dask/array/tests/test_array_core.py::test_from_func",
"dask/array/tests/test_array_core.py::test_concatenate3_2",
"dask/array/tests/test_array_core.py::test_map_blocks3",
"dask/array/tests/test_array_core.py::test_from_array_with_missing_chunks",
"dask/array/tests/test_array_core.py::test_normalize_chunks",
"dask/array/tests/test_array_core.py::test_raise_on_no_chunks",
"dask/array/tests/test_array_core.py::test_chunks_is_immutable",
"dask/array/tests/test_array_core.py::test_raise_on_bad_kwargs",
"dask/array/tests/test_array_core.py::test_long_slice",
"dask/array/tests/test_array_core.py::test_ellipsis_slicing",
"dask/array/tests/test_array_core.py::test_point_slicing",
"dask/array/tests/test_array_core.py::test_point_slicing_with_full_slice",
"dask/array/tests/test_array_core.py::test_slice_with_floats",
"dask/array/tests/test_array_core.py::test_slice_with_integer_types",
"dask/array/tests/test_array_core.py::test_index_with_integer_types",
"dask/array/tests/test_array_core.py::test_vindex_basic",
"dask/array/tests/test_array_core.py::test_vindex_nd",
"dask/array/tests/test_array_core.py::test_vindex_negative",
"dask/array/tests/test_array_core.py::test_vindex_errors",
"dask/array/tests/test_array_core.py::test_vindex_merge",
"dask/array/tests/test_array_core.py::test_empty_array",
"dask/array/tests/test_array_core.py::test_memmap",
"dask/array/tests/test_array_core.py::test_to_npy_stack",
"dask/array/tests/test_array_core.py::test_view",
"dask/array/tests/test_array_core.py::test_view_fortran",
"dask/array/tests/test_array_core.py::test_map_blocks_with_changed_dimension",
"dask/array/tests/test_array_core.py::test_broadcast_chunks",
"dask/array/tests/test_array_core.py::test_chunks_error",
"dask/array/tests/test_array_core.py::test_array_compute_forward_kwargs",
"dask/array/tests/test_array_core.py::test_dont_fuse_outputs",
"dask/array/tests/test_array_core.py::test_dont_dealias_outputs",
"dask/array/tests/test_array_core.py::test_timedelta_op",
"dask/array/tests/test_array_core.py::test_to_delayed",
"dask/array/tests/test_array_core.py::test_to_delayed_optimize_graph",
"dask/array/tests/test_array_core.py::test_cumulative",
"dask/array/tests/test_array_core.py::test_atop_names",
"dask/array/tests/test_array_core.py::test_atop_new_axes",
"dask/array/tests/test_array_core.py::test_atop_kwargs",
"dask/array/tests/test_array_core.py::test_atop_chunks",
"dask/array/tests/test_array_core.py::test_atop_raises_on_incorrect_indices",
"dask/array/tests/test_array_core.py::test_from_delayed",
"dask/array/tests/test_array_core.py::test_A_property",
"dask/array/tests/test_array_core.py::test_copy_mutate",
"dask/array/tests/test_array_core.py::test_npartitions",
"dask/array/tests/test_array_core.py::test_astype_gh1151",
"dask/array/tests/test_array_core.py::test_elemwise_name",
"dask/array/tests/test_array_core.py::test_map_blocks_name",
"dask/array/tests/test_array_core.py::test_array_picklable",
"dask/array/tests/test_array_core.py::test_from_array_raises_on_bad_chunks",
"dask/array/tests/test_array_core.py::test_concatenate_axes",
"dask/array/tests/test_array_core.py::test_atop_concatenate",
"dask/array/tests/test_array_core.py::test_common_blockdim",
"dask/array/tests/test_array_core.py::test_uneven_chunks_that_fit_neatly",
"dask/array/tests/test_array_core.py::test_elemwise_uneven_chunks",
"dask/array/tests/test_array_core.py::test_uneven_chunks_atop",
"dask/array/tests/test_array_core.py::test_warn_bad_rechunking",
"dask/array/tests/test_array_core.py::test_optimize_fuse_keys",
"dask/array/tests/test_array_core.py::test_concatenate_stack_dont_warn",
"dask/array/tests/test_array_core.py::test_map_blocks_delayed",
"dask/array/tests/test_array_core.py::test_no_chunks",
"dask/array/tests/test_array_core.py::test_no_chunks_2d",
"dask/array/tests/test_array_core.py::test_no_chunks_yes_chunks",
"dask/array/tests/test_array_core.py::test_raise_informative_errors_no_chunks",
"dask/array/tests/test_array_core.py::test_no_chunks_slicing_2d",
"dask/array/tests/test_array_core.py::test_index_array_with_array_1d",
"dask/array/tests/test_array_core.py::test_index_array_with_array_2d",
"dask/array/tests/test_array_core.py::test_setitem_1d",
"dask/array/tests/test_array_core.py::test_setitem_2d",
"dask/array/tests/test_array_core.py::test_setitem_errs",
"dask/array/tests/test_array_core.py::test_zero_slice_dtypes",
"dask/array/tests/test_array_core.py::test_zero_sized_array_rechunk",
"dask/array/tests/test_array_core.py::test_atop_zero_shape",
"dask/array/tests/test_array_core.py::test_atop_zero_shape_new_axes",
"dask/array/tests/test_array_core.py::test_broadcast_against_zero_shape",
"dask/array/tests/test_array_core.py::test_from_array_name",
"dask/array/tests/test_array_core.py::test_concatenate_errs",
"dask/array/tests/test_array_core.py::test_stack_errs",
"dask/array/tests/test_array_core.py::test_atop_with_numpy_arrays",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other0-100]",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other0-6]",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other1-100]",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other1-6]",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other2-100]",
"dask/array/tests/test_array_core.py::test_elemwise_with_lists[other2-6]",
"dask/array/tests/test_array_core.py::test_constructor_plugin",
"dask/array/tests/test_array_core.py::test_no_warnings_on_metadata",
"dask/array/tests/test_array_core.py::test_delayed_array_key_hygeine",
"dask/array/tests/test_array_core.py::test_empty_chunks_in_array_len",
"dask/array/tests/test_array_core.py::test_meta[None]",
"dask/array/tests/test_array_core.py::test_meta[dtype1]",
"dask/array/tests/test_reductions.py::test_reductions_1D[f4]",
"dask/array/tests/test_reductions.py::test_reductions_1D[i4]",
"dask/array/tests/test_reductions.py::test_reduction_errors",
"dask/array/tests/test_reductions.py::test_arg_reductions[argmin-argmin]",
"dask/array/tests/test_reductions.py::test_arg_reductions[argmax-argmax]",
"dask/array/tests/test_reductions.py::test_arg_reductions[_nanargmin-nanargmin]",
"dask/array/tests/test_reductions.py::test_arg_reductions[_nanargmax-nanargmax]",
"dask/array/tests/test_reductions.py::test_nanarg_reductions[_nanargmin-nanargmin]",
"dask/array/tests/test_reductions.py::test_nanarg_reductions[_nanargmax-nanargmax]",
"dask/array/tests/test_reductions.py::test_reductions_2D_nans",
"dask/array/tests/test_reductions.py::test_moment",
"dask/array/tests/test_reductions.py::test_reductions_with_negative_axes",
"dask/array/tests/test_reductions.py::test_nan",
"dask/array/tests/test_reductions.py::test_0d_array",
"dask/array/tests/test_reductions.py::test_reduction_on_scalar",
"dask/array/tests/test_reductions.py::test_reductions_with_empty_array",
"dask/array/tests/test_reductions.py::test_tree_reduce_depth",
"dask/array/tests/test_reductions.py::test_tree_reduce_set_options",
"dask/array/tests/test_reductions.py::test_reduction_names",
"dask/array/tests/test_reductions.py::test_array_reduction_out[sum]",
"dask/array/tests/test_reductions.py::test_array_reduction_out[argmax]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[None-cumsum]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[None-cumprod]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[0-cumsum]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[0-cumprod]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[1-cumsum]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[1-cumprod]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[-1-cumsum]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_axis[-1-cumprod]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_out[cumsum]",
"dask/array/tests/test_reductions.py::test_array_cumreduction_out[cumprod]",
"dask/array/tests/test_reductions.py::test_topk_argtopk1[None-sort-topk]",
"dask/array/tests/test_reductions.py::test_topk_argtopk1[None-argsort-argtopk]",
"dask/array/tests/test_reductions.py::test_topk_argtopk1[2-sort-topk]",
"dask/array/tests/test_reductions.py::test_topk_argtopk1[2-argsort-argtopk]",
"dask/array/tests/test_reductions.py::test_topk_argtopk2",
"dask/array/tests/test_routines.py::test_array",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_3d]",
"dask/array/tests/test_routines.py::test_transpose",
"dask/array/tests/test_routines.py::test_transpose_negative_axes",
"dask/array/tests/test_routines.py::test_swapaxes",
"dask/array/tests/test_routines.py::test_flip[shape0-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape0-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape1-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape1-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape2-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape2-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape3-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape3-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape4-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape4-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape0-y_shape0]",
"dask/array/tests/test_routines.py::test_matmul[x_shape1-y_shape1]",
"dask/array/tests/test_routines.py::test_matmul[x_shape2-y_shape2]",
"dask/array/tests/test_routines.py::test_matmul[x_shape3-y_shape3]",
"dask/array/tests/test_routines.py::test_matmul[x_shape4-y_shape4]",
"dask/array/tests/test_routines.py::test_matmul[x_shape5-y_shape5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape6-y_shape6]",
"dask/array/tests/test_routines.py::test_matmul[x_shape7-y_shape7]",
"dask/array/tests/test_routines.py::test_matmul[x_shape8-y_shape8]",
"dask/array/tests/test_routines.py::test_matmul[x_shape9-y_shape9]",
"dask/array/tests/test_routines.py::test_matmul[x_shape10-y_shape10]",
"dask/array/tests/test_routines.py::test_matmul[x_shape11-y_shape11]",
"dask/array/tests/test_routines.py::test_matmul[x_shape12-y_shape12]",
"dask/array/tests/test_routines.py::test_matmul[x_shape13-y_shape13]",
"dask/array/tests/test_routines.py::test_matmul[x_shape14-y_shape14]",
"dask/array/tests/test_routines.py::test_matmul[x_shape15-y_shape15]",
"dask/array/tests/test_routines.py::test_matmul[x_shape16-y_shape16]",
"dask/array/tests/test_routines.py::test_matmul[x_shape17-y_shape17]",
"dask/array/tests/test_routines.py::test_matmul[x_shape18-y_shape18]",
"dask/array/tests/test_routines.py::test_matmul[x_shape19-y_shape19]",
"dask/array/tests/test_routines.py::test_matmul[x_shape20-y_shape20]",
"dask/array/tests/test_routines.py::test_matmul[x_shape21-y_shape21]",
"dask/array/tests/test_routines.py::test_matmul[x_shape22-y_shape22]",
"dask/array/tests/test_routines.py::test_matmul[x_shape23-y_shape23]",
"dask/array/tests/test_routines.py::test_matmul[x_shape24-y_shape24]",
"dask/array/tests/test_routines.py::test_tensordot",
"dask/array/tests/test_routines.py::test_tensordot_2[0]",
"dask/array/tests/test_routines.py::test_tensordot_2[1]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes2]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes3]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes4]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes5]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes6]",
"dask/array/tests/test_routines.py::test_dot_method",
"dask/array/tests/test_routines.py::test_vdot[shape0-chunks0]",
"dask/array/tests/test_routines.py::test_vdot[shape1-chunks1]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-range-<lambda>]",
"dask/array/tests/test_routines.py::test_ptp[shape0-None]",
"dask/array/tests/test_routines.py::test_ptp[shape1-0]",
"dask/array/tests/test_routines.py::test_ptp[shape2-1]",
"dask/array/tests/test_routines.py::test_ptp[shape3-2]",
"dask/array/tests/test_routines.py::test_ptp[shape4--1]",
"dask/array/tests/test_routines.py::test_diff[0-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[0-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[0-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[0-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[1-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[1-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[1-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[1-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[2-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[2-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[2-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[2-shape3--1]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape1]",
"dask/array/tests/test_routines.py::test_bincount",
"dask/array/tests/test_routines.py::test_bincount_with_weights",
"dask/array/tests/test_routines.py::test_bincount_raises_informative_error_on_missing_minlength_kwarg",
"dask/array/tests/test_routines.py::test_digitize",
"dask/array/tests/test_routines.py::test_histogram",
"dask/array/tests/test_routines.py::test_histogram_alternative_bins_range",
"dask/array/tests/test_routines.py::test_histogram_return_type",
"dask/array/tests/test_routines.py::test_histogram_extra_args_and_shapes",
"dask/array/tests/test_routines.py::test_cov",
"dask/array/tests/test_routines.py::test_corrcoef",
"dask/array/tests/test_routines.py::test_round",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-True]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[True]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[False]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_ravel",
"dask/array/tests/test_routines.py::test_squeeze[None-True]",
"dask/array/tests/test_routines.py::test_squeeze[None-False]",
"dask/array/tests/test_routines.py::test_squeeze[0-True]",
"dask/array/tests/test_routines.py::test_squeeze[0-False]",
"dask/array/tests/test_routines.py::test_squeeze[-1-True]",
"dask/array/tests/test_routines.py::test_squeeze[-1-False]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-True]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-False]",
"dask/array/tests/test_routines.py::test_vstack",
"dask/array/tests/test_routines.py::test_hstack",
"dask/array/tests/test_routines.py::test_dstack",
"dask/array/tests/test_routines.py::test_take",
"dask/array/tests/test_routines.py::test_take_dask_from_numpy",
"dask/array/tests/test_routines.py::test_compress",
"dask/array/tests/test_routines.py::test_extract",
"dask/array/tests/test_routines.py::test_isnull",
"dask/array/tests/test_routines.py::test_isclose",
"dask/array/tests/test_routines.py::test_allclose",
"dask/array/tests/test_routines.py::test_choose",
"dask/array/tests/test_routines.py::test_piecewise",
"dask/array/tests/test_routines.py::test_piecewise_otherwise",
"dask/array/tests/test_routines.py::test_argwhere",
"dask/array/tests/test_routines.py::test_argwhere_obj",
"dask/array/tests/test_routines.py::test_argwhere_str",
"dask/array/tests/test_routines.py::test_where",
"dask/array/tests/test_routines.py::test_where_scalar_dtype",
"dask/array/tests/test_routines.py::test_where_bool_optimization",
"dask/array/tests/test_routines.py::test_where_nonzero",
"dask/array/tests/test_routines.py::test_where_incorrect_args",
"dask/array/tests/test_routines.py::test_count_nonzero",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_str",
"dask/array/tests/test_routines.py::test_flatnonzero",
"dask/array/tests/test_routines.py::test_nonzero",
"dask/array/tests/test_routines.py::test_nonzero_method",
"dask/array/tests/test_routines.py::test_coarsen",
"dask/array/tests/test_routines.py::test_coarsen_with_excess",
"dask/array/tests/test_routines.py::test_insert",
"dask/array/tests/test_routines.py::test_multi_insert",
"dask/array/tests/test_routines.py::test_result_type",
"dask/array/tests/test_routines.py::test_einsum[abc,bad->abcd]",
"dask/array/tests/test_routines.py::test_einsum[abcdef,bcdfg->abcdeg]",
"dask/array/tests/test_routines.py::test_einsum[ea,fb,abcd,gc,hd->efgh]",
"dask/array/tests/test_routines.py::test_einsum[ab,b]",
"dask/array/tests/test_routines.py::test_einsum[aa]",
"dask/array/tests/test_routines.py::test_einsum[a,a->]",
"dask/array/tests/test_routines.py::test_einsum[a,a->a]",
"dask/array/tests/test_routines.py::test_einsum[a,a]",
"dask/array/tests/test_routines.py::test_einsum[a,b]",
"dask/array/tests/test_routines.py::test_einsum[a,b,c]",
"dask/array/tests/test_routines.py::test_einsum[a]",
"dask/array/tests/test_routines.py::test_einsum[ba,b]",
"dask/array/tests/test_routines.py::test_einsum[ba,b->]",
"dask/array/tests/test_routines.py::test_einsum[defab,fedbc->defac]",
"dask/array/tests/test_routines.py::test_einsum[ab...,bc...->ac...]",
"dask/array/tests/test_routines.py::test_einsum[a...a]",
"dask/array/tests/test_routines.py::test_einsum[abc...->cba...]",
"dask/array/tests/test_routines.py::test_einsum[...ab->...a]",
"dask/array/tests/test_routines.py::test_einsum[a...a->a...]",
"dask/array/tests/test_routines.py::test_einsum[...abc,...abcd->...d]",
"dask/array/tests/test_routines.py::test_einsum[ab...,b->ab...]",
"dask/array/tests/test_routines.py::test_einsum[aa->a]",
"dask/array/tests/test_routines.py::test_einsum[ab,ab,c->c]",
"dask/array/tests/test_routines.py::test_einsum[aab,bc->ac]",
"dask/array/tests/test_routines.py::test_einsum[aab,bcc->ac]",
"dask/array/tests/test_routines.py::test_einsum[fdf,cdd,ccd,afe->ae]",
"dask/array/tests/test_routines.py::test_einsum[fff,fae,bef,def->abd]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts0]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts1]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts2]",
"dask/array/tests/test_routines.py::test_einsum_order[C]",
"dask/array/tests/test_routines.py::test_einsum_order[F]",
"dask/array/tests/test_routines.py::test_einsum_order[A]",
"dask/array/tests/test_routines.py::test_einsum_order[K]",
"dask/array/tests/test_routines.py::test_einsum_casting[no]",
"dask/array/tests/test_routines.py::test_einsum_casting[equiv]",
"dask/array/tests/test_routines.py::test_einsum_casting[safe]",
"dask/array/tests/test_routines.py::test_einsum_casting[same_kind]",
"dask/array/tests/test_routines.py::test_einsum_casting[unsafe]",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction2",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction3",
"dask/dataframe/tests/test_dataframe.py::test_head_tail",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions_warn",
"dask/dataframe/tests/test_dataframe.py::test_index_head",
"dask/dataframe/tests/test_dataframe.py::test_Series",
"dask/dataframe/tests/test_dataframe.py::test_Index",
"dask/dataframe/tests/test_dataframe.py::test_Scalar",
"dask/dataframe/tests/test_dataframe.py::test_column_names",
"dask/dataframe/tests/test_dataframe.py::test_index_names",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[1]",
"dask/dataframe/tests/test_dataframe.py::test_rename_columns",
"dask/dataframe/tests/test_dataframe.py::test_rename_series",
"dask/dataframe/tests/test_dataframe.py::test_rename_series_method",
"dask/dataframe/tests/test_dataframe.py::test_describe",
"dask/dataframe/tests/test_dataframe.py::test_describe_empty",
"dask/dataframe/tests/test_dataframe.py::test_cumulative",
"dask/dataframe/tests/test_dataframe.py::test_dropna",
"dask/dataframe/tests/test_dataframe.py::test_squeeze",
"dask/dataframe/tests/test_dataframe.py::test_where_mask",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_multi_argument",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_column_info",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_method_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_keeps_kwargs_readable",
"dask/dataframe/tests/test_dataframe.py::test_metadata_inference_single_partition_aligned_args",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates_subset",
"dask/dataframe/tests/test_dataframe.py::test_get_partition",
"dask/dataframe/tests/test_dataframe.py::test_ndim",
"dask/dataframe/tests/test_dataframe.py::test_dtype",
"dask/dataframe/tests/test_dataframe.py::test_value_counts",
"dask/dataframe/tests/test_dataframe.py::test_unique",
"dask/dataframe/tests/test_dataframe.py::test_isin",
"dask/dataframe/tests/test_dataframe.py::test_len",
"dask/dataframe/tests/test_dataframe.py::test_size",
"dask/dataframe/tests/test_dataframe.py::test_nbytes",
"dask/dataframe/tests/test_dataframe.py::test_quantile",
"dask/dataframe/tests/test_dataframe.py::test_quantile_missing",
"dask/dataframe/tests/test_dataframe.py::test_empty_quantile",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_quantile",
"dask/dataframe/tests/test_dataframe.py::test_index",
"dask/dataframe/tests/test_dataframe.py::test_assign",
"dask/dataframe/tests/test_dataframe.py::test_map",
"dask/dataframe/tests/test_dataframe.py::test_concat",
"dask/dataframe/tests/test_dataframe.py::test_args",
"dask/dataframe/tests/test_dataframe.py::test_known_divisions",
"dask/dataframe/tests/test_dataframe.py::test_unknown_divisions",
"dask/dataframe/tests/test_dataframe.py::test_align[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align[left]",
"dask/dataframe/tests/test_dataframe.py::test_align[right]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[left]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[right]",
"dask/dataframe/tests/test_dataframe.py::test_combine",
"dask/dataframe/tests/test_dataframe.py::test_combine_first",
"dask/dataframe/tests/test_dataframe.py::test_random_partitions",
"dask/dataframe/tests/test_dataframe.py::test_series_round",
"dask/dataframe/tests/test_dataframe.py::test_repartition_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_on_pandas_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions_same_limits",
"dask/dataframe/tests/test_dataframe.py::test_repartition_object_index",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_errors",
"dask/dataframe/tests/test_dataframe.py::test_embarrassingly_parallel_operations",
"dask/dataframe/tests/test_dataframe.py::test_fillna",
"dask/dataframe/tests/test_dataframe.py::test_fillna_multi_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_ffill_bfill",
"dask/dataframe/tests/test_dataframe.py::test_fillna_series_types",
"dask/dataframe/tests/test_dataframe.py::test_sample",
"dask/dataframe/tests/test_dataframe.py::test_sample_without_replacement",
"dask/dataframe/tests/test_dataframe.py::test_datetime_accessor",
"dask/dataframe/tests/test_dataframe.py::test_str_accessor",
"dask/dataframe/tests/test_dataframe.py::test_empty_max",
"dask/dataframe/tests/test_dataframe.py::test_deterministic_apply_concat_apply_names",
"dask/dataframe/tests/test_dataframe.py::test_aca_meta_infer",
"dask/dataframe/tests/test_dataframe.py::test_aca_split_every",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method_split_every",
"dask/dataframe/tests/test_dataframe.py::test_pipe",
"dask/dataframe/tests/test_dataframe.py::test_gh_517",
"dask/dataframe/tests/test_dataframe.py::test_drop_axis_1",
"dask/dataframe/tests/test_dataframe.py::test_gh580",
"dask/dataframe/tests/test_dataframe.py::test_rename_dict",
"dask/dataframe/tests/test_dataframe.py::test_rename_function",
"dask/dataframe/tests/test_dataframe.py::test_rename_index",
"dask/dataframe/tests/test_dataframe.py::test_to_frame",
"dask/dataframe/tests/test_dataframe.py::test_apply_warns",
"dask/dataframe/tests/test_dataframe.py::test_applymap",
"dask/dataframe/tests/test_dataframe.py::test_abs",
"dask/dataframe/tests/test_dataframe.py::test_round",
"dask/dataframe/tests/test_dataframe.py::test_cov",
"dask/dataframe/tests/test_dataframe.py::test_corr",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_meta",
"dask/dataframe/tests/test_dataframe.py::test_autocorr",
"dask/dataframe/tests/test_dataframe.py::test_index_time_properties",
"dask/dataframe/tests/test_dataframe.py::test_nlargest_nsmallest",
"dask/dataframe/tests/test_dataframe.py::test_reset_index",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_compute_forward_kwargs",
"dask/dataframe/tests/test_dataframe.py::test_series_iteritems",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_iterrows",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_itertuples",
"dask/dataframe/tests/test_dataframe.py::test_astype",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals_known",
"dask/dataframe/tests/test_dataframe.py::test_groupby_callable",
"dask/dataframe/tests/test_dataframe.py::test_methods_tokenize_differently",
"dask/dataframe/tests/test_dataframe.py::test_gh_1301",
"dask/dataframe/tests/test_dataframe.py::test_timeseries_sorted",
"dask/dataframe/tests/test_dataframe.py::test_column_assignment",
"dask/dataframe/tests/test_dataframe.py::test_columns_assignment",
"dask/dataframe/tests/test_dataframe.py::test_attribute_assignment",
"dask/dataframe/tests/test_dataframe.py::test_setitem_triggering_realign",
"dask/dataframe/tests/test_dataframe.py::test_inplace_operators",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_getitem_meta",
"dask/dataframe/tests/test_dataframe.py::test_getitem_multilevel",
"dask/dataframe/tests/test_dataframe.py::test_diff",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[2]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[2]",
"dask/dataframe/tests/test_dataframe.py::test_values",
"dask/dataframe/tests/test_dataframe.py::test_copy",
"dask/dataframe/tests/test_dataframe.py::test_del",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-False]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-False]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sum]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[mean]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[std]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[var]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[count]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[min]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[max]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmin]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmax]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[prod]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[all]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sem]",
"dask/dataframe/tests/test_dataframe.py::test_to_datetime",
"dask/dataframe/tests/test_dataframe.py::test_to_timedelta",
"dask/dataframe/tests/test_dataframe.py::test_isna[values0]",
"dask/dataframe/tests/test_dataframe.py::test_isna[values1]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[0]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_nonmonotonic",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-False-drop0]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-True-drop1]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-False-False-drop2]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-True-False-drop3]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-False-drop4]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-True-drop5]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1.5-None-False-True-drop6]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-False-False-drop7]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-True-False-drop8]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-2.5-False-False-drop9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index0-0-9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index1--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index2-None-10]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index3-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index4--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index5-None-2]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index6--2-3]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index7-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index8-left8-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index9-None-right9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index10-left10-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index11-None-right11]",
"dask/dataframe/tests/test_dataframe.py::test_better_errors_object_reductions",
"dask/dataframe/tests/test_dataframe.py::test_sample_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_coerce",
"dask/dataframe/tests/test_dataframe.py::test_bool",
"dask/dataframe/tests/test_dataframe.py::test_cumulative_multiple_columns",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[asarray]",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[func1]",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations_errors",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_multi_dimensional",
"dask/dataframe/tests/test_dataframe.py::test_meta_raises"
]
| [
"dask/array/tests/test_array_core.py::test_field_access",
"dask/array/tests/test_array_core.py::test_field_access_with_shape",
"dask/array/tests/test_array_core.py::test_matmul",
"dask/array/tests/test_reductions.py::test_nan_object[nansum]",
"dask/array/tests/test_reductions.py::test_nan_object[sum]",
"dask/array/tests/test_reductions.py::test_nan_object[nanmin]",
"dask/array/tests/test_reductions.py::test_nan_object[min]",
"dask/array/tests/test_reductions.py::test_nan_object[nanmax]",
"dask/array/tests/test_reductions.py::test_nan_object[max]",
"dask/dataframe/tests/test_dataframe.py::test_Dataframe",
"dask/dataframe/tests/test_dataframe.py::test_attributes",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[npartitions1]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2-5]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2.5-3.5]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_picklable",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_month",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include0-None]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[None-exclude1]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include2-exclude2]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include3-None]",
"dask/dataframe/tests/test_dataframe.py::test_to_timestamp",
"dask/dataframe/tests/test_dataframe.py::test_apply",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_mixed",
"dask/dataframe/tests/test_dataframe.py::test_apply_infer_columns",
"dask/dataframe/tests/test_dataframe.py::test_info",
"dask/dataframe/tests/test_dataframe.py::test_groupby_multilevel_info",
"dask/dataframe/tests/test_dataframe.py::test_categorize_info",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-False]",
"dask/dataframe/tests/test_dataframe.py::test_shift",
"dask/dataframe/tests/test_dataframe.py::test_shift_with_freq",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[first]",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[last]",
"dask/dataframe/tests/test_dataframe.py::test_datetime_loc_open_slicing"
]
| []
| []
| BSD 3-Clause "New" or "Revised" License | 2,431 | [
"dask/array/einsumfuncs.py",
"dask/array/utils.py",
"dask/array/numpy_compat.py",
"dask/array/reductions.py",
"docs/source/array-api.rst",
"dask/array/wrap.py",
"setup.py",
"dask/array/routines.py",
".gitignore",
"dask/array/core.py",
".travis.yml",
"dask/array/chunk.py",
"dask/array/__init__.py",
"dask/array/ma.py",
"docs/source/changelog.rst"
]
| [
"dask/array/einsumfuncs.py",
"dask/array/utils.py",
"dask/array/numpy_compat.py",
"dask/array/reductions.py",
"docs/source/array-api.rst",
"dask/array/wrap.py",
"setup.py",
"dask/array/routines.py",
".gitignore",
"dask/array/core.py",
".travis.yml",
"dask/array/chunk.py",
"dask/array/__init__.py",
"dask/array/ma.py",
"docs/source/changelog.rst"
]
|
|
Azure__WALinuxAgent-1127 | d1f9e05b9eaa63997108ebf3de261bf9dca7a25d | 2018-04-20 21:42:00 | 6e9b985c1d7d564253a1c344bab01b45093103cd | diff --git a/azurelinuxagent/ga/exthandlers.py b/azurelinuxagent/ga/exthandlers.py
index 91285cf9..024c7f55 100644
--- a/azurelinuxagent/ga/exthandlers.py
+++ b/azurelinuxagent/ga/exthandlers.py
@@ -56,6 +56,7 @@ from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION
# HandlerEnvironment.json schema version
HANDLER_ENVIRONMENT_VERSION = 1.0
+EXTENSION_STATUS_ERROR = 'error'
VALID_EXTENSION_STATUS = ['transitioning', 'error', 'success', 'warning']
VALID_HANDLER_STATUS = ['Ready', 'NotReady', "Installing", "Unresponsive"]
@@ -107,14 +108,15 @@ def parse_ext_status(ext_status, data):
validate_has_key(data, 'status', 'status')
status_data = data['status']
validate_has_key(status_data, 'status', 'status/status')
-
- validate_in_range(status_data['status'], VALID_EXTENSION_STATUS,
- 'status/status')
+
+ status = status_data['status']
+ if status not in VALID_EXTENSION_STATUS:
+ status = EXTENSION_STATUS_ERROR
applied_time = status_data.get('configurationAppliedTime')
ext_status.configurationAppliedTime = applied_time
ext_status.operation = status_data.get('operation')
- ext_status.status = status_data.get('status')
+ ext_status.status = status
ext_status.code = status_data.get('code', 0)
formatted_message = status_data.get('formattedMessage')
ext_status.message = parse_formatted_message(formatted_message)
| Extension install failures timeout
The Windows GA reports a status which allows a fast failure; however, the Linux GA just reports 'Not ready', which essentially waits for a CRP timeout. We should investigate if there is a substatus we are missing to allow a fast failure. | Azure/WALinuxAgent
new file mode 100644
index 00000000..248750b1
--- /dev/null
+++ b/tests/ga/test_exthandlers.py
@@ -0,0 +1,74 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the Apache License.
+import json
+
+from azurelinuxagent.common.protocol.restapi import ExtensionStatus
+from azurelinuxagent.ga.exthandlers import parse_ext_status
+from tests.tools import *
+
+
+class TestExtHandlers(AgentTestCase):
+ def test_parse_extension_status00(self):
+ """
+ Parse a status report for a successful execution of an extension.
+ """
+
+ s = '''[{
+ "status": {
+ "status": "success",
+ "formattedMessage": {
+ "lang": "en-US",
+ "message": "Command is finished."
+ },
+ "operation": "Daemon",
+ "code": "0",
+ "name": "Microsoft.OSTCExtensions.CustomScriptForLinux"
+ },
+ "version": "1.0",
+ "timestampUTC": "2018-04-20T21:20:24Z"
+ }
+]'''
+ ext_status = ExtensionStatus(seq_no=0)
+ parse_ext_status(ext_status, json.loads(s))
+
+ self.assertEqual('0', ext_status.code)
+ self.assertEqual(None, ext_status.configurationAppliedTime)
+ self.assertEqual('Command is finished.', ext_status.message)
+ self.assertEqual('Daemon', ext_status.operation)
+ self.assertEqual('success', ext_status.status)
+ self.assertEqual(0, ext_status.sequenceNumber)
+ self.assertEqual(0, len(ext_status.substatusList))
+
+ def test_parse_extension_status01(self):
+ """
+ Parse a status report for a failed execution of an extension.
+
+ The extension returned a bad status/status of failed.
+ The agent should handle this gracefully, and convert all unknown
+ status/status values into an error.
+ """
+
+ s = '''[{
+ "status": {
+ "status": "failed",
+ "formattedMessage": {
+ "lang": "en-US",
+ "message": "Enable failed: Failed with error: commandToExecute is empty or invalid ..."
+ },
+ "operation": "Enable",
+ "code": "0",
+ "name": "Microsoft.OSTCExtensions.CustomScriptForLinux"
+ },
+ "version": "1.0",
+ "timestampUTC": "2018-04-20T20:50:22Z"
+}]'''
+ ext_status = ExtensionStatus(seq_no=0)
+ parse_ext_status(ext_status, json.loads(s))
+
+ self.assertEqual('0', ext_status.code)
+ self.assertEqual(None, ext_status.configurationAppliedTime)
+ self.assertEqual('Enable failed: Failed with error: commandToExecute is empty or invalid ...', ext_status.message)
+ self.assertEqual('Enable', ext_status.operation)
+ self.assertEqual('error', ext_status.status)
+ self.assertEqual(0, ext_status.sequenceNumber)
+ self.assertEqual(0, len(ext_status.substatusList))
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 2.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pyasn1",
"nose",
"nose-cov",
"pytest"
],
"pre_install": null,
"python": "3.4",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
cov-core==1.15.0
coverage==6.2
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
nose==1.3.7
nose-cov==1.6
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyasn1==0.5.1
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
-e git+https://github.com/Azure/WALinuxAgent.git@d1f9e05b9eaa63997108ebf3de261bf9dca7a25d#egg=WALinuxAgent
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: WALinuxAgent
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- cov-core==1.15.0
- coverage==6.2
- nose==1.3.7
- nose-cov==1.6
- pyasn1==0.5.1
prefix: /opt/conda/envs/WALinuxAgent
| [
"tests/ga/test_exthandlers.py::TestExtHandlers::test_parse_extension_status01"
]
| []
| [
"tests/ga/test_exthandlers.py::TestExtHandlers::test_parse_extension_status00"
]
| []
| Apache License 2.0 | 2,432 | [
"azurelinuxagent/ga/exthandlers.py"
]
| [
"azurelinuxagent/ga/exthandlers.py"
]
|
|
python-cmd2__cmd2-366 | a0a46f9396a72f440f65e46d7170a0d366796574 | 2018-04-22 03:01:24 | 8f88f819fae7508066a81a8d961a7115f2ec4bed | diff --git a/CHANGELOG.md b/CHANGELOG.md
index aa2e785f..bb577994 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,8 @@
* All ``cmd2`` code should be ported to use the new ``argparse``-based decorators
* See the [Argument Processing](http://cmd2.readthedocs.io/en/latest/argument_processing.html) section of the documentation for more information on these decorators
* Alternatively, see the [argparse_example.py](https://github.com/python-cmd2/cmd2/blob/master/examples/argparse_example.py)
+ * Deleted ``cmd_with_subs_completer``, ``get_subcommands``, and ``get_subcommand_completer``
+ * Replaced by default AutoCompleter implementation for all commands using argparse
* Python 2 no longer supported
* ``cmd2`` now supports Python 3.4+
diff --git a/cmd2/argparse_completer.py b/cmd2/argparse_completer.py
index 35f9342b..e87e9c04 100755
--- a/cmd2/argparse_completer.py
+++ b/cmd2/argparse_completer.py
@@ -71,6 +71,10 @@ import re as _re
from .rl_utils import rl_force_redisplay
+# attribute that can optionally added to an argparse argument (called an Action) to
+# define the completion choices for the argument. You may provide a Collection or a Function.
+ACTION_ARG_CHOICES = 'arg_choices'
+
class _RangeAction(object):
def __init__(self, nargs: Union[int, str, Tuple[int, int], None]):
self.nargs_min = None
@@ -220,6 +224,10 @@ class AutoCompleter(object):
# if there are choices defined, record them in the arguments dictionary
if action.choices is not None:
self._arg_choices[action.dest] = action.choices
+ # if completion choices are tagged on the action, record them
+ elif hasattr(action, ACTION_ARG_CHOICES):
+ action_arg_choices = getattr(action, ACTION_ARG_CHOICES)
+ self._arg_choices[action.dest] = action_arg_choices
# if the parameter is flag based, it will have option_strings
if action.option_strings:
@@ -406,6 +414,21 @@ class AutoCompleter(object):
return completion_results
+ def complete_command_help(self, tokens: List[str], text: str, line: str, begidx: int, endidx: int) -> List[str]:
+ """Supports the completion of sub-commands for commands through the cmd2 help command."""
+ for idx, token in enumerate(tokens):
+ if idx >= self._token_start_index:
+ if self._positional_completers:
+ # For now argparse only allows 1 sub-command group per level
+ # so this will only loop once.
+ for completers in self._positional_completers.values():
+ if token in completers:
+ return completers[token].complete_command_help(tokens, text, line, begidx, endidx)
+ else:
+ return self.basic_complete(text, line, begidx, endidx, completers.keys())
+ return []
+
+
@staticmethod
def _process_action_nargs(action: argparse.Action, arg_state: _ArgumentState) -> None:
if isinstance(action, _RangeAction):
@@ -467,6 +490,7 @@ class AutoCompleter(object):
def _resolve_choices_for_arg(self, action: argparse.Action, used_values=()) -> List[str]:
if action.dest in self._arg_choices:
args = self._arg_choices[action.dest]
+
if callable(args):
args = args()
diff --git a/cmd2/cmd2.py b/cmd2/cmd2.py
index 60d1dbf8..288a506b 100755
--- a/cmd2/cmd2.py
+++ b/cmd2/cmd2.py
@@ -51,6 +51,7 @@ import pyperclip
# Set up readline
from .rl_utils import rl_force_redisplay, readline, rl_type, RlType
+from .argparse_completer import AutoCompleter, ACArgumentParser
if rl_type == RlType.PYREADLINE:
@@ -266,23 +267,8 @@ def with_argparser_and_unknown_args(argparser: argparse.ArgumentParser) -> Calla
cmd_wrapper.__doc__ = argparser.format_help()
- # Mark this function as having an argparse ArgumentParser (used by do_help)
- cmd_wrapper.__dict__['has_parser'] = True
-
- # If there are subcommands, store their names in a list to support tab-completion of subcommand names
- if argparser._subparsers is not None:
- # Key is subcommand name and value is completer function
- subcommands = collections.OrderedDict()
-
- # Get all subcommands and check if they have completer functions
- for name, parser in argparser._subparsers._group_actions[0]._name_parser_map.items():
- if 'completer' in parser._defaults:
- completer = parser._defaults['completer']
- else:
- completer = None
- subcommands[name] = completer
-
- cmd_wrapper.__dict__['subcommands'] = subcommands
+ # Mark this function as having an argparse ArgumentParser
+ setattr(cmd_wrapper, 'argparser', argparser)
return cmd_wrapper
@@ -318,24 +304,8 @@ def with_argparser(argparser: argparse.ArgumentParser) -> Callable:
cmd_wrapper.__doc__ = argparser.format_help()
- # Mark this function as having an argparse ArgumentParser (used by do_help)
- cmd_wrapper.__dict__['has_parser'] = True
-
- # If there are subcommands, store their names in a list to support tab-completion of subcommand names
- if argparser._subparsers is not None:
-
- # Key is subcommand name and value is completer function
- subcommands = collections.OrderedDict()
-
- # Get all subcommands and check if they have completer functions
- for name, parser in argparser._subparsers._group_actions[0]._name_parser_map.items():
- if 'completer' in parser._defaults:
- completer = parser._defaults['completer']
- else:
- completer = None
- subcommands[name] = completer
-
- cmd_wrapper.__dict__['subcommands'] = subcommands
+ # Mark this function as having an argparse ArgumentParser
+ setattr(cmd_wrapper, 'argparser', argparser)
return cmd_wrapper
@@ -1020,49 +990,6 @@ class Cmd(cmd.Cmd):
return self._colorcodes[color][True] + val + self._colorcodes[color][False]
return val
- def get_subcommands(self, command):
- """
- Returns a list of a command's subcommand names if they exist
- :param command: the command we are querying
- :return: A subcommand list or None
- """
-
- subcommand_names = None
-
- # Check if is a valid command
- funcname = self._func_named(command)
-
- if funcname:
- # Check to see if this function was decorated with an argparse ArgumentParser
- func = getattr(self, funcname)
- subcommands = func.__dict__.get('subcommands', None)
- if subcommands is not None:
- subcommand_names = subcommands.keys()
-
- return subcommand_names
-
- def get_subcommand_completer(self, command, subcommand):
- """
- Returns a subcommand's tab completion function if one exists
- :param command: command which owns the subcommand
- :param subcommand: the subcommand we are querying
- :return: A completer or None
- """
-
- completer = None
-
- # Check if is a valid command
- funcname = self._func_named(command)
-
- if funcname:
- # Check to see if this function was decorated with an argparse ArgumentParser
- func = getattr(self, funcname)
- subcommands = func.__dict__.get('subcommands', None)
- if subcommands is not None:
- completer = subcommands[subcommand]
-
- return completer
-
# ----- Methods related to tab completion -----
def set_completion_defaults(self):
@@ -1794,16 +1721,14 @@ class Cmd(cmd.Cmd):
try:
compfunc = getattr(self, 'complete_' + command)
except AttributeError:
- compfunc = self.completedefault
-
- subcommands = self.get_subcommands(command)
- if subcommands is not None:
- # Since there are subcommands, then try completing those if the cursor is in
- # the token at index 1, otherwise default to using compfunc
- index_dict = {1: subcommands}
- compfunc = functools.partial(self.index_based_complete,
- index_dict=index_dict,
- all_else=compfunc)
+ # There's no completer function, next see if the command uses argparser
+ try:
+ cmd_func = getattr(self, 'do_' + command)
+ argparser = getattr(cmd_func, 'argparser')
+ # Command uses argparser, switch to the default argparse completer
+ compfunc = functools.partial(self._autocomplete_default, argparser=argparser)
+ except AttributeError:
+ compfunc = self.completedefault
# A valid command was not entered
else:
@@ -1910,6 +1835,16 @@ class Cmd(cmd.Cmd):
except IndexError:
return None
+ def _autocomplete_default(self, text: str, line: str, begidx: int, endidx: int,
+ argparser: argparse.ArgumentParser) -> List[str]:
+ """Default completion function for argparse commands."""
+ completer = AutoCompleter(argparser)
+
+ tokens, _ = self.tokens_for_completion(line, begidx, endidx)
+ results = completer.complete_command(tokens, text, line, begidx, endidx)
+
+ return results
+
def get_all_commands(self):
"""
Returns a list of all commands
@@ -1964,12 +1899,15 @@ class Cmd(cmd.Cmd):
strs_to_match = list(topics | visible_commands)
matches = self.basic_complete(text, line, begidx, endidx, strs_to_match)
- # Check if we are completing a subcommand
- elif index == subcmd_index:
-
- # Match subcommands if any exist
- command = tokens[cmd_index]
- matches = self.basic_complete(text, line, begidx, endidx, self.get_subcommands(command))
+ # check if the command uses argparser
+ elif index >= subcmd_index:
+ try:
+ cmd_func = getattr(self, 'do_' + tokens[cmd_index])
+ parser = getattr(cmd_func, 'argparser')
+ completer = AutoCompleter(parser)
+ matches = completer.complete_command_help(tokens[1:], text, line, begidx, endidx)
+ except AttributeError:
+ pass
return matches
@@ -2620,7 +2558,7 @@ Usage: Usage: unalias [-a] name [name ...]
if funcname:
# Check to see if this function was decorated with an argparse ArgumentParser
func = getattr(self, funcname)
- if func.__dict__.get('has_parser', False):
+ if hasattr(func, 'argparser'):
# Function has an argparser, so get help based on all the arguments in case there are sub-commands
new_arglist = arglist[1:]
new_arglist.append('-h')
@@ -2843,10 +2781,10 @@ Usage: Usage: unalias [-a] name [name ...]
else:
raise LookupError("Parameter '%s' not supported (type 'show' for list of parameters)." % param)
- set_parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
+ set_parser = ACArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
set_parser.add_argument('-a', '--all', action='store_true', help='display read-only settings as well')
set_parser.add_argument('-l', '--long', action='store_true', help='describe function of parameter')
- set_parser.add_argument('settable', nargs='*', help='[param_name] [value]')
+ set_parser.add_argument('settable', nargs=(0,2), help='[param_name] [value]')
@with_argparser(set_parser)
def do_set(self, args):
@@ -2927,87 +2865,6 @@ Usage: Usage: unalias [-a] name [name ...]
index_dict = {1: self.shell_cmd_complete}
return self.index_based_complete(text, line, begidx, endidx, index_dict, self.path_complete)
- def cmd_with_subs_completer(self, text, line, begidx, endidx):
- """
- This is a function provided for convenience to those who want an easy way to add
- tab completion to functions that implement subcommands. By setting this as the
- completer of the base command function, the correct completer for the chosen subcommand
- will be called.
-
- The use of this function requires assigning a completer function to the subcommand's parser
- Example:
- A command called print has a subcommands called 'names' that needs a tab completer
- When you create the parser for names, include the completer function in the parser's defaults.
-
- names_parser.set_defaults(func=print_names, completer=complete_print_names)
-
- To make sure the names completer gets called, set the completer for the print function
- in a similar fashion to what follows.
-
- complete_print = cmd2.Cmd.cmd_with_subs_completer
-
- When the subcommand's completer is called, this function will have stripped off all content from the
- beginning of the command line before the subcommand, meaning the line parameter always starts with the
- subcommand name and the index parameters reflect this change.
-
- For instance, the command "print names -d 2" becomes "names -d 2"
- begidx and endidx are incremented accordingly
-
- :param text: str - the string prefix we are attempting to match (all returned matches must begin with it)
- :param line: str - the current input line with leading whitespace removed
- :param begidx: int - the beginning index of the prefix text
- :param endidx: int - the ending index of the prefix text
- :return: List[str] - a list of possible tab completions
- """
- # The command is the token at index 0 in the command line
- cmd_index = 0
-
- # The subcommand is the token at index 1 in the command line
- subcmd_index = 1
-
- # Get all tokens through the one being completed
- tokens, _ = self.tokens_for_completion(line, begidx, endidx)
- if tokens is None:
- return []
-
- matches = []
-
- # Get the index of the token being completed
- index = len(tokens) - 1
-
- # If the token being completed is past the subcommand name, then do subcommand specific tab-completion
- if index > subcmd_index:
-
- # Get the command name
- command = tokens[cmd_index]
-
- # Get the subcommand name
- subcommand = tokens[subcmd_index]
-
- # Find the offset into line where the subcommand name begins
- subcmd_start = 0
- for cur_index in range(0, subcmd_index + 1):
- cur_token = tokens[cur_index]
- subcmd_start = line.find(cur_token, subcmd_start)
-
- if cur_index != subcmd_index:
- subcmd_start += len(cur_token)
-
- # Strip off everything before subcommand name
- orig_line = line
- line = line[subcmd_start:]
-
- # Update the indexes
- diff = len(orig_line) - len(line)
- begidx -= diff
- endidx -= diff
-
- # Call the subcommand specific completer if it exists
- compfunc = self.get_subcommand_completer(command, subcommand)
- if compfunc is not None:
- matches = compfunc(self, text, line, begidx, endidx)
-
- return matches
# noinspection PyBroadException
def do_py(self, arg):
diff --git a/docs/argument_processing.rst b/docs/argument_processing.rst
index 183dde4e..ecf59504 100644
--- a/docs/argument_processing.rst
+++ b/docs/argument_processing.rst
@@ -346,12 +346,10 @@ Sub-commands
Sub-commands are supported for commands using either the ``@with_argparser`` or
``@with_argparser_and_unknown_args`` decorator. The syntax for supporting them is based on argparse sub-parsers.
-Also, a convenience function called ``cmd_with_subs_completer`` is available to easily add tab completion to functions
-that implement subcommands. By setting this as the completer of the base command function, the correct completer for
-the chosen subcommand will be called.
+You may add multiple layers of sub-commands for your command. Cmd2 will automatically traverse and tab-complete
+sub-commands for all commands using argparse.
-See the subcommands_ example to learn more about how to use sub-commands in your ``cmd2`` application.
-This example also demonstrates usage of ``cmd_with_subs_completer``. In addition, the docstring for
-``cmd_with_subs_completer`` offers more details.
+See the subcommands_ and tab_autocompletion_ examples to learn more about how to use sub-commands in your ``cmd2`` application.
.. _subcommands: https://github.com/python-cmd2/cmd2/blob/master/examples/subcommands.py
+.. _tab_autocompletion: https://github.com/python-cmd2/cmd2/blob/master/examples/tab_autocompletion.py
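The pattern this patch introduces in the examples below — tagging an argparse action with its completion choices via `setattr` — can be sketched in isolation. The helper `complete_arg` and the attribute name `'arg_choices'` mirror the example diffs but are illustrative only, not cmd2's actual API:

```python
import argparse

# Choices to complete against, as in the subcommands example.
sport_item_strs = ['Bat', 'Basket', 'Basketball', 'Football', 'Space Ball']

parser = argparse.ArgumentParser(prog='base')
# add_argument() returns the Action object, so it can be tagged directly.
sport_arg = parser.add_argument('sport', help='Enter name of a sport')
setattr(sport_arg, 'arg_choices', sport_item_strs)

def complete_arg(action, text):
    """Return the tagged choices that start with the text being completed."""
    choices = getattr(action, 'arg_choices', [])
    if callable(choices):   # a callable (e.g. query_actors) supplies choices lazily
        choices = choices()
    return [c for c in choices if c.startswith(text)]

print(complete_arg(sport_arg, 'Bas'))  # -> ['Basket', 'Basketball']
```

Because the tag can be either a collection or a callable, static lists and query functions (like `query_actors` in the tab_autocompletion example) plug into the same lookup.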
diff --git a/examples/subcommands.py b/examples/subcommands.py
index 031b17b2..75c0733e 100755
--- a/examples/subcommands.py
+++ b/examples/subcommands.py
@@ -35,12 +35,6 @@ class SubcommandsExample(cmd2.Cmd):
"""sport subcommand of base command"""
self.poutput('Sport is {}'.format(args.sport))
- # noinspection PyUnusedLocal
- def complete_base_sport(self, text, line, begidx, endidx):
- """ Adds tab completion to base sport subcommand """
- index_dict = {1: sport_item_strs}
- return self.index_based_complete(text, line, begidx, endidx, index_dict)
-
# create the top-level parser for the base command
base_parser = argparse.ArgumentParser(prog='base')
base_subparsers = base_parser.add_subparsers(title='subcommands', help='subcommand help')
@@ -53,15 +47,22 @@ class SubcommandsExample(cmd2.Cmd):
# create the parser for the "bar" subcommand
parser_bar = base_subparsers.add_parser('bar', help='bar help')
- parser_bar.add_argument('z', help='string')
parser_bar.set_defaults(func=base_bar)
+ bar_subparsers = parser_bar.add_subparsers(title='layer3', help='help for 3rd layer of commands')
+ parser_bar.add_argument('z', help='string')
+
+ bar_subparsers.add_parser('apple', help='apple help')
+ bar_subparsers.add_parser('artichoke', help='artichoke help')
+ bar_subparsers.add_parser('cranberries', help='cranberries help')
+
# create the parser for the "sport" subcommand
parser_sport = base_subparsers.add_parser('sport', help='sport help')
- parser_sport.add_argument('sport', help='Enter name of a sport')
+ sport_arg = parser_sport.add_argument('sport', help='Enter name of a sport')
+ setattr(sport_arg, 'arg_choices', sport_item_strs)
# Set both a function and tab completer for the "sport" subcommand
- parser_sport.set_defaults(func=base_sport, completer=complete_base_sport)
+ parser_sport.set_defaults(func=base_sport)
@with_argparser(base_parser)
def do_base(self, args):
@@ -74,9 +75,6 @@ class SubcommandsExample(cmd2.Cmd):
# No subcommand was provided, so call help
self.do_help('base')
- # Enable tab completion of base to make sure the subcommands' completers get called.
- complete_base = cmd2.Cmd.cmd_with_subs_completer
-
if __name__ == '__main__':
app = SubcommandsExample()
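The updated examples rely on cmd2's `ACArgumentParser`, which accepts nargs ranges such as `nargs=(1, 2)` that standard argparse rejects. A plain-Python sketch of what satisfying such a range means (the helper name `in_narg_range` is hypothetical, not part of cmd2):

```python
def in_narg_range(count, nargs):
    """True if `count` supplied values satisfy the given nargs specification."""
    if isinstance(nargs, tuple):
        low, high = nargs        # (min, max) range form, e.g. nargs=(1, 2)
        return low <= count <= high
    return count == nargs        # plain integer nargs behaves as in argparse

print(in_narg_range(2, (1, 2)))  # -> True
print(in_narg_range(3, (1, 2)))  # -> False
```

This is why the `-d/--director` flag above can take one or two values while still being required.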
diff --git a/examples/tab_autocompletion.py b/examples/tab_autocompletion.py
index c704908f..17c8391d 100755
--- a/examples/tab_autocompletion.py
+++ b/examples/tab_autocompletion.py
@@ -13,6 +13,15 @@ from typing import List
import cmd2
from cmd2 import with_argparser, with_category, argparse_completer
+actors = ['Mark Hamill', 'Harrison Ford', 'Carrie Fisher', 'Alec Guinness', 'Peter Mayhew',
+ 'Anthony Daniels', 'Adam Driver', 'Daisy Ridley', 'John Boyega', 'Oscar Isaac',
+ 'Lupita Nyong\'o', 'Andy Serkis', 'Liam Neeson', 'Ewan McGregor', 'Natalie Portman',
+ 'Jake Lloyd', 'Hayden Christensen', 'Christopher Lee']
+
+def query_actors() -> List[str]:
+    """Simulating a function that queries and returns completion values"""
+ return actors
+
class TabCompleteExample(cmd2.Cmd):
     """ Example cmd2 application where we have a base command which has a couple of subcommands."""
@@ -27,10 +36,6 @@ class TabCompleteExample(cmd2.Cmd):
show_ratings = ['TV-Y', 'TV-Y7', 'TV-G', 'TV-PG', 'TV-14', 'TV-MA']
static_list_directors = ['J. J. Abrams', 'Irvin Kershner', 'George Lucas', 'Richard Marquand',
'Rian Johnson', 'Gareth Edwards']
- actors = ['Mark Hamill', 'Harrison Ford', 'Carrie Fisher', 'Alec Guinness', 'Peter Mayhew',
- 'Anthony Daniels', 'Adam Driver', 'Daisy Ridley', 'John Boyega', 'Oscar Isaac',
- 'Lupita Nyong\'o', 'Andy Serkis', 'Liam Neeson', 'Ewan McGregor', 'Natalie Portman',
- 'Jake Lloyd', 'Hayden Christensen', 'Christopher Lee']
USER_MOVIE_LIBRARY = ['ROGUE1', 'SW_EP04', 'SW_EP05']
MOVIE_DATABASE_IDS = ['SW_EP01', 'SW_EP02', 'SW_EP03', 'ROGUE1', 'SW_EP04',
'SW_EP05', 'SW_EP06', 'SW_EP07', 'SW_EP08', 'SW_EP09']
@@ -115,15 +120,6 @@ class TabCompleteExample(cmd2.Cmd):
if not args.type:
self.do_help('suggest')
- def complete_suggest(self, text: str, line: str, begidx: int, endidx: int) -> List[str]:
- """ Adds tab completion to media"""
- completer = argparse_completer.AutoCompleter(TabCompleteExample.suggest_parser, 1)
-
- tokens, _ = self.tokens_for_completion(line, begidx, endidx)
- results = completer.complete_command(tokens, text, line, begidx, endidx)
-
- return results
-
# If you prefer the original argparse help output but would like narg ranges, it's possible
# to enable narg ranges without the help changes using this method
@@ -143,15 +139,6 @@ class TabCompleteExample(cmd2.Cmd):
if not args.type:
self.do_help('orig_suggest')
- def complete_hybrid_suggest(self, text, line, begidx, endidx):
- """ Adds tab completion to media"""
- completer = argparse_completer.AutoCompleter(TabCompleteExample.suggest_parser_hybrid)
-
- tokens, _ = self.tokens_for_completion(line, begidx, endidx)
- results = completer.complete_command(tokens, text, line, begidx, endidx)
-
- return results
-
     # This variant demonstrates the AutoCompleter working with the original argparse.
     # Base argparse is unable to specify narg ranges. AutoCompleter will keep expecting additional arguments
     # for the -d/--duration flag until you specify a new flag or end the list with '--'
@@ -170,23 +157,98 @@ class TabCompleteExample(cmd2.Cmd):
if not args.type:
self.do_help('orig_suggest')
- def complete_orig_suggest(self, text, line, begidx, endidx) -> List[str]:
- """ Adds tab completion to media"""
- completer = argparse_completer.AutoCompleter(TabCompleteExample.suggest_parser_orig)
+ ###################################################################################
+ # The media command demonstrates a completer with multiple layers of subcommands
+ # - This example demonstrates how to tag a completion attribute on each action, enabling argument
+ # completion without implementing a complete_COMMAND function
- tokens, _ = self.tokens_for_completion(line, begidx, endidx)
- results = completer.complete_command(tokens, text, line, begidx, endidx)
+ def _do_vid_media_movies(self, args) -> None:
+ if not args.command:
+ self.do_help('media movies')
+ elif args.command == 'list':
+ for movie_id in TabCompleteExample.MOVIE_DATABASE:
+ movie = TabCompleteExample.MOVIE_DATABASE[movie_id]
+ print('{}\n-----------------------------\n{} ID: {}\nDirector: {}\nCast:\n {}\n\n'
+ .format(movie['title'], movie['rating'], movie_id,
+ ', '.join(movie['director']),
+ '\n '.join(movie['actor'])))
- return results
+ def _do_vid_media_shows(self, args) -> None:
+ if not args.command:
+ self.do_help('media shows')
+
+ elif args.command == 'list':
+ for show_id in TabCompleteExample.SHOW_DATABASE:
+ show = TabCompleteExample.SHOW_DATABASE[show_id]
+ print('{}\n-----------------------------\n{} ID: {}'
+ .format(show['title'], show['rating'], show_id))
+ for season in show['seasons']:
+ ep_list = show['seasons'][season]
+ print(' Season {}:\n {}'
+ .format(season,
+ '\n '.join(ep_list)))
+ print()
+
+ video_parser = argparse_completer.ACArgumentParser(prog='media')
+
+ video_types_subparsers = video_parser.add_subparsers(title='Media Types', dest='type')
+
+ vid_movies_parser = video_types_subparsers.add_parser('movies')
+ vid_movies_parser.set_defaults(func=_do_vid_media_movies)
+
+ vid_movies_commands_subparsers = vid_movies_parser.add_subparsers(title='Commands', dest='command')
+
+ vid_movies_list_parser = vid_movies_commands_subparsers.add_parser('list')
+
+ vid_movies_list_parser.add_argument('-t', '--title', help='Title Filter')
+ vid_movies_list_parser.add_argument('-r', '--rating', help='Rating Filter', nargs='+',
+ choices=ratings_types)
+ # save a reference to the action object
+ director_action = vid_movies_list_parser.add_argument('-d', '--director', help='Director Filter')
+ actor_action = vid_movies_list_parser.add_argument('-a', '--actor', help='Actor Filter', action='append')
+
+ # tag the action objects with completion providers. This can be a collection or a callable
+ setattr(director_action, argparse_completer.ACTION_ARG_CHOICES, static_list_directors)
+ setattr(actor_action, argparse_completer.ACTION_ARG_CHOICES, query_actors)
+
+ vid_movies_add_parser = vid_movies_commands_subparsers.add_parser('add')
+ vid_movies_add_parser.add_argument('title', help='Movie Title')
+ vid_movies_add_parser.add_argument('rating', help='Movie Rating', choices=ratings_types)
+
+ # save a reference to the action object
+ director_action = vid_movies_add_parser.add_argument('-d', '--director', help='Director', nargs=(1, 2),
+ required=True)
+ actor_action = vid_movies_add_parser.add_argument('actor', help='Actors', nargs='*')
+
+ # tag the action objects with completion providers. This can be a collection or a callable
+ setattr(director_action, argparse_completer.ACTION_ARG_CHOICES, static_list_directors)
+ setattr(actor_action, argparse_completer.ACTION_ARG_CHOICES, query_actors)
+
+ vid_movies_delete_parser = vid_movies_commands_subparsers.add_parser('delete')
+
+ vid_shows_parser = video_types_subparsers.add_parser('shows')
+ vid_shows_parser.set_defaults(func=_do_vid_media_shows)
+
+ vid_shows_commands_subparsers = vid_shows_parser.add_subparsers(title='Commands', dest='command')
+
+ vid_shows_list_parser = vid_shows_commands_subparsers.add_parser('list')
+
+ @with_category(CAT_AUTOCOMPLETE)
+ @with_argparser(video_parser)
+ def do_video(self, args):
+ """Video management command demonstrates multiple layers of subcommands being handled by AutoCompleter"""
+ func = getattr(args, 'func', None)
+ if func is not None:
+ # Call whatever subcommand function was selected
+ func(self, args)
+ else:
+ # No subcommand was provided, so call help
+ self.do_help('video')
###################################################################################
# The media command demonstrates a completer with multiple layers of subcommands
# - This example uses a flat completion lookup dictionary
- def query_actors(self) -> List[str]:
- """Simulating a function that queries and returns a completion values"""
- return TabCompleteExample.actors
-
def _do_media_movies(self, args) -> None:
if not args.command:
self.do_help('media movies')
@@ -264,7 +326,7 @@ class TabCompleteExample(cmd2.Cmd):
# name collisions.
def complete_media(self, text, line, begidx, endidx):
""" Adds tab completion to media"""
- choices = {'actor': self.query_actors, # function
+ choices = {'actor': query_actors, # function
'director': TabCompleteExample.static_list_directors # static list
}
completer = argparse_completer.AutoCompleter(TabCompleteExample.media_parser, arg_choices=choices)
| argparse tab autocompleter
I'm working on a tab completion implementation that automatically generates completions based on an argparse definition. Creating this issue to enable discussions. | python-cmd2/cmd2 | diff --git a/tests/test_autocompletion.py b/tests/test_autocompletion.py
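A rough sketch of the idea behind this issue — walking an argparse definition to discover completion candidates — might look like the following. It pokes at argparse's private `_actions` list and is illustrative only, not cmd2's AutoCompleter:

```python
import argparse

# Build a parser shaped like the media/video examples in the patch.
parser = argparse.ArgumentParser(prog='media')
subs = parser.add_subparsers(dest='type')
subs.add_parser('movies')
subs.add_parser('shows')

def subcommand_names(p):
    """Collect subcommand names by inspecting the parser's registered actions.

    Relies on the private `_actions` attribute, so this is a sketch, not a
    stable API.
    """
    names = []
    for action in p._actions:  # _actions holds every action added to the parser
        if isinstance(action, argparse._SubParsersAction):
            names.extend(action.choices.keys())  # choices maps name -> subparser
    return names

print(subcommand_names(parser))  # -> ['movies', 'shows']
```

A completer generated this way can recurse into each subparser, which is how multiple layers of sub-commands become tab-completable without per-command `complete_*` functions.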
index e68bc104..1d0c9678 100644
--- a/tests/test_autocompletion.py
+++ b/tests/test_autocompletion.py
@@ -213,6 +213,27 @@ def test_autocomp_subcmd_flag_comp_list(cmd2_app):
assert first_match is not None and first_match == '"Gareth Edwards'
+def test_autocomp_subcmd_flag_comp_func_attr(cmd2_app):
+ text = 'A'
+ line = 'video movies list -a "{}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, cmd2_app)
+ assert first_match is not None and \
+ cmd2_app.completion_matches == ['Adam Driver', 'Alec Guinness', 'Andy Serkis', 'Anthony Daniels']
+
+
+def test_autocomp_subcmd_flag_comp_list_attr(cmd2_app):
+ text = 'G'
+ line = 'video movies list -d {}'.format(text)
+ endidx = len(line)
+ begidx = endidx - len(text)
+
+ first_match = complete_tester(text, line, begidx, endidx, cmd2_app)
+ assert first_match is not None and first_match == '"Gareth Edwards'
+
+
def test_autcomp_pos_consumed(cmd2_app):
text = ''
line = 'library movie add SW_EP01 {}'.format(text)
@@ -254,3 +275,5 @@ def test_autcomp_custom_func_list_and_dict_arg(cmd2_app):
first_match = complete_tester(text, line, begidx, endidx, cmd2_app)
assert first_match is not None and \
cmd2_app.completion_matches == ['S01E02', 'S01E03', 'S02E01', 'S02E03']
+
+
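The new tests all compute `begidx`/`endidx` the same way before calling `complete_tester`: the completion text sits at the end of the input line. That arithmetic can be factored out (the helper name `completion_indexes` is hypothetical):

```python
def completion_indexes(line, text):
    """Return (begidx, endidx) for completion text at the end of the line.

    endidx is the length of the full line; begidx is where the text being
    completed starts, matching readline's conventions used by the tests.
    """
    endidx = len(line)
    begidx = endidx - len(text)
    return begidx, endidx

line = 'video movies list -d G'
text = 'G'
print(completion_indexes(line, text))  # -> (21, 22)
```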
diff --git a/tests/test_cmd2.py b/tests/test_cmd2.py
index 35ef4c0f..48f50bdc 100644
--- a/tests/test_cmd2.py
+++ b/tests/test_cmd2.py
@@ -68,8 +68,12 @@ def test_base_argparse_help(base_app, capsys):
def test_base_invalid_option(base_app, capsys):
run_cmd(base_app, 'set -z')
out, err = capsys.readouterr()
- expected = ['usage: set [-h] [-a] [-l] [settable [settable ...]]', 'set: error: unrecognized arguments: -z']
- assert normalize(str(err)) == expected
+ out = normalize(out)
+ err = normalize(err)
+ assert len(err) == 3
+ assert len(out) == 15
+ assert 'Error: unrecognized arguments: -z' in err[0]
+ assert out[0] == 'usage: set [-h] [-a] [-l] [settable [settable ...]]'
def test_base_shortcuts(base_app):
out = run_cmd(base_app, 'shortcuts')
diff --git a/tests/test_completion.py b/tests/test_completion.py
index a01d1166..cf45f281 100644
--- a/tests/test_completion.py
+++ b/tests/test_completion.py
@@ -14,7 +14,8 @@ import sys
import cmd2
import pytest
-from .conftest import complete_tester
+from .conftest import complete_tester, StdOut
+from examples.subcommands import SubcommandsExample
# List of strings used with completion functions
food_item_strs = ['Pizza', 'Ham', 'Ham Sandwich', 'Potato']
@@ -726,76 +727,13 @@ def test_add_opening_quote_delimited_space_in_prefix(cmd2_app):
os.path.commonprefix(cmd2_app.completion_matches) == expected_common_prefix and \
cmd2_app.display_matches == expected_display
-class SubcommandsExample(cmd2.Cmd):
- """
- Example cmd2 application where we a base command which has a couple subcommands
- and the "sport" subcommand has tab completion enabled.
- """
-
- def __init__(self):
- cmd2.Cmd.__init__(self)
-
- # subcommand functions for the base command
- def base_foo(self, args):
- """foo subcommand of base command"""
- self.poutput(args.x * args.y)
-
- def base_bar(self, args):
- """bar subcommand of base command"""
- self.poutput('((%s))' % args.z)
-
- def base_sport(self, args):
- """sport subcommand of base command"""
- self.poutput('Sport is {}'.format(args.sport))
-
- # noinspection PyUnusedLocal
- def complete_base_sport(self, text, line, begidx, endidx):
- """ Adds tab completion to base sport subcommand """
- index_dict = {1: sport_item_strs}
- return self.index_based_complete(text, line, begidx, endidx, index_dict)
-
- # create the top-level parser for the base command
- base_parser = argparse.ArgumentParser(prog='base')
- base_subparsers = base_parser.add_subparsers(title='subcommands', help='subcommand help')
-
- # create the parser for the "foo" subcommand
- parser_foo = base_subparsers.add_parser('foo', help='foo help')
- parser_foo.add_argument('-x', type=int, default=1, help='integer')
- parser_foo.add_argument('y', type=float, help='float')
- parser_foo.set_defaults(func=base_foo)
-
- # create the parser for the "bar" subcommand
- parser_bar = base_subparsers.add_parser('bar', help='bar help')
- parser_bar.add_argument('z', help='string')
- parser_bar.set_defaults(func=base_bar)
-
- # create the parser for the "sport" subcommand
- parser_sport = base_subparsers.add_parser('sport', help='sport help')
- parser_sport.add_argument('sport', help='Enter name of a sport')
-
- # Set both a function and tab completer for the "sport" subcommand
- parser_sport.set_defaults(func=base_sport, completer=complete_base_sport)
-
- @cmd2.with_argparser(base_parser)
- def do_base(self, args):
- """Base command help"""
- func = getattr(args, 'func', None)
- if func is not None:
- # Call whatever subcommand function was selected
- func(self, args)
- else:
- # No subcommand was provided, so call help
- self.do_help('base')
-
- # Enable tab completion of base to make sure the subcommands' completers get called.
- complete_base = cmd2.Cmd.cmd_with_subs_completer
-
@pytest.fixture
def sc_app():
- app = SubcommandsExample()
- return app
+ c = SubcommandsExample()
+ c.stdout = StdOut()
+ return c
def test_cmd2_subcommand_completion_single_end(sc_app):
text = 'f'
@@ -913,12 +851,6 @@ class SubcommandsWithUnknownExample(cmd2.Cmd):
"""sport subcommand of base command"""
self.poutput('Sport is {}'.format(args.sport))
- # noinspection PyUnusedLocal
- def complete_base_sport(self, text, line, begidx, endidx):
- """ Adds tab completion to base sport subcommand """
- index_dict = {1: sport_item_strs}
- return self.index_based_complete(text, line, begidx, endidx, index_dict)
-
# create the top-level parser for the base command
base_parser = argparse.ArgumentParser(prog='base')
base_subparsers = base_parser.add_subparsers(title='subcommands', help='subcommand help')
@@ -936,10 +868,8 @@ class SubcommandsWithUnknownExample(cmd2.Cmd):
# create the parser for the "sport" subcommand
parser_sport = base_subparsers.add_parser('sport', help='sport help')
- parser_sport.add_argument('sport', help='Enter name of a sport')
-
- # Set both a function and tab completer for the "sport" subcommand
- parser_sport.set_defaults(func=base_sport, completer=complete_base_sport)
+ sport_arg = parser_sport.add_argument('sport', help='Enter name of a sport')
+ setattr(sport_arg, 'arg_choices', sport_item_strs)
@cmd2.with_argparser_and_unknown_args(base_parser)
def do_base(self, args):
@@ -952,9 +882,6 @@ class SubcommandsWithUnknownExample(cmd2.Cmd):
# No subcommand was provided, so call help
self.do_help('base')
- # Enable tab completion of base to make sure the subcommands' completers get called.
- complete_base = cmd2.Cmd.cmd_with_subs_completer
-
@pytest.fixture
def scu_app():
@@ -971,6 +898,8 @@ def test_cmd2_subcmd_with_unknown_completion_single_end(scu_app):
first_match = complete_tester(text, line, begidx, endidx, scu_app)
+ print('first_match: {}'.format(first_match))
+
# It is at end of line, so extra space is present
assert first_match is not None and scu_app.completion_matches == ['foo ']
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 3,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 6
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-mock"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
-e git+https://github.com/python-cmd2/cmd2.git@a0a46f9396a72f440f65e46d7170a0d366796574#egg=cmd2
colorama==0.4.5
coverage==6.2
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pyperclip==1.9.0
pytest==6.2.4
pytest-cov==4.0.0
pytest-mock==3.6.1
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
wcwidth==0.2.13
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: cmd2
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- colorama==0.4.5
- coverage==6.2
- pyperclip==1.9.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- tomli==1.2.3
- wcwidth==0.2.13
prefix: /opt/conda/envs/cmd2
| [
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_comp_func_attr",
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_comp_list_attr",
"tests/test_cmd2.py::test_base_invalid_option",
"tests/test_completion.py::test_subcommand_tab_completion",
"tests/test_completion.py::test_subcommand_tab_completion_space_in_text"
]
| [
"tests/test_cmd2.py::test_output_redirection",
"tests/test_cmd2.py::test_interrupt_quit",
"tests/test_cmd2.py::test_interrupt_noquit",
"tests/test_cmd2.py::test_which_editor_good"
]
| [
"tests/test_autocompletion.py::test_help_required_group",
"tests/test_autocompletion.py::test_help_required_group_long",
"tests/test_autocompletion.py::test_autocomp_flags",
"tests/test_autocompletion.py::test_autcomp_hint",
"tests/test_autocompletion.py::test_autcomp_flag_comp",
"tests/test_autocompletion.py::test_autocomp_flags_choices",
"tests/test_autocompletion.py::test_autcomp_hint_in_narg_range",
"tests/test_autocompletion.py::test_autocomp_flags_narg_max",
"tests/test_autocompletion.py::test_autcomp_narg_beyond_max",
"tests/test_autocompletion.py::test_autocomp_subcmd_nested",
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_choices_append",
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_choices_append_exclude",
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_comp_func",
"tests/test_autocompletion.py::test_autocomp_subcmd_flag_comp_list",
"tests/test_autocompletion.py::test_autcomp_pos_consumed",
"tests/test_autocompletion.py::test_autcomp_pos_after_flag",
"tests/test_autocompletion.py::test_autcomp_custom_func_list_arg",
"tests/test_autocompletion.py::test_autcomp_custom_func_list_and_dict_arg",
"tests/test_cmd2.py::test_ver",
"tests/test_cmd2.py::test_empty_statement",
"tests/test_cmd2.py::test_base_help",
"tests/test_cmd2.py::test_base_help_verbose",
"tests/test_cmd2.py::test_base_help_history",
"tests/test_cmd2.py::test_base_argparse_help",
"tests/test_cmd2.py::test_base_shortcuts",
"tests/test_cmd2.py::test_base_show",
"tests/test_cmd2.py::test_base_show_long",
"tests/test_cmd2.py::test_base_show_readonly",
"tests/test_cmd2.py::test_base_set",
"tests/test_cmd2.py::test_set_not_supported",
"tests/test_cmd2.py::test_set_quiet",
"tests/test_cmd2.py::test_base_shell",
"tests/test_cmd2.py::test_base_py",
"tests/test_cmd2.py::test_base_run_python_script",
"tests/test_cmd2.py::test_base_run_pyscript",
"tests/test_cmd2.py::test_recursive_pyscript_not_allowed",
"tests/test_cmd2.py::test_pyscript_with_nonexist_file",
"tests/test_cmd2.py::test_pyscript_with_exception",
"tests/test_cmd2.py::test_pyscript_requires_an_argument",
"tests/test_cmd2.py::test_base_error",
"tests/test_cmd2.py::test_base_history",
"tests/test_cmd2.py::test_history_script_format",
"tests/test_cmd2.py::test_history_with_string_argument",
"tests/test_cmd2.py::test_history_with_integer_argument",
"tests/test_cmd2.py::test_history_with_integer_span",
"tests/test_cmd2.py::test_history_with_span_start",
"tests/test_cmd2.py::test_history_with_span_end",
"tests/test_cmd2.py::test_history_with_span_index_error",
"tests/test_cmd2.py::test_history_output_file",
"tests/test_cmd2.py::test_history_edit",
"tests/test_cmd2.py::test_history_run_all_commands",
"tests/test_cmd2.py::test_history_run_one_command",
"tests/test_cmd2.py::test_base_load",
"tests/test_cmd2.py::test_load_with_empty_args",
"tests/test_cmd2.py::test_load_with_nonexistent_file",
"tests/test_cmd2.py::test_load_with_empty_file",
"tests/test_cmd2.py::test_load_with_binary_file",
"tests/test_cmd2.py::test_load_with_utf8_file",
"tests/test_cmd2.py::test_load_nested_loads",
"tests/test_cmd2.py::test_base_runcmds_plus_hooks",
"tests/test_cmd2.py::test_base_relative_load",
"tests/test_cmd2.py::test_relative_load_requires_an_argument",
"tests/test_cmd2.py::test_feedback_to_output_true",
"tests/test_cmd2.py::test_feedback_to_output_false",
"tests/test_cmd2.py::test_allow_redirection",
"tests/test_cmd2.py::test_input_redirection",
"tests/test_cmd2.py::test_pipe_to_shell",
"tests/test_cmd2.py::test_pipe_to_shell_error",
"tests/test_cmd2.py::test_base_timing",
"tests/test_cmd2.py::test_base_debug",
"tests/test_cmd2.py::test_base_colorize",
"tests/test_cmd2.py::test_edit_no_editor",
"tests/test_cmd2.py::test_edit_file",
"tests/test_cmd2.py::test_edit_file_with_spaces",
"tests/test_cmd2.py::test_edit_blank",
"tests/test_cmd2.py::test_base_py_interactive",
"tests/test_cmd2.py::test_exclude_from_history",
"tests/test_cmd2.py::test_base_cmdloop_with_queue",
"tests/test_cmd2.py::test_base_cmdloop_without_queue",
"tests/test_cmd2.py::test_cmdloop_without_rawinput",
"tests/test_cmd2.py::test_precmd_hook_success",
"tests/test_cmd2.py::test_precmd_hook_failure",
"tests/test_cmd2.py::test_default_to_shell_unknown",
"tests/test_cmd2.py::test_default_to_shell_good",
"tests/test_cmd2.py::test_default_to_shell_failure",
"tests/test_cmd2.py::test_ansi_prompt_not_esacped",
"tests/test_cmd2.py::test_ansi_prompt_escaped",
"tests/test_cmd2.py::test_custom_command_help",
"tests/test_cmd2.py::test_custom_help_menu",
"tests/test_cmd2.py::test_help_undocumented",
"tests/test_cmd2.py::test_help_overridden_method",
"tests/test_cmd2.py::test_help_cat_base",
"tests/test_cmd2.py::test_help_cat_verbose",
"tests/test_cmd2.py::test_select_options",
"tests/test_cmd2.py::test_select_invalid_option",
"tests/test_cmd2.py::test_select_list_of_strings",
"tests/test_cmd2.py::test_select_list_of_tuples",
"tests/test_cmd2.py::test_select_uneven_list_of_tuples",
"tests/test_cmd2.py::test_help_with_no_docstring",
"tests/test_cmd2.py::test_which_editor_bad",
"tests/test_cmd2.py::test_multiline_complete_empty_statement_raises_exception",
"tests/test_cmd2.py::test_multiline_complete_statement_without_terminator",
"tests/test_cmd2.py::test_clipboard_failure",
"tests/test_cmd2.py::test_cmdresult",
"tests/test_cmd2.py::test_is_text_file_bad_input",
"tests/test_cmd2.py::test_eof",
"tests/test_cmd2.py::test_eos",
"tests/test_cmd2.py::test_echo",
"tests/test_cmd2.py::test_pseudo_raw_input_tty_rawinput_true",
"tests/test_cmd2.py::test_pseudo_raw_input_tty_rawinput_false",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_true_echo_true",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_true_echo_false",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_false_echo_true",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_false_echo_false",
"tests/test_cmd2.py::test_raw_input",
"tests/test_cmd2.py::test_stdin_input",
"tests/test_cmd2.py::test_empty_stdin_input",
"tests/test_cmd2.py::test_poutput_string",
"tests/test_cmd2.py::test_poutput_zero",
"tests/test_cmd2.py::test_poutput_empty_string",
"tests/test_cmd2.py::test_poutput_none",
"tests/test_cmd2.py::test_alias",
"tests/test_cmd2.py::test_alias_lookup_invalid_alias",
"tests/test_cmd2.py::test_alias_with_invalid_name",
"tests/test_cmd2.py::test_unalias",
"tests/test_cmd2.py::test_unalias_all",
"tests/test_cmd2.py::test_unalias_non_existing",
"tests/test_completion.py::test_complete_command_single",
"tests/test_completion.py::test_complete_empty_arg",
"tests/test_completion.py::test_complete_bogus_command",
"tests/test_completion.py::test_cmd2_command_completion_single",
"tests/test_completion.py::test_cmd2_command_completion_multiple",
"tests/test_completion.py::test_cmd2_command_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_completion_single",
"tests/test_completion.py::test_cmd2_help_completion_multiple",
"tests/test_completion.py::test_cmd2_help_completion_nomatch",
"tests/test_completion.py::test_shell_command_completion_shortcut",
"tests/test_completion.py::test_shell_command_completion_doesnt_match_wildcards",
"tests/test_completion.py::test_shell_command_completion_multiple",
"tests/test_completion.py::test_shell_command_completion_nomatch",
"tests/test_completion.py::test_shell_command_completion_doesnt_complete_when_just_shell",
"tests/test_completion.py::test_shell_command_completion_does_path_completion_when_after_command",
"tests/test_completion.py::test_path_completion_single_end",
"tests/test_completion.py::test_path_completion_multiple",
"tests/test_completion.py::test_path_completion_nomatch",
"tests/test_completion.py::test_default_to_shell_completion",
"tests/test_completion.py::test_path_completion_cwd",
"tests/test_completion.py::test_path_completion_doesnt_match_wildcards",
"tests/test_completion.py::test_path_completion_expand_user_dir",
"tests/test_completion.py::test_path_completion_user_expansion",
"tests/test_completion.py::test_path_completion_directories_only",
"tests/test_completion.py::test_basic_completion_single",
"tests/test_completion.py::test_basic_completion_multiple",
"tests/test_completion.py::test_basic_completion_nomatch",
"tests/test_completion.py::test_delimiter_completion",
"tests/test_completion.py::test_flag_based_completion_single",
"tests/test_completion.py::test_flag_based_completion_multiple",
"tests/test_completion.py::test_flag_based_completion_nomatch",
"tests/test_completion.py::test_flag_based_default_completer",
"tests/test_completion.py::test_flag_based_callable_completer",
"tests/test_completion.py::test_index_based_completion_single",
"tests/test_completion.py::test_index_based_completion_multiple",
"tests/test_completion.py::test_index_based_completion_nomatch",
"tests/test_completion.py::test_index_based_default_completer",
"tests/test_completion.py::test_index_based_callable_completer",
"tests/test_completion.py::test_tokens_for_completion_quoted",
"tests/test_completion.py::test_tokens_for_completion_unclosed_quote",
"tests/test_completion.py::test_tokens_for_completion_redirect",
"tests/test_completion.py::test_tokens_for_completion_quoted_redirect",
"tests/test_completion.py::test_tokens_for_completion_redirect_off",
"tests/test_completion.py::test_parseline_command_and_args",
"tests/test_completion.py::test_parseline_emptyline",
"tests/test_completion.py::test_parseline_strips_line",
"tests/test_completion.py::test_parseline_expands_alias",
"tests/test_completion.py::test_parseline_expands_shortcuts",
"tests/test_completion.py::test_add_opening_quote_basic_no_text",
"tests/test_completion.py::test_add_opening_quote_basic_nothing_added",
"tests/test_completion.py::test_add_opening_quote_basic_quote_added",
"tests/test_completion.py::test_add_opening_quote_basic_text_is_common_prefix",
"tests/test_completion.py::test_add_opening_quote_delimited_no_text",
"tests/test_completion.py::test_add_opening_quote_delimited_nothing_added",
"tests/test_completion.py::test_add_opening_quote_delimited_quote_added",
"tests/test_completion.py::test_add_opening_quote_delimited_text_is_common_prefix",
"tests/test_completion.py::test_add_opening_quote_delimited_space_in_prefix",
"tests/test_completion.py::test_cmd2_subcommand_completion_single_end",
"tests/test_completion.py::test_cmd2_subcommand_completion_multiple",
"tests/test_completion.py::test_cmd2_subcommand_completion_nomatch",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_single_end",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_multiple",
"tests/test_completion.py::test_cmd2_subcmd_with_unknown_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_subcommand_completion_single",
"tests/test_completion.py::test_cmd2_help_subcommand_completion_multiple",
"tests/test_completion.py::test_cmd2_help_subcommand_completion_nomatch",
"tests/test_completion.py::test_subcommand_tab_completion_with_no_completer",
"tests/test_completion.py::test_cmd2_submenu_completion_single_end",
"tests/test_completion.py::test_cmd2_submenu_completion_multiple",
"tests/test_completion.py::test_cmd2_submenu_completion_nomatch",
"tests/test_completion.py::test_cmd2_submenu_completion_after_submenu_match",
"tests/test_completion.py::test_cmd2_submenu_completion_after_submenu_nomatch",
"tests/test_completion.py::test_cmd2_help_submenu_completion_multiple",
"tests/test_completion.py::test_cmd2_help_submenu_completion_nomatch",
"tests/test_completion.py::test_cmd2_help_submenu_completion_subcommands"
]
| []
| MIT License | 2,433 | [
"cmd2/cmd2.py",
"docs/argument_processing.rst",
"examples/tab_autocompletion.py",
"CHANGELOG.md",
"examples/subcommands.py",
"cmd2/argparse_completer.py"
]
| [
"cmd2/cmd2.py",
"docs/argument_processing.rst",
"examples/tab_autocompletion.py",
"CHANGELOG.md",
"examples/subcommands.py",
"cmd2/argparse_completer.py"
]
|
|
daviskirk__climatecontrol-9 | 26a70d6ca739c452639fe18816bf06ad24e4c4c6 | 2018-04-22 06:31:25 | 26a70d6ca739c452639fe18816bf06ad24e4c4c6 | diff --git a/README.rst b/README.rst
index 9df1ef9..2776c15 100644
--- a/README.rst
+++ b/README.rst
@@ -200,6 +200,29 @@ save it to a file like "cli.py" and then call it after installing click:
without needing to set any env vars.
+Testing
+-------
+
+When testing your application, different behaviours often depend on settings
+taking on different values. Assuming that you are using a single `Settings`
+object across multiple functions or modules, handling these settings changes
+in tests can be tricky.
+
+The settings object provides a simple method for modifying your settings
+temporarily:
+
+.. code:: python
+
+    settings_map.update({'a': 1})
+    # Enter a temporary changes context block:
+    with settings_map.temporary_changes():
+        settings_map.update({'a': 2})
+        # Inside the context, the settings can be modified and used as you choose
+        print(settings_map['a'])  # outputs: 2
+    # After the context exits, the settings map is rolled back to its previous state
+    print(settings_map['a'])  # outputs: 1
+
+
.. |Build Status| image:: https://travis-ci.org/daviskirk/climatecontrol.svg?branch=master
:target: https://travis-ci.org/daviskirk/climatecontrol
.. |Coverage Status| image:: https://coveralls.io/repos/github/daviskirk/climatecontrol/badge.svg?branch=master
diff --git a/climatecontrol/settings_parser.py b/climatecontrol/settings_parser.py
index ab45c47..40b8bc8 100644
--- a/climatecontrol/settings_parser.py
+++ b/climatecontrol/settings_parser.py
@@ -1,6 +1,7 @@
"""Settings parser."""
from abc import ABC, abstractmethod
+from contextlib import contextmanager
import os
import json
import toml
@@ -312,6 +313,31 @@ class Settings(Mapping):
new_data[k] = parsed_v
return new_data
+    @contextmanager
+    def temporary_changes(self):
+        """Open a context where any changes to the settings are rolled back on context exit.
+
+        This context manager can be used for testing or to temporarily change
+        settings.
+
+        Example:
+            >>> from climatecontrol.settings_parser import Settings
+            >>> settings = Settings()
+            >>> settings.update({'a': 1})
+            >>> with settings.temporary_changes():
+            ...     settings.update({'a': 2})
+            ...     assert settings['a'] == 2
+            >>> assert settings['a'] == 1
+
+        """
+        archived_settings = deepcopy(self._data)
+        archived_settings_files = deepcopy(self._settings_files)
+        archived_external_data = deepcopy(self.external_data)
+        yield self
+        self._data = archived_settings
+        self._settings_files = archived_settings_files
+        self.external_data = archived_external_data
+
EnvSetting = NamedTuple('EnvSetting', [('name', str), ('value', Mapping[str, Any])])
| Function to save and restore settings (for testing)
This should allow creating a pytest fixture which undoes all changes to the Settings object after the test is finished.
Instead of a function, a context manager could be used. | daviskirk/climatecontrol | diff --git a/tests/test_settings.py b/tests/test_settings.py
index 1e6a915..a72b635 100644
--- a/tests/test_settings.py
+++ b/tests/test_settings.py
@@ -244,6 +244,26 @@ def test_filters(mock_empty_os_environ):
     assert dict(settings_map) == {'subsection1': 'test1', 'subsection2': 'test2', 'subsection3': 'test3'}
+def test_temporary_changes():
+    """Test that temporary changes settings context manager works.
+
+    Within the context, settings should be changeable. After exit, the original
+    settings should be restored.
+
+    """
+    s = settings_parser.Settings()
+    s.update({'a': 1})
+    with s.temporary_changes():
+        # Change the settings within the context
+        s.update({'a': 2, 'b': 2})
+        s.settings_files.append('test')
+        assert s['a'] == 2
+        assert len(s.settings_files) == 1
+    # Check that outside of the context the settings are back to their old state.
+    assert s['a'] == 1
+    assert len(s.settings_files) == 0
+
+
@pytest.mark.parametrize('use_method', [True, False])
@pytest.mark.parametrize('option_name', ['config', 'settings'])
@pytest.mark.parametrize('mode', ['config', 'noconfig', 'wrongfile', 'noclick'])
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 2
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-mock",
"pytest-cov",
"toml",
"pyyaml",
"click"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | click==8.1.8
-e git+https://github.com/daviskirk/climatecontrol.git@26a70d6ca739c452639fe18816bf06ad24e4c4c6#egg=climatecontrol
coverage==7.8.0
exceptiongroup==1.2.2
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
PyYAML==6.0.2
toml==0.10.2
tomli==2.2.1
typing==3.7.4.3
| name: climatecontrol
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- click==8.1.8
- coverage==7.8.0
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pyyaml==6.0.2
- toml==0.10.2
- tomli==2.2.1
- typing==3.7.4.3
prefix: /opt/conda/envs/climatecontrol
| [
"tests/test_settings.py::test_temporary_changes"
]
| [
"tests/test_settings.py::test_settings[True]",
"tests/test_settings.py::test_settings[None]",
"tests/test_settings.py::test_settings_parse",
"tests/test_settings.py::test_settings_files_and_env_file[.toml]",
"tests/test_settings.py::test_settings_files_and_env_file[.yml]",
"tests/test_settings.py::test_settings_files_and_env_file[.json]",
"tests/test_settings.py::test_settings_files_and_env_file_and_env[.toml]",
"tests/test_settings.py::test_settings_files_and_env_file_and_env[.yml]",
"tests/test_settings.py::test_settings_files_and_env_file_and_env[.json]",
"tests/test_settings.py::test_settings_parsing_order[None-expected0]",
"tests/test_settings.py::test_settings_parsing_order[order1-expected1]",
"tests/test_settings.py::test_settings_parsing_order[order2-expected2]",
"tests/test_settings.py::test_settings_parsing_order[order3-expected3]",
"tests/test_settings.py::test_settings_parsing_order[order4-expected4]",
"tests/test_settings.py::test_update[dict]",
"tests/test_settings.py::test_update[envvar]",
"tests/test_settings.py::test_update[both]",
"tests/test_settings.py::test_filters",
"tests/test_settings.py::test_cli_utils[.toml-wrongfile-config-True]",
"tests/test_settings.py::test_cli_utils[.toml-wrongfile-config-False]",
"tests/test_settings.py::test_cli_utils[.toml-wrongfile-settings-True]",
"tests/test_settings.py::test_cli_utils[.toml-wrongfile-settings-False]",
"tests/test_settings.py::test_cli_utils[.toml-noclick-config-True]",
"tests/test_settings.py::test_cli_utils[.toml-noclick-config-False]",
"tests/test_settings.py::test_cli_utils[.toml-noclick-settings-True]",
"tests/test_settings.py::test_cli_utils[.toml-noclick-settings-False]",
"tests/test_settings.py::test_cli_utils[.yml-wrongfile-config-True]",
"tests/test_settings.py::test_cli_utils[.yml-wrongfile-config-False]",
"tests/test_settings.py::test_cli_utils[.yml-wrongfile-settings-True]",
"tests/test_settings.py::test_cli_utils[.yml-wrongfile-settings-False]",
"tests/test_settings.py::test_cli_utils[.yml-noclick-config-True]",
"tests/test_settings.py::test_cli_utils[.yml-noclick-config-False]",
"tests/test_settings.py::test_cli_utils[.yml-noclick-settings-True]",
"tests/test_settings.py::test_cli_utils[.yml-noclick-settings-False]",
"tests/test_settings.py::test_cli_utils[.json-wrongfile-config-True]",
"tests/test_settings.py::test_cli_utils[.json-wrongfile-config-False]",
"tests/test_settings.py::test_cli_utils[.json-wrongfile-settings-True]",
"tests/test_settings.py::test_cli_utils[.json-wrongfile-settings-False]",
"tests/test_settings.py::test_cli_utils[.json-noclick-config-True]",
"tests/test_settings.py::test_cli_utils[.json-noclick-config-False]",
"tests/test_settings.py::test_cli_utils[.json-noclick-settings-True]",
"tests/test_settings.py::test_cli_utils[.json-noclick-settings-False]"
]
| [
"tests/test_settings.py::test_settings_empty",
"tests/test_settings.py::test_settings[False]",
"tests/test_settings.py::test_parse_from_file_vars[False-False]",
"tests/test_settings.py::test_parse_from_file_vars[False-True]",
"tests/test_settings.py::test_parse_from_file_vars[True-False]",
"tests/test_settings.py::test_parse_from_file_vars[True-True]",
"tests/test_settings.py::test_settings_files_fail[asd;kjhaflkjhasf]",
"tests/test_settings.py::test_settings_files_fail[.]",
"tests/test_settings.py::test_settings_files_fail[/home/]",
"tests/test_settings.py::test_settings_files_fail[settings_files3]",
"tests/test_settings.py::test_yaml_import_fail",
"tests/test_settings.py::test_settings_file_content[---\\na:\\n",
"tests/test_settings.py::test_settings_file_content[{\"a\":",
"tests/test_settings.py::test_settings_file_content[[a]\\nb=5]",
"tests/test_settings.py::test_settings_file_content_fail[a:\\n",
"tests/test_settings.py::test_settings_file_content_fail[[{\"a\":",
"tests/test_settings.py::test_settings_file_content_fail[b=5-SettingsFileError]",
"tests/test_settings.py::test_settings_files_file[.toml]",
"tests/test_settings.py::test_settings_files_file[.yml]",
"tests/test_settings.py::test_settings_files_file[.json]",
"tests/test_settings.py::test_settings_files_files[.toml]",
"tests/test_settings.py::test_settings_files_files[.yml]",
"tests/test_settings.py::test_settings_files_files[.json]",
"tests/test_settings.py::test_assign[settings_files-this.toml-expected0]",
"tests/test_settings.py::test_assign[settings_files-value1-expected1]",
"tests/test_settings.py::test_assign[parser-mock_parser_fcn-mock_parser_fcn]",
"tests/test_settings.py::test_cli_utils[.toml-config-config-True]",
"tests/test_settings.py::test_cli_utils[.toml-config-config-False]",
"tests/test_settings.py::test_cli_utils[.toml-config-settings-True]",
"tests/test_settings.py::test_cli_utils[.toml-config-settings-False]",
"tests/test_settings.py::test_cli_utils[.toml-noconfig-config-True]",
"tests/test_settings.py::test_cli_utils[.toml-noconfig-config-False]",
"tests/test_settings.py::test_cli_utils[.toml-noconfig-settings-True]",
"tests/test_settings.py::test_cli_utils[.toml-noconfig-settings-False]",
"tests/test_settings.py::test_cli_utils[.yml-config-config-True]",
"tests/test_settings.py::test_cli_utils[.yml-config-config-False]",
"tests/test_settings.py::test_cli_utils[.yml-config-settings-True]",
"tests/test_settings.py::test_cli_utils[.yml-config-settings-False]",
"tests/test_settings.py::test_cli_utils[.yml-noconfig-config-True]",
"tests/test_settings.py::test_cli_utils[.yml-noconfig-config-False]",
"tests/test_settings.py::test_cli_utils[.yml-noconfig-settings-True]",
"tests/test_settings.py::test_cli_utils[.yml-noconfig-settings-False]",
"tests/test_settings.py::test_cli_utils[.json-config-config-True]",
"tests/test_settings.py::test_cli_utils[.json-config-config-False]",
"tests/test_settings.py::test_cli_utils[.json-config-settings-True]",
"tests/test_settings.py::test_cli_utils[.json-config-settings-False]",
"tests/test_settings.py::test_cli_utils[.json-noconfig-config-True]",
"tests/test_settings.py::test_cli_utils[.json-noconfig-config-False]",
"tests/test_settings.py::test_cli_utils[.json-noconfig-settings-True]",
"tests/test_settings.py::test_cli_utils[.json-noconfig-settings-False]",
"tests/test_settings.py::test_get_configuration_file[.toml]",
"tests/test_settings.py::test_get_configuration_file[.yml]",
"tests/test_settings.py::test_get_configuration_file[.json]"
]
| []
| MIT License | 2,434 | [
"README.rst",
"climatecontrol/settings_parser.py"
]
| [
"README.rst",
"climatecontrol/settings_parser.py"
]
|
|
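The `temporary_changes` context manager added in the climatecontrol record above snapshots state before yielding and restores it on exit. A minimal standalone sketch of the same save-and-restore pattern (the `SimpleSettings` class is a hypothetical stand-in, and a `try`/`finally` is added here so the rollback also runs when the body raises, which the patch itself does not guarantee):

```python
from contextlib import contextmanager
from copy import deepcopy


class SimpleSettings(dict):
    """Hypothetical stand-in for a settings mapping."""

    @contextmanager
    def temporary_changes(self):
        # Snapshot the current state before handing control to the caller.
        archived = deepcopy(dict(self))
        try:
            yield self
        finally:
            # Roll back even if the body raised an exception.
            self.clear()
            self.update(archived)


settings = SimpleSettings(a=1)
with settings.temporary_changes():
    settings["a"] = 2
    assert settings["a"] == 2
assert settings["a"] == 1  # restored on exit
```

Wrapped in a pytest fixture, the same context manager gives each test a clean settings state, which is what the issue in this record asks for.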
jupyter__nbgrader-947 | 919c56a9782647a97bd03a0c9d6d0ac5633db0a3 | 2018-04-22 13:37:42 | 5bc6f37c39c8b10b8f60440b2e6d9487e63ef3f1 | diff --git a/nbgrader/converters/autograde.py b/nbgrader/converters/autograde.py
index 62327f17..57c662ab 100644
--- a/nbgrader/converters/autograde.py
+++ b/nbgrader/converters/autograde.py
@@ -2,7 +2,7 @@ import os
import shutil
from textwrap import dedent
-from traitlets import Bool, List
+from traitlets import Bool, List, Dict
from .base import BaseConverter, NbGraderException
from ..preprocessors import (
@@ -24,6 +24,19 @@ class Autograde(BaseConverter):
         )
     ).tag(config=True)
+    exclude_overwriting = Dict(
+        {},
+        help=dedent(
+            """
+            A dictionary with keys corresponding to assignment names and values
+            being a list of filenames (relative to the assignment's source
+            directory) that should NOT be overwritten with the source version.
+            This is to allow students to e.g. edit a python file and submit it
+            alongside the notebooks in their assignment.
+            """
+        )
+    ).tag(config=True)
+
     _sanitizing = True
     @property
@@ -109,7 +122,9 @@ class Autograde(BaseConverter):
         self.log.info("Overwriting files with master versions from the source directory")
         dest_path = self._format_dest(assignment_id, student_id)
         source_path = self.coursedir.format_path(self.coursedir.source_directory, '.', assignment_id)
-        source_files = utils.find_all_files(source_path, self.coursedir.ignore + ["*.ipynb"])
+        source_files = set(utils.find_all_files(source_path, self.coursedir.ignore + ["*.ipynb"]))
+        exclude_files = set([os.path.join(source_path, x) for x in self.exclude_overwriting.get(assignment_id, [])])
+        source_files = list(source_files - exclude_files)
         # copy them to the build directory
         for filename in source_files:
| Have submitted notebooks import from local directory
I had students edit a python file and then submit it along with the notebooks. However, when I run the autograder, nbgrader imports the python file from my source directory instead of the submitted one. This, of course, leads to the test cells that test their implementation always passing no matter what they do (and also makes it so that, if they added and rely on any further functionality that's not in my solution, then those blocks fail!). Is there any way to have the submitted notebooks import from the submitted .py file?
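The two-line fix in the `autograde.py` patch above removes per-assignment excluded files with a set difference before the copy loop, so a student's submitted version survives. A sketch of just that filtering step, using made-up paths (the helper name `filter_overwrite` is not part of nbgrader):

```python
import os


def filter_overwrite(source_path, source_files, exclude_overwriting, assignment_id):
    """Drop files listed under the assignment in exclude_overwriting
    (paths relative to source_path) from the files to be copied."""
    exclude = {os.path.join(source_path, p)
               for p in exclude_overwriting.get(assignment_id, [])}
    # Set difference: everything found in the source dir minus the exclusions.
    return sorted(set(source_files) - exclude)


source_path = os.path.join("source", "ps1")
files = [os.path.join(source_path, "data.csv"),
         os.path.join(source_path, "helper.py")]
kept = filter_overwrite(source_path, files, {"ps1": ["helper.py"]}, "ps1")
assert kept == [os.path.join(source_path, "data.csv")]
```

With an empty exclusion map the behaviour is unchanged, which is why existing assignments are unaffected by the new `exclude_overwriting` trait.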
| jupyter/nbgrader | diff --git a/nbgrader/tests/apps/test_nbgrader_autograde.py b/nbgrader/tests/apps/test_nbgrader_autograde.py
index 091564e7..ba44d44b 100644
--- a/nbgrader/tests/apps/test_nbgrader_autograde.py
+++ b/nbgrader/tests/apps/test_nbgrader_autograde.py
@@ -389,20 +389,24 @@ class TestNbGraderAutograde(BaseTestApp):
         """Are dependent files properly linked and overwritten?"""
         with open("nbgrader_config.py", "a") as fh:
             fh.write("""c.CourseDirectory.db_assignments = [dict(name='ps1', duedate='2015-02-02 14:58:23.948203 PST')]\n""")
-            fh.write("""c.CourseDirectory.db_students = [dict(id="foo"), dict(id="bar")]""")
+            fh.write("""c.CourseDirectory.db_students = [dict(id="foo"), dict(id="bar")]\n""")
+            fh.write("""c.Autograde.exclude_overwriting = {"ps1": ["helper.py"]}\n""")
         self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
         self._make_file(join(course_dir, "source", "ps1", "data.csv"), "some,data\n")
+        self._make_file(join(course_dir, "source", "ps1", "helper.py"), "print('hello!')\n")
         run_nbgrader(["assign", "ps1", "--db", db])
         self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
         self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), "2015-02-02 15:58:23.948203 PST")
         self._make_file(join(course_dir, "submitted", "foo", "ps1", "data.csv"), "some,other,data\n")
+        self._make_file(join(course_dir, "submitted", "foo", "ps1", "helper.py"), "print('this is different!')\n")
         run_nbgrader(["autograde", "ps1", "--db", db])
         assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "p1.ipynb"))
         assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "timestamp.txt"))
         assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "data.csv"))
+        assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "helper.py"))
         with open(join(course_dir, "autograded", "foo", "ps1", "timestamp.txt"), "r") as fh:
             contents = fh.read()
         assert contents == "2015-02-02 15:58:23.948203 PST"
@@ -412,6 +416,45 @@
             contents = fh.read()
         assert contents == "some,data\n"
+        with open(join(course_dir, "autograded", "foo", "ps1", "helper.py"), "r") as fh:
+            contents = fh.read()
+        assert contents == "print('this is different!')\n"
+    def test_grade_overwrite_files_subdirs(self, db, course_dir):
+        """Are dependent files properly linked and overwritten?"""
+        with open("nbgrader_config.py", "a") as fh:
+            fh.write("""c.CourseDirectory.db_assignments = [dict(name='ps1', duedate='2015-02-02 14:58:23.948203 PST')]\n""")
+            fh.write("""c.CourseDirectory.db_students = [dict(id="foo"), dict(id="bar")]\n""")
+            fh.write("""c.Autograde.exclude_overwriting = {{"ps1": ["{}"]}}\n""".format(os.path.join("subdir", "helper.py")))
+
+        self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
+        self._make_file(join(course_dir, "source", "ps1", "subdir", "data.csv"), "some,data\n")
+        self._make_file(join(course_dir, "source", "ps1", "subdir", "helper.py"), "print('hello!')\n")
+        run_nbgrader(["assign", "ps1", "--db", db])
+
+        self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
+        self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), "2015-02-02 15:58:23.948203 PST")
+        self._make_file(join(course_dir, "submitted", "foo", "ps1", "subdir", "data.csv"), "some,other,data\n")
+        self._make_file(join(course_dir, "submitted", "foo", "ps1", "subdir", "helper.py"), "print('this is different!')\n")
+        run_nbgrader(["autograde", "ps1", "--db", db])
+
+        assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "p1.ipynb"))
+        assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "timestamp.txt"))
+        assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "subdir", "data.csv"))
+        assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "subdir", "helper.py"))
+
+        with open(join(course_dir, "autograded", "foo", "ps1", "timestamp.txt"), "r") as fh:
+            contents = fh.read()
+        assert contents == "2015-02-02 15:58:23.948203 PST"
+
+        with open(join(course_dir, "autograded", "foo", "ps1", "subdir", "data.csv"), "r") as fh:
+            contents = fh.read()
+        assert contents == "some,data\n"
+
+        with open(join(course_dir, "autograded", "foo", "ps1", "subdir", "helper.py"), "r") as fh:
+            contents = fh.read()
+        assert contents == "print('this is different!')\n"
+
     def test_side_effects(self, db, course_dir):
         with open("nbgrader_config.py", "a") as fh:
             fh.write("""c.CourseDirectory.db_assignments = [dict(name='ps1', duedate='2015-02-02 14:58:23.948203 PST')]\n""")
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pyenchant",
"sphinxcontrib-spelling",
"sphinx_rtd_theme",
"nbval",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.5",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
alembic==1.7.7
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-generator==1.10
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
comm==0.1.4
contextvars==2.4
coverage==6.2
dataclasses==0.8
decorator==5.1.1
defusedxml==0.7.1
docutils==0.18.1
entrypoints==0.4
greenlet==2.0.2
idna==3.10
imagesize==1.4.1
immutables==0.19
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
ipykernel==5.5.6
ipython==7.16.3
ipython-genutils==0.2.0
ipywidgets==7.8.5
jedi==0.17.2
Jinja2==3.0.3
json5==0.9.16
jsonschema==3.2.0
jupyter==1.1.1
jupyter-client==7.1.2
jupyter-console==6.4.3
jupyter-core==4.9.2
jupyter-server==1.13.1
jupyterlab==3.2.9
jupyterlab-pygments==0.1.2
jupyterlab-server==2.10.3
jupyterlab_widgets==1.1.11
Mako==1.1.6
MarkupSafe==2.0.1
mistune==0.8.4
nbclassic==0.3.5
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
-e git+https://github.com/jupyter/nbgrader.git@919c56a9782647a97bd03a0c9d6d0ac5633db0a3#egg=nbgrader
nbval==0.10.0
nest-asyncio==1.6.0
notebook==6.4.10
packaging==21.3
pandocfilters==1.5.1
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
pluggy==1.0.0
prometheus-client==0.17.1
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
pycparser==2.21
pyenchant==3.2.2
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
pyzmq==25.1.2
requests==2.27.1
Send2Trash==1.8.3
six==1.17.0
sniffio==1.2.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-spelling==7.7.0
SQLAlchemy==1.4.54
terminado==0.12.1
testpath==0.6.0
tomli==1.2.3
tornado==6.1
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
wcwidth==0.2.13
webencodings==0.5.1
websocket-client==1.3.1
widgetsnbextension==3.6.10
zipp==3.6.0
| name: nbgrader
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- alembic==1.7.7
- anyio==3.6.2
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- async-generator==1.10
- attrs==22.2.0
- babel==2.11.0
- backcall==0.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- comm==0.1.4
- contextvars==2.4
- coverage==6.2
- dataclasses==0.8
- decorator==5.1.1
- defusedxml==0.7.1
- docutils==0.18.1
- entrypoints==0.4
- greenlet==2.0.2
- idna==3.10
- imagesize==1.4.1
- immutables==0.19
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- ipykernel==5.5.6
- ipython==7.16.3
- ipython-genutils==0.2.0
- ipywidgets==7.8.5
- jedi==0.17.2
- jinja2==3.0.3
- json5==0.9.16
- jsonschema==3.2.0
- jupyter==1.1.1
- jupyter-client==7.1.2
- jupyter-console==6.4.3
- jupyter-core==4.9.2
- jupyter-server==1.13.1
- jupyterlab==3.2.9
- jupyterlab-pygments==0.1.2
- jupyterlab-server==2.10.3
- jupyterlab-widgets==1.1.11
- mako==1.1.6
- markupsafe==2.0.1
- mistune==0.8.4
- nbclassic==0.3.5
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nbval==0.10.0
- nest-asyncio==1.6.0
- notebook==6.4.10
- packaging==21.3
- pandocfilters==1.5.1
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- pluggy==1.0.0
- prometheus-client==0.17.1
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pycparser==2.21
- pyenchant==3.2.2
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyzmq==25.1.2
- requests==2.27.1
- send2trash==1.8.3
- six==1.17.0
- sniffio==1.2.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- sphinxcontrib-spelling==7.7.0
- sqlalchemy==1.4.54
- terminado==0.12.1
- testpath==0.6.0
- tomli==1.2.3
- tornado==6.1
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- wcwidth==0.2.13
- webencodings==0.5.1
- websocket-client==1.3.1
- widgetsnbextension==3.6.10
- zipp==3.6.0
prefix: /opt/conda/envs/nbgrader
| [
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_overwrite_files",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_overwrite_files_subdirs"
]
| [
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_force_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_update_newer_single_notebook"
]
| [
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_help",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_student",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_assignment",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_timestamp",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_empty_timestamp",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_none",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_zero",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_plugin",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_force",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_filter_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_side_effects",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_skip_extra_notebooks",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_permissions",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_custom_permissions",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_update_newer",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_hidden_tests_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_handle_failure",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_handle_failure_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_source_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_incorrect_source_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_incorrect_submitted_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_no_execute",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_infinite_loop",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_files",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_missing_notebook"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,435 | [
"nbgrader/converters/autograde.py"
]
| [
"nbgrader/converters/autograde.py"
]
|
|
jupyter__nbgrader-948 | 919c56a9782647a97bd03a0c9d6d0ac5633db0a3 | 2018-04-22 14:37:06 | 5bc6f37c39c8b10b8f60440b2e6d9487e63ef3f1 |
diff --git a/nbgrader/apps/__init__.py b/nbgrader/apps/__init__.py
index da8192e0..12b32d9c 100644
--- a/nbgrader/apps/__init__.py
+++ b/nbgrader/apps/__init__.py
@@ -18,6 +18,7 @@ from .dbapp import (
DbAssignmentAddApp, DbAssignmentRemoveApp, DbAssignmentImportApp, DbAssignmentListApp)
from .updateapp import UpdateApp
from .zipcollectapp import ZipCollectApp
+from .generateconfigapp import GenerateConfigApp
from .nbgraderapp import NbGraderApp
from .api import NbGraderAPI
@@ -50,5 +51,6 @@ __all__ = [
'DbAssignmentListApp',
'UpdateApp',
'ZipCollectApp',
+    'GenerateConfigApp',
'NbGraderAPI'
]
diff --git a/nbgrader/apps/generateconfigapp.py b/nbgrader/apps/generateconfigapp.py
new file mode 100644
index 00000000..6c0ea220
--- /dev/null
+++ b/nbgrader/apps/generateconfigapp.py
@@ -0,0 +1,34 @@
+# coding: utf-8
+
+import os
+
+from traitlets import Unicode
+from traitlets.config.application import catch_config_error
+from .baseapp import NbGrader
+
+
+class GenerateConfigApp(NbGrader):
+
+ name = u'nbgrader-generate-config'
+ description = u'Generates a default nbgrader_config.py file'
+ examples = ""
+
+ filename = Unicode(
+ "nbgrader_config.py",
+ help="The name of the configuration file to generate."
+ ).tag(config=True)
+
+ @catch_config_error
+ def initialize(self, argv=None):
+ super(GenerateConfigApp, self).initialize(argv)
+
+ def start(self):
+ super(GenerateConfigApp, self).start()
+ s = self.generate_config_file()
+
+ if os.path.exists(self.filename):
+ self.fail("Config file '{}' already exists".format(self.filename))
+
+ with open(self.filename, 'w') as fh:
+ fh.write(s)
+ self.log.info("New config file saved to '{}'".format(self.filename))
diff --git a/nbgrader/apps/nbgraderapp.py b/nbgrader/apps/nbgraderapp.py
index cde01949..dbfc55ed 100755
--- a/nbgrader/apps/nbgraderapp.py
+++ b/nbgrader/apps/nbgraderapp.py
@@ -35,6 +35,7 @@ from . import (
DbApp,
UpdateApp,
ZipCollectApp,
+ GenerateConfigApp
)
aliases = {}
@@ -45,10 +46,6 @@ aliases.update({
flags = {}
flags.update(nbgrader_flags)
flags.update({
- 'generate-config': (
- {'NbGraderApp' : {'generate_config': True}},
- "Generate a config file."
- )
})
@@ -234,7 +231,15 @@ class NbGraderApp(NbGrader):
Update nbgrader cell metadata to the most recent version.
"""
).strip()
- )
+ ),
+ generate_config=(
+ GenerateConfigApp,
+ dedent(
+ """
+ Generates a default nbgrader_config.py file.
+ """
+ ).strip()
+ ),
)
@default("classes")
@@ -280,19 +285,6 @@ class NbGraderApp(NbGrader):
super(NbGraderApp, self).initialize(argv)
def start(self):
- # if we're generating a config file, then do only that
- if self.generate_config:
- s = self.generate_config_file()
- filename = "nbgrader_config.py"
-
- if os.path.exists(filename):
- self.fail("Config file '{}' already exists".format(filename))
-
- with open(filename, 'w') as fh:
- fh.write(s)
- self.log.info("New config file saved to '{}'".format(filename))
- raise NoStart()
-
# check: is there a subapp given?
if self.subapp is None:
print("No command given (run with --help for options). List of subcommands:\n")
diff --git a/nbgrader/apps/quickstartapp.py b/nbgrader/apps/quickstartapp.py
index 05925445..77154df3 100644
--- a/nbgrader/apps/quickstartapp.py
+++ b/nbgrader/apps/quickstartapp.py
@@ -99,7 +99,7 @@ class QuickStartApp(NbGrader):
self.log.info("Generating example config file...")
currdir = os.getcwd()
os.chdir(course_path)
- subprocess.call([sys.executable, "-m", "nbgrader", "--generate-config"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ subprocess.call([sys.executable, "-m", "nbgrader", "generate_config"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
os.chdir(currdir)
with open(os.path.join(course_path, "nbgrader_config.py"), "r") as fh:
config = fh.read()
diff --git a/nbgrader/docs/source/build_docs.py b/nbgrader/docs/source/build_docs.py
index 976fab0e..943f4bb8 100644
--- a/nbgrader/docs/source/build_docs.py
+++ b/nbgrader/docs/source/build_docs.py
@@ -46,6 +46,7 @@ def autogen_command_line(root):
'UpdateApp',
'ValidateApp',
'ZipCollectApp',
+ 'GenerateConfigApp'
]
print('Generating command line documentation')
diff --git a/nbgrader/docs/source/command_line_tools/index.rst b/nbgrader/docs/source/command_line_tools/index.rst
index b7250061..b10ba74a 100644
--- a/nbgrader/docs/source/command_line_tools/index.rst
+++ b/nbgrader/docs/source/command_line_tools/index.rst
@@ -8,6 +8,7 @@ Basic commands
:maxdepth: 1
nbgrader
+ nbgrader-generate-config
Instructor commands
-------------------
diff --git a/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb b/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
index c1cd1475..48e896bb 100644
--- a/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
+++ b/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb
@@ -136,7 +136,7 @@
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAssertionError\u001b[0m Traceback (most recent call last)",
-    "\u001b[0;32m<ipython-input-4-c7326d393566>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;34m\"\"\"Check that squares returns the correct output for several inputs\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m9\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m16\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m25\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m36\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m49\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m64\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m81\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m100\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m11\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m9\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m16\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m25\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m36\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m49\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m64\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m81\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m100\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m121\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+    "\u001b[0;32m<ipython-input-4-f3fef5b9ed4e>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;34m\"\"\"Check that squares returns the correct output for several inputs\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m9\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m16\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m25\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m36\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m49\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m64\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m81\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m100\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msquares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m11\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m9\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m16\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m25\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m36\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m49\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m64\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m81\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m100\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m121\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mAssertionError\u001b[0m: "
]
}
@@ -274,7 +274,7 @@
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAssertionError\u001b[0m Traceback (most recent call last)",
- "\u001b[0;32m<ipython-input-8-3982d45be654>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;34m\"\"\"Check that sum_of_squares returns the correct answer for various inputs.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m5\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m385\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m11\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m506\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m<ipython-input-8-1a00eaa7c988>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;34m\"\"\"Check that sum_of_squares returns the correct answer for various inputs.\"\"\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m5\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m385\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32massert\u001b[0m \u001b[0msum_of_squares\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m11\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m506\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mAssertionError\u001b[0m: "
]
}
@@ -384,7 +384,7 @@
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNotImplementedError\u001b[0m Traceback (most recent call last)",
- "\u001b[0;32m<ipython-input-10-a3eb14613b08>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# YOUR CODE HERE\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mNotImplementedError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
+ "\u001b[0;32m<ipython-input-10-15b94d1fa268>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# YOUR CODE HERE\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mNotImplementedError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mNotImplementedError\u001b[0m: "
]
}
diff --git a/nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb b/nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
index 7c6a3d68..83164922 100644
--- a/nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
+++ b/nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb
@@ -855,8 +855,8 @@
"output_type": "stream",
"text": [
"[AutogradeApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)\n",
- "[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png\n",
"[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/timestamp.txt -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/timestamp.txt\n",
+ "[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png\n",
"[AutogradeApp | INFO] Creating/updating student with ID 'bitdiddle': {}\n",
"[AutogradeApp | INFO] SubmittedAssignment<ps1 for bitdiddle> submitted at 2015-02-02 14:58:23.948203\n",
"[AutogradeApp | INFO] Overwriting files with master versions from the source directory\n",
@@ -878,8 +878,8 @@
"[AutogradeApp | INFO] Executing notebook with kernel: python\n",
"[AutogradeApp | INFO] Writing 2356 bytes to /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb\n",
"[AutogradeApp | INFO] Setting destination file permissions to 444\n",
- "[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/hacker/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png\n",
"[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/hacker/ps1/timestamp.txt -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/timestamp.txt\n",
+ "[AutogradeApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted/hacker/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png\n",
"[AutogradeApp | INFO] Creating/updating student with ID 'hacker': {}\n",
"[AutogradeApp | INFO] SubmittedAssignment<ps1 for hacker> submitted at 2015-02-01 09:28:58.749302\n",
"[AutogradeApp | INFO] Overwriting files with master versions from the source directory\n",
@@ -1060,15 +1060,15 @@
"output_type": "stream",
"text": [
"[FeedbackApp | WARNING] No nbgrader_config.py file found (rerun with --debug to see where nbgrader is looking)\n",
- "[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/jupyter.png\n",
"[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/timestamp.txt -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/timestamp.txt\n",
+ "[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/jupyter.png\n",
"[FeedbackApp | INFO] Converting notebook /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb\n",
"[FeedbackApp | INFO] Writing 275507 bytes to /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html\n",
"[FeedbackApp | INFO] Converting notebook /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem2.ipynb\n",
"[FeedbackApp | INFO] Writing 254432 bytes to /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem2.html\n",
"[FeedbackApp | INFO] Setting destination file permissions to 644\n",
- "[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/hacker/ps1/jupyter.png\n",
"[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/timestamp.txt -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/hacker/ps1/timestamp.txt\n",
+ "[FeedbackApp | INFO] Copying /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/jupyter.png -> /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/hacker/ps1/jupyter.png\n",
"[FeedbackApp | INFO] Converting notebook /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb\n",
"[FeedbackApp | INFO] Writing 271810 bytes to /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/feedback/hacker/ps1/problem1.html\n",
"[FeedbackApp | INFO] Converting notebook /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem2.ipynb\n",
diff --git a/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html b/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html
index d3a94acb..4bcbab02 100644
--- a/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html
+++ b/nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html
@@ -12079,7 +12079,7 @@ span.nbgrader-label {
<pre>
<span class="ansi-red-fg">---------------------------------------------------------------------------</span>
<span class="ansi-red-fg">AssertionError</span> Traceback (most recent call last)
-<span class="ansi-green-fg"><ipython-input-4-c7326d393566></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
+<span class="ansi-green-fg"><ipython-input-4-f3fef5b9ed4e></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
<span class="ansi-green-intense-fg ansi-bold"> 1</span> <span class="ansi-blue-fg">"""Check that squares returns the correct output for several inputs"""</span>
<span class="ansi-green-fg">----> 2</span><span class="ansi-red-fg"> </span><span class="ansi-green-fg">assert</span> squares<span class="ansi-blue-fg">(</span><span class="ansi-cyan-fg">1</span><span class="ansi-blue-fg">)</span> <span class="ansi-blue-fg">==</span> <span class="ansi-blue-fg">[</span><span class="ansi-cyan-fg">1</span><span class="ansi-blue-fg">]</span>
<span class="ansi-green-intense-fg ansi-bold"> 3</span> <span class="ansi-green-fg">assert</span> squares<span class="ansi-blue-fg">(</span><span class="ansi-cyan-fg">2</span><span class="ansi-blue-fg">)</span> <span class="ansi-blue-fg">==</span> <span class="ansi-blue-fg">[</span><span class="ansi-cyan-fg">1</span><span class="ansi-blue-fg">,</span> <span class="ansi-cyan-fg">4</span><span class="ansi-blue-fg">]</span>
@@ -12236,7 +12236,7 @@ span.nbgrader-label {
<pre>
<span class="ansi-red-fg">---------------------------------------------------------------------------</span>
<span class="ansi-red-fg">AssertionError</span> Traceback (most recent call last)
-<span class="ansi-green-fg"><ipython-input-8-3982d45be654></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
+<span class="ansi-green-fg"><ipython-input-8-1a00eaa7c988></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
<span class="ansi-green-intense-fg ansi-bold"> 1</span> <span class="ansi-blue-fg">"""Check that sum_of_squares returns the correct answer for various inputs."""</span>
<span class="ansi-green-fg">----> 2</span><span class="ansi-red-fg"> </span><span class="ansi-green-fg">assert</span> sum_of_squares<span class="ansi-blue-fg">(</span><span class="ansi-cyan-fg">1</span><span class="ansi-blue-fg">)</span> <span class="ansi-blue-fg">==</span> <span class="ansi-cyan-fg">1</span>
<span class="ansi-green-intense-fg ansi-bold"> 3</span> <span class="ansi-green-fg">assert</span> sum_of_squares<span class="ansi-blue-fg">(</span><span class="ansi-cyan-fg">2</span><span class="ansi-blue-fg">)</span> <span class="ansi-blue-fg">==</span> <span class="ansi-cyan-fg">5</span>
@@ -12354,7 +12354,7 @@ span.nbgrader-label {
<pre>
<span class="ansi-red-fg">---------------------------------------------------------------------------</span>
<span class="ansi-red-fg">NotImplementedError</span> Traceback (most recent call last)
-<span class="ansi-green-fg"><ipython-input-10-a3eb14613b08></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
+<span class="ansi-green-fg"><ipython-input-10-15b94d1fa268></span> in <span class="ansi-cyan-fg"><module></span><span class="ansi-blue-fg">()</span>
<span class="ansi-green-intense-fg ansi-bold"> 1</span> <span class="ansi-red-fg"># YOUR CODE HERE</span>
<span class="ansi-green-fg">----> 2</span><span class="ansi-red-fg"> </span><span class="ansi-green-fg">raise</span> NotImplementedError<span class="ansi-blue-fg">(</span><span class="ansi-blue-fg">)</span>
diff --git a/nbgrader/docs/source/user_guide/managing_assignment_files.ipynb b/nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
index 7dd5743f..e762013d 100644
--- a/nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
+++ b/nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
@@ -415,9 +415,9 @@
"output_type": "stream",
"text": [
"total 40\n",
- "-rw-r--r-- 1 jhamrick wheel 5733 Oct 29 13:17 jupyter.png\n",
- "-rw-r--r-- 1 jhamrick wheel 8126 Oct 29 13:17 problem1.ipynb\n",
- "-rw-r--r-- 1 jhamrick wheel 2318 Oct 29 13:17 problem2.ipynb\n"
+ "-rw-r--r-- 1 jhamrick wheel 5733 Apr 22 15:29 jupyter.png\n",
+ "-rw-r--r-- 1 jhamrick wheel 8126 Apr 22 15:29 problem1.ipynb\n",
+ "-rw-r--r-- 1 jhamrick wheel 2318 Apr 22 15:29 problem2.ipynb\n"
]
}
],
@@ -585,8 +585,8 @@
"output_type": "stream",
"text": [
"[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n",
- "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2017-10-29 13:17:40.891463 UTC\n",
- "[SubmitApp | INFO] Submitted as: example_course ps1 2017-10-29 13:17:40.891463 UTC\n"
+ "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:40.397476 UTC\n",
+ "[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:40.397476 UTC\n"
]
}
],
@@ -614,9 +614,9 @@
"output_type": "stream",
"text": [
"total 8\n",
- "drwxr-xr-x 3 jhamrick wheel 102 Oct 29 13:17 Library\n",
- "-rw-r--r-- 1 jhamrick wheel 91 Oct 29 13:17 nbgrader_config.py\n",
- "drwxr-xr-x 5 jhamrick wheel 170 Oct 29 13:17 ps1\n"
+ "drwxr-xr-x 3 jhamrick wheel 96 Apr 22 15:29 Library\n",
+ "-rw-r--r-- 1 jhamrick wheel 91 Apr 22 15:29 nbgrader_config.py\n",
+ "drwxr-xr-x 5 jhamrick wheel 160 Apr 22 15:29 ps1\n"
]
}
],
@@ -644,7 +644,7 @@
"output_type": "stream",
"text": [
"[ListApp | INFO] Submitted assignments:\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:40.891463 UTC\n"
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n"
]
}
],
@@ -672,8 +672,8 @@
"output_type": "stream",
"text": [
"[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n",
- "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2017-10-29 13:17:43.778886 UTC\n",
- "[SubmitApp | INFO] Submitted as: example_course ps1 2017-10-29 13:17:43.778886 UTC\n"
+ "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:43.070290 UTC\n",
+ "[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:43.070290 UTC\n"
]
}
],
@@ -701,8 +701,8 @@
"output_type": "stream",
"text": [
"[ListApp | INFO] Submitted assignments:\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:40.891463 UTC\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:43.778886 UTC\n"
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n",
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC\n"
]
}
],
@@ -737,7 +737,7 @@
"output_type": "stream",
"text": [
"[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n",
- "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2017-10-29 13:17:46.694803 UTC\n",
+ "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:46.167901 UTC\n",
"[SubmitApp | WARNING] Possible missing notebooks and/or extra notebooks submitted for assignment ps1:\n",
" Expected:\n",
" \tproblem1.ipynb: MISSING\n",
@@ -745,7 +745,7 @@
" Submitted:\n",
" \tmyproblem1.ipynb: EXTRA\n",
" \tproblem2.ipynb: OK\n",
- "[SubmitApp | INFO] Submitted as: example_course ps1 2017-10-29 13:17:46.694803 UTC\n"
+ "[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:46.167901 UTC\n"
]
}
],
@@ -805,7 +805,7 @@
"output_type": "stream",
"text": [
"[SubmitApp | INFO] Source: /private/tmp/student_home/ps1\n",
- "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2017-10-29 13:17:48.185532 UTC\n",
+ "[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:47.497419 UTC\n",
"[SubmitApp | CRITICAL] Assignment ps1 not submitted. There are missing notebooks for the submission:\n",
" Expected:\n",
" \tproblem1.ipynb: MISSING\n",
@@ -931,9 +931,9 @@
"output_type": "stream",
"text": [
"[ListApp | INFO] Submitted assignments:\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:40.891463 UTC\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:43.778886 UTC\n",
- "[ListApp | INFO] example_course jhamrick ps1 2017-10-29 13:17:46.694803 UTC\n"
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC\n",
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC\n",
+ "[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:46.167901 UTC\n"
]
}
],
@@ -987,9 +987,9 @@
"output_type": "stream",
"text": [
"total 0\n",
- "drwxr-xr-x 3 jhamrick staff 102 May 31 19:10 bitdiddle\n",
- "drwxr-xr-x 3 jhamrick staff 102 May 31 19:10 hacker\n",
- "drwxr-xr-x 3 jhamrick staff 102 Oct 29 13:17 jhamrick\n"
+ "drwxr-xr-x 3 jhamrick staff 96 May 31 2017 bitdiddle\n",
+ "drwxr-xr-x 3 jhamrick staff 96 May 31 2017 hacker\n",
+ "drwxr-xr-x 3 jhamrick staff 96 Apr 22 15:29 jhamrick\n"
]
}
],
diff --git a/nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb b/nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb
index 12db2131..d3828217 100644
--- a/nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb
+++ b/nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb
@@ -180,8 +180,8 @@
"output_type": "stream",
"text": [
"total 64\n",
- "-rw-r--r-- 1 jhamrick staff 18465 Oct 29 13:15 notebooks.zip\n",
- "-rw-r--r-- 1 jhamrick staff 8870 Oct 29 13:16 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n"
+ "-rw-r--r-- 1 jhamrick staff 18465 Oct 29 20:04 notebooks.zip\n",
+ "-rw-r--r-- 1 jhamrick staff 8870 Apr 22 15:28 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n"
]
}
],
@@ -259,8 +259,8 @@
"output_type": "stream",
"text": [
"total 64\n",
- "-rw-r--r-- 1 jhamrick staff 18465 Oct 29 13:15 notebooks.zip\n",
- "-rw-r--r-- 1 jhamrick staff 8870 Oct 29 13:16 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n"
+ "-rw-r--r-- 1 jhamrick staff 18465 Oct 29 20:04 notebooks.zip\n",
+ "-rw-r--r-- 1 jhamrick staff 8870 Apr 22 15:28 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n"
]
}
],
@@ -355,10 +355,10 @@
"[ZipCollectApp | INFO] Using file extractor: ExtractorPlugin\n",
"[ZipCollectApp | INFO] Using file collector: FileNameCollectorPlugin\n",
"[ZipCollectApp | WARNING] Directory not found. Creating: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted\n",
- "[ZipCollectApp | INFO] Extracting from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/notebooks.zip\n",
- "[ZipCollectApp | INFO] Extracting to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks\n",
"[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
"[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
+ "[ZipCollectApp | INFO] Extracting from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/notebooks.zip\n",
+ "[ZipCollectApp | INFO] Extracting to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks\n",
"[ZipCollectApp | INFO] Start collecting files...\n",
"[ZipCollectApp | INFO] Parsing file: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
"[ZipCollectApp | WARNING] Skipped submission with no match information provided: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
@@ -410,15 +410,15 @@
"output_type": "stream",
"text": [
"total 24\n",
- "drwxr-xr-x 8 jhamrick staff 272 Oct 29 13:17 notebooks\n",
- "-rw-r--r-- 1 jhamrick staff 8870 Oct 29 13:17 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
+ "drwxr-xr-x 8 jhamrick staff 256 Apr 22 15:29 notebooks\n",
+ "-rw-r--r-- 1 jhamrick staff 8870 Apr 22 15:29 ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
"total 88\n",
- "-rw-rw-r-- 1 jhamrick staff 5733 Oct 29 13:17 ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
- "-rw-rw-r-- 1 jhamrick staff 7954 Oct 29 13:17 ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem1.ipynb\n",
- "-rw-rw-r-- 1 jhamrick staff 2288 Oct 29 13:17 ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem2.ipynb\n",
- "-rw-rw-r-- 1 jhamrick staff 5733 Oct 29 13:17 ps1_hacker_attempt_2016-01-30-16-30-10_jupyter.png\n",
- "-rw-rw-r-- 1 jhamrick staff 9072 Oct 29 13:17 ps1_hacker_attempt_2016-01-30-16-30-10_myproblem1.ipynb\n",
- "-rw-rw-r-- 1 jhamrick staff 2418 Oct 29 13:17 ps1_hacker_attempt_2016-01-30-16-30-10_problem2.ipynb\n"
+ "-rw-rw-r-- 1 jhamrick staff 5733 Apr 22 15:29 ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
+ "-rw-rw-r-- 1 jhamrick staff 7954 Apr 22 15:29 ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem1.ipynb\n",
+ "-rw-rw-r-- 1 jhamrick staff 2288 Apr 22 15:29 ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem2.ipynb\n",
+ "-rw-rw-r-- 1 jhamrick staff 5733 Apr 22 15:29 ps1_hacker_attempt_2016-01-30-16-30-10_jupyter.png\n",
+ "-rw-rw-r-- 1 jhamrick staff 9072 Apr 22 15:29 ps1_hacker_attempt_2016-01-30-16-30-10_myproblem1.ipynb\n",
+ "-rw-rw-r-- 1 jhamrick staff 2418 Apr 22 15:29 ps1_hacker_attempt_2016-01-30-16-30-10_problem2.ipynb\n"
]
}
],
@@ -448,8 +448,8 @@
"output_type": "stream",
"text": [
"total 0\n",
- "drwxr-xr-x 3 jhamrick staff 102 Oct 29 13:17 bitdiddle\n",
- "drwxr-xr-x 3 jhamrick staff 102 Oct 29 13:17 hacker\n"
+ "drwxr-xr-x 3 jhamrick staff 96 Apr 22 15:29 bitdiddle\n",
+ "drwxr-xr-x 3 jhamrick staff 96 Apr 22 15:29 hacker\n"
]
}
],
@@ -469,9 +469,9 @@
"output_type": "stream",
"text": [
"total 40\n",
- "-rw-r--r-- 1 jhamrick staff 8870 Oct 29 13:17 problem1.ipynb\n",
- "-rw-rw-r-- 1 jhamrick staff 2418 Oct 29 13:17 problem2.ipynb\n",
- "-rw-r--r-- 1 jhamrick staff 19 Oct 29 13:17 timestamp.txt\n"
+ "-rw-r--r-- 1 jhamrick staff 8870 Apr 22 15:29 problem1.ipynb\n",
+ "-rw-rw-r-- 1 jhamrick staff 2418 Apr 22 15:29 problem2.ipynb\n",
+ "-rw-r--r-- 1 jhamrick staff 19 Apr 22 15:29 timestamp.txt\n"
]
}
],
@@ -567,10 +567,10 @@
"[ZipCollectApp | INFO] Using file extractor: ExtractorPlugin\n",
"[ZipCollectApp | INFO] Using file collector: CustomPlugin\n",
"[ZipCollectApp | WARNING] Clearing existing files in /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted\n",
- "[ZipCollectApp | INFO] Extracting from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/notebooks.zip\n",
- "[ZipCollectApp | INFO] Extracting to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks\n",
"[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
"[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
+ "[ZipCollectApp | INFO] Extracting from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/archive/notebooks.zip\n",
+ "[ZipCollectApp | INFO] Extracting to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks\n",
"[ZipCollectApp | INFO] Start collecting files...\n",
"[ZipCollectApp | INFO] Parsing file: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
"[ZipCollectApp | WARNING] Skipped submission with no match information provided: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_jupyter.png\n",
@@ -584,18 +584,18 @@
"[ZipCollectApp | INFO] Parsing file: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
"[ZipCollectApp | WARNING] 4 files collected, 3 files skipped\n",
"[ZipCollectApp | INFO] Start transfering files...\n",
- "[ZipCollectApp | WARNING] Clearing existing files in /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1\n",
- "[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem1.ipynb\n",
- "[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/problem1.ipynb\n",
- "[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem2.ipynb\n",
- "[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/problem2.ipynb\n",
- "[ZipCollectApp | INFO] Creating timestamp: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/timestamp.txt\n",
"[ZipCollectApp | WARNING] Clearing existing files in /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/hacker/ps1\n",
"[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_hacker_attempt_2016-01-30-16-30-10_problem2.ipynb\n",
"[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/hacker/ps1/problem2.ipynb\n",
"[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb\n",
"[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/hacker/ps1/problem1.ipynb\n",
- "[ZipCollectApp | INFO] Creating timestamp: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/hacker/ps1/timestamp.txt\n"
+ "[ZipCollectApp | INFO] Creating timestamp: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/hacker/ps1/timestamp.txt\n",
+ "[ZipCollectApp | WARNING] Clearing existing files in /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1\n",
+ "[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem1.ipynb\n",
+ "[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/problem1.ipynb\n",
+ "[ZipCollectApp | INFO] Copying from: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/downloaded/ps1/extracted/notebooks/ps1_bitdiddle_attempt_2016-01-30-15-30-10_problem2.ipynb\n",
+ "[ZipCollectApp | INFO] Copying to: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/problem2.ipynb\n",
+ "[ZipCollectApp | INFO] Creating timestamp: /Users/jhamrick/project/tools/nbgrader/nbgrader/docs/source/user_guide/submitted_zip/bitdiddle/ps1/timestamp.txt\n"
]
}
],
diff --git a/nbgrader/docs/source/user_guide/managing_the_database.ipynb b/nbgrader/docs/source/user_guide/managing_the_database.ipynb
index e3f216fb..94b48f3f 100644
--- a/nbgrader/docs/source/user_guide/managing_the_database.ipynb
+++ b/nbgrader/docs/source/user_guide/managing_the_database.ipynb
@@ -246,7 +246,7 @@
"output_type": "stream",
"text": [
"[DbStudentAddApp | INFO] Creating/updating student with ID 'bitdiddle': {'last_name': 'Bitdiddle', 'first_name': 'Ben', 'email': None}\n",
- "[DbStudentAddApp | INFO] Creating/updating student with ID 'hacker': {'last_name': 'Hacker', 'first_name': 'Alyssa', 'email': None}\n"
+ "[DbStudentAddApp | INFO] Creating/updating student with ID 'hacker': {'first_name': 'Alyssa', 'last_name': 'Hacker', 'email': None}\n"
]
}
],
@@ -322,8 +322,8 @@
"output_type": "stream",
"text": [
"[DbStudentImportApp | INFO] Importing from: 'students.csv'\n",
- "[DbStudentImportApp | INFO] Creating/updating Student with id 'bitdiddle': {'email': None, 'first_name': 'Ben', 'last_name': 'Bitdiddle'}\n",
- "[DbStudentImportApp | INFO] Creating/updating Student with id 'hacker': {'email': None, 'first_name': 'Alyssa', 'last_name': 'Hacker'}\n"
+ "[DbStudentImportApp | INFO] Creating/updating Student with id 'bitdiddle': {'first_name': 'Ben', 'email': None, 'last_name': 'Bitdiddle'}\n",
+ "[DbStudentImportApp | INFO] Creating/updating Student with id 'hacker': {'first_name': 'Alyssa', 'email': None, 'last_name': 'Hacker'}\n"
]
}
],
| Nbgrader.generate_config=False still generates a config file
Running:
`
nbgrader quickstart <course_id> --Nbgrader.config_file=/home/course/.jupyter/nbgrader_config --Nbgrader.generate_config=False
`
still generates a config file, even though I set `generate_config=False` and give the path to the real config file I already have.
The docs are also misleading, because they say config generation is False by default:
[see docs](https://nbgrader.readthedocs.io/en/latest/command_line_tools/nbgrader-quickstart.html?highlight=quickstart)
```
--NbGrader.generate_config=<Bool>
Default: False
Generate default config file.
```
#### My versions
Ubuntu 16.04
jupyterhub version 0.8.1
notebook version 5.4.0
jupyter version 4.4.0
nbgrader version 0.6.0.dev (Also referred to as Formgrader)
canvasapi 0.8.2
| jupyter/nbgrader | diff --git a/nbgrader/tests/apps/test_nbgrader.py b/nbgrader/tests/apps/test_nbgrader.py
index 6d23fe5f..0666c521 100644
--- a/nbgrader/tests/apps/test_nbgrader.py
+++ b/nbgrader/tests/apps/test_nbgrader.py
@@ -15,19 +15,6 @@ class TestNbGrader(BaseTestApp):
"""Is the help displayed when no subapp is given?"""
run_nbgrader([], retcode=0)
- def test_generate_config(self):
- """Is the config file properly generated?"""
-
- # it already exists, because we create it in conftest.py
- os.remove("nbgrader_config.py")
-
- # try recreating it
- run_nbgrader(["--generate-config"])
- assert os.path.isfile("nbgrader_config.py")
-
- # does it fail if it already exists?
- run_nbgrader(["--generate-config"], retcode=1)
-
def test_check_version(self, capfd):
"""Is the version the same regardless of how we run nbgrader?"""
out1 = '\n'.join(
diff --git a/nbgrader/tests/apps/test_nbgrader_generate_config.py b/nbgrader/tests/apps/test_nbgrader_generate_config.py
new file mode 100644
index 00000000..7b95b401
--- /dev/null
+++ b/nbgrader/tests/apps/test_nbgrader_generate_config.py
@@ -0,0 +1,24 @@
+import os
+
+from .. import run_nbgrader
+from .base import BaseTestApp
+
+
+class TestNbGraderGenerateConfig(BaseTestApp):
+
+ def test_help(self):
+ """Does the help display without error?"""
+ run_nbgrader(["generate_config", "--help-all"])
+
+ def test_generate_config(self):
+ """Is the config file properly generated?"""
+
+ # it already exists, because we create it in conftest.py
+ os.remove("nbgrader_config.py")
+
+ # try recreating it
+ run_nbgrader(["generate_config"])
+ assert os.path.isfile("nbgrader_config.py")
+
+ # does it fail if it already exists?
+ run_nbgrader(["generate_config"], retcode=1)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 11
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-rerunfailures coverage selenium invoke sphinx codecov cov-core",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"dev-requirements.txt",
"dev-requirements-windows.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
alembic==1.7.7
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-generator==1.10
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
comm==0.1.4
contextvars==2.4
cov-core==1.15.0
coverage==6.2
dataclasses==0.8
decorator==5.1.1
defusedxml==0.7.1
docutils==0.18.1
entrypoints==0.4
greenlet==2.0.2
idna==3.10
imagesize==1.4.1
immutables==0.19
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
invoke==2.2.0
ipykernel==5.5.6
ipython==7.16.3
ipython-genutils==0.2.0
ipywidgets==7.8.5
jedi==0.17.2
Jinja2==3.0.3
json5==0.9.16
jsonschema==3.2.0
jupyter==1.1.1
jupyter-client==7.1.2
jupyter-console==6.4.3
jupyter-core==4.9.2
jupyter-server==1.13.1
jupyterlab==3.2.9
jupyterlab-pygments==0.1.2
jupyterlab-server==2.10.3
jupyterlab_widgets==1.1.11
Mako==1.1.6
MarkupSafe==2.0.1
mistune==0.8.4
nbclassic==0.3.5
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
-e git+https://github.com/jupyter/nbgrader.git@919c56a9782647a97bd03a0c9d6d0ac5633db0a3#egg=nbgrader
nbval==0.10.0
nest-asyncio==1.6.0
notebook==6.4.10
packaging==21.3
pandocfilters==1.5.1
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
pluggy==1.0.0
prometheus-client==0.17.1
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
pycparser==2.21
pyenchant==3.2.2
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
pytest-cov==4.0.0
pytest-rerunfailures==10.3
python-dateutil==2.9.0.post0
pytz==2025.2
pyzmq==25.1.2
requests==2.27.1
selenium==3.141.0
Send2Trash==1.8.3
six==1.17.0
sniffio==1.2.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-spelling==7.7.0
SQLAlchemy==1.4.54
terminado==0.12.1
testpath==0.6.0
tomli==1.2.3
tornado==6.1
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
wcwidth==0.2.13
webencodings==0.5.1
websocket-client==1.3.1
widgetsnbextension==3.6.10
zipp==3.6.0
| name: nbgrader
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- alembic==1.7.7
- anyio==3.6.2
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- async-generator==1.10
- attrs==22.2.0
- babel==2.11.0
- backcall==0.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- comm==0.1.4
- contextvars==2.4
- cov-core==1.15.0
- coverage==6.2
- dataclasses==0.8
- decorator==5.1.1
- defusedxml==0.7.1
- docutils==0.18.1
- entrypoints==0.4
- greenlet==2.0.2
- idna==3.10
- imagesize==1.4.1
- immutables==0.19
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- invoke==2.2.0
- ipykernel==5.5.6
- ipython==7.16.3
- ipython-genutils==0.2.0
- ipywidgets==7.8.5
- jedi==0.17.2
- jinja2==3.0.3
- json5==0.9.16
- jsonschema==3.2.0
- jupyter==1.1.1
- jupyter-client==7.1.2
- jupyter-console==6.4.3
- jupyter-core==4.9.2
- jupyter-server==1.13.1
- jupyterlab==3.2.9
- jupyterlab-pygments==0.1.2
- jupyterlab-server==2.10.3
- jupyterlab-widgets==1.1.11
- mako==1.1.6
- markupsafe==2.0.1
- mistune==0.8.4
- nbclassic==0.3.5
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nbval==0.10.0
- nest-asyncio==1.6.0
- notebook==6.4.10
- packaging==21.3
- pandocfilters==1.5.1
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- pluggy==1.0.0
- prometheus-client==0.17.1
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pycparser==2.21
- pyenchant==3.2.2
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-rerunfailures==10.3
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyzmq==25.1.2
- requests==2.27.1
- selenium==3.141.0
- send2trash==1.8.3
- six==1.17.0
- sniffio==1.2.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- sphinxcontrib-spelling==7.7.0
- sqlalchemy==1.4.54
- terminado==0.12.1
- testpath==0.6.0
- tomli==1.2.3
- tornado==6.1
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- wcwidth==0.2.13
- webencodings==0.5.1
- websocket-client==1.3.1
- widgetsnbextension==3.6.10
- zipp==3.6.0
prefix: /opt/conda/envs/nbgrader
| [
"nbgrader/tests/apps/test_nbgrader_generate_config.py::TestNbGraderGenerateConfig::test_generate_config"
]
| []
| [
"nbgrader/tests/apps/test_nbgrader.py::TestNbGrader::test_help",
"nbgrader/tests/apps/test_nbgrader.py::TestNbGrader::test_no_subapp",
"nbgrader/tests/apps/test_nbgrader.py::TestNbGrader::test_check_version",
"nbgrader/tests/apps/test_nbgrader_generate_config.py::TestNbGraderGenerateConfig::test_help"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,436 | [
"nbgrader/docs/source/command_line_tools/index.rst",
"nbgrader/apps/quickstartapp.py",
"nbgrader/docs/source/user_guide/managing_the_database.ipynb",
"nbgrader/apps/generateconfigapp.py",
"nbgrader/docs/source/build_docs.py",
"nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb",
"nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb",
"nbgrader/apps/__init__.py",
"nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html",
"nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb",
"nbgrader/apps/nbgraderapp.py",
"nbgrader/docs/source/user_guide/managing_assignment_files.ipynb"
]
| [
"nbgrader/docs/source/command_line_tools/index.rst",
"nbgrader/apps/quickstartapp.py",
"nbgrader/docs/source/user_guide/managing_the_database.ipynb",
"nbgrader/apps/generateconfigapp.py",
"nbgrader/docs/source/build_docs.py",
"nbgrader/docs/source/user_guide/autograded/bitdiddle/ps1/problem1.ipynb",
"nbgrader/docs/source/user_guide/creating_and_grading_assignments.ipynb",
"nbgrader/apps/__init__.py",
"nbgrader/docs/source/user_guide/feedback/bitdiddle/ps1/problem1.html",
"nbgrader/docs/source/user_guide/managing_assignment_files_manually.ipynb",
"nbgrader/apps/nbgraderapp.py",
"nbgrader/docs/source/user_guide/managing_assignment_files.ipynb"
]
|
|
grabbles__grabbit-63 | 3fe38c7e7eb510a38e6c2d072bdc913aaa1b7389 | 2018-04-22 16:18:33 | 5a588731d1a4a42a6b67f09ede110d7770845ed0 | diff --git a/grabbit/core.py b/grabbit/core.py
index 166f17c..4a4eaa3 100644
--- a/grabbit/core.py
+++ b/grabbit/core.py
@@ -5,11 +5,13 @@ from collections import defaultdict, OrderedDict, namedtuple
from grabbit.external import six, inflect
from grabbit.utils import natural_sort, listify
from grabbit.extensions.writable import build_path, write_contents_to_file
-from os.path import (join, basename, dirname, abspath, split, isabs, exists)
+from os.path import (join, basename, dirname, abspath, split, exists, isdir,
+ relpath, isabs)
from functools import partial
from copy import deepcopy
import warnings
from keyword import iskeyword
+from itertools import chain
__all__ = ['File', 'Entity', 'Layout']
@@ -66,12 +68,9 @@ class File(object):
for name, val in entities.items():
- if (name not in self.tags) ^ (val is None):
+ if name not in self.tags:
return False
- if val is None:
- continue
-
def make_patt(x):
patt = '%s' % x
if isinstance(x, (int, float)):
@@ -121,11 +120,20 @@ class File(object):
if new_filename[-1] == os.sep:
new_filename += self.filename
+ if isabs(self.path) or root is None:
+ path = self.path
+ else:
+ path = join(root, self.path)
+
+ if not exists(path):
+ raise ValueError("Target filename to copy/symlink (%s) doesn't "
+ "exist." % path)
+
if symbolic_link:
contents = None
- link_to = self.path
+ link_to = path
else:
- with open(self.path, 'r') as f:
+ with open(path, 'r') as f:
contents = f.read()
link_to = None
@@ -136,7 +144,7 @@ class File(object):
class Domain(object):
- def __init__(self, name, config, root):
+ def __init__(self, name, config):
"""
A set of rules that applies to one or more directories
within a Layout.
@@ -151,10 +159,8 @@ class Domain(object):
self.name = name
self.config = config
- self.root = root
self.entities = {}
self.files = []
- self.path_patterns = []
self.include = listify(self.config.get('include', []))
self.exclude = listify(self.config.get('exclude', []))
@@ -164,8 +170,7 @@ class Domain(object):
"both be set. Please pass at most one of these "
"for domain '%s'." % self.name)
- if 'default_path_patterns' in config:
- self.path_patterns += listify(config['default_path_patterns'])
+ self.path_patterns = listify(config.get('default_path_patterns', []))
def add_entity(self, ent):
''' Add an Entity.
@@ -256,15 +261,14 @@ class Entity(object):
setattr(result, k, new_val)
return result
- def matches(self, f, update_file=False):
+ def match_file(self, f, update_file=False):
"""
- Determine whether the passed file matches the Entity and update the
- Entity/File mappings.
+ Determine whether the passed file matches the Entity.
Args:
f (File): The File instance to match against.
- update_file (bool): If True, the file's tag list is updated to
- include the current Entity.
+
+ Returns: the matched value if a match was found, otherwise None.
"""
if self.map_func is not None:
val = self.map_func(f)
@@ -272,15 +276,10 @@ class Entity(object):
m = self.regex.search(f.path)
val = m.group(1) if m is not None else None
- if val is None:
- return False
+ if val is not None and self.dtype is not None:
+ val = self.dtype(val)
- if update_file:
- if self.dtype is not None:
- val = self.dtype(val)
- f.tags[self.name] = Tag(self, val)
-
- return True
+ return val
def add_file(self, filename, value):
""" Adds the specified filename to tracking. """
@@ -320,21 +319,28 @@ class LayoutMetaclass(type):
class Layout(six.with_metaclass(LayoutMetaclass, object)):
- def __init__(self, path, config=None, index=None, dynamic_getters=False,
- absolute_paths=True, regex_search=False, entity_mapper=None,
- path_patterns=None, config_filename='layout.json',
- include=None, exclude=None):
+ def __init__(self, root=None, config=None, index=None,
+ dynamic_getters=False, absolute_paths=True,
+ regex_search=False, entity_mapper=None, path_patterns=None,
+ config_filename='layout.json', include=None, exclude=None):
"""
A container for all the files and metadata found at the specified path.
Args:
- path (str): The root path of the layout.
- config (str, list, dict): A specification of the configuration
- file(s) defining domains to use in the Layout. Must be one of:
+ root (str): Directory that all other paths will be relative to.
+ Every other path the Layout sees must be at this level or below.
+ domains (str, list, dict): A specification of the configuration
+ object(s) defining domains to use in the Layout. Can be one of:
- A dictionary containing config information
- A string giving the path to a JSON file containing the config
- - A list, where each element is one of the above
+ - A string giving the path to a directory containing a
+ configuration file with the name defined in config_filename
+ - A tuple with two elements, where the first element is one of
+ the above (i.e., dict or string), and the second element is
+ an iterable of directories to apply the config to.
+ - A list, where each element is any of the above (dict, string,
+ or tuple).
index (str): Optional path to a saved index file. If a valid value
is passed, this index is used to populate Files and Entities,
@@ -386,7 +392,6 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
raise ValueError("You cannot specify both the include and exclude"
" arguments. Please pass at most one of these.")
- self.root = abspath(path) if absolute_paths else path
self.entities = OrderedDict()
self.files = {}
self.mandatory = set()
@@ -398,91 +403,92 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
self.domains = OrderedDict()
self.include = listify(include or [])
self.exclude = listify(exclude or [])
+ self.absolute_paths = absolute_paths
+ self.root = abspath(root) if absolute_paths else root
if config is not None:
for c in listify(config):
- self._load_domain(c)
+ if isinstance(c, tuple):
+ c, root = c
+ else:
+ root = None
+ self._load_domain(c, root, True)
if index is None:
self.index()
else:
self.load_index(index)
- def _load_domain(self, config, root=None):
+ def _load_domain(self, config, root=None, from_init=False):
if isinstance(config, six.string_types):
+
+ if isdir(config):
+ config = join(config, self.config_filename)
+
+ if not exists(config):
+ raise ValueError("Config file '%s' cannot be found." % config)
+
+ config_filename = config
config = json.load(open(config, 'r'))
+ if root is None and not from_init:
+ root = dirname(abspath(config_filename))
+
if 'name' not in config:
raise ValueError("Config file missing 'name' attribute.")
+
if config['name'] in self.domains:
raise ValueError("Config with name '%s' already exists in "
"Layout. Name of each config file must be "
- "unique across entire Layout.")
- if root is not None:
+ "unique across entire Layout." % config['name'])
+
+ if root is None and from_init:
+ # warnings.warn("No valid root directory found for domain '%s'. "
+ # "Falling back on root directory for Layout (%s)."
+ # % (config['name'], self.root))
+ root = self.root
+
+ if config.get('root') in [None, '.']:
config['root'] = root
- if 'root' not in config:
- warnings.warn("No valid root directory found for domain '%s'."
- " Falling back on the Layout's root directory. "
- "If this isn't the intended behavior, make sure "
- "the config file for this domain includes a "
- "'root' key." % config['name'])
- config['root'] = self.root
- elif config['root'] == '.':
- config['root'] = self.root
- elif not isabs(config['root']):
- _root = config['root']
- config['root'] = join(self.root, config['root'])
- if not exists(config['root']):
- msg = ("Relative path '%s' for domain '%s' interpreted as '%s'"
- ", but this directory doesn't exist. Either specify the"
- " domain root as an absolute path, or make sure it "
- "points to a valid directory when appended to the "
- "Layout's root (%s)." % (_root, config['name'],
- config['root'], self.root))
- raise ValueError(msg)
+ for root in listify(config['root']):
+ if not exists(root):
+ raise ValueError("Root directory %s for domain %s does not "
+ "exist!" % (root, config['name']))
# Load entities
- domain = Domain(config['name'], config, config['root'])
+ domain = Domain(config['name'], config)
for e in config.get('entities', []):
self.add_entity(domain=domain, **e)
self.domains[domain.name] = domain
+ return domain
- def get_domain_entities(self, domains=None, file=None):
+ def get_domain_entities(self, domains=None):
# Get all Entities included in the specified Domains, in the same
- # order as Domains in the list. Alternatively, if a file is passed,
- # identify its domains and then return the entities.
-
- if file is None:
- if domains is None:
- domains = list(self.domains.keys())
- else:
- domains = self._get_domains_for_file(file)
+ # order as Domains in the list.
+ if domains is None:
+ domains = list(self.domains.keys())
ents = {}
for d in domains:
ents.update(self.domains[d].entities)
return ents
- def _check_inclusions(self, f, domains=None):
+ def _check_inclusions(self, f, domains=None, fullpath=True):
''' Check file or directory against regexes in config to determine if
it should be included in the index '''
filename = f if isinstance(f, six.string_types) else f.path
- if os.path.isabs(filename) and filename.startswith(
- self.root + os.path.sep):
- # for filenames under the root - analyze relative path to avoid
- # bringing injustice to the grandkids of some unfortunately named
- # root directories.
- filename = os.path.relpath(filename, self.root)
+ if not fullpath:
+ filename = basename(filename)
if domains is None:
domains = list(self.domains.keys())
- domains = [self.domains[dom] for dom in domains]
+ domains = [self.domains[dom] for dom in listify(domains)]
# Inject the Layout at the first position for global include/exclude
domains.insert(0, self)
@@ -490,15 +496,14 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
# If file matches any include regex, then True
if dom.include:
for regex in dom.include:
- if re.match(regex, filename):
+ if re.search(regex, filename):
return True
return False
else:
# If file matches any exclude regex, then False
for regex in dom.exclude:
- if re.match(regex, filename, flags=re.UNICODE):
+ if re.search(regex, filename, flags=re.UNICODE):
return False
-
return True
def _validate_dir(self, d):
@@ -515,10 +520,11 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
returned, the file will be ignored and dropped from the layout. '''
return True
- def _get_files(self):
+ def _get_files(self, root):
''' Returns all files in project (pre-filtering). Extend this in
subclasses as needed. '''
- return os.walk(self.root, topdown=True)
+ results = [os.walk(r, topdown=True) for r in listify(root)]
+ return list(chain(*results))
def _make_file_object(self, root, f):
''' Initialize a new File oject from a directory and filename. Extend
@@ -531,51 +537,35 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
for ent in self.entities.values():
ent.files = {}
- def _get_domains_for_file(self, f):
- if isinstance(f, File):
- return f.domains
- domains = []
- for d in self.domains.values():
- for path in listify(d.root):
- if f.startswith(path):
- domains.append(d.name)
- break
- return domains
-
- def _index_file(self, root, f, domains=None, update_layout=True):
-
- # If domains aren't explicitly passed, figure out what applies
- if domains is None:
- domains = self._get_domains_for_file(root)
+ def _index_file(self, root, f, domains, update_layout=True):
# Create the file object--allows for subclassing
f = self._make_file_object(root, f)
- if not (self._check_inclusions(f, domains) and self._validate_file(f)):
- return
-
- for d in domains:
- self.domains[d].add_file(f)
-
- entities = self.get_domain_entities(domains)
-
- if entities:
- self.files[f.path] = f
+ for d in listify(domains):
+ if d not in self.domains:
+ raise ValueError("Cannot index file '%s' in domain '%s'; "
+ "no domain with that name exists." %
+ (f.path, d))
+ domain = self.domains[d]
+ match_vals = {}
+ for e in domain.entities.values():
+ m = e.match_file(f)
+ if m is None and e.mandatory:
+ break
+ if m is not None:
+ match_vals[e.name] = (e, m)
- for e in entities.values():
- e.matches(f, update_file=True)
+ if match_vals:
+ for k, (ent, val) in match_vals.items():
+ f.tags[k] = Tag(ent, val)
+ if update_layout:
+ ent.add_file(f.path, val)
- file_ents = f.tags.keys()
+ if update_layout:
+ domain.add_file(f)
- # Only keep Files that match at least one Entity, and all
- # mandatory Entities
- if update_layout and file_ents and not (self.mandatory -
- set(file_ents)):
- self.files[f.path] = f
- # Bind the File to all of the matching entities
- for name, tag in f.tags.items():
- ent_id = tag.entity.id
- self.entities[ent_id].add_file(f.path, tag.value)
+ self.files[f.path] = f
return f
@@ -598,50 +588,62 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
self._reset_index()
- dataset = self._get_files()
-
- # Loop over all files
- for root, directories, filenames in dataset:
-
- # Determine which Domains apply to the current directory
- domains = self._get_domains_for_file(root)
-
- # Exclude directories that match exclude regex from further search
- full_dirs = [os.path.join(root, d) for d in directories]
-
- def check_incl(directory):
- return self._check_inclusions(directory, domains)
-
- full_dirs = filter(check_incl, full_dirs)
- full_dirs = filter(self._validate_dir, full_dirs)
- directories[:] = [split(d)[1] for d in full_dirs]
-
- if self.config_filename in filenames:
- config_path = os.path.join(root, self.config_filename)
- config = json.load(open(config_path, 'r'))
- cfg_root = config.get('root', root)
- if cfg_root == '.':
- cfg_root = root
- self._load_domain(config, root=cfg_root)
-
- # Filter Domains if current dir's config file has an
- # include directive
- if 'domains' in config:
- missing = set(config['domains']) - set(domains)
- if missing:
- msg = ("Missing configs '%s' specified in include "
- "directive of config '%s'. Please make sure "
- "these config files are accessible from the "
- "directory %s.") % (missing, config['name'],
- root)
- raise ValueError(msg)
- domains = config['domains']
- domains.append(config['name'])
-
- filenames.remove(self.config_filename)
-
- for f in filenames:
- self._index_file(root, f, domains)
+ # Track all candidate files
+ files_to_index = defaultdict(set)
+
+ # Track any additional config files we run into
+ extra_configs = []
+
+ def _index_domain_files(dom):
+
+ doms_to_add = set(dom.config.get('domains', []) + [dom.name])
+
+ dataset = self._get_files(dom.config['root'])
+
+ # Loop over all files in domain
+ for root, directories, filenames in dataset:
+
+ def check_incl(f):
+ return self._check_inclusions(f, dom.name)
+
+ # Exclude directories that match exclude regex
+ full_dirs = [join(root, d) for d in directories]
+ full_dirs = filter(check_incl, full_dirs)
+ full_dirs = filter(self._validate_dir, full_dirs)
+ directories[:] = [split(d)[1] for d in full_dirs]
+
+ for f in filenames:
+ full_path = join(root, f)
+ # Add config file to tracking
+ if f == self.config_filename:
+ if full_path not in extra_configs:
+ extra_configs.append(full_path)
+ # Add file to the candidate index
+ elif (self._check_inclusions(full_path, dom.name) and
+ self._validate_file(full_path)):
+ # If the file is below the Layout root, use a relative
+ # path. Otherwise, use absolute path.
+ if full_path.startswith(self.root) and not \
+ self.absolute_paths:
+ full_path = relpath(full_path, self.root)
+ files_to_index[full_path] |= doms_to_add
+
+ for dom in self.domains.values():
+ _index_domain_files(dom)
+
+ # Set up any additional configs we found. Note that in edge cases,
+ # this approach has the potential to miss out on some configs, because
+ # it doesn't recurse. This will generally only happen under fairly
+ # weird circumstances though (e.g., the config file points to another
+ # root elsewhere in the filesystem, or there are inconsistent include/
+ # exclude directives across nested configs), so this will do for now.
+ for dom in extra_configs:
+ dom = self._load_domain(dom)
+ _index_domain_files(dom)
+
+ for filename, domains in files_to_index.items():
+ _dir, _base = split(filename)
+ self._index_file(_dir, _base, list(domains))
def save_index(self, filename):
''' Save the current Layout's index to a .json file.
@@ -656,7 +658,7 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
data = {}
for f in self.files.values():
entities = {v.entity.id: v.value for k, v in f.tags.items()}
- data[f.path] = entities
+ data[f.path] = {'domains': f.domains, 'entities': entities}
with open(filename, 'w') as outfile:
json.dump(data, outfile)
@@ -679,15 +681,13 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
self._reset_index()
data = json.load(open(filename, 'r'))
- for path, ents in data.items():
+ for path, file in data.items():
- # If file path isn't absolute, assume it's relative to layout root
- if not isabs(path):
- path = join(self.root, path)
+ ents, domains = file['entities'], file['domains']
root, f = dirname(path), basename(path)
if reindex:
- self._index_file(root, f)
+ self._index_file(root, f, domains)
else:
f = self._make_file_object(root, f)
tags = {k: Tag(self.entities[k], v) for k, v in ents.items()}
@@ -717,8 +717,10 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
if ent.mandatory:
self.mandatory.add(ent.id)
+
if ent.directory is not None:
ent.directory = ent.directory.replace('{{root}}', self.root)
+
self.entities[ent.id] = ent
for alias in ent.aliases:
self.entities[alias] = ent
@@ -1014,12 +1016,13 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
for f in _files:
f.copy(path_patterns, symbolic_link=symbolic_links,
- root=root, conflicts=conflicts)
+ root=self.root, conflicts=conflicts)
def write_contents_to_file(self, entities, path_patterns=None,
contents=None, link_to=None,
content_mode='text', conflicts='fail',
- strict=False, domains=None):
+ strict=False, domains=None, index=False,
+ index_domains=None):
"""
Write arbitrary data to a file defined by the passed entities and
path patterns.
@@ -1044,19 +1047,39 @@ class Layout(six.with_metaclass(LayoutMetaclass, object)):
domains (list): List of Domains to scan for path_patterns. Order
determines precedence (i.e., earlier Domains will be scanned
first). If None, all available domains are included.
+ index (bool): If True, adds the generated file to the current
+ index using the domains specified in index_domains.
+ index_domains (list): List of domain names to attach the generated
+ file to when indexing. Ignored if index == False. If None,
+ All available domains are used.
"""
- if not path_patterns:
- path_patterns = self.path_patterns
+ if path_patterns:
+ path = build_path(entities, path_patterns, strict)
+ else:
+ path_patterns = [self.path_patterns]
if domains is None:
domains = list(self.domains.keys())
for dom in domains:
- path_patterns.extend(self.domains[dom].path_patterns)
- path = build_path(entities, path_patterns, strict)
+ path_patterns.append(self.domains[dom].path_patterns)
+ for pp in path_patterns:
+ path = build_path(entities, pp, strict)
+ if path is not None:
+ break
+
+ if path is None:
+ raise ValueError("Cannot construct any valid filename for "
+ "the passed entities given available path "
+ "patterns.")
+
write_contents_to_file(path, contents=contents, link_to=link_to,
content_mode=content_mode, conflicts=conflicts,
root=self.root)
- self._index_file(self.root, path)
+
+ if index:
+ if index_domains is None:
+ index_domains = list(self.domains.keys())
+ self._index_file(self.root, path, index_domains)
def merge_layouts(layouts):
| Domain root should default to directory containing config file
It looks like if no root is specified in a directory-specific domain (or if `root == '.'`) then the root of that domain is set to the _global_ (`Layout`-level) root. I'm confused by this behavior. I'd have intuitively thought that the domain's root would be the directory its config file is taken from by default, and that relative paths would be relative to _that_ directory, rather than the layout root. In fact, it seems to go against the purpose of the domains mechanism to have to specify the relative path of the domain from the layout root in the domain-specific config file, when that information should be taken from the location of that file (by default). Am I misunderstanding something?
As an example, adapted from the examples notebook:
```python
stamps = gb.Layout('../grabbit/tests/data/valuable_stamps/', absolute_paths=False, config_filename='dir_config.json', config='../grabbit/tests/specs/stamps.json')
[(k, d.root) for (k, d) in stamps.domains.items()]
```
gives
```
[('stamps', '../grabbit/tests/data/valuable_stamps/'),
('usa_stamps', '../grabbit/tests/data/valuable_stamps/')]
```
while I would have expected
```
[('stamps', '../grabbit/tests/data/valuable_stamps/'),
('usa_stamps', '../grabbit/tests/data/valuable_stamps/USA/')]
```
| grabbles/grabbit | diff --git a/grabbit/tests/data/ordinary_stamps/name=1_Lotus#value=1#country=Canada.txt b/grabbit/tests/data/ordinary_stamps/name=1_Lotus#value=1#country=Canada.txt
new file mode 100644
index 0000000..e69de29
diff --git a/grabbit/tests/data/ordinary_stamps/name=50c_Forever_Stamp_2018#value=0.5#country=USA.txt b/grabbit/tests/data/ordinary_stamps/name=50c_Forever_Stamp_2018#value=0.5#country=USA.txt
new file mode 100644
index 0000000..e69de29
diff --git a/grabbit/tests/misc/index.json b/grabbit/tests/misc/index.json
index c3bbf1d..39f4f9b 100644
--- a/grabbit/tests/misc/index.json
+++ b/grabbit/tests/misc/index.json
@@ -1,130 +1,250 @@
{
"dataset_description.json": {
- "test.type": "description"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "description"
+ }
},
"participants.tsv": {
- "test.type": "trt/participants"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "trt/participants"
+ }
},
"task-rest_acq-fullbrain_bold.json": {
- "test.type": "bold",
- "test.task": "rest_acq"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "bold",
+ "test.task": "rest_acq"
+ }
},
"task-rest_acq-fullbrain_run-1_physio.json": {
- "test.run": "1",
- "test.type": "physio",
- "test.task": "rest_acq",
- "test.acquisition": "fullbrain_run"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.run": "1",
+ "test.type": "physio",
+ "test.task": "rest_acq",
+ "test.acquisition": "fullbrain_run"
+ }
},
"task-rest_acq-fullbrain_run-2_physio.json": {
- "test.run": "2",
- "test.type": "physio",
- "test.task": "rest_acq",
- "test.acquisition": "fullbrain_run"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.run": "2",
+ "test.type": "physio",
+ "test.task": "rest_acq",
+ "test.acquisition": "fullbrain_run"
+ }
},
"task-rest_acq-prefrontal_bold.json": {
- "test.type": "bold",
- "test.task": "rest_acq"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "bold",
+ "test.task": "rest_acq"
+ }
},
"task-rest_acq-prefrontal_physio.json": {
- "test.type": "physio",
- "test.task": "rest_acq"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "physio",
+ "test.task": "rest_acq"
+ }
},
"test.bval": {
- "test.type": "trt/test",
- "test.bval": "test.bval"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "trt/test",
+ "test.bval": "test.bval"
+ }
},
"models/excluded_model.json": {
- "test.type": "model"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.type": "model"
+ }
},
"sub-01/sub-01_sessions.tsv": {
- "test.subject": "01",
- "test.type": "sessions"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.type": "sessions"
+ }
},
"sub-01/ses-1/sub-01_ses-1_scans.tsv": {
- "test.subject": "01",
- "test.session": "1",
- "test.type": "scans"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.type": "scans"
+ }
},
"sub-01/ses-1/anat/sub-01_ses-1_T1map.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.type": "T1map"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.type": "T1map"
+ }
},
"sub-01/ses-1/anat/sub-01_ses-1_T1w.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.type": "T1w"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.type": "T1w"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-1_magnitude1.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "magnitude1"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "magnitude1"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-1_magnitude2.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "magnitude2"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "magnitude2"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-1_phasediff.json": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "phasediff"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "phasediff"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-1_phasediff.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "phasediff"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "phasediff"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-2_magnitude1.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "2",
- "test.type": "magnitude1"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "2",
+ "test.type": "magnitude1"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-2_magnitude2.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "2",
- "test.type": "magnitude2"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "2",
+ "test.type": "magnitude2"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-2_phasediff.json": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "2",
- "test.type": "phasediff"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "2",
+ "test.type": "phasediff"
+ }
},
"sub-01/ses-1/fmap/sub-01_ses-1_run-2_phasediff.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "2",
- "test.type": "phasediff"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "2",
+ "test.type": "phasediff"
+ }
},
"sub-01/ses-1/func/sub-01_ses-1_task-rest_acq-fullbrain_run-1_bold.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "bold",
- "test.task": "rest_acq",
- "test.acquisition": "fullbrain_run"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "bold",
+ "test.task": "rest_acq",
+ "test.acquisition": "fullbrain_run"
+ }
},
"sub-01/ses-1/func/sub-01_ses-1_task-rest_acq-fullbrain_run-1_physio.tsv.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "1",
- "test.type": "physio",
- "test.task": "rest_acq",
- "test.acquisition": "fullbrain_run"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "1",
+ "test.type": "physio",
+ "test.task": "rest_acq",
+ "test.acquisition": "fullbrain_run"
+ }
},
"sub-01/ses-1/func/sub-01_ses-1_task-rest_acq-fullbrain_run-2_bold.nii.gz": {
- "test.subject": "01",
- "test.session": "1",
- "test.run": "2",
- "test.type": "bold",
- "test.task": "rest_acq",
- "test.acquisition": "fullbrain_run"
+ "domains": [
+ "test"
+ ],
+ "entities": {
+ "test.subject": "01",
+ "test.session": "1",
+ "test.run": "2",
+ "test.type": "bold",
+ "test.task": "rest_acq",
+ "test.acquisition": "fullbrain_run"
+ }
}
-}
+}
\ No newline at end of file
diff --git a/grabbit/tests/specs/test.json b/grabbit/tests/specs/test.json
index 39ed11a..75ec8f4 100644
--- a/grabbit/tests/specs/test.json
+++ b/grabbit/tests/specs/test.json
@@ -5,14 +5,14 @@
{
"name": "subject",
"pattern": "sub-(\\d+)",
- "directory": "{{root}}/{subject}",
+ "directory": "{subject}",
"dtype": "str"
},
{
"name": "session",
"pattern": "ses-0*(\\d+)",
"mandatory": false,
- "directory": "{{root}}/{subject}/{session}",
+ "directory": "{subject}/{session}",
"missing_value": "ses-1"
},
{
diff --git a/grabbit/tests/test_core.py b/grabbit/tests/test_core.py
index d13da59..269140d 100644
--- a/grabbit/tests/test_core.py
+++ b/grabbit/tests/test_core.py
@@ -114,9 +114,8 @@ class TestEntity:
tmpdir.mkdir("tmp").join(filename).write("###")
f = File(join(str(tmpdir), filename))
e = Entity('avaricious', 'aardvark-(\d+)')
- e.matches(f, update_file=True)
- assert 'avaricious' in f.entities
- assert f.entities['avaricious'] == '4'
+ result = e.match_file(f)
+ assert result == '4'
def test_unique_and_count(self):
e = Entity('prop', '-(\d+)')
@@ -138,10 +137,6 @@ class TestEntity:
class TestLayout:
def test_init(self, bids_layout):
- if hasattr(bids_layout, '_hdfs_client'):
- assert bids_layout._hdfs_client.list(bids_layout.root)
- else:
- assert os.path.exists(bids_layout.root)
assert isinstance(bids_layout.files, dict)
assert isinstance(bids_layout.entities, dict)
assert isinstance(bids_layout.mandatory, set)
@@ -169,26 +164,51 @@ class TestLayout:
assert sub_file in bids_layout.files
assert sub_file not in layout.files
+ def test_init_with_config_options(self):
+ root = join(DIRNAME, 'data')
+ dir1 = join(root, 'valuable_stamps')
+ dir2 = join(root, 'ordinary_stamps')
+ config1 = join(DIRNAME, 'specs', 'stamps.json')
+ config2 = join(dir1, 'USA', 'dir_config.json')
+
+ # Fails because Domain usa_stamps is included twice
+ with pytest.raises(ValueError) as e:
+ layout = Layout(root, [config1, config2], exclude=['7t_trt'],
+ config_filename='dir_config.json')
+ assert e.value.message.startswith('Config with name')
+
+ # Test with two configs
+ layout = Layout(root, [config1, config2], exclude=['7t_trt'])
+ files = [f.filename for f in layout.files.values()]
+ assert 'name=Inverted_Jenny#value=75000#country=USA.txt' in files
+ assert 'name=5c_Francis_E_Willard#value=1dollar.txt' in files
+ assert 'name=1_Lotus#value=1#country=Canada.txt' in files
+
+ # Test with two configs and on-the-fly directory remapping
+ layout = Layout(dir1, [(config1, [dir1, dir2])],
+ exclude=['USA/'])
+ files = [f.filename for f in layout.files.values()]
+ assert 'name=Inverted_Jenny#value=75000#country=USA.txt' in files
+ assert 'name=5c_Francis_E_Willard#value=1dollar.txt' not in files
+ assert 'name=1_Lotus#value=1#country=Canada.txt' in files
+
def test_absolute_paths(self, bids_layout):
- result = bids_layout.get(subject=1, run=1, session=1)
- assert result # that we got some entries
- assert all([os.path.isabs(f.filename) for f in result])
if not hasattr(bids_layout, '_hdfs_client'):
root = join(DIRNAME, 'data', '7t_trt')
root = os.path.relpath(root)
config = join(DIRNAME, 'specs', 'test.json')
- layout = Layout(root, config, absolute_paths=False)
+ layout = Layout(root, config, absolute_paths=True)
result = layout.get(subject=1, run=1, session=1)
assert result
- assert not any([os.path.isabs(f.filename) for f in result])
+ assert all([os.path.isabs(f.filename) for f in result])
- layout = Layout(root, config, absolute_paths=True)
+ layout = Layout(root, config, absolute_paths=False)
result = layout.get(subject=1, run=1, session=1)
assert result
- assert all([os.path.isabs(f.filename) for f in result])
+ assert not any([os.path.isabs(f.filename) for f in result])
# Should always be absolute paths on HDFS
else:
@@ -250,8 +270,8 @@ class TestLayout:
if hasattr(bids_layout, '_hdfs_client'):
assert bids_layout._hdfs_client.list(bids_layout.root)
else:
- assert os.path.exists(result[0])
- assert os.path.isdir(result[0])
+ assert os.path.exists(join(bids_layout.root, result[0]))
+ assert os.path.isdir(join(bids_layout.root, result[0]))
result = bids_layout.get(target='subject', type='phasediff',
return_type='file')
@@ -259,7 +279,8 @@ class TestLayout:
if hasattr(bids_layout, '_hdfs_client'):
assert all([bids_layout._hdfs_client.content(f) for f in result])
else:
- assert all([os.path.exists(f) for f in result])
+ assert all([os.path.exists(join(bids_layout.root, f))
+ for f in result])
def test_natsort(self, bids_layout):
result = bids_layout.get(target='subject', return_type='id')
@@ -279,7 +300,7 @@ class TestLayout:
nearest = bids_layout.get_nearest(
result, type='sessions', extensions='tsv',
ignore_strict_entities=['type'])
- target = join('7t_trt', 'sub-01', 'sub-01_sessions.tsv')
+ target = join('sub-01', 'sub-01_sessions.tsv')
assert target in nearest
nearest = bids_layout.get_nearest(
result, extensions='tsv', all_=True,
@@ -292,11 +313,11 @@ class TestLayout:
assert nearest[0].subject == '01'
def test_index_regex(self, bids_layout, layout_include):
- targ = join(bids_layout.root, 'derivatives', 'excluded.json')
+ targ = join('derivatives', 'excluded.json')
assert targ not in bids_layout.files
- targ = join(layout_include.root, 'models',
- 'excluded_model.json')
- assert targ not in layout_include.files
+ domain = layout_include.domains['test_with_includes']
+ targ = join('models', 'excluded_model.json')
+ assert targ not in domain.files
with pytest.raises(ValueError):
layout_include._load_domain({'entities': [],
@@ -315,7 +336,8 @@ class TestLayout:
for i in range(10):
f = files[i]
entities = {v.entity.id: v.value for v in f.tags.values()}
- assert entities == index[f.path]
+ assert entities == index[f.path]['entities']
+ assert list(f.domains) == index[f.path]['domains']
os.unlink(tmp)
def test_load_index(self, bids_layout):
@@ -341,8 +363,7 @@ class TestLayout:
return str(hash(file.path)) + '.hsh'
root = join(DIRNAME, 'data', '7t_trt')
- config = join(DIRNAME, 'specs',
- 'test_with_mapper.json')
+ config = join(DIRNAME, 'specs', 'test_with_mapper.json')
# Test with external mapper
em = EntityMapper()
@@ -363,7 +384,7 @@ class TestLayout:
def test_clone(self, bids_layout):
lc = bids_layout.clone()
- attrs = ['root', 'mandatory', 'dynamic_getters', 'regex_search',
+ attrs = ['mandatory', 'dynamic_getters', 'regex_search',
'entity_mapper']
for a in attrs:
assert getattr(bids_layout, a) == getattr(lc, a)
@@ -374,7 +395,8 @@ class TestLayout:
root = tmpdir.mkdir("ohmyderivatives").mkdir("ds")
config = join(DIRNAME, 'specs', 'test.json')
layout = Layout(str(root), config, regex_search=True)
- assert layout._check_inclusions(str(root.join("ohmyimportantfile")))
+ assert layout._check_inclusions(str(root.join("ohmyimportantfile")),
+ fullpath=False)
assert not layout._check_inclusions(
str(root.join("badbadderivatives")))
@@ -383,6 +405,7 @@ class TestLayout:
assert {'stamps', 'usa_stamps'} == set(layout.domains.keys())
usa = layout.domains['usa_stamps']
general = layout.domains['stamps']
+ print([f.filename for f in usa.files])
assert len(usa.files) == 3
assert len(layout.files) == len(general.files)
assert not set(usa.files) - set(general.files)
diff --git a/grabbit/tests/test_writable.py b/grabbit/tests/test_writable.py
index 68b4442..5b1a8bf 100644
--- a/grabbit/tests/test_writable.py
+++ b/grabbit/tests/test_writable.py
@@ -18,7 +18,7 @@ def writable_file(tmpdir):
def layout():
data_dir = join(dirname(__file__), 'data', '7t_trt')
config = join(dirname(__file__), 'specs', 'test.json')
- layout = Layout(data_dir, config=config)
+ layout = Layout(data_dir, config, absolute_paths=False)
return layout
@@ -176,8 +176,10 @@ class TestWritableLayout:
data_dir = join(dirname(__file__), 'data', '7t_trt')
entities = {'subject': 'Bob', 'session': '01'}
pat = join('sub-{subject}/sess-{session}/desc.txt')
+
+ # With indexing
layout.write_contents_to_file(entities, path_patterns=pat,
- contents=contents)
+ contents=contents, index=True)
target = join(data_dir, 'sub-Bob/sess-01/desc.txt')
assert exists(target)
with open(target) as f:
@@ -186,11 +188,23 @@ class TestWritableLayout:
assert target in layout.files
shutil.rmtree(join(data_dir, 'sub-Bob'))
+ # Without indexing
+ pat = join('sub-{subject}/sess-{session}/desc_no_index.txt')
+ layout.write_contents_to_file(entities, path_patterns=pat,
+ contents=contents, index=False)
+ target = join(data_dir, 'sub-Bob/sess-01/desc_no_index.txt')
+ assert exists(target)
+ with open(target) as f:
+ written = f.read()
+ assert written == contents
+ assert target not in layout.files
+ shutil.rmtree(join(data_dir, 'sub-Bob'))
+
def test_write_contents_to_file_defaults(self, layout):
contents = 'test'
data_dir = join(dirname(__file__), 'data', '7t_trt')
config = join(dirname(__file__), 'specs', 'test.json')
- layout = Layout(data_dir, config=[config, {
+ layout = Layout(data_dir, [config, {
'name': "test_writable",
'default_path_patterns': ['sub-{subject}/ses-{session}/{subject}'
'{session}{run}{type}{task}{acquisition}'
@@ -199,7 +213,7 @@ class TestWritableLayout:
entities = {'subject': 'Bob', 'session': '01', 'run': '1',
'type': 'test', 'task': 'test', 'acquisition': 'test',
'bval': 0}
- layout.write_contents_to_file(entities, contents=contents)
+ layout.write_contents_to_file(entities, contents=contents, index=True)
target = join(data_dir, 'sub-Bob/ses-01/Bob011testtesttest0')
assert exists(target)
with open(target) as f:
@@ -218,6 +232,6 @@ class TestWritableLayout:
data_dir = join(dirname(__file__), 'data', '7t_trt')
filename = 'sub-04_ses-1_task-rest_acq-fullbrain_run-1_physio.tsv.gz'
- file = join(data_dir, 'sub-04', 'ses-1', 'func', filename)
+ file = join('sub-04', 'ses-1', 'func', filename)
path = layout.build_path(file, path_patterns=pat)
assert path.endswith('sub-04/sess-1/r-1.nii.gz')
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"six"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
exceptiongroup==1.2.2
-e git+https://github.com/grabbles/grabbit.git@3fe38c7e7eb510a38e6c2d072bdc913aaa1b7389#egg=grabbit
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
six==1.17.0
tomli==2.2.1
| name: grabbit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- six==1.17.0
- tomli==2.2.1
prefix: /opt/conda/envs/grabbit
| [
"grabbit/tests/test_core.py::TestEntity::test_matches",
"grabbit/tests/test_core.py::TestLayout::test_save_index[local]",
"grabbit/tests/test_core.py::TestLayout::test_load_index[local]",
"grabbit/tests/test_core.py::TestLayout::test_init_with_config_options",
"grabbit/tests/test_core.py::TestLayout::test_excludes",
"grabbit/tests/test_writable.py::TestWritableLayout::test_write_contents_to_file",
"grabbit/tests/test_writable.py::TestWritableLayout::test_write_contents_to_file_defaults",
"grabbit/tests/test_writable.py::TestWritableLayout::test_build_file_from_layout"
]
| []
| [
"grabbit/tests/test_core.py::TestFile::test_init",
"grabbit/tests/test_core.py::TestFile::test_matches",
"grabbit/tests/test_core.py::TestFile::test_named_tuple",
"grabbit/tests/test_core.py::TestFile::test_named_tuple_with_reserved_name",
"grabbit/tests/test_core.py::TestEntity::test_init",
"grabbit/tests/test_core.py::TestEntity::test_unique_and_count",
"grabbit/tests/test_core.py::TestEntity::test_add_file",
"grabbit/tests/test_core.py::TestLayout::test_init[local]",
"grabbit/tests/test_core.py::TestLayout::test_init_with_include_arg[local]",
"grabbit/tests/test_core.py::TestLayout::test_init_with_exclude_arg[local]",
"grabbit/tests/test_core.py::TestLayout::test_absolute_paths[local]",
"grabbit/tests/test_core.py::TestLayout::test_querying[local]",
"grabbit/tests/test_core.py::TestLayout::test_natsort[local]",
"grabbit/tests/test_core.py::TestLayout::test_unique_and_count[local]",
"grabbit/tests/test_core.py::TestLayout::test_get_nearest[local]",
"grabbit/tests/test_core.py::TestLayout::test_index_regex[local]",
"grabbit/tests/test_core.py::TestLayout::test_clone[local]",
"grabbit/tests/test_core.py::TestLayout::test_parse_file_entities[local]",
"grabbit/tests/test_core.py::test_merge_layouts[local]",
"grabbit/tests/test_core.py::TestLayout::test_dynamic_getters[/grabbit/grabbit/tests/data/7t_trt-/grabbit/grabbit/tests/specs/test.json]",
"grabbit/tests/test_core.py::TestLayout::test_entity_mapper",
"grabbit/tests/test_core.py::TestLayout::test_multiple_domains",
"grabbit/tests/test_core.py::TestLayout::test_get_by_domain",
"grabbit/tests/test_writable.py::TestWritableFile::test_build_path",
"grabbit/tests/test_writable.py::TestWritableFile::test_strict_build_path",
"grabbit/tests/test_writable.py::TestWritableFile::test_build_file",
"grabbit/tests/test_writable.py::TestWritableLayout::test_write_files"
]
| []
| MIT License | 2,437 | [
"grabbit/core.py"
]
| [
"grabbit/core.py"
]
|
|
elastic__rally-481 | ebc10f53af246b3e34f751c1346aec9ed800981e | 2018-04-23 07:18:32 | ebc10f53af246b3e34f751c1346aec9ed800981e | diff --git a/esrally/mechanic/launcher.py b/esrally/mechanic/launcher.py
index 19e77cf8..a946e177 100644
--- a/esrally/mechanic/launcher.py
+++ b/esrally/mechanic/launcher.py
@@ -31,12 +31,35 @@ def wait_for_rest_layer(es, max_attempts=20):
class ClusterLauncher:
- def __init__(self, cfg, metrics_store, client_factory_class=client.EsClientFactory):
+ """
+ The cluster launcher performs cluster-wide tasks that need to be done in the startup / shutdown phase.
+
+ """
+ def __init__(self, cfg, metrics_store, on_post_launch=None, client_factory_class=client.EsClientFactory):
+ """
+
+ Creates a new ClusterLauncher.
+
+ :param cfg: The config object.
+ :param metrics_store: A metrics store that is configured to receive system metrics.
+ :param on_post_launch: An optional function that takes the Elasticsearch client as a parameter. It is invoked after the
+ REST API is available.
+ :param client_factory_class: A factory class that can create an Elasticsearch client.
+ """
self.cfg = cfg
self.metrics_store = metrics_store
+ self.on_post_launch = on_post_launch
self.client_factory = client_factory_class
def start(self):
+ """
+ Performs final startup tasks.
+
+ Precondition: All cluster nodes have been started.
+ Postcondition: The cluster is ready to receive HTTP requests or a ``LaunchError`` is raised.
+
+ :return: A representation of the launched cluster.
+ """
enabled_devices = self.cfg.opts("mechanic", "telemetry.devices")
telemetry_params = self.cfg.opts("mechanic", "telemetry.params")
hosts = self.cfg.opts("client", "hosts")
@@ -64,10 +87,16 @@ class ClusterLauncher:
logger.error("REST API layer is not yet available. Forcefully terminating cluster.")
self.stop(c)
raise exceptions.LaunchError("Elasticsearch REST API layer is not available. Forcefully terminated cluster.")
-
+ if self.on_post_launch:
+ self.on_post_launch(es)
return c
def stop(self, c):
+ """
+ Performs cleanup tasks. This method should be called before nodes are shut down.
+
+ :param c: The cluster that is about to be stopped.
+ """
c.telemetry.detach_from_cluster(c)
diff --git a/esrally/mechanic/mechanic.py b/esrally/mechanic/mechanic.py
index 08fdd111..aa0e6d4b 100644
--- a/esrally/mechanic/mechanic.py
+++ b/esrally/mechanic/mechanic.py
@@ -226,6 +226,7 @@ class MechanicActor(actor.RallyActor):
self.race_control = None
self.cluster_launcher = None
self.cluster = None
+ self.plugins = None
def receiveUnrecognizedMessage(self, msg, sender):
logger.info("MechanicActor#receiveMessage unrecognized(msg = [%s] sender = [%s])" % (str(type(msg)), str(sender)))
@@ -256,6 +257,7 @@ class MechanicActor(actor.RallyActor):
cls = metrics.metrics_store_class(self.cfg)
self.metrics_store = cls(self.cfg)
self.metrics_store.open(ctx=msg.open_metrics_context)
+ _, self.plugins = load_team(self.cfg, msg.external)
# In our startup procedure we first create all mechanics. Only if this succeeds we'll continue.
hosts = self.cfg.opts("client", "hosts")
@@ -345,7 +347,8 @@ class MechanicActor(actor.RallyActor):
self.transition_when_all_children_responded(sender, msg, "cluster_stopping", "cluster_stopped", self.on_all_nodes_stopped)
def on_all_nodes_started(self):
- self.cluster_launcher = launcher.ClusterLauncher(self.cfg, self.metrics_store)
+ plugin_handler = PostLaunchPluginHandler(self.plugins)
+ self.cluster_launcher = launcher.ClusterLauncher(self.cfg, self.metrics_store, on_post_launch=plugin_handler)
# Workaround because we could raise a LaunchError here and thespian will attempt to retry a failed message.
# In that case, we will get a followup RallyAssertionError because on the second attempt, Rally will check
# the status which is now "nodes_started" but we expected the status to be "nodes_starting" previously.
@@ -392,6 +395,21 @@ class MechanicActor(actor.RallyActor):
# do not self-terminate, let the parent actor handle this
+class PostLaunchPluginHandler:
+ def __init__(self, plugins, hook_handler_class=team.PluginBootstrapHookHandler):
+ self.handlers = []
+ if plugins:
+ for plugin in plugins:
+ handler = hook_handler_class(plugin)
+ if handler.can_load():
+ handler.load()
+ self.handlers.append(handler)
+
+ def __call__(self, client):
+ for handler in self.handlers:
+ handler.invoke(team.PluginBootstrapPhase.post_launch.name, client=client)
+
+
@thespian.actors.requireCapability('coordinator')
class Dispatcher(thespian.actors.ActorTypeDispatcher):
def __init__(self):
@@ -590,11 +608,7 @@ class NodeMechanicActor(actor.RallyActor):
# Internal API (only used by the actor and for tests)
#####################################################
-def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=False, build=False, distribution=False, external=False,
- docker=False):
- races_root = paths.races_root(cfg)
- challenge_root_path = paths.race_root(cfg)
- node_ids = cfg.opts("provisioning", "node.ids", mandatory=False)
+def load_team(cfg, external):
# externally provisioned clusters do not support cars / plugins
if external:
car = None
@@ -603,6 +617,15 @@ def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=Fals
team_path = team.team_path(cfg)
car = team.load_car(team_path, cfg.opts("mechanic", "car.names"), cfg.opts("mechanic", "car.params"))
plugins = team.load_plugins(team_path, cfg.opts("mechanic", "car.plugins"), cfg.opts("mechanic", "plugin.params"))
+ return car, plugins
+
+
+def create(cfg, metrics_store, all_node_ips, cluster_settings=None, sources=False, build=False, distribution=False, external=False,
+ docker=False):
+ races_root = paths.races_root(cfg)
+ challenge_root_path = paths.race_root(cfg)
+ node_ids = cfg.opts("provisioning", "node.ids", mandatory=False)
+ car, plugins = load_team(cfg, external)
if sources or distribution:
s = supplier.create(cfg, sources, distribution, build, challenge_root_path, plugins)
diff --git a/esrally/mechanic/provisioner.py b/esrally/mechanic/provisioner.py
index 4a1c3b38..2942caae 100644
--- a/esrally/mechanic/provisioner.py
+++ b/esrally/mechanic/provisioner.py
@@ -2,12 +2,12 @@ import os
import glob
import shutil
import logging
-from enum import Enum
import jinja2
from esrally import exceptions
-from esrally.utils import io, console, process, modules, versions
+from esrally.mechanic import team
+from esrally.utils import io, console, process, versions
logger = logging.getLogger("rally.provisioner")
@@ -102,21 +102,6 @@ def cleanup(preserve, install_dir, data_paths):
logger.exception("Could not delete [%s]. Skipping..." % install_dir)
-class ProvisioningPhase(Enum):
- post_install = 10
-
- @classmethod
- def valid(cls, name):
- for n in ProvisioningPhase.names():
- if n == name:
- return True
- return False
-
- @classmethod
- def names(cls):
- return [p.name for p in list(ProvisioningPhase)]
-
-
def _apply_config(source_root_path, target_root_path, config_vars):
for root, dirs, files in os.walk(source_root_path):
env = jinja2.Environment(loader=jinja2.FileSystemLoader(root))
@@ -173,7 +158,7 @@ class BareProvisioner:
for installer in self.plugin_installers:
# Never let install hooks modify our original provisioner variables and just provide a copy!
- installer.invoke_install_hook(ProvisioningPhase.post_install, provisioner_vars.copy())
+ installer.invoke_install_hook(team.PluginBootstrapPhase.post_install, provisioner_vars.copy())
return NodeConfiguration(self.es_installer.car, self.es_installer.node_ip, self.es_installer.node_name,
self.es_installer.node_root_dir, self.es_installer.es_home_path, self.es_installer.node_log_dir,
@@ -297,52 +282,8 @@ class ElasticsearchInstaller:
return [os.path.join(self.es_home_path, "data")]
-class InstallHookHandler:
- def __init__(self, plugin, loader_class=modules.ComponentLoader):
- self.plugin = plugin
- # Don't allow the loader to recurse. The subdirectories may contain Elasticsearch specific files which we do not want to add to
- # Rally's Python load path. We may need to define a more advanced strategy in the future.
- self.loader = loader_class(root_path=self.plugin.root_path, component_entry_point="plugin", recurse=False)
- self.hooks = {}
-
- def can_load(self):
- return self.loader.can_load()
-
- def load(self):
- root_module = self.loader.load()
- try:
- # every module needs to have a register() method
- root_module.register(self)
- except exceptions.RallyError:
- # just pass our own exceptions transparently.
- raise
- except BaseException:
- msg = "Could not load install hooks in [%s]" % self.loader.root_path
- logger.exception(msg)
- raise exceptions.SystemSetupError(msg)
-
- def register(self, phase, hook):
- logger.info("Registering install hook [%s] for phase [%s] in plugin [%s]" % (hook.__name__, phase, self.plugin.name))
- if not ProvisioningPhase.valid(phase):
- raise exceptions.SystemSetupError("Provisioning phase [%s] is unknown. Valid phases are: %s." %
- (phase, ProvisioningPhase.names()))
- if phase not in self.hooks:
- self.hooks[phase] = []
- self.hooks[phase].append(hook)
-
- def invoke(self, phase, variables):
- if phase in self.hooks:
- logger.info("Invoking phase [%s] for plugin [%s] in config [%s]" % (phase, self.plugin.name, self.plugin.config))
- for hook in self.hooks[phase]:
- logger.info("Invoking hook [%s]." % hook.__name__)
- # hooks should only take keyword arguments to be forwards compatible with Rally!
- hook(config_names=self.plugin.config, variables=variables)
- else:
- logger.debug("Plugin [%s] in config [%s] has no hook registered for phase [%s]." % (self.plugin.name, self.plugin.config, phase))
-
-
class PluginInstaller:
- def __init__(self, plugin, hook_handler_class=InstallHookHandler):
+ def __init__(self, plugin, hook_handler_class=team.PluginBootstrapHookHandler):
self.plugin = plugin
self.hook_handler = hook_handler_class(self.plugin)
if self.hook_handler.can_load():
@@ -371,7 +312,7 @@ class PluginInstaller:
(self.plugin_name, str(return_code)))
def invoke_install_hook(self, phase, variables):
- self.hook_handler.invoke(phase.name, variables)
+ self.hook_handler.invoke(phase.name, variables=variables)
@property
def variables(self):
diff --git a/esrally/mechanic/team.py b/esrally/mechanic/team.py
index e37e5c3a..37c4a518 100644
--- a/esrally/mechanic/team.py
+++ b/esrally/mechanic/team.py
@@ -1,11 +1,12 @@
import os
import logging
import configparser
+from enum import Enum
import tabulate
from esrally import exceptions, PROGRAM_NAME
-from esrally.utils import console, repo, io
+from esrally.utils import console, repo, io, modules
logger = logging.getLogger("rally.team")
@@ -341,3 +342,62 @@ class PluginDescriptor:
def __eq__(self, other):
return isinstance(other, type(self)) and (self.name, self.config, self.core_plugin) == (other.name, other.config, other.core_plugin)
+
+
+class PluginBootstrapPhase(Enum):
+ post_install = 10
+ post_launch = 20
+
+ @classmethod
+ def valid(cls, name):
+ for n in PluginBootstrapPhase.names():
+ if n == name:
+ return True
+ return False
+
+ @classmethod
+ def names(cls):
+ return [p.name for p in list(PluginBootstrapPhase)]
+
+
+class PluginBootstrapHookHandler:
+ def __init__(self, plugin, loader_class=modules.ComponentLoader):
+ self.plugin = plugin
+ # Don't allow the loader to recurse. The subdirectories may contain Elasticsearch specific files which we do not want to add to
+ # Rally's Python load path. We may need to define a more advanced strategy in the future.
+ self.loader = loader_class(root_path=self.plugin.root_path, component_entry_point="plugin", recurse=False)
+ self.hooks = {}
+
+ def can_load(self):
+ return self.loader.can_load()
+
+ def load(self):
+ root_module = self.loader.load()
+ try:
+ # every module needs to have a register() method
+ root_module.register(self)
+ except exceptions.RallyError:
+ # just pass our own exceptions transparently.
+ raise
+ except BaseException:
+ msg = "Could not load plugin bootstrap hooks in [{}]".format(self.loader.root_path)
+ logger.exception(msg)
+ raise exceptions.SystemSetupError(msg)
+
+ def register(self, phase, hook):
+ logger.info("Registering plugin bootstrap hook [%s] for phase [%s] in plugin [%s]", hook.__name__, phase, self.plugin.name)
+ if not PluginBootstrapPhase.valid(phase):
+ raise exceptions.SystemSetupError("Phase [{}] is unknown. Valid phases are: {}.".format(phase, PluginBootstrapPhase.names()))
+ if phase not in self.hooks:
+ self.hooks[phase] = []
+ self.hooks[phase].append(hook)
+
+ def invoke(self, phase, **kwargs):
+ if phase in self.hooks:
+ logger.info("Invoking phase [%s] for plugin [%s] in config [%s]", phase, self.plugin.name, self.plugin.config)
+ for hook in self.hooks[phase]:
+ logger.info("Invoking hook [%s].", hook.__name__)
+ # hooks should only take keyword arguments to be forwards compatible with Rally!
+ hook(config_names=self.plugin.config, **kwargs)
+ else:
+ logger.debug("Plugin [%s] in config [%s] has no hook registered for phase [%s].", self.plugin.name, self.plugin.config, phase)
\ No newline at end of file
| Add new post_launch phase for plugin installers
Rally currently provides a `post_install` hook for plugin installers. For additional flexibility we should add a new phase `post_launch` which can also execute API requests after the cluster has been launched. | elastic/rally | diff --git a/tests/mechanic/launcher_test.py b/tests/mechanic/launcher_test.py
index 7e9d89bf..8110aae9 100644
--- a/tests/mechanic/launcher_test.py
+++ b/tests/mechanic/launcher_test.py
@@ -1,6 +1,6 @@
-from unittest import TestCase
+from unittest import TestCase, mock
-from esrally import config
+from esrally import config, exceptions
from esrally.mechanic import launcher
@@ -11,14 +11,15 @@ class MockMetricsStore:
class MockClientFactory:
def __init__(self, hosts, client_options):
- pass
+ self.client_options = client_options
def create(self):
- return MockClient()
+ return MockClient(self.client_options)
class MockClient:
- def __init__(self):
+ def __init__(self, client_options):
+ self.client_options = client_options
self.cluster = SubClient({
"cluster_name": "rally-benchmark-cluster",
"nodes": {
@@ -54,8 +55,14 @@ class MockClient:
}
def info(self):
+ if self.client_options.get("raise-error-on-info", False):
+ import elasticsearch
+ raise elasticsearch.ConnectionError("Unittest error")
return self._info
+ def search(self, *args, **kwargs):
+ return {}
+
class SubClient:
def __init__(self, info):
@@ -73,7 +80,7 @@ class ExternalLauncherTests(TestCase):
cfg = config.Config()
cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
- cfg.add(config.Scope.application, "client", "options", [])
+ cfg.add(config.Scope.application, "client", "options", {})
m = launcher.ExternalLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
m.start()
@@ -85,10 +92,58 @@ class ExternalLauncherTests(TestCase):
cfg = config.Config()
cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
- cfg.add(config.Scope.application, "client", "options", [])
+ cfg.add(config.Scope.application, "client", "options", {})
cfg.add(config.Scope.application, "mechanic", "distribution.version", "2.3.3")
m = launcher.ExternalLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
m.start()
# did not change user defined value
self.assertEqual(cfg.opts("mechanic", "distribution.version"), "2.3.3")
+
+
+class ClusterLauncherTests(TestCase):
+ def test_launches_cluster_with_post_launch_handler(self):
+ on_post_launch = mock.Mock()
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ cfg.add(config.Scope.application, "client", "options", {})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(),
+ on_post_launch=on_post_launch, client_factory_class=MockClientFactory)
+ cluster = cluster_launcher.start()
+
+ self.assertEqual(["10.0.0.10:9200", "10.0.0.11:9200"], cluster.hosts)
+ self.assertIsNotNone(cluster.telemetry)
+ on_post_launch.assert_called_once()
+
+ def test_launches_cluster_without_post_launch_handler(self):
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ cfg.add(config.Scope.application, "client", "options", {})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(), client_factory_class=MockClientFactory)
+ cluster = cluster_launcher.start()
+
+ self.assertEqual(["10.0.0.10:9200", "10.0.0.11:9200"], cluster.hosts)
+ self.assertIsNotNone(cluster.telemetry)
+
+ @mock.patch("time.sleep")
+ def test_error_on_cluster_launch(self, sleep):
+ on_post_launch = mock.Mock()
+ cfg = config.Config()
+ cfg.add(config.Scope.application, "client", "hosts", ["10.0.0.10:9200", "10.0.0.11:9200"])
+ # Simulate that the client will raise an error upon startup
+ cfg.add(config.Scope.application, "client", "options", {"raise-error-on-info": True})
+ cfg.add(config.Scope.application, "mechanic", "telemetry.devices", [])
+ cfg.add(config.Scope.application, "mechanic", "telemetry.params", {})
+
+ cluster_launcher = launcher.ClusterLauncher(cfg, MockMetricsStore(),
+ on_post_launch=on_post_launch, client_factory_class=MockClientFactory)
+ with self.assertRaisesRegex(exceptions.LaunchError,
+ "Elasticsearch REST API layer is not available. Forcefully terminated cluster."):
+ cluster_launcher.start()
+ on_post_launch.assert_not_called()
\ No newline at end of file
diff --git a/tests/mechanic/provisioner_test.py b/tests/mechanic/provisioner_test.py
index 7c29095a..b06ead1f 100644
--- a/tests/mechanic/provisioner_test.py
+++ b/tests/mechanic/provisioner_test.py
@@ -439,60 +439,11 @@ class PluginInstallerTests(TestCase):
installer = provisioner.PluginInstaller(plugin, hook_handler_class=PluginInstallerTests.NoopHookHandler)
self.assertEqual(0, len(installer.hook_handler.hook_calls))
- installer.invoke_install_hook(provisioner.ProvisioningPhase.post_install, {"foo": "bar"})
+ installer.invoke_install_hook(team.PluginBootstrapPhase.post_install, {"foo": "bar"})
self.assertEqual(1, len(installer.hook_handler.hook_calls))
self.assertEqual({"foo": "bar"}, installer.hook_handler.hook_calls["post_install"])
-class InstallHookHandlerTests(TestCase):
- class UnitTestComponentLoader:
- def __init__(self, root_path, component_entry_point, recurse):
- self.root_path = root_path
- self.component_entry_point = component_entry_point
- self.recurse = recurse
- self.registration_function = None
-
- def load(self):
- return self.registration_function
-
- class UnitTestHook:
- def __init__(self, phase="post_install"):
- self.phase = phase
- self.call_counter = 0
-
- def post_install_hook(self, config_names, variables, **kwargs):
- self.call_counter += variables["increment"]
-
- def register(self, handler):
- # we can register multiple hooks here
- handler.register(self.phase, self.post_install_hook)
- handler.register(self.phase, self.post_install_hook)
-
- def test_loads_module(self):
- plugin = team.PluginDescriptor("unittest-plugin")
- hook = InstallHookHandlerTests.UnitTestHook()
- handler = provisioner.InstallHookHandler(plugin, loader_class=InstallHookHandlerTests.UnitTestComponentLoader)
-
- handler.loader.registration_function = hook
- handler.load()
-
- handler.invoke("post_install", {"increment": 4})
-
- # we registered our hook twice. Check that it has been called twice.
- self.assertEqual(hook.call_counter, 2 * 4)
-
- def test_cannot_register_for_unknown_phase(self):
- plugin = team.PluginDescriptor("unittest-plugin")
- hook = InstallHookHandlerTests.UnitTestHook(phase="this_is_an_unknown_install_phase")
- handler = provisioner.InstallHookHandler(plugin, loader_class=InstallHookHandlerTests.UnitTestComponentLoader)
-
- handler.loader.registration_function = hook
- with self.assertRaises(exceptions.SystemSetupError) as ctx:
- handler.load()
- self.assertEqual("Provisioning phase [this_is_an_unknown_install_phase] is unknown. Valid phases are: ['post_install'].",
- ctx.exception.args[0])
-
-
class DockerProvisionerTests(TestCase):
@mock.patch("esrally.utils.sysstats.total_memory")
@mock.patch("uuid.uuid4")
diff --git a/tests/mechanic/team_test.py b/tests/mechanic/team_test.py
index 37f80635..b8a94ecb 100644
--- a/tests/mechanic/team_test.py
+++ b/tests/mechanic/team_test.py
@@ -118,3 +118,52 @@ class PluginLoaderTests(TestCase):
"var": "0",
"hello": "true"
}, plugin.variables)
+
+
+class PluginBootstrapHookHandlerTests(TestCase):
+ class UnitTestComponentLoader:
+ def __init__(self, root_path, component_entry_point, recurse):
+ self.root_path = root_path
+ self.component_entry_point = component_entry_point
+ self.recurse = recurse
+ self.registration_function = None
+
+ def load(self):
+ return self.registration_function
+
+ class UnitTestHook:
+ def __init__(self, phase="post_install"):
+ self.phase = phase
+ self.call_counter = 0
+
+ def post_install_hook(self, config_names, variables, **kwargs):
+ self.call_counter += variables["increment"]
+
+ def register(self, handler):
+ # we can register multiple hooks here
+ handler.register(self.phase, self.post_install_hook)
+ handler.register(self.phase, self.post_install_hook)
+
+ def test_loads_module(self):
+ plugin = team.PluginDescriptor("unittest-plugin")
+ hook = PluginBootstrapHookHandlerTests.UnitTestHook()
+ handler = team.PluginBootstrapHookHandler(plugin, loader_class=PluginBootstrapHookHandlerTests.UnitTestComponentLoader)
+
+ handler.loader.registration_function = hook
+ handler.load()
+
+ handler.invoke("post_install", variables={"increment": 4})
+
+ # we registered our hook twice. Check that it has been called twice.
+ self.assertEqual(hook.call_counter, 2 * 4)
+
+ def test_cannot_register_for_unknown_phase(self):
+ plugin = team.PluginDescriptor("unittest-plugin")
+ hook = PluginBootstrapHookHandlerTests.UnitTestHook(phase="this_is_an_unknown_install_phase")
+ handler = team.PluginBootstrapHookHandler(plugin, loader_class=PluginBootstrapHookHandlerTests.UnitTestComponentLoader)
+
+ handler.loader.registration_function = hook
+ with self.assertRaises(exceptions.SystemSetupError) as ctx:
+ handler.load()
+ self.assertEqual("Phase [this_is_an_unknown_install_phase] is unknown. Valid phases are: ['post_install', 'post_launch'].",
+ ctx.exception.args[0])
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 4
} | 0.10 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-benchmark"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
elasticsearch==6.2.0
-e git+https://github.com/elastic/rally.git@ebc10f53af246b3e34f751c1346aec9ed800981e#egg=esrally
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==2.9.5
jsonschema==2.5.1
MarkupSafe==2.0.1
packaging==21.3
pluggy==1.0.0
psutil==5.4.0
py==1.11.0
py-cpuinfo==3.2.0
pyparsing==3.1.4
pytest==7.0.1
pytest-benchmark==3.4.1
tabulate==0.8.1
thespian==3.9.2
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.22
zipp==3.6.0
| name: rally
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- elasticsearch==6.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==2.9.5
- jsonschema==2.5.1
- markupsafe==2.0.1
- packaging==21.3
- pluggy==1.0.0
- psutil==5.4.0
- py==1.11.0
- py-cpuinfo==3.2.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-benchmark==3.4.1
- tabulate==0.8.1
- thespian==3.9.2
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.22
- zipp==3.6.0
prefix: /opt/conda/envs/rally
| [
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_error_on_cluster_launch",
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_launches_cluster_with_post_launch_handler",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_invokes_hook",
"tests/mechanic/team_test.py::PluginBootstrapHookHandlerTests::test_cannot_register_for_unknown_phase",
"tests/mechanic/team_test.py::PluginBootstrapHookHandlerTests::test_loads_module"
]
| []
| [
"tests/mechanic/launcher_test.py::ExternalLauncherTests::test_setup_external_cluster_multiple_nodes",
"tests/mechanic/launcher_test.py::ExternalLauncherTests::test_setup_external_cluster_single_node",
"tests/mechanic/launcher_test.py::ClusterLauncherTests::test_launches_cluster_without_post_launch_handler",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_distribution_ge_63_with_plugins",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_distribution_lt_63_with_plugins",
"tests/mechanic/provisioner_test.py::BareProvisionerTests::test_prepare_without_plugins",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_cleanup",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_cleanup_nothing_on_preserve",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_prepare_default_data_paths",
"tests/mechanic/provisioner_test.py::ElasticsearchInstallerTests::test_prepare_user_provided_data_path",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_successfully",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_with_io_error",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_plugin_with_unknown_error",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_install_unknown_plugin",
"tests/mechanic/provisioner_test.py::PluginInstallerTests::test_pass_plugin_properties",
"tests/mechanic/provisioner_test.py::DockerProvisionerTests::test_provisioning",
"tests/mechanic/team_test.py::CarLoaderTests::test_lists_car_names",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_car_with_mixin_multiple_config_bases",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_car_with_mixin_single_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_load_known_car",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_empty_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_missing_config_base",
"tests/mechanic/team_test.py::CarLoaderTests::test_raises_error_on_unknown_car",
"tests/mechanic/team_test.py::PluginLoaderTests::test_cannot_load_community_plugin_with_missing_config",
"tests/mechanic/team_test.py::PluginLoaderTests::test_cannot_load_plugin_with_missing_config",
"tests/mechanic/team_test.py::PluginLoaderTests::test_lists_plugins",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_community_plugin_without_configuration",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_configured_plugin",
"tests/mechanic/team_test.py::PluginLoaderTests::test_loads_core_plugin"
]
| []
| Apache License 2.0 | 2,438 | [
"esrally/mechanic/mechanic.py",
"esrally/mechanic/launcher.py",
"esrally/mechanic/provisioner.py",
"esrally/mechanic/team.py"
]
| [
"esrally/mechanic/mechanic.py",
"esrally/mechanic/launcher.py",
"esrally/mechanic/provisioner.py",
"esrally/mechanic/team.py"
]
|
|
juju__python-libjuju-228 | 668945a53cdf696bb2d6d8da518a07094b22c7c7 | 2018-04-23 15:26:44 | 462989bbd919f209ebb7454305f53e4b94714487 | diff --git a/.travis.yml b/.travis.yml
index 0e907f0..9a1fcce 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -3,28 +3,29 @@ sudo: required
language: python
python:
- "3.6"
-before_script:
- - sudo addgroup lxd || true
- - sudo usermod -a -G lxd $USER || true
- - sudo ln -s /snap/bin/juju /usr/bin/juju
- - sudo ln -s /snap/bin/lxc /usr/bin/lxc
before_install:
- sudo add-apt-repository -y ppa:jonathonf/python-3.6
- sudo add-apt-repository ppa:chris-lea/libsodium -y
- sudo apt-get update -q
- sudo apt-get remove -qy lxd lxd-client
- sudo apt-get install snapd libsodium-dev -y
- - sudo snap install lxd || true
+install:
+ - sudo snap install lxd || true # ignore failures so that unit tests will still run, at least
+ - sudo snap install juju --classic --$JUJU_CHANNEL || true
- sudo snap install juju-wait --classic || true
-install: pip install tox-travis
+ - pip install tox-travis
env:
global: >
TEST_AGENTS='{"agents":[{"url":"https://api.staging.jujucharms.com/identity","username":"libjuju-ci@yellow"}],"key":{"private":"88OOCxIHQNguRG7zFg2y2Hx5Ob0SeVKKBRnjyehverc=","public":"fDn20+5FGyN2hYO7z0rFUyoHGUnfrleslUNtoYsjNSs="}}'
+ PATH="/snap/bin:$PATH"
matrix:
- JUJU_CHANNEL=stable
- JUJU_CHANNEL=edge
+before_script:
+ - sudo bash -c 'for i in 5 10 15 30; do [[ -e /var/snap/lxd/common/lxd/unix.socket ]] && break; sleep $i; done'
+ - sudo lxd init --auto || true
+ - sudo addgroup lxd || true
+ - sudo usermod -a -G lxd $USER || true
script:
- - sudo snap install juju --classic --$JUJU_CHANNEL
- - sudo ln -s /snap/bin/juju /usr/bin/juju || true
- - sudo -E sudo -u $USER -E /snap/bin/juju bootstrap localhost test --config 'identity-url=https://api.staging.jujucharms.com/identity' --config 'allow-model-access=true'
+ - sudo -E sudo -u $USER -E juju bootstrap localhost test --config 'identity-url=https://api.staging.jujucharms.com/identity' --config 'allow-model-access=true'
- tox -e py35,integration
diff --git a/juju/constraints.py b/juju/constraints.py
index 998862d..0050673 100644
--- a/juju/constraints.py
+++ b/juju/constraints.py
@@ -29,6 +29,8 @@ FACTORS = {
"P": 1024 * 1024 * 1024
}
+LIST_KEYS = {'tags', 'spaces'}
+
SNAKE1 = re.compile(r'(.)([A-Z][a-z]+)')
SNAKE2 = re.compile('([a-z0-9])([A-Z])')
@@ -47,8 +49,10 @@ def parse(constraints):
return constraints
constraints = {
- normalize_key(k): normalize_value(v) for k, v in [
- s.split("=") for s in constraints.split(" ")]}
+ normalize_key(k): (
+ normalize_list_value(v) if k in LIST_KEYS else
+ normalize_value(v)
+ ) for k, v in [s.split("=") for s in constraints.split(" ")]}
return constraints
@@ -72,13 +76,12 @@ def normalize_value(value):
# Translate aliases to Megabytes. e.g. 1G = 10240
return int(value[:-1]) * FACTORS[value[-1:]]
- if "," in value:
- # Handle csv strings.
- values = value.split(",")
- values = [normalize_value(v) for v in values]
- return values
-
if value.isdigit():
return int(value)
return value
+
+
+def normalize_list_value(value):
+ values = value.strip().split(',')
+ return [normalize_value(value) for value in values]
| Single tag constraint causes JujuAPIError
When a constraint spec is given to deploy or in a bundle with a single tag constraint (e.g., `tags=node1234` rather than `tags=node1234,othertag`) it causes the following exception:
```
Traceback (most recent call last):
File "/snap/conjure-up/987/lib/python3.6/site-packages/conjureup/controllers/deploy/common.py", line 24, in do_deploy
await app.juju.client.deploy(fn)
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/model.py", line 1186, in deploy
await handler.execute_plan()
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/model.py", line 1780, in execute_plan
result = await method(*step.args)
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/model.py", line 1927, in deploy
storage=storage,
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/model.py", line 1299, in _deploy
result = await app_facade.Deploy([app])
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/client/facade.py", line 412, in wrapper
reply = await f(*args, **kwargs)
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/client/_client5.py", line 503, in Deploy
reply = await self.rpc(msg)
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/client/facade.py", line 537, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/snap/conjure-up/987/lib/python3.6/site-packages/juju/client/connection.py", line 314, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: json: cannot unmarshal string into Go struct field Value.tags of type []string
```
This is due to the constraints parsing [only creating a list if there is a comma](https://github.com/juju/python-libjuju/blob/master/juju/constraints.py#L75), whereas it must always happen for `tags` and `spaces`. | juju/python-libjuju
index 00b9156..3c52090 100644
--- a/tests/unit/test_constraints.py
+++ b/tests/unit/test_constraints.py
@@ -32,6 +32,12 @@ class TestConstraints(unittest.TestCase):
self.assertEqual(_("10G"), 10 * 1024)
self.assertEqual(_("10M"), 10)
self.assertEqual(_("10"), 10)
+ self.assertEqual(_("foo,bar"), "foo,bar")
+
+ def test_normalize_list_val(self):
+ _ = constraints.normalize_list_value
+
+ self.assertEqual(_("foo"), ["foo"])
self.assertEqual(_("foo,bar"), ["foo", "bar"])
def test_parse_constraints(self):
@@ -43,6 +49,9 @@ class TestConstraints(unittest.TestCase):
)
self.assertEqual(
- _("mem=10G foo=bar,baz"),
- {"mem": 10 * 1024, "foo": ["bar", "baz"]}
+ _("mem=10G foo=bar,baz tags=tag1 spaces=space1,space2"),
+ {"mem": 10 * 1024,
+ "foo": "bar,baz",
+ "tags": ["tag1"],
+ "spaces": ["space1", "space2"]}
)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 2
} | 0.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"docs/requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
babel==2.17.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
docutils==0.18.1
exceptiongroup==1.2.2
idna==3.10
imagesize==1.4.1
iniconfig==2.1.0
Jinja2==3.1.6
-e git+https://github.com/juju/python-libjuju.git@668945a53cdf696bb2d6d8da518a07094b22c7c7#egg=juju
jujubundlelib==0.5.7
macaroonbakery==1.3.4
MarkupSafe==3.0.2
packaging==24.2
pluggy==1.5.0
protobuf==6.30.2
pycparser==2.22
Pygments==2.19.1
pymacaroons==0.13.0
PyNaCl==1.5.0
pyRFC3339==1.1
pytest==8.3.5
pytz==2017.3
PyYAML==3.13
requests==2.32.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==1.6.5
sphinx-rtd-theme==1.2.1
sphinxcontrib-asyncio==0.2.0
sphinxcontrib-jquery==2.0.0
sphinxcontrib-serializinghtml==2.0.0
sphinxcontrib-websupport==1.2.4
theblues==0.5.2
tomli==2.2.1
urllib3==2.3.0
websockets==4.0.1
| name: python-libjuju
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- babel==2.17.0
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- docutils==0.18.1
- exceptiongroup==1.2.2
- idna==3.10
- imagesize==1.4.1
- iniconfig==2.1.0
- jinja2==3.1.6
- jujubundlelib==0.5.7
- macaroonbakery==1.3.4
- markupsafe==3.0.2
- packaging==24.2
- pluggy==1.5.0
- protobuf==6.30.2
- pycparser==2.22
- pygments==2.19.1
- pymacaroons==0.13.0
- pynacl==1.5.0
- pyrfc3339==1.1
- pytest==8.3.5
- pytz==2017.3
- pyyaml==3.13
- requests==2.32.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==1.6.5
- sphinx-rtd-theme==1.2.1
- sphinxcontrib-asyncio==0.2.0
- sphinxcontrib-jquery==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- sphinxcontrib-websupport==1.2.4
- theblues==0.5.2
- tomli==2.2.1
- urllib3==2.3.0
- websockets==4.0.1
prefix: /opt/conda/envs/python-libjuju
| [
"tests/unit/test_constraints.py::TestConstraints::test_normalize_list_val",
"tests/unit/test_constraints.py::TestConstraints::test_normalize_val",
"tests/unit/test_constraints.py::TestConstraints::test_parse_constraints"
]
| []
| [
"tests/unit/test_constraints.py::TestConstraints::test_mem_regex",
"tests/unit/test_constraints.py::TestConstraints::test_normalize_key"
]
| []
| Apache License 2.0 | 2,439 | [
"juju/constraints.py",
".travis.yml"
]
| [
"juju/constraints.py",
".travis.yml"
]
|
|
spencerahill__aospy-266 | 098da63959f5f67871950d48a799d86325c902d2 | 2018-04-23 18:10:40 | f8af04e7e9deec9fccd0337a074e9da174e83504 | spencerkclark: > But now I can't remember how we generated the tutorial notebook. Do you recall? I could manipulate the .ipynb file directly, but this seems fraught.
Yeah don't edit the .ipynb file directly :). The easiest thing is to just open it in Jupyter, make the changes, and save it.
spencerahill: The `calc` test suite is now failing across the board due to a problem with the tearDown, e.g. [here](https://travis-ci.org/spencerahill/aospy/jobs/370355983).
I don't see an obvious way that what I introduced caused this, although this is the first time we're seeing it. I just added a try/except guard to see if that solves it.
spencerahill: >I just added a try/except guard to see if that solves it.
Hmm, now these have been replaced by even less intuitive ones, e.g. [here](https://travis-ci.org/spencerahill/aospy/jobs/370365778)
It seems like the example objects have somehow been corrupted. These tests are all passing on my local machine.
spencerkclark: > Hmm, now these have been replaced by even less intuitive ones, e.g. here
I think those were always there -- see L1828-L1873 in the [old build](https://travis-ci.org/spencerahill/aospy/jobs/370355983#L1828-L1873) for example. The tear down errors were just side effects of the results never being computed/saved to files.
Xarray recently released a new version (version 0.10.3); which version are you on locally?
spencerahill: >Xarray recently released a new version (version 0.10.3); which version are you on locally?
Good call. I was on 10.2; after updating to 10.3 I can reproduce. I'll open a separate issue to track this.
spencerahill: >I still need to look at the code, but I read through the docstrings (which are in general very nice!) and noticed a few minor things that I didn't want to forget.
Thanks! FYI I've realized after this commit that the logic I implemented is incomplete (i.e. wrong)...I'm working on a better solution and will ping you when it's ready for a proper review.
spencerahill: C.f. my comment https://github.com/spencerahill/aospy/issues/267#issuecomment-385822091, I think after all that dealing with non-cyclic longitudes is probably outside of the scope of this PR (despite that being partly the original motivation). Frankly, I am almost out of spare cycles for the next few weeks, and I want to get what's here so far in.
So we would return to the cyclic points in conjunction with how we treat region boundaries in a future PR. @spencerkclark does that sound OK to you? Sorry that I turned this one into such a long and winding road.
Either way, I'd appreciate a review of this PR when you get the chance. Thanks!
spencerkclark: > So we would return to the cyclic points in conjunction with how we treat region boundaries in a future PR. @spencerkclark does that sound OK to you?
Totally fine! I'll try and give things a review over the next few days.
spencerahill: OK, I think this is ready to go. Will merge once tests pass. Thanks @spencerkclark for the invaluable help. | diff --git a/aospy/examples/example_obj_lib.py b/aospy/examples/example_obj_lib.py
index 340a868..a71f934 100644
--- a/aospy/examples/example_obj_lib.py
+++ b/aospy/examples/example_obj_lib.py
@@ -100,8 +100,10 @@ precip_conv_frac = Var(
globe = Region(
name='globe',
description='Entire globe',
- lat_bounds=(-90, 90),
- lon_bounds=(0, 360),
+ west_bound=0,
+ east_bound=360,
+ south_bound=-90,
+ north_bound=90,
do_land_mask=False
)
@@ -109,8 +111,10 @@ globe = Region(
tropics = Region(
name='tropics',
description='Tropics, defined as 30S-30N',
- lat_bounds=(-30, 30),
- lon_bounds=(0, 360),
+ west_bound=0,
+ east_bound=360,
+ south_bound=-30,
+ north_bound=30,
do_land_mask=False
)
diff --git a/aospy/examples/tutorial.ipynb b/aospy/examples/tutorial.ipynb
index 2040cf2..6d234ce 100644
--- a/aospy/examples/tutorial.ipynb
+++ b/aospy/examples/tutorial.ipynb
@@ -2,10 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"aospy Tutorial\n",
"========\n",
@@ -14,10 +11,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Preliminaries\n",
"-------------\n",
@@ -28,9 +22,7 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -41,10 +33,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Now we'll use the fantastic [xarray](http://xarray.pydata.org/en/stable/) package to inspect the data:"
]
@@ -52,11 +41,7 @@
{
"cell_type": "code",
"execution_count": 2,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"data": {
@@ -90,30 +75,21 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"We see that, in this particular model, the variable names for these two forms of precipitation are \"condensation_rain\" and \"convection_rain\", respectively. The file also includes the coordinate arrays (\"lat\", \"time\", etc.) that indicate where in space and time the data refers to."
]
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Now that we know where and what the data is, we'll proceed through the workflow described in the [Using aospy](http://aospy.readthedocs.io/en/latest/using-aospy.html) section of the documentation."
]
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Describing your data\n",
"===========\n",
@@ -132,9 +108,7 @@
"cell_type": "code",
"execution_count": 3,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -145,10 +119,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"We then pass this to the `Run` constructor, along with a name for the run and an optional description.\n",
"\n",
@@ -159,9 +130,7 @@
"cell_type": "code",
"execution_count": 4,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -175,10 +144,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Models\n",
"------\n",
@@ -194,9 +160,7 @@
"cell_type": "code",
"execution_count": 5,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -213,10 +177,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Projects\n",
"--------\n",
@@ -228,9 +189,7 @@
"cell_type": "code",
"execution_count": 6,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -245,20 +204,14 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"This extra `Proj` level of organization may seem like overkill for this simple example, but it really comes in handy once you start using aospy for more than one project."
]
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Defining physical quantities and regions\n",
"======================\n",
@@ -275,9 +228,7 @@
"cell_type": "code",
"execution_count": 7,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -299,10 +250,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"When it comes time to load data corresponding to either of these from one or more particular netCDF files, aospy will search for variables matching either `name` or any of the names in `alt_names`, stopping at the first successful one. This makes the common problem of model-specific variable names a breeze: if you end up with data with a new name for your variable, just add it to `alt_names`.\n",
"\n",
@@ -315,9 +263,7 @@
"cell_type": "code",
"execution_count": 8,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -348,10 +294,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Notice the `func` and `variables` attributes that weren't in the prior `Var` constuctors. These signify the function to use and the physical quantities to pass to that function in order to compute the quantity.\n",
"\n",
@@ -373,9 +316,7 @@
"cell_type": "code",
"execution_count": 9,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -383,16 +324,20 @@
"globe = Region(\n",
" name='globe',\n",
" description='Entire globe',\n",
- " lat_bounds=(-90, 90),\n",
- " lon_bounds=(0, 360),\n",
+ " west_bound=0,\n",
+ " east_bound=360,\n",
+ " south_bound=-90,\n",
+ " north_bound=90,\n",
" do_land_mask=False\n",
")\n",
"\n",
"tropics = Region(\n",
" name='tropics',\n",
" description='Global tropics, defined as 30S-30N',\n",
- " lat_bounds=(-30, 30),\n",
- " lon_bounds=(0, 360),\n",
+ " west_bound=0,\n",
+ " east_bound=360,\n",
+ " south_bound=-30,\n",
+ " north_bound=30,\n",
" do_land_mask=False\n",
")\n",
"example_proj.regions = [globe, tropics]"
@@ -400,10 +345,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"We now have all of the needed metadata in place. So let's start crunching numbers!\n",
"\n",
@@ -422,9 +364,7 @@
"cell_type": "code",
"execution_count": 10,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -451,10 +391,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"See the `api-ref` on `aospy.submit_mult_calcs` for more on `calc_suite_specs`, including accepted values for each key.\n",
"\n",
@@ -465,9 +402,7 @@
"cell_type": "code",
"execution_count": 11,
"metadata": {
- "collapsed": true,
- "deletable": true,
- "editable": true
+ "collapsed": true
},
"outputs": [],
"source": [
@@ -477,10 +412,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Now let's submit this for execution:"
]
@@ -488,11 +420,7 @@
{
"cell_type": "code",
"execution_count": 12,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"name": "stderr",
@@ -558,10 +486,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"This permutes over all of the parameter settings in `calc_suite_specs`, generating and executing the resulting calculation. In this case, it will compute all four variables and perform annual averages, both for each gridpoint and regionally averaged.\n",
"\n",
@@ -570,10 +495,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Results\n",
"-------\n",
@@ -584,11 +506,7 @@
{
"cell_type": "code",
"execution_count": 13,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"data": {
@@ -610,10 +528,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Each `Calc` object includes the paths to the output"
]
@@ -621,11 +536,7 @@
{
"cell_type": "code",
"execution_count": 14,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"data": {
@@ -645,10 +556,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"and the results of each output type"
]
@@ -656,11 +564,7 @@
{
"cell_type": "code",
"execution_count": 15,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"data": {
@@ -715,10 +619,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Note: you may have noticed that `subset_...` and `raw_...` coordinates have years 1678 and later, when our data was from model years 4 through 6. This is because [technical details upstream](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timestamp-limitations) limit the range of supported whole years to 1678-2262.\n",
"\n",
@@ -733,11 +634,7 @@
{
"cell_type": "code",
"execution_count": 16,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"name": "stderr",
@@ -783,10 +680,7 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"We see that precipitation maximizes at the equator and has a secondary maximum in the mid-latitudes. Also, the convective precipitation dominates the total in the Tropics, but moving poleward the gridscale condensation plays an increasingly larger fractional role (note different colorscales in each panel).\n",
"\n",
@@ -799,11 +693,7 @@
{
"cell_type": "code",
"execution_count": 17,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [
{
"name": "stdout",
@@ -866,20 +756,14 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"As was evident from the plots, we see that most precipitation (80.8%) in the tropics comes from convective rainfall, but averaged over the globe the large-scale condensation is a more equal player (40.2% for large-scale, 59.8% for convective)."
]
},
{
"cell_type": "markdown",
- "metadata": {
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"source": [
"Beyond this simple example\n",
"==============\n",
@@ -913,11 +797,7 @@
{
"cell_type": "code",
"execution_count": 18,
- "metadata": {
- "collapsed": false,
- "deletable": true,
- "editable": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"# Optional: remove created files\n",
@@ -942,9 +822,36 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.0"
+ "version": "3.6.3"
+ },
+ "latex_envs": {
+ "LaTeX_envs_menu_present": true,
+ "autocomplete": true,
+ "bibliofile": "biblio.bib",
+ "cite_by": "apalike",
+ "current_citInitial": 1,
+ "eqLabelWithNumbers": true,
+ "eqNumInitial": 1,
+ "hotkeys": {
+ "equation": "Ctrl-E",
+ "itemize": "Ctrl-I"
+ },
+ "labels_anchors": false,
+ "latex_user_defs": false,
+ "report_style_numbering": false,
+ "user_envs_cfg": false
+ },
+ "toc": {
+ "nav_menu": {},
+ "number_sections": true,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": "block",
+ "toc_window_display": false
}
},
"nbformat": 4,
- "nbformat_minor": 0
+ "nbformat_minor": 1
}
diff --git a/aospy/region.py b/aospy/region.py
index 2d27eed..546e88c 100644
--- a/aospy/region.py
+++ b/aospy/region.py
@@ -1,33 +1,24 @@
"""Functionality pertaining to aggregating data over geographical regions."""
+from collections import namedtuple
import logging
import numpy as np
-from . import internal_names
+from .internal_names import (
+ LAND_MASK_STR,
+ LAT_STR,
+ LON_STR,
+ SFC_AREA_STR,
+ YEAR_STR
+)
+from .utils.longitude import _maybe_cast_to_lon
-def _add_to_mask(data, lat_bounds, lon_bounds):
- """Add mask spanning given lat-lon rectangle."""
- mask_lat = ((data[internal_names.LAT_STR] > lat_bounds[0]) &
- (data[internal_names.LAT_STR] < lat_bounds[1]))
- return mask_lat & ((data[internal_names.LON_STR] > lon_bounds[0]) &
- (data[internal_names.LON_STR] < lon_bounds[1]))
-
-
-def _make_mask(data, mask_bounds):
- """Construct the mask that defines this region."""
- # For each set of bounds add to the conditional.
- mask = False
- for lat_bounds, lon_bounds in mask_bounds:
- mask |= _add_to_mask(data, lat_bounds, lon_bounds)
- return mask
-
-
-def _get_land_mask(data, do_land_mask):
+def _get_land_mask(data, do_land_mask, land_mask_str=LAND_MASK_STR):
if not do_land_mask:
return 1
try:
- land_mask = data.land_mask.copy()
+ land_mask = data[land_mask_str].copy()
except AttributeError:
# TODO: Implement aospy built-in land mask to default to.
msg = ("No land mask found. Using empty mask, which amounts to "
@@ -38,12 +29,11 @@ def _get_land_mask(data, do_land_mask):
return 1
try:
percent_bool = land_mask.units.lower() in ('%', 'percent')
- logging.debug("Converting land mask from 0-100 to 0.0-1.0")
except AttributeError:
- # Wrong for the edge case where no grid cell is 100% land.
- percent_bool = land_mask.max() == 100
+ percent_bool = np.any(land_mask > 1)
if percent_bool:
land_mask *= 0.01
+ logging.debug("Converting land mask from 0-100 to 0.0-1.0")
if do_land_mask is True:
return land_mask
if do_land_mask == 'ocean':
@@ -56,9 +46,18 @@ def _get_land_mask(data, do_land_mask):
raise ValueError(msg)
-def _sum_over_lat_lon(arr):
- """Sum an array over the latitude and longitude dimensions."""
- return arr.sum(internal_names.LAT_STR).sum(internal_names.LON_STR)
+class BoundsRect(namedtuple('BoundsRect', ['west', 'east', 'south', 'north'])):
+ """Bounding longitudes and latitudes of a given lat-lon rectangle."""
+ def __new__(cls, west, east, south, north):
+ new_west = _maybe_cast_to_lon(west, strict=True)
+ new_east = _maybe_cast_to_lon(east, strict=True)
+ return super(BoundsRect, cls).__new__(cls, new_west, new_east,
+ south, north)
+
+ def __repr__(self):
+ return ("BoundsRect(west={0}, east={1}, south={2}, "
+ "north={3}".format(self.west, self.east, self.south,
+ self.north))
class Region(object):
@@ -92,14 +91,25 @@ class Region(object):
"""
- def __init__(self, name='', description='', lon_bounds=[], lat_bounds=[],
- mask_bounds=[], do_land_mask=False):
+ def __init__(self, name='', description='', west_bound=None,
+ east_bound=None, south_bound=None, north_bound=None,
+ mask_bounds=None, do_land_mask=False):
"""Instantiate a Region object.
- If a region spans across the endpoint of the data's longitude array
- (i.e. it crosses the Prime Meridian for data with longitudes spanning
- 0 to 360), it must be defined as the union of two sections extending to
- the east and to the west of the Prime Meridian.
+ Note that longitudes spanning (-180, 180), (0, 360), or any other range
+ are all supported: -180 to 0, 180 to 360, etc. are interpreted as the
+ western hemisphere, and 0-180, 360-540, etc. are interpreted as the
+ eastern hemisphere. This is true both of the region definition and of
+ any data upon which the region mask is applied.
+
+ E.g. suppose some of your data is defined on a -180 to 180 longitude
+ grid, some of it is defined on a 0 to 360 grid, and some of it is
+ defined on a -70 to 290 grid. A single Region object will work with
+ all three of these.
+
+        Conversely, latitudes are always treated with -90 as the South Pole, 0 as
+ the Equator, and 90 as the North Pole. Latitudes larger than 90 are
+ not physically meaningful.
Parameters
----------
@@ -109,14 +119,33 @@ class Region(object):
description : str, optional
A description of the region. This is not used internally by
aospy; it is solely for the user's information.
- lon_bounds, lat_bounds : length-2 sequence, optional
- The longitude and latitude bounds of the region. If the region
- boundaries are more complicated than a single lat-lon rectangle,
- use `mask_bounds` instead.
+ west_bound, east_bound : { scalar, aospy.Longitude }, optional
+ The western and eastern boundaries of the region. All input
+ longitudes are casted to ``aospy.Longitude`` objects, which
+ essentially maps them to a 180W to 180E grid. The region's
+ longitudes always start at ``west_bound`` and move toward the east
+ until reaching ``east_bound``. This means that there are two
+ distinct cases:
+
+ - If, after this translation, west_bound is less than east_bound,
+ the region includes the points east of west_bound and west of
+ east_bound.
+      - If west_bound is greater than east_bound, then the
+        region is treated as wrapping around the dateline, i.e. its
+ western-most point is east_bound, and it includes all points
+ moving east from there until west_bound.
+
+ If the region boundaries are more complicated than a single
+ lat-lon rectangle, use `mask_bounds` instead.
+
+ south_bound, north_bound : scalar, optional
+ The southern, and northern boundaries, respectively, of the
+ region. If the region boundaries are more complicated than a
+ single lat-lon rectangle, use `mask_bounds` instead.
mask_bounds : sequence, optional
- Each element is a length-2 tuple of the format `(lat_bounds,
- lon_bounds)`, where each of `lat_bounds` and `lon_bounds` are
- of the form described above.
+ Each element is a length-4 sequence of the format `(west_bound,
+ east_bound, south_bound, north_bound)`, where each of these
+ `_bound` arguments is of the form described above.
do_land_mask : { False, True, 'ocean', 'strict_land', 'strict_ocean'},
optional
Determines what, if any, land mask is applied in addition to the
@@ -130,63 +159,197 @@ class Region(object):
Examples
--------
- Define a region spanning the entire globe
+ Define a region spanning the entire globe:
+
+ >>> globe = Region(name='globe', west_bound=0, east_bound=360,
+ ... south_bound=-90, north_bound=90, do_land_mask=False)
- >>> globe = Region(name='globe', lat_bounds=(-90, 90),
- ... lon_bounds=(0, 360), do_land_mask=False)
+ Longitudes are handled as cyclic, so this definition could have
+ equivalently used `west_bound=-180, east_bound=180` or `west_bound=200,
+ east_bound=560`, or anything else that spanned 360 degrees total.
- Define a region corresponding to the Sahel region of Africa, which
- we'll define as land points within 10N-20N latitude and 18W-40E
- longitude. Because this region crosses the 0 degrees longitude point,
- it has to be defined using `mask_bounds` as the union of two lat-lon
- rectangles.
+ Define a region corresponding to land in the mid-latitudes, which we'll
+ define as land points within 30-60 degrees latitude in both
+ hemispheres. Because this region is the union of multiple lat-lon
+ rectangles, it has to be defined using `mask_bounds`:
- >>> sahel = Region(name='sahel', do_land_mask=True,
- ... mask_bounds=[((10, 20), (0, 40)),
- ... ((10, 20), (342, 360))])
+ >>> land_mid_lats = Region(name='land_midlat', do_land_mask=True,
+ ... mask_bounds=[(-180, 180, 30, 60),
+ ... (-180, 180, -30, -60)])
+
+ Define a region spanning the southern Tropical Atlantic ocean, which
+ we'll take to be all ocean points between 60W and 30E and between the
+ Equator and 30S:
+
+ >>> atl_south_trop = Region(name='atl_sh_trop', west_bound=-60,
+ ... east_bound=30, south_bound=-30,
+ ... north_bound=0, do_land_mask='ocean')
+
+ Define the "opposite" region, i.e. all ocean points in the southern
+ Tropics *outside* of the Atlantic. We simply swap ``west_bound`` and
+ ``east_bound`` of the previous example:
+
+ >>> non_atl_south_trop = Region(name='non_atl_sh_trop', west_bound=30,
+ ... east_bound=-60, south_bound=-30,
+ ... north_bound=0, do_land_mask='ocean')
"""
self.name = name
self.description = description
- if lon_bounds and lat_bounds and not mask_bounds:
- self.mask_bounds = [(lat_bounds, lon_bounds)]
- else:
- self.mask_bounds = mask_bounds
self.do_land_mask = do_land_mask
+ if mask_bounds is None:
+ self.mask_bounds = tuple([BoundsRect(west_bound, east_bound,
+ south_bound, north_bound)])
+ else:
+ bounds = []
+ for rect_bounds in mask_bounds:
+ if len(rect_bounds) != 4:
+ raise ValueError("Each element of `mask_bounds` must be a "
+ "length-4 array with values (west, east, "
+ "south, north). Value given: "
+ "{}".format(rect_bounds))
+ else:
+ bounds.append(BoundsRect(*rect_bounds))
+ self.mask_bounds = tuple(bounds)
+
def __str__(self):
return 'Geographical region "' + self.name + '"'
- __repr__ = __str__
+ def _make_mask(self, data, lon_str=LON_STR, lat_str=LAT_STR):
+ """Construct the mask that defines a region on a given data's grid."""
+ mask = False
+ for west, east, south, north in self.mask_bounds:
+ if west < east:
+ mask_lon = (data[lon_str] > west) & (data[lon_str] < east)
+ else:
+ mask_lon = (data[lon_str] < west) | (data[lon_str] > east)
+ mask_lat = (data[lat_str] > south) & (data[lat_str] < north)
+ mask |= mask_lon & mask_lat
+ return mask
+
+ def mask_var(self, data, lon_cyclic=True, lon_str=LON_STR,
+ lat_str=LAT_STR):
+ """Mask the given data outside this region.
+
+ Parameters
+ ----------
+ data : xarray.DataArray
+ The array to be regionally masked.
+ lon_cyclic : bool, optional (default True)
+ Whether or not the longitudes of ``data`` span the whole globe,
+ meaning that they should be wrapped around as necessary to cover
+ the Region's full width.
+ lon_str, lat_str : str, optional
+ The names of the longitude and latitude dimensions, respectively,
+ in the data to be masked. Defaults are
+ ``aospy.internal_names.LON_STR`` and
+ ``aospy.internal_names.LON_STR``, respectively.
+
+ Returns
+ -------
+ xarray.DataArray
+ The original array with points outside of the region masked.
- def mask_var(self, data):
- """Mask the data of the given variable outside the region."""
- return data.where(_make_mask(data, self.mask_bounds))
+ """
+ # TODO: is this still necessary?
+ if not lon_cyclic:
+ if self.west_bound > self.east_bound:
+ raise ValueError("Longitudes of data to be masked are "
+ "specified as non-cyclic, but Region's "
+ "definition requires wraparound longitudes.")
+ masked = data.where(self._make_mask(data, lon_str=lon_str,
+ lat_str=lat_str))
+ return masked
+
+ def ts(self, data, lon_cyclic=True, lon_str=LON_STR, lat_str=LAT_STR,
+ land_mask_str=LAND_MASK_STR, sfc_area_str=SFC_AREA_STR):
+ """Create yearly time-series of region-averaged data.
- def ts(self, data):
- """Create time-series of region-average data."""
- data_masked = self.mask_var(data)
- sfc_area = data.sfc_area
- land_mask = _get_land_mask(data, self.do_land_mask)
+ Parameters
+ ----------
+ data : xarray.DataArray
+ The array to create the regional timeseries of
+ lon_cyclic : { None, True, False }, optional (default True)
+ Whether or not the longitudes of ``data`` span the whole globe,
+ meaning that they should be wrapped around as necessary to cover
+ the Region's full width.
+ lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
+ The name of the latitude, longitude, land mask, and surface area
+ coordinates, respectively, in ``data``. Defaults are the
+ corresponding values in ``aospy.internal_names``.
+
+ Returns
+ -------
+ xarray.DataArray
+ The timeseries of values averaged within the region and within each
+ year, one value per year.
- weights = self.mask_var(sfc_area) * land_mask
+ """
+ data_masked = self.mask_var(data, lon_cyclic=lon_cyclic,
+ lon_str=lon_str, lat_str=lat_str)
+ sfc_area = data[sfc_area_str]
+ sfc_area_masked = self.mask_var(sfc_area, lon_cyclic=lon_cyclic,
+ lon_str=lon_str, lat_str=lat_str)
+ land_mask = _get_land_mask(data, self.do_land_mask,
+ land_mask_str=land_mask_str)
+ weights = sfc_area_masked * land_mask
# Mask weights where data values are initially invalid in addition
# to applying the region mask.
weights = weights.where(np.isfinite(data))
- sum_weights = _sum_over_lat_lon(weights)
- return (_sum_over_lat_lon(data_masked*sfc_area*land_mask) /
- sum_weights)
-
- def av(self, data):
- """Time average of region-average time-series."""
- ts_ = self.ts(data)
- if 'year' not in ts_.coords:
- return ts_
- return ts_.mean('year')
-
- def std(self, data):
- """Standard deviation of region-average time-series."""
- ts_ = self.ts(data)
- if 'year' not in ts_.coords:
- return ts_
- return ts_.std('year')
+ weights_reg_sum = weights.sum(lon_str).sum(lat_str)
+ data_reg_sum = (data_masked * sfc_area_masked *
+ land_mask).sum(lat_str).sum(lon_str)
+ return data_reg_sum / weights_reg_sum
+
+ def av(self, data, lon_str=LON_STR, lat_str=LAT_STR,
+ land_mask_str=LAND_MASK_STR, sfc_area_str=SFC_AREA_STR):
+ """Time-average of region-averaged data.
+
+ Parameters
+ ----------
+ data : xarray.DataArray
+ The array to compute the regional time-average of
+ lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
+ The name of the latitude, longitude, land mask, and surface area
+ coordinates, respectively, in ``data``. Defaults are the
+ corresponding values in ``aospy.internal_names``.
+
+ Returns
+ -------
+ xarray.DataArray
+ The region-averaged and time-averaged data.
+
+ """
+ ts = self.ts(data, lon_str=lon_str, lat_str=lat_str,
+ land_mask_str=land_mask_str, sfc_area_str=sfc_area_str)
+ if YEAR_STR not in ts.coords:
+ return ts
+ else:
+ return ts.mean(YEAR_STR)
+
+ def std(self, data, lon_str=LON_STR, lat_str=LAT_STR,
+ land_mask_str=LAND_MASK_STR, sfc_area_str=SFC_AREA_STR):
+ """Temporal standard deviation of region-averaged data.
+
+ Parameters
+ ----------
+ data : xarray.DataArray
+ The array to compute the regional time-average of
+ lat_str, lon_str, land_mask_str, sfc_area_str : str, optional
+ The name of the latitude, longitude, land mask, and surface area
+ coordinates, respectively, in ``data``. Defaults are the
+ corresponding values in ``aospy.internal_names``.
+
+ Returns
+ -------
+ xarray.DataArray
+ The temporal standard deviation of the region-averaged data
+
+ """
+ ts = self.ts(data, lon_str=lon_str, lat_str=lat_str,
+ land_mask_str=land_mask_str, sfc_area_str=sfc_area_str)
+ if YEAR_STR not in ts.coords:
+ return ts
+ else:
+ return ts.std(YEAR_STR)
diff --git a/aospy/utils/__init__.py b/aospy/utils/__init__.py
index 36f40a3..e9da584 100644
--- a/aospy/utils/__init__.py
+++ b/aospy/utils/__init__.py
@@ -1,6 +1,9 @@
"""Subpackage comprising various utility functions used elsewhere in aospy."""
from . import io
+from . import longitude
+from .longitude import Longitude
from . import times
from . import vertcoord
-__all__ = ['io', 'times', 'vertcoord']
+
+__all__ = ['Longitude', 'io', 'longitude', 'times', 'vertcoord']
diff --git a/aospy/utils/longitude.py b/aospy/utils/longitude.py
new file mode 100644
index 0000000..f1d7e44
--- /dev/null
+++ b/aospy/utils/longitude.py
@@ -0,0 +1,253 @@
+#!/usr/bin/env python
+"""Functionality relating to parsing and comparing longitudes."""
+
+import numpy as np
+import xarray as xr
+
+
+def lon_to_0360(lon):
+ """Convert longitude(s) to be within [0, 360).
+
+ The Eastern hemisphere corresponds to 0 <= lon + (n*360) < 180, and the
+ Western Hemisphere corresponds to 180 <= lon + (n*360) < 360, where 'n' is
+ any integer (positive, negative, or zero).
+
+ Parameters
+ ----------
+ lon : scalar or sequence of scalars
+ One or more longitude values to be converted to lie in the [0, 360)
+ range
+
+ Returns
+ -------
+ If ``lon`` is a scalar, then a scalar of the same type in the range [0,
+ 360). If ``lon`` is array-like, then an array-like of the same type
+ with each element a scalar in the range [0, 360).
+
+ """
+ quotient = lon // 360
+ return lon - quotient*360
+
+
+def _lon_in_west_hem(lon):
+ if lon_to_0360(lon) >= 180:
+ return True
+ else:
+ return False
+
+
+def lon_to_pm180(lon):
+ """Convert longitude(s) to be within [-180, 180).
+
+ The Eastern hemisphere corresponds to 0 <= lon + (n*360) < 180, and the
+ Western Hemisphere corresponds to 180 <= lon + (n*360) < 360, where 'n' is
+ any integer (positive, negative, or zero).
+
+ Parameters
+ ----------
+ lon : scalar or sequence of scalars
+ One or more longitude values to be converted to lie in the [-180, 180)
+ range
+
+ Returns
+ -------
+ If ``lon`` is a scalar, then a scalar of the same type in the range
+ [-180, 180). If ``lon`` is array-like, then an array-like of the same
+ type with each element a scalar in the range [-180, 180).
+
+ """
+ lon0360 = lon_to_0360(lon)
+ if _lon_in_west_hem(lon0360):
+ return lon0360 - 360
+ else:
+ return lon0360
+
+
+def _maybe_cast_to_lon(obj, strict=False):
+ if isinstance(obj, Longitude):
+ return obj
+ try:
+ return Longitude(obj)
+ except (ValueError, TypeError) as e:
+ if strict:
+ raise type(e)(str(e))
+ else:
+ return obj
+
+
+def _other_to_lon(func):
+ """Wrapper for casting Longitude operator arguments to Longitude"""
+ def func_other_to_lon(obj, other):
+ return func(obj, _maybe_cast_to_lon(other))
+ return func_other_to_lon
+
+
+class Longitude(object):
+ """Geographic longitude.
+
+ Enables unambiguous comparison of longitudes using the standard comparison
+    operators, regardless of whether they were initially represented with a 0 to 360
+ convention, -180 to 180 convention, or anything else, and even if the
+ original convention differs between them.
+
+ Specifically, the ``<`` operator assesses if the first object is to the
+ west of the second object, with the standard convention that longitudes in
+ the Western Hemisphere are always to the west of longitudes in the Eastern
+ Hemisphere. The ``>`` operator is defined analogously. ``==``, ``>=``,
+ and ``<=`` are also all defined.
+
+ In addition to other Longitude objects, the operators can be used to
+ compare a Longitude object to anything that can be casted to a Longitude
+ object, or to any sequence (e.g. a list or xarray.DataArray) whose elements
+ can be casted to Longitude objects.
+
+ """
+ def __init__(self, value):
+ """
+ Parameters
+ ----------
+ value : {scalar, str}
+ Scalars get converted to longitudes using the convention that 0-180
+ corresponds to the Eastern Hemisphere, 180-360 corresponds to the
+ Western Hemisphere, 360-540 the Eastern Hemisphere, and so on,
+ including for negative numbers.
+
+ Strings must be castable to a float or be a positive number in the
+ range 0-180 followed by a single letter 'e' or 'w' (case
+ insensitive). For example, ``Longitude('10w')`` would yield a
+ ``Longitude`` object corresponding to 10 degrees west longitude.
+ """
+ try:
+ val_as_float = float(value)
+ except (ValueError, TypeError):
+ if not isinstance(value, str):
+ raise ValueError('value must be a scalar or a string')
+ if value[-1].lower() not in ('w', 'e'):
+ raise ValueError("string inputs must end in 'e' or 'w'")
+ try:
+ lon_value = float(value[:-1])
+ except ValueError:
+ raise ValueError('improperly formatted string')
+ if (lon_value < 0) or (lon_value > 180):
+                raise ValueError('Values given as strings with a '
+                                 'hemisphere identifier must have '
+                                 'numerical values between 0 and 180. '
+                                 'Value given: {}'.format(lon_value))
+ self._longitude = lon_value
+ self._hemisphere = value[-1].upper()
+ else:
+ lon_pm180 = lon_to_pm180(val_as_float)
+ if _lon_in_west_hem(val_as_float):
+ self._longitude = abs(lon_pm180)
+ self._hemisphere = 'W'
+ else:
+ self._longitude = lon_pm180
+ self._hemisphere = 'E'
+
+ @property
+ def longitude(self):
+ """(scalar) The unsigned numerical value of the longitude.
+
+ Always in the range 0 to 180. Must be combined with the ``hemisphere``
+        attribute to specify the exact longitude.
+
+ """
+ return self._longitude
+
+ @longitude.setter
+ def longitude(self, value):
+ raise ValueError("'longitude' property cannot be modified after "
+ "Longitude object has been created.")
+
+ @property
+ def hemisphere(self):
+ """{'W', 'E'} The longitude's hemisphere, either western or eastern."""
+ return self._hemisphere
+
+ @hemisphere.setter
+ def hemisphere(self, value):
+ raise ValueError("'hemisphere' property cannot be modified after "
+ "Longitude object has been created.")
+
+ def __repr__(self):
+ return "Longitude('{0}{1}')".format(self.longitude, self.hemisphere)
+
+ @_other_to_lon
+ def __eq__(self, other):
+ if isinstance(other, Longitude):
+ return (self.hemisphere == other.hemisphere and
+ self.longitude == other.longitude)
+ else:
+ return xr.apply_ufunc(np.equal, other, self)
+
+ @_other_to_lon
+ def __lt__(self, other):
+ if isinstance(other, Longitude):
+ if self.hemisphere == 'W':
+ if other.hemisphere == 'E':
+ return True
+ else:
+ return self.longitude > other.longitude
+ else:
+ if other.hemisphere == 'W':
+ return False
+ else:
+ return self.longitude < other.longitude
+ else:
+ return xr.apply_ufunc(np.greater, other, self)
+
+ @_other_to_lon
+ def __gt__(self, other):
+ if isinstance(other, Longitude):
+ if self.hemisphere == 'W':
+ if other.hemisphere == 'E':
+ return False
+ else:
+ return self.longitude < other.longitude
+ else:
+ if other.hemisphere == 'W':
+ return True
+ else:
+ return self.longitude > other.longitude
+ else:
+ return xr.apply_ufunc(np.less, other, self)
+
+ @_other_to_lon
+ def __le__(self, other):
+ if isinstance(other, Longitude):
+ return self < other or self == other
+ else:
+ return xr.apply_ufunc(np.greater_equal, other, self)
+
+ @_other_to_lon
+ def __ge__(self, other):
+ if isinstance(other, Longitude):
+ return self > other or self == other
+ else:
+ return xr.apply_ufunc(np.less_equal, other, self)
+
+ def to_0360(self):
+ """Convert longitude to its numerical value within [0, 360)."""
+ if self.hemisphere == 'W':
+ return -1*self.longitude + 360
+ else:
+ return self.longitude
+
+ def to_pm180(self):
+ """Convert longitude to its numerical value within [-180, 180)."""
+ if self.hemisphere == 'W':
+ return -1*self.longitude
+ else:
+ return self.longitude
+
+ @_other_to_lon
+ def __add__(self, other):
+ return Longitude(self.to_0360() + other.to_0360())
+
+ @_other_to_lon
+ def __sub__(self, other):
+ return Longitude(self.to_0360() - other.to_0360())
+
+
+if __name__ == '__main__':
+ pass
diff --git a/docs/api.rst b/docs/api.rst
index 4d258ba..79e2954 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -187,8 +187,8 @@ Utilities
aospy includes a number of utility functions that are used internally
and may also be useful to users for their own purposes. These include
-functions pertaining to input/output (IO), time arrays, andvertical
-coordinates.
+functions pertaining to input/output (IO), longitudes, time arrays,
+and vertical coordinates.
utils.io
--------
@@ -197,6 +197,13 @@ utils.io
:members:
:undoc-members:
+utils.longitude
+---------------
+
+.. automodule:: aospy.utils.longitude
+ :members:
+ :undoc-members:
+
utils.times
-----------
diff --git a/docs/examples.rst b/docs/examples.rst
index 2a26075..6402eba 100644
--- a/docs/examples.rst
+++ b/docs/examples.rst
@@ -116,7 +116,7 @@ a name for the run and an optional description.
float32 data to float64. This behavior is turned on by default. If you
would like to disable this behavior you can set the ``upcast_float32``
argument in your ``DataLoader`` constructors to ``False``.
-
+
Models
======
@@ -311,16 +311,20 @@ whole globe and at the Tropics:
globe = Region(
name='globe',
description='Entire globe',
- lat_bounds=(-90, 90),
- lon_bounds=(0, 360),
+ west_bound=0,
+ east_bound=360,
+ south_bound=-90,
+ north_bound=90,
do_land_mask=False
)
tropics = Region(
name='tropics',
description='Global tropics, defined as 30S-30N',
- lat_bounds=(-30, 30),
- lon_bounds=(0, 360),
+ west_bound=0,
+ east_bound=360,
+ south_bound=-30,
+ north_bound=30,
do_land_mask=False
)
example_proj.regions = [globe, tropics]
diff --git a/docs/whats-new.rst b/docs/whats-new.rst
index 6ce763e..66af034 100644
--- a/docs/whats-new.rst
+++ b/docs/whats-new.rst
@@ -11,6 +11,11 @@ v0.3 (unreleased)
Breaking Changes
~~~~~~~~~~~~~~~~
+- ``aospy.Region`` no longer can be instantiated using ``lat_bounds``
+ and ``lon_bounds`` keywords. These have been replaced with the more
+ explicit ``east_bound``, ``west_bound``, ``south_bound``, and
+ ``north_bound`` (:pull:`266`). By `Spencer Hill
+ <https://github.com/spencerahill>`_.
- Drop support for Python 3.4, since our core upstream dependency
xarray is also dropping it as of their 0.11 release (:pull:`255`).
By `Spencer Hill <https://github.com/spencerahill>`_.
@@ -39,6 +44,17 @@ Documentation
Enhancements
~~~~~~~~~~~~
+- Create ``utils.longitude`` module and ``Longitude`` class for
+ representing and comparing longitudes. Used internally by
+ ``aospy.Region`` to construct masks, but could also be useful for
+ users outside the standard aospy workflow (:pull:`266`). By
+ `Spencer Hill <https://github.com/spencerahill>`_.
+- Add support for ``Region`` methods ``mask_var``, ``ts``, ``av``, and
+ ``std`` for data that doesn't conform to aospy naming conventions,
+ making these methods now useful in more interactive contexts in
+ addition to within the standard main script-based work flow
+ (:pull:`266`). By `Spencer Hill
+ <https://github.com/spencerahill>`_.
- Raise an exception with an informative message if
``submit_mult_calcs`` (and thus the main script) generates zero
calculations, which can happen if one of the parameters is
@@ -72,6 +88,13 @@ Enhancements
Bug Fixes
~~~~~~~~~
+- Use the new ``Longitude`` class to support any longitude numbering
+ convention (e.g. -180 to 180, 0 to 360, or any other) for both
+ defining ``Region`` objects and for input data to be masked. Fixes
+ bug wherein a region could be silently partially clipped off when
+ masking input data with longitudes of a different numbering
+ convention. Fixes :issue:`229` via :pull:`266`. By `Spencer Hill
+ <https://github.com/spencerahill>`_.
- Cast input DataArrays with datatype ``np.float32`` to ``np.float64``
as a workaround for incorrectly computed means on float32 arrays in
bottleneck (see `pydata/xarray#1346
| Region definitions assume a [0, 360] longitude range
Just noticed this while using `Region.ts` on data that is on a [-180, 180] longitude grid, using the following region:
```python
sahel = Region(
name='sahel',
description='African Sahel',
mask_bounds=[((10, 20), (0, 40)), ((10, 20), (342, 360))],
do_land_mask=True
)
```
The (342, 360) bit gets dropped, since it is out of the array's (-180, 180) range, causing the region to be truncated to (0, 40). This is physically incorrect and thus a bug.
We need to translate all region definitions and the longitude arrays on which the regional reductions are operating to a standard longitude array, which by convention should be either (-180, 180) or (0, 360). This can be accomplished by noting that `(0, 180) + 360*i` corresponds to the Eastern Hemisphere and `(180, 360) + 360*j` corresponds to the Western hemisphere, where `i` and `j` are integers. E.g. `i=0, j=-1` corresponds to the (-180, 180) grid.
So the steps would be
1. Convert the region's longitude bounds to aospy's chosen reference longitude.
2. Convert the array's longitudes to aospy's chosen reference longitude.
3. Perform the reduction.
Once implemented, we should be able to support longitude arrays with arbitrary start and end values. | spencerahill/aospy | diff --git a/aospy/test/data/objects/examples.py b/aospy/test/data/objects/examples.py
index 0c2e110..7ddb311 100644
--- a/aospy/test/data/objects/examples.py
+++ b/aospy/test/data/objects/examples.py
@@ -91,14 +91,17 @@ sphum = Var(
globe = Region(
name='globe',
description='Entire globe',
- lat_bounds=(-90, 90),
- lon_bounds=(0, 360),
+ west_bound=0,
+ east_bound=360,
+ south_bound=-90,
+ north_bound=90,
do_land_mask=False
)
sahel = Region(
name='sahel',
description='African Sahel',
- mask_bounds=[((10, 20), (0, 40)), ((10, 20), (342, 360))],
+ mask_bounds=[(0, 40, 10, 20),
+ (342, 360, 10, 20)],
do_land_mask=True
)
diff --git a/aospy/test/test_calc_basic.py b/aospy/test/test_calc_basic.py
index e4000ed..b36e0aa 100755
--- a/aospy/test/test_calc_basic.py
+++ b/aospy/test/test_calc_basic.py
@@ -57,7 +57,10 @@ class TestCalcBasic(unittest.TestCase):
def tearDown(self):
for direc in [example_proj.direc_out, example_proj.tar_direc_out]:
- shutil.rmtree(direc)
+ try:
+ shutil.rmtree(direc)
+ except OSError:
+ pass
def test_annual_mean(self):
calc = Calc(intvl_out='ann', dtype_out_time='av', **self.test_params)
diff --git a/aospy/test/test_region.py b/aospy/test/test_region.py
index 0fbd962..3d9e0de 100644
--- a/aospy/test/test_region.py
+++ b/aospy/test/test_region.py
@@ -3,13 +3,29 @@ import pytest
import xarray as xr
from aospy import Region
-from aospy.region import _get_land_mask
-from aospy.internal_names import (LAT_STR, LON_STR,
- SFC_AREA_STR, LAND_MASK_STR)
+from aospy.region import (
+ _get_land_mask,
+ BoundsRect,
+)
+from aospy.internal_names import (
+ LAT_STR,
+ LON_STR,
+ SFC_AREA_STR,
+ LAND_MASK_STR
+)
+from aospy.utils import Longitude
@pytest.fixture()
-def data_for_reg_calcs():
+def values_for_reg_arr():
+ return np.array([[-2., 1.],
+ [np.nan, 5.],
+ [3., 3.],
+ [4., 4.2]])
+
+
[email protected]()
+def data_for_reg_calcs(values_for_reg_arr):
lat = [-10., 1., 10., 20.]
lon = [1., 10.]
sfc_area = [0.5, 1., 0.5, 0.25]
@@ -23,62 +39,160 @@ def data_for_reg_calcs():
sfc_area, _ = xr.broadcast(sfc_area, lon)
land_mask, _ = xr.broadcast(land_mask, lon)
- da = xr.DataArray([[2., 2.],
- [np.nan, 5.],
- [3., 3.],
- [4., 4.]], coords=[lat, lon])
- da[SFC_AREA_STR] = sfc_area
- da[LAND_MASK_STR] = land_mask
+ da = xr.DataArray(values_for_reg_arr, coords=[lat, lon])
+ da.coords[SFC_AREA_STR] = sfc_area
+ da.coords[LAND_MASK_STR] = land_mask
return da
+_alt_names = {LON_STR: 'LONS', LAT_STR: 'LATS', LAND_MASK_STR: 'lm',
+ SFC_AREA_STR: 'AREA'}
+
+
[email protected]()
+def data_reg_alt_names(data_for_reg_calcs):
+ return data_for_reg_calcs.rename(_alt_names)
+
+
region_no_land_mask = Region(
name='test',
description='Test region with no land mask',
- lat_bounds=(0., 90.),
- lon_bounds=(0., 5.),
+ west_bound=0.,
+ east_bound=5,
+ south_bound=0,
+ north_bound=90.,
do_land_mask=False
)
region_land_mask = Region(
name='test',
- description='Test region with no land mask',
- lat_bounds=(0., 90.),
- lon_bounds=(0., 5.),
+ description='Test region with land mask',
+ west_bound=0.,
+ east_bound=5,
+ south_bound=0,
+ north_bound=90.,
do_land_mask=True
)
+_expected_mask = [[False, False],
+ [True, False],
+ [True, False],
+ [True, False]]
+
+
+def test_get_land_mask_without_land_mask(data_for_reg_calcs):
+ result = _get_land_mask(data_for_reg_calcs,
+ region_no_land_mask.do_land_mask)
+ expected = 1
+ assert result == expected
+
+
+def test_get_land_mask_with_land_mask(data_for_reg_calcs):
+ result = _get_land_mask(data_for_reg_calcs, region_land_mask.do_land_mask)
+ expected = data_for_reg_calcs[LAND_MASK_STR]
+ xr.testing.assert_identical(result, expected)
+
+
+def test_get_land_mask_non_aospy_name(data_reg_alt_names):
+ result = _get_land_mask(data_reg_alt_names, region_land_mask.do_land_mask,
+ land_mask_str=_alt_names[LAND_MASK_STR])
+ expected = data_reg_alt_names[_alt_names[LAND_MASK_STR]]
+ xr.testing.assert_identical(result, expected)
+
+
+def test_region_init():
+ region = Region(
+ name='test',
+ description='region description',
+ west_bound=0.,
+ east_bound=5,
+ south_bound=0,
+ north_bound=90.,
+ do_land_mask=True
+ )
+ assert region.name == 'test'
+ assert region.description == 'region description'
+ assert isinstance(region.mask_bounds, tuple)
+ assert len(region.mask_bounds) == 1
+ assert isinstance(region.mask_bounds[0], BoundsRect)
+ assert np.all(region.mask_bounds[0] ==
+ (Longitude(0.), Longitude(5), 0, 90.))
+ assert region.do_land_mask is True
+
+
+def test_region_init_mult_rect():
+ bounds_in = [[1, 2, 3, 4], (-12, -30, 2.3, 9)]
+ region = Region(name='test', mask_bounds=bounds_in)
+ assert isinstance(region.mask_bounds, tuple)
+ assert len(region.mask_bounds) == 2
+ for (w, e, s, n), bounds in zip(bounds_in, region.mask_bounds):
+ assert isinstance(bounds, tuple)
+ assert np.all(bounds == (Longitude(w), Longitude(e), s, n))
+
+
+def test_region_init_bad_bounds():
+ with pytest.raises(ValueError):
+ Region(mask_bounds=[(1, 2, 3)])
+ Region(mask_bounds=[(1, 2, 3, 4),
+ (1, 2, 3)])
+
+
+def test_make_mask_single_rect(data_for_reg_calcs):
+ result = region_land_mask._make_mask(data_for_reg_calcs)
+ expected = xr.DataArray(_expected_mask, dims=[LAT_STR, LON_STR],
+ coords={LAT_STR: data_for_reg_calcs[LAT_STR],
+ LON_STR: data_for_reg_calcs[LON_STR]})
+ xr.testing.assert_equal(result.transpose(), expected)
+
+
+def test_make_mask_mult_rect(data_for_reg_calcs):
+ mask_bounds = (region_land_mask.mask_bounds[0], [0, 360, -20, -5])
+ region = Region(name='mult_rect', mask_bounds=mask_bounds)
+ result = region._make_mask(data_for_reg_calcs)
+ expected_values = [[True, True],
+ [True, False],
+ [True, False],
+ [True, False]]
+ expected = xr.DataArray(expected_values, dims=[LAT_STR, LON_STR],
+ coords={LAT_STR: data_for_reg_calcs[LAT_STR],
+ LON_STR: data_for_reg_calcs[LON_STR]})
+ xr.testing.assert_equal(result.transpose(), expected)
+
+
@pytest.mark.parametrize(
'region',
[region_no_land_mask, region_land_mask],
ids=['no-land-mask', 'land-mask'])
def test_mask_var(data_for_reg_calcs, region):
- # Test region masks first row and second column
- # of test data. Note that first element of
- # second row is np.nan in initial dataset.
+ # Test region masks first row and second column of test data. Note that
+ # first element of second row is np.nan in initial dataset.
expected_data = [[np.nan, np.nan],
[np.nan, np.nan],
[3., np.nan],
[4., np.nan]]
expected = data_for_reg_calcs.copy(deep=True)
expected.values = expected_data
-
result = region.mask_var(data_for_reg_calcs)
xr.testing.assert_identical(result, expected)
-def test_get_land_mask_without_land_mask(data_for_reg_calcs):
- result = _get_land_mask(data_for_reg_calcs,
- region_no_land_mask.do_land_mask)
- expected = 1
- assert result == expected
-
-
-def test_get_land_mask_with_land_mask(data_for_reg_calcs):
- result = _get_land_mask(data_for_reg_calcs, region_land_mask.do_land_mask)
- expected = data_for_reg_calcs[LAND_MASK_STR]
[email protected](
+ 'region',
+ [region_no_land_mask, region_land_mask],
+ ids=['no-land-mask', 'land-mask'])
+def test_mask_var_non_aospy_names(data_reg_alt_names, region):
+ # Test region masks first row and second column of test data. Note that
+ # first element of second row is np.nan in initial dataset.
+ expected_data = [[np.nan, np.nan],
+ [np.nan, np.nan],
+ [3., np.nan],
+ [4., np.nan]]
+ expected = data_reg_alt_names.copy(deep=True)
+ expected.values = expected_data
+ result = region.mask_var(data_reg_alt_names, lon_str=_alt_names[LON_STR],
+ lat_str=_alt_names[LAT_STR])
xr.testing.assert_identical(result, expected)
@@ -97,3 +211,15 @@ def test_ts_land_mask(data_for_reg_calcs):
result = region_land_mask.ts(data_for_reg_calcs)
expected = xr.DataArray(data_for_reg_calcs.values[3, 0])
xr.testing.assert_identical(result, expected)
+
+
+_map_to_alt_names = {'lon_str': _alt_names[LON_STR],
+ 'lat_str': _alt_names[LAT_STR],
+ 'land_mask_str': _alt_names[LAND_MASK_STR],
+ 'sfc_area_str': _alt_names[SFC_AREA_STR]}
+
+
+def test_ts_non_aospy_names(data_reg_alt_names):
+ result = region_land_mask.ts(data_reg_alt_names, **_map_to_alt_names)
+ expected = xr.DataArray(data_reg_alt_names.values[3, 0])
+ xr.testing.assert_identical(result, expected)
diff --git a/aospy/test/test_utils_longitude.py b/aospy/test/test_utils_longitude.py
new file mode 100644
index 0000000..1dc254e
--- /dev/null
+++ b/aospy/test/test_utils_longitude.py
@@ -0,0 +1,178 @@
+#!/usr/bin/env python
+import numpy as np
+import pytest
+import xarray as xr
+
+from aospy.utils.longitude import Longitude, _maybe_cast_to_lon
+
+
+_good_init_vals_attrs_objs = {
+ -10: [10, 'W', Longitude('10W')],
+ 190.2: [169.8, 'W', Longitude('169.8W')],
+ 25: [25, 'E', Longitude('25E')],
+ 365: [5, 'E', Longitude('5E')],
+ '45.5e': [45.5, 'E', Longitude('45.5E')],
+ '22.2w': [22.2, 'W', Longitude('22.2W')],
+ '0': [0, 'E', Longitude('0E')],
+ }
+
+
+_bad_init_vals = ['10ee', '-20e', '190w', None, 'abc', {'a': 1}]
+
+
[email protected](('val', 'attrs_and_obj'),
+ zip(_good_init_vals_attrs_objs.keys(),
+ _good_init_vals_attrs_objs.values()))
+def test_longitude_init_good(val, attrs_and_obj):
+ obj = Longitude(val)
+ expected_lon = attrs_and_obj[0]
+ expected_hem = attrs_and_obj[1]
+ expected_obj = attrs_and_obj[2]
+ assert obj.longitude == expected_lon
+ assert obj.hemisphere == expected_hem
+ assert Longitude(val) == expected_obj
+
+
+def test_longitude_properties():
+ lon = Longitude(5)
+ with pytest.raises(ValueError):
+ lon.longitude = 10
+ lon.hemisphere = 'W'
+
+
[email protected](
+ ('obj', 'expected_val'),
+ [(Longitude('10w'), "Longitude('10.0W')"),
+ (Longitude(0), "Longitude('0.0E')"),
+ (Longitude(180), "Longitude('180.0W')")])
+def test_longitude_repr(obj, expected_val):
+ assert obj.__repr__() == expected_val
+
+
[email protected]('bad_val', _bad_init_vals)
+def test_longitude_init_bad(bad_val):
+ with pytest.raises(ValueError):
+ Longitude(bad_val)
+
+
[email protected]('val', _good_init_vals_attrs_objs.keys())
+def test_maybe_cast_to_lon_good(val):
+ assert isinstance(_maybe_cast_to_lon(val), Longitude)
+
+
[email protected]('bad_val', _bad_init_vals)
+def test_maybe_cast_to_lon_bad(bad_val):
+ assert isinstance(_maybe_cast_to_lon(bad_val), type(bad_val))
+ with pytest.raises((ValueError, TypeError)):
+ _maybe_cast_to_lon(bad_val, strict=True)
+
+
[email protected](
+ ('obj1', 'obj2'),
+ [(Longitude('100W'), Longitude('100W')),
+ (Longitude('90E'), Longitude('90E')),
+ (Longitude(0), Longitude(0)),
+ (Longitude('0E'), 0),
+ (Longitude('0E'), 720),
+ (Longitude('0E'), [0, 720]),
+ (Longitude('0E'), xr.DataArray([0, 720]))])
+def test_lon_eq(obj1, obj2):
+ assert np.all(obj1 == obj2)
+ assert np.all(obj2 == obj1)
+
+
[email protected](
+ ('obj1', 'obj2'),
+ [(Longitude('100W'), Longitude('90W')),
+ (Longitude('90E'), Longitude('100E')),
+ (Longitude('10W'), Longitude('0E')),
+ (Longitude('0E'), 10),
+ (Longitude('0E'), [5, 10]),
+ (Longitude('0E'), xr.DataArray([5, 10]))])
+def test_lon_lt(obj1, obj2):
+ assert np.all(obj1 < obj2)
+ assert np.all(obj2 > obj1)
+
+
[email protected](
+ ('obj1', 'obj2'),
+ [(Longitude('90W'), Longitude('100W')),
+ (Longitude('100E'), Longitude('90E')),
+ (Longitude('0E'), Longitude('10W')),
+ (Longitude('0E'), -10),
+ (Longitude('0E'), [-10, -5]),
+ (Longitude('0E'), xr.DataArray([-10, -5]))])
+def test_lon_gt(obj1, obj2):
+ assert np.all(obj1 > obj2)
+ assert np.all(obj2 < obj1)
+
+
[email protected](
+ ('obj1', 'obj2'),
+ [(Longitude('100W'), Longitude('100W')),
+ (Longitude('90E'), Longitude('90E')),
+ (Longitude(0), Longitude(0)),
+ (Longitude('100W'), Longitude('90W')),
+ (Longitude('90E'), Longitude('100E')),
+ (Longitude('10W'), Longitude('0E')),
+ (Longitude('0E'), 10),
+ (Longitude('0E'), [10, 0]),
+ (Longitude('0E'), xr.DataArray([10, 0]))])
+def test_lon_leq(obj1, obj2):
+ assert np.all(obj1 <= obj2)
+    assert np.all(obj2 >= obj1)
+
+
[email protected](
+ ('obj1', 'obj2'),
+ [(Longitude('100W'), Longitude('100W')),
+ (Longitude('90E'), Longitude('90E')),
+ (Longitude(0), Longitude(0)),
+ (Longitude('90W'), Longitude('100W')),
+ (Longitude('100E'), Longitude('90E')),
+ (Longitude('0E'), Longitude('10W')),
+ (Longitude('0E'), -10),
+ (Longitude('0E'), [0, -10]),
+ (Longitude('0E'), xr.DataArray([0, -10]))])
+def test_lon_geq(obj1, obj2):
+ assert np.all(obj1 >= obj2)
+ assert np.all(obj2 <= obj1)
+
+
[email protected](
+ ('obj', 'expected_val'),
+ [(Longitude('100W'), 260),
+ (Longitude(0), 0),
+ (Longitude('20E'), 20)])
+def test_to_0360(obj, expected_val):
+ assert obj.to_0360() == expected_val
+
+
[email protected](
+ ('obj', 'expected_val'),
+ [(Longitude('100W'), -100),
+ (Longitude(0), 0),
+ (Longitude('20E'), 20)])
+def test_to_pm180(obj, expected_val):
+ assert obj.to_pm180() == expected_val
+
+
[email protected](
+ ('obj1', 'obj2', 'expected_val'),
+ [(Longitude(1), Longitude(1), Longitude(2)),
+ (Longitude(175), Longitude(10), Longitude('175W'))])
+def test_lon_add(obj1, obj2, expected_val):
+ assert obj1 + obj2 == expected_val
+
+
[email protected](
+ ('obj1', 'obj2', 'expected_val'),
+ [(Longitude(1), Longitude(1), Longitude(0)),
+ (Longitude(185), Longitude(10), Longitude('175E')),
+ (Longitude(370), Longitude(20), Longitude('10W'))])
+def test_lon_sub(obj1, obj2, expected_val):
+ assert obj1 - obj2 == expected_val
+
+
+if __name__ == '__main__':
+ pass
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 7
} | 0.2 | {
"env_vars": null,
"env_yml_path": [
"ci/environment-py36.yml"
],
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "environment.yml",
"pip_packages": [
"pytest",
"flake8",
"pytest-cov",
"pytest-catchlog"
],
"pre_install": null,
"python": "3.5",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/spencerahill/aospy.git@098da63959f5f67871950d48a799d86325c902d2#egg=aospy
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1633990451307/work
async_generator @ file:///home/conda/feedstock_root/build_artifacts/async_generator_1722652753231/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1671632566681/work
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1702571698061/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work
bokeh @ file:///home/conda/feedstock_root/build_artifacts/bokeh_1625756939897/work
certifi==2021.5.30
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1631636256886/work
cftime @ file:///home/conda/feedstock_root/build_artifacts/cftime_1632539733990/work
charset-normalizer==2.0.12
click==7.1.2
cloudpickle @ file:///home/conda/feedstock_root/build_artifacts/cloudpickle_1674202310934/work
contextvars==2.4
coverage==6.2
coveralls==3.3.1
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
cytoolz==0.11.0
dask @ file:///home/conda/feedstock_root/build_artifacts/dask-core_1614995065708/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
distributed @ file:///home/conda/feedstock_root/build_artifacts/distributed_1615002625500/work
docopt==0.6.2
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
flake8 @ file:///home/conda/feedstock_root/build_artifacts/flake8_1659645013175/work
fsspec @ file:///home/conda/feedstock_root/build_artifacts/fsspec_1674184942191/work
future @ file:///home/conda/feedstock_root/build_artifacts/future_1610147328086/work
HeapDict==1.0.1
idna==3.10
immutables @ file:///home/conda/feedstock_root/build_artifacts/immutables_1628601257972/work
importlib-metadata==4.2.0
iniconfig @ file:///home/conda/feedstock_root/build_artifacts/iniconfig_1603384189793/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1620912934572/work/dist/ipykernel-5.5.5-py3-none-any.whl
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1609697613279/work
ipython_genutils @ file:///home/conda/feedstock_root/build_artifacts/ipython_genutils_1716278396992/work
ipywidgets @ file:///home/conda/feedstock_root/build_artifacts/ipywidgets_1679421482533/work
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1605054537831/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1636510082894/work
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema_1634752161479/work
jupyter @ file:///home/conda/feedstock_root/build_artifacts/jupyter_1696255489086/work
jupyter-client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1642858610849/work
jupyter-console @ file:///home/conda/feedstock_root/build_artifacts/jupyter_console_1676328545892/work
jupyter-core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1631852698933/work
jupyterlab-pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1601375948261/work
jupyterlab-widgets @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_widgets_1655961217661/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1610099771815/work
locket @ file:///home/conda/feedstock_root/build_artifacts/locket_1650660393415/work
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1621455668064/work
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1611858699142/work
mccabe @ file:///home/conda/feedstock_root/build_artifacts/mccabe_1643049622439/work
mistune @ file:///home/conda/feedstock_root/build_artifacts/mistune_1673904152039/work
more-itertools @ file:///home/conda/feedstock_root/build_artifacts/more-itertools_1690211628840/work
msgpack @ file:///home/conda/feedstock_root/build_artifacts/msgpack-python_1610121702224/work
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1637327213451/work
nbconvert @ file:///home/conda/feedstock_root/build_artifacts/nbconvert_1605401832871/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1617383142101/work
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1705850609492/work
netCDF4 @ file:///home/conda/feedstock_root/build_artifacts/netcdf4_1633096406418/work
notebook @ file:///home/conda/feedstock_root/build_artifacts/notebook_1616419146127/work
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1626681920064/work
olefile @ file:///home/conda/feedstock_root/build_artifacts/olefile_1602866521163/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.1.5
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1595548966091/work
partd @ file:///home/conda/feedstock_root/build_artifacts/partd_1617910651905/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///home/conda/feedstock_root/build_artifacts/pillow_1630696616009/work
pluggy @ file:///home/conda/feedstock_root/build_artifacts/pluggy_1631522669284/work
prometheus-client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1689032443210/work
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1670414775770/work
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1610127101219/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
py @ file:///home/conda/feedstock_root/build_artifacts/py_1636301881863/work
pycodestyle @ file:///home/conda/feedstock_root/build_artifacts/pycodestyle_1659638152915/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pyflakes @ file:///home/conda/feedstock_root/build_artifacts/pyflakes_1659210156976/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1672682006896/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1724616129934/work
PyQt5==5.12.3
PyQt5_sip==4.19.18
PyQtChart==5.12
PyQtWebEngine==5.12.1
pyrsistent @ file:///home/conda/feedstock_root/build_artifacts/pyrsistent_1610146795286/work
pytest==6.2.5
pytest-catchlog==1.2.2
pytest-cov==4.0.0
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1693930252784/work
PyYAML==5.4.1
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1631793305981/work
qtconsole @ file:///home/conda/feedstock_root/build_artifacts/qtconsole-base_1640876679830/work
QtPy @ file:///home/conda/feedstock_root/build_artifacts/qtpy_1643828301492/work
requests==2.27.1
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1629411471490/work
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1682601222253/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
sortedcontainers @ file:///home/conda/feedstock_root/build_artifacts/sortedcontainers_1621217038088/work
tblib @ file:///home/conda/feedstock_root/build_artifacts/tblib_1616261298899/work
terminado @ file:///home/conda/feedstock_root/build_artifacts/terminado_1631128154882/work
testpath @ file:///home/conda/feedstock_root/build_artifacts/testpath_1645693042223/work
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tomli==1.2.3
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1610094701020/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1631041982274/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1644850595256/work
urllib3==1.26.20
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1699959196938/work
webencodings @ file:///home/conda/feedstock_root/build_artifacts/webencodings_1694681268211/work
widgetsnbextension @ file:///home/conda/feedstock_root/build_artifacts/widgetsnbextension_1655939017940/work
xarray @ file:///home/conda/feedstock_root/build_artifacts/xarray_1621474818012/work
zict==2.0.0
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1633302054558/work
| name: aospy
channels:
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- alsa-lib=1.2.7.2=h166bdaf_0
- argon2-cffi=21.1.0=py36h8f6f2f9_0
- async_generator=1.10=pyhd8ed1ab_1
- attrs=22.2.0=pyh71513ae_0
- backcall=0.2.0=pyh9f0ad1d_0
- backports=1.0=pyhd8ed1ab_4
- backports.functools_lru_cache=2.0.0=pyhd8ed1ab_0
- bleach=6.1.0=pyhd8ed1ab_0
- bokeh=2.3.3=py36h5fab9bb_0
- bzip2=1.0.8=h4bc722e_7
- c-ares=1.34.4=hb9d3cd8_0
- ca-certificates=2025.1.31=hbcca054_0
- certifi=2021.5.30=py36h5fab9bb_0
- cffi=1.14.6=py36hd8eec40_1
- cftime=1.5.1=py36he33b4a0_0
- click=7.1.2=pyh9f0ad1d_0
- cloudpickle=2.2.1=pyhd8ed1ab_0
- contextvars=2.4=py_0
- curl=7.87.0=h6312ad2_0
- cycler=0.11.0=pyhd8ed1ab_0
- cytoolz=0.11.0=py36h8f6f2f9_3
- dask=2021.3.0=pyhd8ed1ab_0
- dask-core=2021.3.0=pyhd8ed1ab_0
- dbus=1.13.6=h5008d03_3
- decorator=5.1.1=pyhd8ed1ab_0
- defusedxml=0.7.1=pyhd8ed1ab_0
- distributed=2021.3.0=py36h5fab9bb_0
- entrypoints=0.4=pyhd8ed1ab_0
- expat=2.6.4=h5888daf_0
- flake8=5.0.4=pyhd8ed1ab_0
- font-ttf-dejavu-sans-mono=2.37=hab24e00_0
- font-ttf-inconsolata=3.000=h77eed37_0
- font-ttf-source-code-pro=2.038=h77eed37_0
- font-ttf-ubuntu=0.83=h77eed37_3
- fontconfig=2.14.2=h14ed4e7_0
- fonts-conda-ecosystem=1=0
- fonts-conda-forge=1=0
- freetype=2.12.1=h267a509_2
- fsspec=2023.1.0=pyhd8ed1ab_0
- future=0.18.2=py36h5fab9bb_3
- gettext=0.23.1=h5888daf_0
- gettext-tools=0.23.1=h5888daf_0
- glib=2.80.2=hf974151_0
- glib-tools=2.80.2=hb6ce0ca_0
- gst-plugins-base=1.20.3=h57caac4_2
- gstreamer=1.20.3=hd4edc92_2
- hdf4=4.2.15=h9772cbc_5
- hdf5=1.12.1=nompi_h2386368_104
- heapdict=1.0.1=py_0
- icu=69.1=h9c3ff4c_0
- immutables=0.16=py36h8f6f2f9_0
- importlib_metadata=4.8.1=hd8ed1ab_1
- iniconfig=1.1.1=pyh9f0ad1d_0
- ipykernel=5.5.5=py36hcb3619a_0
- ipython=7.16.1=py36he448a4c_2
- ipython_genutils=0.2.0=pyhd8ed1ab_1
- ipywidgets=7.7.4=pyhd8ed1ab_0
- jedi=0.17.2=py36h5fab9bb_1
- jinja2=3.0.3=pyhd8ed1ab_0
- jpeg=9e=h0b41bf4_3
- jsonschema=4.1.2=pyhd8ed1ab_0
- jupyter=1.0.0=pyhd8ed1ab_10
- jupyter_client=7.1.2=pyhd8ed1ab_0
- jupyter_console=6.5.1=pyhd8ed1ab_0
- jupyter_core=4.8.1=py36h5fab9bb_0
- jupyterlab_pygments=0.1.2=pyh9f0ad1d_0
- jupyterlab_widgets=1.1.1=pyhd8ed1ab_0
- keyutils=1.6.1=h166bdaf_0
- kiwisolver=1.3.1=py36h605e78d_1
- krb5=1.20.1=hf9c8cef_0
- lcms2=2.12=hddcbb42_0
- ld_impl_linux-64=2.43=h712a8e2_4
- lerc=3.0=h9c3ff4c_0
- libasprintf=0.23.1=h8e693c7_0
- libasprintf-devel=0.23.1=h8e693c7_0
- libblas=3.9.0=20_linux64_openblas
- libcblas=3.9.0=20_linux64_openblas
- libclang=13.0.1=default_hb5137d0_10
- libcurl=7.87.0=h6312ad2_0
- libdeflate=1.10=h7f98852_0
- libedit=3.1.20250104=pl5321h7949ede_0
- libev=4.33=hd590300_2
- libevent=2.1.10=h9b69904_4
- libexpat=2.6.4=h5888daf_0
- libffi=3.4.6=h2dba641_0
- libgcc=14.2.0=h767d61c_2
- libgcc-ng=14.2.0=h69a702a_2
- libgettextpo=0.23.1=h5888daf_0
- libgettextpo-devel=0.23.1=h5888daf_0
- libgfortran=14.2.0=h69a702a_2
- libgfortran-ng=14.2.0=h69a702a_2
- libgfortran5=14.2.0=hf1ad2bd_2
- libglib=2.80.2=hf974151_0
- libgomp=14.2.0=h767d61c_2
- libiconv=1.18=h4ce23a2_1
- liblapack=3.9.0=20_linux64_openblas
- libllvm13=13.0.1=hf817b99_2
- liblzma=5.6.4=hb9d3cd8_0
- liblzma-devel=5.6.4=hb9d3cd8_0
- libnetcdf=4.8.1=nompi_h329d8a1_102
- libnghttp2=1.51.0=hdcd2b5c_0
- libnsl=2.0.1=hd590300_0
- libogg=1.3.5=h4ab18f5_0
- libopenblas=0.3.25=pthreads_h413a1c8_0
- libopus=1.3.1=h7f98852_1
- libpng=1.6.43=h2797004_0
- libpq=14.5=h2baec63_5
- libsodium=1.0.18=h36c2ea0_1
- libsqlite=3.46.0=hde9e2c9_0
- libssh2=1.10.0=haa6b8db_3
- libstdcxx=14.2.0=h8f9b012_2
- libstdcxx-ng=14.2.0=h4852527_2
- libtiff=4.3.0=h0fcbabc_4
- libuuid=2.38.1=h0b41bf4_0
- libvorbis=1.3.7=h9c3ff4c_0
- libwebp-base=1.5.0=h851e524_0
- libxcb=1.13=h7f98852_1004
- libxkbcommon=1.0.3=he3ba5ed_0
- libxml2=2.9.14=haae042b_4
- libzip=1.9.2=hc869a4a_1
- libzlib=1.2.13=h4ab18f5_6
- locket=1.0.0=pyhd8ed1ab_0
- markupsafe=2.0.1=py36h8f6f2f9_0
- matplotlib=3.3.4=py36h5fab9bb_0
- matplotlib-base=3.3.4=py36hd391965_0
- mccabe=0.7.0=pyhd8ed1ab_0
- mistune=0.8.4=pyh1a96a4e_1006
- more-itertools=10.0.0=pyhd8ed1ab_0
- msgpack-python=1.0.2=py36h605e78d_1
- mysql-common=8.0.32=h14678bc_0
- mysql-libs=8.0.32=h54cf53e_0
- nbclient=0.5.9=pyhd8ed1ab_0
- nbconvert=6.0.7=py36h5fab9bb_3
- nbformat=5.1.3=pyhd8ed1ab_0
- ncurses=6.5=h2d0b736_3
- nest-asyncio=1.6.0=pyhd8ed1ab_0
- netcdf4=1.5.7=nompi_py36h775750b_103
- notebook=6.3.0=py36h5fab9bb_0
- nspr=4.36=h5888daf_0
- nss=3.100=hca3bf56_0
- numpy=1.19.5=py36hfc0c790_2
- olefile=0.46=pyh9f0ad1d_1
- openjpeg=2.5.0=h7d73246_0
- openssl=1.1.1w=hd590300_0
- packaging=21.3=pyhd8ed1ab_0
- pandas=1.1.5=py36h284efc9_0
- pandoc=2.19.2=h32600fe_2
- pandocfilters=1.5.0=pyhd8ed1ab_0
- parso=0.7.1=pyh9f0ad1d_0
- partd=1.2.0=pyhd8ed1ab_0
- pcre2=10.43=hcad00b1_0
- pexpect=4.8.0=pyh1a96a4e_2
- pickleshare=0.7.5=py_1003
- pillow=8.3.2=py36h676a545_0
- pip=21.3.1=pyhd8ed1ab_0
- pluggy=1.0.0=py36h5fab9bb_1
- prometheus_client=0.17.1=pyhd8ed1ab_0
- prompt-toolkit=3.0.36=pyha770c72_0
- prompt_toolkit=3.0.36=hd8ed1ab_0
- psutil=5.8.0=py36h8f6f2f9_1
- pthread-stubs=0.4=hb9d3cd8_1002
- ptyprocess=0.7.0=pyhd3deb0d_0
- py=1.11.0=pyh6c4a22f_0
- pycodestyle=2.9.1=pyhd8ed1ab_0
- pycparser=2.21=pyhd8ed1ab_0
- pyflakes=2.5.0=pyhd8ed1ab_0
- pygments=2.14.0=pyhd8ed1ab_0
- pyparsing=3.1.4=pyhd8ed1ab_0
- pyqt=5.12.3=py36h5fab9bb_7
- pyqt-impl=5.12.3=py36h7ec31b9_7
- pyqt5-sip=4.19.18=py36hc4f0c31_7
- pyqtchart=5.12=py36h7ec31b9_7
- pyqtwebengine=5.12.1=py36h7ec31b9_7
- pyrsistent=0.17.3=py36h8f6f2f9_2
- pytest=6.2.5=py36h5fab9bb_0
- python=3.6.15=hb7a2778_0_cpython
- python-dateutil=2.8.2=pyhd8ed1ab_0
- python_abi=3.6=2_cp36m
- pytz=2023.3.post1=pyhd8ed1ab_0
- pyyaml=5.4.1=py36h8f6f2f9_1
- pyzmq=22.3.0=py36h7068817_0
- qt=5.12.9=h1304e3e_6
- qtconsole-base=5.2.2=pyhd8ed1ab_1
- qtpy=2.0.1=pyhd8ed1ab_0
- readline=8.2=h8c095d6_2
- scipy=1.5.3=py36h81d768a_1
- send2trash=1.8.2=pyh41d4057_0
- setuptools=58.0.4=py36h5fab9bb_2
- six=1.16.0=pyh6c4a22f_0
- sortedcontainers=2.4.0=pyhd8ed1ab_0
- sqlite=3.46.0=h6d4b2fc_0
- tblib=1.7.0=pyhd8ed1ab_0
- terminado=0.12.1=py36h5fab9bb_0
- testpath=0.6.0=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- toml=0.10.2=pyhd8ed1ab_0
- toolz=0.12.0=pyhd8ed1ab_0
- tornado=6.1=py36h8f6f2f9_1
- traitlets=4.3.3=pyhd8ed1ab_2
- typing-extensions=4.1.1=hd8ed1ab_0
- typing_extensions=4.1.1=pyha770c72_0
- wcwidth=0.2.10=pyhd8ed1ab_0
- webencodings=0.5.1=pyhd8ed1ab_2
- wheel=0.37.1=pyhd8ed1ab_0
- widgetsnbextension=3.6.1=pyha770c72_0
- xarray=0.18.2=pyhd8ed1ab_0
- xorg-libxau=1.0.12=hb9d3cd8_0
- xorg-libxdmcp=1.1.5=hb9d3cd8_0
- xz=5.6.4=hbcc6ac9_0
- xz-gpl-tools=5.6.4=hbcc6ac9_0
- xz-tools=5.6.4=hb9d3cd8_0
- yaml=0.2.5=h7f98852_2
- zeromq=4.3.5=h59595ed_1
- zict=2.0.0=py_0
- zipp=3.6.0=pyhd8ed1ab_0
- zlib=1.2.13=h4ab18f5_6
- zstd=1.5.6=ha6fb4c9_0
- pip:
- charset-normalizer==2.0.12
- coverage==6.2
- coveralls==3.3.1
- docopt==0.6.2
- idna==3.10
- importlib-metadata==4.2.0
- pytest-catchlog==1.2.2
- pytest-cov==4.0.0
- requests==2.27.1
- tomli==1.2.3
- urllib3==1.26.20
prefix: /opt/conda/envs/aospy
| [
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[av]",
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[std]",
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[ts]",
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[reg.av]",
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[reg.std]",
"aospy/test/test_calc_basic.py::test_calc_object_string_time_options[reg.ts]",
"aospy/test/test_calc_basic.py::test_calc_object_time_options",
"aospy/test/test_calc_basic.py::test_attrs[--None--]",
"aospy/test/test_calc_basic.py::test_attrs[m--None-m-]",
"aospy/test/test_calc_basic.py::test_attrs[-rain-None--rain]",
"aospy/test/test_calc_basic.py::test_attrs[m-rain-None-m-rain]",
"aospy/test/test_calc_basic.py::test_attrs[--vert_av--]",
"aospy/test/test_calc_basic.py::test_attrs[m--vert_av-m-]",
"aospy/test/test_calc_basic.py::test_attrs[-rain-vert_av--rain]",
"aospy/test/test_calc_basic.py::test_attrs[m-rain-vert_av-m-rain]",
"aospy/test/test_calc_basic.py::test_attrs[--vert_int-(vertical",
"aospy/test/test_calc_basic.py::test_attrs[m--vert_int-(vertical",
"aospy/test/test_calc_basic.py::test_attrs[-rain-vert_int-(vertical",
"aospy/test/test_calc_basic.py::test_attrs[m-rain-vert_int-(vertical",
"aospy/test/test_region.py::test_get_land_mask_without_land_mask",
"aospy/test/test_region.py::test_get_land_mask_with_land_mask",
"aospy/test/test_region.py::test_get_land_mask_non_aospy_name",
"aospy/test/test_region.py::test_region_init",
"aospy/test/test_region.py::test_region_init_mult_rect",
"aospy/test/test_region.py::test_region_init_bad_bounds",
"aospy/test/test_region.py::test_make_mask_single_rect",
"aospy/test/test_region.py::test_make_mask_mult_rect",
"aospy/test/test_region.py::test_mask_var[no-land-mask]",
"aospy/test/test_region.py::test_mask_var[land-mask]",
"aospy/test/test_region.py::test_mask_var_non_aospy_names[no-land-mask]",
"aospy/test/test_region.py::test_mask_var_non_aospy_names[land-mask]",
"aospy/test/test_region.py::test_ts_no_land_mask",
"aospy/test/test_region.py::test_ts_land_mask",
"aospy/test/test_region.py::test_ts_non_aospy_names",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[-10-attrs_and_obj0]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[190.2-attrs_and_obj1]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[25-attrs_and_obj2]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[365-attrs_and_obj3]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[45.5e-attrs_and_obj4]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[22.2w-attrs_and_obj5]",
"aospy/test/test_utils_longitude.py::test_longitude_init_good[0-attrs_and_obj6]",
"aospy/test/test_utils_longitude.py::test_longitude_properties",
"aospy/test/test_utils_longitude.py::test_longitude_repr[obj0-Longitude('10.0W')]",
"aospy/test/test_utils_longitude.py::test_longitude_repr[obj1-Longitude('0.0E')]",
"aospy/test/test_utils_longitude.py::test_longitude_repr[obj2-Longitude('180.0W')]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[10ee]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[-20e]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[190w]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[None]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[abc]",
"aospy/test/test_utils_longitude.py::test_longitude_init_bad[bad_val5]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[-10]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[190.2]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[25]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[365]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[45.5e]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[22.2w]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_good[0]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[10ee]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[-20e]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[190w]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[None]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[abc]",
"aospy/test/test_utils_longitude.py::test_maybe_cast_to_lon_bad[bad_val5]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj10-obj20]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj11-obj21]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj12-obj22]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj13-0]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj14-720]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj15-obj25]",
"aospy/test/test_utils_longitude.py::test_lon_eq[obj16-obj26]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj10-obj20]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj11-obj21]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj12-obj22]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj13-10]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj14-obj24]",
"aospy/test/test_utils_longitude.py::test_lon_lt[obj15-obj25]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj10-obj20]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj11-obj21]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj12-obj22]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj13--10]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj14-obj24]",
"aospy/test/test_utils_longitude.py::test_lon_gt[obj15-obj25]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj10-obj20]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj11-obj21]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj12-obj22]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj13-obj23]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj14-obj24]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj15-obj25]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj16-10]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj17-obj27]",
"aospy/test/test_utils_longitude.py::test_lon_leq[obj18-obj28]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj10-obj20]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj11-obj21]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj12-obj22]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj13-obj23]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj14-obj24]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj15-obj25]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj16--10]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj17-obj27]",
"aospy/test/test_utils_longitude.py::test_lon_geq[obj18-obj28]",
"aospy/test/test_utils_longitude.py::test_to_0360[obj0-260]",
"aospy/test/test_utils_longitude.py::test_to_0360[obj1-0]",
"aospy/test/test_utils_longitude.py::test_to_0360[obj2-20]",
"aospy/test/test_utils_longitude.py::test_to_pm180[obj0--100]",
"aospy/test/test_utils_longitude.py::test_to_pm180[obj1-0]",
"aospy/test/test_utils_longitude.py::test_to_pm180[obj2-20]",
"aospy/test/test_utils_longitude.py::test_lon_add[obj10-obj20-expected_val0]",
"aospy/test/test_utils_longitude.py::test_lon_add[obj11-obj21-expected_val1]",
"aospy/test/test_utils_longitude.py::test_lon_sub[obj10-obj20-expected_val0]",
"aospy/test/test_utils_longitude.py::test_lon_sub[obj11-obj21-expected_val1]",
"aospy/test/test_utils_longitude.py::test_lon_sub[obj12-obj22-expected_val2]"
]
| [
"aospy/test/test_calc_basic.py::test_recursive_calculation",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_annual_mean",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_annual_ts",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_complex_reg_av",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_monthly_mean",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_monthly_ts",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_seasonal_mean",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_seasonal_ts",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_simple_reg_av",
"aospy/test/test_calc_basic.py::TestCalcBasic::test_simple_reg_ts",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_annual_mean",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_annual_ts",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_complex_reg_av",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_monthly_mean",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_monthly_ts",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_seasonal_mean",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_seasonal_ts",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_simple_reg_av",
"aospy/test/test_calc_basic.py::TestCalcComposite::test_simple_reg_ts",
"aospy/test/test_calc_basic.py::TestCalc3D::test_annual_mean",
"aospy/test/test_calc_basic.py::TestCalc3D::test_annual_ts",
"aospy/test/test_calc_basic.py::TestCalc3D::test_complex_reg_av",
"aospy/test/test_calc_basic.py::TestCalc3D::test_monthly_mean",
"aospy/test/test_calc_basic.py::TestCalc3D::test_monthly_ts",
"aospy/test/test_calc_basic.py::TestCalc3D::test_seasonal_mean",
"aospy/test/test_calc_basic.py::TestCalc3D::test_seasonal_ts",
"aospy/test/test_calc_basic.py::TestCalc3D::test_simple_reg_av",
"aospy/test/test_calc_basic.py::TestCalc3D::test_simple_reg_ts",
"aospy/test/test_calc_basic.py::test_calc_object_no_time_options[None]",
"aospy/test/test_calc_basic.py::test_calc_object_no_time_options[dtype_out_time1]"
]
| []
| []
| Apache License 2.0 | 2,440 | [
"docs/whats-new.rst",
"aospy/region.py",
"aospy/utils/__init__.py",
"aospy/examples/example_obj_lib.py",
"docs/examples.rst",
"aospy/utils/longitude.py",
"docs/api.rst",
"aospy/examples/tutorial.ipynb"
]
| [
"docs/whats-new.rst",
"aospy/region.py",
"aospy/utils/__init__.py",
"aospy/examples/example_obj_lib.py",
"docs/examples.rst",
"aospy/utils/longitude.py",
"docs/api.rst",
"aospy/examples/tutorial.ipynb"
]
|
Eyepea__aiosip-111 | 3219ca46cdfd115a72101d92cd1f717f78d9f7b9 | 2018-04-23 21:51:49 | 3219ca46cdfd115a72101d92cd1f717f78d9f7b9 | diff --git a/aiosip/transaction.py b/aiosip/transaction.py
index da61942..4f2b0ab 100644
--- a/aiosip/transaction.py
+++ b/aiosip/transaction.py
@@ -145,6 +145,9 @@ class FutureTransaction(BaseTransaction):
self.dialog.end_transaction(self)
def _result(self, msg):
+ if self.authentification:
+ self.authentification.cancel()
+ self.authentification = None
self._future.set_result(msg)
self.dialog.end_transaction(self)
| Improper handling of 401 authentication failures
In a call flow such as
* send REGISTER
* get 401 response with auth challenge
* send REGISTER with valid authentication
* get 401 response with no challenge (ie: your credentials are fine, but still denied)
The client will continue to retransmit the request with the authorization header | Eyepea/aiosip | diff --git a/tests/test_sip_scenario.py b/tests/test_sip_scenario.py
index d38e7c3..fdb1a89 100644
--- a/tests/test_sip_scenario.py
+++ b/tests/test_sip_scenario.py
@@ -92,6 +92,53 @@ async def test_authentication(test_server, protocol, loop, from_details, to_deta
await app.close()
+async def test_authentication_rejection(test_server, protocol, loop, from_details, to_details):
+ received_messages = list()
+
+ class Dialplan(aiosip.BaseDialplan):
+
+ async def resolve(self, *args, **kwargs):
+ await super().resolve(*args, **kwargs)
+ return self.subscribe
+
+ async def subscribe(self, request, message):
+ dialog = request._create_dialog()
+
+ received_messages.append(message)
+ await dialog.unauthorized(message)
+
+ async for message in dialog:
+ received_messages.append(message)
+ await dialog.reply(message, 401)
+
+ app = aiosip.Application(loop=loop)
+ server_app = aiosip.Application(loop=loop, dialplan=Dialplan())
+ server = await test_server(server_app)
+
+ peer = await app.connect(
+ protocol=protocol,
+ remote_addr=(
+ server.sip_config['server_host'],
+ server.sip_config['server_port'],
+ )
+ )
+
+ result = await peer.register(
+ expires=1800,
+ from_details=aiosip.Contact.from_header(from_details),
+ to_details=aiosip.Contact.from_header(to_details),
+ password='testing_pass',
+ )
+
+ # wait long enough to ensure no improper retransmit
+ await asyncio.sleep(1)
+ assert len(received_messages) == 2
+ assert result.status_code == 401
+
+ await server_app.close()
+ await app.close()
+
+
async def test_invite(test_server, protocol, loop, from_details, to_details):
call_established = loop.create_future()
call_disconnected = loop.create_future()
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "Pipfile",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiodns==3.2.0
-e git+https://github.com/Eyepea/aiosip.git@3219ca46cdfd115a72101d92cd1f717f78d9f7b9#egg=aiosip
attrs==22.2.0
certifi==2021.5.30
cffi==1.15.1
coverage==6.2
cssselect==1.1.0
importlib-metadata==4.8.3
iniconfig==1.1.1
lxml==5.3.1
multidict==5.2.0
packaging==21.3
pipfile==0.0.2
pluggy==1.0.0
py==1.11.0
pycares==4.3.0
pycparser==2.21
pyparsing==3.1.4
pyquery==1.4.3
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
typing_extensions==4.1.1
websockets==9.1
zipp==3.6.0
| name: aiosip
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- pipfile=0.0.2=py_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiodns==3.2.0
- attrs==22.2.0
- cffi==1.15.1
- coverage==6.2
- cssselect==1.1.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- lxml==5.3.1
- multidict==5.2.0
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pycares==4.3.0
- pycparser==2.21
- pyparsing==3.1.4
- pyquery==1.4.3
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- tomli==1.2.3
- typing-extensions==4.1.1
- websockets==9.1
- zipp==3.6.0
prefix: /opt/conda/envs/aiosip
| [
"tests/test_sip_scenario.py::test_authentication_rejection[udp]",
"tests/test_sip_scenario.py::test_authentication_rejection[tcp]",
"tests/test_sip_scenario.py::test_invite[udp]",
"tests/test_sip_scenario.py::test_cancel[udp]"
]
| []
| [
"tests/test_sip_scenario.py::test_notify[udp]",
"tests/test_sip_scenario.py::test_notify[tcp]",
"tests/test_sip_scenario.py::test_authentication[udp]",
"tests/test_sip_scenario.py::test_authentication[tcp]",
"tests/test_sip_scenario.py::test_invite[tcp]",
"tests/test_sip_scenario.py::test_cancel[tcp]"
]
| []
| Apache License 2.0 | 2,441 | [
"aiosip/transaction.py"
]
| [
"aiosip/transaction.py"
]
|
|
dask__dask-3436 | 0fd986fb3f9aefb2c441f135fc807c18471a61b8 | 2018-04-24 13:51:42 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | mrocklin: Thank you for taking this on @jrbourbeau !
@philippjfr does this resolve your original issue?
jrbourbeau: The original regression test I added was using the `dask.array.greater_equal` ufunc, which actually doesn't raise any errors. So I replaced the added test with the original code snippet given in issue #3392.
mrocklin: It looks like there are some genuine testing errors in travis-ci. I suspect that this fix works for some versions, but not others.
jrbourbeau: It looks like the NumPy 1.11.0 tests on Travis are failing because I've only modified `__array_ufunc__` (which is new in NumPy version 1.13) and not `__array_wrap__` (which is what's being used in the 1.11.0 tests). | diff --git a/dask/array/__init__.py b/dask/array/__init__.py
index 67c697118..bf37a71b2 100644
--- a/dask/array/__init__.py
+++ b/dask/array/__init__.py
@@ -7,13 +7,14 @@ from .core import (Array, block, concatenate, stack, from_array, store,
broadcast_arrays, broadcast_to)
from .routines import (take, choose, argwhere, where, coarsen, insert,
ravel, roll, unique, squeeze, ptp, diff, ediff1d,
- bincount, digitize, histogram, cov, array, dstack,
- vstack, hstack, compress, extract, round, count_nonzero,
- flatnonzero, nonzero, around, isin, isnull, notnull,
- isclose, allclose, corrcoef, swapaxes, tensordot,
- transpose, dot, vdot, matmul, apply_along_axis,
- apply_over_axes, result_type, atleast_1d, atleast_2d,
- atleast_3d, piecewise, flip, flipud, fliplr, einsum)
+ gradient, bincount, digitize, histogram, cov, array,
+ dstack, vstack, hstack, compress, extract, round,
+ count_nonzero, flatnonzero, nonzero, around, isin,
+ isnull, notnull, isclose, allclose, corrcoef, swapaxes,
+ tensordot, transpose, dot, vdot, matmul,
+ apply_along_axis, apply_over_axes, result_type,
+ atleast_1d, atleast_2d, atleast_3d, piecewise, flip,
+ flipud, fliplr, einsum)
from .reshape import reshape
from .ufunc import (add, subtract, multiply, divide, logaddexp, logaddexp2,
true_divide, floor_divide, negative, power, remainder, mod, conj, exp,
diff --git a/dask/array/ghost.py b/dask/array/ghost.py
index 84538ef1b..c7be674df 100644
--- a/dask/array/ghost.py
+++ b/dask/array/ghost.py
@@ -361,7 +361,7 @@ def add_dummy_padding(x, depth, boundary):
array([..., 0, 1, 2, 3, 4, 5, ...])
"""
for k, v in boundary.items():
- d = depth[k]
+ d = depth.get(k, 0)
if v == 'none' and d > 0:
empty_shape = list(x.shape)
empty_shape[k] = d
@@ -465,4 +465,5 @@ def coerce_boundary(ndim, boundary):
boundary = (boundary,) * ndim
if isinstance(boundary, tuple):
boundary = dict(zip(range(ndim), boundary))
+
return boundary
diff --git a/dask/array/routines.py b/dask/array/routines.py
index 72e88433a..7c2c9f3ac 100644
--- a/dask/array/routines.py
+++ b/dask/array/routines.py
@@ -1,14 +1,15 @@
from __future__ import division, print_function, absolute_import
import inspect
+import math
import warnings
from collections import Iterable
from distutils.version import LooseVersion
from functools import wraps, partial
-from numbers import Integral
+from numbers import Number, Real, Integral
import numpy as np
-from toolz import concat, sliding_window, interleave
+from toolz import concat, merge, sliding_window, interleave
from .. import sharedict
from ..core import flatten
@@ -404,6 +405,71 @@ def ediff1d(ary, to_end=None, to_begin=None):
return r
+def _gradient_kernel(f, grad_varargs, grad_kwargs):
+ return np.gradient(f, *grad_varargs, **grad_kwargs)
+
+
+@wraps(np.gradient)
+def gradient(f, *varargs, **kwargs):
+ f = asarray(f)
+
+ if not all([isinstance(e, Number) for e in varargs]):
+ raise NotImplementedError("Only numeric scalar spacings supported.")
+
+ if varargs == ():
+ varargs = (1,)
+ if len(varargs) == 1:
+ varargs = f.ndim * varargs
+ if len(varargs) != f.ndim:
+ raise TypeError(
+ "Spacing must either be a scalar or a scalar per dimension."
+ )
+
+ kwargs["edge_order"] = math.ceil(kwargs.get("edge_order", 1))
+ if kwargs["edge_order"] > 2:
+ raise ValueError("edge_order must be less than or equal to 2.")
+
+ drop_result_list = False
+ axis = kwargs.pop("axis", None)
+ if axis is None:
+ axis = tuple(range(f.ndim))
+ elif isinstance(axis, Integral):
+ drop_result_list = True
+ axis = (axis,)
+
+ for e in axis:
+ if not isinstance(e, Integral):
+ raise TypeError("%s, invalid value for axis" % repr(e))
+ if not (-f.ndim <= e < f.ndim):
+ raise ValueError("axis, %s, is out of bounds" % repr(e))
+
+ if len(axis) != len(set(axis)):
+ raise ValueError("duplicate axes not allowed")
+
+ axis = tuple(ax % f.ndim for ax in axis)
+
+ if issubclass(f.dtype.type, (np.bool8, Integral)):
+ f = f.astype(float)
+ elif issubclass(f.dtype.type, Real) and f.dtype.itemsize < 4:
+ f = f.astype(float)
+
+ r = [
+ f.map_overlap(
+ _gradient_kernel,
+ dtype=f.dtype,
+ depth={j: 1 if j == ax else 0 for j in range(f.ndim)},
+ boundary="none",
+ grad_varargs=(varargs[i],),
+ grad_kwargs=merge(kwargs, {"axis": ax}),
+ )
+ for i, ax in enumerate(axis)
+ ]
+ if drop_result_list:
+ r = r[0]
+
+ return r
+
+
@wraps(np.bincount)
def bincount(x, weights=None, minlength=None):
if minlength is None:
diff --git a/dask/dataframe/core.py b/dask/dataframe/core.py
index b00218d68..0ddbf62d4 100644
--- a/dask/dataframe/core.py
+++ b/dask/dataframe/core.py
@@ -344,8 +344,12 @@ class _Frame(DaskMethodsMixin, OperatorMethodMixin):
def __array_ufunc__(self, numpy_ufunc, method, *inputs, **kwargs):
out = kwargs.get('out', ())
for x in inputs + out:
- if not isinstance(x, (Number, Scalar, _Frame, Array,
- pd.DataFrame, pd.Series, pd.Index)):
+ # ufuncs work with 0-dimensional NumPy ndarrays
+ # so we don't want to raise NotImplemented
+ if isinstance(x, np.ndarray) and x.shape == ():
+ continue
+ elif not isinstance(x, (Number, Scalar, _Frame, Array,
+ pd.DataFrame, pd.Series, pd.Index)):
return NotImplemented
if method == '__call__':
@@ -1751,7 +1755,10 @@ class Series(_Frame):
def __array_wrap__(self, array, context=None):
if isinstance(context, tuple) and len(context) > 0:
- index = context[1][0].index
+ if isinstance(context[1][0], np.ndarray) and context[1][0].shape == ():
+ index = None
+ else:
+ index = context[1][0].index
return pd.Series(array, index=index, name=self.name)
@@ -2311,7 +2318,10 @@ class DataFrame(_Frame):
def __array_wrap__(self, array, context=None):
if isinstance(context, tuple) and len(context) > 0:
- index = context[1][0].index
+ if isinstance(context[1][0], np.ndarray) and context[1][0].shape == ():
+ index = None
+ else:
+ index = context[1][0].index
return pd.DataFrame(array, index=index, columns=self.columns)
diff --git a/dask/dataframe/groupby.py b/dask/dataframe/groupby.py
index 28a0854d8..ee086513e 100644
--- a/dask/dataframe/groupby.py
+++ b/dask/dataframe/groupby.py
@@ -1221,7 +1221,7 @@ class SeriesGroupBy(_GroupBy):
if self._slice:
result = result[self._slice]
- if not isinstance(arg, (list, dict)):
+ if not isinstance(arg, (list, dict)) and isinstance(result, DataFrame):
result = result[result.columns[0]]
return result
diff --git a/dask/dataframe/io/demo.py b/dask/dataframe/io/demo.py
index 055059ede..54d62f3f2 100644
--- a/dask/dataframe/io/demo.py
+++ b/dask/dataframe/io/demo.py
@@ -51,7 +51,12 @@ def make_timeseries_part(start, end, dtypes, freq, state_data):
return df
-def make_timeseries(start, end, dtypes, freq, partition_freq, seed=None):
+def make_timeseries(start='2000-01-01',
+ end='2000-12-31',
+ dtypes={'name': str, 'id': int, 'x': float, 'y': float},
+ freq='10s',
+ partition_freq='1M',
+ seed=None):
""" Create timeseries dataframe with random data
Parameters
diff --git a/docs/source/array-api.rst b/docs/source/array-api.rst
index f6fa498c4..b0b77d5e7 100644
--- a/docs/source/array-api.rst
+++ b/docs/source/array-api.rst
@@ -83,6 +83,7 @@ Top level user functions:
frompyfunc
full
full_like
+ gradient
histogram
hstack
hypot
@@ -423,6 +424,7 @@ Other functions
.. autofunction:: frompyfunc
.. autofunction:: full
.. autofunction:: full_like
+.. autofunction:: gradient
.. autofunction:: histogram
.. autofunction:: hstack
.. autofunction:: hypot
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index 5ca12ab13..bfac52c44 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -18,11 +18,13 @@ Array
- The ``topk`` API has changed from topk(k, array) to the more conventional topk(array, k).
The legacy API still works but is now deprecated. (:pr:`2965`) `Guido Imperiale`_
- New function ``argtopk`` for Dask Arrays (:pr:`3396`) `Guido Imperiale`_
+- Fix handling partial depth and boundary in ``map_overlap`` (:pr:`3445`) `John A Kirkham`_
+- Add ``gradient`` for Dask Arrays (:pr:`3434`) `John A Kirkham`_
+
DataFrame
+++++++++
-
- Allow `t` as shorthand for `table` in `to_hdf` for pandas compatibility (:pr:`3330`) `Jörg Dietrich`_
- Added top level `isna` method for Dask DataFrames (:pr:`3294`) `Christopher Ren`_
- Fix selection on partition column on ``read_parquet`` for ``engine="pyarrow"`` (:pr:`3207`) `Uwe Korn`_
@@ -32,7 +34,10 @@ DataFrame
- Provide more informative error message for meta= errors (:pr:`3343`) `Matthew Rocklin`_
- add orc reader (:pr:`3284`) `Martin Durant`_
- Default compression for parquet now always Snappy, in line with pandas (:pr:`3373`) `Martin Durant`_
+- Fixed bug in Dask DataFrame and Series comparisons with NumPy scalars (:pr:`3436`) `James Bourbeau`_
- Remove outdated requirement from repartition docstring (:pr:`3440`) `Jörg Dietrich`_
+- Fixed bug in aggregation when only a Series is selected (:pr:`3446`) `Jörg Dietrich`_
+- Add default values to make_timeseries (:pr:`3421`) `Matthew Rocklin`_
Bag
+++
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 548e5074f..8ac85a6a1 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -299,13 +299,6 @@ extlinks = {
# --Options for sphinx extensions -----------------------------------------------
-intersphinx_mapping = {'pandas': ('http://pandas.pydata.org/pandas-docs/stable/',
- 'http://pandas.pydata.org/pandas-docs/stable/objects.inv'),
- 'numpy': ('https://docs.scipy.org/doc/numpy/',
- 'https://docs.scipy.org/doc/numpy/objects.inv')}
-
-# --Options for sphinx extensions -----------------------------------------------
-
intersphinx_mapping = {'pandas': ('http://pandas.pydata.org/pandas-docs/stable/',
'http://pandas.pydata.org/pandas-docs/stable/objects.inv'),
'numpy': ('https://docs.scipy.org/doc/numpy/',
diff --git a/docs/source/futures.rst b/docs/source/futures.rst
index 838c13e83..4fb879f67 100644
--- a/docs/source/futures.rst
+++ b/docs/source/futures.rst
@@ -582,12 +582,14 @@ API
fire_and_forget
get_client
secede
+ rejoin
wait
.. autofunction:: as_completed
.. autofunction:: fire_and_forget
.. autofunction:: get_client
.. autofunction:: secede
+.. autofunction:: rejoin
.. autofunction:: wait
.. autoclass:: Client
| Series comparison to NumPy scalar fails
Comparing a dask Series with a NumPy scalar or 0D array fails in certain cases. Here is a minimal reproducible example:
```python
import numpy as np
import dask.dataframe as dd
import pandas as pd
np.float64(5.2) >= dd.from_array(np.arange(10))
```
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-d905075c0565> in <module>()
1
----> 2 np.float64(10) >= dd.from_array(np.arange(10))
~/miniconda/envs/anacondaviz/lib/python3.6/site-packages/dask/dataframe/core.py in __array_wrap__(self, array, context)
1704 def __array_wrap__(self, array, context=None):
1705 if isinstance(context, tuple) and len(context) > 0:
-> 1706 index = context[1][0].index
1707
1708 return pd.Series(array, index=index, name=self.name)
AttributeError: 'numpy.ndarray' object has no attribute 'index'
```
I can reproduce this using all releases in the dask 0.17.x series. | dask/dask | diff --git a/dask/array/tests/test_ghost.py b/dask/array/tests/test_ghost.py
index 4aa770bde..0497ed95b 100644
--- a/dask/array/tests/test_ghost.py
+++ b/dask/array/tests/test_ghost.py
@@ -195,8 +195,11 @@ def test_map_overlap():
exp1 = d.map_overlap(lambda x: x + x.size, depth=1, dtype=d.dtype)
exp2 = d.map_overlap(lambda x: x + x.size, depth={0: 1, 1: 1},
boundary={0: 'reflect', 1: 'none'}, dtype=d.dtype)
+ exp3 = d.map_overlap(lambda x: x + x.size, depth={1: 1},
+ boundary={1: 'reflect'}, dtype=d.dtype)
assert_eq(exp1, x + 16)
assert_eq(exp2, x + 12)
+ assert_eq(exp3, x + 8)
@pytest.mark.parametrize("boundary", [
diff --git a/dask/array/tests/test_routines.py b/dask/array/tests/test_routines.py
index 7b8443644..a5a214eab 100644
--- a/dask/array/tests/test_routines.py
+++ b/dask/array/tests/test_routines.py
@@ -1,6 +1,7 @@
from __future__ import division, print_function, absolute_import
import itertools
+from numbers import Number
import textwrap
import pytest
@@ -419,6 +420,36 @@ def test_ediff1d(shape, to_end, to_begin):
assert_eq(da.ediff1d(a, to_end, to_begin), np.ediff1d(x, to_end, to_begin))
[email protected]('shape, varargs, axis', [
+ [(10, 15, 20), (), None],
+ [(10, 15, 20), (2,), None],
+ [(10, 15, 20), (1.0, 1.5, 2.0), None],
+ [(10, 15, 20), (), 0],
+ [(10, 15, 20), (), 1],
+ [(10, 15, 20), (), 2],
+ [(10, 15, 20), (), -1],
+ [(10, 15, 20), (), (0, 2)],
+])
[email protected]('edge_order', [
+ 1,
+ 2
+])
+def test_gradient(shape, varargs, axis, edge_order):
+ a = np.random.randint(0, 10, shape)
+ d_a = da.from_array(a, chunks=(len(shape) * (5,)))
+
+ r = np.gradient(a, *varargs, axis=axis, edge_order=edge_order)
+ r_a = da.gradient(d_a, *varargs, axis=axis, edge_order=edge_order)
+
+ if isinstance(axis, Number):
+ assert_eq(r, r_a)
+ else:
+ assert len(r) == len(r_a)
+
+ for e_r, e_r_a in zip(r, r_a):
+ assert_eq(e_r, e_r_a)
+
+
def test_bincount():
x = np.array([2, 1, 5, 2, 1])
d = da.from_array(x, chunks=2)
diff --git a/dask/bytes/tests/test_http.py b/dask/bytes/tests/test_http.py
index ce877c524..52ec7b991 100644
--- a/dask/bytes/tests/test_http.py
+++ b/dask/bytes/tests/test_http.py
@@ -2,6 +2,7 @@ import os
import pytest
import requests
import subprocess
+import sys
import time
from dask.bytes.core import open_files
@@ -19,9 +20,9 @@ def dir_server():
f.write(b'a' * 10000)
if PY2:
- cmd = ['python', '-m', 'SimpleHTTPServer', '8999']
+ cmd = [sys.executable, '-m', 'SimpleHTTPServer', '8999']
else:
- cmd = ['python', '-m', 'http.server', '8999']
+ cmd = [sys.executable, '-m', 'http.server', '8999']
p = subprocess.Popen(cmd, cwd=d)
timeout = 10
while True:
diff --git a/dask/dataframe/io/tests/test_demo.py b/dask/dataframe/io/tests/test_demo.py
index d6b71a270..03b43abb1 100644
--- a/dask/dataframe/io/tests/test_demo.py
+++ b/dask/dataframe/io/tests/test_demo.py
@@ -39,6 +39,13 @@ def test_make_timeseries():
assert a._name != e._name
+def test_make_timeseries_no_args():
+ df = dd.demo.make_timeseries()
+ assert 1 < df.npartitions < 1000
+ assert len(df.columns) > 1
+ assert len(set(df.dtypes)) > 1
+
+
def test_no_overlaps():
df = dd.demo.make_timeseries('2000', '2001', {'A': float},
freq='3H', partition_freq='3M')
diff --git a/dask/dataframe/tests/test_groupby.py b/dask/dataframe/tests/test_groupby.py
index 7d1f81d9b..370b54b56 100644
--- a/dask/dataframe/tests/test_groupby.py
+++ b/dask/dataframe/tests/test_groupby.py
@@ -1429,3 +1429,13 @@ def test_groupby_agg_custom__mode():
expected = expected['cc'].groupby([expected['g0'], expected['g1']]).agg('sum')
assert_eq(actual, expected)
+
+
+def test_groupby_select_column_agg():
+ pdf = pd.DataFrame({'A': [1, 2, 3, 1, 2, 3, 1, 2, 4],
+ 'B': [-0.776, -0.4, -0.873, 0.054, 1.419, -0.948,
+ -0.967, -1.714, -0.666]})
+ ddf = dd.from_pandas(pdf, npartitions=4)
+ actual = ddf.groupby('A')['B'].agg('var')
+ expected = pdf.groupby('A')['B'].agg('var')
+ assert_eq(actual, expected)
diff --git a/dask/dataframe/tests/test_ufunc.py b/dask/dataframe/tests/test_ufunc.py
index 9e115ccd8..a83cc9fa4 100644
--- a/dask/dataframe/tests/test_ufunc.py
+++ b/dask/dataframe/tests/test_ufunc.py
@@ -374,3 +374,18 @@ def test_ufunc_with_reduction(redfunc, ufunc, pandas):
with pytest.warns(None):
assert isinstance(np_redfunc(dask), (dd.DataFrame, dd.Series, dd.core.Scalar))
assert_eq(np_redfunc(np_ufunc(dask)), np_redfunc(np_ufunc(pandas)))
+
+
[email protected]('pandas',
+ [pd.Series(np.random.randint(1, 100, size=100)),
+ pd.DataFrame({'A': np.random.randint(1, 100, size=20),
+ 'B': np.random.randint(1, 100, size=20),
+ 'C': np.abs(np.random.randn(20))})])
[email protected]('scalar', [15, 16.4, np.int64(15), np.float64(16.4)])
+def test_ufunc_numpy_scalar_comparison(pandas, scalar):
+ # Regression test for issue #3392
+
+ dask_compare = scalar >= dd.from_pandas(pandas, npartitions=3)
+ pandas_compare = scalar >= pandas
+
+ assert_eq(dask_compare, pandas_compare)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 10
} | 1.21 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8",
"moto"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
boto3==1.23.10
botocore==1.26.10
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==2.2.1
cryptography==40.0.2
-e git+https://github.com/dask/dask.git@0fd986fb3f9aefb2c441f135fc807c18471a61b8#egg=dask
dataclasses==0.8
distributed==1.21.8
flake8==5.0.4
HeapDict==1.0.1
idna==3.10
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
jmespath==0.10.0
locket==1.0.0
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
moto==4.0.13
msgpack==1.0.5
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
partd==1.2.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.27.1
responses==0.17.0
s3transfer==0.5.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
toolz==0.12.0
tornado==6.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
Werkzeug==2.0.3
xmltodict==0.14.2
zict==2.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.23.10
- botocore==1.26.10
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.2.1
- cryptography==40.0.2
- dataclasses==0.8
- distributed==1.21.8
- flake8==5.0.4
- heapdict==1.0.1
- idna==3.10
- importlib-metadata==4.2.0
- jinja2==3.0.3
- jmespath==0.10.0
- locket==1.0.0
- markupsafe==2.0.1
- mccabe==0.7.0
- moto==4.0.13
- msgpack==1.0.5
- numpy==1.19.5
- pandas==1.1.5
- partd==1.2.0
- psutil==7.0.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- requests==2.27.1
- responses==0.17.0
- s3transfer==0.5.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- toolz==0.12.0
- tornado==6.1
- urllib3==1.26.20
- werkzeug==2.0.3
- xmltodict==0.14.2
- zict==2.1.0
prefix: /opt/conda/envs/dask
| [
"dask/array/tests/test_routines.py::test_gradient[1-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[1-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[1-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape7-varargs7-axis7]",
"dask/array/tests/test_routines.py::test_gradient[2-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[2-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[2-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape7-varargs7-axis7]",
"dask/dataframe/tests/test_groupby.py::test_groupby_select_column_agg",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[scalar2-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[scalar2-pandas1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.41-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.41-pandas1]"
]
| [
"dask/dataframe/io/tests/test_demo.py::test_make_timeseries",
"dask/dataframe/io/tests/test_demo.py::test_make_timeseries_no_args",
"dask/dataframe/io/tests/test_demo.py::test_no_overlaps",
"dask/dataframe/tests/test_groupby.py::test_full_groupby",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_dir",
"dask/dataframe/tests/test_groupby.py::test_groupby_on_index[get_sync]",
"dask/dataframe/tests/test_groupby.py::test_groupby_on_index[get]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_agg",
"dask/dataframe/tests/test_groupby.py::test_groupby_index_array",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[grouper5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_apply_tasks",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_split_out_multi_column_groupby",
"dask/dataframe/tests/test_groupby.py::test_groupby_unaligned_index",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[mean]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__mode",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-greater]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-less]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-greater]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-less]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_xor1]"
]
| [
"dask/array/tests/test_ghost.py::test_fractional_slice",
"dask/array/tests/test_ghost.py::test_ghost_internal",
"dask/array/tests/test_ghost.py::test_trim_internal",
"dask/array/tests/test_ghost.py::test_periodic",
"dask/array/tests/test_ghost.py::test_reflect",
"dask/array/tests/test_ghost.py::test_nearest",
"dask/array/tests/test_ghost.py::test_constant",
"dask/array/tests/test_ghost.py::test_boundaries",
"dask/array/tests/test_ghost.py::test_ghost",
"dask/array/tests/test_ghost.py::test_map_overlap",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[None]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[reflect]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[periodic]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[nearest]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[none]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[0]",
"dask/array/tests/test_ghost.py::test_nearest_ghost",
"dask/array/tests/test_ghost.py::test_0_depth",
"dask/array/tests/test_ghost.py::test_some_0_depth",
"dask/array/tests/test_ghost.py::test_one_chunk_along_axis",
"dask/array/tests/test_ghost.py::test_constant_boundaries",
"dask/array/tests/test_ghost.py::test_depth_equals_boundary_length",
"dask/array/tests/test_ghost.py::test_bad_depth_raises",
"dask/array/tests/test_ghost.py::test_none_boundaries",
"dask/array/tests/test_ghost.py::test_ghost_small",
"dask/array/tests/test_routines.py::test_array",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_3d]",
"dask/array/tests/test_routines.py::test_transpose",
"dask/array/tests/test_routines.py::test_transpose_negative_axes",
"dask/array/tests/test_routines.py::test_swapaxes",
"dask/array/tests/test_routines.py::test_flip[shape0-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape0-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape1-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape1-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape2-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape2-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape3-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape3-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape4-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape4-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape0-y_shape0]",
"dask/array/tests/test_routines.py::test_matmul[x_shape1-y_shape1]",
"dask/array/tests/test_routines.py::test_matmul[x_shape2-y_shape2]",
"dask/array/tests/test_routines.py::test_matmul[x_shape3-y_shape3]",
"dask/array/tests/test_routines.py::test_matmul[x_shape4-y_shape4]",
"dask/array/tests/test_routines.py::test_matmul[x_shape5-y_shape5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape6-y_shape6]",
"dask/array/tests/test_routines.py::test_matmul[x_shape7-y_shape7]",
"dask/array/tests/test_routines.py::test_matmul[x_shape8-y_shape8]",
"dask/array/tests/test_routines.py::test_matmul[x_shape9-y_shape9]",
"dask/array/tests/test_routines.py::test_matmul[x_shape10-y_shape10]",
"dask/array/tests/test_routines.py::test_matmul[x_shape11-y_shape11]",
"dask/array/tests/test_routines.py::test_matmul[x_shape12-y_shape12]",
"dask/array/tests/test_routines.py::test_matmul[x_shape13-y_shape13]",
"dask/array/tests/test_routines.py::test_matmul[x_shape14-y_shape14]",
"dask/array/tests/test_routines.py::test_matmul[x_shape15-y_shape15]",
"dask/array/tests/test_routines.py::test_matmul[x_shape16-y_shape16]",
"dask/array/tests/test_routines.py::test_matmul[x_shape17-y_shape17]",
"dask/array/tests/test_routines.py::test_matmul[x_shape18-y_shape18]",
"dask/array/tests/test_routines.py::test_matmul[x_shape19-y_shape19]",
"dask/array/tests/test_routines.py::test_matmul[x_shape20-y_shape20]",
"dask/array/tests/test_routines.py::test_matmul[x_shape21-y_shape21]",
"dask/array/tests/test_routines.py::test_matmul[x_shape22-y_shape22]",
"dask/array/tests/test_routines.py::test_matmul[x_shape23-y_shape23]",
"dask/array/tests/test_routines.py::test_matmul[x_shape24-y_shape24]",
"dask/array/tests/test_routines.py::test_tensordot",
"dask/array/tests/test_routines.py::test_tensordot_2[0]",
"dask/array/tests/test_routines.py::test_tensordot_2[1]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes2]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes3]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes4]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes5]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes6]",
"dask/array/tests/test_routines.py::test_dot_method",
"dask/array/tests/test_routines.py::test_vdot[shape0-chunks0]",
"dask/array/tests/test_routines.py::test_vdot[shape1-chunks1]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-range-<lambda>]",
"dask/array/tests/test_routines.py::test_ptp[shape0-None]",
"dask/array/tests/test_routines.py::test_ptp[shape1-0]",
"dask/array/tests/test_routines.py::test_ptp[shape2-1]",
"dask/array/tests/test_routines.py::test_ptp[shape3-2]",
"dask/array/tests/test_routines.py::test_ptp[shape4--1]",
"dask/array/tests/test_routines.py::test_diff[0-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[0-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[0-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[0-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[1-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[1-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[1-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[1-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[2-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[2-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[2-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[2-shape3--1]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape1]",
"dask/array/tests/test_routines.py::test_bincount",
"dask/array/tests/test_routines.py::test_bincount_with_weights",
"dask/array/tests/test_routines.py::test_bincount_raises_informative_error_on_missing_minlength_kwarg",
"dask/array/tests/test_routines.py::test_digitize",
"dask/array/tests/test_routines.py::test_histogram",
"dask/array/tests/test_routines.py::test_histogram_alternative_bins_range",
"dask/array/tests/test_routines.py::test_histogram_return_type",
"dask/array/tests/test_routines.py::test_histogram_extra_args_and_shapes",
"dask/array/tests/test_routines.py::test_cov",
"dask/array/tests/test_routines.py::test_corrcoef",
"dask/array/tests/test_routines.py::test_round",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-True]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[True]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[False]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_ravel",
"dask/array/tests/test_routines.py::test_squeeze[None-True]",
"dask/array/tests/test_routines.py::test_squeeze[None-False]",
"dask/array/tests/test_routines.py::test_squeeze[0-True]",
"dask/array/tests/test_routines.py::test_squeeze[0-False]",
"dask/array/tests/test_routines.py::test_squeeze[-1-True]",
"dask/array/tests/test_routines.py::test_squeeze[-1-False]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-True]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-False]",
"dask/array/tests/test_routines.py::test_vstack",
"dask/array/tests/test_routines.py::test_hstack",
"dask/array/tests/test_routines.py::test_dstack",
"dask/array/tests/test_routines.py::test_take",
"dask/array/tests/test_routines.py::test_take_dask_from_numpy",
"dask/array/tests/test_routines.py::test_compress",
"dask/array/tests/test_routines.py::test_extract",
"dask/array/tests/test_routines.py::test_isnull",
"dask/array/tests/test_routines.py::test_isclose",
"dask/array/tests/test_routines.py::test_allclose",
"dask/array/tests/test_routines.py::test_choose",
"dask/array/tests/test_routines.py::test_piecewise",
"dask/array/tests/test_routines.py::test_piecewise_otherwise",
"dask/array/tests/test_routines.py::test_argwhere",
"dask/array/tests/test_routines.py::test_argwhere_obj",
"dask/array/tests/test_routines.py::test_argwhere_str",
"dask/array/tests/test_routines.py::test_where",
"dask/array/tests/test_routines.py::test_where_scalar_dtype",
"dask/array/tests/test_routines.py::test_where_bool_optimization",
"dask/array/tests/test_routines.py::test_where_nonzero",
"dask/array/tests/test_routines.py::test_where_incorrect_args",
"dask/array/tests/test_routines.py::test_count_nonzero",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_str",
"dask/array/tests/test_routines.py::test_flatnonzero",
"dask/array/tests/test_routines.py::test_nonzero",
"dask/array/tests/test_routines.py::test_nonzero_method",
"dask/array/tests/test_routines.py::test_coarsen",
"dask/array/tests/test_routines.py::test_coarsen_with_excess",
"dask/array/tests/test_routines.py::test_insert",
"dask/array/tests/test_routines.py::test_multi_insert",
"dask/array/tests/test_routines.py::test_result_type",
"dask/array/tests/test_routines.py::test_einsum[abc,bad->abcd]",
"dask/array/tests/test_routines.py::test_einsum[abcdef,bcdfg->abcdeg]",
"dask/array/tests/test_routines.py::test_einsum[ea,fb,abcd,gc,hd->efgh]",
"dask/array/tests/test_routines.py::test_einsum[ab,b]",
"dask/array/tests/test_routines.py::test_einsum[aa]",
"dask/array/tests/test_routines.py::test_einsum[a,a->]",
"dask/array/tests/test_routines.py::test_einsum[a,a->a]",
"dask/array/tests/test_routines.py::test_einsum[a,a]",
"dask/array/tests/test_routines.py::test_einsum[a,b]",
"dask/array/tests/test_routines.py::test_einsum[a,b,c]",
"dask/array/tests/test_routines.py::test_einsum[a]",
"dask/array/tests/test_routines.py::test_einsum[ba,b]",
"dask/array/tests/test_routines.py::test_einsum[ba,b->]",
"dask/array/tests/test_routines.py::test_einsum[defab,fedbc->defac]",
"dask/array/tests/test_routines.py::test_einsum[ab...,bc...->ac...]",
"dask/array/tests/test_routines.py::test_einsum[a...a]",
"dask/array/tests/test_routines.py::test_einsum[abc...->cba...]",
"dask/array/tests/test_routines.py::test_einsum[...ab->...a]",
"dask/array/tests/test_routines.py::test_einsum[a...a->a...]",
"dask/array/tests/test_routines.py::test_einsum[...abc,...abcd->...d]",
"dask/array/tests/test_routines.py::test_einsum[ab...,b->ab...]",
"dask/array/tests/test_routines.py::test_einsum[aa->a]",
"dask/array/tests/test_routines.py::test_einsum[ab,ab,c->c]",
"dask/array/tests/test_routines.py::test_einsum[aab,bc->ac]",
"dask/array/tests/test_routines.py::test_einsum[aab,bcc->ac]",
"dask/array/tests/test_routines.py::test_einsum[fdf,cdd,ccd,afe->ae]",
"dask/array/tests/test_routines.py::test_einsum[fff,fae,bef,def->abd]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts0]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts1]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts2]",
"dask/array/tests/test_routines.py::test_einsum_order[C]",
"dask/array/tests/test_routines.py::test_einsum_order[F]",
"dask/array/tests/test_routines.py::test_einsum_order[A]",
"dask/array/tests/test_routines.py::test_einsum_order[K]",
"dask/array/tests/test_routines.py::test_einsum_casting[no]",
"dask/array/tests/test_routines.py::test_einsum_casting[equiv]",
"dask/array/tests/test_routines.py::test_einsum_casting[safe]",
"dask/array/tests/test_routines.py::test_einsum_casting[same_kind]",
"dask/array/tests/test_routines.py::test_einsum_casting[unsafe]",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction2",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction3",
"dask/bytes/tests/test_http.py::test_simple",
"dask/bytes/tests/test_http.py::test_ops[None]",
"dask/bytes/tests/test_http.py::test_ops[99999]",
"dask/bytes/tests/test_http.py::test_ops_blocksize",
"dask/bytes/tests/test_http.py::test_errors",
"dask/bytes/tests/test_http.py::test_files",
"dask/bytes/tests/test_http.py::test_bag",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_apply_multiarg",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_get_group",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_nunique",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_nunique_across_group_same_value",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_propagates_names",
"dask/dataframe/tests/test_groupby.py::test_series_groupby",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_errors",
"dask/dataframe/tests/test_groupby.py::test_groupby_set_index",
"dask/dataframe/tests/test_groupby.py::test_split_apply_combine_on_series",
"dask/dataframe/tests/test_groupby.py::test_groupby_reduction_split[split_every]",
"dask/dataframe/tests/test_groupby.py::test_groupby_reduction_split[split_out]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_numeric_column_names",
"dask/dataframe/tests/test_groupby.py::test_groupby_multiprocessing",
"dask/dataframe/tests/test_groupby.py::test_groupby_normalize_index",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-size]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[sum]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[mean]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[min]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[max]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[count]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[size]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[std]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[nunique]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[first]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[last]",
"dask/dataframe/tests/test_groupby.py::test_aggregate_build_agg_args__reuse_of_intermediates",
"dask/dataframe/tests/test_groupby.py::test_aggregate__dask",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupy_non_aligned_index",
"dask/dataframe/tests/test_groupby.py::test_groupy_series_wrong_grouper",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-20]",
"dask/dataframe/tests/test_groupby.py::test_groupby_split_out_num",
"dask/dataframe/tests/test_groupby.py::test_groupby_not_supported",
"dask/dataframe/tests/test_groupby.py::test_groupby_numeric_column",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative_axis1[cumsum]",
"dask/dataframe/tests/test_groupby.py::test_cumulative_axis1[cumprod]",
"dask/dataframe/tests/test_groupby.py::test_groupby_slice_agg_reduces",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_single",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[a]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumprod]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumcount]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumsum]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[var]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[sum]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[count]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[size]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[std]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[min]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[max]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[first]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[last]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-idx]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-idx]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-idx]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec0-dask_spec0-False]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec1-dask_spec1-True]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec2-dask_spec2-False]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec3-dask_spec3-False]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[mean-mean]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[pandas_spec1-dask_spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[pandas_spec2-dask_spec2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__name_clash_with_internal_same_column",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__name_clash_with_internal_different_column",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[isreal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[iscomplex]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[real]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[imag]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[angle]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[i0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[sinc]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[nan_to_num]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-greater]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-less]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-greater]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-less]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_clip[pandas0-5-50]",
"dask/dataframe/tests/test_ufunc.py::test_clip[pandas1-5.5-40.5]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[conj]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[exp]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log2]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log10]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log1p]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[expm1]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sqrt]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[square]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sin]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cos]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[tan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arcsin]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arccos]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arctan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sinh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cosh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[tanh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arcsinh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arccosh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arctanh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[deg2rad]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[rad2deg]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isfinite]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isinf]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isnan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[signbit]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[degrees]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[radians]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[rint]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[fabs]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sign]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[absolute]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[floor]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[ceil]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[trunc]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[logical_not]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cbrt]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[exp2]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[negative]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[reciprocal]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[spacing]",
"dask/dataframe/tests/test_ufunc.py::test_frame_2ufunc_out",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[15-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[15-pandas1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.40-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.40-pandas1]"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,442 | [
"docs/source/futures.rst",
"dask/dataframe/groupby.py",
"dask/dataframe/io/demo.py",
"dask/dataframe/core.py",
"docs/source/array-api.rst",
"dask/array/routines.py",
"dask/array/__init__.py",
"docs/source/changelog.rst",
"dask/array/ghost.py",
"docs/source/conf.py"
]
| [
"docs/source/futures.rst",
"dask/dataframe/groupby.py",
"dask/dataframe/io/demo.py",
"dask/dataframe/core.py",
"docs/source/array-api.rst",
"dask/array/routines.py",
"dask/array/__init__.py",
"docs/source/changelog.rst",
"dask/array/ghost.py",
"docs/source/conf.py"
]
|
datosgobar__pydatajson-153 | dae546a739eb2aab1c34b3d8bbb5896fe804e0aa | 2018-04-24 17:27:32 | adb85a7de7dfa073ddf9817a5fe2d125f9ce4e54 | diff --git a/pydatajson/federation.py b/pydatajson/federation.py
index 43e932e..b503d95 100644
--- a/pydatajson/federation.py
+++ b/pydatajson/federation.py
@@ -5,11 +5,13 @@ de la API de CKAN.
"""
from __future__ import print_function
+import logging
from ckanapi import RemoteCKAN
-from ckanapi.errors import NotFound
+from ckanapi.errors import NotFound, NotAuthorized
from .ckan_utils import map_dataset_to_package, map_theme_to_group
from .search import get_datasets
+logger = logging.getLogger(__name__)
def push_dataset_to_ckan(catalog, owner_org, dataset_origin_identifier,
portal_url, apikey, catalog_id=None,
@@ -250,14 +252,20 @@ def harvest_catalog_to_ckan(catalog, portal_url, apikey, catalog_id,
Returns:
str: El id del dataset en el catálogo de destino.
"""
- dataset_list = dataset_list or [ds['identifier']
- for ds in catalog.datasets]
+ # Evitar entrar con valor falsy
+ if dataset_list is None:
+ dataset_list = [ds['identifier'] for ds in catalog.datasets]
owner_org = owner_org or catalog_id
harvested = []
for dataset_id in dataset_list:
- harvested_id = harvest_dataset_to_ckan(
- catalog, owner_org, dataset_id, portal_url, apikey, catalog_id)
- harvested.append(harvested_id)
+ try:
+ harvested_id = harvest_dataset_to_ckan(
+ catalog, owner_org, dataset_id, portal_url, apikey, catalog_id)
+ harvested.append(harvested_id)
+ except (NotAuthorized, NotFound, KeyError, TypeError) as e:
+ logger.error("Error federando catalogo:"+catalog_id+", dataset:"+dataset_id + "al portal: "+portal_url)
+ logger.error(str(e))
+
return harvested
Make the handling in harvest_catalog_to_ckan() more robust
Two problems need fixing:
- If an empty list is passed as the dataset list, no dataset should be federated. Currently all of them are federated.
- If any of the calls to `harvest_dataset_to_ckan()` fails, log it and continue with the rest. Currently the federation of the whole catalog raises the exception. | datosgobar/pydatajson | diff --git a/tests/test_federation.py b/tests/test_federation.py
index fe95079..9d0515f 100644
--- a/tests/test_federation.py
+++ b/tests/test_federation.py
@@ -223,6 +223,13 @@ class PushDatasetTestCase(unittest.TestCase):
self.assertCountEqual([self.catalog_id+'_'+ds['identifier'] for ds in self.catalog.datasets],
harvested_ids)
+ @patch('pydatajson.federation.RemoteCKAN', autospec=True)
+ def test_harvest_catalog_with_empty_list(self, mock_portal):
+ harvested_ids = harvest_catalog_to_ckan(self.catalog, 'portal', 'key', self.catalog_id,
+ owner_org='owner', dataset_list=[])
+ mock_portal.assert_not_called()
+ self.assertEqual([], harvested_ids)
+
class RemoveDatasetTestCase(unittest.TestCase):
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
chardet==3.0.4
ckanapi==4.0
docopt==0.6.2
et-xmlfile==1.1.0
idna==2.6
importlib-metadata==4.8.3
iniconfig==1.1.1
isodate==0.6.0
jdcal==1.4.1
jsonschema==2.6.0
openpyxl==2.4.11
packaging==21.3
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/datosgobar/pydatajson.git@dae546a739eb2aab1c34b3d8bbb5896fe804e0aa#egg=pydatajson
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.6.1
requests==2.18.4
rfc3987==1.3.7
six==1.11.0
tomli==1.2.3
typing_extensions==4.1.1
unicodecsv==0.14.1
Unidecode==0.4.21
urllib3==1.22
zipp==3.6.0
| name: pydatajson
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- chardet==3.0.4
- ckanapi==4.0
- docopt==0.6.2
- et-xmlfile==1.1.0
- idna==2.6
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isodate==0.6.0
- jdcal==1.4.1
- jsonschema==2.6.0
- openpyxl==2.4.11
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.6.1
- requests==2.18.4
- rfc3987==1.3.7
- six==1.11.0
- tomli==1.2.3
- typing-extensions==4.1.1
- unicodecsv==0.14.1
- unidecode==0.04.21
- urllib3==1.22
- zipp==3.6.0
prefix: /opt/conda/envs/pydatajson
| [
"tests/test_federation.py::PushDatasetTestCase::test_harvest_catalog_with_empty_list"
]
| []
| [
"tests/test_federation.py::PushDatasetTestCase::test_dataset_id_is_preserved_if_catalog_id_is_not_passed",
"tests/test_federation.py::PushDatasetTestCase::test_dataset_level_wrappers",
"tests/test_federation.py::PushDatasetTestCase::test_dataset_without_license_sets_notspecified",
"tests/test_federation.py::PushDatasetTestCase::test_harvest_catalog_with_dataset_list",
"tests/test_federation.py::PushDatasetTestCase::test_harvest_catalog_with_no_optional_parametres",
"tests/test_federation.py::PushDatasetTestCase::test_harvest_catalog_with_owner_org",
"tests/test_federation.py::PushDatasetTestCase::test_id_is_created_correctly",
"tests/test_federation.py::PushDatasetTestCase::test_id_is_updated_correctly",
"tests/test_federation.py::PushDatasetTestCase::test_licenses_are_interpreted_correctly",
"tests/test_federation.py::PushDatasetTestCase::test_tags_are_passed_correctly",
"tests/test_federation.py::RemoveDatasetTestCase::test_empty_search_doesnt_call_purge",
"tests/test_federation.py::RemoveDatasetTestCase::test_filter_in_datasets",
"tests/test_federation.py::RemoveDatasetTestCase::test_filter_in_out_datasets",
"tests/test_federation.py::RemoveDatasetTestCase::test_query_one_dataset",
"tests/test_federation.py::RemoveDatasetTestCase::test_query_over_500_datasets",
"tests/test_federation.py::RemoveDatasetTestCase::test_remove_through_filters_and_organization",
"tests/test_federation.py::PushThemeTestCase::test_ckan_portal_is_called_with_correct_parametres",
"tests/test_federation.py::PushThemeTestCase::test_empty_theme_search_raises_exception",
"tests/test_federation.py::PushThemeTestCase::test_function_pushes_theme_by_identifier",
"tests/test_federation.py::PushThemeTestCase::test_function_pushes_theme_by_label",
"tests/test_federation.py::PushCatalogThemesTestCase::test_empty_portal_pushes_every_theme",
"tests/test_federation.py::PushCatalogThemesTestCase::test_full_portal_pushes_nothing",
"tests/test_federation.py::PushCatalogThemesTestCase::test_non_empty_intersection_pushes_missing_themes"
]
| []
| MIT License | 2,443 | [
"pydatajson/federation.py"
]
| [
"pydatajson/federation.py"
]
|
|
allisson__python-simple-rest-client-9 | 2b7a0b84f9ba93ab0223da50b28365f61433e73c | 2018-04-24 18:06:42 | 2b7a0b84f9ba93ab0223da50b28365f61433e73c | codecov-io: # [Codecov](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=h1) Report
> Merging [#9](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=desc) into [master](https://codecov.io/gh/allisson/python-simple-rest-client/commit/2b7a0b84f9ba93ab0223da50b28365f61433e73c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #9 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 6 6
Lines 184 188 +4
=====================================
+ Hits 184 188 +4
```
| [Impacted Files](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [simple\_rest\_client/request.py](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9/diff?src=pr&el=tree#diff-c2ltcGxlX3Jlc3RfY2xpZW50L3JlcXVlc3QucHk=) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=footer). Last update [2b7a0b8...c862d35](https://codecov.io/gh/allisson/python-simple-rest-client/pull/9?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/.gitignore b/.gitignore
index 95ea3fd..ed03ab1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -39,7 +39,7 @@ htmlcov/
.tox/
.coverage
.coverage.*
-.cache
+.pytest_cache
nosetests.xml
coverage.xml
*,cover
diff --git a/CHANGES.rst b/CHANGES.rst
index 13d44c2..229f54b 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -1,6 +1,11 @@
Changelog
---------
+0.5.2
+~~~~~
+
+* Fix JSONDecodeError when processing empty server responses (thanks @zmbbb).
+
0.5.1
~~~~~
diff --git a/simple_rest_client/request.py b/simple_rest_client/request.py
index b412489..05713a9 100644
--- a/simple_rest_client/request.py
+++ b/simple_rest_client/request.py
@@ -3,7 +3,7 @@ import logging
import async_timeout
from json_encoder import json
-from .decorators import handle_request_error, handle_async_request_error
+from .decorators import handle_async_request_error, handle_request_error
from .models import Response
logger = logging.getLogger(__name__)
@@ -26,7 +26,9 @@ def make_request(session, request):
if 'text' in content_type:
body = client_response.text
elif 'json' in content_type:
- body = json.loads(client_response.text)
+ body = client_response.text
+ if body:
+ body = json.loads(body)
else:
body = client_response.content
@@ -58,7 +60,9 @@ async def make_async_request(session, request):
if 'text' in content_type:
body = await client_response.text()
elif 'json' in content_type:
- body = json.loads(await client_response.text())
+ body = await client_response.text()
+ if body:
+ body = json.loads(body)
else:
body = await client_response.read()
| json.decoder.JSONDecodeError when processing empty server responses
@zmbbb reports:
Hey, really like your lib and the elegant interface.
I encountered an exception (see below) when I received an empty response from the server, in this case a "204 NO CONTENT". The lib forwarded the empty response to JSON for decoding which raised the exception. As a quick fix, I added a check whether client_response.text is True. This works for me, for empty and non-empty responses.
The exception:
[...]
File "/usr/local/lib/python3.5/dist-packages/simple_rest_client/resource.py", line 107, in action_method
return make_request(self.session, request)
File "/usr/local/lib/python3.5/dist-packages/simple_rest_client/decorators.py", line 39, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/simple_rest_client/request.py", line 30, in make_request
body = json.loads(client_response.text)
File "/usr/local/lib/python3.5/dist-packages/json_encoder/json/__init__.py", line 229, in loads
**kw
File "/usr/lib/python3.5/json/__init__.py", line 332, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
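
The quick fix described above (checking the response text is truthy before handing it to the JSON decoder) can be sketched in isolation. The `parse_body` helper below is hypothetical, not part of the library's API; it only illustrates the guard:

```python
import json

def parse_body(content_type, text):
    # Decode JSON bodies, but skip decoding when the body is empty
    # (e.g. a "204 No Content" response), which would otherwise make
    # json.loads raise JSONDecodeError.
    if "json" in content_type:
        if not text:
            return text  # empty body: return it as-is instead of decoding
        return json.loads(text)
    return text

print(parse_body("application/json", '{"success": true}'))  # {'success': True}
print(repr(parse_body("application/json", "")))             # ''
```

With this guard an empty 204 body passes through unchanged, while non-empty JSON bodies are decoded as before.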
| allisson/python-simple-rest-client | diff --git a/tests/test_resource.py b/tests/test_resource.py
index 5f8fc0b..f3a70bd 100644
--- a/tests/test_resource.py
+++ b/tests/test_resource.py
@@ -83,24 +83,25 @@ def test_resource_actions(url, method, status, action, args, kwargs, reqres_reso
assert response.body == {'success': True}
[email protected]('content_type,response_body', [
- ('application/json', {'success': True}),
- ('text/plain', '{"success": true}'),
- ('application/octet-stream', b'{"success": true}'),
[email protected]('content_type,response_body,expected_response_body', [
+ ('application/json', '{"success": true}', {'success': True}),
+ ('application/json', '', ''),
+ ('text/plain', '{"success": true}', '{"success": true}'),
+ ('application/octet-stream', '{"success": true}', b'{"success": true}'),
])
@responses.activate
-def test_resource_response_body(content_type, response_body, reqres_resource):
+def test_resource_response_body(content_type, response_body, expected_response_body, reqres_resource):
url = 'https://reqres.in/api/users'
responses.add(
responses.GET,
url,
- body=b'{"success": true}',
+ body=response_body,
status=200,
content_type=content_type
)
response = reqres_resource.list()
- assert response.body == response_body
+ assert response.body == expected_response_body
@pytest.mark.asyncio
@@ -124,14 +125,15 @@ async def test_async_resource_actions(url, method, status, action, args, kwargs,
@pytest.mark.asyncio
[email protected]('content_type,response_body', [
- ('application/json', {'success': True}),
- ('text/plain', '{"success": true}'),
- ('application/octet-stream', b'{"success": true}'),
[email protected]('content_type,response_body,expected_response_body', [
+ ('application/json', '{"success": true}', {'success': True}),
+ ('application/json', '', ''),
+ ('text/plain', '{"success": true}', '{"success": true}'),
+ ('application/octet-stream', '{"success": true}', b'{"success": true}'),
])
-async def test_asyncresource_response_body(content_type, response_body, reqres_async_resource):
+async def test_asyncresource_response_body(content_type, response_body, expected_response_body, reqres_async_resource):
url = 'https://reqres.in/api/users'
with aioresponses() as mock_response:
- mock_response.get(url, status=200, body=b'{"success": true}', headers={'Content-Type': content_type})
+ mock_response.get(url, status=200, body=response_body, headers={'Content-Type': content_type})
response = await reqres_async_resource.list()
- assert response.body == response_body
+ assert response.body == expected_response_body
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 3
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"aioresponses",
"asynctest",
"codecov",
"flake8",
"pytest",
"pytest-asyncio",
"pytest-cov",
"responses",
"Sphinx",
"twine",
"wheel"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiohttp==3.8.6
aioresponses==0.7.6
aiosignal==1.2.0
alabaster==0.7.13
async-timeout==4.0.2
asynctest==0.13.0
attrs==22.2.0
Babel==2.11.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
colorama==0.4.5
coverage==6.2
cryptography==40.0.2
docutils==0.17.1
flake8==5.0.4
frozenlist==1.2.0
idna==3.10
idna-ssl==1.1.0
imagesize==1.4.1
importlib-metadata==4.2.0
importlib-resources==5.4.0
iniconfig==1.1.1
jeepney==0.7.1
Jinja2==3.0.3
json-encoder==0.4.4
keyring==23.4.1
MarkupSafe==2.0.1
mccabe==0.7.0
multidict==5.2.0
packaging==21.3
pkginfo==1.10.0
pluggy==1.0.0
py==1.11.0
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
python-status==1.0.1
pytz==2025.2
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
responses==0.17.0
rfc3986==1.5.0
SecretStorage==3.3.3
-e git+https://github.com/allisson/python-simple-rest-client.git@2b7a0b84f9ba93ab0223da50b28365f61433e73c#egg=simple_rest_client
singledispatch==3.7.0
six==1.17.0
snowballstemmer==2.2.0
Sphinx==4.3.2
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tomli==1.2.3
tqdm==4.64.1
twine==3.8.0
typing_extensions==4.1.1
urllib3==1.26.20
webencodings==0.5.1
yarl==1.7.2
zipp==3.6.0
| name: python-simple-rest-client
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiohttp==3.8.6
- aioresponses==0.7.6
- aiosignal==1.2.0
- alabaster==0.7.13
- async-timeout==4.0.2
- asynctest==0.13.0
- attrs==22.2.0
- babel==2.11.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- colorama==0.4.5
- coverage==6.2
- cryptography==40.0.2
- docutils==0.17.1
- flake8==5.0.4
- frozenlist==1.2.0
- idna==3.10
- idna-ssl==1.1.0
- imagesize==1.4.1
- importlib-metadata==4.2.0
- importlib-resources==5.4.0
- iniconfig==1.1.1
- jeepney==0.7.1
- jinja2==3.0.3
- json-encoder==0.4.4
- keyring==23.4.1
- markupsafe==2.0.1
- mccabe==0.7.0
- multidict==5.2.0
- packaging==21.3
- pkginfo==1.10.0
- pluggy==1.0.0
- py==1.11.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- python-status==1.0.1
- pytz==2025.2
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- responses==0.17.0
- rfc3986==1.5.0
- secretstorage==3.3.3
- singledispatch==3.7.0
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==1.2.3
- tqdm==4.64.1
- twine==3.8.0
- typing-extensions==4.1.1
- urllib3==1.26.20
- webencodings==0.5.1
- yarl==1.7.2
- zipp==3.6.0
prefix: /opt/conda/envs/python-simple-rest-client
| [
"tests/test_resource.py::test_resource_response_body[application/json--]",
"tests/test_resource.py::test_asyncresource_response_body[application/json--]"
]
| []
| [
"tests/test_resource.py::test_base_resource_actions",
"tests/test_resource.py::test_base_resource_get_action_full_url",
"tests/test_resource.py::test_base_resource_get_action_full_url_with_append_slash",
"tests/test_resource.py::test_base_resource_get_action_full_url_with_action_not_found",
"tests/test_resource.py::test_base_resource_get_action_full_url_with_action_url_match_error",
"tests/test_resource.py::test_custom_resource_actions",
"tests/test_resource.py::test_custom_resource_get_action_full_url",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users-GET-200-list-None-kwargs0]",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users-POST-201-create-None-kwargs1]",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users/2-GET-200-retrieve-2-kwargs2]",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users/2-PUT-200-update-2-kwargs3]",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users/2-PATCH-200-partial_update-2-kwargs4]",
"tests/test_resource.py::test_resource_actions[https://reqres.in/api/users/2-DELETE-204-destroy-2-kwargs5]",
"tests/test_resource.py::test_resource_response_body[application/json-{\"success\":",
"tests/test_resource.py::test_resource_response_body[text/plain-{\"success\":",
"tests/test_resource.py::test_resource_response_body[application/octet-stream-{\"success\":",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users-GET-200-list-None-kwargs0]",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users-POST-201-create-None-kwargs1]",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users/2-GET-200-retrieve-2-kwargs2]",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users/2-PUT-200-update-2-kwargs3]",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users/2-PATCH-200-partial_update-2-kwargs4]",
"tests/test_resource.py::test_async_resource_actions[https://reqres.in/api/users/2-DELETE-204-destroy-2-kwargs5]",
"tests/test_resource.py::test_asyncresource_response_body[application/json-{\"success\":",
"tests/test_resource.py::test_asyncresource_response_body[text/plain-{\"success\":",
"tests/test_resource.py::test_asyncresource_response_body[application/octet-stream-{\"success\":"
]
| []
| MIT License | 2,444 | [
".gitignore",
"simple_rest_client/request.py",
"CHANGES.rst"
]
| [
".gitignore",
"simple_rest_client/request.py",
"CHANGES.rst"
]
|
pytorch__ignite-158 | 6d58cc358d6085a38d9ed1e48853ec7afe4d489a | 2018-04-25 13:20:29 | 6d58cc358d6085a38d9ed1e48853ec7afe4d489a | alykhantejani: yeah, or the alternative is to put the things in `__init__.py` into
`ignite/engine.py` :shrug:
On Wed, Apr 25, 2018 at 4:03 PM, vfdev <[email protected]> wrote:
> *@vfdev-5* commented on this pull request.
> ------------------------------
>
> In ignite/engine/__init__.py
> <https://github.com/pytorch/ignite/pull/158#discussion_r184094172>:
>
> > @@ -1,4 +1,4 @@
> -from ignite.engines.engine import Engine, Events, State
> +from .engine import Engine, State, Events
>
> from ignite.engine.engine import Engine
> we should have called the package engine to complete the similarity :)
>
> Otherwise, if you put Engine, State, Event into ignite.__init__.py it
> could be nicer, what do you think ?
>
jasonkriss: LGTM!
vfdev-5: @alykhantejani as you are working on this, could you fix also logger messages in `Engine` as it is not trainer anymore:
- ["Training starting with max_epochs={}"](https://github.com/pytorch/ignite/blob/master/ignite/engines/engine.py#L132), maybe, by "Engine run starting with max_epochs={}"
- ["Training complete. Time taken %02d:%02d:%02d"](https://github.com/pytorch/ignite/blob/master/ignite/engines/engine.py#L147) by "Engine run complete. Time taken %02d:%02d:%02d"
- ["Training is terminating due to exception: %s"](https://github.com/pytorch/ignite/blob/master/ignite/engines/engine.py#L150) by "Engine run is terminating due to exception: %s"
thanks
alykhantejani: @vfdev-5 sure thing. Was waiting to merge the other PRs so others don't have to fix merge conflicts :) | diff --git a/docs/source/concepts.rst b/docs/source/concepts.rst
index 171965bb..325d2aeb 100644
--- a/docs/source/concepts.rst
+++ b/docs/source/concepts.rst
@@ -4,7 +4,7 @@ Concepts
Engine
------
-The **essence** of the framework is the class :class:`ignite.engines.Engine`, an abstraction that loops a given number of times over
+The **essence** of the framework is the class :class:`ignite.engine.Engine`, an abstraction that loops a given number of times over
provided data, executes a processing function and returns a result:
.. code-block:: python
@@ -36,14 +36,14 @@ For example, model trainer for a supervised task:
Events and Handlers
-------------------
-To improve the :class:`ignite.engines.Engine`'s flexibility, an event system is introduced that facilitates interaction on each step of
+To improve the :class:`ignite.engine.Engine`'s flexibility, an event system is introduced that facilitates interaction on each step of
the run:
- *engine is started/completed*
- *epoch is started/completed*
- *batch iteration is started/completed*
-Complete list of events can be found `here <https://github.com/pytorch/ignite/blob/master/ignite/engines/engine.py#L8>`_.
+Complete list of events can be found `here <https://github.com/pytorch/ignite/blob/master/ignite/engine/engine.py#L8>`_.
Thus, user can execute a custom code as an event handler. Let us consider in more detail what happens when :meth:`Engine.run` is called:
@@ -89,7 +89,8 @@ Attaching an event handler is simple using method :meth:`Engine.add_event_handle
State
-----
-A state is introduced in :class:`ignite.engines.Engine` to store the output of the `process_function`, current epoch, iteration and other
+A state is introduced in :class:`ignite.engine.Engine` to store the output of the `process_function`, current epoch,
+ iteration and other
helpful information. For example, in case of supervised trainer, we can log computed loss value, completed iterations and
epochs:
@@ -107,7 +108,7 @@ epochs:
.. Note ::
- A good practice is to use :class:`ignite.engines.State` also as a storage of user data created in update or handler functions.
+ A good practice is to use :class:`ignite.engine.State` also as a storage of user data created in update or handler functions.
For example, we would like to save `new_attribute` in the `state`:
.. code-block:: python
diff --git a/docs/source/engines.rst b/docs/source/engine.rst
similarity index 70%
rename from docs/source/engines.rst
rename to docs/source/engine.rst
index 298ac1b0..307d5003 100644
--- a/docs/source/engines.rst
+++ b/docs/source/engine.rst
@@ -1,7 +1,7 @@
-ignite.engines
+ignite.engine
==============
-.. currentmodule:: ignite.engines
+.. currentmodule:: ignite.engine
.. autoclass:: Engine
diff --git a/docs/source/index.rst b/docs/source/index.rst
index ac8cb517..5790bddb 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -15,7 +15,7 @@ PyTorch.
:maxdepth: 2
:caption: Package Reference
- engines
+ engine
handlers
metrics
exceptions
diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst
index 62b7fec1..a7fc35a2 100644
--- a/docs/source/quickstart.rst
+++ b/docs/source/quickstart.rst
@@ -10,7 +10,7 @@ Code
.. code-block:: python
- from ignite.engines import Events, create_supervised_trainer, create_supervised_evaluator
+ from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
model = Net()
train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)
@@ -53,7 +53,7 @@ datasets (as :class:`torch.utils.data.DataLoader`), optimizer and loss function:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8))
loss = torch.nn.NLLLoss()
-Next we define trainer and evaluator engines. The main component of Ignite is the :class:`Engine`, an abstraction over your
+Next we define trainer and evaluator engines. The main component of Ignite is the :class:`ignite.engine.Engine`, an abstraction over your
training loop. Getting started with the engine is easy, the constructor only requires one things:
- `update_function`: a function which is passed the engine and a batch and it passes data through and updates your model
diff --git a/examples/mnist.py b/examples/mnist.py
index 74e6cf79..ab78c69f 100644
--- a/examples/mnist.py
+++ b/examples/mnist.py
@@ -9,7 +9,7 @@ import torch.nn.functional as F
from torchvision.transforms import Compose, ToTensor, Normalize
from torchvision.datasets import MNIST
-from ignite.engines import Events, create_supervised_trainer, create_supervised_evaluator
+from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import CategoricalAccuracy, Loss
diff --git a/examples/mnist_with_tensorboardx.py b/examples/mnist_with_tensorboardx.py
index 5b80c726..92c3e71c 100644
--- a/examples/mnist_with_tensorboardx.py
+++ b/examples/mnist_with_tensorboardx.py
@@ -30,7 +30,7 @@ try:
except ImportError:
raise RuntimeError("No tensorboardX package is found. Please install with the command: \npip install tensorboardX")
-from ignite.engines import Events, create_supervised_trainer, create_supervised_evaluator
+from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import CategoricalAccuracy, Loss
diff --git a/examples/mnist_with_visdom.py b/examples/mnist_with_visdom.py
index 8114bfcd..e6e994b4 100644
--- a/examples/mnist_with_visdom.py
+++ b/examples/mnist_with_visdom.py
@@ -14,7 +14,7 @@ try:
except ImportError:
raise RuntimeError("No visdom package is found. Please install it with command: \n pip install visdom")
-from ignite.engines import Events, create_supervised_trainer, create_supervised_evaluator
+from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import CategoricalAccuracy, Loss
diff --git a/ignite/engines/__init__.py b/ignite/engine/__init__.py
similarity index 96%
rename from ignite/engines/__init__.py
rename to ignite/engine/__init__.py
index 1a45aea9..6f9aefeb 100644
--- a/ignite/engines/__init__.py
+++ b/ignite/engine/__init__.py
@@ -1,6 +1,6 @@
import torch
-from ignite.engines.engine import Engine, Events, State
+from ignite.engine.engine import Engine, State, Events
from ignite._utils import convert_tensor
diff --git a/ignite/engines/engine.py b/ignite/engine/engine.py
similarity index 94%
rename from ignite/engines/engine.py
rename to ignite/engine/engine.py
index 30802762..191fb704 100644
--- a/ignite/engines/engine.py
+++ b/ignite/engine/engine.py
@@ -10,7 +10,7 @@ IS_PYTHON2 = sys.version_info[0] < 3
class Events(Enum):
- """Events that are fired by the :class:`ignite.engines.Engine` during execution"""
+ """Events that are fired by the :class:`ignite.engine.Engine` during execution"""
EPOCH_STARTED = "epoch_started"
EPOCH_COMPLETED = "epoch_completed"
STARTED = "started"
@@ -71,7 +71,7 @@ class Engine(object):
"""Add an event handler to be executed when the specified event is fired
Args:
- event_name (Events): event from ignite.engines.Events to attach the handler to
+ event_name (Events): event from ignite.engine.Events to attach the handler to
handler (Callable): the callable event handler that should be invoked
*args: optional args to be passed to `handler`
**kwargs: optional keyword args to be passed to `handler`
@@ -194,7 +194,7 @@ class Engine(object):
self.state = State(dataloader=data, epoch=0, max_epochs=max_epochs, metrics={})
try:
- self._logger.info("Training starting with max_epochs={}".format(max_epochs))
+ self._logger.info("Engine run starting with max_epochs={}".format(max_epochs))
start_time = time.time()
self._fire_event(Events.STARTED)
while self.state.epoch < max_epochs and not self.should_terminate:
@@ -209,10 +209,10 @@ class Engine(object):
self._fire_event(Events.COMPLETED)
time_taken = time.time() - start_time
hours, mins, secs = _to_hours_mins_secs(time_taken)
- self._logger.info("Training complete. Time taken %02d:%02d:%02d" % (hours, mins, secs))
+ self._logger.info("Engine run complete. Time taken %02d:%02d:%02d" % (hours, mins, secs))
except BaseException as e:
- self._logger.error("Training is terminating due to exception: %s", str(e))
+ self._logger.error("Engine run is terminating due to exception: %s", str(e))
self._handle_exception(e)
return self.state
diff --git a/ignite/handlers/checkpoint.py b/ignite/handlers/checkpoint.py
index 48cabde7..f515bf36 100644
--- a/ignite/handlers/checkpoint.py
+++ b/ignite/handlers/checkpoint.py
@@ -9,7 +9,7 @@ class ModelCheckpoint(object):
""" ModelCheckpoint handler can be used to periodically save objects to disk.
This handler accepts two arguments:
- - an `ignite.engines.Engine` object
+ - an `ignite.engine.Engine` object
- a `dict` mapping names (`str`) to objects that should be saved to disk.
See Notes and Examples for further details.
@@ -24,7 +24,7 @@ class ModelCheckpoint(object):
Exactly one of (`save_interval`, `score_function`) arguments must be provided.
score_function (Callable, optional):
if not None, it should be a function taking a single argument,
- an `ignite.engines.Engine` object,
+ an `ignite.engine.Engine` object,
and return a score (`float`). Objects with highest scores will be retained.
Exactly one of (`save_interval`, `score_function`) arguments must be provided.
score_name (str, optional):
@@ -61,7 +61,7 @@ class ModelCheckpoint(object):
Examples:
>>> import os
- >>> from ignite.engines import Engine, Events
+ >>> from ignite.engine import Engine, Events
>>> from ignite.handlers import ModelCheckpoint
>>> from torch import nn
>>> trainer = Engine(lambda batch: None)
diff --git a/ignite/handlers/early_stopping.py b/ignite/handlers/early_stopping.py
index 1a747db1..1f89985a 100644
--- a/ignite/handlers/early_stopping.py
+++ b/ignite/handlers/early_stopping.py
@@ -1,6 +1,6 @@
import logging
-from ignite.engines import Engine
+from ignite.engine import Engine
class EarlyStopping(object):
@@ -10,14 +10,14 @@ class EarlyStopping(object):
patience (int):
Number of events to wait if no improvement and then stop the training
score_function (Callable):
- It should be a function taking a single argument, an `ignite.engines.Engine` object,
+ It should be a function taking a single argument, an `ignite.engine.Engine` object,
and return a score `float`. An improvement is considered if the score is higher.
Examples:
.. code-block:: python
- from ignite.engines import Engine, Events
+ from ignite.engine import Engine, Events
from ignite.handlers import EarlyStopping
def score_function(engine):
diff --git a/ignite/handlers/timing.py b/ignite/handlers/timing.py
index d4083df2..1a45367f 100644
--- a/ignite/handlers/timing.py
+++ b/ignite/handlers/timing.py
@@ -1,4 +1,4 @@
-from ignite.engines import Events
+from ignite.engine import Events
try:
from time import perf_counter
@@ -64,8 +64,7 @@ class Timer:
0.10016545779653825
Using the Timer to measure average time it takes to process a single batch of examples
-
- >>> from ignite.engines import Engine, Events
+ >>> from ignite.engine import Engine, Events
>>> from ignite.handlers import Timer
>>> trainer = Engine(training_update_function)
>>> timer = Timer(average=True)
@@ -88,15 +87,15 @@ class Timer:
""" Register callbacks to control the timer.
Args:
- engine (ignite.engines.Engine):
+ engine (ignite.engine.Engine):
Engine that this timer will be attached to
- start (ignite.engines.Events):
+ start (ignite.engine.Events):
Event which should start (reset) the timer
- pause (ignite.engines.Events):
+ pause (ignite.engine.Events):
Event which should pause the timer
- resume (ignite.engines.Events, optional):
+ resume (ignite.engine.Events, optional):
Event which should resume the timer
- step (ignite.engines.Events, optional):
+ step (ignite.engine.Events, optional):
Event which should call the `step` method of the counter
Returns:
diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
index 33ddc14f..f65e91d2 100644
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -1,6 +1,6 @@
from abc import ABCMeta, abstractmethod
-from ignite.engines import Events
+from ignite.engine import Events
class Metric(object):
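The concepts document patched above describes ignite's engine/event/state trio: an `Engine` loops over data, calls a processing function, and fires events (`STARTED`, iteration/epoch completed, and so on) that user handlers attach to, with results collected in a `State`. As a framework-free illustration of that pattern, here is a toy sketch; `MiniEngine` and its names are invented for this example and are not ignite's real API, which additionally needs torch.

```python
from collections import defaultdict
from enum import Enum


class Events(Enum):
    STARTED = "started"
    ITERATION_COMPLETED = "iteration_completed"
    COMPLETED = "completed"


class MiniEngine:
    """Loop over data, call a process function, and fire events (toy version)."""

    def __init__(self, process_function):
        self._process = process_function
        self._handlers = defaultdict(list)
        # a dict stands in for ignite's State object here
        self.state = {"iteration": 0, "output": None}

    def add_event_handler(self, event, handler, *args):
        self._handlers[event].append((handler, args))

    def _fire(self, event):
        for handler, args in self._handlers[event]:
            handler(self, *args)

    def run(self, data):
        self._fire(Events.STARTED)
        for batch in data:
            self.state["iteration"] += 1
            self.state["output"] = self._process(batch)
            self._fire(Events.ITERATION_COMPLETED)
        self._fire(Events.COMPLETED)
        return self.state


seen = []
engine = MiniEngine(lambda batch: batch * 2)
engine.add_event_handler(
    Events.ITERATION_COMPLETED,
    lambda e: seen.append((e.state["iteration"], e.state["output"])),
)
final_state = engine.run([1, 2, 3])
```

The handler sees the state updated after every iteration, which is the interaction model the concepts page walks through in detail.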
| rename ignite.engines -> ignite.engine
@jasonkriss @vfdev-5 was there a reason we didn't do this when we merged the engines into one? | pytorch/ignite | diff --git a/tests/ignite/engines/test_engine.py b/tests/ignite/engines/test_engine.py
index a5cae67d..a77fc4d2 100644
--- a/tests/ignite/engines/test_engine.py
+++ b/tests/ignite/engines/test_engine.py
@@ -10,7 +10,7 @@ from torch.nn import Linear
from torch.nn.functional import mse_loss
from torch.optim import SGD
-from ignite.engines import Engine, Events, State, create_supervised_trainer, create_supervised_evaluator
+from ignite.engine import Engine, Events, State, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import MeanSquaredError
diff --git a/tests/ignite/handlers/test_checkpoint.py b/tests/ignite/handlers/test_checkpoint.py
index 043500a3..c6126af8 100644
--- a/tests/ignite/handlers/test_checkpoint.py
+++ b/tests/ignite/handlers/test_checkpoint.py
@@ -6,7 +6,7 @@ import torch
import torch.nn as nn
import shutil
-from ignite.engines import Engine, Events
+from ignite.engine import Engine, Events
from ignite.handlers import ModelCheckpoint
_PREFIX = 'PREFIX'
diff --git a/tests/ignite/handlers/test_early_stopping.py b/tests/ignite/handlers/test_early_stopping.py
index 0044e024..db1b8b9f 100644
--- a/tests/ignite/handlers/test_early_stopping.py
+++ b/tests/ignite/handlers/test_early_stopping.py
@@ -1,7 +1,7 @@
import pytest
-from ignite.engines import Engine, Events
+from ignite.engine import Engine, Events
from ignite.handlers import EarlyStopping
diff --git a/tests/ignite/handlers/test_timing.py b/tests/ignite/handlers/test_timing.py
index ab0dc606..1044054d 100644
--- a/tests/ignite/handlers/test_timing.py
+++ b/tests/ignite/handlers/test_timing.py
@@ -1,6 +1,6 @@
import time
-from ignite.engines import Engine, Events
+from ignite.engine import Engine, Events
from ignite.handlers import Timer
diff --git a/tests/ignite/metrics/test_metric.py b/tests/ignite/metrics/test_metric.py
index 4ee6d0e6..070fea20 100644
--- a/tests/ignite/metrics/test_metric.py
+++ b/tests/ignite/metrics/test_metric.py
@@ -1,5 +1,5 @@
from ignite.metrics import Metric
-from ignite.engines import State
+from ignite.engine import State
import torch
from mock import MagicMock
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 13
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"numpy",
"mock",
"pytest",
"codecov",
"pytest-cov",
"tqdm",
"scikit-learn",
"visdom",
"torchvision",
"tensorboardX",
"gym"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
charset-normalizer==2.0.12
cloudpickle==2.2.1
codecov==2.1.13
coverage==6.2
dataclasses==0.8
decorator==4.4.2
enum34==1.1.10
gym==0.26.2
gym-notices==0.0.8
idna==3.10
-e git+https://github.com/pytorch/ignite.git@6d58cc358d6085a38d9ed1e48853ec7afe4d489a#egg=ignite
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
importlib-resources==5.4.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
joblib==1.1.1
jsonpatch==1.32
jsonpointer==2.3
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
networkx==2.5.1
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==8.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
protobuf==4.21.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
requests==2.27.1
scikit-learn==0.24.2
scipy==1.5.4
six==1.17.0
tensorboardX==2.6.2.2
threadpoolctl==3.1.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
torch==1.10.1
torchvision==0.11.2
tornado==6.1
tqdm==4.64.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
visdom==0.2.4
websocket-client==1.3.1
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: ignite
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- charset-normalizer==2.0.12
- cloudpickle==2.2.1
- codecov==2.1.13
- coverage==6.2
- dataclasses==0.8
- decorator==4.4.2
- enum34==1.1.10
- gym==0.26.2
- gym-notices==0.0.8
- idna==3.10
- importlib-resources==5.4.0
- joblib==1.1.1
- jsonpatch==1.32
- jsonpointer==2.3
- mock==5.2.0
- networkx==2.5.1
- numpy==1.19.5
- pillow==8.4.0
- protobuf==4.21.0
- pytest-cov==4.0.0
- requests==2.27.1
- scikit-learn==0.24.2
- scipy==1.5.4
- six==1.17.0
- tensorboardx==2.6.2.2
- threadpoolctl==3.1.0
- tomli==1.2.3
- torch==1.10.1
- torchvision==0.11.2
- tornado==6.1
- tqdm==4.64.1
- urllib3==1.26.20
- visdom==0.2.4
- websocket-client==1.3.1
prefix: /opt/conda/envs/ignite
| [
"tests/ignite/engines/test_engine.py::test_terminate",
"tests/ignite/engines/test_engine.py::test_invalid_process_raises_with_invalid_signature",
"tests/ignite/engines/test_engine.py::test_add_event_handler_raises_with_invalid_event",
"tests/ignite/engines/test_engine.py::test_add_event_handler_raises_with_invalid_signature",
"tests/ignite/engines/test_engine.py::test_add_event_handler",
"tests/ignite/engines/test_engine.py::test_adding_multiple_event_handlers",
"tests/ignite/engines/test_engine.py::test_args_and_kwargs_are_passed_to_event",
"tests/ignite/engines/test_engine.py::test_on_decorator_raises_with_invalid_event",
"tests/ignite/engines/test_engine.py::test_on_decorator",
"tests/ignite/engines/test_engine.py::test_returns_state",
"tests/ignite/engines/test_engine.py::test_state_attributes",
"tests/ignite/engines/test_engine.py::test_default_exception_handler",
"tests/ignite/engines/test_engine.py::test_custom_exception_handler",
"tests/ignite/engines/test_engine.py::test_current_epoch_counter_increases_every_epoch",
"tests/ignite/engines/test_engine.py::test_current_iteration_counter_increases_every_iteration",
"tests/ignite/engines/test_engine.py::test_stopping_criterion_is_max_epochs",
"tests/ignite/engines/test_engine.py::test_terminate_at_end_of_epoch_stops_run",
"tests/ignite/engines/test_engine.py::test_terminate_at_start_of_epoch_stops_run_after_completing_iteration",
"tests/ignite/engines/test_engine.py::test_terminate_stops_run_mid_epoch",
"tests/ignite/engines/test_engine.py::test_iteration_events_are_fired",
"tests/ignite/engines/test_engine.py::test_create_supervised_trainer",
"tests/ignite/engines/test_engine.py::test_create_supervised",
"tests/ignite/engines/test_engine.py::test_create_supervised_with_metrics",
"tests/ignite/handlers/test_checkpoint.py::test_args_validation",
"tests/ignite/handlers/test_checkpoint.py::test_simple_recovery",
"tests/ignite/handlers/test_checkpoint.py::test_simple_recovery_from_existing_non_empty",
"tests/ignite/handlers/test_checkpoint.py::test_atomic",
"tests/ignite/handlers/test_checkpoint.py::test_last_k",
"tests/ignite/handlers/test_checkpoint.py::test_best_k",
"tests/ignite/handlers/test_checkpoint.py::test_best_k_with_suffix",
"tests/ignite/handlers/test_checkpoint.py::test_with_engine",
"tests/ignite/handlers/test_checkpoint.py::test_no_state_dict",
"tests/ignite/handlers/test_checkpoint.py::test_with_state_dict",
"tests/ignite/handlers/test_checkpoint.py::test_valid_state_dict_save",
"tests/ignite/handlers/test_early_stopping.py::test_args_validation",
"tests/ignite/handlers/test_early_stopping.py::test_simple_early_stopping",
"tests/ignite/handlers/test_early_stopping.py::test_simple_no_early_stopping",
"tests/ignite/handlers/test_early_stopping.py::test_with_engine_early_stopping",
"tests/ignite/handlers/test_early_stopping.py::test_with_engine_no_early_stopping",
"tests/ignite/handlers/test_timing.py::test_timer",
"tests/ignite/metrics/test_metric.py::test_no_transform",
"tests/ignite/metrics/test_metric.py::test_transform"
]
| []
| []
| []
| BSD 3-Clause "New" or "Revised" License | 2,445 | [
"ignite/handlers/early_stopping.py",
"ignite/handlers/checkpoint.py",
"examples/mnist_with_tensorboardx.py",
"docs/source/index.rst",
"ignite/metrics/metric.py",
"examples/mnist_with_visdom.py",
"docs/source/engines.rst",
"examples/mnist.py",
"ignite/handlers/timing.py",
"docs/source/concepts.rst",
"ignite/engines/engine.py",
"docs/source/quickstart.rst",
"ignite/engines/__init__.py"
]
| [
"ignite/handlers/early_stopping.py",
"ignite/handlers/checkpoint.py",
"examples/mnist_with_tensorboardx.py",
"docs/source/engine.rst",
"ignite/engine/engine.py",
"docs/source/index.rst",
"ignite/metrics/metric.py",
"ignite/engine/__init__.py",
"examples/mnist_with_visdom.py",
"examples/mnist.py",
"ignite/handlers/timing.py",
"docs/source/concepts.rst",
"docs/source/quickstart.rst"
]
|
ELIFE-ASU__Neet-105 | 041332432596020896894dbaa66282010db9e065 | 2018-04-25 16:50:40 | 041332432596020896894dbaa66282010db9e065 | diff --git a/neet/boolean/logicnetwork.py b/neet/boolean/logicnetwork.py
index b9342f1..b173fdf 100644
--- a/neet/boolean/logicnetwork.py
+++ b/neet/boolean/logicnetwork.py
@@ -109,13 +109,13 @@ class LogicNetwork(object):
# Encode the mask.
mask_code = long(0)
for idx in indices:
- mask_code += 2 ** idx # Low order, low index.
+ mask_code += 2 ** long(idx) # Low order, low index.
# Encode each condition of truth table.
encoded_sub_table = set()
for condition in conditions:
encoded_condition = long(0)
for idx, state in zip(indices, condition):
- encoded_condition += 2 ** idx if long(state) else 0
+ encoded_condition += 2 ** long(idx) if int(state) else 0
encoded_sub_table.add(encoded_condition)
self._encoded_table.append((mask_code, encoded_sub_table))
| LogicNetwork table encoding issue
Encoding the truth table evaluates ``2 ** idx``, but the node indices can be NumPy integers (e.g. ``np.int64``), so the power is computed in fixed-width integer arithmetic and silently wraps once an index reaches 63 (i.e. for networks with 64 or more nodes), corrupting the encoded mask. Casting each index to a Python ``long``/``int`` before exponentiation keeps the result arbitrary-precision. See comments on the team_grn slack channel. | ELIFE-ASU/Neet | diff --git a/test/test_logic.py b/test/test_logic.py
index 523d2d9..304c019 100644
--- a/test/test_logic.py
+++ b/test/test_logic.py
@@ -2,7 +2,8 @@
# Use of this source code is governed by a MIT
# license that can be found in the LICENSE file.
"""Unit test for LogicNetwork"""
-import unittest
+import unittest, numpy as np
+from neet.python3 import *
from neet.boolean import LogicNetwork
from neet.exceptions import FormatError
@@ -27,6 +28,16 @@ class TestLogicNetwork(unittest.TestCase):
self.assertEqual(['A', 'B'], net.names)
self.assertEqual([(2, {0, 2}), (1, {1})], net._encoded_table)
+ def test_init_long(self):
+ table = [((), set()) for _ in range(65)]
+ table[0] = ((np.int64(64),), set('1'))
+
+ mask = long(2)**64
+
+ net = LogicNetwork(table)
+ self.assertEqual(net.table, table)
+ self.assertEqual(net._encoded_table[0], (mask, set([mask])))
+
def test_inplace_update(self):
net = LogicNetwork([((1,), {'0', '1'}), ((0,), {'1'})])
state = [0, 1]
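The new test above feeds the encoder an index of `np.int64(64)`, which is exactly where the original `2 ** idx` goes wrong. A NumPy-only demonstration of the difference the cast makes (not Neet code, just the underlying arithmetic):

```python
import numpy as np

idx = np.int64(64)

# With a NumPy integer exponent, the power stays in fixed-width int64
# arithmetic and wraps modulo 2**64 instead of growing.
with np.errstate(over="ignore"):  # silence the integer overflow warning
    wrapped = 2 ** idx

# Casting to a Python int (aliased to `long` via neet.python3) keeps
# arbitrary precision, which is what the patch does.
correct = 2 ** int(idx)  # exactly 2**64 == 18446744073709551616

print(int(wrapped) == correct)
```

So the mask for node 64 in a 65-node table only comes out right after the cast.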
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 2
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"nose-cov",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.5",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
cov-core==1.15.0
coverage==6.2
decorator==4.4.2
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/ELIFE-ASU/Neet.git@041332432596020896894dbaa66282010db9e065#egg=neet
networkx==2.5.1
nose==1.3.7
nose-cov==1.6
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyinform==0.2.0
pyparsing==3.1.4
pytest==7.0.1
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: Neet
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- cov-core==1.15.0
- coverage==6.2
- decorator==4.4.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- networkx==2.5.1
- nose==1.3.7
- nose-cov==1.6
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyinform==0.2.0
- pyparsing==3.1.4
- pytest==7.0.1
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/Neet
| [
"test/test_logic.py::TestLogicNetwork::test_init_long"
]
| []
| [
"test/test_logic.py::TestLogicNetwork::test_has_metadata",
"test/test_logic.py::TestLogicNetwork::test_init",
"test/test_logic.py::TestLogicNetwork::test_inplace_update",
"test/test_logic.py::TestLogicNetwork::test_is_fixed_sized",
"test/test_logic.py::TestLogicNetwork::test_is_network",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read_custom_comment",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read_empty",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read_no_commas",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read_no_header",
"test/test_logic.py::TestLogicNetwork::test_logic_simple_read_no_node_headers",
"test/test_logic.py::TestLogicNetwork::test_neighbors_both",
"test/test_logic.py::TestLogicNetwork::test_neighbors_in",
"test/test_logic.py::TestLogicNetwork::test_neighbors_out",
"test/test_logic.py::TestLogicNetwork::test_node_dependency",
"test/test_logic.py::TestLogicNetwork::test_reduce_table",
"test/test_logic.py::TestLogicNetwork::test_to_networkx_graph_names",
"test/test_logic.py::TestLogicNetwork::test_to_networkx_graph_names_fail",
"test/test_logic.py::TestLogicNetwork::test_to_networkx_metadata",
"test/test_logic.py::TestLogicNetwork::test_update",
"test/test_logic.py::TestLogicNetwork::test_update_exceptions"
]
| []
| MIT License | 2,446 | [
"neet/boolean/logicnetwork.py"
]
| [
"neet/boolean/logicnetwork.py"
]
|
|
dask__dask-3446 | 0fd986fb3f9aefb2c441f135fc807c18471a61b8 | 2018-04-25 19:28:41 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | diff --git a/dask/array/__init__.py b/dask/array/__init__.py
index 67c697118..bf37a71b2 100644
--- a/dask/array/__init__.py
+++ b/dask/array/__init__.py
@@ -7,13 +7,14 @@ from .core import (Array, block, concatenate, stack, from_array, store,
broadcast_arrays, broadcast_to)
from .routines import (take, choose, argwhere, where, coarsen, insert,
ravel, roll, unique, squeeze, ptp, diff, ediff1d,
- bincount, digitize, histogram, cov, array, dstack,
- vstack, hstack, compress, extract, round, count_nonzero,
- flatnonzero, nonzero, around, isin, isnull, notnull,
- isclose, allclose, corrcoef, swapaxes, tensordot,
- transpose, dot, vdot, matmul, apply_along_axis,
- apply_over_axes, result_type, atleast_1d, atleast_2d,
- atleast_3d, piecewise, flip, flipud, fliplr, einsum)
+ gradient, bincount, digitize, histogram, cov, array,
+ dstack, vstack, hstack, compress, extract, round,
+ count_nonzero, flatnonzero, nonzero, around, isin,
+ isnull, notnull, isclose, allclose, corrcoef, swapaxes,
+ tensordot, transpose, dot, vdot, matmul,
+ apply_along_axis, apply_over_axes, result_type,
+ atleast_1d, atleast_2d, atleast_3d, piecewise, flip,
+ flipud, fliplr, einsum)
from .reshape import reshape
from .ufunc import (add, subtract, multiply, divide, logaddexp, logaddexp2,
true_divide, floor_divide, negative, power, remainder, mod, conj, exp,
diff --git a/dask/array/ghost.py b/dask/array/ghost.py
index 84538ef1b..c7be674df 100644
--- a/dask/array/ghost.py
+++ b/dask/array/ghost.py
@@ -361,7 +361,7 @@ def add_dummy_padding(x, depth, boundary):
array([..., 0, 1, 2, 3, 4, 5, ...])
"""
for k, v in boundary.items():
- d = depth[k]
+ d = depth.get(k, 0)
if v == 'none' and d > 0:
empty_shape = list(x.shape)
empty_shape[k] = d
@@ -465,4 +465,5 @@ def coerce_boundary(ndim, boundary):
boundary = (boundary,) * ndim
if isinstance(boundary, tuple):
boundary = dict(zip(range(ndim), boundary))
+
return boundary
diff --git a/dask/array/routines.py b/dask/array/routines.py
index 72e88433a..7c2c9f3ac 100644
--- a/dask/array/routines.py
+++ b/dask/array/routines.py
@@ -1,14 +1,15 @@
from __future__ import division, print_function, absolute_import
import inspect
+import math
import warnings
from collections import Iterable
from distutils.version import LooseVersion
from functools import wraps, partial
-from numbers import Integral
+from numbers import Number, Real, Integral
import numpy as np
-from toolz import concat, sliding_window, interleave
+from toolz import concat, merge, sliding_window, interleave
from .. import sharedict
from ..core import flatten
@@ -404,6 +405,71 @@ def ediff1d(ary, to_end=None, to_begin=None):
return r
+def _gradient_kernel(f, grad_varargs, grad_kwargs):
+ return np.gradient(f, *grad_varargs, **grad_kwargs)
+
+
+@wraps(np.gradient)
+def gradient(f, *varargs, **kwargs):
+ f = asarray(f)
+
+ if not all([isinstance(e, Number) for e in varargs]):
+ raise NotImplementedError("Only numeric scalar spacings supported.")
+
+ if varargs == ():
+ varargs = (1,)
+ if len(varargs) == 1:
+ varargs = f.ndim * varargs
+ if len(varargs) != f.ndim:
+ raise TypeError(
+ "Spacing must either be a scalar or a scalar per dimension."
+ )
+
+ kwargs["edge_order"] = math.ceil(kwargs.get("edge_order", 1))
+ if kwargs["edge_order"] > 2:
+ raise ValueError("edge_order must be less than or equal to 2.")
+
+ drop_result_list = False
+ axis = kwargs.pop("axis", None)
+ if axis is None:
+ axis = tuple(range(f.ndim))
+ elif isinstance(axis, Integral):
+ drop_result_list = True
+ axis = (axis,)
+
+ for e in axis:
+ if not isinstance(e, Integral):
+ raise TypeError("%s, invalid value for axis" % repr(e))
+ if not (-f.ndim <= e < f.ndim):
+ raise ValueError("axis, %s, is out of bounds" % repr(e))
+
+ if len(axis) != len(set(axis)):
+ raise ValueError("duplicate axes not allowed")
+
+ axis = tuple(ax % f.ndim for ax in axis)
+
+ if issubclass(f.dtype.type, (np.bool8, Integral)):
+ f = f.astype(float)
+ elif issubclass(f.dtype.type, Real) and f.dtype.itemsize < 4:
+ f = f.astype(float)
+
+ r = [
+ f.map_overlap(
+ _gradient_kernel,
+ dtype=f.dtype,
+ depth={j: 1 if j == ax else 0 for j in range(f.ndim)},
+ boundary="none",
+ grad_varargs=(varargs[i],),
+ grad_kwargs=merge(kwargs, {"axis": ax}),
+ )
+ for i, ax in enumerate(axis)
+ ]
+ if drop_result_list:
+ r = r[0]
+
+ return r
+
+
@wraps(np.bincount)
def bincount(x, weights=None, minlength=None):
if minlength is None:
diff --git a/dask/dataframe/groupby.py b/dask/dataframe/groupby.py
index 28a0854d8..ee086513e 100644
--- a/dask/dataframe/groupby.py
+++ b/dask/dataframe/groupby.py
@@ -1221,7 +1221,7 @@ class SeriesGroupBy(_GroupBy):
if self._slice:
result = result[self._slice]
- if not isinstance(arg, (list, dict)):
+ if not isinstance(arg, (list, dict)) and isinstance(result, DataFrame):
result = result[result.columns[0]]
return result
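The one-line guard added to `SeriesGroupBy.aggregate` above exists because, in pandas, the shape of an aggregation result depends on how the column was selected: a single-column selection aggregates to a `Series`, while a list selection yields a `DataFrame`, and the old code assumed a `DataFrame` unconditionally. A pandas-only illustration with made-up toy data:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1.0, 3.0, 2.0, 4.0]})

# Selecting a single column then aggregating yields a Series in pandas,
# so there is no `.columns` attribute to index into ...
s = df.groupby("A")["B"].agg("var")

# ... while a list selection yields a DataFrame, which is the case the
# `result[result.columns[0]]` unwrapping was written for.
d = df.groupby("A")[["B"]].agg("var")

print(type(s).__name__, type(d).__name__)
```

Checking `isinstance(result, DataFrame)` before unwrapping therefore covers both shapes.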
diff --git a/docs/source/array-api.rst b/docs/source/array-api.rst
index f6fa498c4..b0b77d5e7 100644
--- a/docs/source/array-api.rst
+++ b/docs/source/array-api.rst
@@ -83,6 +83,7 @@ Top level user functions:
frompyfunc
full
full_like
+ gradient
histogram
hstack
hypot
@@ -423,6 +424,7 @@ Other functions
.. autofunction:: frompyfunc
.. autofunction:: full
.. autofunction:: full_like
+.. autofunction:: gradient
.. autofunction:: histogram
.. autofunction:: hstack
.. autofunction:: hypot
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index 5ca12ab13..d74933916 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -18,11 +18,13 @@ Array
- The ``topk`` API has changed from topk(k, array) to the more conventional topk(array, k).
The legacy API still works but is now deprecated. (:pr:`2965`) `Guido Imperiale`_
- New function ``argtopk`` for Dask Arrays (:pr:`3396`) `Guido Imperiale`_
+- Fix handling partial depth and boundary in ``map_overlap`` (:pr:`3445`) `John A Kirkham`_
+- Add ``gradient`` for Dask Arrays (:pr:`3434`) `John A Kirkham`_
+
DataFrame
+++++++++
-
- Allow `t` as shorthand for `table` in `to_hdf` for pandas compatibility (:pr:`3330`) `Jörg Dietrich`_
- Added top level `isna` method for Dask DataFrames (:pr:`3294`) `Christopher Ren`_
- Fix selection on partition column on ``read_parquet`` for ``engine="pyarrow"`` (:pr:`3207`) `Uwe Korn`_
@@ -33,6 +35,7 @@ DataFrame
- add orc reader (:pr:`3284`) `Martin Durant`_
- Default compression for parquet now always Snappy, in line with pandas (:pr:`3373`) `Martin Durant`_
- Remove outdated requirement from repartition docstring (:pr:`3440`) `Jörg Dietrich`_
+- Fixed bug in aggregation when only a Series is selected (:pr:`3446`) `Jörg Dietrich`_
Bag
+++
| Dataframe groupby()[column].agg fails with AttributeError
Minimal non-working example (dask 0.17.2, pandas 0.22.0, Python 3.6):
```
import pandas as pd
from dask import dataframe as dd
df = pd.DataFrame({'A': [1, 2, 3, 1, 2, 3, 1, 2, 4],
'B': [-0.776, -0.4, -0.873, 0.054, 1.419, -0.948, -0.967, -1.714,
-0.666]})
ddf = dd.from_pandas(df, npartitions=1)
ddf.groupby('A')['B'].agg('var')
```
```
AttributeError Traceback (most recent call last)
<ipython-input-1-b43466d1ae6a> in <module>()
6 -0.666]})
7 ddf = dd.from_pandas(df, npartitions=1)
----> 8 ddf.groupby('A')['B'].agg('var')
9
10
~/applications/anaconda3/lib/python3.6/site-packages/dask/dataframe/groupby.py in agg(self, arg, split_every, split_out)
1217 @derived_from(pd.core.groupby.SeriesGroupBy)
1218 def agg(self, arg, split_every=None, split_out=1):
-> 1219 return self.aggregate(arg, split_every=split_every, split_out=split_out)
~/applications/anaconda3/lib/python3.6/site-packages/dask/dataframe/groupby.py in aggregate(self, arg, split_every, split_out)
1211
1212 if not isinstance(arg, (list, dict)):
-> 1213 result = result[result.columns[0]]
1214
1215 return result
AttributeError: 'Series' object has no attribute 'columns'
``` | dask/dask | diff --git a/dask/array/tests/test_ghost.py b/dask/array/tests/test_ghost.py
index 4aa770bde..0497ed95b 100644
--- a/dask/array/tests/test_ghost.py
+++ b/dask/array/tests/test_ghost.py
@@ -195,8 +195,11 @@ def test_map_overlap():
exp1 = d.map_overlap(lambda x: x + x.size, depth=1, dtype=d.dtype)
exp2 = d.map_overlap(lambda x: x + x.size, depth={0: 1, 1: 1},
boundary={0: 'reflect', 1: 'none'}, dtype=d.dtype)
+ exp3 = d.map_overlap(lambda x: x + x.size, depth={1: 1},
+ boundary={1: 'reflect'}, dtype=d.dtype)
assert_eq(exp1, x + 16)
assert_eq(exp2, x + 12)
+ assert_eq(exp3, x + 8)
@pytest.mark.parametrize("boundary", [
diff --git a/dask/array/tests/test_routines.py b/dask/array/tests/test_routines.py
index 7b8443644..a5a214eab 100644
--- a/dask/array/tests/test_routines.py
+++ b/dask/array/tests/test_routines.py
@@ -1,6 +1,7 @@
from __future__ import division, print_function, absolute_import
import itertools
+from numbers import Number
import textwrap
import pytest
@@ -419,6 +420,36 @@ def test_ediff1d(shape, to_end, to_begin):
assert_eq(da.ediff1d(a, to_end, to_begin), np.ediff1d(x, to_end, to_begin))
[email protected]('shape, varargs, axis', [
+ [(10, 15, 20), (), None],
+ [(10, 15, 20), (2,), None],
+ [(10, 15, 20), (1.0, 1.5, 2.0), None],
+ [(10, 15, 20), (), 0],
+ [(10, 15, 20), (), 1],
+ [(10, 15, 20), (), 2],
+ [(10, 15, 20), (), -1],
+ [(10, 15, 20), (), (0, 2)],
+])
[email protected]('edge_order', [
+ 1,
+ 2
+])
+def test_gradient(shape, varargs, axis, edge_order):
+ a = np.random.randint(0, 10, shape)
+ d_a = da.from_array(a, chunks=(len(shape) * (5,)))
+
+ r = np.gradient(a, *varargs, axis=axis, edge_order=edge_order)
+ r_a = da.gradient(d_a, *varargs, axis=axis, edge_order=edge_order)
+
+ if isinstance(axis, Number):
+ assert_eq(r, r_a)
+ else:
+ assert len(r) == len(r_a)
+
+ for e_r, e_r_a in zip(r, r_a):
+ assert_eq(e_r, e_r_a)
+
+
def test_bincount():
x = np.array([2, 1, 5, 2, 1])
d = da.from_array(x, chunks=2)
diff --git a/dask/dataframe/tests/test_groupby.py b/dask/dataframe/tests/test_groupby.py
index 7d1f81d9b..370b54b56 100644
--- a/dask/dataframe/tests/test_groupby.py
+++ b/dask/dataframe/tests/test_groupby.py
@@ -1429,3 +1429,13 @@ def test_groupby_agg_custom__mode():
expected = expected['cc'].groupby([expected['g0'], expected['g1']]).agg('sum')
assert_eq(actual, expected)
+
+
+def test_groupby_select_column_agg():
+ pdf = pd.DataFrame({'A': [1, 2, 3, 1, 2, 3, 1, 2, 4],
+ 'B': [-0.776, -0.4, -0.873, 0.054, 1.419, -0.948,
+ -0.967, -1.714, -0.666]})
+ ddf = dd.from_pandas(pdf, npartitions=4)
+ actual = ddf.groupby('A')['B'].agg('var')
+ expected = pdf.groupby('A')['B'].agg('var')
+ assert_eq(actual, expected)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 6
} | 1.21 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8",
"moto"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
boto3==1.23.10
botocore==1.26.10
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==2.2.1
cryptography==40.0.2
-e git+https://github.com/dask/dask.git@0fd986fb3f9aefb2c441f135fc807c18471a61b8#egg=dask
dataclasses==0.8
distributed==1.21.8
flake8==5.0.4
HeapDict==1.0.1
idna==3.10
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
jmespath==0.10.0
locket==1.0.0
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
moto==4.0.13
msgpack==1.0.5
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
partd==1.2.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.27.1
responses==0.17.0
s3transfer==0.5.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
toolz==0.12.0
tornado==6.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
Werkzeug==2.0.3
xmltodict==0.14.2
zict==2.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.23.10
- botocore==1.26.10
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.2.1
- cryptography==40.0.2
- dataclasses==0.8
- distributed==1.21.8
- flake8==5.0.4
- heapdict==1.0.1
- idna==3.10
- importlib-metadata==4.2.0
- jinja2==3.0.3
- jmespath==0.10.0
- locket==1.0.0
- markupsafe==2.0.1
- mccabe==0.7.0
- moto==4.0.13
- msgpack==1.0.5
- numpy==1.19.5
- pandas==1.1.5
- partd==1.2.0
- psutil==7.0.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- requests==2.27.1
- responses==0.17.0
- s3transfer==0.5.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- toolz==0.12.0
- tornado==6.1
- urllib3==1.26.20
- werkzeug==2.0.3
- xmltodict==0.14.2
- zict==2.1.0
prefix: /opt/conda/envs/dask
| [
"dask/array/tests/test_routines.py::test_gradient[1-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[1-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[1-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape7-varargs7-axis7]",
"dask/array/tests/test_routines.py::test_gradient[2-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[2-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[2-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape7-varargs7-axis7]",
"dask/dataframe/tests/test_groupby.py::test_groupby_select_column_agg"
]
| [
"dask/dataframe/tests/test_groupby.py::test_full_groupby",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_dir",
"dask/dataframe/tests/test_groupby.py::test_groupby_on_index[get_sync]",
"dask/dataframe/tests/test_groupby.py::test_groupby_on_index[get]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_agg",
"dask/dataframe/tests/test_groupby.py::test_groupby_index_array",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[grouper5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_apply_tasks",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec6]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec6]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-grouper4]",
"dask/dataframe/tests/test_groupby.py::test_split_out_multi_column_groupby",
"dask/dataframe/tests/test_groupby.py::test_groupby_unaligned_index",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[mean]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__mode"
]
| [
"dask/array/tests/test_ghost.py::test_fractional_slice",
"dask/array/tests/test_ghost.py::test_ghost_internal",
"dask/array/tests/test_ghost.py::test_trim_internal",
"dask/array/tests/test_ghost.py::test_periodic",
"dask/array/tests/test_ghost.py::test_reflect",
"dask/array/tests/test_ghost.py::test_nearest",
"dask/array/tests/test_ghost.py::test_constant",
"dask/array/tests/test_ghost.py::test_boundaries",
"dask/array/tests/test_ghost.py::test_ghost",
"dask/array/tests/test_ghost.py::test_map_overlap",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[None]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[reflect]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[periodic]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[nearest]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[none]",
"dask/array/tests/test_ghost.py::test_map_overlap_no_depth[0]",
"dask/array/tests/test_ghost.py::test_nearest_ghost",
"dask/array/tests/test_ghost.py::test_0_depth",
"dask/array/tests/test_ghost.py::test_some_0_depth",
"dask/array/tests/test_ghost.py::test_one_chunk_along_axis",
"dask/array/tests/test_ghost.py::test_constant_boundaries",
"dask/array/tests/test_ghost.py::test_depth_equals_boundary_length",
"dask/array/tests/test_ghost.py::test_bad_depth_raises",
"dask/array/tests/test_ghost.py::test_none_boundaries",
"dask/array/tests/test_ghost.py::test_ghost_small",
"dask/array/tests/test_routines.py::test_array",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_3d]",
"dask/array/tests/test_routines.py::test_transpose",
"dask/array/tests/test_routines.py::test_transpose_negative_axes",
"dask/array/tests/test_routines.py::test_swapaxes",
"dask/array/tests/test_routines.py::test_flip[shape0-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape0-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape1-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape1-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape2-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape2-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape3-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape3-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape4-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape4-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape0-y_shape0]",
"dask/array/tests/test_routines.py::test_matmul[x_shape1-y_shape1]",
"dask/array/tests/test_routines.py::test_matmul[x_shape2-y_shape2]",
"dask/array/tests/test_routines.py::test_matmul[x_shape3-y_shape3]",
"dask/array/tests/test_routines.py::test_matmul[x_shape4-y_shape4]",
"dask/array/tests/test_routines.py::test_matmul[x_shape5-y_shape5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape6-y_shape6]",
"dask/array/tests/test_routines.py::test_matmul[x_shape7-y_shape7]",
"dask/array/tests/test_routines.py::test_matmul[x_shape8-y_shape8]",
"dask/array/tests/test_routines.py::test_matmul[x_shape9-y_shape9]",
"dask/array/tests/test_routines.py::test_matmul[x_shape10-y_shape10]",
"dask/array/tests/test_routines.py::test_matmul[x_shape11-y_shape11]",
"dask/array/tests/test_routines.py::test_matmul[x_shape12-y_shape12]",
"dask/array/tests/test_routines.py::test_matmul[x_shape13-y_shape13]",
"dask/array/tests/test_routines.py::test_matmul[x_shape14-y_shape14]",
"dask/array/tests/test_routines.py::test_matmul[x_shape15-y_shape15]",
"dask/array/tests/test_routines.py::test_matmul[x_shape16-y_shape16]",
"dask/array/tests/test_routines.py::test_matmul[x_shape17-y_shape17]",
"dask/array/tests/test_routines.py::test_matmul[x_shape18-y_shape18]",
"dask/array/tests/test_routines.py::test_matmul[x_shape19-y_shape19]",
"dask/array/tests/test_routines.py::test_matmul[x_shape20-y_shape20]",
"dask/array/tests/test_routines.py::test_matmul[x_shape21-y_shape21]",
"dask/array/tests/test_routines.py::test_matmul[x_shape22-y_shape22]",
"dask/array/tests/test_routines.py::test_matmul[x_shape23-y_shape23]",
"dask/array/tests/test_routines.py::test_matmul[x_shape24-y_shape24]",
"dask/array/tests/test_routines.py::test_tensordot",
"dask/array/tests/test_routines.py::test_tensordot_2[0]",
"dask/array/tests/test_routines.py::test_tensordot_2[1]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes2]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes3]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes4]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes5]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes6]",
"dask/array/tests/test_routines.py::test_dot_method",
"dask/array/tests/test_routines.py::test_vdot[shape0-chunks0]",
"dask/array/tests/test_routines.py::test_vdot[shape1-chunks1]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-range-<lambda>]",
"dask/array/tests/test_routines.py::test_ptp[shape0-None]",
"dask/array/tests/test_routines.py::test_ptp[shape1-0]",
"dask/array/tests/test_routines.py::test_ptp[shape2-1]",
"dask/array/tests/test_routines.py::test_ptp[shape3-2]",
"dask/array/tests/test_routines.py::test_ptp[shape4--1]",
"dask/array/tests/test_routines.py::test_diff[0-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[0-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[0-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[0-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[1-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[1-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[1-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[1-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[2-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[2-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[2-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[2-shape3--1]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape1]",
"dask/array/tests/test_routines.py::test_bincount",
"dask/array/tests/test_routines.py::test_bincount_with_weights",
"dask/array/tests/test_routines.py::test_bincount_raises_informative_error_on_missing_minlength_kwarg",
"dask/array/tests/test_routines.py::test_digitize",
"dask/array/tests/test_routines.py::test_histogram",
"dask/array/tests/test_routines.py::test_histogram_alternative_bins_range",
"dask/array/tests/test_routines.py::test_histogram_return_type",
"dask/array/tests/test_routines.py::test_histogram_extra_args_and_shapes",
"dask/array/tests/test_routines.py::test_cov",
"dask/array/tests/test_routines.py::test_corrcoef",
"dask/array/tests/test_routines.py::test_round",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-True]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[True]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[False]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_ravel",
"dask/array/tests/test_routines.py::test_squeeze[None-True]",
"dask/array/tests/test_routines.py::test_squeeze[None-False]",
"dask/array/tests/test_routines.py::test_squeeze[0-True]",
"dask/array/tests/test_routines.py::test_squeeze[0-False]",
"dask/array/tests/test_routines.py::test_squeeze[-1-True]",
"dask/array/tests/test_routines.py::test_squeeze[-1-False]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-True]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-False]",
"dask/array/tests/test_routines.py::test_vstack",
"dask/array/tests/test_routines.py::test_hstack",
"dask/array/tests/test_routines.py::test_dstack",
"dask/array/tests/test_routines.py::test_take",
"dask/array/tests/test_routines.py::test_take_dask_from_numpy",
"dask/array/tests/test_routines.py::test_compress",
"dask/array/tests/test_routines.py::test_extract",
"dask/array/tests/test_routines.py::test_isnull",
"dask/array/tests/test_routines.py::test_isclose",
"dask/array/tests/test_routines.py::test_allclose",
"dask/array/tests/test_routines.py::test_choose",
"dask/array/tests/test_routines.py::test_piecewise",
"dask/array/tests/test_routines.py::test_piecewise_otherwise",
"dask/array/tests/test_routines.py::test_argwhere",
"dask/array/tests/test_routines.py::test_argwhere_obj",
"dask/array/tests/test_routines.py::test_argwhere_str",
"dask/array/tests/test_routines.py::test_where",
"dask/array/tests/test_routines.py::test_where_scalar_dtype",
"dask/array/tests/test_routines.py::test_where_bool_optimization",
"dask/array/tests/test_routines.py::test_where_nonzero",
"dask/array/tests/test_routines.py::test_where_incorrect_args",
"dask/array/tests/test_routines.py::test_count_nonzero",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_str",
"dask/array/tests/test_routines.py::test_flatnonzero",
"dask/array/tests/test_routines.py::test_nonzero",
"dask/array/tests/test_routines.py::test_nonzero_method",
"dask/array/tests/test_routines.py::test_coarsen",
"dask/array/tests/test_routines.py::test_coarsen_with_excess",
"dask/array/tests/test_routines.py::test_insert",
"dask/array/tests/test_routines.py::test_multi_insert",
"dask/array/tests/test_routines.py::test_result_type",
"dask/array/tests/test_routines.py::test_einsum[abc,bad->abcd]",
"dask/array/tests/test_routines.py::test_einsum[abcdef,bcdfg->abcdeg]",
"dask/array/tests/test_routines.py::test_einsum[ea,fb,abcd,gc,hd->efgh]",
"dask/array/tests/test_routines.py::test_einsum[ab,b]",
"dask/array/tests/test_routines.py::test_einsum[aa]",
"dask/array/tests/test_routines.py::test_einsum[a,a->]",
"dask/array/tests/test_routines.py::test_einsum[a,a->a]",
"dask/array/tests/test_routines.py::test_einsum[a,a]",
"dask/array/tests/test_routines.py::test_einsum[a,b]",
"dask/array/tests/test_routines.py::test_einsum[a,b,c]",
"dask/array/tests/test_routines.py::test_einsum[a]",
"dask/array/tests/test_routines.py::test_einsum[ba,b]",
"dask/array/tests/test_routines.py::test_einsum[ba,b->]",
"dask/array/tests/test_routines.py::test_einsum[defab,fedbc->defac]",
"dask/array/tests/test_routines.py::test_einsum[ab...,bc...->ac...]",
"dask/array/tests/test_routines.py::test_einsum[a...a]",
"dask/array/tests/test_routines.py::test_einsum[abc...->cba...]",
"dask/array/tests/test_routines.py::test_einsum[...ab->...a]",
"dask/array/tests/test_routines.py::test_einsum[a...a->a...]",
"dask/array/tests/test_routines.py::test_einsum[...abc,...abcd->...d]",
"dask/array/tests/test_routines.py::test_einsum[ab...,b->ab...]",
"dask/array/tests/test_routines.py::test_einsum[aa->a]",
"dask/array/tests/test_routines.py::test_einsum[ab,ab,c->c]",
"dask/array/tests/test_routines.py::test_einsum[aab,bc->ac]",
"dask/array/tests/test_routines.py::test_einsum[aab,bcc->ac]",
"dask/array/tests/test_routines.py::test_einsum[fdf,cdd,ccd,afe->ae]",
"dask/array/tests/test_routines.py::test_einsum[fff,fae,bef,def->abd]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts0]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts1]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts2]",
"dask/array/tests/test_routines.py::test_einsum_order[C]",
"dask/array/tests/test_routines.py::test_einsum_order[F]",
"dask/array/tests/test_routines.py::test_einsum_order[A]",
"dask/array/tests/test_routines.py::test_einsum_order[K]",
"dask/array/tests/test_routines.py::test_einsum_casting[no]",
"dask/array/tests/test_routines.py::test_einsum_casting[equiv]",
"dask/array/tests/test_routines.py::test_einsum_casting[safe]",
"dask/array/tests/test_routines.py::test_einsum_casting[same_kind]",
"dask/array/tests/test_routines.py::test_einsum_casting[unsafe]",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction2",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction3",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_apply_multiarg",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[True-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_full_groupby_multilevel[False-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[sum-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[mean-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[min-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[max-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[count-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[size-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[std-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[var-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[nunique-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[first-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>5]",
"dask/dataframe/tests/test_groupby.py::test_groupby_multilevel_getitem[last-<lambda>6]",
"dask/dataframe/tests/test_groupby.py::test_groupby_get_group",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_nunique",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_nunique_across_group_same_value",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_propagates_names",
"dask/dataframe/tests/test_groupby.py::test_series_groupby",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_errors",
"dask/dataframe/tests/test_groupby.py::test_groupby_set_index",
"dask/dataframe/tests/test_groupby.py::test_split_apply_combine_on_series",
"dask/dataframe/tests/test_groupby.py::test_groupby_reduction_split[split_every]",
"dask/dataframe/tests/test_groupby.py::test_groupby_reduction_split[split_out]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_apply_shuffle_multilevel[<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_numeric_column_names",
"dask/dataframe/tests/test_groupby.py::test_groupby_multiprocessing",
"dask/dataframe/tests/test_groupby.py::test_groupby_normalize_index",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>0-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>1-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>2-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>3-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-False-spec5]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec3]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__examples[<lambda>4-None-spec5]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>0-None-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>1-None-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-False-size]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-spec2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-sum]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregate__examples[<lambda>2-None-size]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[sum]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[mean]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[min]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[max]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[count]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[size]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[std]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[var]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[nunique]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[first]",
"dask/dataframe/tests/test_groupby.py::test_aggregate__single_element_groups[last]",
"dask/dataframe/tests/test_groupby.py::test_aggregate_build_agg_args__reuse_of_intermediates",
"dask/dataframe/tests/test_groupby.py::test_aggregate__dask",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[sum-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[mean-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[min-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[max-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[count-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[size-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[std-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[var-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[nunique-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[first-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_aggregations_multilevel[last-<lambda>4]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[sum-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[mean-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[min-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[max-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[count-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[size-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[std-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[var-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[nunique-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[first-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_series_aggregations_multilevel[last-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>0-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>1-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_meta_content[<lambda>2-<lambda>3]",
"dask/dataframe/tests/test_groupby.py::test_groupy_non_aligned_index",
"dask/dataframe/tests/test_groupby.py::test_groupy_series_wrong_grouper",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[None-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[1-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[5-5-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-2-20]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-1]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-4]",
"dask/dataframe/tests/test_groupby.py::test_hash_groupby_aggregate[20-5-20]",
"dask/dataframe/tests/test_groupby.py::test_groupby_split_out_num",
"dask/dataframe/tests/test_groupby.py::test_groupby_not_supported",
"dask/dataframe/tests/test_groupby.py::test_groupby_numeric_column",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumsum-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumprod-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-a-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-c]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-d]",
"dask/dataframe/tests/test_groupby.py::test_cumulative[cumcount-key1-sel2]",
"dask/dataframe/tests/test_groupby.py::test_cumulative_axis1[cumsum]",
"dask/dataframe/tests/test_groupby.py::test_cumulative_axis1[cumprod]",
"dask/dataframe/tests/test_groupby.py::test_groupby_slice_agg_reduces",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_single",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[a]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_grouper_multiple[slice_3]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumprod]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumcount]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[cumsum]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[var]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[sum]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[count]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[size]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[std]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[min]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[max]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[first]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_agg_funcs[last]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[amin-idx]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[mean-idx]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args0]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args1]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-group_args2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_column_and_index_apply[<lambda>-idx]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec0-dask_spec0-False]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec1-dask_spec1-True]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec2-dask_spec2-False]",
"dask/dataframe/tests/test_groupby.py::test_dataframe_groupby_agg_custom_sum[pandas_spec3-dask_spec3-False]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[mean-mean]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[pandas_spec1-dask_spec1]",
"dask/dataframe/tests/test_groupby.py::test_series_groupby_agg_custom_mean[pandas_spec2-dask_spec2]",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__name_clash_with_internal_same_column",
"dask/dataframe/tests/test_groupby.py::test_groupby_agg_custom__name_clash_with_internal_different_column"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,447 | [
"docs/source/array-api.rst",
"dask/dataframe/groupby.py",
"dask/array/routines.py",
"dask/array/__init__.py",
"docs/source/changelog.rst",
"dask/array/ghost.py"
]
| [
"docs/source/array-api.rst",
"dask/dataframe/groupby.py",
"dask/array/routines.py",
"dask/array/__init__.py",
"docs/source/changelog.rst",
"dask/array/ghost.py"
]
|
|
eliben__pycparser-255 | 168f54c3ae324c3827d22fb90e456653e6fe584a | 2018-04-26 10:13:25 | 168f54c3ae324c3827d22fb90e456653e6fe584a | eliben: Thanks! | diff --git a/pycparser/c_generator.py b/pycparser/c_generator.py
index 0575b8b..4c86f84 100644
--- a/pycparser/c_generator.py
+++ b/pycparser/c_generator.py
@@ -283,8 +283,8 @@ class CGenerator(object):
for name in n.name:
if isinstance(name, c_ast.ID):
s += '.' + name.name
- elif isinstance(name, c_ast.Constant):
- s += '[' + name.value + ']'
+ else:
+ s += '[' + self.visit(name) + ']'
s += ' = ' + self._visit_expr(n.expr)
return s
| Constant expressions in designated initializers are not generated back to C
While pycparser correctly parses a constant-expression in a designated initializer (the AST is correct), it fails to write it back when generating C code.
Consider the following code:
```C
void myFunction(void)
{
int array[3] = {[0] = 0, [1] = 1, [1+1] = 2};
}
```
Parsing it, then using `CGenerator` to generate the source produces:
```C
void myFunction(void)
{
int array[3] = {[0] = 0, [1] = 1, = 2};
}
```
The C99 grammar describes the designator part of designated initializers as:
```ebnf
designator: [ constant-expression ]
. identifier
```
(See §6.7.8 in http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf)
The `CGenerator.visit_NamedInitializer` method currently only considers the `ID` and `Constant` node types.
The `Constant` branch should either be extended to other types or be an `else:` branch. | eliben/pycparser | diff --git a/tests/test_c_generator.py b/tests/test_c_generator.py
index 9385e80..4e38f28 100644
--- a/tests/test_c_generator.py
+++ b/tests/test_c_generator.py
@@ -228,6 +228,11 @@ class TestCtoC(unittest.TestCase):
}
''')
+ def test_issue246(self):
+ self._assert_ctoc_correct(r'''
+ int array[3] = {[0] = 0, [1] = 1, [1+1] = 2};
+ ''')
+
def test_exprlist_with_semi(self):
self._assert_ctoc_correct(r'''
void x() {
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 2.18 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
-e git+https://github.com/eliben/pycparser.git@168f54c3ae324c3827d22fb90e456653e6fe584a#egg=pycparser
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: pycparser
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/pycparser
| [
"tests/test_c_generator.py::TestCtoC::test_issue246"
]
| []
| [
"tests/test_c_generator.py::TestFunctionDeclGeneration::test_partial_funcdecl_generation",
"tests/test_c_generator.py::TestCtoC::test_casts",
"tests/test_c_generator.py::TestCtoC::test_comma_op_assignment",
"tests/test_c_generator.py::TestCtoC::test_comma_op_in_ternary",
"tests/test_c_generator.py::TestCtoC::test_comma_operator_funcarg",
"tests/test_c_generator.py::TestCtoC::test_complex_decls",
"tests/test_c_generator.py::TestCtoC::test_compound_literal",
"tests/test_c_generator.py::TestCtoC::test_enum",
"tests/test_c_generator.py::TestCtoC::test_enum_typedef",
"tests/test_c_generator.py::TestCtoC::test_expr_list_in_initializer_list",
"tests/test_c_generator.py::TestCtoC::test_exprlist_with_semi",
"tests/test_c_generator.py::TestCtoC::test_exprlist_with_subexprlist",
"tests/test_c_generator.py::TestCtoC::test_exprs",
"tests/test_c_generator.py::TestCtoC::test_generate_struct_union_enum_exception",
"tests/test_c_generator.py::TestCtoC::test_initlist",
"tests/test_c_generator.py::TestCtoC::test_issue36",
"tests/test_c_generator.py::TestCtoC::test_issue37",
"tests/test_c_generator.py::TestCtoC::test_issue83",
"tests/test_c_generator.py::TestCtoC::test_issue84",
"tests/test_c_generator.py::TestCtoC::test_krstyle",
"tests/test_c_generator.py::TestCtoC::test_nest_initializer_list",
"tests/test_c_generator.py::TestCtoC::test_nest_named_initializer",
"tests/test_c_generator.py::TestCtoC::test_pragma",
"tests/test_c_generator.py::TestCtoC::test_statements",
"tests/test_c_generator.py::TestCtoC::test_struct_decl",
"tests/test_c_generator.py::TestCtoC::test_switchcase",
"tests/test_c_generator.py::TestCtoC::test_ternary",
"tests/test_c_generator.py::TestCtoC::test_trivial_decls"
]
| []
| BSD License | 2,448 | [
"pycparser/c_generator.py"
]
| [
"pycparser/c_generator.py"
]
|
EdinburghGenomics__clarity_scripts-48 | 32c21fa719365176a9101a8a7ce72eb07f3ac85d | 2018-04-26 10:43:04 | 32c21fa719365176a9101a8a7ce72eb07f3ac85d | diff --git a/EPPs/common.py b/EPPs/common.py
index ead0ee4..b734a33 100644
--- a/EPPs/common.py
+++ b/EPPs/common.py
@@ -71,9 +71,12 @@ class StepEPP(AppLogger):
f = open(file_or_uid)
else:
a = Artifact(self.lims, id=file_or_uid)
- f = StringIO(self.get_file_contents(uri=a.files[0].uri, encoding=encoding, crlf=crlf))
-
- self.open_files.append(f)
+ if a.files:
+ f = StringIO(self.get_file_contents(uri=a.files[0].uri, encoding=encoding, crlf=crlf))
+ else:
+ f = None
+ if f:
+ self.open_files.append(f)
return f
# TODO: remove this when we switch to pyclarity_lims
diff --git a/scripts/convert_and_dispatch_genotypes.py b/scripts/convert_and_dispatch_genotypes.py
index 4cb8a39..18720e0 100644
--- a/scripts/convert_and_dispatch_genotypes.py
+++ b/scripts/convert_and_dispatch_genotypes.py
@@ -1,10 +1,13 @@
#!/usr/bin/env python
import csv
+from collections import defaultdict
from os import remove
from os.path import join, dirname, abspath
-from collections import defaultdict
+
+import sys
+from egcg_core.app_logging import AppLogger
from egcg_core.config import Configuration
-from egcg_core.app_logging import AppLogger, logging_default as log_cfg
+
import EPPs
from EPPs.common import StepEPP, step_argparser
@@ -13,7 +16,6 @@ snp_cfg = Configuration(join(etc_path, 'SNPs_definition.yml'))
default_fai = join(etc_path, 'genotype_32_SNPs_genome_600bp.fa.fai')
default_flank_length = 600
-logger = log_cfg.get_logger(__name__)
SNPs_definitions = snp_cfg['GRCh37_32_SNPs']
# Accepted valid headers in the SNP CSV file
@@ -24,29 +26,22 @@ HEADERS_ASSAY_ID = ['SNPName', 'Assay Name']
vcf_header = ['#CHROM', 'POS', 'ID', 'REF', 'ALT', 'QUAL', 'FILTER', 'INFO', 'FORMAT']
start_vcf_header = ["##fileformat=VCFv4.1", '##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">']
+genotype_udf_file_id = 'Genotyping results file id'
+output_genotype_udf_number_call = 'Number of Calls (This Run)'
+submitted_genotype_udf_number_call = 'Number of Calls (Best Run)'
+submitted_nb_genotype_tries = 'QuantStudio Data Import Completed #'
+
class GenotypeConversion(AppLogger):
- def __init__(self, input_genotypes_contents, accufill_content, mode, fai=default_fai, flank_length=default_flank_length):
+ def __init__(self, input_genotypes_contents, fai=default_fai, flank_length=default_flank_length):
self.all_records = defaultdict(dict)
self.sample_names = set()
self.input_genotypes_contents = input_genotypes_contents
self.fai = fai
self.flank_length = flank_length
- self.accufill_content = accufill_content
self._valid_array_barcodes = None
- self.info("Parsing genotypes in '%s' mode", mode)
- if mode == 'igmm':
- self.parse_genotype_csv()
- elif mode == 'quantStudio':
- if self.accufill_content:
- self.parse_quantstudio_flex_genotype()
- else:
- msg = 'Missing Accufill log file to confirm Array ids please provide with --accufill_log'
- self.critical(msg)
- raise ValueError(msg)
- else:
- raise ValueError('Unexpected genotype format: %s' % mode)
+ self.parse_quantstudio_flex_genotype()
self.info('Parsed %s samples', len(self.sample_names))
reference_lengths = self._parse_genome_fai()
@@ -101,23 +96,6 @@ class GenotypeConversion(AppLogger):
return f
raise ValueError('Could not find any valid fields in ' + str(observed_fieldnames))
- def parse_genotype_csv(self):
- for input_genotypes_content in self.input_genotypes_contents:
- reader = csv.DictReader(input_genotypes_content, delimiter='\t')
- fields = set(reader.fieldnames)
-
- header_sample_id = self._find_field(HEADERS_SAMPLE_ID, fields)
- header_assay_id = self._find_field(HEADERS_ASSAY_ID, fields)
- header_call = self._find_field(HEADERS_CALL, fields)
-
- for line in reader:
- sample = line[header_sample_id]
- if not sample or sample.lower() == 'blank':
- # Entries with blank as sample name are entries with water and no DNA
- continue
- assay_id = line[header_assay_id]
- self.add_genotype(sample, assay_id, line.get(header_call))
-
def parse_quantstudio_flex_genotype(self):
for input_genotypes_content in self.input_genotypes_contents:
result_lines = []
@@ -139,17 +117,6 @@ class GenotypeConversion(AppLogger):
header_assay_id = self._find_field(['Assay ID'], sp_header)
header_call = self._find_field(['Call'], sp_header)
- # Check the barcode is valid according to the accufill log file
- if not parameters['Barcode'] in self.valid_array_barcodes:
- msg = 'Array barcode %s is not in the list of valid barcodes (%s)' % (
- parameters['Barcode'],
- ', '.join(self.valid_array_barcodes)
- )
- self.critical(msg)
- raise ValueError(msg)
- else:
- logger.info('Validate array barcode %s', parameters['Barcode'])
-
for line in result_lines[1:]:
sp_line = line.split('\t')
sample = sp_line[sp_header.index(header_sample_id)]
@@ -158,13 +125,29 @@ class GenotypeConversion(AppLogger):
continue
assay_id = sp_line[sp_header.index(header_assay_id)]
snp_def = SNPs_definitions.get(assay_id)
+ if not snp_def:
+ # Remove control wells
+ continue
+ assay_id = sp_line[sp_header.index(header_assay_id)]
+ snp_def = SNPs_definitions.get(assay_id)
call = sp_line[sp_header.index(header_call)]
if not call == 'Undetermined':
- call_type, c = call.split()
- e1, e2 = c.split('/')
- a1 = snp_def.get(e1.split('_')[-1])
- a2 = snp_def.get(e2.split('_')[-1])
+ sp_call = call.split()
+
+ e1, e2 = ' '.join(sp_call[1:]).split('/')
+ if e1 == 'Allele 1':
+ a1 = snp_def.get('V')
+ elif e1 == 'Allele 2':
+ a1 = snp_def.get('M')
+ else:
+ a1 = snp_def.get(e1.split('_')[-1])
+ if e2 == 'Allele 1':
+ a2 = snp_def.get('V')
+ elif e2 == 'Allele 2':
+ a2 = snp_def.get('M')
+ else:
+ a2 = snp_def.get(e2.split('_')[-1])
call = a1 + a2
self.add_genotype(sample, assay_id, call, parameters['Barcode'])
@@ -180,7 +163,8 @@ class GenotypeConversion(AppLogger):
snp_def['ref_base'], snp_def['alt_base'], ".", ".", ".", "GT"]
self.all_records[snp_def['snp_id']]['SNP'] = snp
if sample in self.all_records[snp_def['snp_id']]:
- msg = 'Sample {} found more than once for SNPs {} while parsing {}'.format(sample, snp_def['snp_id'], array_barcode)
+ msg = 'Sample {} found more than once for SNPs {} while parsing {}'.format(sample, snp_def['snp_id'],
+ array_barcode)
self.critical(msg)
raise Exception(msg)
self.all_records[snp_def['snp_id']][sample] = genotype
@@ -216,102 +200,119 @@ class GenotypeConversion(AppLogger):
open_file.write('\n'.join(lines))
return vcf_file
- def _parse_accufill_load_csv(self):
- all_arrays = set()
- reader = csv.DictReader(self.accufill_content, delimiter='\t')
- header_holder = 'Plate Holder Position'
- header_plate_barcode = 'Sample Plate Barcode'
- header_array = 'OpenArray Plate Barcode'
- for line in reader:
- all_arrays.add((line[header_array], line[header_holder], line[header_plate_barcode]))
- return all_arrays
-
- @property
- def valid_array_barcodes(self):
- if not self._valid_array_barcodes:
- array_info = self._parse_accufill_load_csv()
- self._valid_array_barcodes = list(set([a for a, h, p in array_info]))
- return self._valid_array_barcodes
+ def nb_calls(self, sample):
+ return sum([1 for snps_id in self.snps_order if self.all_records[snps_id].get(sample) != './.'])
class UploadVcfToSamples(StepEPP):
- def __init__(self, step_uri, username, password, log_file, mode, input_genotypes_files,
- accufill_log=None, no_upload=False):
+ def __init__(self, step_uri, username, password, log_file, input_genotypes_files):
super().__init__(step_uri, username, password, log_file)
- self.no_upload = no_upload
input_genotypes_contents = []
for s in input_genotypes_files:
- input_genotypes_contents.append(self.open_or_download_file(s))
- if accufill_log:
- accufill_log_content = self.open_or_download_file(accufill_log)
+ f = self.open_or_download_file(s)
+ if f:
+ input_genotypes_contents.append(f)
+ self.geno_conv = GenotypeConversion(input_genotypes_contents, default_fai, default_flank_length)
+
+ def _find_output_art(self, input_art):
+ return [o.get('uri') for i, o in self.process.input_output_maps if
+ i.get('limsid') == input_art.id and o.get('output-generation-type') == 'PerInput']
+
+ def _upload_genotyping_for_one_sample(self, artifact):
+ lims_sample = artifact.samples[0]
+ vcf_file = self.geno_conv.generate_vcf(lims_sample.name)
+ nb_call = self.geno_conv.nb_calls(lims_sample.name)
+ output_arts = self._find_output_art(artifact)
+ # there should only be one
+ assert len(output_arts) == 1
+ # upload the number of calls to output
+ output_arts[0].udf[output_genotype_udf_number_call] = nb_call
+ output_arts[0].put()
+
+ # and the vcf file
+ lims_file = self.lims.upload_new_file(lims_sample, vcf_file)
+ # increment the nb of tries
+ if submitted_nb_genotype_tries not in lims_sample.udf:
+ lims_sample.udf[submitted_nb_genotype_tries] = 1
else:
- accufill_log_content = None
-
- self.geno_conv = GenotypeConversion(input_genotypes_contents, accufill_log_content, mode, default_fai,
- default_flank_length)
+ lims_sample.udf[submitted_nb_genotype_tries] += 1
+
+ if submitted_genotype_udf_number_call not in lims_sample.udf:
+ # This is the first genotyping result
+ lims_sample.udf[submitted_genotype_udf_number_call] = nb_call
+ lims_sample.udf[genotype_udf_file_id] = lims_file.id
+ elif lims_sample.udf.get(submitted_genotype_udf_number_call) and \
+ nb_call > lims_sample.udf.get(submitted_genotype_udf_number_call):
+ # This genotyping is better than before
+ lims_sample.udf[submitted_genotype_udf_number_call] = nb_call
+ lims_sample.udf[genotype_udf_file_id] = lims_file.id
+ else:
+ self.info(
+ 'Sample %s new genotype has %s call(s), previous genotype has %s call(s)',
+ lims_sample.name,
+ nb_call,
+ lims_sample.udf[submitted_genotype_udf_number_call]
+ )
+ # finally upload the submitted samples
+ lims_sample.put()
+ remove(vcf_file)
def _run(self):
invalid_lims_samples = []
valid_samples = []
genotyping_sample_used = []
- artifacts = self.process.all_inputs()
- self.info('Matching against %s artifacts', len(artifacts))
- for artifact in artifacts:
- vcf_file = None
+
+ # First check that all samples are present and matching
+ self.info('Matching %s samples from file against %s artifacts',
+ len(self.geno_conv.sample_names), len(self.artifacts))
+ for artifact in self.artifacts:
# Assume only one sample per artifact
lims_sample = artifact.samples[0]
- if lims_sample.name in self.geno_conv.sample_names:
- self.info('Matching %s' % lims_sample.name)
- vcf_file = self.geno_conv.generate_vcf(lims_sample.name)
- genotyping_sample_used.append(lims_sample.name)
- elif lims_sample.udf.get('User Sample Name') in self.geno_conv.sample_names:
- self.info('Matching %s against user sample name %s', lims_sample.name, lims_sample.udf.get('User Sample Name'))
- vcf_file = self.geno_conv.generate_vcf(lims_sample.udf.get('User Sample Name'), new_name=artifact.name)
- genotyping_sample_used.append(lims_sample.udf.get('User Sample Name'))
- else:
+ if lims_sample.name not in self.geno_conv.sample_names:
self.info('No match found for %s', lims_sample.name)
invalid_lims_samples.append(lims_sample)
- if vcf_file:
+ else:
+ self.info('Matching %s' % lims_sample.name)
+ genotyping_sample_used.append(lims_sample.name)
valid_samples.append(lims_sample)
- if not self.no_upload:
- file = self.lims.upload_new_file(lims_sample, vcf_file)
- if file:
- lims_sample.udf['Genotyping results file id'] = file.id
- lims_sample.put()
- remove(vcf_file)
unused_samples = set(self.geno_conv.sample_names).difference(set(genotyping_sample_used))
- self.info('Matched and uploaded %s artifacts against %s genotype results', len(set(valid_samples)), len(set(genotyping_sample_used)))
+ self.info('Matched and uploaded %s artifacts against %s genotype results', len(set(valid_samples)),
+ len(set(genotyping_sample_used)))
self.info('%s artifacts did not match', len(set(invalid_lims_samples)))
self.info('%s genotyping results were not used', len(unused_samples))
- # Message to print to stdout
+ # Message to print to stdout if there are missing samples
messages = []
if invalid_lims_samples:
messages.append('%s Samples are missing genotype' % len(invalid_lims_samples))
if len(self.geno_conv.sample_names) - len(valid_samples) > 0:
+ messages.append(
+ '%s genotypes have not been assigned' % (len(self.geno_conv.sample_names) - len(valid_samples)))
+
+ if messages:
# TODO send a message to the EPP
- messages.append('%s genotypes have not been assigned' % (len(self.geno_conv.sample_names) - len(valid_samples)))
- print(', '.join(messages))
+ print(', '.join(messages))
+ sys.exit(1)
+
+ # All samples are present and matching: upload all samples
+ for artifact in self.artifacts:
+ self._upload_genotyping_for_one_sample(artifact)
+
def main():
args = _parse_args()
- action = UploadVcfToSamples(args.step_uri, args.username, args.password, args.log_file, args.format,
- args.input_genotypes, args.accufill_log, args.no_upload)
+ action = UploadVcfToSamples(args.step_uri, args.username, args.password, args.log_file,
+ args.input_genotypes)
action.run()
def _parse_args():
p = step_argparser()
- p.add_argument('--format', dest='format', type=str, choices=['igmm', 'quantStudio'],
- help='The format of the genotype file')
p.add_argument('--input_genotypes', dest='input_genotypes', type=str, nargs='+',
help='The files or artifact id that contains the genotype for all the samples')
- p.add_argument('--accufill_log', dest='accufill_log', type=str, required=False,
- help='The file that contains the location and name of each of the array')
- p.add_argument('--no_upload', dest='no_upload', action='store_true', help='Prevent any upload to the LIMS')
return p.parse_args()
diff --git a/scripts/populate_review_step.py b/scripts/populate_review_step.py
index 9cccfd0..3ba1948 100644
--- a/scripts/populate_review_step.py
+++ b/scripts/populate_review_step.py
@@ -1,6 +1,5 @@
#!/usr/bin/env python
import datetime
-from egcg_core import util
from cached_property import cached_property
from EPPs.common import StepEPP, RestCommunicationEPP, step_argparser
from EPPs.config import load_config
@@ -19,8 +18,8 @@ class StepPopulator(StepEPP, RestCommunicationEPP):
if io[0]['uri'].samples[0].name == sample_name and io[1]['output-type'] == 'ResultFile'
]
- def check_rest_data_and_artifacts(self, sample_name):
- query_args = {'where': {'sample_id': sample_name}}
+ def check_rest_data_and_artifacts(self, sample_name, selector):
+ query_args = {selector: {'sample_id': sample_name}}
rest_entities = self.get_documents(self.endpoint, **query_args)
artifacts = self.output_artifacts_per_sample(sample_name=sample_name)
if len(rest_entities) != len(artifacts): # in sample review this will be 1, in run review this will be more
@@ -31,18 +30,6 @@ class StepPopulator(StepEPP, RestCommunicationEPP):
)
return rest_entities, artifacts
- def delivered(self, sample_name):
- d = {'yes': True, 'no': False}
- query_args = {'where': {'sample_id': sample_name}}
- sample = self.get_documents('samples', **query_args)[0]
- return d.get(sample.get('delivered'))
-
- def processed(self, sample_name):
- query_args = {'where': {'sample_id': sample_name}}
- sample = self.get_documents('samples', **query_args)[0]
- processing_status = util.query_dict(sample, 'aggregated.most_recent_proc.status')
- return processing_status == 'finished'
-
def _run(self):
raise NotImplementedError
@@ -64,7 +51,7 @@ class PullInfo(StepPopulator):
self.lims.put_batch(artifacts_to_upload)
def add_artifact_info(self, sample):
- rest_entities, artifacts = self.check_rest_data_and_artifacts(sample.name)
+ rest_entities, artifacts = self.check_rest_data_and_artifacts(sample.name, 'match')
artifacts_to_upload = set()
for i in range(len(rest_entities)):
for art_field, api_field in self.metrics_mapping:
@@ -96,16 +83,15 @@ class PullInfo(StepPopulator):
class PullRunElementInfo(PullInfo):
- endpoint = 'run_elements'
+ endpoint = 'aggregate/run_elements'
metrics_mapping = [
('RE Id', 'run_element_id'),
('RE Nb Reads', 'passing_filter_reads'),
- ('RE Yield', 'aggregated.clean_yield_in_gb'),
- ('RE Yield Q30', 'aggregated.clean_yield_q30_in_gb'),
- ('RE %Q30', 'aggregated.clean_pc_q30'),
- ('RE Coverage', 'coverage.mean'),
+ ('RE Yield', 'clean_yield_in_gb'),
+ ('RE Yield Q30', 'clean_yield_q30_in_gb'),
+ ('RE %Q30', 'clean_pc_q30'),
('RE Estimated Duplicate Rate', 'lane_pc_optical_dups'),
- ('RE %Adapter', 'aggregated.pc_adaptor'),
+ ('RE %Adapter', 'pc_adapter'),
('RE Review status', 'reviewed'),
('RE Review Comment', 'review_comments'),
('RE Review date', 'review_date'),
@@ -116,6 +102,7 @@ class PullRunElementInfo(PullInfo):
def assess_sample(self, sample):
artifacts_to_upload = set()
+
artifacts = self.output_artifacts_per_sample(sample_name=sample.name)
un_reviewed_artifacts = [a for a in artifacts if a.udf.get('RE Review status') not in ['pass', 'fail']]
if un_reviewed_artifacts:
@@ -124,69 +111,36 @@ class PullRunElementInfo(PullInfo):
# Artifacts that pass the review
pass_artifacts = [a for a in artifacts if a.udf.get('RE Review status') == 'pass']
+
# Artifacts that fail the review
fail_artifacts = [a for a in artifacts if a.udf.get('RE Review status') == 'fail']
- # Artifacts that are new
- new_artifacts = [a for a in artifacts if a.udf.get('RE previous Useable') not in ['yes', 'no']]
-
- # skip samples which have been delivered, mark any new REs as such, not changing older RE comments
- if self.delivered(sample.name):
- for a in new_artifacts:
- a.udf['RE Useable Comment'] = 'AR: Delivered'
- a.udf['RE Useable'] = 'no'
-
- for a in pass_artifacts + fail_artifacts:
- if a.udf.get('RE previous Useable Comment') and a.udf.get('RE previous Useable'):
- a.udf['RE Useable Comment'] = a.udf.get('RE previous Useable Comment')
- a.udf['RE Useable'] = a.udf.get('RE previous Useable')
- artifacts_to_upload.update(artifacts)
- return artifacts_to_upload
+ target_yield = float(sample.udf.get('Yield for Quoted Coverage (Gb)'))
+ good_re_yield = sum([float(a.udf.get('RE Yield Q30')) for a in pass_artifacts])
- # skip samples which have been processed, mark any new REs as such, not changing older RE comments
- if self.processed(sample.name):
- for a in pass_artifacts + fail_artifacts:
- if a.udf.get('RE previous Useable Comment') and a.udf.get('RE previous Useable'):
- a.udf['RE Useable Comment'] = a.udf.get('RE previous Useable Comment')
- a.udf['RE Useable'] = a.udf.get('RE previous Useable')
-
- for a in new_artifacts:
- a.udf['RE Useable Comment'] = 'AR: Sample already processed'
+ # Just the right amount of good yield: take it all
+ if target_yield < good_re_yield < target_yield * 2:
+ for a in pass_artifacts:
+ a.udf['RE Useable'] = 'yes'
+ a.udf['RE Useable Comment'] = 'AR: Good yield'
+ for a in fail_artifacts:
a.udf['RE Useable'] = 'no'
-
+ a.udf['RE Useable Comment'] = 'AR: Failed and not needed'
artifacts_to_upload.update(artifacts)
- return artifacts_to_upload
-
- target_yield = float(sample.udf.get('Required Yield (Gb)'))
- good_re_yield = sum([float(a.udf.get('RE Yield')) for a in pass_artifacts])
-
- # Increase target coverage by 5% to resolve borderline cases
- target_coverage = 1.05 * sample.udf.get('Coverage (X)')
- obtained_coverage = float(sum([a.udf.get('RE Coverage') for a in pass_artifacts]))
# Too much good yield: limit to the best quality ones
- if good_re_yield > target_yield * 2 and obtained_coverage > target_coverage:
+ elif good_re_yield > target_yield * 2:
# Too much yield: sort the good artifacts by quality
pass_artifacts.sort(key=lambda x: x.udf.get('RE %Q30'), reverse=True)
current_yield = 0
for a in pass_artifacts:
- current_yield += float(a.udf.get('RE Yield'))
+ current_yield += float(a.udf.get('RE Yield Q30'))
if current_yield < target_yield * 2:
a.udf['RE Useable'] = 'yes'
a.udf['RE Useable Comment'] = 'AR: Good yield'
else:
a.udf['RE Useable'] = 'no'
- a.udf['RE Useable Comment'] = 'AR: Too much good yield'
- for a in fail_artifacts:
- a.udf['RE Useable'] = 'no'
- a.udf['RE Useable Comment'] = 'AR: Failed and not needed'
- artifacts_to_upload.update(artifacts)
-
- # Just the right amount of good yield: take it all
- elif target_yield < good_re_yield < target_yield * 2 or obtained_coverage > target_coverage:
- for a in pass_artifacts:
- a.udf['RE Useable'] = 'yes'
- a.udf['RE Useable Comment'] = 'AR: Good yield'
+ a.udf['RE Useable Comment'] = 'AR: Too much good yield'
for a in fail_artifacts:
a.udf['RE Useable'] = 'no'
a.udf['RE Useable Comment'] = 'AR: Failed and not needed'
@@ -199,16 +153,16 @@ class PullRunElementInfo(PullInfo):
class PullSampleInfo(PullInfo):
- endpoint = 'samples'
+ endpoint = 'aggregate/samples'
metrics_mapping = [
- ('SR Yield (Gb)', 'aggregated.clean_yield_in_gb'),
- ('SR %Q30', 'aggregated.clean_pc_q30'),
- ('SR % Mapped', 'aggregated.pc_mapped_reads'),
- ('SR % Duplicates', 'aggregated.pc_duplicate_reads'),
- ('SR Mean Coverage', 'aggregated.mean_coverage'),
- ('SR Species Found', 'matching_species'),
- ('SR Sex Check Match', 'aggregated.gender_match'),
- ('SR Genotyping Match', 'aggregated.genotype_match'),
+ ('SR Yield (Gb)', 'clean_yield_in_gb'),
+ ('SR %Q30', 'clean_pc_q30'),
+ ('SR % Mapped', 'pc_mapped_reads'),
+ ('SR % Duplicates', 'pc_duplicate_reads'),
+ ('SR Mean Coverage', 'coverage.mean'),
+ ('SR Species Found', 'species_contamination'),
+ ('SR Sex Check Match', 'gender_match'),
+ ('SR Genotyping Match', 'genotype_match'),
('SR Freemix', 'sample_contamination.freemix'),
('SR Review Status', 'reviewed'),
('SR Review Comments', 'review_comments'),
@@ -238,9 +192,9 @@ class PullSampleInfo(PullInfo):
def field_from_entity(self, entity, api_field):
# TODO: remove once Rest API has a sensible field for species found
- if api_field == 'matching_species':
- species = entity[api_field]
- return ', '.join(species)
+ if api_field == 'species_contamination':
+ species = entity[api_field]['contaminant_unique_mapped']
+ return ', '.join(k for k in sorted(species) if species[k] > 500)
return super().field_from_entity(entity, api_field)
@@ -260,7 +214,7 @@ class PushInfo(StepPopulator):
_ = self.output_artifacts
for sample in self.samples:
self.info('Pushing data for sample %s', sample.name)
- rest_entities, artifacts = self.check_rest_data_and_artifacts(sample.name)
+ rest_entities, artifacts = self.check_rest_data_and_artifacts(sample.name, 'where')
rest_api_data = {}
for e in rest_entities:
rest_api_data[e[self.api_id_field]] = e
diff --git a/setup.py b/setup.py
index 404668a..ac46760 100644
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,8 @@ version = '0.7.dev0'
setup(
name='clarity_scripts',
version=version,
- packages=find_packages(exclude=('tests',)),
+ packages=['EPPs'],
+ package_data={'EPPs': ['etc/*']},
url='https://github.com/EdinburghGenomics/clarity_scripts',
license='MIT',
description='Clarity EPPs used in Edinburgh Genomics',
| Add Check for poor quality genotype in convert_and_dispatch_genotypes.py
We need to check the quality of the genotype and make sure we have at least 20 good SNPs.
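A minimal sketch of what such a check could look like (illustrative only: the helper names and the `MIN_GOOD_SNPS` constant are assumptions, not the repository's code; it mirrors how `nb_calls` counts genotypes that are not the no-call `./.`):

```python
MIN_GOOD_SNPS = 20  # threshold from this issue; the UDF name is still to be decided

def nb_good_calls(all_records, snps_order, sample):
    # Count SNPs whose genotype for this sample is a real call, not the no-call './.'
    return sum(1 for snp_id in snps_order
               if all_records[snp_id].get(sample) != './.')

def genotype_passes_qc(all_records, snps_order, sample):
    # True when the sample has at least MIN_GOOD_SNPS usable calls
    return nb_good_calls(all_records, snps_order, sample) >= MIN_GOOD_SNPS
```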
If not, we need to set a UDF (to be named) so that the next step script can check its value. | EdinburghGenomics/clarity_scripts | diff --git a/tests/assets/VGA55_QuantStudio 12K Flex_export.txt b/tests/assets/VGA55_QuantStudio 12K Flex_export.txt
deleted file mode 100755
index 7bbc637..0000000
--- a/tests/assets/VGA55_QuantStudio 12K Flex_export.txt
+++ /dev/null
@@ -1,86 +0,0 @@
-* Barcode = VGA55
-* Block Type = OpenArray Block
-* Chemistry = TAQMAN
-* Comment = NA
-* Date Created = 09-08-2016 15:39:38 PM BST
-* Date Modified = 09-08-2016 19:17:53 PM BST
-* Experiment File Name = C:\Applied Biosystems\QuantStudio 12K Flex Software\User Files\experiments\VGA55_2016_09_08_153938.eds
-* Experiment Name = VGA55
-* Experiment Run Start Time = 09-08-2016 15:40:56 PM BST
-* Experiment Run Stop Time = 09-08-2016 19:15:03 PM BST
-* Experiment Type = SNP Genotyping
-* Instrument Name = 285880963
-* Instrument Serial Number = 285880963
-* Instrument Type = QuantStudio 12K Flex
-* Passive Reference =
-* Quantification Cycle Method = Crt
-* Signal Smoothing On = true
-* Stage/ Cycle where Analysis is performed = Stage 3, Step 3
-* User Name = NA
-
-[Results]
-Well Well Position Omit Sample Name Assay ID Allele1 Name Allele2 Name Allele1 Dyes Allele2 Dyes NCBI SNP Reference Context Sequence Quality Value SNP Assay Name Task Allele1 R Allele2 R ROX Signal Quality(%) Call Method Call Cycle Allele1 Amp Score Allele2 Amp Score Allele1 Cq Conf Allele2 Cq Conf Allele 1 Crt Allele 2 Crt ALLELE2CRTNOISE ALLELE1CRTNOISE ALLELE1CRTAMPLITUDE ALLELE2CRTAMPLITUDE
-1 A1a1 false V0001P001A01 C__11821218_10 C__11821218_10_V C__11821218_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4855056 "TAATTAGGATTATGTGTACTTGCCT[A/G]TGAGTTCTCAAATAGCTAATGATAC" 95.000 C__11821218_10 UNKNOWN 1,020.718 836.246 740.735 98.752 Heterozygous C__11821218_10_V/C__11821218_10_M Auto 40 1.049 1.069 0.449 0.598 30.524 31.961 N N N N
-2 A1a2 false V0001P001A01 C___1083232_10 C___1083232_10_V C___1083232_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2032598 "TAATGGCCAGCAATTTAGTATTGCC[T/C]GACTTTTACTAATGCATGTGCTGTT" 95.000 C___1083232_10 UNKNOWN 204.954 390.974 793.618 77.375 Undetermined Auto 40 0.672 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
-3 A1a3 false V0001P001A01 C___1007630_10 C___1007630_10_V C___1007630_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs441460 "CATGGAAACAGAGCCTGGATCATAC[A/G]GGGAACTGCAACCATGATTGGATTA" 95.000 C___1007630_10 UNKNOWN 760.956 2,675.284 871.529 98.898 Homozygous C___1007630_10_M/C___1007630_10_M Auto 40 0.902 1.285 0.364 0.743 21.696 23.000 N N N N
-4 A1a4 false V0001P001A01 C__15935210_10 C__15935210_10_V C__15935210_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2259397 "AGTAGGGACCAAGGAAAGGTCATCT[C/T]GGACATAATCGGGAGCTGTGGCTAA" 95.000 C__15935210_10 UNKNOWN 3,564.644 2,183.033 949.794 98.886 Heterozygous C__15935210_10_V/C__15935210_10_M Auto 40 1.343 1.108 0.651 0.552 23.065 23.453 N N N N
-5 A1a5 false V0001P001A01 C___1563023_10 C___1563023_10_V C___1563023_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2136241 "GGCAAATGTAGCACATGTGAGGGTA[C/T]AAAATTATGATGCTACAATCAGAAA" 95.000 C___1563023_10 UNKNOWN 591.356 2,932.495 896.176 98.879 Homozygous C___1563023_10_M/C___1563023_10_M Auto 40 0.886 1.374 0.289 0.950 23.142 23.552 N N N N
-6 A1a6 false V0001P001A01 C___1902433_10 C___1902433_10_V C___1902433_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs10771010 "GAGTAAGGAAGGGTATACTAAGGAA[C/T]AGAGTCACCGGGAGGAGCAACTTCA" 95.000 C___1902433_10 UNKNOWN -63.816 3,861.686 832.176 98.898 Homozygous C___1902433_10_M/C___1902433_10_M Auto 40 0.740 1.361 0.000 0.786 Undetermined 23.813 N Y Y N
-7 A1a7 false V0001P001A01 C__11710129_10 C__11710129_10_V C__11710129_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs11083515 "ACACCAACAATTTGCTGTTATCCAC[A/G]GTCACTCAGCTAAACCTATGCCCTG" 95.000 C__11710129_10 UNKNOWN 3,359.858 3,136.062 1,063.177 98.898 Heterozygous C__11710129_10_V/C__11710129_10_M Auto 40 1.300 1.219 0.579 0.557 23.806 23.834 N N N N
-8 A1a8 false V0001P001A01 C___1027548_20 C___1027548_20_V C___1027548_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs768983 "CCATACTATAAACGAACTGTGAGTA[T/C]GCTCCACCAATTCCAAACAAACGTT" 95.000 C___1027548_20 UNKNOWN 131.272 394.760 1,054.177 83.251 Undetermined Auto 40 0.553 0.928 0.000 0.000 Undetermined Undetermined Y Y Y Y
-9 A1b1 false V0001P001A01 C___1250735_20 C___1250735_20_V C___1250735_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs4751955 "CTGGTCTTCAGCTTCCCTTTTCAGA[A/G]TGGAATGCACACGGTAAGTTTGTGA" 95.000 C___1250735_20 UNKNOWN 2,741.215 4,148.047 882.176 98.898 Heterozygous C___1250735_20_V/C___1250735_20_M Auto 40 1.277 1.365 0.569 0.728 22.585 22.332 N N N N
-10 A1b2 false V0001P001A01 C__11522992_10 C__11522992_10_V C__11522992_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6598531 "GCTTTGCAAATTCAAGTGAAAGCTC[G/T]GCAGAAACACAAGCGGGGAAGCCCT" 95.000 C__11522992_10 UNKNOWN 1,409.446 4,052.771 855.706 98.898 Homozygous C__11522992_10_M/C__11522992_10_M Auto 40 1.138 1.384 0.481 0.727 25.210 24.742 N N N N
-11 A1b3 false V0001P001A01 C___1122315_10 C___1122315_10_V C___1122315_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs11660213 "TGTCTAGAGCTGTATGCTTTCATGT[A/G]GTAGCCAGTAGCCACATGTGGCTAT" 95.000 C___1122315_10 UNKNOWN 4,926.423 3,438.564 869.529 98.886 Heterozygous C___1122315_10_V/C___1122315_10_M Auto 40 1.427 1.275 0.732 0.544 22.625 22.366 N N N N
-12 A1b4 false V0001P001A01 C___1801627_20 C___1801627_20_V C___1801627_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs10869955 "AAACCAGCAGAACAGTAACATAATC[A/C]TGGGAGTGACATCCCTTCACTCTTG" 95.000 C___1801627_20 UNKNOWN 5,232.941 2,699.524 856.265 98.898 Homozygous C___1801627_20_V/C___1801627_20_V Auto 40 1.516 1.166 0.966 0.421 23.163 23.161 N N N N
-13 A1b5 false V0001P001A01 C__10076371_10 C__10076371_10_V C__10076371_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4783229 "GGGTCTTCCTCAACATAGCACATAA[C/T]CTGGCTTTCTAGATATTTTCCTTAA" 95.000 C__10076371_10 UNKNOWN 5,347.260 4,650.109 835.500 98.886 Heterozygous C__10076371_10_V/C__10076371_10_M Auto 40 1.573 1.469 0.982 0.968 23.159 23.468 N N N N
-14 A1b6 false V0001P001A01 C___1670459_10 C___1670459_10_V C___1670459_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6554653 "CATGAAACCACCCGGCAAATACTTA[C/T]ACATAGACTGATTTAGAGTGGAAAA" 95.000 C___1670459_10 UNKNOWN 4,222.754 3,025.554 841.088 98.898 Heterozygous C___1670459_10_V/C___1670459_10_M Auto 40 1.383 1.237 0.709 0.619 23.513 23.604 N N N N
-15 A1b7 false V0001P001A01 C__16205730_10 C__16205730_10_V C__16205730_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2336695 "TAACTGATGTAGTGAAATGTTGAGA[A/G]AATCAGATATTAATGGTTGGCTGAG" 95.000 C__16205730_10 UNKNOWN 5,268.573 5,328.659 1,076.882 98.898 Heterozygous C__16205730_10_V/C__16205730_10_M Auto 40 1.460 1.475 0.730 0.877 23.615 23.487 N N N N
-16 A1b8 false V0001P001A01 C__27402849_10 C__27402849_10_V C__27402849_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6927758 "TCTGAAATAGTGCTTATTGCATCGA[C/T]CAAAAAGAAGTGAATGCTGGAGTGG" 95.000 C__27402849_10 UNKNOWN 902.983 3,573.698 913.647 98.879 Homozygous C__27402849_10_M/C__27402849_10_M Auto 40 1.116 1.422 0.422 0.975 23.428 23.287 N N N N
-17 A1c1 false V0001P001A01 C___2728408_10 C___2728408_10_V C___2728408_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs3010325 "TTTTGTACCTAGGGTTGTCTGATAC[C/T]AAAGCTCAAGTTCATTCTACTTCGT" 95.000 C___2728408_10 UNKNOWN 510.448 4,620.122 891.618 98.898 Homozygous C___2728408_10_M/C___2728408_10_M Auto 40 0.910 1.453 0.403 0.933 23.921 22.797 N N N N
-18 A1c2 false V0001P001A01 C__26546714_10 C__26546714_10_V C__26546714_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7773994 "ATAATATGAAACACCATGCAATCAT[G/T]AAGAGATTTTAATGTATCTTATCAA" 95.000 C__26546714_10 UNKNOWN 3,285.366 1,858.643 844.912 98.898 Heterozygous C__26546714_10_V/C__26546714_10_M Auto 40 1.275 1.145 0.560 0.600 23.267 22.390 N N N N
-19 A1c3 false V0001P001A01 C__26524789_10 C__26524789_10_V C__26524789_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs3742257 "CCCAGAGCTGTGCAGTGTAGTGCCC[C/T]GGGTCTAGGCAACAGCAGAAAGTGG" 95.000 C__26524789_10 UNKNOWN 2,024.071 2,259.527 829.500 98.898 Heterozygous C__26524789_10_V/C__26524789_10_M Auto 40 1.192 1.179 0.498 0.502 22.740 22.961 N N N N
-20 A1c4 false V0001P001A01 C__31386842_10 C__31386842_10_V C__31386842_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs12318959 "AAACATTTTCGGCATTGCTTATTGA[C/T]GATGGCAGGGAAAGTTGAAGTTTCC" 95.000 C__31386842_10 UNKNOWN 4,477.846 4,023.062 860.500 98.898 Heterozygous C__31386842_10_V/C__31386842_10_M Auto 40 1.403 1.346 0.680 0.693 23.165 23.256 N N N N
-21 A1c5 false V0001P001A01 C__30044763_10 C__30044763_10_V C__30044763_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs10194978 "AAGGATTAGATAAAACAAAATCCGG[A/G]CACTTGACATGCTCAGGGATCAAAT" 95.000 C__30044763_10 UNKNOWN 4,038.190 3,469.543 849.412 98.886 Heterozygous C__30044763_10_V/C__30044763_10_M Auto 40 1.407 1.370 0.725 0.854 23.679 23.806 N N N N
-22 A1c6 false V0001P001A01 C__29619553_10 C__29619553_10_V C__29619553_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs9396715 "TCTCCTAGAAATCTTGGCCAAAGCA[C/T]AACTGGGTAAAAATAAGGAAAATAT" 95.000 C__29619553_10 UNKNOWN 5,732.102 4,183.877 830.559 98.898 Heterozygous C__29619553_10_V/C__29619553_10_M Auto 40 1.528 1.396 0.891 0.765 23.696 23.811 N N N N
-23 A1c7 false V0001P001A01 C___2953330_10 C___2953330_10_V C___2953330_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7796391 "TATTCTGCAAAGCCATAGGGACAAA[A/G]CTACCCAAGACCATGGGAACCCACC" 95.000 C___2953330_10 UNKNOWN 3,042.546 5,472.797 903.765 98.898 Homozygous C___2953330_10_M/C___2953330_10_M Auto 40 1.294 1.636 0.451 0.978 26.655 22.447 N N N N
-24 A1c8 false V0001P001A01 C___8938211_20 C___8938211_20_V C___8938211_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs3913290 "AGAGCTCACAATCAAGCCTGGTGAC[T/C]TGAGACTAGCCCTTGTGCATTCATA" 95.000 C___8938211_20 UNKNOWN 145.950 267.808 923.912 79.091 Undetermined Auto 40 0.664 0.731 0.000 0.000 Undetermined Undetermined Y Y Y Y
-25 A1d1 false V0001P001A01 C___8924366_10 C___8924366_10_V C___8924366_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1377935 "TAATAAAATATGTAAGCGCAAGATA[C/T]TATGCTGGGCACCAAATAGGACACC" 95.000 C___8924366_10 UNKNOWN 3,663.103 2,752.647 848.706 98.898 Heterozygous C___8924366_10_V/C___8924366_10_M Auto 40 1.348 1.248 0.579 0.626 26.277 26.222 N N N N
-26 A1d2 false V0001P001A01 C___8850710_10 C___8850710_10_V C___8850710_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1157213 "GTTCATCACAGGGAACAGTGGTCTT[C/T]CAATATCTTCCTAGTCAGGACCCAT" 95.000 C___8850710_10 UNKNOWN 2,240.700 1,780.808 782.618 98.898 Heterozygous C___8850710_10_V/C___8850710_10_M Auto 40 1.189 1.099 0.496 0.488 23.209 23.937 N N N N
-27 A1d3 false V0001P001A01 C___7457509_10 C___7457509_10_V C___7457509_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1567612 "ATGGTTCCAAATATTTTATGCGTAT[A/G]CTTTATCTCTTACTTGCAGTACATA" 95.000 C___7457509_10 UNKNOWN 4,380.590 3,873.021 805.000 98.892 Heterozygous C___7457509_10_V/C___7457509_10_M Auto 40 1.530 1.415 0.987 0.900 22.986 23.708 N N N N
-28 A1d4 false V0001P001A01 C___7431888_10 C___7431888_10_V C___7431888_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1533486 "TTGTCTCAACCATCCAGAAATCAGT[G/T]TATTGAATTGGTTGATTTGATGTTA" 95.000 C___7431888_10 UNKNOWN 4,947.120 261.837 799.088 98.898 Homozygous C___7431888_10_V/C___7431888_10_V Auto 40 1.532 0.662 0.980 0.000 22.746 Undetermined Y N N Y
-29 A1d5 false V0001P001A01 C___7421900_10 C___7421900_10_V C___7421900_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1415762 "TTATGCAGAATACTAAAAAATCTTC[C/T]GAATCTTCTATCCTGTGTCTATCTT" 95.000 C___7421900_10 UNKNOWN 5,062.511 5,515.088 842.941 98.898 Heterozygous C___7421900_10_V/C___7421900_10_M Auto 40 1.501 1.548 0.874 0.985 24.043 23.648 N N N N
-30 A1d6 false V0001P001A01 C_____43852_10 C_____43852_10_V C_____43852_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs946065 "TGAGTGAGGTGTAGTTCACTTCATC[A/C]GGTAGAATCCCAACAAGCTGAGATG" 95.000 C_____43852_10 UNKNOWN 5,458.129 4,582.492 796.882 98.898 Heterozygous C_____43852_10_V/C_____43852_10_M Auto 40 1.479 1.379 0.908 0.787 23.357 23.708 N N N N
-31 A1d7 false V0001P001A01 C__33211212_10 C__33211212_10_V C__33211212_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7564899 "TATCCTGTAAAGAAGCATCATGCAA[A/G]GGACTTTAAGTCACAGTAATTTAAT" 95.000 C__33211212_10 UNKNOWN 4,268.200 289.616 811.412 98.898 Homozygous C__33211212_10_V/C__33211212_10_V Auto 40 1.435 0.677 0.773 0.000 23.797 Undetermined Y N N Y
-32 A1d8 false V0001P001A01 C___3227711_10 C___3227711_10_V C___3227711_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4971536 "CAAAGAAGGGATCCTCAACTAACTA[C/T]GGGCAAAATAGATGGCCTCTCCCGT" 95.000 C___3227711_10 UNKNOWN 464.252 4,792.832 857.029 98.879 Homozygous C___3227711_10_M/C___3227711_10_M Auto 40 0.996 1.450 0.392 0.982 24.586 23.731 N N N N
-33 A1e1 false V0001P001C01 C__11821218_10 C__11821218_10_V C__11821218_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4855056 "TAATTAGGATTATGTGTACTTGCCT[A/G]TGAGTTCTCAAATAGCTAATGATAC" 95.000 C__11821218_10 UNKNOWN 1,632.064 1,510.517 838.676 98.898 Heterozygous C__11821218_10_V/C__11821218_10_M Auto 40 1.097 1.066 0.446 0.484 23.435 23.569 N N N N
-34 A1e2 false V0001P001C01 C___1083232_10 C___1083232_10_V C___1083232_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2032598 "TAATGGCCAGCAATTTAGTATTGCC[T/C]GACTTTTACTAATGCATGTGCTGTT" 95.000 C___1083232_10 UNKNOWN 286.011 500.054 827.471 77.307 Undetermined Auto 40 0.596 0.700 0.000 0.000 Undetermined Undetermined Y Y Y Y
-35 A1e3 false V0001P001C01 C___1007630_10 C___1007630_10_V C___1007630_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs441460 "CATGGAAACAGAGCCTGGATCATAC[A/G]GGGAACTGCAACCATGATTGGATTA" 95.000 C___1007630_10 UNKNOWN 640.605 2,929.887 815.118 98.898 Homozygous C___1007630_10_M/C___1007630_10_M Auto 40 0.888 1.330 0.443 0.795 22.120 21.851 N N N N
-36 A1e4 false V0001P001C01 C__15935210_10 C__15935210_10_V C__15935210_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2259397 "AGTAGGGACCAAGGAAAGGTCATCT[C/T]GGACATAATCGGGAGCTGTGGCTAA" 95.000 C__15935210_10 UNKNOWN 3,879.001 2,410.567 784.059 98.886 Heterozygous C__15935210_10_V/C__15935210_10_M Auto 40 1.375 1.170 0.708 0.525 21.488 22.031 N N N N
-37 A1e5 false V0001P001C01 C___1563023_10 C___1563023_10_V C___1563023_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2136241 "GGCAAATGTAGCACATGTGAGGGTA[C/T]AAAATTATGATGCTACAATCAGAAA" 95.000 C___1563023_10 UNKNOWN 789.813 3,148.686 825.382 98.879 Homozygous C___1563023_10_M/C___1563023_10_M Auto 40 0.976 1.389 0.344 0.932 21.849 21.347 N N N N
-38 A1e6 false V0001P001C01 C___1902433_10 C___1902433_10_V C___1902433_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs10771010 "GAGTAAGGAAGGGTATACTAAGGAA[C/T]AGAGTCACCGGGAGGAGCAACTTCA" 95.000 C___1902433_10 UNKNOWN -91.633 4,123.588 815.882 98.898 Homozygous C___1902433_10_M/C___1902433_10_M Auto 40 0.441 1.378 0.000 0.758 Undetermined 22.493 N Y Y N
-39 A1e7 false V0001P001C01 C__11710129_10 C__11710129_10_V C__11710129_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs11083515 "ACACCAACAATTTGCTGTTATCCAC[A/G]GTCACTCAGCTAAACCTATGCCCTG" 95.000 C__11710129_10 UNKNOWN 3,396.146 2,647.173 836.471 98.898 Heterozygous C__11710129_10_V/C__11710129_10_M Auto 40 1.332 1.196 0.596 0.541 22.409 22.366 N N N N
-40 A1e8 false V0001P001C01 C___1027548_20 C___1027548_20_V C___1027548_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs768983 "CCATACTATAAACGAACTGTGAGTA[T/C]GCTCCACCAATTCCAAACAAACGTT" 95.000 C___1027548_20 UNKNOWN 107.831 405.235 807.647 88.277 Undetermined Auto 40 0.686 0.884 0.000 0.000 Undetermined Undetermined Y Y Y Y
-41 A1f1 false V0001P001C01 C___1250735_20 C___1250735_20_V C___1250735_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs4751955 "CTGGTCTTCAGCTTCCCTTTTCAGA[A/G]TGGAATGCACACGGTAAGTTTGTGA" 95.000 C___1250735_20 UNKNOWN 2,862.330 4,475.204 865.765 98.898 Heterozygous C___1250735_20_V/C___1250735_20_M Auto 40 1.260 1.395 0.507 0.742 20.795 20.858 N N N N
-42 A1f2 false V0001P001C01 C__11522992_10 C__11522992_10_V C__11522992_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6598531 "GCTTTGCAAATTCAAGTGAAAGCTC[G/T]GCAGAAACACAAGCGGGGAAGCCCT" 95.000 C__11522992_10 UNKNOWN 1,368.138 3,802.691 817.294 98.898 Homozygous C__11522992_10_M/C__11522992_10_M Auto 40 1.097 1.360 0.454 0.696 24.517 24.063 N N N N
-43 A1f3 false V0001P001C01 C___1122315_10 C___1122315_10_V C___1122315_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs11660213 "TGTCTAGAGCTGTATGCTTTCATGT[A/G]GTAGCCAGTAGCCACATGTGGCTAT" 95.000 C___1122315_10 UNKNOWN 4,742.033 3,578.874 817.971 98.886 Heterozygous C___1122315_10_V/C___1122315_10_M Auto 40 1.388 1.280 0.610 0.533 21.246 20.805 N N N N
-44 A1f4 false V0001P001C01 C___1801627_20 C___1801627_20_V C___1801627_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs10869955 "AAACCAGCAGAACAGTAACATAATC[A/C]TGGGAGTGACATCCCTTCACTCTTG" 95.000 C___1801627_20 UNKNOWN 5,104.149 2,923.351 804.647 98.898 Homozygous C___1801627_20_V/C___1801627_20_V Auto 40 1.516 1.203 0.967 0.404 21.250 21.027 N N N N
-45 A1f5 false V0001P001C01 C__10076371_10 C__10076371_10_V C__10076371_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4783229 "GGGTCTTCCTCAACATAGCACATAA[C/T]CTGGCTTTCTAGATATTTTCCTTAA" 95.000 C__10076371_10 UNKNOWN 5,300.292 4,971.917 841.647 98.886 Heterozygous C__10076371_10_V/C__10076371_10_M Auto 40 1.559 1.459 0.975 0.933 21.875 21.864 N N N N
-46 A1f6 false V0001P001C01 C___1670459_10 C___1670459_10_V C___1670459_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6554653 "CATGAAACCACCCGGCAAATACTTA[C/T]ACATAGACTGATTTAGAGTGGAAAA" 95.000 C___1670459_10 UNKNOWN 4,406.018 3,051.817 845.706 98.898 Heterozygous C___1670459_10_V/C___1670459_10_M Auto 40 1.399 1.242 0.730 0.685 21.715 22.448 N N N N
-47 A1f7 false V0001P001C01 C__16205730_10 C__16205730_10_V C__16205730_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs2336695 "TAACTGATGTAGTGAAATGTTGAGA[A/G]AATCAGATATTAATGGTTGGCTGAG" 95.000 C__16205730_10 UNKNOWN 4,891.253 4,564.681 867.059 98.898 Heterozygous C__16205730_10_V/C__16205730_10_M Auto 40 1.444 1.393 0.673 0.718 22.710 22.566 N N N N
-48 A1f8 false V0001P001C01 C__27402849_10 C__27402849_10_V C__27402849_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs6927758 "TCTGAAATAGTGCTTATTGCATCGA[C/T]CAAAAAGAAGTGAATGCTGGAGTGG" 95.000 C__27402849_10 UNKNOWN 1,074.975 3,701.309 853.147 98.879 Homozygous C__27402849_10_M/C__27402849_10_M Auto 40 1.067 1.425 0.417 0.935 22.536 21.330 N N N N
-49 A1g1 false V0001P001C01 C___2728408_10 C___2728408_10_V C___2728408_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs3010325 "TTTTGTACCTAGGGTTGTCTGATAC[C/T]AAAGCTCAAGTTCATTCTACTTCGT" 95.000 C___2728408_10 UNKNOWN 597.050 4,751.075 864.235 98.898 Homozygous C___2728408_10_M/C___2728408_10_M Auto 40 1.018 1.454 0.312 0.953 22.312 21.390 N N N N
-50 A1g2 false V0001P001C01 C__26546714_10 C__26546714_10_V C__26546714_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7773994 "ATAATATGAAACACCATGCAATCAT[G/T]AAGAGATTTTAATGTATCTTATCAA" 95.000 C__26546714_10 UNKNOWN 3,375.979 1,600.646 821.294 98.838 Heterozygous C__26546714_10_V/C__26546714_10_M Auto 40 1.274 1.103 0.536 0.478 22.427 21.644 N N N N
-51 A1g3 false V0001P001C01 C__26524789_10 C__26524789_10_V C__26524789_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs3742257 "CCCAGAGCTGTGCAGTGTAGTGCCC[C/T]GGGTCTAGGCAACAGCAGAAAGTGG" 95.000 C__26524789_10 UNKNOWN 1,909.544 2,368.155 830.735 98.898 Heterozygous C__26524789_10_V/C__26524789_10_M Auto 40 1.155 1.170 0.494 0.478 21.543 21.478 N N N N
-52 A1g4 false V0001P001C01 C__31386842_10 C__31386842_10_V C__31386842_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs12318959 "AAACATTTTCGGCATTGCTTATTGA[C/T]GATGGCAGGGAAAGTTGAAGTTTCC" 95.000 C__31386842_10 UNKNOWN 4,792.921 3,885.181 852.500 98.898 Heterozygous C__31386842_10_V/C__31386842_10_M Auto 40 1.420 1.330 0.682 0.652 21.777 21.967 N N N N
-53 A1g5 false V0001P001C01 C__30044763_10 C__30044763_10_V C__30044763_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs10194978 "AAGGATTAGATAAAACAAAATCCGG[A/G]CACTTGACATGCTCAGGGATCAAAT" 95.000 C__30044763_10 UNKNOWN 4,286.648 3,213.266 834.118 98.886 Heterozygous C__30044763_10_V/C__30044763_10_M Auto 40 1.419 1.351 0.738 0.788 22.109 22.398 N N N N
-54 A1g6 false V0001P001C01 C__29619553_10 C__29619553_10_V C__29619553_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs9396715 "TCTCCTAGAAATCTTGGCCAAAGCA[C/T]AACTGGGTAAAAATAAGGAAAATAT" 95.000 C__29619553_10 UNKNOWN 6,131.003 4,093.376 837.294 98.898 Heterozygous C__29619553_10_V/C__29619553_10_M Auto 40 1.550 1.372 0.921 0.714 22.899 23.595 N N N N
-55 A1g7 false V0001P001C01 C___2953330_10 C___2953330_10_V C___2953330_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7796391 "TATTCTGCAAAGCCATAGGGACAAA[A/G]CTACCCAAGACCATGGGAACCCACC" 95.000 C___2953330_10 UNKNOWN 2,988.263 5,410.118 845.971 98.898 Homozygous C___2953330_10_M/C___2953330_10_M Auto 40 1.287 1.618 0.470 0.977 26.149 21.094 N N N N
-56 A1g8 false V0001P001C01 C___8938211_20 C___8938211_20_V C___8938211_20_M VIC-NFQ-MGB FAM-NFQ-MGB rs3913290 "AGAGCTCACAATCAAGCCTGGTGAC[T/C]TGAGACTAGCCCTTGTGCATTCATA" 95.000 C___8938211_20 UNKNOWN 129.949 -81.473 884.294 100.000 Undetermined Auto 40 0.412 0.676 0.000 0.000 Undetermined Undetermined Y Y Y Y
-57 A1h1 false V0001P001C01 C___8924366_10 C___8924366_10_V C___8924366_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1377935 "TAATAAAATATGTAAGCGCAAGATA[C/T]TATGCTGGGCACCAAATAGGACACC" 95.000 C___8924366_10 UNKNOWN 3,951.246 2,986.782 817.882 98.898 Heterozygous C___8924366_10_V/C___8924366_10_M Auto 40 1.349 1.261 0.607 0.598 24.787 24.715 N N N N
-58 A1h2 false V0001P001C01 C___8850710_10 C___8850710_10_V C___8850710_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1157213 "GTTCATCACAGGGAACAGTGGTCTT[C/T]CAATATCTTCCTAGTCAGGACCCAT" 95.000 C___8850710_10 UNKNOWN 2,308.116 1,821.997 821.941 98.898 Heterozygous C___8850710_10_V/C___8850710_10_M Auto 40 1.179 1.082 0.545 0.407 21.650 21.015 N N N N
-59 A1h3 false V0001P001C01 C___7457509_10 C___7457509_10_V C___7457509_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1567612 "ATGGTTCCAAATATTTTATGCGTAT[A/G]CTTTATCTCTTACTTGCAGTACATA" 95.000 C___7457509_10 UNKNOWN 4,407.470 4,127.951 866.500 98.892 Heterozygous C___7457509_10_V/C___7457509_10_M Auto 40 1.511 1.449 0.981 0.940 21.574 21.827 N N N N
-60 A1h4 false V0001P001C01 C___7431888_10 C___7431888_10_V C___7431888_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1533486 "TTGTCTCAACCATCCAGAAATCAGT[G/T]TATTGAATTGGTTGATTTGATGTTA" 95.000 C___7431888_10 UNKNOWN 5,475.837 327.218 892.941 98.898 Homozygous C___7431888_10_V/C___7431888_10_V Auto 40 1.526 0.597 0.976 0.000 21.072 Undetermined Y N N Y
-61 A1h5 false V0001P001C01 C___7421900_10 C___7421900_10_V C___7421900_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs1415762 "TTATGCAGAATACTAAAAAATCTTC[C/T]GAATCTTCTATCCTGTGTCTATCTT" 95.000 C___7421900_10 UNKNOWN 5,200.576 5,466.343 837.324 98.898 Heterozygous C___7421900_10_V/C___7421900_10_M Auto 40 1.505 1.528 0.879 0.988 22.077 21.937 N N N N
-62 A1h6 false V0001P001C01 C_____43852_10 C_____43852_10_V C_____43852_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs946065 "TGAGTGAGGTGTAGTTCACTTCATC[A/C]GGTAGAATCCCAACAAGCTGAGATG" 95.000 C_____43852_10 UNKNOWN 5,945.144 5,152.079 878.824 98.898 Heterozygous C_____43852_10_V/C_____43852_10_M Auto 40 1.473 1.423 0.823 0.768 21.354 21.632 N N N N
-63 A1h7 false V0001P001C01 C__33211212_10 C__33211212_10_V C__33211212_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs7564899 "TATCCTGTAAAGAAGCATCATGCAA[A/G]GGACTTTAAGTCACAGTAATTTAAT" 95.000 C__33211212_10 UNKNOWN 5,326.211 765.152 1,136.677 98.898 Homozygous C__33211212_10_V/C__33211212_10_V Auto 40 1.473 0.989 0.772 0.734 22.382 20.877 N N N N
-64 A1h8 false V0001P001C01 C___3227711_10 C___3227711_10_V C___3227711_10_M VIC-NFQ-MGB FAM-NFQ-MGB rs4971536 "CAAAGAAGGGATCCTCAACTAACTA[C/T]GGGCAAAATAGATGGCCTCTCCCGT" 95.000 C___3227711_10 UNKNOWN 850.935 6,072.930 1,161.088 98.879 Homozygous C___3227711_10_M/C___3227711_10_M Auto 40 1.109 1.510 0.343 0.982 22.400 22.203 N N N N
diff --git a/tests/assets/YOA15_QuantStudio 12K Flex_export.txt b/tests/assets/YOA15_QuantStudio 12K Flex_export.txt
new file mode 100644
index 0000000..cace5a8
--- /dev/null
+++ b/tests/assets/YOA15_QuantStudio 12K Flex_export.txt
@@ -0,0 +1,86 @@
+* Barcode = YOA15
+* Block Type = OpenArray Block
+* Chemistry = TAQMAN
+* Comment = NA
+* Date Created = 03-29-2018 13:13:29 PM BST
+* Date Modified = 03-29-2018 17:28:03 PM BST
+* Experiment File Name =
+* Experiment Name = YOA15
+* Experiment Run Start Time = 03-29-2018 13:43:38 PM BST
+* Experiment Run Stop Time = 03-29-2018 17:22:28 PM BST
+* Experiment Type = SNP Genotyping
+* Instrument Name = 285880963
+* Instrument Serial Number = 285880963
+* Instrument Type = QuantStudio 12K Flex
+* Passive Reference =
+* Quantification Cycle Method = Crt
+* Signal Smoothing On = true
+* Stage/ Cycle where Analysis is performed = Stage 3, Step 3
+* User Name = NA
+
+[Results]
+Well Well Position Omit Sample Name Assay ID Allele1 Name Allele2 Name Allele1 Dyes Allele2 Dyes NCBI SNP Reference Context Sequence Quality Value SNP Assay Name Task Allele1 R Allele2 R ROX Signal Quality(%) Call Method Call Cycle Allele1 Amp Score Allele2 Amp Score Allele1 Cq Conf Allele2 Cq Conf Allele 1 Crt Allele 2 Crt ALLELE2CRTNOISE ALLELE1CRTNOISE ALLELE1CRTAMPLITUDE ALLELE2CRTAMPLITUDE
+1 A1a1 false V0001P001A01 C___7457509_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7457509_10 UNKNOWN 198.664 40.499 552.000 100.000 Undetermined Auto 40 0.855 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+2 A1a2 false V0001P001A01 C___8850710_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___8850710_10 UNKNOWN 525.829 205.242 529.471 100.000 Undetermined Auto 40 1.106 0.572 0.000 0.000 Undetermined Undetermined Y Y Y Y
+3 A1a3 false V0001P001A01 C___2728408_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___2728408_10 UNKNOWN 321.178 142.015 550.471 100.000 Undetermined Auto 40 0.582 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+4 A1a4 false V0001P001A01 C___1801627_20 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1801627_20 UNKNOWN 48.855 83.225 573.206 100.000 Undetermined Auto 40 0.000 0.217 0.000 0.000 Undetermined Undetermined Y Y Y Y
+5 A1a5 false V0001P001A01 C___1902433_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1902433_10 UNKNOWN 182.043 947.563 605.618 100.000 Undetermined Auto 40 0.665 1.286 0.000 0.892 Undetermined 44.408 N Y Y N
+6 A1a6 false V0001P001A01 C__11710129_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__11710129_10 UNKNOWN 1,292.521 113.839 579.353 98.732 Homozygous Allele 1/Allele 1 Auto 40 1.276 0.000 0.927 0.000 39.021 Undetermined Y N N Y
+7 A1a7 false V0001P001A01 C___1250735_20 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1250735_20 UNKNOWN 425.446 4,102.513 774.529 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.878 1.472 0.525 0.956 31.130 30.640 N N N N
+8 A1a8 false V0001P001A01 ANEPTUW Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANEPTUW UNKNOWN 501.006 4,043.622 778.412 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.932 1.452 0.000 0.983 Undetermined 31.330 N Y N N
+9 A1b1 false V0001P001A01 C___7421900_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7421900_10 UNKNOWN 118.855 370.462 504.794 100.000 Undetermined Auto 40 0.746 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+10 A1b2 false V0001P001A01 C___1007630_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1007630_10 UNKNOWN 248.688 239.377 557.824 100.000 Undetermined Auto 40 0.000 0.825 0.000 0.000 Undetermined Undetermined Y Y Y Y
+11 A1b3 false V0001P001A01 C___1670459_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1670459_10 UNKNOWN 527.680 1,025.728 576.559 100.000 Undetermined Auto 40 0.689 1.255 0.000 0.937 Undetermined 44.089 N Y Y N
+12 A1b4 false V0001P001A01 C_____43852_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C_____43852_10 UNKNOWN 2,861.802 2,801.190 611.353 98.892 Heterozygous Allele 1/Allele 2 Auto 40 1.288 1.316 0.677 0.802 31.538 31.478 N N N N
+13 A1b5 false V0001P001A01 C__30044763_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__30044763_10 UNKNOWN 657.668 2,728.557 618.824 98.898 Homozygous Allele 2/Allele 2 Auto 40 1.005 1.353 0.661 0.880 32.822 31.477 N N N N
+14 A1b6 false V0001P001A01 C__15935210_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__15935210_10 UNKNOWN 2,523.050 847.968 584.353 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.297 0.938 0.758 0.512 32.334 31.281 N N N N
+15 A1b7 false V0001P001A01 ANGZFYR Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANGZFYR UNKNOWN 470.446 4,425.145 599.059 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.891 1.470 0.406 0.909 27.515 25.823 N N N N
+16 A1b8 false V0001P001A01 ANKA34K Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANKA34K UNKNOWN 448.738 4,349.168 566.588 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.905 1.459 0.453 0.903 29.364 27.026 N N N N
+17 A1c1 false V0001P001A01 C__29619553_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__29619553_10 UNKNOWN 208.281 254.803 594.059 100.000 Undetermined Manual 40 0.622 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+18 A1c2 false V0001P001A01 C___1122315_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1122315_10 UNKNOWN 222.025 303.971 618.794 100.000 Undetermined Auto 40 0.673 0.790 0.000 0.000 Undetermined Undetermined Y Y Y Y
+19 A1c3 false V0001P001A01 C__16205730_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__16205730_10 UNKNOWN 233.670 390.890 569.235 100.000 Undetermined Manual 40 0.391 0.448 0.000 0.000 Undetermined Undetermined Y Y Y Y
+20 A1c4 false V0001P001A01 C___7431888_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7431888_10 UNKNOWN 819.978 253.656 592.941 100.000 Undetermined Auto 40 1.235 0.927 0.000 0.000 Undetermined Undetermined Y Y N Y
+21 A1c5 false V0001P001A01 C__26524789_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__26524789_10 UNKNOWN 188.906 1,205.846 605.676 98.803 Homozygous Allele 2/Allele 2 Auto 40 0.000 1.171 0.000 0.813 Undetermined 35.466 N Y Y N
+22 A1c6 false V0001P001A01 C__31386842_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__31386842_10 UNKNOWN 1,121.544 530.678 595.088 100.000 Undetermined Auto 40 1.363 1.080 0.882 0.000 46.171 Undetermined Y N N Y
+23 A1c7 false V0001P001A01 C__10076371_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__10076371_10 UNKNOWN -65.116 1,576.137 609.206 100.000 Undetermined Auto 40 0.859 1.453 0.000 0.935 Undetermined 46.050 N Y Y N
+24 A1c8 false V0001P001A01 ANFVMEU Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANFVMEU UNKNOWN -50.267 1,154.415 600.882 100.000 Undetermined Auto 40 0.629 1.358 0.000 0.942 Undetermined 46.708 N Y Y N
+25 A1d1 false V0001P001A01 C___2953330_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___2953330_10 UNKNOWN 199.458 85.860 590.324 100.000 Undetermined Auto 40 0.000 0.412 0.000 0.000 Undetermined Undetermined Y Y Y Y
+26 A1d2 false V0001P001A01 C__26546714_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__26546714_10 UNKNOWN 46.878 169.964 571.353 100.000 Undetermined Manual 40 0.661 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+27 A1d3 false V0001P001A01 C__11522992_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__11522992_10 UNKNOWN 67.143 78.577 579.647 100.000 Undetermined Manual 40 0.515 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+28 A1d4 false V0001P001A01 C___8924366_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___8924366_10 UNKNOWN 196.249 60.881 606.206 100.000 Undetermined Auto 40 0.530 0.727 0.000 0.000 Undetermined Undetermined Y Y Y Y
+29 A1d5 false V0001P001A01 C___1563023_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1563023_10 UNKNOWN 231.759 -68.010 600.794 100.000 Undetermined Auto 40 0.989 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+30 A1d6 false V0001P001A01 C__33211212_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__33211212_10 UNKNOWN 158.948 119.649 596.706 100.000 Undetermined Auto 40 0.549 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+31 A1d7 false V0001P001A01 ANH6AJN Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANH6AJN UNKNOWN -165.328 1,108.542 612.588 31.502 Undetermined Auto 40 0.819 1.376 0.000 0.923 Undetermined 47.045 N Y Y N
+32 A1d8 false V0001P001A01 ANMFXPH Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANMFXPH UNKNOWN -171.169 619.186 606.941 0.000 Undetermined Auto 40 0.199 1.252 0.000 0.000 Undetermined Undetermined Y Y Y N
+33 A1e1 false V0001P001C01 C___7457509_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7457509_10 UNKNOWN 2,995.182 2,293.730 573.088 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.425 1.366 0.953 0.975 34.299 34.083 N N N N
+34 A1e2 false V0001P001C01 C___8850710_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___8850710_10 UNKNOWN 1,332.932 911.900 592.853 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.089 0.925 0.500 0.348 22.882 21.777 N N N N
+35 A1e3 false V0001P001C01 C___2728408_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___2728408_10 UNKNOWN 2,969.196 1,965.910 573.882 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.365 1.205 0.743 0.631 27.022 27.167 N N N N
+36 A1e4 false V0001P001C01 C___1801627_20 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1801627_20 UNKNOWN 1,751.393 3,100.312 552.294 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.256 1.376 0.715 0.847 31.384 31.341 N N N N
+37 A1e5 false V0001P001C01 C___1902433_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1902433_10 UNKNOWN 47.150 2,582.138 580.500 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.000 1.259 0.000 0.647 Undetermined 23.603 N Y Y N
+38 A1e6 false V0001P001C01 C__11710129_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__11710129_10 UNKNOWN 296.821 1,966.857 579.971 98.892 Homozygous Allele 2/Allele 2 Auto 40 0.000 1.181 0.000 0.488 Undetermined 22.391 N Y Y N
+39 A1e7 false V0001P001C01 C___1250735_20 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1250735_20 UNKNOWN 2,036.477 2,893.441 580.353 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.218 1.317 0.550 0.654 21.581 21.701 N N N N
+40 A1e8 false V0001P001C01 ANEPTUW Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANEPTUW UNKNOWN 2,170.768 3,416.340 571.765 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.244 1.361 0.559 0.705 21.803 22.006 N N N N
+41 A1f1 false V0001P001C01 C___7421900_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7421900_10 UNKNOWN 3,849.263 1,264.666 587.118 98.898 Homozygous Allele 1/Allele 1 Auto 40 1.467 1.058 0.933 0.000 31.643 Undetermined Y N N N
+42 A1f2 false V0001P001C01 C___1007630_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1007630_10 UNKNOWN 1,958.177 838.665 596.382 98.484 Heterozygous Allele 1/Allele 2 Auto 40 1.258 1.052 0.688 0.000 32.947 Undetermined Y N N N
+43 A1f3 false V0001P001C01 C___1670459_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1670459_10 UNKNOWN 2,712.877 1,386.837 629.853 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.317 1.127 0.732 0.697 30.828 31.144 N N N N
+44 A1f4 false V0001P001C01 C_____43852_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C_____43852_10 UNKNOWN 1,267.726 4,125.392 574.029 98.892 Homozygous Allele 2/Allele 2 Auto 40 1.048 1.380 0.323 0.710 23.974 23.500 N N N N
+45 A1f5 false V0001P001C01 C__30044763_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__30044763_10 UNKNOWN 3,287.821 2,279.942 600.324 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.330 1.228 0.631 0.578 24.222 24.180 N N N N
+46 A1f6 false V0001P001C01 C__15935210_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__15935210_10 UNKNOWN 303.189 2,047.390 575.794 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.816 1.158 0.000 0.488 Undetermined 24.170 N Y Y N
+47 A1f7 false V0001P001C01 ANGZFYR Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANGZFYR UNKNOWN 2,246.758 3,304.016 614.529 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.278 1.336 0.597 0.676 21.999 22.069 N N N N
+48 A1f8 false V0001P001C01 ANKA34K Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANKA34K UNKNOWN 2,273.130 3,753.003 608.059 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.268 1.362 0.572 0.648 21.819 21.673 N N N N
+49 A1g1 false V0001P001C01 C__29619553_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__29619553_10 UNKNOWN 549.232 131.843 679.324 100.000 Undetermined Manual 40 1.134 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+50 A1g2 false V0001P001C01 C___1122315_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1122315_10 UNKNOWN 2,456.731 1,545.291 646.824 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.350 1.227 0.931 0.893 38.273 37.085 N N N N
+51 A1g3 false V0001P001C01 C__16205730_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__16205730_10 UNKNOWN 977.069 830.564 603.324 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.322 1.148 0.814 0.000 46.502 Undetermined Y N N N
+52 A1g4 false V0001P001C01 C___7431888_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___7431888_10 UNKNOWN 2,872.738 818.201 636.147 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.397 1.088 0.955 0.869 38.179 37.997 N N N N
+53 A1g5 false V0001P001C01 C__26524789_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__26524789_10 UNKNOWN 142.903 1,742.205 646.382 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.661 1.216 0.000 0.646 Undetermined 29.670 N Y Y N
+54 A1g6 false V0001P001C01 C__31386842_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__31386842_10 UNKNOWN 280.437 2,464.494 633.265 98.898 Homozygous Allele 2/Allele 2 Auto 40 0.883 1.369 0.000 0.966 Undetermined 37.552 N Y Y N
+55 A1g7 false V0001P001C01 C__10076371_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__10076371_10 UNKNOWN 3,572.756 1,960.236 597.853 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.506 1.305 0.970 0.907 37.792 37.796 N N N N
+56 A1g8 false V0001P001C01 ANFVMEU Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANFVMEU UNKNOWN 3,935.214 2,089.143 653.824 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.544 1.318 0.965 0.872 38.227 37.395 N N N N
+57 A1h1 false V0001P001C01 C___2953330_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___2953330_10 UNKNOWN 404.453 2,625.771 792.265 98.892 Homozygous Allele 2/Allele 2 Auto 40 0.000 1.556 0.000 0.952 Undetermined 45.556 N Y Y N
+58 A1h2 false V0001P001C01 C__26546714_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__26546714_10 UNKNOWN 141.880 295.943 772.206 100.000 Undetermined Manual 40 0.559 0.382 0.000 0.000 Undetermined Undetermined Y Y Y Y
+59 A1h3 false V0001P001C01 C__11522992_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__11522992_10 UNKNOWN 264.144 380.722 796.735 100.000 Undetermined Manual 40 0.825 0.000 0.000 0.000 Undetermined Undetermined Y Y Y Y
+60 A1h4 false V0001P001C01 C___8924366_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___8924366_10 UNKNOWN 648.600 423.215 754.059 100.000 Undetermined Manual 40 1.225 1.026 0.817 0.000 46.999 Undetermined Y N N Y
+61 A1h5 false V0001P001C01 C___1563023_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C___1563023_10 UNKNOWN 1,488.499 283.037 736.000 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.360 0.988 0.939 0.000 41.327 Undetermined Y N N Y
+62 A1h6 false V0001P001C01 C__33211212_10 Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 C__33211212_10 UNKNOWN 449.108 461.986 737.088 98.898 Heterozygous Allele 1/Allele 2 Auto 40 1.139 1.124 0.000 0.000 Undetermined Undetermined Y Y Y N
+63 A1h7 false V0001P001C01 ANH6AJN Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANH6AJN UNKNOWN 5,709.859 3,794.156 947.294 100.000 Heterozygous Allele 1/Allele 2 Manual 40 1.562 1.405 0.971 0.899 35.322 35.225 N N N N
+64 A1h8 false V0001P001C01 ANMFXPH Allele 1 Allele 2 VIC-NFQ-MGB FAM-NFQ-MGB 95.000 ANMFXPH UNKNOWN 6,027.021 3,848.086 995.176 98.798 Heterozygous Allele 1/Allele 2 Auto 40 1.595 1.399 0.970 0.868 35.221 35.093 N N N N
diff --git a/tests/test_common.py b/tests/test_common.py
index 43d6423..d50500e 100644
--- a/tests/test_common.py
+++ b/tests/test_common.py
@@ -41,7 +41,7 @@ class TestCommon(TestCase):
assets = join(dirname(abspath(__file__)), 'assets')
etc_path = join(abspath(dirname(EPPs.__file__)), 'etc')
genotype_csv = join(assets, 'E03159_WGS_32_panel_9504430.csv')
- genotype_quantStudio = join(assets, 'VGA55_QuantStudio 12K Flex_export.txt')
+ genotype_quantStudio = join(assets, 'YOA15_QuantStudio 12K Flex_export.txt')
accufill_log = join(assets, 'OpenArrayLoad_Log.csv')
small_reference_fai = join(assets, 'genotype_32_SNPs_genome_600bp.fa.fai')
reference_fai = join(assets, 'GRCh37.fa.fai')
diff --git a/tests/test_convert_and_dispatch_genotypes.py b/tests/test_convert_and_dispatch_genotypes.py
index 177010d..3d34ed3 100644
--- a/tests/test_convert_and_dispatch_genotypes.py
+++ b/tests/test_convert_and_dispatch_genotypes.py
@@ -21,19 +21,19 @@ class TestGenotypeConversion(TestCommon):
def setUp(self):
self.geno_conversion = GenotypeConversion(
- open_files([self.genotype_csv]), self.accufill_log, 'igmm', self.small_reference_fai, flank_length=600
+ open_files([self.genotype_quantStudio]), self.small_reference_fai, flank_length=600
)
def test_generate_vcf(self):
# header_lines = ['##header line1', '##header line2']
# snp_ids = ['id4', 'id2', 'id3', 'id1']
# TODO: make assertions on what header lines, snp IDs, etc. have been written
- sample_id = '9504430'
+ sample_id = 'V0001P001C01'
path = join(self.assets, 'test_generate')
vcf_file = path + '.vcf'
assert self.geno_conversion.generate_vcf(sample_id, new_name=path) == vcf_file
with open(vcf_file) as f:
- assert len([l for l in f.readlines() if not l.startswith('#')]) == 32
+        assert len([l for l in f.readlines() if not l.startswith('#')]) == 26
def test_get_genotype_from_call(self):
genotype = self.geno_conversion.get_genotype_from_call('A', 'T', 'Both', )
@@ -71,15 +71,9 @@ class TestGenotypeConversion(TestCommon):
]
assert refence_length == expected_ref_length
- def test_init_genotype_csv(self):
- assert self.geno_conversion.sample_names == {'9504430'}
- assert len(self.geno_conversion.all_records) == 32
-
def test_parse_QuantStudio_AIF_genotype(self):
- geno_conversion = GenotypeConversion(open_files([self.genotype_quantStudio]), open(self.accufill_log),
- 'quantStudio', self.small_reference_fai, flank_length=600)
- assert geno_conversion.sample_names == {'V0001P001C01', 'V0001P001A01'}
- assert len(geno_conversion.all_records) == 32
+ assert self.geno_conversion.sample_names == {'V0001P001C01', 'V0001P001A01'}
+ assert len(self.geno_conversion.all_records) == 26
def test_find_field(self):
observed_fieldnames = ('__this__', 'that', 'OTHER')
@@ -100,33 +94,97 @@ class TestUploadVcfToSamples(TestEPP):
'a_user',
'a_password',
self.log_file,
- mode='igmm',
- input_genotypes_files=[self.genotype_csv]
+ input_genotypes_files=[self.genotype_quantStudio]
)
-
+ self.lims_sample1 = FakeEntity(name='V0001P001A01', udf={}, put=Mock())
+ self.lims_sample2 = FakeEntity(name='V0001P001C01', udf={}, put=Mock())
fake_all_inputs = Mock(
return_value=[
- Mock(samples=[FakeEntity(name='this', udf={'User Sample Name': '9504430'}, put=Mock())])
+ Mock(samples=[self.lims_sample1]),
+ Mock(samples=[self.lims_sample2])
]
)
+ # all output artifacts
+ self.outputs = {}
+
+ def fake_find_output_art(inart):
+ if inart.samples[0] not in self.outputs:
+                self.outputs[inart.samples[0]] = Mock(samples=inart.samples, udf={}, put=Mock())
+ return [self.outputs[inart.samples[0]]]
+
self.patched_process = patch.object(StepEPP, 'process', new_callable=PropertyMock(
return_value=Mock(all_inputs=fake_all_inputs)
))
+ self.patched_find_output_art = patch.object(UploadVcfToSamples, '_find_output_art',
+ side_effect=fake_find_output_art)
- def test_upload(self):
+ def test_upload_first_time(self):
patched_log = patch('scripts.convert_and_dispatch_genotypes.UploadVcfToSamples.info')
- patched_generate_vcf = patch('scripts.convert_and_dispatch_genotypes.GenotypeConversion.generate_vcf')
+ patched_generate_vcf = patch('scripts.convert_and_dispatch_genotypes.GenotypeConversion.generate_vcf', return_value='uploaded_file')
patched_remove = patch('scripts.convert_and_dispatch_genotypes.remove')
exp_log_msgs = (
- ('Matching against %s artifacts', 1),
- ('Matching %s against user sample name %s', 'this', '9504430'),
- ('Matched and uploaded %s artifacts against %s genotype results', 1, 1),
+ ('Matching %s sample from file against %s artifacts', 2, 2),
+ ('Matching V0001P001A01',),
+ ('Matching V0001P001C01',),
+ ('Matched and uploaded %s artifacts against %s genotype results', 2, 2),
('%s artifacts did not match', 0),
('%s genotyping results were not used', 0)
)
- with patched_log as p, patched_generate_vcf, patched_remove, self.patched_lims, self.patched_process:
+ with patched_log as p, patched_generate_vcf, patched_remove, self.patched_lims as mlims, self.patched_process,\
+ self.patched_find_output_art:
+ mlims.upload_new_file.return_value = Mock(id='file_id')
self.epp._run()
+
for m in exp_log_msgs:
p.assert_any_call(*m)
+ mlims.upload_new_file.assert_any_call(self.lims_sample1, 'uploaded_file')
+ mlims.upload_new_file.assert_called_with(self.lims_sample2, 'uploaded_file')
+ self.lims_sample1.put.assert_called_once_with()
+ self.lims_sample2.put.assert_called_once_with()
+ assert self.lims_sample1.udf == {
+ 'QuantStudio Data Import Completed #': 1,
+ 'Number of Calls (Best Run)': 6,
+ 'Genotyping results file id': 'file_id'
+ }
+ assert self.outputs[self.lims_sample1].udf == {'Number of Calls (This Run)': 6}
+ assert self.lims_sample2.udf == {
+ 'QuantStudio Data Import Completed #': 1,
+ 'Number of Calls (Best Run)': 22,
+ 'Genotyping results file id': 'file_id'
+ }
+ assert self.outputs[self.lims_sample2].udf == {'Number of Calls (This Run)': 22}
+
+ def test_upload_second_time(self):
+ patched_log = patch('scripts.convert_and_dispatch_genotypes.UploadVcfToSamples.info')
+ patched_generate_vcf = patch('scripts.convert_and_dispatch_genotypes.GenotypeConversion.generate_vcf', return_value='uploaded_file')
+ patched_remove = patch('scripts.convert_and_dispatch_genotypes.remove')
+
+ with patched_log as p, patched_generate_vcf, patched_remove, self.patched_lims as mlims, self.patched_process, \
+ self.patched_find_output_art:
+ self.lims_sample1.udf = {
+ 'QuantStudio Data Import Completed #': 1,
+ 'Number of Calls (Best Run)': 12,
+ 'Genotyping results file id': 'old_file_id'
+ }
+ self.lims_sample2.udf = {
+ 'QuantStudio Data Import Completed #': 1,
+ 'Number of Calls (Best Run)': 12,
+ 'Genotyping results file id': 'old_file_id'
+ }
+ mlims.upload_new_file.return_value = Mock(id='file_id')
+ self.epp._run()
+ assert self.lims_sample1.udf == {
+ 'QuantStudio Data Import Completed #': 2,
+ 'Number of Calls (Best Run)': 12,
+ 'Genotyping results file id': 'old_file_id'
+ }
+ assert self.outputs[self.lims_sample1].udf == {'Number of Calls (This Run)': 6}
+
+ assert self.lims_sample2.udf == {
+ 'QuantStudio Data Import Completed #': 2,
+ 'Number of Calls (Best Run)': 22,
+ 'Genotyping results file id': 'file_id'
+ }
+ assert self.outputs[self.lims_sample2].udf == {'Number of Calls (This Run)': 22}
diff --git a/tests/test_populate_review_step.py b/tests/test_populate_review_step.py
index 6e6c8e0..d1b2eea 100644
--- a/tests/test_populate_review_step.py
+++ b/tests/test_populate_review_step.py
@@ -1,7 +1,7 @@
from pyclarity_lims.entities import Artifact
from scripts import populate_review_step as p
from tests.test_common import TestEPP, NamedMock
-from unittest.mock import Mock, patch, PropertyMock, call
+from unittest.mock import Mock, patch, PropertyMock
class TestPopulator(TestEPP):
@@ -13,7 +13,7 @@ class TestPopulator(TestEPP):
self.epp_cls,
'samples',
new_callable=PropertyMock(
- return_value=[NamedMock(real_name='a_sample', udf={'Required Yield (Gb)': 95, 'Coverage (X)': 30})]
+ return_value=[NamedMock(real_name='a_sample', udf={'Yield for Quoted Coverage (Gb)': 95})]
)
)
self.patched_lims = patch.object(self.epp_cls, 'lims', new_callable=PropertyMock)
@@ -30,49 +30,37 @@ class TestPopulator(TestEPP):
class TestPullRunElementInfo(TestPopulator):
epp_cls = p.PullRunElementInfo
fake_rest_entity = {
- 'aggregated': {'clean_yield_in_gb': 20,
- 'clean_yield_q30_in_gb': 15,
- 'clean_pc_q30': 75,
- 'pc_adaptor': 1.2},
'run_element_id': 'id',
'passing_filter_reads': 120000000,
+ 'clean_yield_in_gb': 20,
+ 'clean_yield_q30_in_gb': 15,
+ 'clean_pc_q30': 75,
'lane_pc_optical_dups': 10,
+ 'pc_adapter': 1.2,
'reviewed': 'pass',
'review_comments': 'alright',
- 'review_date': '12_02_2107_12:43:24',
+ 'review_date': '12_02_2107_12:43:24'
}
expected_udfs = {
'RE Id': 'id',
'RE Nb Reads': 120000000,
'RE Yield': 20,
'RE Yield Q30': 15,
- 'RE Coverage': 34.2,
'RE %Q30': 75,
'RE Estimated Duplicate Rate': 10,
'RE %Adapter': 1.2,
'RE Review status': 'pass',
'RE Review Comment': 'alright',
- 'RE Review date': '2107-02-12',
- 'RE Useable': 'yes',
- 'RE Useable Comment': 'AR: Good yield'
+ 'RE Review date': '2107-02-12'
}
def test_pull(self):
-
- patched_output_artifacts_per_sample = patch.object(
- self.epp_cls,
- 'output_artifacts_per_sample',
- return_value=[Mock(spec=Artifact, udf={'RE Coverage': 34.2}, samples=[NamedMock(real_name='a_sample')])]
- )
-
with self.patched_lims as pl, self.patched_samples, self.patched_get_docs as pg, \
- patched_output_artifacts_per_sample as poa:
+ self.patched_output_artifacts_per_sample as poa:
self.epp.run()
- assert pg.call_count == 3
- assert pg.call_args_list == [call('run_elements', where={'sample_id': 'a_sample'}),
- call('samples', where={'sample_id': 'a_sample'}),
- call('samples', where={'sample_id': 'a_sample'})]
+ assert pg.call_count == 1
+ pg.assert_called_with(self.epp.endpoint, match={'sample_id': 'a_sample'})
# Check that the udfs have been added
assert dict(poa.return_value[0].udf) == self.expected_udfs
@@ -84,16 +72,16 @@ class TestPullRunElementInfo(TestPopulator):
def patch_output_artifact(output_artifacts):
return patch.object(self.epp_cls, 'output_artifacts_per_sample', return_value=output_artifacts)
- sample = NamedMock(real_name='a_sample', udf={'Required Yield (Gb)': 95, 'Coverage (X)': 30})
+ sample = NamedMock(real_name='a_sample', udf={'Yield for Quoted Coverage (Gb)': 95})
patched_output_artifacts_per_sample = patch_output_artifact([
- Mock(spec=Artifact, udf={'RE Yield': 115, 'RE %Q30': 75, 'RE Review status': 'pass', 'RE Coverage': 35.2}),
- Mock(spec=Artifact, udf={'RE Yield': 95, 'RE %Q30': 85, 'RE Review status': 'pass', 'RE Coverage': 36.7}),
- Mock(spec=Artifact, udf={'RE Yield': 15, 'RE %Q30': 70, 'RE Review status': 'fail', 'RE Coverage': 34.1}),
+ Mock(spec=Artifact, udf={'RE Yield Q30': 115, 'RE %Q30': 75, 'RE Review status': 'pass'}),
+ Mock(spec=Artifact, udf={'RE Yield Q30': 95, 'RE %Q30': 85, 'RE Review status': 'pass'}),
+ Mock(spec=Artifact, udf={'RE Yield Q30': 15, 'RE %Q30': 70, 'RE Review status': 'fail'}),
])
- with patched_output_artifacts_per_sample as poa, self.patched_get_docs as pg:
+ with patched_output_artifacts_per_sample as poa:
self.epp.assess_sample(sample)
assert poa.return_value[0].udf['RE Useable'] == 'no'
- assert poa.return_value[0].udf['RE Useable Comment'] == 'AR: Too much good yield'
+ assert poa.return_value[0].udf['RE Useable Comment'] == 'AR: To much good yield'
assert poa.return_value[1].udf['RE Useable'] == 'yes'
assert poa.return_value[1].udf['RE Useable Comment'] == 'AR: Good yield'
@@ -102,61 +90,38 @@ class TestPullRunElementInfo(TestPopulator):
assert poa.return_value[2].udf['RE Useable Comment'] == 'AR: Failed and not needed'
patched_output_artifacts_per_sample = patch_output_artifact([
- Mock(spec=Artifact, udf={'RE Yield': 115, 'RE %Q30': 85, 'RE Review status': 'pass', 'RE Coverage': 35.2}),
- Mock(spec=Artifact, udf={'RE Yield': 15, 'RE %Q30': 70, 'RE Review status': 'fail', 'RE Coverage': 33.6}),
+ Mock(spec=Artifact, udf={'RE Yield Q30': 115, 'RE %Q30': 85, 'RE Review status': 'pass'}),
+ Mock(spec=Artifact, udf={'RE Yield Q30': 15, 'RE %Q30': 70, 'RE Review status': 'fail'}),
])
- with patched_output_artifacts_per_sample as poa, self.patched_get_docs as pg:
+ with patched_output_artifacts_per_sample as poa:
self.epp.assess_sample(sample)
assert poa.return_value[0].udf['RE Useable'] == 'yes'
assert poa.return_value[0].udf['RE Useable Comment'] == 'AR: Good yield'
-
assert poa.return_value[1].udf['RE Useable'] == 'no'
assert poa.return_value[1].udf['RE Useable Comment'] == 'AR: Failed and not needed'
- patched_output_artifacts_per_sample = patch_output_artifact([
- Mock(spec=Artifact, udf={'RE Yield': 115, 'RE %Q30': 85, 'RE Review status': 'pass', 'RE Coverage': 35.2}),
- Mock(spec=Artifact, udf={'RE Yield': 15, 'RE %Q30': 70, 'RE Review status': 'fail', 'RE Coverage': 33.6}),
- ])
-
- delivered = 'scripts.populate_review_step.PullRunElementInfo.delivered'
- processed = 'scripts.populate_review_step.PullRunElementInfo.processed'
- patched_delivered = patch(delivered, return_value=True)
- pathed_processed = patch(processed, return_value=True)
-
- with patched_output_artifacts_per_sample as poa, self.patched_get_docs as pg, patched_delivered:
- self.epp.assess_sample(sample)
- assert poa.return_value[0].udf['RE Useable'] == 'no'
- assert poa.return_value[0].udf['RE Useable Comment'] == 'AR: Delivered'
- assert poa.return_value[1].udf['RE Useable'] == 'no'
- assert poa.return_value[1].udf['RE Useable Comment'] == 'AR: Delivered'
-
- with patched_output_artifacts_per_sample as poa, self.patched_get_docs as pg, pathed_processed:
- self.epp.assess_sample(sample)
- assert poa.return_value[0].udf['RE Useable'] == 'no'
- assert poa.return_value[0].udf['RE Useable Comment'] == 'AR: Sample already processed'
- assert poa.return_value[1].udf['RE Useable'] == 'no'
- assert poa.return_value[1].udf['RE Useable Comment'] == 'AR: Sample already processed'
-
def test_field_from_entity(self):
entity = {'this': {'that': 'other'}}
assert self.epp.field_from_entity(entity, 'this.that') == 'other'
assert entity == {'this': {'that': 'other'}} # not changed
-class TestPullSampleInfo(TestPopulator):
+class TestPullSampleInfo(TestPullRunElementInfo):
epp_cls = p.PullSampleInfo
fake_rest_entity = {
'sample_id': 'a_sample',
'user_sample_id': 'a_user_sample_id',
'clean_yield_in_gb': 5,
- 'aggregated': {'clean_pc_q30': 70,
- 'pc_mapped_reads': 75,
- 'pc_duplicate_reads': 5,
- 'mean_coverage': 30,
- 'gender_match': 'Match',
- 'genotype_match': 'Match'},
- 'matching_species': ['Homo sapiens', 'Thingius thingy'],
+ 'clean_pc_q30': 70,
+ 'pc_mapped_reads': 75,
+ 'pc_duplicate_reads': 5,
+ 'coverage': {'mean': 30},
+ 'species_contamination': {
+ 'contaminant_unique_mapped': {'Homo sapiens': 70000, 'Thingius thingy': 501, 'Sus scrofa': 499}
+ },
+ 'gender_match': 'Match',
+ 'genotype_match': 'Match',
'sample_contamination': {'freemix': 0.1},
'reviewed': 'pass',
'review_comments': 'alright',
@@ -197,7 +162,7 @@ class TestPullSampleInfo(TestPopulator):
assert poa.return_value[1].udf['SR Useable Comments'] == 'AR: Review failed'
def test_field_from_entity(self):
- obs = self.epp.field_from_entity(self.fake_rest_entity, 'matching_species')
+ obs = self.epp.field_from_entity(self.fake_rest_entity, 'species_contamination')
assert obs == 'Homo sapiens, Thingius thingy'
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 4
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asana==0.6.7
attrs==22.2.0
cached-property==1.5.2
certifi==2021.5.30
-e git+https://github.com/EdinburghGenomics/clarity_scripts.git@32c21fa719365176a9101a8a7ce72eb07f3ac85d#egg=clarity_scripts
coverage==6.2
EGCG-Core==0.8.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==2.8
MarkupSafe==2.0.1
oauthlib==3.2.2
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyclarity-lims==0.4.8
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
PyYAML==6.0.1
requests==2.14.2
requests-oauthlib==0.8.0
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: clarity_scripts
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asana==0.6.7
- attrs==22.2.0
- cached-property==1.5.2
- coverage==6.2
- egcg-core==0.8.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==2.8
- markupsafe==2.0.1
- oauthlib==3.2.2
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyclarity-lims==0.4.8
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pyyaml==6.0.1
- requests==2.14.2
- requests-oauthlib==0.8.0
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/clarity_scripts
| [
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_find_field",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_generate_vcf",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_get_genotype_from_call",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_order_from_fai",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_parse_QuantStudio_AIF_genotype",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_parse_genome_fai",
"tests/test_convert_and_dispatch_genotypes.py::TestGenotypeConversion::test_vcf_header_from_ref_length",
"tests/test_convert_and_dispatch_genotypes.py::TestUploadVcfToSamples::test_init",
"tests/test_convert_and_dispatch_genotypes.py::TestUploadVcfToSamples::test_upload_first_time",
"tests/test_convert_and_dispatch_genotypes.py::TestUploadVcfToSamples::test_upload_second_time",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_assess_sample",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_pull",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_field_from_entity",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_pull"
]
| []
| [
"tests/test_common.py::TestEPP::test_init",
"tests/test_common.py::TestRestCommunicationEPP::test_interaction",
"tests/test_common.py::TestFindNewestArtifactOriginatingFrom::test_find_newest_artifact_originating_from",
"tests/test_convert_and_dispatch_genotypes.py::TestEPP::test_init",
"tests/test_populate_review_step.py::TestEPP::test_init",
"tests/test_populate_review_step.py::TestPopulator::test_init",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_field_from_entity",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_init",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_assess_sample",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_init",
"tests/test_populate_review_step.py::TestPushRunElementInfo::test_init",
"tests/test_populate_review_step.py::TestPushRunElementInfo::test_push",
"tests/test_populate_review_step.py::TestPushSampleInfo::test_init",
"tests/test_populate_review_step.py::TestPushSampleInfo::test_push"
]
| []
| MIT License | 2,449 | [
"setup.py",
"scripts/populate_review_step.py",
"scripts/convert_and_dispatch_genotypes.py",
"EPPs/common.py"
]
| [
"setup.py",
"scripts/populate_review_step.py",
"scripts/convert_and_dispatch_genotypes.py",
"EPPs/common.py"
]
|
|
oasis-open__cti-python-stix2-172 | f778a45b33f9b74c5a3b62c38db477bb504c2202 | 2018-04-26 14:25:09 | 3084c9f51fcd00cf6b0ed76827af90d0e86746d5 |
diff --git a/stix2/patterns.py b/stix2/patterns.py
index 23ce71b..3f9cbd9 100644
--- a/stix2/patterns.py
+++ b/stix2/patterns.py
@@ -147,6 +147,9 @@ class ListConstant(_Constant):
def make_constant(value):
+ if isinstance(value, _Constant):
+ return value
+
try:
return parse_into_datetime(value)
except ValueError:
| make_constant() should check if value is already a constant
``` python
>>> from stix2.patterns import StringConstant
>>> from stix2.patterns import make_constant
>>> a = StringConstant('foo')
>>> make_constant(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lu/Projects/cti-python-stix2/stix2/patterns.py", line 166, in make_constant
raise ValueError("Unable to create a constant from %s" % value)
ValueError: Unable to create a constant from 'foo'
```
| oasis-open/cti-python-stix2 |
diff --git a/stix2/test/test_pattern_expressions.py b/stix2/test/test_pattern_expressions.py
index 74a7d0f..14e3774 100644
--- a/stix2/test/test_pattern_expressions.py
+++ b/stix2/test/test_pattern_expressions.py
@@ -372,3 +372,9 @@ def test_invalid_startstop_qualifier():
stix2.StartStopQualifier(datetime.date(2016, 6, 1),
'foo')
assert 'is not a valid argument for a Start/Stop Qualifier' in str(excinfo)
+
+
+def test_make_constant_already_a_constant():
+ str_const = stix2.StringConstant('Foo')
+ result = stix2.patterns.make_constant(str_const)
+ assert result is str_const
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 1.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"taxii2-client"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
antlr4-python3-runtime==4.9.3
async-generator==1.10
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bleach==4.1.0
bump2version==1.0.1
bumpversion==0.6.0
certifi==2021.5.30
cfgv==3.3.1
charset-normalizer==2.0.12
coverage==6.2
decorator==5.1.1
defusedxml==0.7.1
distlib==0.3.9
docutils==0.18.1
entrypoints==0.4
filelock==3.4.1
identify==2.4.4
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
importlib-resources==5.2.3
iniconfig==1.1.1
ipython==7.16.3
ipython-genutils==0.2.0
jedi==0.17.2
Jinja2==3.0.3
jsonschema==3.2.0
jupyter-client==7.1.2
jupyter-core==4.9.2
jupyterlab-pygments==0.1.2
MarkupSafe==2.0.1
mistune==0.8.4
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
nbsphinx==0.3.2
nest-asyncio==1.6.0
nodeenv==1.6.0
packaging==21.3
pandocfilters==1.5.1
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
platformdirs==2.4.0
pluggy==1.0.0
pre-commit==2.17.0
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.1
pyzmq==25.1.2
requests==2.27.1
simplejson==3.20.1
six==1.17.0
snowballstemmer==2.2.0
Sphinx==1.5.6
sphinx-prompt==1.5.0
-e git+https://github.com/oasis-open/cti-python-stix2.git@f778a45b33f9b74c5a3b62c38db477bb504c2202#egg=stix2
stix2-patterns==2.0.0
taxii2-client==2.3.0
testpath==0.6.0
toml==0.10.2
tomli==1.2.3
tornado==6.1
tox==3.28.0
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.16.2
wcwidth==0.2.13
webencodings==0.5.1
zipp==3.6.0
| name: cti-python-stix2
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- antlr4-python3-runtime==4.9.3
- async-generator==1.10
- attrs==22.2.0
- babel==2.11.0
- backcall==0.2.0
- bleach==4.1.0
- bump2version==1.0.1
- bumpversion==0.6.0
- cfgv==3.3.1
- charset-normalizer==2.0.12
- coverage==6.2
- decorator==5.1.1
- defusedxml==0.7.1
- distlib==0.3.9
- docutils==0.18.1
- entrypoints==0.4
- filelock==3.4.1
- identify==2.4.4
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- importlib-resources==5.2.3
- iniconfig==1.1.1
- ipython==7.16.3
- ipython-genutils==0.2.0
- jedi==0.17.2
- jinja2==3.0.3
- jsonschema==3.2.0
- jupyter-client==7.1.2
- jupyter-core==4.9.2
- jupyterlab-pygments==0.1.2
- markupsafe==2.0.1
- mistune==0.8.4
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nbsphinx==0.3.2
- nest-asyncio==1.6.0
- nodeenv==1.6.0
- packaging==21.3
- pandocfilters==1.5.1
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- platformdirs==2.4.0
- pluggy==1.0.0
- pre-commit==2.17.0
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- pyzmq==25.1.2
- requests==2.27.1
- simplejson==3.20.1
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==1.5.6
- sphinx-prompt==1.5.0
- stix2-patterns==2.0.0
- taxii2-client==2.3.0
- testpath==0.6.0
- toml==0.10.2
- tomli==1.2.3
- tornado==6.1
- tox==3.28.0
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.16.2
- wcwidth==0.2.13
- webencodings==0.5.1
- zipp==3.6.0
prefix: /opt/conda/envs/cti-python-stix2
| [
"stix2/test/test_pattern_expressions.py::test_make_constant_already_a_constant"
]
| []
| [
"stix2/test/test_pattern_expressions.py::test_create_comparison_expression",
"stix2/test/test_pattern_expressions.py::test_boolean_expression",
"stix2/test/test_pattern_expressions.py::test_boolean_expression_with_parentheses",
"stix2/test/test_pattern_expressions.py::test_hash_followed_by_registryKey_expression_python_constant",
"stix2/test/test_pattern_expressions.py::test_hash_followed_by_registryKey_expression",
"stix2/test/test_pattern_expressions.py::test_file_observable_expression",
"stix2/test/test_pattern_expressions.py::test_multiple_file_observable_expression[AndObservationExpression-AND]",
"stix2/test/test_pattern_expressions.py::test_multiple_file_observable_expression[OrObservationExpression-OR]",
"stix2/test/test_pattern_expressions.py::test_root_types",
"stix2/test/test_pattern_expressions.py::test_artifact_payload",
"stix2/test/test_pattern_expressions.py::test_greater_than_python_constant",
"stix2/test/test_pattern_expressions.py::test_greater_than",
"stix2/test/test_pattern_expressions.py::test_less_than",
"stix2/test/test_pattern_expressions.py::test_greater_than_or_equal",
"stix2/test/test_pattern_expressions.py::test_less_than_or_equal",
"stix2/test/test_pattern_expressions.py::test_not",
"stix2/test/test_pattern_expressions.py::test_and_observable_expression",
"stix2/test/test_pattern_expressions.py::test_invalid_and_observable_expression",
"stix2/test/test_pattern_expressions.py::test_hex",
"stix2/test/test_pattern_expressions.py::test_multiple_qualifiers",
"stix2/test/test_pattern_expressions.py::test_set_op",
"stix2/test/test_pattern_expressions.py::test_timestamp",
"stix2/test/test_pattern_expressions.py::test_boolean",
"stix2/test/test_pattern_expressions.py::test_binary",
"stix2/test/test_pattern_expressions.py::test_list",
"stix2/test/test_pattern_expressions.py::test_list2",
"stix2/test/test_pattern_expressions.py::test_invalid_constant_type",
"stix2/test/test_pattern_expressions.py::test_invalid_integer_constant",
"stix2/test/test_pattern_expressions.py::test_invalid_timestamp_constant",
"stix2/test/test_pattern_expressions.py::test_invalid_float_constant",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[True-True0]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[False-False0]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[True-True1]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[False-False1]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[true-True]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[false-False]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[t-True]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[f-False]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[T-True]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[F-False]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[1-True]",
"stix2/test/test_pattern_expressions.py::test_boolean_constant[0-False]",
"stix2/test/test_pattern_expressions.py::test_invalid_boolean_constant",
"stix2/test/test_pattern_expressions.py::test_invalid_hash_constant[MD5-zzz]",
"stix2/test/test_pattern_expressions.py::test_invalid_hash_constant[ssdeep-zzz==]",
"stix2/test/test_pattern_expressions.py::test_invalid_hex_constant",
"stix2/test/test_pattern_expressions.py::test_invalid_binary_constant",
"stix2/test/test_pattern_expressions.py::test_escape_quotes_and_backslashes",
"stix2/test/test_pattern_expressions.py::test_like",
"stix2/test/test_pattern_expressions.py::test_issuperset",
"stix2/test/test_pattern_expressions.py::test_repeat_qualifier",
"stix2/test/test_pattern_expressions.py::test_invalid_repeat_qualifier",
"stix2/test/test_pattern_expressions.py::test_invalid_within_qualifier",
"stix2/test/test_pattern_expressions.py::test_startstop_qualifier",
"stix2/test/test_pattern_expressions.py::test_invalid_startstop_qualifier"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,450 | [
"stix2/patterns.py"
]
| [
"stix2/patterns.py"
]
|
|
G-Node__python-odml-284 | bc4bade4c93e0d5cb3ab8c0fb427fcf3c0ed96e1 | 2018-04-26 14:46:25 | eeff5922987b064681d1328f81af317d8171808f |
diff --git a/odml/format.py b/odml/format.py
index bae2d68..7a0a796 100644
--- a/odml/format.py
+++ b/odml/format.py
@@ -130,7 +130,7 @@ class Section(Format):
_args = {
'id': 0,
'type': 1,
- 'name': 0,
+ 'name': 1,
'definition': 0,
'reference': 0,
'link': 0,
diff --git a/odml/property.py b/odml/property.py
index 2602dea..f6d0211 100644
--- a/odml/property.py
+++ b/odml/property.py
@@ -13,7 +13,7 @@ class BaseProperty(base.BaseObject):
"""An odML Property"""
_format = frmt.Property
- def __init__(self, name, value=None, parent=None, unit=None,
+ def __init__(self, name=None, value=None, parent=None, unit=None,
uncertainty=None, reference=None, definition=None,
dependency=None, dependency_value=None, dtype=None,
value_origin=None, id=None):
@@ -58,6 +58,11 @@ class BaseProperty(base.BaseObject):
print(e)
self._id = str(uuid.uuid4())
+ # Use id if no name was provided.
+ if not name:
+ name = self._id
+
+ self._name = name
self._parent = None
self._name = name
self._value_origin = value_origin
@@ -118,6 +123,14 @@ class BaseProperty(base.BaseObject):
@name.setter
def name(self, new_name):
+ if self.name == new_name:
+ return
+
+ curr_parent = self.parent
+ if hasattr(curr_parent, "properties") and new_name in curr_parent.properties:
+
+ raise KeyError("Object with the same name already exists!")
+
self._name = new_name
def __repr__(self):
diff --git a/odml/section.py b/odml/section.py
index fa08c1c..4707003 100644
--- a/odml/section.py
+++ b/odml/section.py
@@ -25,7 +25,7 @@ class BaseSection(base.Sectionable):
_format = format.Section
- def __init__(self, name, type=None, parent=None,
+ def __init__(self, name=None, type=None, parent=None,
definition=None, reference=None,
repository=None, link=None, include=None, id=None):
@@ -42,6 +42,10 @@ class BaseSection(base.Sectionable):
print(e)
self._id = str(uuid.uuid4())
+ # Use id if no name was provided.
+ if not name:
+ name = self._id
+
self._parent = None
self._name = name
self._definition = definition
@@ -94,6 +98,13 @@ class BaseSection(base.Sectionable):
@name.setter
def name(self, new_value):
+ if self.name == new_value:
+ return
+
+ curr_parent = self.parent
+ if hasattr(curr_parent, "sections") and new_value in curr_parent.sections:
+ raise KeyError("Object with the same name already exists!")
+
self._name = new_value
@property
diff --git a/odml/tools/odmlparser.py b/odml/tools/odmlparser.py
index fbc7c71..2edd2e5 100644
--- a/odml/tools/odmlparser.py
+++ b/odml/tools/odmlparser.py
@@ -48,6 +48,10 @@ class ODMLWriter:
raise ParserException(msg)
with open(filename, 'w') as file:
+ # Add XML header to support odML stylesheets.
+ if self.parser == 'XML':
+ file.write(xmlparser.XMLWriter.header)
+
file.write(self.to_string(odml_document))
def to_string(self, odml_document):
diff --git a/odml/tools/xmlparser.py b/odml/tools/xmlparser.py
index f2ea862..c935c99 100644
--- a/odml/tools/xmlparser.py
+++ b/odml/tools/xmlparser.py
@@ -5,11 +5,11 @@ Parses odML files. Can be invoked standalone:
python -m odml.tools.xmlparser file.odml
"""
import csv
+import sys
from lxml import etree as ET
from lxml.builder import E
# this is needed for py2exe to include lxml completely
from lxml import _elementpath as _dummy
-import sys
try:
from StringIO import StringIO
@@ -118,10 +118,9 @@ class XMLWriter:
else:
data = str(self)
- f = open(filename, "w")
- f.write(self.header)
- f.write(data)
- f.close()
+ with open(filename, "w") as file:
+ file.write(self.header)
+ file.write(data)
def load(filename):
@@ -223,18 +222,20 @@ class XMLReader(object):
return None # won't be able to parse this one
return getattr(self, "parse_" + node.tag)(node, self.tags[node.tag])
- def parse_tag(self, root, fmt, insert_children=True, create=None):
+ def parse_tag(self, root, fmt, insert_children=True):
"""
Parse an odml node based on the format description *fmt*
- and a function *create* to instantiate a corresponding object
+ and instantiate the corresponding object.
+ :param root: lxml.etree node containing an odML object or object tree.
+ :param fmt: odML class corresponding to the content of the root node.
+ :param insert_children: Bool value. When True, child elements of the root node
+ will be parsed to their odML equivalents and appended to
+ the odML document. When False, child elements of the
+ root node will be ignored.
"""
arguments = {}
extra_args = {}
children = []
- text = []
-
- if root.text:
- text.append(root.text.strip())
for k, v in root.attrib.iteritems():
k = k.lower()
@@ -258,8 +259,6 @@ class XMLReader(object):
else:
tag = fmt.map(node.tag)
if tag in arguments:
- # TODO make this an error, however first figure out a
- # way to let <odML version=><version/> pass
self.warn("Element <%s> is given multiple times in "
"<%s> tag" % (node.tag, root.tag), node)
@@ -273,38 +272,21 @@ class XMLReader(object):
else:
self.error("Invalid element <%s> in odML document section <%s>"
% (node.tag, root.tag), node)
- if node.tail:
- text.append(node.tail.strip())
if sys.version_info > (3,):
- self.check_mandatory_arguments(dict(list(arguments.items()) +
- list(extra_args.items())),
- fmt, root.tag, root)
+ check_args = dict(list(arguments.items()) + list(extra_args.items()))
else:
- self.check_mandatory_arguments(dict(arguments.items() +
- extra_args.items()),
- fmt, root.tag, root)
- if create is None:
- obj = fmt.create()
- else:
- obj = create(args=arguments, text=''.join(text), children=children)
+ check_args = dict(arguments.items() + extra_args.items())
- for k, v in arguments.items():
- if hasattr(obj, k) and (getattr(obj, k) is None or k == 'id'):
- try:
- if k == 'id' and v is not None:
- obj._id = v
- else:
- setattr(obj, k, v)
- except Exception as e:
- self.warn("cannot set '%s' property on <%s>: %s" %
- (k, root.tag, repr(e)), root)
- if not self.ignore_errors:
- raise e
+ self.check_mandatory_arguments(check_args, fmt, root.tag, root)
+
+ # Instantiate the current odML object with the parsed attributes.
+ obj = fmt.create(**arguments)
if insert_children:
for child in children:
obj.append(child)
+
return obj
def parse_odML(self, root, fmt):
@@ -312,24 +294,10 @@ class XMLReader(object):
return doc
def parse_section(self, root, fmt):
- name = root.get("name") # property name= overrides
- if name is None: # the element
- name_node = root.find("name")
- if name_node is not None:
- name = name_node.text
- root.remove(name_node)
- # delete the name_node so its value won't
- # be used to overwrite the already set name-attribute
-
- if name is None:
- self.error("Missing name element in <section>", root)
-
- return self.parse_tag(root, fmt,
- create=lambda **kargs: fmt.create(name))
+ return self.parse_tag(root, fmt)
def parse_property(self, root, fmt):
- create = lambda children, args, **kargs: fmt.create(**args)
- return self.parse_tag(root, fmt, insert_children=False, create=create)
+ return self.parse_tag(root, fmt, insert_children=False)
if __name__ == '__main__':
| odML Format update
Define Section `name` and `type` as well as Property `name` as required in `format.py`.
| G-Node/python-odml |
diff --git a/test/test_property.py b/test/test_property.py
index 9138cae..9eafedb 100644
--- a/test/test_property.py
+++ b/test/test_property.py
@@ -327,6 +327,40 @@ class TestProperty(unittest.TestCase):
assert(p.dtype == 'string')
assert(p.value == ['7', '20', '1 Dog', 'Seven'])
+ def test_name(self):
+ # Test id is used when name is not provided
+ p = Property()
+ self.assertIsNotNone(p.name)
+ self.assertEqual(p.name, p.id)
+
+ # Test name is properly set on init
+ name = "rumpelstilzchen"
+ p = Property(name)
+ self.assertEqual(p.name, name)
+
+ # Test name can be properly set on single and connected Properties
+ prop = Property()
+ self.assertNotEqual(prop.name, "prop")
+ prop.name = "prop"
+ self.assertEqual(prop.name, "prop")
+
+ sec = Section()
+ prop_a = Property(parent=sec)
+ self.assertNotEqual(prop_a.name, "prop_a")
+ prop_a.name = "prop_a"
+ self.assertEqual(prop_a.name, "prop_a")
+
+ # Test property name can be changed with siblings
+ prop_b = Property(name="prop_b", parent=sec)
+ self.assertEqual(prop_b.name, "prop_b")
+ prop_b.name = "prop"
+ self.assertEqual(prop_b.name, "prop")
+
+ # Test property name set will fail on existing sibling with same name
+ with self.assertRaises(KeyError):
+ prop_b.name = "prop_a"
+ self.assertEqual(prop_b.name, "prop")
+
def test_parent(self):
p = Property("property_section", parent=Section("S"))
self.assertIsInstance(p.parent, BaseSection)
diff --git a/test/test_section.py b/test/test_section.py
index 84604aa..5581928 100644
--- a/test/test_section.py
+++ b/test/test_section.py
@@ -39,6 +39,50 @@ class TestSection(unittest.TestCase):
sec.definition = ""
self.assertIsNone(sec.definition)
+ def test_name(self):
+ # Test id is used when name is not provided
+ s = Section()
+ self.assertIsNotNone(s.name)
+ self.assertEqual(s.name, s.id)
+
+ # Test name is properly set on init
+ name = "rumpelstilzchen"
+ s = Section(name)
+ self.assertEqual(s.name, name)
+
+ name = "rumpelstilzchen"
+ s = Section(name=name)
+ self.assertEqual(s.name, name)
+
+ # Test name can be properly set on single and connected Sections
+ sec = Section()
+ self.assertNotEqual(sec.name, "sec")
+ sec.name = "sec"
+ self.assertEqual(sec.name, "sec")
+
+ subsec_a = Section(parent=sec)
+ self.assertNotEqual(subsec_a.name, "subsec_a")
+ subsec_a.name = "subsec_a"
+ self.assertEqual(subsec_a.name, "subsec_a")
+
+ # Test subsection name can be changed with siblings
+ subsec_b = Section(name="subsec_b", parent=sec)
+ self.assertEqual(subsec_b.name, "subsec_b")
+ subsec_b.name = "subsec"
+ self.assertEqual(subsec_b.name, "subsec")
+
+ # Test subsection name set will fail on existing sibling with same name
+ with self.assertRaises(KeyError):
+ subsec_b.name = "subsec_a"
+ self.assertEqual(subsec_b.name, "subsec")
+
+ # Test section name set will fail on existing same name document sibling
+ doc = Document()
+ sec_a = Section(name="a", parent=doc)
+ sec_b = Section(name="b", parent=doc)
+ with self.assertRaises(KeyError):
+ sec_b.name = "a"
+
def test_parent(self):
s = Section("Section")
self.assertIsNone(s.parent)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 5
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y libxml2-dev libxslt1-dev lib32z1-dev"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
isodate==0.7.2
lxml==5.3.1
-e git+https://github.com/G-Node/python-odml.git@bc4bade4c93e0d5cb3ab8c0fb427fcf3c0ed96e1#egg=odML
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pyparsing==3.2.3
pytest @ file:///croot/pytest_1738938843180/work
PyYAML==6.0.2
rdflib==7.1.4
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: python-odml
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- isodate==0.7.2
- lxml==5.3.1
- pyparsing==3.2.3
- pyyaml==6.0.2
- rdflib==7.1.4
prefix: /opt/conda/envs/python-odml
| [
"test/test_property.py::TestProperty::test_name",
"test/test_section.py::TestSection::test_name"
]
| []
| [
"test/test_property.py::TestProperty::test_bool_conversion",
"test/test_property.py::TestProperty::test_clone",
"test/test_property.py::TestProperty::test_dtype",
"test/test_property.py::TestProperty::test_get_merged_equivalent",
"test/test_property.py::TestProperty::test_get_path",
"test/test_property.py::TestProperty::test_get_set_value",
"test/test_property.py::TestProperty::test_id",
"test/test_property.py::TestProperty::test_merge",
"test/test_property.py::TestProperty::test_merge_check",
"test/test_property.py::TestProperty::test_new_id",
"test/test_property.py::TestProperty::test_parent",
"test/test_property.py::TestProperty::test_simple_attributes",
"test/test_property.py::TestProperty::test_str_to_int_convert",
"test/test_property.py::TestProperty::test_value",
"test/test_property.py::TestProperty::test_value_append",
"test/test_property.py::TestProperty::test_value_extend",
"test/test_section.py::TestSection::test_append",
"test/test_section.py::TestSection::test_children",
"test/test_section.py::TestSection::test_clone",
"test/test_section.py::TestSection::test_contains",
"test/test_section.py::TestSection::test_extend",
"test/test_section.py::TestSection::test_id",
"test/test_section.py::TestSection::test_include",
"test/test_section.py::TestSection::test_insert",
"test/test_section.py::TestSection::test_link",
"test/test_section.py::TestSection::test_merge",
"test/test_section.py::TestSection::test_merge_check",
"test/test_section.py::TestSection::test_new_id",
"test/test_section.py::TestSection::test_parent",
"test/test_section.py::TestSection::test_path",
"test/test_section.py::TestSection::test_remove",
"test/test_section.py::TestSection::test_reorder",
"test/test_section.py::TestSection::test_repository",
"test/test_section.py::TestSection::test_simple_attributes",
"test/test_section.py::TestSection::test_unmerge"
]
| []
| BSD 4-Clause "Original" or "Old" License | 2,451 | [
"odml/property.py",
"odml/format.py",
"odml/tools/odmlparser.py",
"odml/tools/xmlparser.py",
"odml/section.py"
]
| [
"odml/property.py",
"odml/format.py",
"odml/tools/odmlparser.py",
"odml/tools/xmlparser.py",
"odml/section.py"
]
|
|
pydicom__pydicom-633 | fcc63f0b96fb370b0eb60b2c765b469ce62e597c | 2018-04-26 18:41:30 | fcc63f0b96fb370b0eb60b2c765b469ce62e597c | scaramallion: Looks good. | diff --git a/pydicom/filewriter.py b/pydicom/filewriter.py
index 797439608..f15749508 100644
--- a/pydicom/filewriter.py
+++ b/pydicom/filewriter.py
@@ -456,6 +456,8 @@ def write_dataset(fp, dataset, parent_encoding=default_encoding):
Attempt to correct ambiguous VR elements when explicit little/big
encoding Elements that can't be corrected will be returned unchanged.
"""
+ _harmonize_properties(dataset, fp)
+
if not fp.is_implicit_VR and not dataset.is_original_encoding:
dataset = correct_ambiguous_vr(dataset, fp.is_little_endian)
@@ -475,6 +477,22 @@ def write_dataset(fp, dataset, parent_encoding=default_encoding):
return fp.tell() - fpStart
+def _harmonize_properties(dataset, fp):
+ """Make sure the properties in the dataset and the file pointer are
+ consistent, so the user can set both with the same effect.
+ Properties set on the destination file object always have preference.
+ """
+ # ensure preference of fp over dataset
+ if hasattr(fp, 'is_little_endian'):
+ dataset.is_little_endian = fp.is_little_endian
+ if hasattr(fp, 'is_implicit_VR'):
+ dataset.is_implicit_VR = fp.is_implicit_VR
+
+ # write the properties back to have a consistent state
+ fp.is_implicit_VR = dataset.is_implicit_VR
+ fp.is_little_endian = dataset.is_little_endian
+
+
def write_sequence(fp, data_element, encoding):
"""Write a dicom Sequence contained in data_element to the file fp."""
# write_data_element has already written the VR='SQ' (if needed) and
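The `_harmonize_properties` helper added in the patch above enforces one precedence rule: encoding flags set on the destination file object win over the dataset's flags, and afterwards both objects carry the same values. The following standalone sketch demonstrates that rule; `DS` and `Dest` are stand-in classes for the example, not pydicom's `Dataset` or `DicomBytesIO`.

```python
# Standalone sketch of the precedence rule _harmonize_properties
# establishes: flags on the destination (fp) override the dataset's,
# then both are written back so they agree.

class DS:
    is_implicit_VR = True
    is_little_endian = True

class Dest:
    pass  # may or may not carry the encoding attributes

def harmonize(dataset, fp):
    # destination preferences override the dataset, when present
    if hasattr(fp, 'is_little_endian'):
        dataset.is_little_endian = fp.is_little_endian
    if hasattr(fp, 'is_implicit_VR'):
        dataset.is_implicit_VR = fp.is_implicit_VR
    # write back so both objects end in a consistent state
    fp.is_implicit_VR = dataset.is_implicit_VR
    fp.is_little_endian = dataset.is_little_endian

ds, fp = DS(), Dest()
fp.is_implicit_VR = False       # destination asks for explicit VR
fp.is_little_endian = True
harmonize(ds, fp)
assert ds.is_implicit_VR is False   # destination preference won

ds2, fp2 = DS(), Dest()         # destination sets nothing
harmonize(ds2, fp2)
assert fp2.is_implicit_VR is True   # dataset values propagated to fp
assert fp2.is_little_endian is True
```

With this rule in place, setting `fp.is_implicit_VR = False` before `write_dataset` no longer leaves the dataset believing it is implicit VR, which is what triggered the `TypeError` reported in the issue below.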
| Write failure with implicit -> explicit VR
```python
>>> from pydicom import dcmread
>>> from pydicom.filebase import DicomBytesIO
>>> from pydicom.filewriter import write_dataset
>>> ds = dcmread('dicom_files/RTImageStorage.dcm')
>>> fp = DicomBytesIO()
>>> fp.is_little_endian = True
>>> fp.is_implicit_VR = False
>>> write_dataset(fp, ds)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../pydicom/pydicom/filewriter.py", line 473, in write_dataset
write_data_element(fp, dataset.get_item(tag), dataset_encoding)
File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/.../pydicom/pydicom/tag.py", line 37, in tag_in_exception
raise type(ex)(msg)
TypeError: With tag (0008, 0008) got exception: object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/.../pydicom/pydicom/tag.py", line 30, in tag_in_exception
yield
File "/.../pydicom/pydicom/filewriter.py", line 473, in write_dataset
write_data_element(fp, dataset.get_item(tag), dataset_encoding)
File "/.../pydicom/pydicom/filewriter.py", line 384, in write_data_element
if len(VR) != 2:
TypeError: object of type 'NoneType' has no len()
```
Probably related to the #616 PR @mrbean-bremen
| pydicom/pydicom | diff --git a/pydicom/tests/test_filewriter.py b/pydicom/tests/test_filewriter.py
index 464d6b172..4b943d651 100644
--- a/pydicom/tests/test_filewriter.py
+++ b/pydicom/tests/test_filewriter.py
@@ -1129,6 +1129,46 @@ class TestWriteToStandard(object):
for elem_in, elem_out in zip(ds_explicit, ds_out):
assert elem_in == elem_out
+ def test_write_dataset(self):
+ # make sure writing and reading back a dataset works correctly
+ ds = dcmread(mr_implicit_name)
+ fp = DicomBytesIO()
+ write_dataset(fp, ds)
+ fp.seek(0)
+ ds_read = read_dataset(fp, is_implicit_VR=True, is_little_endian=True)
+ for elem_orig, elem_read in zip(ds_read, ds):
+ assert elem_orig == elem_read
+
+ def test_write_dataset_with_explicit_vr(self):
+ # make sure conversion from implicit to explicit VR does not
+ # raise (regression test for #632)
+ ds = dcmread(mr_implicit_name)
+ fp = DicomBytesIO()
+ fp.is_implicit_VR = False
+ fp.is_little_endian = True
+ write_dataset(fp, ds)
+ fp.seek(0)
+ ds_read = read_dataset(fp, is_implicit_VR=False, is_little_endian=True)
+ for elem_orig, elem_read in zip(ds_read, ds):
+ assert elem_orig == elem_read
+
+ def test_convert_implicit_to_explicit_vr_using_destination(self):
+ # make sure conversion from implicit to explicit VR works
+ # if setting the property in the destination
+ ds = dcmread(mr_implicit_name)
+ ds.is_implicit_VR = False
+ ds.file_meta.TransferSyntaxUID = '1.2.840.10008.1.2.1'
+ fp = DicomBytesIO()
+ fp.is_implicit_VR = False
+ fp.is_little_endian = True
+ ds.save_as(fp, write_like_original=False)
+ fp.seek(0)
+ ds_out = dcmread(fp)
+ ds_explicit = dcmread(mr_name)
+
+ for elem_in, elem_out in zip(ds_explicit, ds_out):
+ assert elem_in == elem_out
+
def test_convert_explicit_to_implicit_vr(self):
# make sure conversion from explicit to implicit VR works
# without private tags
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 1.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
-e git+https://github.com/pydicom/pydicom.git@fcc63f0b96fb370b0eb60b2c765b469ce62e597c#egg=pydicom
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: pydicom
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/pydicom
| [
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_write_dataset"
]
| [
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_raw_elements_preserved_implicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_raw_elements_preserved_explicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_changed_character_set"
]
| [
"pydicom/tests/test_filewriter.py::WriteFileTests::testCT",
"pydicom/tests/test_filewriter.py::WriteFileTests::testJPEG2000",
"pydicom/tests/test_filewriter.py::WriteFileTests::testListItemWriteBack",
"pydicom/tests/test_filewriter.py::WriteFileTests::testMR",
"pydicom/tests/test_filewriter.py::WriteFileTests::testMultiPN",
"pydicom/tests/test_filewriter.py::WriteFileTests::testRTDose",
"pydicom/tests/test_filewriter.py::WriteFileTests::testRTPlan",
"pydicom/tests/test_filewriter.py::WriteFileTests::testUnicode",
"pydicom/tests/test_filewriter.py::WriteFileTests::test_write_double_filemeta",
"pydicom/tests/test_filewriter.py::WriteFileTests::test_write_ffff_ffff",
"pydicom/tests/test_filewriter.py::WriteFileTests::test_write_no_ts",
"pydicom/tests/test_filewriter.py::WriteFileTests::test_write_removes_grouplength",
"pydicom/tests/test_filewriter.py::WriteFileTests::testwrite_short_uid",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testCT",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testJPEG2000",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testListItemWriteBack",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testMR",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testMultiPN",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testRTDose",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testRTPlan",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testUnicode",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::test_multivalue_DA",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::test_write_double_filemeta",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::test_write_ffff_ffff",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::test_write_no_ts",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::test_write_removes_grouplength",
"pydicom/tests/test_filewriter.py::ScratchWriteDateTimeTests::testwrite_short_uid",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_empty_AT",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_DA",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_DT",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_OD_explicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_OD_implicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_OL_explicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_OL_implicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_TM",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_UC_explicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_UC_implicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_UN_implicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_UR_explicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_UR_implicit_little",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_empty_LO",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_multi_DA",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_multi_DT",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_multi_TM",
"pydicom/tests/test_filewriter.py::WriteDataElementTests::test_write_unknown_vr_raises",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_lut_descriptor",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_overlay",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_pixel_data",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_pixel_representation_vm_one",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_pixel_representation_vm_three",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_sequence",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVR::test_waveform_bits_allocated",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVRElement::test_not_ambiguous",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVRElement::test_not_ambiguous_raw_data_element",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVRElement::test_correct_ambiguous_data_element",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVRElement::test_correct_ambiguous_raw_data_element",
"pydicom/tests/test_filewriter.py::TestCorrectAmbiguousVRElement::test_pixel_data_not_ow_or_ob",
"pydicom/tests/test_filewriter.py::WriteAmbiguousVRTests::test_write_explicit_vr_big_endian",
"pydicom/tests/test_filewriter.py::WriteAmbiguousVRTests::test_write_explicit_vr_little_endian",
"pydicom/tests/test_filewriter.py::WriteAmbiguousVRTests::test_write_explicit_vr_raises",
"pydicom/tests/test_filewriter.py::ScratchWriteTests::testImpl_LE_deflen_write",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_preamble_default",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_preamble_custom",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_no_preamble",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_none_preamble",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_bad_preamble",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_prefix",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_prefix_none",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_ds_changed",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_implicit_to_explicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_write_dataset_with_explicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_implicit_to_explicit_vr_using_destination",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_explicit_to_implicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_big_to_little_endian",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_little_to_big_endian",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_transfer_syntax_added",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_private_tag_vr_from_implicit_data",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_convert_rgb_from_implicit_to_explicit_vr",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_transfer_syntax_not_added",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_transfer_syntax_raises",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_media_storage_sop_class_uid_added",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_write_no_file_meta",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_raise_no_file_meta",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_add_file_meta",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_standard",
"pydicom/tests/test_filewriter.py::TestWriteToStandard::test_commandset_no_written",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_bad_elements",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_missing_elements",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_group_length",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_group_length_updated",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_version",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_implementation_version_name_length",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_implementation_class_uid_length",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoToStandard::test_filelike_position",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_commandset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_commandset_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_commandset_filemeta",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_commandset_filemeta_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_ds_unchanged",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_file_meta_unchanged",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_filemeta_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_no_preamble",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_commandset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_commandset_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_commandset_filemeta",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_commandset_filemeta_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_custom",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_default",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_preamble_filemeta_dataset",
"pydicom/tests/test_filewriter.py::TestWriteNonStandard::test_read_write_identical",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_bad_elements",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_filelike_position",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_group_length_updated",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_meta_unchanged",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_missing_elements",
"pydicom/tests/test_filewriter.py::TestWriteFileMetaInfoNonStandard::test_transfer_syntax_not_added",
"pydicom/tests/test_filewriter.py::TestWriteNumbers::test_write_empty_value",
"pydicom/tests/test_filewriter.py::TestWriteNumbers::test_write_list",
"pydicom/tests/test_filewriter.py::TestWriteNumbers::test_write_singleton",
"pydicom/tests/test_filewriter.py::TestWriteNumbers::test_exception",
"pydicom/tests/test_filewriter.py::TestWriteNumbers::test_write_big_endian",
"pydicom/tests/test_filewriter.py::TestWritePN::test_no_encoding_unicode",
"pydicom/tests/test_filewriter.py::TestWritePN::test_no_encoding",
"pydicom/tests/test_filewriter.py::TestWriteDT::test_format_dt",
"pydicom/tests/test_filewriter.py::TestWriteUndefinedLengthPixelData::test_big_endian_correct_data",
"pydicom/tests/test_filewriter.py::TestWriteUndefinedLengthPixelData::test_big_endian_incorrect_data",
"pydicom/tests/test_filewriter.py::TestWriteUndefinedLengthPixelData::test_little_endian_correct_data",
"pydicom/tests/test_filewriter.py::TestWriteUndefinedLengthPixelData::test_little_endian_incorrect_data"
]
| []
| MIT License | 2,452 | [
"pydicom/filewriter.py"
]
| [
"pydicom/filewriter.py"
]
|
automl__SMAC3-418 | a83dcd7169ea5141ca20cc47e25a38a423e2820d | 2018-04-27 11:27:11 | f710fa60dbf2c64e42ce14aa0eb529f92378560a | codecov-io: # [Codecov](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=h1) Report
> Merging [#418](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=desc) into [development](https://codecov.io/gh/automl/SMAC3/commit/2c259a0f3f67e43ee1674efb8c85d5ac8cec6e71?src=pr&el=desc) will **increase** coverage by `0.3%`.
> The diff coverage is `95.65%`.
[](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## development #418 +/- ##
==============================================
+ Coverage 88.12% 88.43% +0.3%
==============================================
Files 48 48
Lines 3150 3251 +101
==============================================
+ Hits 2776 2875 +99
- Misses 374 376 +2
```
| [Impacted Files](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [smac/facade/smac\_facade.py](https://codecov.io/gh/automl/SMAC3/pull/418/diff?src=pr&el=tree#diff-c21hYy9mYWNhZGUvc21hY19mYWNhZGUucHk=) | `93.25% <95.65%> (-0.24%)` | :arrow_down: |
| [smac/intensification/intensification.py](https://codecov.io/gh/automl/SMAC3/pull/418/diff?src=pr&el=tree#diff-c21hYy9pbnRlbnNpZmljYXRpb24vaW50ZW5zaWZpY2F0aW9uLnB5) | `94.02% <0%> (+0.23%)` | :arrow_up: |
| [smac/scenario/scenario.py](https://codecov.io/gh/automl/SMAC3/pull/418/diff?src=pr&el=tree#diff-c21hYy9zY2VuYXJpby9zY2VuYXJpby5weQ==) | `94.23% <0%> (+1.79%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=footer). Last update [2c259a0...dd76fd5](https://codecov.io/gh/automl/SMAC3/pull/418?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/README.md b/README.md
index 59ed9407c..797a06a81 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,12 @@
# SMAC v3 Project
-Copyright (C) 2017 [ML4AAD Group](http://www.ml4aad.org/)
+Copyright (C) 2016-2018 [ML4AAD Group](http://www.ml4aad.org/)
-__Attention__: This package is under heavy development and subject to change.
-A stable release of SMAC (v2) in Java can be found [here](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/).
+__Attention__: This package is a re-implementation of the original SMAC tool
+(see reference below).
+However, the reimplementation slightly differs from the original SMAC.
+For comparisons against the original SMAC, we refer to a stable release of SMAC (v2) in Java
+which can be found [here](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/).
The documentation can be found [here](https://automl.github.io/SMAC3/).
@@ -24,7 +27,7 @@ Status for development branch
SMAC is a tool for algorithm configuration to optimize the parameters of
arbitrary algorithms across a set of instances. This also includes
hyperparameter optimization of ML algorithms. The main core consists of
-Bayesian Optimization in combination with a simple racing mechanism to
+Bayesian Optimization in combination with an aggressive racing mechanism to
efficiently decide which of two configuration performs better.
For a detailed description of its main idea,
@@ -35,26 +38,41 @@ we refer to
In: Proceedings of the conference on Learning and Intelligent OptimizatioN (LION 5)
-SMAC v3 is written in python3 and continuously tested with python3.5 and
+SMAC v3 is written in Python3 and continuously tested with python3.5 and
python3.6. Its [Random Forest](https://github.com/automl/random_forest_run)
is written in C++.
# Installation
+## Requirements
+
Besides the listed requirements (see `requirements.txt`), the random forest
used in SMAC3 requires SWIG (>= 3.0).
- apt-get install swig
+```apt-get install swig```
+
+
+## Installation via pip
+
+SMAC3 is available on PyPI.
+
+```pip install smac```
+
+## Manual Installation
+
+```
+git clone https://github.com/automl/SMAC3.git && cd SMAC3
+cat requirements.txt | xargs -n 1 -L 1 pip install
+python setup.py install
+```
+
+## Installation in Anaconda
- cat requirements.txt | xargs -n 1 -L 1 pip install
-
- python setup.py install
-
If you use Anaconda as your Python environment, you have to install three
-packages before you can install SMAC:
+packages **before** you can install SMAC:
+
+```conda install gxx_linux-64 gcc_linux-64 swig```
- conda install gxx_linux-64 gcc_linux-64 swig
-
# License
This program is free software: you can redistribute it and/or modify
@@ -64,28 +82,27 @@ This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-You should have received a copy of the 3-clause BSD license
-along with this program (see LICENSE file).
+You should have received a copy of the 3-clause BSD license
+along with this program (see LICENSE file).
If not, see <https://opensource.org/licenses/BSD-3-Clause>.
# USAGE
The usage of SMAC v3 is mainly the same as provided with [SMAC v2.08](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/v2.08.00/manual.pdf).
-It supports the same parameter configuration space syntax and interface to
-target algorithms. Please note that we do not support the extended parameter
-configuration syntax introduced in SMACv2.10.
+It supports the same parameter configuration space syntax
+(except for extended forbidden constraints) and interface to
+target algorithms.
# Examples
See examples/
* examples/rosenbrock.py - example on how to optimize a Python function
- (REQUIRES [PYNISHER](https://github.com/sfalkner/pynisher) )
* examples/spear_qcp/run.sh - example on how to optimize the SAT solver Spear
on a set of SAT formulas
-
+
# Contact
-
-SMAC v3 is developed by the [ML4AAD Group of the University of Freiburg](http://www.ml4aad.org/).
+
+SMAC3 is developed by the [ML4AAD Group of the University of Freiburg](http://www.ml4aad.org/).
If you found a bug, please report to https://github.com/automl/SMAC3
diff --git a/doc/options.rst b/doc/options.rst
index 88272026c..2c0383116 100644
--- a/doc/options.rst
+++ b/doc/options.rst
@@ -47,6 +47,8 @@ The Parameter Configuration Space (PCS) defines the legal ranges of the
parameters to be optimized and their default values. In the examples-folder you
can find several examples for PCS-files. Generally, the format is:
+To define parameters and their ranges, the following format is supported:
+
.. code-block:: bash
parameter_name categorical {value_1, ..., value_N} [default value]
@@ -56,14 +58,29 @@ can find several examples for PCS-files. Generally, the format is:
parameter_name real [min_value, max_value] [default value]
parameter_name real [min_value, max_value] [default value] log
+The trailing "log" indicates that SMAC should sample from the defined ranges
+on a log scale.
+
+Furthermore, conditional dependencies can be expressed. That is useful if
+a parameter activates sub-parameters. For example, only if a certain heuristic
+is used, the heuristic's parameters are active and otherwise SMAC can ignore these.
+
+.. code-block:: bash
+
# Conditionals:
child_name | condition [&&,||] condition ...
- # Condition Operators:
+ # Condition Operators:
# parent_x [<, >] parent_x_value (if parameter type is ordinal, integer or real)
# parent_x [==,!=] parent_x_value (if parameter type is categorical, ordinal or integer)
# parent_x in {parent_x_value1, parent_x_value2,...}
+Forbidden constraints allow for specifications of forbidden combinations of
+parameter values. Please note that SMAC uses a simple rejection sampling
+strategy. Therefore, SMAC cannot efficiently handle highly constrained spaces.
+
+.. code-block:: bash
+
# Forbiddens:
{parameter_name_1=value_1, ..., parameter_name_N=value_N}
diff --git a/examples/rosenbrock.py b/examples/rosenbrock.py
index bf45ed486..5a023806e 100644
--- a/examples/rosenbrock.py
+++ b/examples/rosenbrock.py
@@ -27,6 +27,6 @@ x, cost, _ = fmin_smac(func=rosenbrock_2d,
x0=[-3, -4],
bounds=[(-5, 5), (-5, 5)],
maxfun=325,
- rng=3)
+                     rng=3)     # Passing a seed makes fmin_smac deterministic
print("Best x: %s; with cost: %f"% (str(x), cost))
diff --git a/smac/facade/smac_facade.py b/smac/facade/smac_facade.py
index 48f86fb98..518d4de5c 100644
--- a/smac/facade/smac_facade.py
+++ b/smac/facade/smac_facade.py
@@ -1,6 +1,5 @@
import logging
import os
-import shutil
import typing
import numpy as np
@@ -27,7 +26,7 @@ from smac.optimizer.acquisition import EI, LogEI, AbstractAcquisitionFunction
from smac.optimizer.ei_optimization import InterleavedLocalAndRandomSearch, \
AcquisitionFunctionMaximizer
from smac.optimizer.random_configuration_chooser import ChooserNoCoolDown, \
- ChooserLinearCoolDown
+ ChooserLinearCoolDown, RandomConfigurationChooser
from smac.epm.rf_with_instances import RandomForestWithInstances
from smac.epm.rfr_imputator import RFRImputator
from smac.epm.base_epm import AbstractEPM
@@ -59,21 +58,21 @@ class SMAC(object):
def __init__(self,
scenario: Scenario,
- tae_runner: typing.Union[ExecuteTARun, typing.Callable]=None,
- runhistory: RunHistory=None,
- intensifier: Intensifier=None,
- acquisition_function: AbstractAcquisitionFunction=None,
- acquisition_function_optimizer: AcquisitionFunctionMaximizer=None,
- model: AbstractEPM=None,
- runhistory2epm: AbstractRunHistory2EPM=None,
- initial_design: InitialDesign=None,
- initial_configurations: typing.List[Configuration]=None,
- stats: Stats=None,
- restore_incumbent: Configuration=None,
- rng: typing.Union[np.random.RandomState, int]=None,
- smbo_class: SMBO=None,
- run_id: int=1,
- random_configuration_chooser=None):
+ tae_runner: typing.Optional[typing.Union[ExecuteTARun, typing.Callable]]=None,
+ runhistory: typing.Optional[RunHistory]=None,
+ intensifier: typing.Optional[Intensifier]=None,
+ acquisition_function: typing.Optional[AbstractAcquisitionFunction]=None,
+ acquisition_function_optimizer: typing.Optional[AcquisitionFunctionMaximizer]=None,
+ model: typing.Optional[AbstractEPM]=None,
+ runhistory2epm: typing.Optional[AbstractRunHistory2EPM]=None,
+ initial_design: typing.Optional[InitialDesign]=None,
+ initial_configurations: typing.Optional[typing.List[Configuration]]=None,
+ stats: typing.Optional[Stats]=None,
+ restore_incumbent: typing.Optional[Configuration]=None,
+ rng: typing.Optional[typing.Union[np.random.RandomState, int]]=None,
+ smbo_class: typing.Optional[SMBO]=None,
+ run_id: typing.Optional[int]=None,
+ random_configuration_chooser: typing.Optional[RandomConfigurationChooser]=None):
"""Constructor
Parameters
@@ -121,11 +120,11 @@ class SMAC(object):
smbo_class : ~smac.optimizer.smbo.SMBO
Class implementing the SMBO interface which will be used to
instantiate the optimizer class.
- run_id: int, (default: 1)
- Run ID will be used as subfolder for output_dir.
- random_configuration_chooser
- when to choose a random configuration -- one of
- ChooserNoCoolDown, ChooserLinearCoolDown
+ run_id : int (optional)
+ Run ID will be used as subfolder for output_dir. If no ``run_id`` is given, a random ``run_id`` will be
+ chosen.
+ random_configuration_chooser : ~smac.optimizer.random_configuration_chooser.RandomConfigurationChooser
+ How often to choose a random configuration during the intensification procedure.
"""
self.logger = logging.getLogger(
@@ -136,16 +135,21 @@ class SMAC(object):
self.scenario = scenario
self.output_dir = ""
if not restore_incumbent:
+ # restore_incumbent is used by the CLI interface which provides a method for restoring a SMAC run given an
+ # output directory. This is the default path.
+ # initial random number generator
+ run_id, rng = self._get_rng(rng=rng, run_id=run_id)
self.output_dir = create_output_directory(scenario, run_id)
elif scenario.output_dir is not None:
+ run_id, rng = self._get_rng(rng=rng, run_id=run_id)
# output-directory is created in CLI when restoring from a
# folder. calling the function again in the facade results in two
# folders being created: run_X and run_X.OLD. if we are
# restoring, the output-folder exists already and we omit creating it,
# but set the self-output_dir to the dir.
# necessary because we want to write traj to new output-dir in CLI.
- self.output_dir = os.path.join(scenario.output_dir,
- "run_%d" % (run_id))
+ self.output_dir = scenario.output_dir_for_this_run
+
if (
scenario.deterministic is True
and getattr(scenario, 'tuner_timeout', None) is None
@@ -170,9 +174,6 @@ class SMAC(object):
if runhistory.aggregate_func is None:
runhistory.aggregate_func = aggregate_func
- # initial random number generator
- num_run, rng = self._get_rng(rng=rng)
-
random_configuration_chooser = SMAC._get_random_configuration_chooser(
random_configuration_chooser=random_configuration_chooser)
@@ -186,7 +187,7 @@ class SMAC(object):
# initial EPM
types, bounds = get_types(scenario.cs, scenario.feature_array)
if model is None:
- model = RandomForestWithInstances(types=types,
+ model = RandomForestWithInstances(types=types,
bounds=bounds,
instance_features=scenario.feature_array,
seed=rng.randint(MAXINT),
@@ -354,7 +355,7 @@ class SMAC(object):
elif scenario.run_obj == 'quality':
runhistory2epm = RunHistory2EPM4Cost(scenario=scenario, num_params=num_params,
success_states=[
- StatusType.SUCCESS,
+ StatusType.SUCCESS,
StatusType.CRASHED],
impute_censored_data=False, impute_state=None)
@@ -374,7 +375,7 @@ class SMAC(object):
'runhistory2epm': runhistory2epm,
'intensifier': intensifier,
'aggregate_func': aggregate_func,
- 'num_run': num_run,
+ 'num_run': run_id,
'model': model,
'acq_optimizer': acquisition_function_optimizer,
'acquisition_func': acquisition_function,
@@ -387,36 +388,61 @@ class SMAC(object):
else:
self.solver = smbo_class(**smbo_args)
- def _get_rng(self, rng):
- """Initialize random number generator
+ def _get_rng(
+ self,
+ rng: typing.Optional[typing.Union[int, np.random.RandomState]]=None,
+ run_id: typing.Optional[int]=None,
+ ) -> typing.Tuple[int, np.random.RandomState]:
+ """Initialize random number generator and set run_id
- If rng is None, initialize a new generator
- If rng is Int, create RandomState from that
- If rng is RandomState, return it
+ * If rng and run_id are None, initialize a new generator and sample a run_id
+ * If rng is None and a run_id is given, use the run_id to initialize the rng
+ * If rng is an int, a RandomState object is created from that.
+ * If rng is RandomState, return it
+ * If only run_id is None, a run_id is sampled from the random state.
Parameters
----------
- rng: np.random.RandomState|int|None
+ rng : np.random.RandomState|int|None
+
+ run_id : int, optional
Returns
-------
- int, np.random.RandomState
+ int
+ np.random.RandomState
"""
# initialize random number generator
- if rng is None:
- self.logger.debug('no rng given: using default seed of 1')
- num_run = 1
- rng = np.random.RandomState(seed=num_run)
+ if rng is not None and not isinstance(rng, (int, np.random.RandomState)):
+ raise TypeError('Argument rng accepts only arguments of type None, int or np.random.RandomState, '
+ 'you provided %s.' % str(type(rng)))
+ if run_id is not None and not isinstance(run_id, int):
+ raise TypeError('Argument run_id accepts only arguments of type None, int or np.random.RandomState, '
+ 'you provided %s.' % str(type(run_id)))
+
+ if rng is None and run_id is None:
+ # Case that both are None
+ self.logger.debug('No rng and no run_id given: using a random value to initialize run_id.')
+ rng = np.random.RandomState()
+ run_id = rng.randint(MAXINT)
+ elif rng is None and isinstance(run_id, int):
+ self.logger.debug('No rng and no run_id given: using run_id %d as seed.', run_id)
+ rng = np.random.RandomState(seed=run_id)
elif isinstance(rng, int):
- num_run = rng
+ if run_id is None:
+ run_id = rng
+ else:
+ pass
rng = np.random.RandomState(seed=rng)
elif isinstance(rng, np.random.RandomState):
- num_run = rng.randint(MAXINT)
- rng = rng
+ if run_id is None:
+ run_id = rng.randint(MAXINT)
+ else:
+ pass
else:
- raise TypeError('Unknown type %s for argument rng. Only accepts '
- 'None, int or np.random.RandomState' % str(type(rng)))
- return num_run, rng
+ raise ValueError('This should not happen! Please contact the developers! Arguments: rng=%s of type %s and '
+ 'run_id=% of type %s' % (rng, type(rng), run_id, type(run_id)))
+ return run_id, rng
@staticmethod
def _get_random_configuration_chooser(random_configuration_chooser):
@@ -449,7 +475,7 @@ class SMAC(object):
finally:
self.solver.stats.save()
self.solver.stats.print_stats()
- self.logger.info("Final Incumbent: %s" % (self.solver.incumbent))
+ self.logger.info("Final Incumbent: %s", (self.solver.incumbent))
self.runhistory = self.solver.runhistory
self.trajectory = self.solver.intensifier.traj_logger.trajectory
| Add seed to output directory name
To keep an overview of what has been run, one would sometimes like to know the seed of previous runs. One solution would be to store it in the output directory name, but this is not as easy as it sounds, because the scenario object does not know the seed of the SMAC object.
| automl/SMAC3
| diff --git a/test/test_facade/test_smac_facade.py b/test/test_facade/test_smac_facade.py
index 81b7e0f55..bc66013e5 100644
--- a/test/test_facade/test_smac_facade.py
+++ b/test/test_facade/test_smac_facade.py
@@ -2,6 +2,7 @@ from contextlib import suppress
import os
import shutil
import unittest
+import unittest.mock
import numpy as np
from ConfigSpace.hyperparameters import UniformFloatHyperparameter
@@ -57,59 +58,91 @@ class TestSMACFacade(unittest.TestCase):
self.assertIs(smac.solver.intensifier.tae_runner.ta, target_algorithm)
def test_pass_invalid_tae_runner(self):
- self.assertRaisesRegexp(TypeError, "Argument 'tae_runner' is <class "
- "'int'>, but must be either a "
- "callable or an instance of "
- "ExecuteTaRun.",
- SMAC, tae_runner=1, scenario=self.scenario)
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument 'tae_runner' is <class 'int'>, but must be either a callable or an instance of ExecuteTaRun.",
+ SMAC,
+ tae_runner=1,
+ scenario=self.scenario,
+ )
def test_pass_tae_runner_objective(self):
- tae = ExecuteTAFuncDict(lambda: 1,
- run_obj='runtime')
- self.assertRaisesRegexp(ValueError, "Objective for the target algorithm"
- " runner and the scenario must be "
- "the same, but are 'runtime' and "
- "'quality'",
- SMAC, tae_runner=tae, scenario=self.scenario)
-
- def test_check_random_states(self):
- ta = ExecuteTAFuncDict(lambda x: x**2)
-
- # Get state immediately or it will change with the next calltest_check_random_states
-
+ tae = ExecuteTAFuncDict(lambda: 1, run_obj='runtime')
+ self.assertRaisesRegex(
+ ValueError,
+ "Objective for the target algorithm runner and the scenario must be the same, but are 'runtime' and "
+ "'quality'",
+ SMAC,
+ tae_runner=tae,
+ scenario=self.scenario,
+ )
+
+ @unittest.mock.patch.object(SMAC, '__init__')
+ def test_check_random_states(self, patch):
+ patch.return_value = None
+ smac = SMAC()
+ smac.logger = unittest.mock.MagicMock()
+
+ # Check some properties
# Check whether different seeds give different random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario, rng=2)
- S2 = S2.solver.scenario.cs.random
- self.assertNotEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether no seeds give the same random states (use default seed)
- S1 = SMAC(tae_runner=ta, scenario=self.scenario)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario)
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether the same seeds give the same random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether the same RandomStates give the same random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario,
- rng=np.random.RandomState(1))
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario,
- rng=np.random.RandomState(1))
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
+ _, rng_1 = smac._get_rng(1)
+ _, rng_2 = smac._get_rng(2)
+ self.assertNotEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check whether no seeds gives different random states
+ _, rng_1 = smac._get_rng()
+ self.assertEqual(smac.logger.debug.call_count, 1)
+ _, rng_2 = smac._get_rng()
+ self.assertEqual(smac.logger.debug.call_count, 2)
+
+ self.assertNotEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check whether the same int seeds give the same random states
+ _, rng_1 = smac._get_rng(1)
+ _, rng_2 = smac._get_rng(1)
+ self.assertEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check all execution paths
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument rng accepts only arguments of type None, int or np.random.RandomState, "
+ "you provided <class 'str'>.",
+ smac._get_rng,
+ rng='ABC',
+ )
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument run_id accepts only arguments of type None, int or np.random.RandomState, "
+ "you provided <class 'str'>.",
+ smac._get_rng,
+ run_id='ABC'
+ )
+
+ run_id, rng_1 = smac._get_rng(rng=None, run_id=None)
+ self.assertIsInstance(run_id, int)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+ self.assertEqual(smac.logger.debug.call_count, 3)
+
+ run_id, rng_1 = smac._get_rng(rng=None, run_id=1)
+ self.assertEqual(run_id, 1)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ run_id, rng_1 = smac._get_rng(rng=1, run_id=None)
+ self.assertEqual(run_id, 1)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ run_id, rng_1 = smac._get_rng(rng=1, run_id=1337)
+ self.assertEqual(run_id, 1337)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ rs = np.random.RandomState(1)
+ run_id, rng_1 = smac._get_rng(rng=rs, run_id=None)
+ self.assertIsInstance(run_id, int)
+ self.assertIs(rng_1, rs)
+
+ run_id, rng_1 = smac._get_rng(rng=rs, run_id=2505)
+ self.assertEqual(run_id, 2505)
+ self.assertIs(rng_1, rs)
def test_check_deterministic_rosenbrock(self):
def rosenbrock_2d(x):
diff --git a/test/test_smbo/test_smbo.py b/test/test_smbo/test_smbo.py
index cdb03c72d..d38bc36b5 100644
--- a/test/test_smbo/test_smbo.py
+++ b/test/test_smbo/test_smbo.py
@@ -93,11 +93,14 @@ class TestSMBO(unittest.TestCase):
self.assertIsInstance(smbo.num_run, int)
self.assertIs(smbo.rng, rng)
# ML: I don't understand the following line and it throws an error
- self.assertRaisesRegexp(TypeError,
- "Unknown type <(class|type) 'str'> for argument "
- 'rng. Only accepts None, int or '
- 'np.random.RandomState',
- SMAC, self.scenario, rng='BLA')
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument rng accepts only arguments of type None, int or np.random.RandomState, you provided "
+ "<class 'str'>.",
+ SMAC,
+ self.scenario,
+ rng='BLA',
+ )
def test_choose_next(self):
seed = 42
@@ -110,7 +113,7 @@ class TestSMBO(unittest.TestCase):
Y = self.branin(X)
x = next(smbo.choose_next(X, Y)).get_array()
assert x.shape == (2,)
-
+
def test_choose_next_w_empty_rh(self):
seed = 42
smbo = SMAC(self.scenario, rng=seed).solver
@@ -126,9 +129,9 @@ class TestSMBO(unittest.TestCase):
**{"X":X, "Y":Y}
)
- x = next(smbo.choose_next(X, Y, incumbent_value=0.0)).get_array()
+ x = next(smbo.choose_next(X, Y, incumbent_value=0.0)).get_array()
assert x.shape == (2,)
-
+
def test_choose_next_empty_X(self):
smbo = SMAC(self.scenario, rng=1).solver
smbo.acquisition_func._compute = mock.Mock(
@@ -146,7 +149,7 @@ class TestSMBO(unittest.TestCase):
self.assertEqual(x, [0, 1, 2])
self.assertEqual(smbo._random_search.maximize.call_count, 1)
self.assertEqual(smbo.acquisition_func._compute.call_count, 0)
-
+
def test_choose_next_empty_X_2(self):
smbo = SMAC(self.scenario, rng=1).solver
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 4
}
| 0.8
| {
"env_vars": null,
"env_yml_path": null,
"install": "python setup.py install",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y build-essential swig"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
}
| alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
ConfigSpace==0.4.19
Cython==3.0.12
docutils==0.18.1
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
joblib==1.1.1
MarkupSafe==2.0.1
nose==1.3.7
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
psutil==7.0.0
py==1.11.0
Pygments==2.14.0
pynisher==0.6.4
pyparsing==3.1.4
pyrfr==0.8.2
pytest==7.0.1
pytz==2025.2
requests==2.27.1
scikit-learn==0.24.2
scipy==1.5.4
six==1.17.0
smac==0.8.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
threadpoolctl==3.1.0
tomli==1.2.3
typing==3.7.4.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: SMAC3
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- configspace==0.4.19
- cython==3.0.12
- docutils==0.18.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- joblib==1.1.1
- markupsafe==2.0.1
- nose==1.3.7
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- psutil==7.0.0
- py==1.11.0
- pygments==2.14.0
- pynisher==0.6.4
- pyparsing==3.1.4
- pyrfr==0.8.2
- pytest==7.0.1
- pytz==2025.2
- requests==2.27.1
- scikit-learn==0.24.2
- scipy==1.5.4
- six==1.17.0
- smac==0.8.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- threadpoolctl==3.1.0
- tomli==1.2.3
- typing==3.7.4.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/SMAC3
| [
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_check_random_states",
"test/test_smbo/test_smbo.py::TestSMBO::test_rng"
]
| []
| [
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_check_deterministic_rosenbrock",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_get_runhistory_and_trajectory_and_tae_runner",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_inject_dependencies",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_inject_stats_and_runhistory_object_to_TAE",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_no_output",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_output_structure",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_callable",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_invalid_tae_runner",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_tae_runner_objective",
"test/test_smbo/test_smbo.py::TestSMBO::test_abort_on_initial_design",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_2",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_3",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_empty_X",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_empty_X_2",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_w_empty_rh",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_EIPS_as_arguments",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_only_scenario_quality",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_only_scenario_runtime",
"test/test_smbo/test_smbo.py::TestSMBO::test_intensification_percentage",
"test/test_smbo/test_smbo.py::TestSMBO::test_validation"
]
| []
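The seed/run-id resolution rules that the patch above introduces in `SMAC._get_rng` (and that the `test_check_random_states` test exercises) can be sketched as a standalone function. This is a simplified re-implementation for illustration only: it substitutes the standard library's `random.Random` for `np.random.RandomState`, and `get_rng`/`MAXINT` here are local stand-ins, not SMAC's actual objects.

```python
import random
from typing import Optional, Tuple, Union

MAXINT = 2 ** 31 - 1


def get_rng(rng: Optional[Union[int, random.Random]] = None,
            run_id: Optional[int] = None) -> Tuple[int, random.Random]:
    # Mirrors the rules from the patched _get_rng docstring.
    if rng is not None and not isinstance(rng, (int, random.Random)):
        raise TypeError("rng must be None, int or a generator, got %s" % type(rng))
    if run_id is not None and not isinstance(run_id, int):
        raise TypeError("run_id must be None or int, got %s" % type(run_id))

    if rng is None and run_id is None:
        # Neither given: sample a random run_id from a fresh generator.
        rng = random.Random()
        run_id = rng.randrange(MAXINT)
    elif rng is None:
        # Only run_id given: use it as the seed.
        rng = random.Random(run_id)
    elif isinstance(rng, int):
        # Int seed: reuse it as run_id unless one was passed explicitly.
        if run_id is None:
            run_id = rng
        rng = random.Random(rng)
    else:
        # Already a generator: sample a run_id from it if necessary.
        if run_id is None:
            run_id = rng.randrange(MAXINT)
    return run_id, rng
```

With these rules, passing only an int seed makes the run reproducible and reuses the seed as the run id, while passing nothing yields a randomly sampled run id instead of the old hard-coded default of 1.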
| BSD 3-Clause License
| 2,454
| [
"doc/options.rst",
"README.md",
"smac/facade/smac_facade.py",
"examples/rosenbrock.py"
]
| [
"doc/options.rst",
"README.md",
"smac/facade/smac_facade.py",
"examples/rosenbrock.py"
]
|
automl__SMAC3-420 | a83dcd7169ea5141ca20cc47e25a38a423e2820d | 2018-04-27 12:23:01 | f710fa60dbf2c64e42ce14aa0eb529f92378560a
| diff --git a/README.md b/README.md
index 59ed9407c..797a06a81 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,12 @@
# SMAC v3 Project
-Copyright (C) 2017 [ML4AAD Group](http://www.ml4aad.org/)
+Copyright (C) 2016-2018 [ML4AAD Group](http://www.ml4aad.org/)
-__Attention__: This package is under heavy development and subject to change.
-A stable release of SMAC (v2) in Java can be found [here](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/).
+__Attention__: This package is a re-implementation of the original SMAC tool
+(see reference below).
+However, the reimplementation slightly differs from the original SMAC.
+For comparisons against the original SMAC, we refer to a stable release of SMAC (v2) in Java
+which can be found [here](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/).
The documentation can be found [here](https://automl.github.io/SMAC3/).
@@ -24,7 +27,7 @@ Status for development branch
SMAC is a tool for algorithm configuration to optimize the parameters of
arbitrary algorithms across a set of instances. This also includes
hyperparameter optimization of ML algorithms. The main core consists of
-Bayesian Optimization in combination with a simple racing mechanism to
+Bayesian Optimization in combination with a aggressive racing mechanism to
efficiently decide which of two configuration performs better.
For a detailed description of its main idea,
@@ -35,26 +38,41 @@ we refer to
In: Proceedings of the conference on Learning and Intelligent OptimizatioN (LION 5)
-SMAC v3 is written in python3 and continuously tested with python3.5 and
+SMAC v3 is written in Python3 and continuously tested with python3.5 and
python3.6. Its [Random Forest](https://github.com/automl/random_forest_run)
is written in C++.
# Installation
+## Requirements
+
Besides the listed requirements (see `requirements.txt`), the random forest
used in SMAC3 requires SWIG (>= 3.0).
- apt-get install swig
+```apt-get install swig```
+
+
+## Installation via pip
+
+SMAC3 is available on pipy.
+
+```pip install smac```
+
+## Manual Installation
+
+```
+git clone https://github.com/automl/SMAC3.git && cd SMAC3
+cat requirements.txt | xargs -n 1 -L 1 pip install
+python setup.py install
+```
+
+## Installation in Anaconda
- cat requirements.txt | xargs -n 1 -L 1 pip install
-
- python setup.py install
-
If you use Anaconda as your Python environment, you have to install three
-packages before you can install SMAC:
+packages **before** you can install SMAC:
+
+```conda install gxx_linux-64 gcc_linux-64 swig```
- conda install gxx_linux-64 gcc_linux-64 swig
-
# License
This program is free software: you can redistribute it and/or modify
@@ -64,28 +82,27 @@ This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-You should have received a copy of the 3-clause BSD license
-along with this program (see LICENSE file).
+You should have received a copy of the 3-clause BSD license
+along with this program (see LICENSE file).
If not, see <https://opensource.org/licenses/BSD-3-Clause>.
# USAGE
The usage of SMAC v3 is mainly the same as provided with [SMAC v2.08](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/v2.08.00/manual.pdf).
-It supports the same parameter configuration space syntax and interface to
-target algorithms. Please note that we do not support the extended parameter
-configuration syntax introduced in SMACv2.10.
+It supports the same parameter configuration space syntax
+(except for extended forbidden constraints) and interface to
+target algorithms.
# Examples
See examples/
* examples/rosenbrock.py - example on how to optimize a Python function
- (REQUIRES [PYNISHER](https://github.com/sfalkner/pynisher) )
* examples/spear_qcp/run.sh - example on how to optimize the SAT solver Spear
on a set of SAT formulas
-
+
# Contact
-
-SMAC v3 is developed by the [ML4AAD Group of the University of Freiburg](http://www.ml4aad.org/).
+
+SMAC3 is developed by the [ML4AAD Group of the University of Freiburg](http://www.ml4aad.org/).
If you found a bug, please report to https://github.com/automl/SMAC3
diff --git a/doc/options.rst b/doc/options.rst
index 88272026c..2c0383116 100644
--- a/doc/options.rst
+++ b/doc/options.rst
@@ -47,6 +47,8 @@ The Parameter Configuration Space (PCS) defines the legal ranges of the
parameters to be optimized and their default values. In the examples-folder you
can find several examples for PCS-files. Generally, the format is:
+To define parameters and their ranges, the following format is supported:
+
.. code-block:: bash
parameter_name categorical {value_1, ..., value_N} [default value]
@@ -56,14 +58,29 @@ can find several examples for PCS-files. Generally, the format is:
parameter_name real [min_value, max_value] [default value]
parameter_name real [min_value, max_value] [default value] log
+The trailing "log" indicates that SMAC should sample from the defined ranges
+on a log scale.
+
+Furthermore, conditional dependencies can be expressed. That is useful if
+a parameter activates sub-parameters. For example, only if a certain heuristic
+is used, the heuristic's parameter are active and otherwise SMAC can ignore these.
+
+.. code-block:: bash
+
# Conditionals:
child_name | condition [&&,||] condition ...
- # Condition Operators:
+ # Condition Operators:
# parent_x [<, >] parent_x_value (if parameter type is ordinal, integer or real)
# parent_x [==,!=] parent_x_value (if parameter type is categorical, ordinal or integer)
# parent_x in {parent_x_value1, parent_x_value2,...}
+Forbidden constraints allow for specifications of forbidden combinations of
+parameter values. Please note that SMAC uses a simple rejection sampling
+strategy. Therefore, SMAC cannot handle efficiently highly constrained spaces.
+
+.. code-block:: bash
+
# Forbiddens:
{parameter_name_1=value_1, ..., parameter_name_N=value_N}
diff --git a/examples/rosenbrock.py b/examples/rosenbrock.py
index bf45ed486..5a023806e 100644
--- a/examples/rosenbrock.py
+++ b/examples/rosenbrock.py
@@ -27,6 +27,6 @@ x, cost, _ = fmin_smac(func=rosenbrock_2d,
x0=[-3, -4],
bounds=[(-5, 5), (-5, 5)],
maxfun=325,
- rng=3)
+ rng=3) # Passing a seed makes fmin_smac determistic
print("Best x: %s; with cost: %f"% (str(x), cost))
diff --git a/smac/facade/smac_facade.py b/smac/facade/smac_facade.py
index 48f86fb98..518d4de5c 100644
--- a/smac/facade/smac_facade.py
+++ b/smac/facade/smac_facade.py
@@ -1,6 +1,5 @@
import logging
import os
-import shutil
import typing
import numpy as np
@@ -27,7 +26,7 @@ from smac.optimizer.acquisition import EI, LogEI, AbstractAcquisitionFunction
from smac.optimizer.ei_optimization import InterleavedLocalAndRandomSearch, \
AcquisitionFunctionMaximizer
from smac.optimizer.random_configuration_chooser import ChooserNoCoolDown, \
- ChooserLinearCoolDown
+ ChooserLinearCoolDown, RandomConfigurationChooser
from smac.epm.rf_with_instances import RandomForestWithInstances
from smac.epm.rfr_imputator import RFRImputator
from smac.epm.base_epm import AbstractEPM
@@ -59,21 +58,21 @@ class SMAC(object):
def __init__(self,
scenario: Scenario,
- tae_runner: typing.Union[ExecuteTARun, typing.Callable]=None,
- runhistory: RunHistory=None,
- intensifier: Intensifier=None,
- acquisition_function: AbstractAcquisitionFunction=None,
- acquisition_function_optimizer: AcquisitionFunctionMaximizer=None,
- model: AbstractEPM=None,
- runhistory2epm: AbstractRunHistory2EPM=None,
- initial_design: InitialDesign=None,
- initial_configurations: typing.List[Configuration]=None,
- stats: Stats=None,
- restore_incumbent: Configuration=None,
- rng: typing.Union[np.random.RandomState, int]=None,
- smbo_class: SMBO=None,
- run_id: int=1,
- random_configuration_chooser=None):
+ tae_runner: typing.Optional[typing.Union[ExecuteTARun, typing.Callable]]=None,
+ runhistory: typing.Optional[RunHistory]=None,
+ intensifier: typing.Optional[Intensifier]=None,
+ acquisition_function: typing.Optional[AbstractAcquisitionFunction]=None,
+ acquisition_function_optimizer: typing.Optional[AcquisitionFunctionMaximizer]=None,
+ model: typing.Optional[AbstractEPM]=None,
+ runhistory2epm: typing.Optional[AbstractRunHistory2EPM]=None,
+ initial_design: typing.Optional[InitialDesign]=None,
+ initial_configurations: typing.Optional[typing.List[Configuration]]=None,
+ stats: typing.Optional[Stats]=None,
+ restore_incumbent: typing.Optional[Configuration]=None,
+ rng: typing.Optional[typing.Union[np.random.RandomState, int]]=None,
+ smbo_class: typing.Optional[SMBO]=None,
+ run_id: typing.Optional[int]=None,
+ random_configuration_chooser: typing.Optional[RandomConfigurationChooser]=None):
"""Constructor
Parameters
@@ -121,11 +120,11 @@ class SMAC(object):
smbo_class : ~smac.optimizer.smbo.SMBO
Class implementing the SMBO interface which will be used to
instantiate the optimizer class.
- run_id: int, (default: 1)
- Run ID will be used as subfolder for output_dir.
- random_configuration_chooser
- when to choose a random configuration -- one of
- ChooserNoCoolDown, ChooserLinearCoolDown
+ run_id : int (optional)
+ Run ID will be used as subfolder for output_dir. If no ``run_id`` is given, a random ``run_id`` will be
+ chosen.
+ random_configuration_chooser : ~smac.optimizer.random_configuration_chooser.RandomConfigurationChooser
+ How often to choose a random configuration during the intensification procedure.
"""
self.logger = logging.getLogger(
@@ -136,16 +135,21 @@ class SMAC(object):
self.scenario = scenario
self.output_dir = ""
if not restore_incumbent:
+ # restore_incumbent is used by the CLI interface which provides a method for restoring a SMAC run given an
+ # output directory. This is the default path.
+ # initial random number generator
+ run_id, rng = self._get_rng(rng=rng, run_id=run_id)
self.output_dir = create_output_directory(scenario, run_id)
elif scenario.output_dir is not None:
+ run_id, rng = self._get_rng(rng=rng, run_id=run_id)
# output-directory is created in CLI when restoring from a
# folder. calling the function again in the facade results in two
# folders being created: run_X and run_X.OLD. if we are
# restoring, the output-folder exists already and we omit creating it,
# but set the self-output_dir to the dir.
# necessary because we want to write traj to new output-dir in CLI.
- self.output_dir = os.path.join(scenario.output_dir,
- "run_%d" % (run_id))
+ self.output_dir = scenario.output_dir_for_this_run
+
if (
scenario.deterministic is True
and getattr(scenario, 'tuner_timeout', None) is None
@@ -170,9 +174,6 @@ class SMAC(object):
if runhistory.aggregate_func is None:
runhistory.aggregate_func = aggregate_func
- # initial random number generator
- num_run, rng = self._get_rng(rng=rng)
-
random_configuration_chooser = SMAC._get_random_configuration_chooser(
random_configuration_chooser=random_configuration_chooser)
@@ -186,7 +187,7 @@ class SMAC(object):
# initial EPM
types, bounds = get_types(scenario.cs, scenario.feature_array)
if model is None:
- model = RandomForestWithInstances(types=types,
+ model = RandomForestWithInstances(types=types,
bounds=bounds,
instance_features=scenario.feature_array,
seed=rng.randint(MAXINT),
@@ -354,7 +355,7 @@ class SMAC(object):
elif scenario.run_obj == 'quality':
runhistory2epm = RunHistory2EPM4Cost(scenario=scenario, num_params=num_params,
success_states=[
- StatusType.SUCCESS,
+ StatusType.SUCCESS,
StatusType.CRASHED],
impute_censored_data=False, impute_state=None)
@@ -374,7 +375,7 @@ class SMAC(object):
'runhistory2epm': runhistory2epm,
'intensifier': intensifier,
'aggregate_func': aggregate_func,
- 'num_run': num_run,
+ 'num_run': run_id,
'model': model,
'acq_optimizer': acquisition_function_optimizer,
'acquisition_func': acquisition_function,
@@ -387,36 +388,61 @@ class SMAC(object):
else:
self.solver = smbo_class(**smbo_args)
- def _get_rng(self, rng):
- """Initialize random number generator
+ def _get_rng(
+ self,
+ rng: typing.Optional[typing.Union[int, np.random.RandomState]]=None,
+ run_id: typing.Optional[int]=None,
+ ) -> typing.Tuple[int, np.random.RandomState]:
+ """Initialize random number generator and set run_id
- If rng is None, initialize a new generator
- If rng is Int, create RandomState from that
- If rng is RandomState, return it
+ * If rng and run_id are None, initialize a new generator and sample a run_id
+ * If rng is None and a run_id is given, use the run_id to initialize the rng
+ * If rng is an int, a RandomState object is created from that.
+ * If rng is RandomState, return it
+ * If only run_id is None, a run_id is sampled from the random state.
Parameters
----------
- rng: np.random.RandomState|int|None
+ rng : np.random.RandomState|int|None
+
+ run_id : int, optional
Returns
-------
- int, np.random.RandomState
+ int
+ np.random.RandomState
"""
# initialize random number generator
- if rng is None:
- self.logger.debug('no rng given: using default seed of 1')
- num_run = 1
- rng = np.random.RandomState(seed=num_run)
+ if rng is not None and not isinstance(rng, (int, np.random.RandomState)):
+ raise TypeError('Argument rng accepts only arguments of type None, int or np.random.RandomState, '
+ 'you provided %s.' % str(type(rng)))
+ if run_id is not None and not isinstance(run_id, int):
+ raise TypeError('Argument run_id accepts only arguments of type None, int or np.random.RandomState, '
+ 'you provided %s.' % str(type(run_id)))
+
+ if rng is None and run_id is None:
+ # Case that both are None
+ self.logger.debug('No rng and no run_id given: using a random value to initialize run_id.')
+ rng = np.random.RandomState()
+ run_id = rng.randint(MAXINT)
+ elif rng is None and isinstance(run_id, int):
+            self.logger.debug('No rng but a run_id given: using run_id %d as seed.', run_id)
+ rng = np.random.RandomState(seed=run_id)
elif isinstance(rng, int):
- num_run = rng
+ if run_id is None:
+ run_id = rng
+ else:
+ pass
rng = np.random.RandomState(seed=rng)
elif isinstance(rng, np.random.RandomState):
- num_run = rng.randint(MAXINT)
- rng = rng
+ if run_id is None:
+ run_id = rng.randint(MAXINT)
+ else:
+ pass
else:
- raise TypeError('Unknown type %s for argument rng. Only accepts '
- 'None, int or np.random.RandomState' % str(type(rng)))
- return num_run, rng
+ raise ValueError('This should not happen! Please contact the developers! Arguments: rng=%s of type %s and '
+ 'run_id=% of type %s' % (rng, type(rng), run_id, type(run_id)))
+ return run_id, rng
@staticmethod
def _get_random_configuration_chooser(random_configuration_chooser):
@@ -449,7 +475,7 @@ class SMAC(object):
finally:
self.solver.stats.save()
self.solver.stats.print_stats()
- self.logger.info("Final Incumbent: %s" % (self.solver.incumbent))
+ self.logger.info("Final Incumbent: %s", (self.solver.incumbent))
self.runhistory = self.solver.runhistory
self.trajectory = self.solver.intensifier.traj_logger.trajectory
diff --git a/smac/optimizer/acquisition.py b/smac/optimizer/acquisition.py
index 1a03b19ec..27039b0ca 100644
--- a/smac/optimizer/acquisition.py
+++ b/smac/optimizer/acquisition.py
@@ -223,8 +223,11 @@ class EIPS(EI):
X = X[:, np.newaxis]
m, v = self.model.predict_marginalized_over_instances(X)
- assert m.shape[1] == 2
- assert v.shape[1] == 2
+ if m.shape[1] != 2:
+ raise ValueError("m has wrong shape: %s != (-1, 2)" % str(m.shape))
+ if v.shape[1] != 2:
+ raise ValueError("v has wrong shape: %s != (-1, 2)" % str(v.shape))
+
m_cost = m[:, 0]
v_cost = v[:, 0]
# The model already predicts log(runtime)
diff --git a/smac/runhistory/runhistory2epm.py b/smac/runhistory/runhistory2epm.py
index e1772d36f..d6af39f44 100644
--- a/smac/runhistory/runhistory2epm.py
+++ b/smac/runhistory/runhistory2epm.py
@@ -164,8 +164,6 @@ class AbstractRunHistory2EPM(object):
"""
self.logger.debug("Transform runhistory into X,y format")
- assert isinstance(runhistory, RunHistory)
-
# consider only successfully finished runs
s_run_dict = {run: runhistory.data[run] for run in runhistory.data.keys()
if runhistory.data[run].status in self.success_states}
| Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
### [Codacy](https://app.codacy.com/app/KEggensperger/SMAC3/commit?cid=196076701) detected an issue:
#### Message: `Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.`
#### Occurred on:
+ **Commit**: a4f5e8c2ccd44994b573ab91006a2d65aa07f829
+ **File**: [smac/runhistory/runhistory2epm.py](https://github.com/automl/SMAC3/blob/a4f5e8c2ccd44994b573ab91006a2d65aa07f829/smac/runhistory/runhistory2epm.py)
+ **LineNum**: [47](https://github.com/automl/SMAC3/blob/a4f5e8c2ccd44994b573ab91006a2d65aa07f829/smac/runhistory/runhistory2epm.py#L47)
+ **Code**: `assert isinstance(runhistory, RunHistory)`
#### Currently on:
+ **Commit**: 5775eaf234730c8aaaa06e46d0eb6e0a8348929e
+ **File**: [smac/runhistory/runhistory2epm.py](https://github.com/automl/SMAC3/blob/5775eaf234730c8aaaa06e46d0eb6e0a8348929e/smac/runhistory/runhistory2epm.py)
+ **LineNum**: [167](https://github.com/automl/SMAC3/blob/5775eaf234730c8aaaa06e46d0eb6e0a8348929e/smac/runhistory/runhistory2epm.py#L167)
| automl/SMAC3 | diff --git a/test/test_facade/test_smac_facade.py b/test/test_facade/test_smac_facade.py
index 81b7e0f55..bc66013e5 100644
--- a/test/test_facade/test_smac_facade.py
+++ b/test/test_facade/test_smac_facade.py
@@ -2,6 +2,7 @@ from contextlib import suppress
import os
import shutil
import unittest
+import unittest.mock
import numpy as np
from ConfigSpace.hyperparameters import UniformFloatHyperparameter
@@ -57,59 +58,91 @@ class TestSMACFacade(unittest.TestCase):
self.assertIs(smac.solver.intensifier.tae_runner.ta, target_algorithm)
def test_pass_invalid_tae_runner(self):
- self.assertRaisesRegexp(TypeError, "Argument 'tae_runner' is <class "
- "'int'>, but must be either a "
- "callable or an instance of "
- "ExecuteTaRun.",
- SMAC, tae_runner=1, scenario=self.scenario)
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument 'tae_runner' is <class 'int'>, but must be either a callable or an instance of ExecuteTaRun.",
+ SMAC,
+ tae_runner=1,
+ scenario=self.scenario,
+ )
def test_pass_tae_runner_objective(self):
- tae = ExecuteTAFuncDict(lambda: 1,
- run_obj='runtime')
- self.assertRaisesRegexp(ValueError, "Objective for the target algorithm"
- " runner and the scenario must be "
- "the same, but are 'runtime' and "
- "'quality'",
- SMAC, tae_runner=tae, scenario=self.scenario)
-
- def test_check_random_states(self):
- ta = ExecuteTAFuncDict(lambda x: x**2)
-
- # Get state immediately or it will change with the next calltest_check_random_states
-
+ tae = ExecuteTAFuncDict(lambda: 1, run_obj='runtime')
+ self.assertRaisesRegex(
+ ValueError,
+ "Objective for the target algorithm runner and the scenario must be the same, but are 'runtime' and "
+ "'quality'",
+ SMAC,
+ tae_runner=tae,
+ scenario=self.scenario,
+ )
+
+ @unittest.mock.patch.object(SMAC, '__init__')
+ def test_check_random_states(self, patch):
+ patch.return_value = None
+ smac = SMAC()
+ smac.logger = unittest.mock.MagicMock()
+
+ # Check some properties
# Check whether different seeds give different random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario, rng=2)
- S2 = S2.solver.scenario.cs.random
- self.assertNotEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether no seeds give the same random states (use default seed)
- S1 = SMAC(tae_runner=ta, scenario=self.scenario)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario)
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether the same seeds give the same random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario, rng=1)
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
-
- # Check whether the same RandomStates give the same random states
- S1 = SMAC(tae_runner=ta, scenario=self.scenario,
- rng=np.random.RandomState(1))
- S1 = S1.solver.scenario.cs.random
-
- S2 = SMAC(tae_runner=ta, scenario=self.scenario,
- rng=np.random.RandomState(1))
- S2 = S2.solver.scenario.cs.random
- self.assertEqual(sum(S1.get_state()[1] - S2.get_state()[1]), 0)
+ _, rng_1 = smac._get_rng(1)
+ _, rng_2 = smac._get_rng(2)
+ self.assertNotEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check whether no seeds gives different random states
+ _, rng_1 = smac._get_rng()
+ self.assertEqual(smac.logger.debug.call_count, 1)
+ _, rng_2 = smac._get_rng()
+ self.assertEqual(smac.logger.debug.call_count, 2)
+
+ self.assertNotEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check whether the same int seeds give the same random states
+ _, rng_1 = smac._get_rng(1)
+ _, rng_2 = smac._get_rng(1)
+ self.assertEqual(sum(rng_1.get_state()[1] - rng_2.get_state()[1]), 0)
+
+ # Check all execution paths
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument rng accepts only arguments of type None, int or np.random.RandomState, "
+ "you provided <class 'str'>.",
+ smac._get_rng,
+ rng='ABC',
+ )
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument run_id accepts only arguments of type None, int or np.random.RandomState, "
+ "you provided <class 'str'>.",
+ smac._get_rng,
+ run_id='ABC'
+ )
+
+ run_id, rng_1 = smac._get_rng(rng=None, run_id=None)
+ self.assertIsInstance(run_id, int)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+ self.assertEqual(smac.logger.debug.call_count, 3)
+
+ run_id, rng_1 = smac._get_rng(rng=None, run_id=1)
+ self.assertEqual(run_id, 1)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ run_id, rng_1 = smac._get_rng(rng=1, run_id=None)
+ self.assertEqual(run_id, 1)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ run_id, rng_1 = smac._get_rng(rng=1, run_id=1337)
+ self.assertEqual(run_id, 1337)
+ self.assertIsInstance(rng_1, np.random.RandomState)
+
+ rs = np.random.RandomState(1)
+ run_id, rng_1 = smac._get_rng(rng=rs, run_id=None)
+ self.assertIsInstance(run_id, int)
+ self.assertIs(rng_1, rs)
+
+ run_id, rng_1 = smac._get_rng(rng=rs, run_id=2505)
+ self.assertEqual(run_id, 2505)
+ self.assertIs(rng_1, rs)
def test_check_deterministic_rosenbrock(self):
def rosenbrock_2d(x):
diff --git a/test/test_smbo/test_smbo.py b/test/test_smbo/test_smbo.py
index cdb03c72d..d38bc36b5 100644
--- a/test/test_smbo/test_smbo.py
+++ b/test/test_smbo/test_smbo.py
@@ -93,11 +93,14 @@ class TestSMBO(unittest.TestCase):
self.assertIsInstance(smbo.num_run, int)
self.assertIs(smbo.rng, rng)
# ML: I don't understand the following line and it throws an error
- self.assertRaisesRegexp(TypeError,
- "Unknown type <(class|type) 'str'> for argument "
- 'rng. Only accepts None, int or '
- 'np.random.RandomState',
- SMAC, self.scenario, rng='BLA')
+ self.assertRaisesRegex(
+ TypeError,
+ "Argument rng accepts only arguments of type None, int or np.random.RandomState, you provided "
+ "<class 'str'>.",
+ SMAC,
+ self.scenario,
+ rng='BLA',
+ )
def test_choose_next(self):
seed = 42
@@ -110,7 +113,7 @@ class TestSMBO(unittest.TestCase):
Y = self.branin(X)
x = next(smbo.choose_next(X, Y)).get_array()
assert x.shape == (2,)
-
+
def test_choose_next_w_empty_rh(self):
seed = 42
smbo = SMAC(self.scenario, rng=seed).solver
@@ -126,9 +129,9 @@ class TestSMBO(unittest.TestCase):
**{"X":X, "Y":Y}
)
- x = next(smbo.choose_next(X, Y, incumbent_value=0.0)).get_array()
+ x = next(smbo.choose_next(X, Y, incumbent_value=0.0)).get_array()
assert x.shape == (2,)
-
+
def test_choose_next_empty_X(self):
smbo = SMAC(self.scenario, rng=1).solver
smbo.acquisition_func._compute = mock.Mock(
@@ -146,7 +149,7 @@ class TestSMBO(unittest.TestCase):
self.assertEqual(x, [0, 1, 2])
self.assertEqual(smbo._random_search.maximize.call_count, 1)
self.assertEqual(smbo.acquisition_func._compute.call_count, 0)
-
+
def test_choose_next_empty_X_2(self):
smbo = SMAC(self.scenario, rng=1).solver
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_git_commit_hash",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 6
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "python setup.py install",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y build-essential swig"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
ConfigSpace==0.4.19
Cython==3.0.12
docutils==0.18.1
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
joblib==1.1.1
MarkupSafe==2.0.1
nose==1.3.7
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
psutil==7.0.0
py==1.11.0
Pygments==2.14.0
pynisher==0.6.4
pyparsing==3.1.4
pyrfr==0.8.2
pytest==7.0.1
pytz==2025.2
requests==2.27.1
scikit-learn==0.24.2
scipy==1.5.4
six==1.17.0
smac==0.8.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
threadpoolctl==3.1.0
tomli==1.2.3
typing==3.7.4.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: SMAC3
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- configspace==0.4.19
- cython==3.0.12
- docutils==0.18.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- joblib==1.1.1
- markupsafe==2.0.1
- nose==1.3.7
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- psutil==7.0.0
- py==1.11.0
- pygments==2.14.0
- pynisher==0.6.4
- pyparsing==3.1.4
- pyrfr==0.8.2
- pytest==7.0.1
- pytz==2025.2
- requests==2.27.1
- scikit-learn==0.24.2
- scipy==1.5.4
- six==1.17.0
- smac==0.8.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- threadpoolctl==3.1.0
- tomli==1.2.3
- typing==3.7.4.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/SMAC3
| [
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_check_random_states",
"test/test_smbo/test_smbo.py::TestSMBO::test_rng"
]
| []
| [
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_check_deterministic_rosenbrock",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_get_runhistory_and_trajectory_and_tae_runner",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_inject_dependencies",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_inject_stats_and_runhistory_object_to_TAE",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_no_output",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_output_structure",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_callable",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_invalid_tae_runner",
"test/test_facade/test_smac_facade.py::TestSMACFacade::test_pass_tae_runner_objective",
"test/test_smbo/test_smbo.py::TestSMBO::test_abort_on_initial_design",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_2",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_3",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_empty_X",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_empty_X_2",
"test/test_smbo/test_smbo.py::TestSMBO::test_choose_next_w_empty_rh",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_EIPS_as_arguments",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_only_scenario_quality",
"test/test_smbo/test_smbo.py::TestSMBO::test_init_only_scenario_runtime",
"test/test_smbo/test_smbo.py::TestSMBO::test_intensification_percentage",
"test/test_smbo/test_smbo.py::TestSMBO::test_validation"
]
| []
| BSD 3-Clause License | 2,455 | [
"smac/facade/smac_facade.py",
"examples/rosenbrock.py",
"doc/options.rst",
"smac/optimizer/acquisition.py",
"README.md",
"smac/runhistory/runhistory2epm.py"
]
| [
"smac/facade/smac_facade.py",
"examples/rosenbrock.py",
"doc/options.rst",
"smac/optimizer/acquisition.py",
"README.md",
"smac/runhistory/runhistory2epm.py"
]
|
|
cekit__cekit-214 | c8b39abd56db50214c0df9411793c1ed5dd9e31c | 2018-04-27 13:01:10 | c871246da5035e070cf5f79f486283fabd5bfc46 | diff --git a/cekit/builder.py b/cekit/builder.py
index 7b72823..3b37e0e 100644
--- a/cekit/builder.py
+++ b/cekit/builder.py
@@ -23,6 +23,9 @@ class Builder(object):
# import is delayed until here to prevent circular import error
from cekit.builders.osbs import OSBSBuilder as BuilderImpl
logger.info("Using OSBS builder to build the image.")
+ elif 'buildah' == build_engine:
+ from cekit.builders.buildah import BuildahBuilder as BuilderImpl
+ logger.info("Using Buildah builder to build the image.")
else:
raise CekitError("Builder engine %s is not supported" % build_engine)
diff --git a/cekit/builders/buildah.py b/cekit/builders/buildah.py
new file mode 100644
index 0000000..3e9cb72
--- /dev/null
+++ b/cekit/builders/buildah.py
@@ -0,0 +1,57 @@
+import logging
+import subprocess
+import os
+
+from cekit.builder import Builder
+from cekit.errors import CekitError
+
+
+logger = logging.getLogger('cekit')
+
+
+class BuildahBuilder(Builder):
+    """This class represents buildah builder in build-using-dockerfile mode."""
+
+ def __init__(self, build_engine, target, params={}):
+ self._tags = params.get('tags')
+ self._pull = params.get('pull', False) # --pull-always
+ super(BuildahBuilder, self).__init__(build_engine, target, params)
+
+ def check_prerequisities(self):
+ try:
+ subprocess.check_output(['sudo', 'buildah', 'version'], stderr=subprocess.STDOUT)
+ except subprocess.CalledProcessError as ex:
+ raise CekitError("Buildah build engine needs buildah"
+ " installed and configured, error: %s"
+ % ex.output)
+ except Exception as ex:
+ raise CekitError("Buildah build engine needs buildah installed and configured!", ex)
+
+ def build(self):
+ """Build container image using buildah."""
+ tags = self._tags
+ cmd = ["sudo", "buildah", "build-using-dockerfile"]
+
+ if self._pull:
+ cmd.append('--pull-awlays')
+
+ # Custom tags for the container image
+ logger.debug("Building image with tags: '%s'" %
+ "', '".join(tags))
+
+ for tag in tags:
+ cmd.extend(["-t", tag])
+
+ logger.info("Building container image...")
+
+ cmd.append(os.path.join(self.target, 'image'))
+
+ logger.debug("Running Buildah build: '%s'" % " ".join(cmd))
+
+ try:
+ subprocess.check_call(cmd)
+
+ logger.info("Image built and available under following tags: %s"
+ % ", ".join(tags))
+ except:
+ raise CekitError("Image build failed, see logs above.")
diff --git a/cekit/cli.py b/cekit/cli.py
index 24e0962..b421f2f 100644
--- a/cekit/cli.py
+++ b/cekit/cli.py
@@ -67,7 +67,7 @@ class Cekit(object):
build_group.add_argument('--build-engine',
default='docker',
- choices=['docker', 'osbs'],
+ choices=['docker', 'osbs', 'buildah'],
help='an engine used to build the image.')
build_group.add_argument('--build-tag',
diff --git a/cekit/descriptor/base.py b/cekit/descriptor/base.py
index d1448e3..63b1d59 100644
--- a/cekit/descriptor/base.py
+++ b/cekit/descriptor/base.py
@@ -4,6 +4,7 @@ import logging
import os
import yaml
+
from cekit.errors import CekitError
from pykwalify.core import Core
diff --git a/cekit/descriptor/image.py b/cekit/descriptor/image.py
index 18107a5..f2eec1b 100644
--- a/cekit/descriptor/image.py
+++ b/cekit/descriptor/image.py
@@ -1,4 +1,5 @@
import copy
+import os
import yaml
from cekit.descriptor import Descriptor, Label, Env, Port, Run, Modules, \
@@ -29,8 +30,9 @@ def get_image_schema():
class Image(Descriptor):
- def __init__(self, descriptor, directory):
- self.directory = directory
+ def __init__(self, descriptor, artifact_dir):
+ self._artifact_dir = artifact_dir
+ self.path = artifact_dir
self.schemas = [_image_schema.copy()]
super(Image, self).__init__(descriptor)
self.skip_merging = ['description',
@@ -76,9 +78,10 @@ class Image(Descriptor):
self._descriptor['ports'] = [Port(x) for x in self._descriptor.get('ports', [])]
if 'run' in self._descriptor:
self._descriptor['run'] = Run(self._descriptor['run'])
- self._descriptor['artifacts'] = [Resource(a) for a in self._descriptor.get('artifacts', [])]
+ self._descriptor['artifacts'] = [Resource(a, directory=self._artifact_dir)
+ for a in self._descriptor.get('artifacts', [])]
if 'modules' in self._descriptor:
- self._descriptor['modules'] = Modules(self._descriptor['modules'])
+ self._descriptor['modules'] = Modules(self._descriptor['modules'], self.path)
if 'packages' in self._descriptor:
self._descriptor['packages'] = Packages(self._descriptor['packages'])
if 'osbs' in self._descriptor:
diff --git a/cekit/descriptor/module.py b/cekit/descriptor/module.py
index 18f32bd..ae7e126 100644
--- a/cekit/descriptor/module.py
+++ b/cekit/descriptor/module.py
@@ -16,7 +16,9 @@ class Module(Image):
Constructor arguments:
descriptor_path: A path to module descriptor file.
"""
- def __init__(self, descriptor, path):
+ def __init__(self, descriptor, path, artifact_dir):
+ self._artifact_dir = artifact_dir
+ self.path = path
schema = module_schema.copy()
self.schemas = [schema]
# calling Descriptor constructor only here (we dont wat Image() to mess with schema)
@@ -27,7 +29,6 @@ class Module(Image):
'release']
self._prepare()
- self.path = path
self.name = self._descriptor['name']
self._descriptor['execute'] = [Execute(x, self.name)
for x in self._descriptor.get('execute', [])]
diff --git a/cekit/descriptor/modules.py b/cekit/descriptor/modules.py
index b033833..0a885a5 100644
--- a/cekit/descriptor/modules.py
+++ b/cekit/descriptor/modules.py
@@ -19,10 +19,10 @@ map:
class Modules(Descriptor):
- def __init__(self, descriptor):
+ def __init__(self, descriptor, path):
self.schemas = modules_schema
super(Modules, self).__init__(descriptor)
- self._descriptor['repositories'] = [Resource(r)
+ self._descriptor['repositories'] = [Resource(r, directory=path)
for r in self._descriptor.get('repositories', [])]
self._descriptor['install'] = [Install(x) for x in self._descriptor.get('install', [])]
diff --git a/cekit/descriptor/overrides.py b/cekit/descriptor/overrides.py
index 265abfb..4378cfd 100644
--- a/cekit/descriptor/overrides.py
+++ b/cekit/descriptor/overrides.py
@@ -7,7 +7,9 @@ overrides_schema['map']['version'] = {'type': 'text'}
class Overrides(Image):
- def __init__(self, descriptor):
+ def __init__(self, descriptor, artifact_dir):
+ self._artifact_dir = artifact_dir
+ self.path = artifact_dir
schema = overrides_schema.copy()
self.schemas = [schema]
# calling Descriptor constructor only here (we dont wat Image() to mess with schema)
diff --git a/cekit/descriptor/resource.py b/cekit/descriptor/resource.py
index 55a4d03..7e9b693 100644
--- a/cekit/descriptor/resource.py
+++ b/cekit/descriptor/resource.py
@@ -25,12 +25,9 @@ class Resource(Descriptor):
SUPPORTED_HASH_ALGORITHMS = ['sha256', 'sha1', 'md5']
CHECK_INTEGRITY = True
- def __new__(cls, resource):
+ def __new__(cls, resource, **kwargs):
if cls is Resource:
if 'path' in resource:
- directory = resource['path']
- if not os.path.isabs(directory):
- resource['path'] = os.path.join(os.getcwd(), directory)
return super(Resource, cls).__new__(_PathResource)
elif 'url' in resource:
return super(Resource, cls).__new__(_UrlResource)
@@ -67,13 +64,13 @@ class Resource(Descriptor):
self.checksums[algorithm] = descriptor[algorithm]
def __eq__(self, other):
- #All subclasses of Resource are considered same object type
+ # All subclasses of Resource are considered same object type
if isinstance(other, Resource):
return self['name'] == other['name']
return NotImplemented
def __ne__(self, other):
- #All subclasses of Resource are considered same object type
+ # All subclasses of Resource are considered same object type
if isinstance(other, Resource):
return not self['name'] == other['name']
return NotImplemented
@@ -113,7 +110,7 @@ class Resource(Descriptor):
# exception is fatal we be logged before Cekit dies
raise CekitError("Error copying resource: '%s'. See logs for more info."
- % self.name, ex)
+ % self.name, ex)
self.__verify(target)
@@ -122,7 +119,8 @@ class Resource(Descriptor):
def __verify(self, target):
""" Checks all defined check_sums for an aritfact """
if not self.checksums:
- logger.debug("Artifact '%s' lacks any checksum definition, it will be replaced" % self.name)
+ logger.debug("Artifact '%s' lacks any checksum definition, it will be replaced"
+ % self.name)
return False
if not Resource.CHECK_INTEGRITY:
logger.info("Integrity checking disabled, skipping verification.")
@@ -211,7 +209,13 @@ class Resource(Descriptor):
class _PathResource(Resource):
- def __init__(self, descriptor):
+ def __init__(self, descriptor, directory, **kwargs):
+        # if the path is relative it's considered relative to the directory parameter
+        # it defaults to CWD, but should be set to the descriptor dir if used for artifacts
+ if not os.path.isabs(descriptor['path']):
+ descriptor['path'] = os.path.join(directory,
+ descriptor['path'])
+
if 'name' not in descriptor:
descriptor['name'] = os.path.basename(descriptor['path'])
super(_PathResource, self).__init__(descriptor)
@@ -233,8 +237,8 @@ class _PathResource(Resource):
raise CekitError("Could not download resource '%s' from cache" % self.name)
else:
raise CekitError("Could not copy resource '%s', "
- "source path does not exist. "
- "Make sure you provided correct path" % self.name)
+ "source path does not exist. "
+ "Make sure you provided correct path" % self.name)
logger.debug("Copying repository from '%s' to '%s'." % (self.path,
target))
@@ -247,7 +251,7 @@ class _PathResource(Resource):
class _UrlResource(Resource):
- def __init__(self, descriptor):
+ def __init__(self, descriptor, **kwargs):
if 'name' not in descriptor:
descriptor['name'] = os.path.basename(descriptor['url'])
super(_UrlResource, self).__init__(descriptor)
@@ -264,7 +268,7 @@ class _UrlResource(Resource):
class _GitResource(Resource):
- def __init__(self, descriptor):
+ def __init__(self, descriptor, **kwargs):
if 'name' not in descriptor:
descriptor['name'] = os.path.basename(descriptor['git']['url'])
super(_GitResource, self).__init__(descriptor)
diff --git a/cekit/generator.py b/cekit/generator.py
index a19fe1b..8cabe1b 100644
--- a/cekit/generator.py
+++ b/cekit/generator.py
@@ -35,7 +35,7 @@ class Generator(object):
if not modules.get('repositories'):
modules['repositories'] = [{'path': local_mod_path, 'name': 'modules'}]
- self.image = Image(descriptor, os.path.dirname(descriptor_path))
+ self.image = Image(descriptor, os.path.dirname(os.path.abspath(descriptor_path)))
self.target = target
if overrides:
@@ -110,7 +110,8 @@ class Generator(object):
def override(self, overrides_path):
logger.info("Using overrides file from '%s'." % overrides_path)
- descriptor = Overrides(tools.load_descriptor(overrides_path))
+ descriptor = Overrides(tools.load_descriptor(overrides_path),
+ os.path.dirname(os.path.abspath(overrides_path)))
descriptor.merge(self.image)
return descriptor
diff --git a/cekit/module.py b/cekit/module.py
index a870485..dfad8fb 100644
--- a/cekit/module.py
+++ b/cekit/module.py
@@ -49,7 +49,8 @@ def copy_module_to_target(name, version, target):
def check_module_version(path, version):
descriptor = Module(tools.load_descriptor(os.path.join(path, 'module.yaml')),
- path)
+ path,
+ os.path.dirname(os.path.abspath(os.path.join(path, 'module.yaml'))))
if descriptor.version != version:
raise CekitError("Requested conflicting version '%s' of module '%s'" %
(version, descriptor['name']))
@@ -84,7 +85,9 @@ def discover_modules(repo_dir):
for modules_dir, _, files in os.walk(repo_dir):
if 'module.yaml' in files:
module = Module(tools.load_descriptor(os.path.join(modules_dir, 'module.yaml')),
- modules_dir)
+ modules_dir,
+ os.path.dirname(os.path.abspath(os.path.join(modules_dir,
+ 'module.yaml'))))
module.fetch_dependencies(repo_dir)
logger.debug("Adding module '%s', path: '%s'" % (module.name, module.path))
modules.append(module)
diff --git a/docs/build.rst b/docs/build.rst
index 5d01c9d..0f3830e 100644
--- a/docs/build.rst
+++ b/docs/build.rst
@@ -5,6 +5,7 @@ Cekit supports following builder engines:
* ``Docker`` -- build the container image using `docker build <https://docs.docker.com/engine/reference/commandline/build/>`_ command and it default option
* ``OSBS`` -- build the container image using `OSBS service <https://osbs.readthedocs.io>`_
+* ``Buildah`` -- build the container image using `Buildah <https://github.com/projectatomic/buildah>`_
Executing builds
-----------------
@@ -18,7 +19,7 @@ You can execute an container image build by running:
**Options affecting builder:**
* ``--tag`` -- an image tag used to build image (can be specified multiple times)
-* ``--build-engine`` -- a builder engine to use ``osbs`` or ``docker`` [#f1]_
+* ``--build-engine`` -- a builder engine to use ``osbs``, ``buildah`` or ``docker`` [#f1]_
* ``--build-pull`` -- ask a builder engine to check and fetch latest base image
* ``--build-osbs-stage`` -- use ``rhpkg-stage`` tool instead of ``rhpkg``
* ``--build-osbs-release`` [#f2]_ -- perform a OSBS release build
@@ -46,7 +47,7 @@ This is the default way to build an container image. The image is build using ``
OSBS build
^^^^^^^^^^^^^^^
-This build is using ``rhpkg container-build`` to build the image using OSBS service. By default
+This build engine is using ``rhpkg container-build`` to build the image using OSBS service. By default
it performs scratch build. If you need a release build you need to specify ``--build-osbs-release`` parameter.
**Example:** Performing scratch build
@@ -61,3 +62,21 @@ it performs scratch build. If you need a release build you need to specify ``--b
.. code:: bash
$ cekit build --build-engine=osbs --build-osbs-release
+
+
+Buildah build
+^^^^^^^^^^^^^
+
+This build engine is based on `Buildah <https://github.com/projectatomic/buildah>`_. Buildah still doesn't
+support non-privileged builds so you need to have **sudo** configured to run `buildah` as a root user on
+your desktop.
+
+.. note::
+ If you need to use any non default registry, please update `/etc/containers/registry.conf` file.
+
+
+**Example:** Building image using Buildah
+
+.. code:: bash
+
+ $ cekit build --build-engine=buildah
diff --git a/docs/descriptor.rst b/docs/descriptor.rst
index 52f998c..2abd4a0 100644
--- a/docs/descriptor.rst
+++ b/docs/descriptor.rst
@@ -28,6 +28,14 @@ for downloaded resources will match the ``name`` attribute, which defaults to
the base name of the file/URL. Artifact locations may be specified as ``url``\s,
``path``\s or ``git`` references.
+.. note::
+
+ If you are using relative ``path`` to define an artifact, path is considered relative to an
+ image descriptor which introduced that artifact.
+
+ **Example**: If an artifact is defined inside */foo/bar/image.yaml* with a path: *baz/1.zip*
+ the artifact will be resolved as */foo/bar/baz/1.zip*
+
.. code:: yaml
artifacts:
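The descriptor.rst note above pins down the rule: relative artifact paths resolve against the directory of the descriptor that introduced them. A minimal standalone sketch of that resolution (hypothetical helper, not cekit's actual `Resource` class; it mirrors the `test_path_resource_*` unit tests in this record):

```python
import os.path

def resolve_artifact_path(path, descriptor_dir):
    # Absolute paths are used as-is; relative paths are anchored at the
    # directory of the image/override descriptor that declared the artifact.
    if os.path.isabs(path):
        return path
    return os.path.join(descriptor_dir, path)

print(resolve_artifact_path('/bar', '/foo'))  # /bar
print(resolve_artifact_path('bar', '/foo'))   # /foo/bar
```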
| relative paths should be relative to the file specifying them
For example, relative paths for artifacts defined in overrides.yaml should be relative to the overrides.yaml file, not the image.yaml. This makes it difficult to use a single overrides file for multiple image builds, since the relative paths may be different. I haven't verified whether or not relative paths work correctly with modules (i.e. if a module defines a resource itself). | cekit/cekit | diff --git a/tests/test_builder.py b/tests/test_builder.py
index 2293306..a4521ca 100644
--- a/tests/test_builder.py
+++ b/tests/test_builder.py
@@ -116,3 +116,32 @@ def test_docker_builder_run(mocker):
builder.build()
check_call.assert_called_once_with(['docker', 'build', '-t', 'foo', '-t', 'bar', 'tmp/image'])
+
+
+def test_buildah_builder_run(mocker):
+ params = {'tags': ['foo', 'bar']}
+ check_call = mocker.patch.object(subprocess, 'check_call')
+ builder = create_osbs_build_object(mocker, 'buildah', params)
+ builder.build()
+
+ check_call.assert_called_once_with(['sudo',
+ 'buildah',
+ 'build-using-dockerfile',
+ '-t', 'foo',
+ '-t', 'bar',
+ 'tmp/image'])
+
+
+def test_buildah_builder_run_pull(mocker):
+ params = {'tags': ['foo', 'bar'], 'pull': True}
+ check_call = mocker.patch.object(subprocess, 'check_call')
+ builder = create_osbs_build_object(mocker, 'buildah', params)
+ builder.build()
+
+ check_call.assert_called_once_with(['sudo',
+ 'buildah',
+ 'build-using-dockerfile',
+ '--pull-awlays',
+ '-t', 'foo',
+ '-t', 'bar',
+ 'tmp/image'])
diff --git a/tests/test_dockerfile.py b/tests/test_dockerfile.py
index d9da5b5..c5f8e15 100644
--- a/tests/test_dockerfile.py
+++ b/tests/test_dockerfile.py
@@ -88,7 +88,7 @@ def prepare_generator(target, desc_part, desc_type="image"):
desc = basic_config.copy()
desc.update(desc_part)
- image = Module(desc, '/tmp/')
+ image = Module(desc, '/tmp/', '/tmp')
generator = Generator.__new__(Generator)
generator.image = image
diff --git a/tests/test_unit_args.py b/tests/test_unit_args.py
index 43ce2b3..10eb5dc 100644
--- a/tests/test_unit_args.py
+++ b/tests/test_unit_args.py
@@ -36,7 +36,8 @@ def test_args_build_pull(mocker):
assert Cekit().parse().args.build_pull
[email protected]('engine', ['osbs', 'docker'])
+
[email protected]('engine', ['osbs', 'docker', 'buildah'])
def test_args_build_engine(mocker, engine):
mocker.patch.object(sys, 'argv', ['cekit', 'build', '--build-engine', engine])
diff --git a/tests/test_unit_module.py b/tests/test_unit_module.py
index 6f9bae9..fa3dca3 100644
--- a/tests/test_unit_module.py
+++ b/tests/test_unit_module.py
@@ -1,5 +1,7 @@
from cekit.descriptor import Module
from cekit.module import modules
+import os
+
module_desc = {
'schema_version': 1,
@@ -16,6 +18,6 @@ module_desc = {
def test_modules_repos(tmpdir):
tmpdir = str(tmpdir.mkdir('target'))
- module = Module(module_desc, tmpdir)
+ module = Module(module_desc, os.getcwd(), '/tmp')
module.fetch_dependencies(tmpdir)
assert 'foo' in [m['name'] for m in modules]
diff --git a/tests/test_unit_resource.py b/tests/test_unit_resource.py
index 4446e2d..3bf20c9 100644
--- a/tests/test_unit_resource.py
+++ b/tests/test_unit_resource.py
@@ -178,3 +178,15 @@ def test_generated_url_with_cacher():
res.name = 'file'
assert res._Resource__substitute_cache_url('file') == 'file,sha256,justamocksum'
tools.cfg = {}
+
+
+def test_path_resource_absolute():
+ res = Resource({'name': 'foo',
+ 'path': '/bar'}, directory='/foo')
+ assert res.path == '/bar'
+
+
+def test_path_resource_relative():
+ res = Resource({'name': 'foo',
+ 'path': 'bar'}, directory='/foo')
+ assert res.path == '/foo/bar'
diff --git a/tests/test_unit_tools.py b/tests/test_unit_tools.py
index 9c13786..31c5352 100644
--- a/tests/test_unit_tools.py
+++ b/tests/test_unit_tools.py
@@ -20,17 +20,17 @@ def test_merging_description_image():
desc1 = Image({'name': 'foo', 'version': 1}, None)
desc2 = Module({'name': 'mod1',
- 'description': 'mod_desc'}, None)
+ 'description': 'mod_desc'}, None, None)
merged = _merge_descriptors(desc1, desc2)
assert 'description' not in merged
def test_merging_description_modules():
- desc1 = Module({'name': 'foo'}, None)
+ desc1 = Module({'name': 'foo'}, None, None)
desc2 = Module({'name': 'mod1',
- 'description': 'mod_desc'}, None)
+ 'description': 'mod_desc'}, None, None)
merged = _merge_descriptors(desc1, desc2)
assert 'description' not in merged
@@ -40,7 +40,7 @@ def test_merging_description_override():
desc1 = Image({'name': 'foo', 'version': 1}, None)
desc2 = Overrides({'name': 'mod1',
- 'description': 'mod_desc'})
+ 'description': 'mod_desc'}, None)
merged = _merge_descriptors(desc2, desc1)
assert 'description' in merged
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 12
} | 1.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"behave",
"docker",
"lxml",
"mock",
"pytest",
"pytest-cov",
"pytest-mock",
"pykwalify"
],
"pre_install": [],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | behave==1.2.6
-e git+https://github.com/cekit/cekit.git@c8b39abd56db50214c0df9411793c1ed5dd9e31c#egg=cekit
certifi==2025.1.31
charset-normalizer==3.4.1
colorlog==6.9.0
coverage==7.8.0
docker==7.1.0
docopt==0.6.2
exceptiongroup==1.2.2
idna==3.10
iniconfig==2.1.0
Jinja2==3.1.6
lxml==5.3.1
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
parse==1.20.2
parse_type==0.6.4
pluggy==1.5.0
pykwalify==1.8.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requests==2.32.3
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.12
six==1.17.0
tomli==2.2.1
urllib3==2.3.0
| name: cekit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- behave==1.2.6
- certifi==2025.1.31
- charset-normalizer==3.4.1
- colorlog==6.9.0
- coverage==7.8.0
- docker==7.1.0
- docopt==0.6.2
- exceptiongroup==1.2.2
- idna==3.10
- iniconfig==2.1.0
- jinja2==3.1.6
- lxml==5.3.1
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- parse==1.20.2
- parse-type==0.6.4
- pluggy==1.5.0
- pykwalify==1.8.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- requests==2.32.3
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.12
- six==1.17.0
- tomli==2.2.1
- urllib3==2.3.0
prefix: /opt/conda/envs/cekit
| [
"tests/test_builder.py::test_buildah_builder_run",
"tests/test_builder.py::test_buildah_builder_run_pull",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_run_user-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_default_run_user-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_custom_cmd-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_entrypoint-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_workdir-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_volumes-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_ports-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_env-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_execute-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_execute_user-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_concrt_label_version-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering[test_cekit_label_version-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering_tech_preview[test_without_family-\\x08-\\x08]",
"tests/test_dockerfile.py::test_dockerfile_rendering_tech_preview[test_with_family-\\x08-\\x08]",
"tests/test_unit_args.py::test_args_build_engine[buildah]",
"tests/test_unit_module.py::test_modules_repos",
"tests/test_unit_resource.py::test_path_resource_absolute",
"tests/test_unit_resource.py::test_path_resource_relative",
"tests/test_unit_tools.py::test_merging_description_image",
"tests/test_unit_tools.py::test_merging_description_modules",
"tests/test_unit_tools.py::test_merging_description_override"
]
| [
"tests/test_builder.py::test_docker_builder_defaults"
]
| [
"tests/test_builder.py::test_osbs_builder_defaults",
"tests/test_builder.py::test_osbs_builder_use_rhpkg_staget",
"tests/test_builder.py::test_osbs_builder_nowait",
"tests/test_builder.py::test_osbs_builder_user",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_stage",
"tests/test_builder.py::test_osbs_builder_run_rhpkg",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_nowait",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_user",
"tests/test_builder.py::test_docker_builder_run",
"tests/test_unit_args.py::test_args_command[generate]",
"tests/test_unit_args.py::test_args_command[build]",
"tests/test_unit_args.py::test_args_command[test]",
"tests/test_unit_args.py::test_args_not_valid_command",
"tests/test_unit_args.py::test_args_tags[tags0-build_tags0-expected0]",
"tests/test_unit_args.py::test_args_tags[tags1-build_tags1-expected1]",
"tests/test_unit_args.py::test_args_tags[tags2-build_tags2-expected2]",
"tests/test_unit_args.py::test_args_tags[tags3-build_tags3-expected3]",
"tests/test_unit_args.py::test_args_build_pull",
"tests/test_unit_args.py::test_args_build_engine[osbs]",
"tests/test_unit_args.py::test_args_build_engine[docker]",
"tests/test_unit_args.py::test_args_osbs_stage",
"tests/test_unit_args.py::test_args_osbs_stage_false",
"tests/test_unit_args.py::test_args_invalid_build_engine",
"tests/test_unit_args.py::test_args_osbs_user",
"tests/test_unit_args.py::test_args_config_default",
"tests/test_unit_args.py::test_args_config",
"tests/test_unit_args.py::test_args_osbs_nowait",
"tests/test_unit_args.py::test_args_osbs_no_nowait",
"tests/test_unit_resource.py::test_repository_dir_is_constructed_properly",
"tests/test_unit_resource.py::test_git_clone",
"tests/test_unit_resource.py::test_fetching_with_ssl_verify",
"tests/test_unit_resource.py::test_fetching_disable_ssl_verify",
"tests/test_unit_resource.py::test_fetching_bad_status_code",
"tests/test_unit_resource.py::test_fetching_file_exists_but_used_as_is",
"tests/test_unit_resource.py::test_fetching_file_exists_fetched_again",
"tests/test_unit_resource.py::test_fetching_file_exists_no_hash_fetched_again",
"tests/test_unit_resource.py::test_generated_url_without_cacher",
"tests/test_unit_resource.py::test_resource_verify",
"tests/test_unit_resource.py::test_generated_url_with_cacher",
"tests/test_unit_tools.py::test_merging_plain_descriptors",
"tests/test_unit_tools.py::test_merging_emdedded_descriptors",
"tests/test_unit_tools.py::test_merging_plain_lists",
"tests/test_unit_tools.py::test_merging_plain_list_of_list",
"tests/test_unit_tools.py::test_merging_list_of_descriptors"
]
| []
| MIT License | 2,456 | [
"cekit/module.py",
"cekit/descriptor/image.py",
"cekit/cli.py",
"cekit/descriptor/module.py",
"docs/build.rst",
"cekit/builders/buildah.py",
"cekit/descriptor/resource.py",
"cekit/descriptor/overrides.py",
"docs/descriptor.rst",
"cekit/descriptor/base.py",
"cekit/generator.py",
"cekit/descriptor/modules.py",
"cekit/builder.py"
]
| [
"cekit/module.py",
"cekit/descriptor/image.py",
"cekit/cli.py",
"cekit/descriptor/module.py",
"docs/build.rst",
"cekit/builders/buildah.py",
"cekit/descriptor/resource.py",
"cekit/descriptor/overrides.py",
"docs/descriptor.rst",
"cekit/descriptor/base.py",
"cekit/generator.py",
"cekit/descriptor/modules.py",
"cekit/builder.py"
]
|
|
sangoma__ursine-12 | b9523c22a724b42e84e2e3093cb02b801e03fa70 | 2018-04-27 21:56:42 | b9523c22a724b42e84e2e3093cb02b801e03fa70 | diff --git a/ursine/uri.py b/ursine/uri.py
index e7feacc..57be602 100644
--- a/ursine/uri.py
+++ b/ursine/uri.py
@@ -200,6 +200,16 @@ class URI:
else:
return uri
+ def short_uri(self):
+ if self.user and self.password:
+ user = f'{self.user}:{self.password}@'
+ elif self.user:
+ user = f'{self.user}@'
+ else:
+ user = ''
+
+ return f'{self.scheme}:{user}{self.host}:{self.port}'
+
def __repr__(self):
return f'{self.__class__.__name__}({self})'
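For illustration, the `short_uri` logic added in the patch above, restated as a standalone function (a sketch, not the library's `URI` class; the example ports follow the record's tests):

```python
def short_uri(scheme, host, port, user=None, password=None):
    # Userinfo is included only when present, mirroring URI.short_uri().
    if user and password:
        userinfo = f'{user}:{password}@'
    elif user:
        userinfo = f'{user}@'
    else:
        userinfo = ''
    return f'{scheme}:{userinfo}{host}:{port}'

print(short_uri('sip', 'localhost', 5060))          # sip:localhost:5060
print(short_uri('sip', 'localhost', 5060, 'jdoe'))  # sip:jdoe@localhost:5060
```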
| Missing `short_uri`
aiosip needs this method to generate URI addresses. | sangoma/ursine | diff --git a/tests/test_uri.py b/tests/test_uri.py
index d74630d..36a8806 100644
--- a/tests/test_uri.py
+++ b/tests/test_uri.py
@@ -29,6 +29,19 @@ def test_to_str(uri, expect):
assert str(URI(uri)) == expect
[email protected]('uri,expect', [
+ ('sip:localhost', 'sip:localhost:5060'),
+ ('sips:localhost', 'sips:localhost:5061'),
+ ('<sip:localhost>', 'sip:localhost:5060'),
+ (
+ 'John Doe <sip:localhost:5080?x=y&a=b>',
+ 'sip:localhost:5080',
+ )
+])
+def test_to_short_uri(uri, expect):
+ assert URI(uri).short_uri() == expect
+
+
@pytest.mark.parametrize('uri,expect', [
('sip:localhost', 'URI(sip:localhost:5060;transport=udp)'),
('sips:localhost', 'URI(sips:localhost:5061;transport=tcp)'),
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"Pipfile"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
multidict==5.2.0
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
-e git+https://github.com/sangoma/ursine.git@b9523c22a724b42e84e2e3093cb02b801e03fa70#egg=ursine
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: ursine
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- multidict==5.2.0
prefix: /opt/conda/envs/ursine
| [
"tests/test_uri.py::test_to_short_uri[sip:localhost-sip:localhost:5060]",
"tests/test_uri.py::test_to_short_uri[sips:localhost-sips:localhost:5061]",
"tests/test_uri.py::test_to_short_uri[<sip:localhost>-sip:localhost:5060]",
"tests/test_uri.py::test_to_short_uri[John"
]
| []
| [
"tests/test_uri.py::test_invalid[sip:localhost:port]",
"tests/test_uri.py::test_invalid[sip:localhost:0]",
"tests/test_uri.py::test_invalid[sip:localhost:70000]",
"tests/test_uri.py::test_invalid[sip:localhost?]",
"tests/test_uri.py::test_invalid[sip:localhost;]",
"tests/test_uri.py::test_invalid[sip:localhost&]",
"tests/test_uri.py::test_to_str[sip:localhost-sip:localhost:5060;transport=udp]",
"tests/test_uri.py::test_to_str[sips:localhost-sips:localhost:5061;transport=tcp]",
"tests/test_uri.py::test_to_str[<sip:localhost>-sip:localhost:5060;transport=udp]",
"tests/test_uri.py::test_to_str[John",
"tests/test_uri.py::test_repr[sip:localhost-URI(sip:localhost:5060;transport=udp)]",
"tests/test_uri.py::test_repr[sips:localhost-URI(sips:localhost:5061;transport=tcp)]",
"tests/test_uri.py::test_equality[sip:localhost-sip:localhost]",
"tests/test_uri.py::test_equality[sip:localhost-sip:localhost;transport=udp]",
"tests/test_uri.py::test_equality[<sip:localhost>-sip:localhost]",
"tests/test_uri.py::test_equality[Alice",
"tests/test_uri.py::test_equality[SIP:localhost-sip:localhost]",
"tests/test_uri.py::test_equality[<sip:localhost>;tag=foo-sip:localhost;tag=foo]",
"tests/test_uri.py::test_equality[<sip:localhost>",
"tests/test_uri.py::test_inequality[sip:localhost-sips:localhost]",
"tests/test_uri.py::test_inequality[Bob",
"tests/test_uri.py::test_inequality[Alice",
"tests/test_uri.py::test_inequality[sip:remotehost-sip:localhost]",
"tests/test_uri.py::test_build[kwargs0-sip:localhost]",
"tests/test_uri.py::test_build[kwargs1-sip:localhost;transport=tcp]",
"tests/test_uri.py::test_build[kwargs2-sips:[::1]:5080;maddr=[::dead:beef]?x=y&a=]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-user-jdoe-sip:jdoe@localhost]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost;transport=tcp-scheme-sips-sips:localhost:5060]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-port-5080-sip:localhost:5080]",
"tests/test_uri.py::test_modified_uri_creation[sip:jdoe@localhost-user-None-sip:localhost]",
"tests/test_uri.py::test_modified_uri_creation[\"Mark\"",
"tests/test_uri.py::test_modified_uri_creation[sip:user:pass@localhost-user-None-sip:localhost]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-user-user:pass-sip:user:pass@localhost]",
"tests/test_uri.py::test_modified_uri_creation[sip:alice@localhost-password-pass-sip:alice:pass@localhost]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-transport-tcp-sip:localhost;transport=tcp]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-tag-bler-sip:localhost;transport=udp;tag=bler]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-parameters-new10-sip:localhost;maddr=[::1];foo=bar;x=]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-headers-new11-sip:localhost?ahhhh=&foo=bar]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-parameters-new12-sip:localhost;maddr=[::1];foo=bar;x=]",
"tests/test_uri.py::test_modified_uri_creation[sip:localhost-headers-new13-sip:localhost?ahhhh=&foo=bar]",
"tests/test_uri.py::test_tag_generation[sip:localhost-None-None]",
"tests/test_uri.py::test_tag_generation[sip:localhost-5654-5654]",
"tests/test_uri.py::test_tag_generation[sip:localhost;tag=2000-5654-5654]",
"tests/test_uri.py::test_tag_generation[sip:localhost;tag=2ace-None-2ace]"
]
| []
| Apache License 2.0 | 2,458 | [
"ursine/uri.py"
]
| [
"ursine/uri.py"
]
|
|
NeurodataWithoutBorders__pynwb-501 | 6f1c065fc321b8b9669935f0465b6a6ff24c087e | 2018-04-27 23:13:52 | 6d2cc9f8ed3ab1a67e2da2fb4fec77029aed2215 | diff --git a/src/pynwb/behavior.py b/src/pynwb/behavior.py
index e3cc84de..d73adcf1 100644
--- a/src/pynwb/behavior.py
+++ b/src/pynwb/behavior.py
@@ -102,7 +102,7 @@ class BehavioralEvents(MultiContainerInterface):
'get': 'get_timeseries',
'create': 'create_timeseries',
'type': TimeSeries,
- 'attr': 'timeseries'
+ 'attr': 'time_series'
}
@@ -120,7 +120,7 @@ class BehavioralTimeSeries(MultiContainerInterface):
'get': 'get_timeseries',
'create': 'create_timeseries',
'type': TimeSeries,
- 'attr': 'timeseries'
+ 'attr': 'time_series'
}
@@ -135,7 +135,7 @@ class PupilTracking(MultiContainerInterface):
'get': 'get_timeseries',
'create': 'create_timeseries',
'type': TimeSeries,
- 'attr': 'timeseries'
+ 'attr': 'time_series'
}
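The one-word change above matters because the write machinery looks containers up by the configured attribute name. A toy sketch of that failure mode (hypothetical classes, not pynwb's real `ObjectMapper`):

```python
class Mapper:
    """Sketch of a writer that serializes via a configured attribute name."""
    def write(self, obj, attr_name):
        # A missing attribute silently yields nothing to write.
        return getattr(obj, attr_name, None)

class BehavioralEvents:
    def __init__(self):
        self.time_series = {}  # containers actually live here

be = BehavioralEvents()
be.time_series['ts'] = [1.0, 2.0]
mapper = Mapper()
print(mapper.write(be, 'timeseries'))   # None: wrong attr name, data silently lost
print(mapper.write(be, 'time_series'))  # {'ts': [1.0, 2.0]}
```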
| BehavioralEvents does not write to hdf5 file
Using minimal example below to create an nwb file and a BehavioralEvents time series, groups for the processing module and for the BehavioralEvents container are created. However, the data is missing from the hdf5 file after writing the data.
Can be fixed by changing 'timeseries' in BehavioralEvents attr to 'time_series'
- ObjectMapper is reading from the wrong attribute.
### Steps to Reproduce
```
from datetime import datetime
import numpy as np
from pynwb.behavior import BehavioralEvents
from pynwb import NWBFile, NWBHDF5IO, TimeSeries
nwbfile = NWBFile(source='nwbfile_source',
session_description='nwb_file_session_description',
identifier='0',
session_start_time=datetime(2017, 5, 4, 1, 1, 1))
ts = TimeSeries(name='behavioral_events_name', source='behavioral_events_source',
data=[1.0,2.0,3.0,4.0],
                timestamps=[5.0,6.0,7.0,8.0],
unit='ms',
resolution=np.nan,
conversion=np.nan)
behavioral_events = BehavioralEvents('Behavioral_Events_source',name='BehavioralEvents_name')
behavioral_events.add_timeseries(ts)
behavior_module = nwbfile.create_processing_module('behavioral_module_name',
'behavioral_module_source',
'example module')
behavior_module.add_container(behavioral_events)
with NWBHDF5IO('test_behavior.nwb', mode='w') as io:
io.write(nwbfile)
```
### Environment
Please describe your environment according to the following bullet points.
- **Python Executable:** Python
- **Python Version:** Python 2.7
- **Operating System:** macOS
- **Pynwb Version:** 0.3.0
| NeurodataWithoutBorders/pynwb | diff --git a/tests/unit/pynwb_tests/test_behavior.py b/tests/unit/pynwb_tests/test_behavior.py
index 2a23b19d..985486b8 100644
--- a/tests/unit/pynwb_tests/test_behavior.py
+++ b/tests/unit/pynwb_tests/test_behavior.py
@@ -30,7 +30,7 @@ class BehavioralEventsConstructor(unittest.TestCase):
bE = BehavioralEvents('test_bE', ts)
self.assertEqual(bE.source, 'test_bE')
- self.assertEqual(bE.timeseries['test_ts'], ts)
+ self.assertEqual(bE.time_series['test_ts'], ts)
class BehavioralTimeSeriesConstructor(unittest.TestCase):
@@ -39,7 +39,7 @@ class BehavioralTimeSeriesConstructor(unittest.TestCase):
bts = BehavioralTimeSeries('test_bts', ts)
self.assertEqual(bts.source, 'test_bts')
- self.assertEqual(bts.timeseries['test_ts'], ts)
+ self.assertEqual(bts.time_series['test_ts'], ts)
class PupilTrackingConstructor(unittest.TestCase):
@@ -48,7 +48,7 @@ class PupilTrackingConstructor(unittest.TestCase):
pt = PupilTracking('test_pt', ts)
self.assertEqual(pt.source, 'test_pt')
- self.assertEqual(pt.timeseries['test_ts'], ts)
+ self.assertEqual(pt.time_series['test_ts'], ts)
class EyeTrackingConstructor(unittest.TestCase):
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"tox"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2018.1.18
chardet==3.0.4
distlib==0.3.9
filelock==3.4.1
h5py==2.7.1
idna==2.6
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
numpy==1.14.2
packaging==21.3
platformdirs==2.4.0
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/NeurodataWithoutBorders/pynwb.git@6f1c065fc321b8b9669935f0465b6a6ff24c087e#egg=pynwb
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.7.2
requests==2.18.4
ruamel.yaml==0.15.37
six==1.17.0
toml==0.10.2
tomli==1.2.3
tox==3.28.0
typing_extensions==4.1.1
urllib3==1.22
virtualenv==20.17.1
zipp==3.6.0
| name: pynwb
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- certifi==2018.1.18
- chardet==3.0.4
- distlib==0.3.9
- filelock==3.4.1
- h5py==2.7.1
- idna==2.6
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- numpy==1.14.2
- packaging==21.3
- platformdirs==2.4.0
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.7.2
- requests==2.18.4
- ruamel-yaml==0.15.37
- six==1.17.0
- toml==0.10.2
- tomli==1.2.3
- tox==3.28.0
- typing-extensions==4.1.1
- urllib3==1.22
- virtualenv==20.17.1
- zipp==3.6.0
prefix: /opt/conda/envs/pynwb
| [
"tests/unit/pynwb_tests/test_behavior.py::BehavioralEventsConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::BehavioralTimeSeriesConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::PupilTrackingConstructor::test_init"
]
| []
| [
"tests/unit/pynwb_tests/test_behavior.py::SpatialSeriesConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::BehavioralEpochsConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::EyeTrackingConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::CompassDirectionConstructor::test_init",
"tests/unit/pynwb_tests/test_behavior.py::PositionConstructor::test_init"
]
| []
| BSD-3-Clause | 2,460 | [
"src/pynwb/behavior.py"
]
| [
"src/pynwb/behavior.py"
]
|
|
acorg__dark-matter-569 | bb55d862e66a923688c1f4db4fdfc9467c8210f4 | 2018-04-29 11:55:47 | bb55d862e66a923688c1f4db4fdfc9467c8210f4 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3aac224..4781caa 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,20 @@
+## 2.0.4 April 29, 2018
+
+* Added `--sampleIndexFilename` and `--pathogenIndexFilename` to
+ `proteins-to-pathogens.py`. These cause the writing of files containing
+ lines with an integer index, a space, then a sample or pathogen name.
+ These can be later used to identify the de-duplicated reads files for a
+ given sample or pathogen name.
+
+## 2.0.3 April 28, 2018
+
+* Added number of identical and positive amino acid matches to BLAST and
+ DIAMOND hsps.
+
+## 2.0.2 April 23, 2018
+
+* The protein grouper now de-duplicates read by id, not sequence.
+
## 2.0.1 April 23, 2018
* Fixed HTML tiny formatting error in `toHTML` method of `ProteinGrouper`
diff --git a/bin/proteins-to-pathogens.py b/bin/proteins-to-pathogens.py
index 004263a..3a3ce34 100755
--- a/bin/proteins-to-pathogens.py
+++ b/bin/proteins-to-pathogens.py
@@ -83,6 +83,22 @@ if __name__ == '__main__':
help=('An (optional) filename to write a pathogen-sample panel PNG '
'image to.'))
+ parser.add_argument(
+ '--sampleIndexFilename',
+ help=('An (optional) filename to write a sample index file to. '
+ 'Lines in the file will have an integer index, a space, and '
+ 'then the sample name. Only produced if --html is used '
+ '(because the pathogen-NNN-sample-MMM.fastq are only written '
+ 'in that case).'))
+
+ parser.add_argument(
+ '--pathogenIndexFilename',
+ help=('An (optional) filename to write a pathogen index file to. '
+ 'Lines in the file will have an integer index, a space, and '
+ 'then the pathogen name. Only produced if --html is used '
+ '(because the pathogen-NNN-sample-MMM.fastq are only written '
+ 'in that case).'))
+
parser.add_argument(
'--html', default=False, action='store_true',
help='If specified, output HTML instead of plain text.')
@@ -123,6 +139,16 @@ if __name__ == '__main__':
args = parser.parse_args()
+ if not args.html:
+ if args.sampleIndexFilename:
+ print('It does not make sense to use --sampleIndexFilename '
+ 'without also using --html', file=sys.stderr)
+ sys.exit(1)
+ if args.pathogenIndexFilename:
+ print('It does not make sense to use --pathogenIndexFilename '
+ 'without also using --html', file=sys.stderr)
+ sys.exit(1)
+
if args.proteinFastaFilename:
# Flatten lists of lists that we get from using both nargs='+' and
# action='append'. We use both because it allows people to use
@@ -153,6 +179,8 @@ if __name__ == '__main__':
if args.html:
print(grouper.toHTML(args.pathogenPanelFilename,
minProteinFraction=args.minProteinFraction,
- pathogenType=args.pathogenType))
+ pathogenType=args.pathogenType,
+ sampleIndexFilename=args.sampleIndexFilename,
+ pathogenIndexFilename=args.pathogenIndexFilename))
else:
print(grouper.toStr())
diff --git a/dark/__init__.py b/dark/__init__.py
index c8923d7..d0bf4db 100644
--- a/dark/__init__.py
+++ b/dark/__init__.py
@@ -5,4 +5,6 @@ if sys.version_info < (2, 7):
# Note that the version string must have the following format, otherwise it
# will not be found by the version() function in ../setup.py
-__version__ = '2.0.3'
+#
+# Remember to update ../CHANGELOG.md describing what's new in each version.
+__version__ = '2.0.4'
diff --git a/dark/proteins.py b/dark/proteins.py
index 6a1270f..8e6a850 100644
--- a/dark/proteins.py
+++ b/dark/proteins.py
@@ -178,6 +178,28 @@ class PathogenSampleFiles(object):
sampleIndex = self._samples[sampleName]
return self._readsFilenames[(pathogenIndex, sampleIndex)]
+ def writeSampleIndex(self, fp):
+ """
+ Write a file of sample indices and names, sorted by index.
+
+ @param fp: A file-like object, opened for writing.
+ """
+ print('\n'.join(
+ '%d %s' % (index, name) for (index, name) in
+ sorted((index, name) for (name, index) in self._samples.items())
+ ), file=fp)
+
+ def writePathogenIndex(self, fp):
+ """
+ Write a file of pathogen indices and names, sorted by index.
+
+ @param fp: A file-like object, opened for writing.
+ """
+ print('\n'.join(
+ '%d %s' % (index, name) for (index, name) in
+ sorted((index, name) for (name, index) in self._pathogens.items())
+ ), file=fp)
+
class ProteinGrouper(object):
"""
@@ -387,7 +409,8 @@ class ProteinGrouper(object):
return '\n'.join(result)
def toHTML(self, pathogenPanelFilename=None, minProteinFraction=0.0,
- pathogenType='viral'):
+ pathogenType='viral', sampleIndexFilename=None,
+ pathogenIndexFilename=None):
"""
Produce an HTML string representation of the pathogen summary.
@@ -398,6 +421,12 @@ class ProteinGrouper(object):
for that pathogen to be displayed.
@param pathogenType: A C{str} giving the type of the pathogen involved,
either 'bacterial' or 'viral'.
+ @param sampleIndexFilename: A C{str} filename to write a sample index
+ file to. Lines in the file will have an integer index, a space, and
+ then the sample name.
+ @param pathogenIndexFilename: A C{str} filename to write a pathogen
+ index file to. Lines in the file will have an integer index, a
+ space, and then the pathogen name.
@return: An HTML C{str} suitable for printing.
"""
if pathogenType not in ('bacterial', 'viral'):
@@ -411,6 +440,14 @@ class ProteinGrouper(object):
if pathogenPanelFilename:
self.pathogenPanel(pathogenPanelFilename)
+ if sampleIndexFilename:
+ with open(sampleIndexFilename, 'w') as fp:
+ self.pathogenSampleFiles.writeSampleIndex(fp)
+
+ if pathogenIndexFilename:
+ with open(pathogenIndexFilename, 'w') as fp:
+ self.pathogenSampleFiles.writePathogenIndex(fp)
+
pathogenNames = sorted(
pathogenName for pathogenName in self.pathogenNames
if self.maxProteinFraction(pathogenName) >= minProteinFraction)
@@ -494,7 +531,8 @@ class ProteinGrouper(object):
proteinFieldsDescription = [
'<p>',
- 'In all bullet point protein lists below, there are eight fields:',
+ 'In all bullet point protein lists below, there are the following '
+ 'fields:',
'<ol>',
'<li>Coverage fraction.</li>',
'<li>Median bit score.</li>',
| The protein grouper should write a plain text index of pathogen index and name
Right now, after running `proteins-to-pathogens.py`, we are left with files with names like `pathogen-219-sample-228.fastq`, but there is no simple way to match a pathogen name with its index. That information is in the `index.html` file but should also be in a text file so we know which file to look in to get reads when we only have a pathogen name (e.g., from a different pipeline run). The `toStr` method could perhaps print it? | acorg/dark-matter | diff --git a/test/test_proteins.py b/test/test_proteins.py
index ed8c49f..81bcccb 100644
--- a/test/test_proteins.py
+++ b/test/test_proteins.py
@@ -917,3 +917,35 @@ class TestPathogenSampleFiles(TestCase):
proteins['gi|327410| protein 77']['readLengths'])
self.assertEqual((2, 7),
proteins['gi|327409| ubiquitin']['readLengths'])
+
+ def testWriteSampleIndex(self):
+ """
+ The writeSampleIndex function must write a file with the expected
+ content.
+ """
+ pathogenSampleFiles = PathogenSampleFiles(None)
+ pathogenSampleFiles._samples = {
+ 'NEO11': 500,
+ 'NEO33': 24,
+ 'NEO66': 333,
+ }
+
+ fp = StringIO()
+ pathogenSampleFiles.writeSampleIndex(fp)
+ self.assertEqual('24 NEO33\n333 NEO66\n500 NEO11\n', fp.getvalue())
+
+ def testWritePathogenIndex(self):
+ """
+ The writePatogenIndex function must write a file with the expected
+ content.
+ """
+ pathogenSampleFiles = PathogenSampleFiles(None)
+ pathogenSampleFiles._pathogens = {
+ 'virus b': 4,
+ 'virus a': 3,
+ 'virus c': 9,
+ }
+
+ fp = StringIO()
+ pathogenSampleFiles.writePathogenIndex(fp)
+ self.assertEqual('3 virus a\n4 virus b\n9 virus c\n', fp.getvalue())
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 4
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | backcall==0.2.0
biopython==1.81
bz2file==0.98
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
cycler==0.11.0
-e git+https://github.com/acorg/dark-matter.git@bb55d862e66a923688c1f4db4fdfc9467c8210f4#egg=dark_matter
decorator==5.1.1
exceptiongroup==1.2.2
fonttools==4.38.0
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
ipython==7.34.0
jedi==0.19.2
kiwisolver==1.4.5
matplotlib==3.5.3
matplotlib-inline==0.1.6
numpy==1.21.6
packaging==24.0
parso==0.8.4
pexpect==4.9.0
pickleshare==0.7.5
Pillow==9.5.0
pluggy==1.2.0
prompt_toolkit==3.0.48
ptyprocess==0.7.0
pycparser==2.21
pyfaidx==0.8.1.3
Pygments==2.17.2
pyparsing==3.1.4
pytest==7.4.4
python-dateutil==2.9.0.post0
pyzmq==26.2.1
requests==2.31.0
simplejson==3.20.1
six==1.17.0
tomli==2.0.1
traitlets==5.9.0
typing_extensions==4.7.1
urllib3==2.0.7
wcwidth==0.2.13
zipp==3.15.0
| name: dark-matter
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- backcall==0.2.0
- biopython==1.81
- bz2file==0.98
- cffi==1.15.1
- charset-normalizer==3.4.1
- cycler==0.11.0
- decorator==5.1.1
- exceptiongroup==1.2.2
- fonttools==4.38.0
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- ipython==7.34.0
- jedi==0.19.2
- kiwisolver==1.4.5
- matplotlib==3.5.3
- matplotlib-inline==0.1.6
- numpy==1.21.6
- packaging==24.0
- parso==0.8.4
- pexpect==4.9.0
- pickleshare==0.7.5
- pillow==9.5.0
- pluggy==1.2.0
- prompt-toolkit==3.0.48
- ptyprocess==0.7.0
- pycparser==2.21
- pyfaidx==0.8.1.3
- pygments==2.17.2
- pyparsing==3.1.4
- pytest==7.4.4
- python-dateutil==2.9.0.post0
- pyzmq==26.2.1
- requests==2.31.0
- simplejson==3.20.1
- six==1.17.0
- tomli==2.0.1
- traitlets==5.9.0
- typing-extensions==4.7.1
- urllib3==2.0.7
- wcwidth==0.2.13
- zipp==3.15.0
prefix: /opt/conda/envs/dark-matter
| [
"test/test_proteins.py::TestPathogenSampleFiles::testWritePathogenIndex",
"test/test_proteins.py::TestPathogenSampleFiles::testWriteSampleIndex"
]
| [
"test/test_proteins.py::TestGetPathogenProteinCounts::testExpected",
"test/test_proteins.py::TestGetPathogenProteinCounts::testExpectedWithTwoFiles",
"test/test_proteins.py::TestProteinGrouper::testMaxProteinFraction",
"test/test_proteins.py::TestPathogenSampleFiles::testIdenticalReadsRemoved",
"test/test_proteins.py::TestPathogenSampleFiles::testOpenNotCalledOnRepeatedCall",
"test/test_proteins.py::TestPathogenSampleFiles::testProteinsSavedCorrectly",
"test/test_proteins.py::TestPathogenSampleFiles::testReadLengthsAdded"
]
| [
"test/test_proteins.py::TestSplitNames::testNestedBrackets",
"test/test_proteins.py::TestSplitNames::testNoBrackets",
"test/test_proteins.py::TestSplitNames::testNormalCase",
"test/test_proteins.py::TestSplitNames::testTwoSetsOfBrackets",
"test/test_proteins.py::TestSplitNames::testWhitespaceStripping",
"test/test_proteins.py::TestGetPathogenProteinCounts::testNone",
"test/test_proteins.py::TestProteinGrouper::testAssetDir",
"test/test_proteins.py::TestProteinGrouper::testDuplicatePathogenProteinSample",
"test/test_proteins.py::TestProteinGrouper::testNoAssetDir",
"test/test_proteins.py::TestProteinGrouper::testNoFiles",
"test/test_proteins.py::TestProteinGrouper::testNoFilesToStr",
"test/test_proteins.py::TestProteinGrouper::testNoRegex",
"test/test_proteins.py::TestProteinGrouper::testOneLineInEachOfTwoFilesDifferentPathogens",
"test/test_proteins.py::TestProteinGrouper::testOneLineInEachOfTwoFilesDifferentPathogensTitle",
"test/test_proteins.py::TestProteinGrouper::testOneLineInEachOfTwoFilesSamePathogen",
"test/test_proteins.py::TestProteinGrouper::testOneLineInEachOfTwoFilesSamePathogenTitle",
"test/test_proteins.py::TestProteinGrouper::testOneLineInOneFile",
"test/test_proteins.py::TestProteinGrouper::testOneLineInOneFileFASTQ",
"test/test_proteins.py::TestProteinGrouper::testOneLineInOneFileTitle",
"test/test_proteins.py::TestProteinGrouper::testOneLineInOneFileToStr",
"test/test_proteins.py::TestProteinGrouper::testTwoLinesInOneFileDifferentPathogens",
"test/test_proteins.py::TestProteinGrouper::testTwoLinesInOneFileSamePathogen",
"test/test_proteins.py::TestProteinGrouper::testTwoLinesInOneFileTitle",
"test/test_proteins.py::TestProteinGrouper::testUnknownFormat",
"test/test_proteins.py::TestProteinGrouper::testUnknownPathogenType",
"test/test_proteins.py::TestPathogenSampleFiles::testUnknownFormat"
]
| []
| MIT License | 2,462 | [
"bin/proteins-to-pathogens.py",
"dark/__init__.py",
"dark/proteins.py",
"CHANGELOG.md"
]
| [
"bin/proteins-to-pathogens.py",
"dark/__init__.py",
"dark/proteins.py",
"CHANGELOG.md"
]
|
|
demosense__demorepo-39 | c5e8e938224a7cf1cdffe51bf8755cd4f103dd3d | 2018-04-29 18:35:15 | 6dcff3ceada6957d9e14e646bde040c32ac8ac0f | diff --git a/README.md b/README.md
index c479d90..750f7a8 100644
--- a/README.md
+++ b/README.md
@@ -59,7 +59,7 @@ python3 -m demorepo [command] [options]
### run
```
-demorepo run [-h] [-t TARGETS] [-e ENV] [--reverse-targets] command
+demorepo run [-h] [-t TARGETS] [-e ENV] [--reverse-targets] [--stop-on-error] command
```
Execute a shell command for all projects.
@@ -78,12 +78,13 @@ optional arguments:
The format is VAR_NAME=VAR_VALUE. Multiple env vars
can be specified.
--reverse-targets Reverse the dependency order for projects
+ --stop-on-error Stops the execution if the command fails for a project
```
### run-stage
```
-demorepo stage [-h] [-e ENV] [-t TARGETS] [--reverse-targets] stage
+demorepo stage [-h] [-e ENV] [-t TARGETS] [--reverse-targets] [--stop-on-error] stage
```
Run the specified stage in the global and local config files.
@@ -102,6 +103,7 @@ optional arguments:
stage, separated by blank spaces (use quotes around
the string).
--reverse-targets Reverse the dependency order for projects
+ --stop-on-error Stops the execution if the stage fails for a project
```
### diff
diff --git a/src/demorepo/commands/command_run.py b/src/demorepo/commands/command_run.py
index f08edba..504024f 100644
--- a/src/demorepo/commands/command_run.py
+++ b/src/demorepo/commands/command_run.py
@@ -62,7 +62,7 @@ def _get_child_environ(env):
return child_environ
-def _run_targets(projects, paths, targets, env, *, stage=None, command=None):
+def _run_targets(projects, paths, targets, env, stop_on_error, *, stage=None, command=None):
errors = []
# Get scripts from stage scripts or paste the command
@@ -85,6 +85,8 @@ def _run_targets(projects, paths, targets, env, *, stage=None, command=None):
logger.info('')
+ # Interrupted captures the index in which the execution is interrupted
+ interrupted = -1
for t, script in scripts.items():
logger.info(strformat.hline)
@@ -112,6 +114,10 @@ def _run_targets(projects, paths, targets, env, *, stage=None, command=None):
if p.returncode != 0:
errors.append(t)
+ # Capture the index of the target
+ if stop_on_error:
+ interrupted = list(scripts.keys()).index(t)
+ break
logger.info('')
@@ -121,16 +127,27 @@ def _run_targets(projects, paths, targets, env, *, stage=None, command=None):
index = 0
for t in scripts.keys():
index += 1
+
+ # Color depending on the error
msg = "DONE" if t not in errors else "ERROR"
color = strformat.GREEN if msg == "DONE" else strformat.RED
+
+ # Color has a third option if interrupted
+ if interrupted != -1 and index > interrupted+1:
+ msg = "SKIPPED"
+ color = strformat.YELLOW
+
logger.info(" {}. {} {}".format(index, t, msg), color=color)
logger.info("")
- color = strformat.GREEN if len(errors) == 0 else strformat.RED
- logger.info("----- {} scripts runned, {} successful, {} errors -----\n".format(len(scripts),
- len(scripts)-len(errors), len(errors)), color=color)
+ if interrupted == -1:
+ color = strformat.GREEN if len(errors) == 0 else strformat.RED
+ logger.info("----- {} scripts runned, {} successful, {} errors -----\n".format(len(scripts),
+ len(scripts)-len(errors), len(errors)), color=color)
+ else:
+ logger.info("----- Interrupted by failed {} -----\n".format(mode), color=strformat.YELLOW)
- # Exit with error if needed
+# Exit with error if needed
if len(errors) > 0:
sys.exit(-1)
@@ -140,19 +157,21 @@ def stage(args):
targets = args.get('targets', None)
reverse_targets = args['reverse_targets']
env = args.get('env')
+ stop_on_error = args['stop_on_error']
projects = config.get_projects()
dependencies = config.get_projects_dependencies()
paths = config.get_projects_paths()
- targets = get_targets(projects, dependencies, targets, reverse_targets)
- _run_targets(projects, paths, targets, env, stage=stage)
+ targets = get_targets(projects, dependencies, targets, reverse_targets, stop_on_error)
+ _run_targets(projects, paths, targets, env, stop_on_error, stage=stage)
def run(args):
command = args['command']
targets = args.get('targets', None)
reverse_targets = args['reverse_targets']
+ stop_on_error = args['stop_on_error']
env = args.get('env')
projects = config.get_projects()
@@ -160,4 +179,4 @@ def run(args):
paths = config.get_projects_paths()
targets = get_targets(projects, dependencies, targets, reverse_targets)
- _run_targets(projects, paths, targets, env, command=command)
+ _run_targets(projects, paths, targets, env, stop_on_error, command=command)
diff --git a/src/demorepo/parser/__init__.py b/src/demorepo/parser/__init__.py
index f1cbc97..61104d2 100644
--- a/src/demorepo/parser/__init__.py
+++ b/src/demorepo/parser/__init__.py
@@ -50,6 +50,8 @@ parser_stage.add_argument('-t', '--targets', help='A list of target project name
'separated by blank spaces (use quotes around the string).')
parser_stage.add_argument('--reverse-targets', action='store_true',
help='Reverse the dependency order for projects')
+parser_stage.add_argument('--stop-on-error', action='store_true',
+ help='Stops the execution if the stage fails for a project')
#
# demorepo run
@@ -68,6 +70,8 @@ parser_run.add_argument('-e', '--env', action='append',
# 'target projects too.')
parser_run.add_argument('--reverse-targets', action='store_true',
help='Reverse the dependency order for projects')
+parser_run.add_argument('--stop-on-error', action='store_true',
+ help='Stops the execution if the command fails for a project')
#
# End commands
| Stop stage/command when errors happen
Some stages should be able to define a short circuit strategy when a particular script fails.
Use Case: Specifically in deploy scripts with dependences, the script should stop to avoid resources being deployed when their dependencies failed. | demosense/demorepo | diff --git a/src/tests/demorepo.yml b/src/tests/demorepo.yml
index 5d38913..eef7802 100644
--- a/src/tests/demorepo.yml
+++ b/src/tests/demorepo.yml
@@ -19,15 +19,15 @@ projects:
- p2
stages:
- test-p2:
+ test-all:
script: ./test.sh
projects:
+ - p1
- p2
- test-all:
+ test-p2:
script: ./test.sh
projects:
- - p1
- p2
test-all-p3-fails:
@@ -37,6 +37,13 @@ stages:
- p2
- p3
+ test-all-p1-interrupts:
+ script: ./fail.sh
+ projects:
+ - p1
+ - p2
+ - p3
+
test-all-local-p1:
script: ./deploy.sh
projects:
diff --git a/src/tests/test_parser.py b/src/tests/test_parser.py
index c2ef98f..ff287f4 100644
--- a/src/tests/test_parser.py
+++ b/src/tests/test_parser.py
@@ -6,8 +6,8 @@ from demorepo import parser
defaults = dict(silent=False, log_path=None, working_mode=None)
lgc_defaults = dict(ci_tool='gitlab', ci_url=None)
-run_defaults = dict(targets=None, env=None, reverse_targets=False)
-stage_defaults = dict(targets=None, env=None, reverse_targets=False)
+run_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False)
+stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False)
@pytest.mark.parametrize("argv,expected,exit", [
@@ -56,8 +56,9 @@ stage_defaults = dict(targets=None, env=None, reverse_targets=False)
),
# run opts
(
- ['demorepo', 'run', 'ls', '--targets', 'target1', '--reverse-targets', '--env', 'cosa=1234'],
- dict(defaults, working_mode='run', command='ls', targets='target1', reverse_targets=True, env=['cosa=1234']),
+ ['demorepo', 'run', 'ls', '--targets', 'target1', '--reverse-targets', '--stop-on-error', '--env', 'cosa=1234'],
+ dict(defaults, working_mode='run', command='ls', targets='target1',
+ reverse_targets=True, stop_on_error=True, env=['cosa=1234']),
None,
),
# stage required
@@ -74,9 +75,9 @@ stage_defaults = dict(targets=None, env=None, reverse_targets=False)
),
# stage opts
(
- ['demorepo', 'stage', 'deploy', '--targets', 'target1', '--reverse-targets', '--env', 'cosa=1234'],
+ ['demorepo', 'stage', 'deploy', '--targets', 'target1', '--reverse-targets', '--stop-on-error', '--env', 'cosa=1234'],
dict(defaults, working_mode='stage', stage='deploy',
- targets='target1', reverse_targets=True, env=['cosa=1234']),
+ targets='target1', reverse_targets=True, stop_on_error=True, env=['cosa=1234']),
None,
),
])
diff --git a/src/tests/test_stage.py b/src/tests/test_stage.py
index 82a5b12..b93a675 100644
--- a/src/tests/test_stage.py
+++ b/src/tests/test_stage.py
@@ -10,11 +10,12 @@ default_args = {
'stage': 'test',
'targets': None,
'reverse_targets': False,
+ 'stop_on_error': True,
'recursive_deps': False
}
-def test_run_global_all(setup):
+def test_run_all(setup):
args = default_args.copy()
args["stage"] = "test-all"
@@ -26,7 +27,7 @@ def test_run_global_all(setup):
assert mock_dict['mock_subprocess_Popen'] == 2
-def test_run_global_all_filter(setup):
+def test_run_all_filter(setup):
args = default_args.copy()
args["stage"] = "test-all"
@@ -39,7 +40,7 @@ def test_run_global_all_filter(setup):
assert mock_dict['mock_subprocess_Popen'] == 1
-def test_run_global_all_filter_empty(setup):
+def test_run_all_filter_empty(setup):
args = default_args.copy()
args["stage"] = "test-all"
@@ -80,6 +81,22 @@ def test_run_global_all_p3_fails(setup):
assert mock_dict['mock_sys_exit'] == 1
+def test_run_all_p1_interrupts(setup):
+
+ args = default_args.copy()
+ args["stage"] = "test-all-p1-interrupts"
+
+ mock_dict['mock_subprocess_Popen'] = 0
+ mock_dict['mock_sys_exit'] = 0
+ setup.setattr(subprocess, 'Popen', mock_subprocess_Popen)
+ setup.setattr(sys, 'exit', mock_sys_exit)
+
+ commands.stage(args)
+
+ assert mock_dict['mock_subprocess_Popen'] == 1
+ assert mock_dict['mock_sys_exit'] == 1
+
+
def test_run_all_local_p1(setup):
# TODO: How can we really assert that this is really calling the local script?
args = default_args.copy()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 3
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
chardet==3.0.4
-e git+https://github.com/demosense/demorepo.git@c5e8e938224a7cf1cdffe51bf8755cd4f103dd3d#egg=demorepo
gitdb==4.0.12
gitdb2==4.0.2
GitPython==2.1.9
idna==2.6
PyYAML==3.12
requests==2.18.4
smmap==5.0.2
urllib3==1.22
| name: demorepo
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- chardet==3.0.4
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==2.1.9
- idna==2.6
- pyyaml==3.12
- requests==2.18.4
- smmap==5.0.2
- urllib3==1.22
prefix: /opt/conda/envs/demorepo
| [
"src/tests/test_parser.py::test_parser[argv5-expected5-None]",
"src/tests/test_parser.py::test_parser[argv7-expected7-None]",
"src/tests/test_parser.py::test_parser[argv8-expected8-None]",
"src/tests/test_parser.py::test_parser[argv10-expected10-None]"
]
| [
"src/tests/test_stage.py::test_run_all",
"src/tests/test_stage.py::test_run_all_filter",
"src/tests/test_stage.py::test_run_all_filter_empty",
"src/tests/test_stage.py::test_run_global_p2",
"src/tests/test_stage.py::test_run_global_all_p3_fails",
"src/tests/test_stage.py::test_run_all_p1_interrupts",
"src/tests/test_stage.py::test_run_all_local_p1"
]
| [
"src/tests/test_parser.py::test_parser[argv0-expected0-None]",
"src/tests/test_parser.py::test_parser[argv1-expected1-SystemExit]",
"src/tests/test_parser.py::test_parser[argv2-expected2-None]",
"src/tests/test_parser.py::test_parser[argv3-expected3-None]",
"src/tests/test_parser.py::test_parser[argv4-expected4-SystemExit]",
"src/tests/test_parser.py::test_parser[argv6-expected6-SystemExit]",
"src/tests/test_parser.py::test_parser[argv9-expected9-SystemExit]"
]
| []
| null | 2,463 | [
"src/demorepo/parser/__init__.py",
"README.md",
"src/demorepo/commands/command_run.py"
]
| [
"src/demorepo/parser/__init__.py",
"README.md",
"src/demorepo/commands/command_run.py"
]
|
|
demosense__demorepo-40 | 6dcff3ceada6957d9e14e646bde040c32ac8ac0f | 2018-04-29 19:09:10 | 6dcff3ceada6957d9e14e646bde040c32ac8ac0f | diff --git a/src/demorepo/commands/command_run.py b/src/demorepo/commands/command_run.py
index 504024f..2e725b5 100644
--- a/src/demorepo/commands/command_run.py
+++ b/src/demorepo/commands/command_run.py
@@ -171,6 +171,7 @@ def run(args):
command = args['command']
targets = args.get('targets', None)
reverse_targets = args['reverse_targets']
+ inverse_dependencies = args['inverse_dependencies']
stop_on_error = args['stop_on_error']
env = args.get('env')
@@ -178,5 +179,6 @@ def run(args):
dependencies = config.get_projects_dependencies()
paths = config.get_projects_paths()
- targets = get_targets(projects, dependencies, targets, reverse_targets)
+ targets = get_targets(projects, dependencies, targets, reverse_targets, inverse_dependencies)
_run_targets(projects, paths, targets, env, stop_on_error, command=command)
+
diff --git a/src/demorepo/commands/targets/__init__.py b/src/demorepo/commands/targets/__init__.py
index dc90bd6..77fa852 100644
--- a/src/demorepo/commands/targets/__init__.py
+++ b/src/demorepo/commands/targets/__init__.py
@@ -1,10 +1,12 @@
+from collections import defaultdict
+
from demorepo import logger
__all__ = ['get_targets']
-def get_targets(targets, dependencies, targets_filter, reverse_targets=False, recursive_deps=False):
+def get_targets(targets, dependencies, targets_filter, reverse_targets=False, inverse_dependencies=False):
if targets_filter is not None:
# If filter is empty it means that we will return an empty list so nothing gets run
@@ -21,11 +23,15 @@ def get_targets(targets, dependencies, targets_filter, reverse_targets=False, re
"Unrecognized project {} in --targets".format(t))
targets = [t for t in targets if t in targets_filter]
+ # Set reverse dependency targets if inverse_dependencies
+ if inverse_dependencies:
+ targets = _add_inverse_dependencies(targets, dependencies)
+
# Set target order
targets = _order_targets(targets, dependencies)
# Reverse if specified in args
- if (reverse_targets):
+ if reverse_targets:
targets.reverse()
return targets
@@ -49,6 +55,27 @@ def _order_targets(targets, dependencies):
return list(ordered)
-def _add_deps(targets, dependencies, projects):
- # TODO: Implement inverse dependencies with projects
- pass
+def _add_inverse_dependencies(targets, dependencies):
+ # Process the dict of dependents (projects which are dependent of a project; inverse of dependencies)
+ dependents = defaultdict(set)
+ for project in dependencies:
+ for dependency in dependencies[project]:
+ dependents[dependency].add(project)
+
+ # This recursive function add the dependencies in the set s when e is marked as modified
+ def add_dependencies(e, s):
+ new_elements = [x for x in dependents[e] if x not in s]
+ for n in new_elements:
+ s.add(n)
+ add_dependencies(n, s)
+
+ # Now, for each modified project, get its name (to use the dependents dict) and add its dependencies
+ s = set()
+ for m in targets:
+ s.add(m)
+ # only if project is in dependents list (recursive option implemented for this project type)
+ if m in dependents:
+ add_dependencies(m, s)
+
+ # Return the extended list of target projects
+ return list(s)
diff --git a/src/demorepo/config/__init__.py b/src/demorepo/config/__init__.py
index 953ce67..403ee2f 100644
--- a/src/demorepo/config/__init__.py
+++ b/src/demorepo/config/__init__.py
@@ -82,7 +82,7 @@ def _init_config():
_config['stages'] = stages
else:
- logger.error("Error: Unable to find config.yml. Is this a demorepo?")
+ logger.error("Error: Unable to find demorepo.yml. Is this a demorepo?")
sys.exit(-1)
diff --git a/src/demorepo/parser/__init__.py b/src/demorepo/parser/__init__.py
index 61104d2..c01888a 100644
--- a/src/demorepo/parser/__init__.py
+++ b/src/demorepo/parser/__init__.py
@@ -43,9 +43,9 @@ parser_stage.add_argument(
parser_stage.add_argument('-e', '--env', action='append', help='Optional variables passed to the target stage script.'
' The format is VAR_NAME=VAR_VALUE. '
'Multiple env vars can be specified.')
-# parser_stage.add_argument('-r', '--recursive-deps', action='store_true',
-# help='Find projects recursively which depends on target projects and include them as '
-# 'target projects too.')
+parser_stage.add_argument('-r', '--inverse-dependencies', action='store_true',
+ help='Find projects which depends on target projects and include them as '
+ 'target projects too.')
parser_stage.add_argument('-t', '--targets', help='A list of target project names to run the provided stage, '
'separated by blank spaces (use quotes around the string).')
parser_stage.add_argument('--reverse-targets', action='store_true',
@@ -65,9 +65,9 @@ parser_run.add_argument('-e', '--env', action='append',
help='Optional variables passed to the target stage script.'
' The format is VAR_NAME=VAR_VALUE. '
'Multiple env vars can be specified.')
-# parser_run.add_argument('-r', '--recursive-deps', action='store_true',
-# help='Find projects recursively which depends on target projects and include them as '
-# 'target projects too.')
+parser_run.add_argument('-r', '--inverse-dependencies', action='store_true',
+ help='Find projects which depends on target projects and include them as '
+ 'target projects too.')
parser_run.add_argument('--reverse-targets', action='store_true',
help='Reverse the dependency order for projects')
parser_run.add_argument('--stop-on-error', action='store_true',
| Reimplement recursive-deps | demosense/demorepo | diff --git a/src/tests/test_parser.py b/src/tests/test_parser.py
index ff287f4..585f478 100644
--- a/src/tests/test_parser.py
+++ b/src/tests/test_parser.py
@@ -6,8 +6,8 @@ from demorepo import parser
defaults = dict(silent=False, log_path=None, working_mode=None)
lgc_defaults = dict(ci_tool='gitlab', ci_url=None)
-run_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False)
-stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False)
+run_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False, inverse_dependencies=False)
+stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_error=False, inverse_dependencies=False)
@pytest.mark.parametrize("argv,expected,exit", [
@@ -45,7 +45,7 @@ stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_err
# run required
(
['demorepo', 'run', 'ls'],
- dict(defaults, working_mode='run', **run_defaults, command='ls'),
+ dict(defaults, working_mode='run', command='ls', **run_defaults),
None,
),
# run required fail
@@ -56,15 +56,16 @@ stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_err
),
# run opts
(
- ['demorepo', 'run', 'ls', '--targets', 'target1', '--reverse-targets', '--stop-on-error', '--env', 'cosa=1234'],
+ ['demorepo', 'run', 'ls', '--targets', 'target1', '--reverse-targets',
+ '--stop-on-error', '--env', 'cosa=1234', '--inverse-dependencies'],
dict(defaults, working_mode='run', command='ls', targets='target1',
- reverse_targets=True, stop_on_error=True, env=['cosa=1234']),
+ reverse_targets=True, stop_on_error=True, env=['cosa=1234'], inverse_dependencies=True),
None,
),
# stage required
(
['demorepo', 'stage', 'deploy'],
- dict(defaults, working_mode='stage', **stage_defaults, stage='deploy'),
+ dict(defaults, working_mode='stage', stage='deploy', **stage_defaults),
None,
),
# stage required fail
@@ -75,9 +76,10 @@ stage_defaults = dict(targets=None, env=None, reverse_targets=False, stop_on_err
),
# stage opts
(
- ['demorepo', 'stage', 'deploy', '--targets', 'target1', '--reverse-targets', '--stop-on-error', '--env', 'cosa=1234'],
- dict(defaults, working_mode='stage', stage='deploy',
- targets='target1', reverse_targets=True, stop_on_error=True, env=['cosa=1234']),
+ ['demorepo', 'stage', 'deploy', '--targets', 'target1', '--reverse-targets',
+ '--stop-on-error', '--env', 'cosa=1234', '--inverse-dependencies'],
+ dict(defaults, working_mode='stage', stage='deploy', targets='target1',
+ reverse_targets=True, stop_on_error=True, env=['cosa=1234'], inverse_dependencies=True),
None,
),
])
diff --git a/src/tests/test_targets.py b/src/tests/test_targets.py
index 52a46ac..89e6167 100644
--- a/src/tests/test_targets.py
+++ b/src/tests/test_targets.py
@@ -13,13 +13,14 @@ from . import raises
"""
[email protected]("projects, dependencies, targets, reverse_targets, expected, exception", [
[email protected]("projects, dependencies, targets, reverse_targets, inverse_dependencies, expected, exception", [
# No filter no deps
(
["target_A", "target_B", "target_C"],
dict(target_A=[], target_B=[], target_C=[]),
"target_A target_B target_C",
False,
+ False,
["target_A", "target_B", "target_C"],
None
),
@@ -29,6 +30,7 @@ from . import raises
dict(target_A=[], target_B=[], target_C=[]),
"target_A target_C",
False,
+ False,
["target_A", "target_C"],
None
),
@@ -38,6 +40,7 @@ from . import raises
dict(target_A=[], target_B=[], target_C=[]),
"",
False,
+ False,
[],
None
),
@@ -47,6 +50,7 @@ from . import raises
dict(target_A=[], target_B=[], target_C=[]),
"target_A target_D",
False,
+ False,
None,
Exception
),
@@ -56,6 +60,7 @@ from . import raises
dict(target_A=["target_B"], target_B=[], target_C=["target_A"]),
"target_A target_B target_C",
False,
+ False,
["target_B", "target_A", "target_C"],
None
),
@@ -65,6 +70,7 @@ from . import raises
dict(target_A=["target_B"], target_B=[], target_C=["target_A"]),
"target_A target_C",
False,
+ False,
["target_A", "target_C"],
None
),
@@ -74,6 +80,7 @@ from . import raises
dict(target_A=["target_B"], target_B=[], target_C=["target_A"]),
"target_A target_B target_C",
True,
+ False,
["target_C", "target_A", "target_B"],
None
),
@@ -83,9 +90,30 @@ from . import raises
dict(target_A=["target_B"], target_B=[], target_C=["target_A"]),
"target_B target_C",
True,
+ False,
["target_C", "target_B"],
None
),
+ # No reverse order, inverse deps (single iteration A -> B)
+ (
+ ["target_A", "target_B", "target_C"],
+ dict(target_A=["target_B"], target_B=[], target_C=[]),
+ "target_B",
+ False,
+ True,
+ ["target_B", "target_A"],
+ None
+ ),
+ # Reverse order, inverse deps (multiple iterations B -> C -> A)
+ (
+ ["target_A", "target_B", "target_C"],
+ dict(target_A=["target_C"], target_B=[], target_C=["target_B"]),
+ "target_B",
+ True,
+ True,
+ ["target_A", "target_C", "target_B"],
+ None
+ ),
# Cycle
(
["target_A", "target_B", "target_C"],
@@ -93,11 +121,12 @@ from . import raises
"target_A"], target_C=["target_A"]),
"target_A target_B target_C",
False,
+ False,
None,
Exception
),
])
-def test_get_targets_plain(projects, dependencies, targets, reverse_targets, expected, exception):
+def test_get_targets_plain(projects, dependencies, targets, reverse_targets, inverse_dependencies, expected, exception):
with raises(exception):
- result = get_targets(projects, dependencies, targets, reverse_targets)
+ result = get_targets(projects, dependencies, targets, reverse_targets, inverse_dependencies)
assert result == expected
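The two new `inverse_dependencies` cases above (single iteration `A -> B`, multiple iterations `B -> C -> A`) can be sketched with a minimal, hypothetical re-implementation of `get_targets`; the signature and expectations are taken from the parametrized test, not from the real `demorepo` code, and cycle detection is omitted:

```python
def get_targets(projects, dependencies, targets, reverse_targets=False,
                inverse_dependencies=False):
    """Hypothetical sketch of the selection logic the tests above exercise."""
    selected = targets.split()
    unknown = [t for t in selected if t not in projects]
    if unknown:
        raise Exception("unknown targets: {}".format(unknown))
    if inverse_dependencies:
        # iteratively pull in every project that depends on a selected one
        changed = True
        while changed:
            changed = False
            for project, deps in dependencies.items():
                if project not in selected and any(d in selected for d in deps):
                    selected.append(project)
                    changed = True
    ordered = []

    def visit(project):
        # emit dependencies before their dependents (no cycle detection here)
        for dep in dependencies.get(project, []):
            if dep in selected and dep not in ordered:
                visit(dep)
        if project not in ordered:
            ordered.append(project)

    for project in selected:
        visit(project)
    return list(reversed(ordered)) if reverse_targets else ordered
```

With `inverse_dependencies=True` the selection keeps growing until no project depending on an already-selected project remains outside, which is why targeting `target_B` alone expands to `["target_B", "target_A"]` in the first new case.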
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 4
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
chardet==3.0.4
-e git+https://github.com/demosense/demorepo.git@6dcff3ceada6957d9e14e646bde040c32ac8ac0f#egg=demorepo
gitdb==4.0.12
gitdb2==4.0.2
GitPython==2.1.9
idna==2.6
PyYAML==3.12
requests==2.18.4
smmap==5.0.2
urllib3==1.22
| name: demorepo
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- chardet==3.0.4
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==2.1.9
- idna==2.6
- pyyaml==3.12
- requests==2.18.4
- smmap==5.0.2
- urllib3==1.22
prefix: /opt/conda/envs/demorepo
| [
"src/tests/test_parser.py::test_parser[argv5-expected5-None]",
"src/tests/test_parser.py::test_parser[argv7-expected7-None]",
"src/tests/test_parser.py::test_parser[argv8-expected8-None]",
"src/tests/test_parser.py::test_parser[argv10-expected10-None]",
"src/tests/test_targets.py::test_get_targets_plain[projects8-dependencies8-target_B-False-True-expected8-None]",
"src/tests/test_targets.py::test_get_targets_plain[projects9-dependencies9-target_B-True-True-expected9-None]"
]
| []
| [
"src/tests/test_parser.py::test_parser[argv0-expected0-None]",
"src/tests/test_parser.py::test_parser[argv1-expected1-SystemExit]",
"src/tests/test_parser.py::test_parser[argv2-expected2-None]",
"src/tests/test_parser.py::test_parser[argv3-expected3-None]",
"src/tests/test_parser.py::test_parser[argv4-expected4-SystemExit]",
"src/tests/test_parser.py::test_parser[argv6-expected6-SystemExit]",
"src/tests/test_parser.py::test_parser[argv9-expected9-SystemExit]",
"src/tests/test_targets.py::test_get_targets_plain[projects0-dependencies0-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects1-dependencies1-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects2-dependencies2--False-False-expected2-None]",
"src/tests/test_targets.py::test_get_targets_plain[projects3-dependencies3-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects4-dependencies4-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects5-dependencies5-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects6-dependencies6-target_A",
"src/tests/test_targets.py::test_get_targets_plain[projects7-dependencies7-target_B",
"src/tests/test_targets.py::test_get_targets_plain[projects10-dependencies10-target_A"
]
| []
| null | 2,464 | [
"src/demorepo/parser/__init__.py",
"src/demorepo/commands/targets/__init__.py",
"src/demorepo/commands/command_run.py",
"src/demorepo/config/__init__.py"
]
| [
"src/demorepo/parser/__init__.py",
"src/demorepo/commands/targets/__init__.py",
"src/demorepo/commands/command_run.py",
"src/demorepo/config/__init__.py"
]
|
|
python-useful-helpers__logwrap-12 | efe3e5d5b561c4ccffa0393c6363264606fba540 | 2018-04-30 08:44:09 | efe3e5d5b561c4ccffa0393c6363264606fba540 | coveralls: ## Pull Request Test Coverage Report for [Build 333](https://coveralls.io/builds/16756808)
* **29** of **29** **(100.0%)** changed or added relevant lines in **2** files are covered.
* No unchanged relevant lines lost coverage.
* Overall coverage remained the same at **100.0%**
---
| Totals | [](https://coveralls.io/builds/16756808) |
| :-- | --: |
| Change from base [Build 330](https://coveralls.io/builds/16691721): | 0.0% |
| Covered Lines: | 406 |
| Relevant Lines: | 406 |
---
##### 💛 - [Coveralls](https://coveralls.io)
| diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index eaa652d..39f201d 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -4,6 +4,8 @@ Version 3.3.0
-------------
* Type hints and stubs
* PEP0518
+* Deprecation of *args for logwrap
+* Fix empty *args and **kwargs
Version 3.2.0
-------------
diff --git a/LICENSE b/LICENSE
index 8dada3e..d3591af 100644
--- a/LICENSE
+++ b/LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
- Copyright {yyyy} {name of copyright owner}
+ Copyright 2016-2018 Alexey Stepanov aka penguinolog
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/README.rst b/README.rst
index 4b5b277..f92271a 100644
--- a/README.rst
+++ b/README.rst
@@ -65,6 +65,9 @@ logwrap
-------
The main decorator. Could be used as not argumented (`@logwrap.logwrap`) and argumented (`@logwrap.logwrap()`).
Not argumented usage simple calls with default values for all positions.
+
.. note:: Arguments should be set via keywords only.
+
Argumented usage with arguments from signature:
.. code-block:: python
diff --git a/doc/source/logwrap.rst b/doc/source/logwrap.rst
index e12484f..4e7cf7e 100644
--- a/doc/source/logwrap.rst
+++ b/doc/source/logwrap.rst
@@ -12,10 +12,12 @@ API: Decorators: `LogWrap` class and `logwrap` function.
.. versionadded:: 2.2.0
- .. py:method:: __init__(log=logging.getLogger('logwrap'), log_level=logging.DEBUG, exc_level=logging.ERROR, max_indent=20, spec=None, blacklisted_names=None, blacklisted_exceptions=None, log_call_args=True, log_call_args_on_exc=True, log_result_obj=True, )
+ .. py:method:: __init__(func=None, *, log=logging.getLogger('logwrap'), log_level=logging.DEBUG, exc_level=logging.ERROR, max_indent=20, spec=None, blacklisted_names=None, blacklisted_exceptions=None, log_call_args=True, log_call_args_on_exc=True, log_result_obj=True, )
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
:param log: logger object for decorator, by default used 'logwrap'
- :type log: typing.Union[logging.Logger, typing.Callable]
+ :type log: logging.Logger
:param log_level: log level for successful calls
:type log_level: int
:param exc_level: log level for exception cases
@@ -44,6 +46,9 @@ API: Decorators: `LogWrap` class and `logwrap` function.
:param log_result_obj: log result of function call.
:type log_result_obj: bool
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
+ .. versionchanged:: 3.3.0 Deprecation of *args
+
.. note:: Attributes/properties names the same as argument names and changes
the same fields.
diff --git a/logwrap/_log_wrap2.py b/logwrap/_log_wrap2.py
index e19078e..9464dde 100644
--- a/logwrap/_log_wrap2.py
+++ b/logwrap/_log_wrap2.py
@@ -25,6 +25,7 @@ from __future__ import unicode_literals
import logging
import typing # noqa # pylint: disable=unused-import
+import warnings
import six
# noinspection PyUnresolvedReferences
@@ -35,11 +36,106 @@ from . import _log_wrap_shared
__all__ = ('logwrap', 'LogWrap')
+def _apply_old_spec(*args, **kwargs): # type: (...) -> typing.Dict[str, typing.Any]
+ # pylint: disable=unused-argument
+ def old_spec(
+ log=_log_wrap_shared.logger, # type: typing.Union[logging.Logger, typing.Callable]
+ log_level=logging.DEBUG, # type: int
+ exc_level=logging.ERROR, # type: int
+ max_indent=20, # type: int
+ spec=None, # type: typing.Optional[typing.Callable]
+ blacklisted_names=None, # type: typing.Optional[typing.List[str]]
+ blacklisted_exceptions=None, # type: typing.Optional[typing.List[Exception]]
+ log_call_args=True, # type: bool
+ log_call_args_on_exc=True, # type: bool
+ log_result_obj=True, # type: bool
+ ): # type: (...) -> None
+ """Old spec."""
+ pass # pragma: no cover
+
+ # pylint: enable=unused-argument
+
+ sig = funcsigs.signature(old_spec) # type: funcsigs.Signature
+ parameters = tuple(sig.parameters.values()) # type: typing.Tuple[funcsigs.Parameter, ...]
+
+ real_parameters = {
+ parameter.name: parameter.default for parameter in parameters
+ } # type: typing.Dict[str, typing.Any]
+
+ bound = sig.bind(*args, **kwargs).arguments
+
+ final_kwargs = {
+ key: bound.get(key, real_parameters[key])
+ for key in real_parameters
+ } # type: typing.Dict[str, typing.Any]
+
+ return final_kwargs
+
+
class LogWrap(_log_wrap_shared.BaseLogWrap):
"""LogWrap."""
__slots__ = ()
+ def __init__( # pylint: disable=keyword-arg-before-vararg
+ self,
+ func=None, # type: typing.Optional[typing.Callable]
+ *args,
+ **kwargs
+ ): # type: (...) -> None
+ """Log function calls and return values.
+
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
+ :param log: logger object for decorator, by default used 'logwrap'
+ :type log: logging.Logger
+ :param log_level: log level for successful calls
+ :type log_level: int
+ :param exc_level: log level for exception cases
+ :type exc_level: int
+ :param max_indent: maximum indent before classic `repr()` call.
+ :type max_indent: int
+ :param spec: callable object used as spec for arguments bind.
+ This is designed for the special cases only,
+ when impossible to change signature of target object,
+ but processed/redirected signature is accessible.
+ Note: this object should provide a fully compatible
+ signature with the decorated function, or argument
+ binding will fail!
+ :type spec: typing.Optional[typing.Callable]
+ :param blacklisted_names: Blacklisted argument names.
+ Arguments with this names will be skipped
+ in log.
+ :type blacklisted_names: typing.Optional[typing.Iterable[str]]
+ :param blacklisted_exceptions: list of exceptions,
+ which should be re-raised without
+ producing a log record.
+ :type blacklisted_exceptions: typing.Optional[
+ typing.Iterable[Exception]
+ ]
+ :param log_call_args: log call arguments before executing
+ wrapped function.
+ :type log_call_args: bool
+ :param log_call_args_on_exc: log call arguments if exception raised.
+ :type log_call_args_on_exc: bool
+ :param log_result_obj: log result of function call.
+ :type log_result_obj: bool
+
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
+ """
+ if isinstance(func, logging.Logger):
+ args = (func,) + args
+ func = None
+
+ if args:
+ warnings.warn(
+ 'Logwrap should use keyword-only parameters starting from version 3.4.0\n'
+ 'After version 3.4.0 arguments list and order may be changed.',
+ DeprecationWarning
+ )
+
+ super(LogWrap, self).__init__(func=func, **_apply_old_spec(*args, **kwargs))
+
def _get_function_wrapper(
self,
func # type: typing.Callable
@@ -79,22 +175,17 @@ class LogWrap(_log_wrap_shared.BaseLogWrap):
# pylint: disable=unexpected-keyword-arg, no-value-for-parameter
-def logwrap(
- log=_log_wrap_shared.logger, # type: typing.Union[logging.Logger, typing.Callable]
- log_level=logging.DEBUG, # type: int
- exc_level=logging.ERROR, # type: int
- max_indent=20, # type: int
- spec=None, # type: typing.Optional[typing.Callable]
- blacklisted_names=None, # type: typing.Optional[typing.List[str]]
- blacklisted_exceptions=None, # type: typing.Optional[typing.List[Exception]]
- log_call_args=True, # type: bool
- log_call_args_on_exc=True, # type: bool
- log_result_obj=True, # type: bool
+def logwrap( # pylint: disable=keyword-arg-before-vararg
+ func=None, # type: typing.Optional[typing.Callable]
+ *args,
+ **kwargs
): # type: (...) -> typing.Union[LogWrap, typing.Callable]
"""Log function calls and return values.
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
:param log: logger object for decorator, by default used 'logwrap'
- :type log: typing.Union[logging.Logger, typing.Callable]
+ :type log: logging.Logger
:param log_level: log level for successful calls
:type log_level: int
:param exc_level: log level for exception cases
@@ -123,23 +214,22 @@ def logwrap(
:type log_result_obj: bool
:return: built real decorator.
:rtype: _log_wrap_shared.BaseLogWrap
+
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
"""
- if isinstance(log, logging.Logger):
+ if isinstance(func, logging.Logger):
+ args = (func, ) + args
func = None
- else:
- log, func = _log_wrap_shared.logger, log # type: logging.Logger, typing.Callable
+
+ if args:
+ warnings.warn(
+ 'Logwrap should use keyword-only parameters starting from version 3.4.0\n'
+ 'After version 3.4.0 arguments list and order may be changed.',
+ DeprecationWarning
+ )
wrapper = LogWrap(
- log=log,
- log_level=log_level,
- exc_level=exc_level,
- max_indent=max_indent,
- spec=spec,
- blacklisted_names=blacklisted_names,
- blacklisted_exceptions=blacklisted_exceptions,
- log_call_args=log_call_args,
- log_call_args_on_exc=log_call_args_on_exc,
- log_result_obj=log_result_obj
+ **_apply_old_spec(*args, **kwargs)
)
if func is not None:
return wrapper(func)
diff --git a/logwrap/_log_wrap2.pyi b/logwrap/_log_wrap2.pyi
index 0e286c0..2513215 100644
--- a/logwrap/_log_wrap2.pyi
+++ b/logwrap/_log_wrap2.pyi
@@ -3,10 +3,26 @@ import typing
from . import _log_wrap_shared
class LogWrap(_log_wrap_shared.BaseLogWrap):
+ def __init__(
+ self,
+ func: typing.Optional[typing.Callable]=None,
+ log: logging.Logger=...,
+ log_level: int=...,
+ exc_level: int=...,
+ max_indent: int=...,
+ spec: typing.Optional[typing.Callable]=...,
+ blacklisted_names: typing.Optional[typing.List[str]]=...,
+ blacklisted_exceptions: typing.Optional[typing.List[Exception]]=...,
+ log_call_args: bool=...,
+ log_call_args_on_exc: bool=...,
+ log_result_obj: bool=...
+ ) -> None: ...
+
def _get_function_wrapper(self, func: typing.Callable) -> typing.Callable: ...
def logwrap(
- log: typing.Union[logging.Logger, typing.Callable]=...,
+ func: typing.Optional[typing.Callable]=None,
+ log: logging.Logger=...,
log_level: int=...,
exc_level: int=...,
max_indent: int=...,
diff --git a/logwrap/_log_wrap3.py b/logwrap/_log_wrap3.py
index 8555180..c82f692 100644
--- a/logwrap/_log_wrap3.py
+++ b/logwrap/_log_wrap3.py
@@ -29,6 +29,7 @@ import functools
import inspect
import logging
import typing
+import warnings
from . import _log_wrap_shared
@@ -36,11 +37,106 @@ from . import _log_wrap_shared
__all__ = ('logwrap', 'LogWrap')
+def _apply_old_spec(*args, **kwargs) -> typing.Dict[str, typing.Any]:
+ # pylint: disable=unused-argument
+ def old_spec(
+ log: typing.Union[logging.Logger, typing.Callable] = _log_wrap_shared.logger,
+ log_level: int = logging.DEBUG,
+ exc_level: int = logging.ERROR,
+ max_indent: int = 20,
+ spec: typing.Optional[typing.Callable] = None,
+ blacklisted_names: typing.Optional[typing.List[str]] = None,
+ blacklisted_exceptions: typing.Optional[typing.List[Exception]] = None,
+ log_call_args: bool = True,
+ log_call_args_on_exc: bool = True,
+ log_result_obj: bool = True,
+ ) -> None:
+ """Old spec."""
+ pass # pragma: no cover
+
+ # pylint: enable=unused-argument
+
+ sig = inspect.signature(old_spec) # type: inspect.Signature
+ parameters = tuple(sig.parameters.values()) # type: typing.Tuple[inspect.Parameter, ...]
+
+ real_parameters = {
+ parameter.name: parameter.default for parameter in parameters
+ } # type: typing.Dict[str, typing.Any]
+
+ bound = sig.bind(*args, **kwargs).arguments
+
+ final_kwargs = {
+ key: bound.get(key, real_parameters[key])
+ for key in real_parameters
+ } # type: typing.Dict[str, typing.Any]
+
+ return final_kwargs
+
+
class LogWrap(_log_wrap_shared.BaseLogWrap):
"""Python 3.4+ version of LogWrap."""
__slots__ = ()
+ def __init__( # pylint: disable=keyword-arg-before-vararg
+ self,
+ func: typing.Optional[typing.Callable] = None,
+ *args,
+ **kwargs
+ ) -> None:
+ """Log function calls and return values.
+
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
+ :param log: logger object for decorator, by default used 'logwrap'
+ :type log: logging.Logger
+ :param log_level: log level for successful calls
+ :type log_level: int
+ :param exc_level: log level for exception cases
+ :type exc_level: int
+ :param max_indent: maximum indent before classic `repr()` call.
+ :type max_indent: int
+ :param spec: callable object used as spec for arguments bind.
+ This is designed for the special cases only,
+ when impossible to change signature of target object,
+ but processed/redirected signature is accessible.
+ Note: this object should provide a fully compatible
+ signature with the decorated function, or argument
+ binding will fail!
+ :type spec: typing.Optional[typing.Callable]
+ :param blacklisted_names: Blacklisted argument names.
+ Arguments with this names will be skipped
+ in log.
+ :type blacklisted_names: typing.Optional[typing.Iterable[str]]
+ :param blacklisted_exceptions: list of exceptions,
+ which should be re-raised without
+ producing a log record.
+ :type blacklisted_exceptions: typing.Optional[
+ typing.Iterable[Exception]
+ ]
+ :param log_call_args: log call arguments before executing
+ wrapped function.
+ :type log_call_args: bool
+ :param log_call_args_on_exc: log call arguments if exception raised.
+ :type log_call_args_on_exc: bool
+ :param log_result_obj: log result of function call.
+ :type log_result_obj: bool
+
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
+ .. versionchanged:: 3.3.0 Deprecation of *args
+ """
+ if isinstance(func, logging.Logger):
+ args = (func,) + args
+ func = None
+
+ if args:
+ warnings.warn(
+ 'Logwrap will accept keyword-only parameters starting from version 3.4.0',
+ DeprecationWarning
+ )
+
+ super(LogWrap, self).__init__(func=func, **_apply_old_spec(*args, **kwargs))
+
def _get_function_wrapper(
self,
func: typing.Callable
@@ -108,22 +204,17 @@ class LogWrap(_log_wrap_shared.BaseLogWrap):
# pylint: disable=unexpected-keyword-arg, no-value-for-parameter
-def logwrap(
- log: typing.Union[logging.Logger, typing.Callable] = _log_wrap_shared.logger,
- log_level: int = logging.DEBUG,
- exc_level: int = logging.ERROR,
- max_indent: int = 20,
- spec: typing.Optional[typing.Callable] = None,
- blacklisted_names: typing.Optional[typing.List[str]] = None,
- blacklisted_exceptions: typing.Optional[typing.List[Exception]] = None,
- log_call_args: bool = True,
- log_call_args_on_exc: bool = True,
- log_result_obj: bool = True,
+def logwrap( # pylint: disable=keyword-arg-before-vararg
+ func: typing.Optional[typing.Callable] = None,
+ *args,
+ **kwargs
) -> typing.Union[LogWrap, typing.Callable]:
"""Log function calls and return values. Python 3.4+ version.
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
:param log: logger object for decorator, by default used 'logwrap'
- :type log: typing.Union[logging.Logger, typing.Callable]
+ :type log: logging.Logger
:param log_level: log level for successful calls
:type log_level: int
:param exc_level: log level for exception cases
@@ -137,12 +228,9 @@ def logwrap(
Note: this object should provide fully compatible signature
with decorated function, or arguments bind will be failed!
:type spec: typing.Optional[typing.Callable]
- :param blacklisted_names: Blacklisted argument names.
- Arguments with this names will be skipped in log.
+ :param blacklisted_names: Blacklisted argument names. Arguments with this names will be skipped in log.
:type blacklisted_names: typing.Optional[typing.Iterable[str]]
- :param blacklisted_exceptions: list of exception,
- which should be re-raised without
- producing log record.
+ :param blacklisted_exceptions: list of exceptions, which should be re-raised without producing log record.
:type blacklisted_exceptions: typing.Optional[typing.Iterable[Exception]]
:param log_call_args: log call arguments before executing wrapped function.
:type log_call_args: bool
@@ -152,23 +240,22 @@ def logwrap(
:type log_result_obj: bool
:return: built real decorator.
:rtype: _log_wrap_shared.BaseLogWrap
+
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
+ .. versionchanged:: 3.3.0 Deprecation of *args
"""
- if isinstance(log, logging.Logger):
+ if isinstance(func, logging.Logger):
+ args = (func, ) + args
func = None
- else:
- log, func = _log_wrap_shared.logger, log # type: logging.Logger, typing.Callable
+
+ if args:
+ warnings.warn(
+ 'Logwrap will accept keyword-only parameters starting from version 3.4.0',
+ DeprecationWarning
+ )
wrapper = LogWrap(
- log=log,
- log_level=log_level,
- exc_level=exc_level,
- max_indent=max_indent,
- spec=spec,
- blacklisted_names=blacklisted_names,
- blacklisted_exceptions=blacklisted_exceptions,
- log_call_args=log_call_args,
- log_call_args_on_exc=log_call_args_on_exc,
- log_result_obj=log_result_obj
+ **_apply_old_spec(*args, **kwargs)
)
if func is not None:
return wrapper(func)
diff --git a/logwrap/_log_wrap3.pyi b/logwrap/_log_wrap3.pyi
index 0e286c0..6d16152 100644
--- a/logwrap/_log_wrap3.pyi
+++ b/logwrap/_log_wrap3.pyi
@@ -3,10 +3,28 @@ import typing
from . import _log_wrap_shared
class LogWrap(_log_wrap_shared.BaseLogWrap):
+ def __init__(
+ self,
+ func: typing.Optional[typing.Callable]=None,
+ *,
+ log: logging.Logger=...,
+ log_level: int=...,
+ exc_level: int=...,
+ max_indent: int=...,
+ spec: typing.Optional[typing.Callable]=...,
+ blacklisted_names: typing.Optional[typing.List[str]]=...,
+ blacklisted_exceptions: typing.Optional[typing.List[Exception]]=...,
+ log_call_args: bool=...,
+ log_call_args_on_exc: bool=...,
+ log_result_obj: bool=...
+ ) -> None: ...
+
def _get_function_wrapper(self, func: typing.Callable) -> typing.Callable: ...
def logwrap(
- log: typing.Union[logging.Logger, typing.Callable]=...,
+ func: typing.Optional[typing.Callable]=None,
+ *,
+ log: logging.Logger=...,
log_level: int=...,
exc_level: int=...,
max_indent: int=...,
diff --git a/logwrap/_log_wrap_shared.py b/logwrap/_log_wrap_shared.py
index 467a4f6..d886294 100644
--- a/logwrap/_log_wrap_shared.py
+++ b/logwrap/_log_wrap_shared.py
@@ -88,7 +88,8 @@ class BaseLogWrap(_class_decorator.BaseDecorator):
def __init__(
self,
- log=logger, # type: typing.Union[logging.Logger, typing.Callable]
+ func=None, # type: typing.Optional[typing.Callable]
+ log=logger, # type: logging.Logger
log_level=logging.DEBUG, # type: int
exc_level=logging.ERROR, # type: int
max_indent=20, # type: int
@@ -101,8 +102,10 @@ class BaseLogWrap(_class_decorator.BaseDecorator):
): # type: (...) -> None
"""Log function calls and return values.
+ :param func: function to wrap
+ :type func: typing.Optional[typing.Callable]
:param log: logger object for decorator, by default used 'logwrap'
- :type log: typing.Union[logging.Logger, typing.Callable]
+ :type log: logging.Logger
:param log_level: log level for successful calls
:type log_level: int
:param exc_level: log level for exception cases
@@ -134,7 +137,11 @@ class BaseLogWrap(_class_decorator.BaseDecorator):
:type log_call_args_on_exc: bool
:param log_result_obj: log result of function call.
:type log_result_obj: bool
+
+ .. versionchanged:: 3.3.0 Extract func from log and do not use Union.
"""
+ super(BaseLogWrap, self).__init__(func=func)
+
# Typing fix:
if blacklisted_names is None:
self.__blacklisted_names = [] # type: typing.List[str]
@@ -145,11 +152,7 @@ class BaseLogWrap(_class_decorator.BaseDecorator):
else:
self.__blacklisted_exceptions = list(blacklisted_exceptions)
- if not isinstance(log, logging.Logger):
- func, self.__logger = log, logger # type: typing.Callable, logging.Logger
- else:
- func, self.__logger = None, log # type: None, logging.Logger
- super(BaseLogWrap, self).__init__(func=func)
+ self.__logger = log
self.__log_level = log_level
self.__exc_level = exc_level
@@ -345,10 +348,18 @@ class BaseLogWrap(_class_decorator.BaseDecorator):
if last_kind != param.kind:
param_str += comment(kind=param.kind)
last_kind = param.kind
+
+ src = bound.get(param.name, param.default)
+ if param.empty == src:
+ if param.VAR_POSITIONAL == param.kind:
+ src = ()
+ elif param.VAR_KEYWORD == param.kind:
+ src = {}
+
param_str += fmt(
key=param.name,
val=core.pretty_repr(
- src=bound.get(param.name, param.default),
+ src=src,
indent=indent + 4,
no_indent_start=True,
max_indent=self.max_indent,
diff --git a/logwrap/_log_wrap_shared.pyi b/logwrap/_log_wrap_shared.pyi
index 99d709f..2231e54 100644
--- a/logwrap/_log_wrap_shared.pyi
+++ b/logwrap/_log_wrap_shared.pyi
@@ -13,7 +13,8 @@ def _check_type(expected: typing.Type) -> typing.Callable: ...
class BaseLogWrap(_class_decorator.BaseDecorator):
def __init__(
self,
- log: typing.Union[logging.Logger, typing.Callable]=...,
+ func: typing.Optional[typing.Callable]=None,
+ log: logging.Logger=...,
log_level: int=...,
exc_level: int=...,
max_indent: int=...,
diff --git a/requirements.txt b/requirements.txt
index 6875d5e..b273544 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,2 +1,2 @@
-six
-typing >= 3.6
+six >=1.9
+typing >= 3.6 ; python_version < "3.7"
diff --git a/tox.ini b/tox.ini
index 86f807b..ec2fb9c 100644
--- a/tox.ini
+++ b/tox.ini
@@ -26,7 +26,7 @@ deps =
commands =
py.test -vv --junitxml=unit_result.xml --html=report.html --cov-config .coveragerc --cov-report html --cov=logwrap {posargs:test}
- coverage report --fail-under 89
+ coverage report --fail-under 85
[testenv:py34-nocov]
usedevelop = False
@@ -111,3 +111,11 @@ commands = python setup.py build_sphinx
[testenv:bandit]
deps = bandit
commands = bandit -r logwrap
+
+[testenv:dep-graph]
+envdir = {toxworkdir}/dep-graph
+deps =
+ pipenv
+commands =
+ pipenv install -r {toxinidir}/build_requirements.txt --skip-lock
+ pipenv graph
| @logwrap incorrectly logs empty *args and **kwargs
```python
@logwrap.logwrap
def func(*args, **kwargs): pass
```
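The stray `<class 'inspect._empty'>` can be reproduced with `inspect.signature` alone: unbound `*args`/`**kwargs` never appear in `BoundArguments.arguments`, so a plain fallback to `Parameter.default` yields `inspect.Parameter.empty`. A minimal sketch of the root cause:

```python
import inspect

def func(*args, **kwargs):
    pass

sig = inspect.signature(func)
bound = sig.bind()  # call with no arguments at all

for param in sig.parameters.values():
    # unbound *args/**kwargs never show up in bound.arguments, and their
    # Parameter.default is inspect.Parameter.empty, which is exactly the
    # <class 'inspect._empty'> that leaked into the log
    value = bound.arguments.get(param.name, param.default)
    print(param.name, value)
```

The fix in `_log_wrap_shared.py` special-cases `VAR_POSITIONAL` and `VAR_KEYWORD` parameters, substituting `()` and `{}` when the bound value is `Parameter.empty`.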
On calling `func()`, `'args'=<class 'inspect._empty'>` is logged for `*args` and `**kwargs` instead of an empty tuple and an empty dict. | python-useful-helpers/logwrap | diff --git a/test/test_log_wrap.py b/test/test_log_wrap.py
index 636dd7f..724c1a8 100644
--- a/test/test_log_wrap.py
+++ b/test/test_log_wrap.py
@@ -39,7 +39,7 @@ else:
# noinspection PyUnusedLocal,PyMissingOrEmptyDocstring
@mock.patch('logwrap._log_wrap_shared.logger', autospec=True)
class TestLogWrap(unittest.TestCase):
- def test_no_args(self, logger):
+ def test_001_no_args(self, logger):
@logwrap.logwrap
def func():
return 'No args'
@@ -61,7 +61,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_args_simple(self, logger):
+ def test_002_args_simple(self, logger):
arg = 'test arg'
@logwrap.logwrap
@@ -97,7 +97,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_args_defaults(self, logger):
+ def test_003_args_defaults(self, logger):
arg = 'test arg'
@logwrap.logwrap
@@ -133,7 +133,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_args_complex(self, logger):
+ def test_004_args_complex(self, logger):
string = 'string'
dictionary = {'key': 'dictionary'}
@@ -173,7 +173,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_args_kwargs(self, logger):
+ def test_005_args_kwargs(self, logger):
targs = ['string1', 'string2']
tkwargs = {'key': 'tkwargs'}
@@ -213,7 +213,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_renamed_args_kwargs(self, logger):
+ def test_006_renamed_args_kwargs(self, logger):
arg = 'arg'
targs = ['string1', 'string2']
tkwargs = {'key': 'tkwargs'}
@@ -259,7 +259,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_negative(self, logger):
+ def test_007_negative(self, logger):
@logwrap.logwrap
def func():
raise ValueError('as expected')
@@ -282,7 +282,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_negative_substitutions(self, logger):
+ def test_008_negative_substitutions(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -314,7 +314,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_spec(self, logger):
+ def test_009_spec(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -351,7 +351,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_indent(self, logger):
+ def test_010_indent(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -382,7 +382,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_method(self, logger):
+ def test_011_method(self, logger):
class Tst(object):
@logwrap.logwrap
def func(tst_self):
@@ -413,7 +413,7 @@ class TestLogWrap(unittest.TestCase):
]
)
- def test_class_decorator(self, logger):
+ def test_012_class_decorator(self, logger):
@logwrap.LogWrap
def func():
return 'No args'
@@ -439,7 +439,7 @@ class TestLogWrap(unittest.TestCase):
six.PY3,
'Strict python 3 syntax'
)
- def test_py3_args(self, logger):
+ def test_013_py3_args(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -488,7 +488,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_wrapped(self, logger):
+ def test_014_wrapped(self, logger):
# noinspection PyShadowingNames
def simpledeco(func):
@six.wraps(func)
@@ -543,7 +543,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_args_blacklist(self, logger):
+ def test_015_args_blacklist(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -584,7 +584,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_exceptions_blacklist(self, logger):
+ def test_016_exceptions_blacklist(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -607,7 +607,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_disable_args(self, logger):
+ def test_017_disable_args(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -637,7 +637,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_disable_args_exc(self, logger):
+ def test_018_disable_args_exc(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -686,7 +686,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_disable_all_args(self, logger):
+ def test_019_disable_all_args(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -722,7 +722,7 @@ def tst(arg, darg=1, *args, kwarg, dkwarg=4, **kwargs):
]
)
- def test_disable_result(self, logger):
+ def test_020_disable_result(self, logger):
new_logger = mock.Mock(spec=logging.Logger, name='logger')
log = mock.Mock(name='log')
new_logger.attach_mock(log, 'log')
@@ -801,3 +801,81 @@ class TestObject(unittest.TestCase):
),
repr(log_call),
)
+
+
+# noinspection PyUnusedLocal,PyMissingOrEmptyDocstring
[email protected]('logwrap._log_wrap_shared.logger', autospec=True)
+class TestDeprecation(unittest.TestCase):
+ def test_001_args_func(self, logger):
+ new_logger = mock.Mock(spec=logging.Logger, name='logger')
+ log = mock.Mock(name='log')
+ new_logger.attach_mock(log, 'log')
+
+ arg = 'test arg'
+
+ with mock.patch('warnings.warn') as warn:
+ @logwrap.logwrap(
+ new_logger,
+ )
+ def func(*args, **kwargs):
+ return args[0] if args else kwargs.get('arg', arg)
+
+ self.assertTrue(bool(warn.mock_calls))
+
+ result = func()
+ self.assertEqual(result, arg)
+ self.assertEqual(
+ log.mock_calls,
+ [
+ mock.call(
+ level=logging.DEBUG,
+ msg="Calling: \n"
+ "'func'(\n"
+ " # VAR_POSITIONAL:\n"
+ " 'args'=(),\n"
+ " # VAR_KEYWORD:\n"
+ " 'kwargs'={},\n"
+ ")"),
+ mock.call(
+ level=logging.DEBUG,
+ msg="Done: 'func' with result:\n"
+ "u'''test arg'''"),
+ ]
+ )
+
+ def test_002_args_cls(self, logger):
+ new_logger = mock.Mock(spec=logging.Logger, name='logger')
+ log = mock.Mock(name='log')
+ new_logger.attach_mock(log, 'log')
+
+ arg = 'test arg'
+
+ with mock.patch('warnings.warn') as warn:
+ @logwrap.LogWrap(
+ new_logger,
+ )
+ def func(*args, **kwargs):
+ return args[0] if args else kwargs.get('arg', arg)
+
+ self.assertTrue(bool(warn.mock_calls))
+
+ result = func()
+ self.assertEqual(result, arg)
+ self.assertEqual(
+ log.mock_calls,
+ [
+ mock.call(
+ level=logging.DEBUG,
+ msg="Calling: \n"
+ "'func'(\n"
+ " # VAR_POSITIONAL:\n"
+ " 'args'=(),\n"
+ " # VAR_KEYWORD:\n"
+ " 'kwargs'={},\n"
+ ")"),
+ mock.call(
+ level=logging.DEBUG,
+ msg="Done: 'func' with result:\n"
+ "u'''test arg'''"),
+ ]
+ )
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 12
} | 3.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
coverage==6.2
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/python-useful-helpers/logwrap.git@efe3e5d5b561c4ccffa0393c6363264606fba540#egg=logwrap
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
six==1.17.0
tomli==1.2.3
typing==3.7.4.3
typing_extensions==4.1.1
zipp==3.6.0
| name: logwrap
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- coverage==6.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- six==1.17.0
- tomli==1.2.3
- typing==3.7.4.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/logwrap
| [
"test/test_log_wrap.py::TestDeprecation::test_001_args_func",
"test/test_log_wrap.py::TestDeprecation::test_002_args_cls"
]
| []
| [
"test/test_log_wrap.py::TestLogWrap::test_001_no_args",
"test/test_log_wrap.py::TestLogWrap::test_002_args_simple",
"test/test_log_wrap.py::TestLogWrap::test_003_args_defaults",
"test/test_log_wrap.py::TestLogWrap::test_004_args_complex",
"test/test_log_wrap.py::TestLogWrap::test_005_args_kwargs",
"test/test_log_wrap.py::TestLogWrap::test_006_renamed_args_kwargs",
"test/test_log_wrap.py::TestLogWrap::test_007_negative",
"test/test_log_wrap.py::TestLogWrap::test_008_negative_substitutions",
"test/test_log_wrap.py::TestLogWrap::test_009_spec",
"test/test_log_wrap.py::TestLogWrap::test_010_indent",
"test/test_log_wrap.py::TestLogWrap::test_011_method",
"test/test_log_wrap.py::TestLogWrap::test_012_class_decorator",
"test/test_log_wrap.py::TestLogWrap::test_013_py3_args",
"test/test_log_wrap.py::TestLogWrap::test_014_wrapped",
"test/test_log_wrap.py::TestLogWrap::test_015_args_blacklist",
"test/test_log_wrap.py::TestLogWrap::test_016_exceptions_blacklist",
"test/test_log_wrap.py::TestLogWrap::test_017_disable_args",
"test/test_log_wrap.py::TestLogWrap::test_018_disable_args_exc",
"test/test_log_wrap.py::TestLogWrap::test_019_disable_all_args",
"test/test_log_wrap.py::TestLogWrap::test_020_disable_result",
"test/test_log_wrap.py::TestObject::test_basic"
]
| []
| Apache License 2.0 | 2,465 | [
"README.rst",
"logwrap/_log_wrap3.py",
"logwrap/_log_wrap_shared.pyi",
"logwrap/_log_wrap2.py",
"logwrap/_log_wrap3.pyi",
"doc/source/logwrap.rst",
"CHANGELOG.rst",
"tox.ini",
"LICENSE",
"logwrap/_log_wrap_shared.py",
"logwrap/_log_wrap2.pyi",
"requirements.txt"
]
| [
"README.rst",
"logwrap/_log_wrap3.py",
"logwrap/_log_wrap_shared.pyi",
"logwrap/_log_wrap2.py",
"logwrap/_log_wrap3.pyi",
"doc/source/logwrap.rst",
"CHANGELOG.rst",
"tox.ini",
"LICENSE",
"logwrap/_log_wrap_shared.py",
"logwrap/_log_wrap2.pyi",
"requirements.txt"
]
|
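The test_patch in the record above asserts deprecation by patching `warnings.warn` and checking it fires at decoration time, before the wrapped function is ever called. A minimal standalone sketch of that mock-based pattern — using a hypothetical `make_wrapper` factory, not the real logwrap API — looks like:

```python
import warnings
from unittest import mock


def make_wrapper(logger=None):
    # Hypothetical factory mirroring the behavior under test: supplying the
    # logger positionally is deprecated, so a warning is emitted once at
    # decoration time rather than on every call.
    if logger is not None:
        warnings.warn(
            "passing 'logger' positionally is deprecated",
            DeprecationWarning,
        )

    def decorator(func):
        return func

    return decorator


# As in the tests above, patch warnings.warn so decoration is observable:
with mock.patch("warnings.warn") as warn:
    @make_wrapper(object())
    def func():
        return "test arg"

# The warning fired during decoration, and the function still works normally.
assert warn.mock_calls
assert func() == "test arg"
```

Patching `warnings.warn` directly (instead of using `warnings.catch_warnings`) is what lets the tests assert on `warn.mock_calls` without depending on the warning-filter state of the interpreter.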
conan-io__conan-2833 | 419beea8c76ebf9271c8612339bdb0e5aa376306 | 2018-04-30 10:51:45 | 419beea8c76ebf9271c8612339bdb0e5aa376306 | diff --git a/conans/client/cmd/create.py b/conans/client/cmd/create.py
new file mode 100644
index 000000000..b9b48f48a
--- /dev/null
+++ b/conans/client/cmd/create.py
@@ -0,0 +1,51 @@
+import os
+from conans.client.cmd.test import PackageTester
+from conans.errors import ConanException
+
+
+def get_test_conanfile_path(tf, conanfile_path):
+ """Searches in the declared test_folder or in the standard locations"""
+
+ if tf is False:
+ # Look up for testing conanfile can be disabled if tf (test folder) is False
+ return None
+
+ test_folders = [tf] if tf else ["test_package", "test"]
+ base_folder = os.path.dirname(conanfile_path)
+ for test_folder_name in test_folders:
+ test_folder = os.path.join(base_folder, test_folder_name)
+ test_conanfile_path = os.path.join(test_folder, "conanfile.py")
+ if os.path.exists(test_conanfile_path):
+ return test_conanfile_path
+ else:
+ if tf:
+ raise ConanException("test folder '%s' not available, or it doesn't have a conanfile.py"
+ % tf)
+
+
+def create(reference, manager, user_io, profile, remote, update, build_modes, manifest_folder,
+ manifest_verify, manifest_interactive, keep_build, test_build_folder, test_folder,
+ conanfile_path):
+
+ test_conanfile_path = get_test_conanfile_path(test_folder, conanfile_path)
+
+ if test_conanfile_path:
+ pt = PackageTester(manager, user_io)
+ pt.install_build_and_test(test_conanfile_path, reference, profile, remote, update,
+ build_modes=build_modes,
+ manifest_folder=manifest_folder,
+ manifest_verify=manifest_verify,
+ manifest_interactive=manifest_interactive,
+ keep_build=keep_build,
+ test_build_folder=test_build_folder)
+ else:
+ manager.install(reference=reference,
+ install_folder=None, # Not output anything
+ manifest_folder=manifest_folder,
+ manifest_verify=manifest_verify,
+ manifest_interactive=manifest_interactive,
+ remote_name=remote,
+ profile=profile,
+ build_modes=build_modes,
+ update=update,
+ keep_build=keep_build)
diff --git a/conans/client/cmd/uploader.py b/conans/client/cmd/uploader.py
index afde653c0..ac7f6297a 100644
--- a/conans/client/cmd/uploader.py
+++ b/conans/client/cmd/uploader.py
@@ -27,10 +27,11 @@ class CmdUpload(object):
self._registry = registry
self._cache_search = DiskSearchManager(self._client_cache)
- def upload(self, reference_or_pattern, package_id=None, all_packages=None,
+ def upload(self, recorder, reference_or_pattern, package_id=None, all_packages=None,
force=False, confirm=False, retry=0, retry_wait=0, skip_upload=False,
integrity_check=False, no_overwrite=None, remote_name=None):
"""If package_id is provided, conan_reference_or_pattern is a ConanFileReference"""
+
if package_id and not _is_a_reference(reference_or_pattern):
raise ConanException("-p parameter only allowed with a valid recipe reference, "
"not with a pattern")
@@ -42,7 +43,8 @@ class CmdUpload(object):
else:
references = self._cache_search.search_recipes(reference_or_pattern)
if not references:
- raise NotFoundException("No packages found matching pattern '%s'" % reference_or_pattern)
+ raise NotFoundException(("No packages found matching pattern '%s'" %
+ reference_or_pattern))
for conan_ref in references:
upload = True
@@ -54,21 +56,21 @@ class CmdUpload(object):
conanfile_path = self._client_cache.conanfile(conan_ref)
conan_file = load_conanfile_class(conanfile_path)
except NotFoundException:
- raise NotFoundException("There is no local conanfile exported as %s"
- % str(conan_ref))
+ raise NotFoundException(("There is no local conanfile exported as %s" %
+ str(conan_ref)))
if all_packages:
packages_ids = self._client_cache.conan_packages(conan_ref)
elif package_id:
packages_ids = [package_id, ]
else:
packages_ids = []
- self._upload(conan_file, conan_ref, force, packages_ids, retry, retry_wait, skip_upload,
- integrity_check, no_overwrite, remote_name=remote_name)
+ self._upload(conan_file, conan_ref, force, packages_ids, retry, retry_wait,
+ skip_upload, integrity_check, no_overwrite, remote_name, recorder)
logger.debug("====> Time manager upload: %f" % (time.time() - t1))
def _upload(self, conan_file, conan_ref, force, packages_ids, retry, retry_wait, skip_upload,
- integrity_check, no_overwrite, remote_name):
+ integrity_check, no_overwrite, remote_name, recorder):
"""Uploads the recipes and binaries identified by conan_ref"""
defined_remote = self._registry.get_ref(conan_ref)
@@ -86,6 +88,8 @@ class CmdUpload(object):
% (str(conan_ref), upload_remote.name))
self._upload_recipe(conan_ref, retry, retry_wait, skip_upload, no_overwrite, upload_remote)
+ recorder.add_recipe(str(conan_ref), upload_remote.name, upload_remote.url)
+
if packages_ids:
# Can't use build_policy_always here because it's not loaded (only load_class)
if conan_file.build_policy == "always":
@@ -93,9 +97,12 @@ class CmdUpload(object):
"no packages can be uploaded")
total = len(packages_ids)
for index, package_id in enumerate(packages_ids):
- self._upload_package(PackageReference(conan_ref, package_id), index + 1, total,
- retry, retry_wait, skip_upload, integrity_check, no_overwrite,
- upload_remote)
+ ret_upload_package = self._upload_package(PackageReference(conan_ref, package_id),
+ index + 1, total, retry, retry_wait,
+ skip_upload, integrity_check,
+ no_overwrite, upload_remote)
+ if ret_upload_package:
+ recorder.add_package(str(conan_ref), package_id)
if not defined_remote and not skip_upload:
self._registry.set_ref(conan_ref, upload_remote)
@@ -126,8 +133,8 @@ class CmdUpload(object):
result = self._remote_manager.upload_package(package_ref, remote, retry, retry_wait,
skip_upload, integrity_check, no_overwrite)
- return result
logger.debug("====> Time uploader upload_package: %f" % (time.time() - t1))
+ return result
def _check_recipe_date(self, conan_ref, remote):
try:
diff --git a/conans/client/command.py b/conans/client/command.py
index b1df17448..88e47c05b 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -248,7 +248,7 @@ class Command(object):
raise
finally:
if args.json and info:
- self._outputer.json_install(info, args.json, cwd)
+ self._outputer.json_output(info, args.json, cwd)
def download(self, *args):
"""Downloads recipe and binaries to the local cache, without using settings. It works
@@ -338,7 +338,7 @@ class Command(object):
raise
finally:
if args.json and info:
- self._outputer.json_install(info, args.json, cwd)
+ self._outputer.json_output(info, args.json, cwd)
def config(self, *args):
"""Manages Conan configuration. Edits the conan.conf or installs config files.
@@ -877,14 +877,26 @@ class Command(object):
parser.add_argument("-no", "--no-overwrite", nargs="?", type=str, choices=["all", "recipe"],
action=OnceArgument, const="all",
help="Uploads package only if recipe is the same as the remote one")
+ parser.add_argument("-j", "--json", default=None, action=OnceArgument,
+ help='json file path where the install information will be written to')
args = parser.parse_args(*args)
- return self._conan.upload(pattern=args.pattern_or_reference, package=args.package,
- remote=args.remote, all_packages=args.all, force=args.force,
- confirm=args.confirm, retry=args.retry,
- retry_wait=args.retry_wait, skip_upload=args.skip_upload,
- integrity_check=args.check, no_overwrite=args.no_overwrite)
+ cwd = os.getcwd()
+ info = None
+
+ try:
+ info = self._conan.upload(pattern=args.pattern_or_reference, package=args.package,
+ remote=args.remote, all_packages=args.all, force=args.force,
+ confirm=args.confirm, retry=args.retry,
+ retry_wait=args.retry_wait, skip_upload=args.skip_upload,
+ integrity_check=args.check, no_overwrite=args.no_overwrite)
+ except ConanException as exc:
+ info = exc.info
+ raise
+ finally:
+ if args.json and info:
+ self._outputer.json_output(info, args.json, cwd)
def remote(self, *args):
"""Manages the remote list and the package recipes associated to a remote.
@@ -901,6 +913,8 @@ class Command(object):
help='Verify SSL certificated. Default True')
parser_add.add_argument("-i", "--insert", nargs="?", const=0, type=int, action=OnceArgument,
help="insert remote at specific index")
+ parser_add.add_argument("-f", "--force", default=False, action='store_true',
+ help="Force addition, will update if existing")
parser_rm = subparsers.add_parser('remove', help='Remove a remote')
parser_rm.add_argument('remote', help='Name of the remote')
parser_upd = subparsers.add_parser('update', help='Update the remote url')
@@ -942,7 +956,7 @@ class Command(object):
remotes = self._conan.remote_list()
self._outputer.remote_list(remotes)
elif args.subcommand == "add":
- return self._conan.remote_add(remote, url, verify_ssl, args.insert)
+ return self._conan.remote_add(remote, url, verify_ssl, args.insert, args.force)
elif args.subcommand == "remove":
return self._conan.remote_remove(remote)
elif args.subcommand == "rename":
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index bed81708f..90d8214a6 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -5,7 +5,8 @@ import requests
import conans
from conans import __version__ as client_version, tools
-from conans.client.action_recorder import ActionRecorder
+from conans.client.cmd.create import create
+from conans.client.recorder.action_recorder import ActionRecorder
from conans.client.client_cache import ClientCache
from conans.client.conf import MIN_SERVER_COMPATIBLE_VERSION, ConanClientConfigParser
from conans.client.manager import ConanManager, existing_info_files
@@ -13,6 +14,7 @@ from conans.client.migrations import ClientMigrator
from conans.client.output import ConanOutput, ScopedOutput
from conans.client.profile_loader import read_profile, profile_from_args, \
read_conaninfo_profile
+from conans.client.recorder.upload_recoder import UploadRecoder
from conans.client.remote_manager import RemoteManager
from conans.client.remote_registry import RemoteRegistry
from conans.client.rest.auth_manager import ConanApiAuthManager
@@ -69,21 +71,15 @@ def api_method(f):
try:
curdir = get_cwd()
log_command(f.__name__, kwargs)
- the_self._init_manager()
with tools.environment_append(the_self._client_cache.conan_config.env_vars):
# Patch the globals in tools
- ret = f(*args, **kwargs)
- if ret is None: # FIXME: Probably each method should manage its return
- return the_self._recorder.get_info()
- return ret
+ return f(*args, **kwargs)
except Exception as exc:
msg = exception_message_safe(exc)
try:
log_exception(exc, msg)
except:
pass
- if isinstance(exc, ConanException):
- exc.info = the_self._recorder.get_info()
raise
finally:
os.chdir(curdir)
@@ -220,18 +216,15 @@ class ConanAPIV1(object):
self._search_manager = search_manager
self._settings_preprocessor = _settings_preprocessor
self._registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)
- self._recorder = None
- self._manager = None
if not interactive:
self._user_io.disable_input()
- def _init_manager(self):
+ def _init_manager(self, action_recorder):
"""Every api call gets a new recorder and new manager"""
- self._recorder = ActionRecorder()
- self._manager = ConanManager(self._client_cache, self._user_io, self._runner,
- self._remote_manager, self._search_manager,
- self._settings_preprocessor, self._recorder, self._registry)
+ return ConanManager(self._client_cache, self._user_io, self._runner,
+ self._remote_manager, self._search_manager,
+ self._settings_preprocessor, action_recorder, self._registry)
@api_method
def new(self, name, header=False, pure_c=False, test=False, exports_sources=False, bare=False,
@@ -271,7 +264,9 @@ class ConanAPIV1(object):
profile = profile_from_args(profile_name, settings, options, env, cwd,
self._client_cache)
reference = ConanFileReference.loads(reference)
- pt = PackageTester(self._manager, self._user_io)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ pt = PackageTester(manager, self._user_io)
pt.install_build_and_test(conanfile_path, reference, profile, remote,
update, build_modes=build_modes,
test_build_folder=test_build_folder)
@@ -295,76 +290,49 @@ class ConanAPIV1(object):
options = options or []
env = env or []
- cwd = cwd or get_cwd()
- conanfile_path = _get_conanfile_path(conanfile_path, cwd, py=True)
+ try:
+ cwd = cwd or os.getcwd()
+ recorder = ActionRecorder()
+ conanfile_path = _get_conanfile_path(conanfile_path, cwd, py=True)
- if not name or not version:
- conanfile = load_conanfile_class(conanfile_path)
- name, version = conanfile.name, conanfile.version
if not name or not version:
- raise ConanException("conanfile.py doesn't declare package name or version")
-
- reference = ConanFileReference(name, version, user, channel)
- scoped_output = ScopedOutput(str(reference), self._user_io.out)
- # Make sure keep_source is set for keep_build
- if keep_build:
- keep_source = True
- # Forcing an export!
- if not not_export:
- scoped_output.highlight("Exporting package recipe")
- cmd_export(conanfile_path, name, version, user, channel, keep_source,
- self._user_io.out, self._client_cache)
-
- if build_modes is None: # Not specified, force build the tested library
- build_modes = [name]
-
- manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
- manifest_folder, manifest_interactive, manifest_verify = manifests
- profile = profile_from_args(profile_name, settings, options, env,
- cwd, self._client_cache)
-
- def get_test_conanfile_path(tf):
- """Searches in the declared test_folder or in the standard locations"""
-
- if tf is False:
- # Look up for testing conanfile can be disabled if tf (test folder) is False
- return None
-
- test_folders = [tf] if tf else ["test_package", "test"]
- base_folder = os.path.dirname(conanfile_path)
- for test_folder_name in test_folders:
- test_folder = os.path.join(base_folder, test_folder_name)
- test_conanfile_path = os.path.join(test_folder, "conanfile.py")
- if os.path.exists(test_conanfile_path):
- return test_conanfile_path
- else:
- if tf:
- raise ConanException("test folder '%s' not available, "
- "or it doesn't have a conanfile.py" % tf)
-
- test_conanfile_path = get_test_conanfile_path(test_folder)
- self._recorder.add_recipe_being_developed(reference)
-
- if test_conanfile_path:
- pt = PackageTester(self._manager, self._user_io)
- pt.install_build_and_test(test_conanfile_path, reference, profile,
- remote, update, build_modes=build_modes,
- manifest_folder=manifest_folder,
- manifest_verify=manifest_verify,
- manifest_interactive=manifest_interactive,
- keep_build=keep_build,
- test_build_folder=test_build_folder)
- else:
- self._manager.install(reference=reference,
- install_folder=None, # Not output anything
- manifest_folder=manifest_folder,
- manifest_verify=manifest_verify,
- manifest_interactive=manifest_interactive,
- remote_name=remote,
- profile=profile,
- build_modes=build_modes,
- update=update,
- keep_build=keep_build)
+ conanfile = load_conanfile_class(conanfile_path)
+ name, version = conanfile.name, conanfile.version
+ if not name or not version:
+ raise ConanException("conanfile.py doesn't declare package name or version")
+
+ reference = ConanFileReference(name, version, user, channel)
+ scoped_output = ScopedOutput(str(reference), self._user_io.out)
+ # Make sure keep_source is set for keep_build
+ if keep_build:
+ keep_source = True
+ # Forcing an export!
+ if not not_export:
+ scoped_output.highlight("Exporting package recipe")
+ cmd_export(conanfile_path, name, version, user, channel, keep_source,
+ self._user_io.out, self._client_cache)
+
+ if build_modes is None: # Not specified, force build the tested library
+ build_modes = [name]
+
+ manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
+ manifest_folder, manifest_interactive, manifest_verify = manifests
+ profile = profile_from_args(profile_name, settings, options, env,
+ cwd, self._client_cache)
+
+ manager = self._init_manager(recorder)
+ recorder.add_recipe_being_developed(reference)
+
+ create(reference, manager, self._user_io, profile, remote, update, build_modes,
+ manifest_folder, manifest_verify, manifest_interactive, keep_build,
+ test_build_folder, test_folder, conanfile_path)
+
+ return recorder.get_info()
+
+ except ConanException as exc:
+ recorder.error = True
+ exc.info = recorder.get_info()
+ raise
@api_method
def export_pkg(self, conanfile_path, name, channel, source_folder=None, build_folder=None,
@@ -419,9 +387,11 @@ class ConanAPIV1(object):
version = conanfile.version
reference = ConanFileReference(name, version, user, channel)
- self._manager.export_pkg(reference, source_folder=source_folder, build_folder=build_folder,
- package_folder=package_folder, install_folder=install_folder,
- profile=profile, force=force)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.export_pkg(reference, source_folder=source_folder, build_folder=build_folder,
+ package_folder=package_folder, install_folder=install_folder,
+ profile=profile, force=force)
@api_method
def download(self, reference, remote=None, package=None, recipe=False):
@@ -429,7 +399,9 @@ class ConanAPIV1(object):
raise ConanException("recipe parameter cannot be used together with package")
# Install packages without settings (fixed ids or all)
conan_ref = ConanFileReference.loads(reference)
- self._manager.download(conan_ref, package, remote_name=remote, recipe=recipe)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.download(conan_ref, package, remote_name=remote, recipe=recipe)
@api_method
def install_reference(self, reference, settings=None, options=None, env=None,
@@ -437,26 +409,33 @@ class ConanAPIV1(object):
manifests_interactive=None, build=None, profile_name=None,
update=False, generators=None, install_folder=None, cwd=None):
- cwd = cwd or get_cwd()
- install_folder = _make_abs_path(install_folder, cwd)
-
- manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
- manifest_folder, manifest_interactive, manifest_verify = manifests
-
- profile = profile_from_args(profile_name, settings, options, env, cwd,
- self._client_cache)
-
- if not generators: # We don't want the default txt
- generators = False
-
- mkdir(install_folder)
- self._manager.install(reference=reference, install_folder=install_folder, remote_name=remote,
- profile=profile, build_modes=build, update=update,
- manifest_folder=manifest_folder,
- manifest_verify=manifest_verify,
- manifest_interactive=manifest_interactive,
- generators=generators,
- install_reference=True)
+ try:
+ recorder = ActionRecorder()
+ cwd = cwd or os.getcwd()
+ install_folder = _make_abs_path(install_folder, cwd)
+
+ manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
+ manifest_folder, manifest_interactive, manifest_verify = manifests
+
+ profile = profile_from_args(profile_name, settings, options, env, cwd,
+ self._client_cache)
+
+ if not generators: # We don't want the default txt
+ generators = False
+
+ mkdir(install_folder)
+ manager = self._init_manager(recorder)
+ manager.install(reference=reference, install_folder=install_folder,
+ remote_name=remote, profile=profile, build_modes=build,
+ update=update, manifest_folder=manifest_folder,
+ manifest_verify=manifest_verify,
+ manifest_interactive=manifest_interactive,
+ generators=generators, install_reference=True)
+ return recorder.get_info()
+ except ConanException as exc:
+ recorder.error = True
+ exc.info = recorder.get_info()
+ raise
@api_method
def install(self, path="", settings=None, options=None, env=None,
@@ -464,27 +443,34 @@ class ConanAPIV1(object):
manifests_interactive=None, build=None, profile_name=None,
update=False, generators=None, no_imports=False, install_folder=None, cwd=None):
- cwd = cwd or get_cwd()
- install_folder = _make_abs_path(install_folder, cwd)
- conanfile_path = _get_conanfile_path(path, cwd, py=None)
-
- manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
- manifest_folder, manifest_interactive, manifest_verify = manifests
-
- profile = profile_from_args(profile_name, settings, options, env, cwd,
- self._client_cache)
-
- self._manager.install(reference=conanfile_path,
- install_folder=install_folder,
- remote_name=remote,
- profile=profile,
- build_modes=build,
- update=update,
- manifest_folder=manifest_folder,
- manifest_verify=manifest_verify,
- manifest_interactive=manifest_interactive,
- generators=generators,
- no_imports=no_imports)
+ try:
+ recorder = ActionRecorder()
+ cwd = cwd or os.getcwd()
+ install_folder = _make_abs_path(install_folder, cwd)
+ conanfile_path = _get_conanfile_path(path, cwd, py=None)
+
+ manifests = _parse_manifests_arguments(verify, manifests, manifests_interactive, cwd)
+ manifest_folder, manifest_interactive, manifest_verify = manifests
+
+ profile = profile_from_args(profile_name, settings, options, env, cwd,
+ self._client_cache)
+ manager = self._init_manager(recorder)
+ manager.install(reference=conanfile_path,
+ install_folder=install_folder,
+ remote_name=remote,
+ profile=profile,
+ build_modes=build,
+ update=update,
+ manifest_folder=manifest_folder,
+ manifest_verify=manifest_verify,
+ manifest_interactive=manifest_interactive,
+ generators=generators,
+ no_imports=no_imports)
+ return recorder.get_info()
+ except ConanException as exc:
+ recorder.error = True
+ exc.info = recorder.get_info()
+ raise
@api_method
def config_get(self, item):
@@ -531,7 +517,10 @@ class ConanAPIV1(object):
install_folder=None):
reference, profile = self._info_get_profile(reference, install_folder, profile_name, settings,
options, env)
- graph = self._manager.info_build_order(reference, profile, build_order, remote, check_updates)
+
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ graph = manager.info_build_order(reference, profile, build_order, remote, check_updates)
return graph
@api_method
@@ -539,8 +528,11 @@ class ConanAPIV1(object):
profile_name=None, remote=None, check_updates=None, install_folder=None):
reference, profile = self._info_get_profile(reference, install_folder, profile_name, settings,
options, env)
- ret = self._manager.info_nodes_to_build(reference, profile, build_modes, remote,
- check_updates)
+
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ ret = manager.info_nodes_to_build(reference, profile, build_modes, remote,
+ check_updates)
ref_list, project_reference = ret
return ref_list, project_reference
@@ -549,8 +541,11 @@ class ConanAPIV1(object):
profile_name=None, update=False, install_folder=None):
reference, profile = self._info_get_profile(reference, install_folder, profile_name, settings,
options, env)
- ret = self._manager.info_get_graph(reference, remote_name=remote, profile=profile,
- check_updates=update)
+
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ ret = manager.info_get_graph(reference, remote_name=remote, profile=profile,
+ check_updates=update)
deps_graph, graph_updates_info, project_reference = ret
return deps_graph, graph_updates_info, project_reference
@@ -566,9 +561,11 @@ class ConanAPIV1(object):
default_pkg_folder = os.path.join(build_folder, "package")
package_folder = _make_abs_path(package_folder, cwd, default=default_pkg_folder)
- self._manager.build(conanfile_path, source_folder, build_folder, package_folder,
- install_folder, should_configure=should_configure, should_build=should_build,
- should_install=should_install)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.build(conanfile_path, source_folder, build_folder, package_folder,
+ install_folder, should_configure=should_configure, should_build=should_build,
+ should_install=should_install)
@api_method
def package(self, path, build_folder, package_folder, source_folder=None, install_folder=None, cwd=None):
@@ -580,8 +577,10 @@ class ConanAPIV1(object):
default_pkg_folder = os.path.join(build_folder, "package")
package_folder = _make_abs_path(package_folder, cwd, default=default_pkg_folder)
- self._manager.local_package(package_folder, conanfile_path, build_folder, source_folder,
- install_folder)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.local_package(package_folder, conanfile_path, build_folder, source_folder,
+ install_folder)
@api_method
def source(self, path, source_folder=None, info_folder=None, cwd=None):
@@ -594,7 +593,9 @@ class ConanAPIV1(object):
if not os.path.exists(info_folder):
raise ConanException("Specified info-folder doesn't exist")
- self._manager.source(conanfile_path, source_folder, info_folder)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.source(conanfile_path, source_folder, info_folder)
@api_method
def imports(self, path, dest=None, info_folder=None, cwd=None):
@@ -611,7 +612,9 @@ class ConanAPIV1(object):
mkdir(dest)
conanfile_abs_path = _get_conanfile_path(path, cwd, py=None)
- self._manager.imports(conanfile_abs_path, dest, info_folder)
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.imports(conanfile_abs_path, dest, info_folder)
@api_method
def imports_undo(self, manifest_path):
@@ -628,9 +631,12 @@ class ConanAPIV1(object):
@api_method
def remove(self, pattern, query=None, packages=None, builds=None, src=False, force=False,
remote=None, outdated=False):
- self._manager.remove(pattern, package_ids_filter=packages, build_ids=builds,
- src=src, force=force, remote_name=remote, packages_query=query,
- outdated=outdated)
+
+ recorder = ActionRecorder()
+ manager = self._init_manager(recorder)
+ manager.remove(pattern, package_ids_filter=packages, build_ids=builds,
+ src=src, force=force, remote_name=remote, packages_query=query,
+ outdated=outdated)
@api_method
def copy(self, reference, user_channel, force=False, packages=None):
@@ -685,20 +691,32 @@ class ConanAPIV1(object):
""" Uploads a package recipe and the generated binary packages to a specified remote
"""
+ recorder = UploadRecoder()
+
if force and no_overwrite:
- raise ConanException("'no_overwrite' argument cannot be used together with 'force'")
+ exc = ConanException("'no_overwrite' argument cannot be used together with 'force'")
+ recorder.error = True
+ exc.info = recorder.get_info()
+ raise exc
- uploader = CmdUpload(self._client_cache, self._user_io, self._remote_manager, self._registry)
- return uploader.upload(pattern, package, all_packages, force, confirm, retry, retry_wait,
- skip_upload, integrity_check, no_overwrite, remote)
+ uploader = CmdUpload(self._client_cache, self._user_io, self._remote_manager,
+ self._registry)
+ try:
+ uploader.upload(recorder, pattern, package, all_packages, force, confirm, retry,
+ retry_wait, skip_upload, integrity_check, no_overwrite, remote)
+ return recorder.get_info()
+ except ConanException as exc:
+ recorder.error = True
+ exc.info = recorder.get_info()
+ raise
@api_method
def remote_list(self):
return self._registry.remotes
@api_method
- def remote_add(self, remote, url, verify_ssl=True, insert=None):
- return self._registry.add(remote, url, verify_ssl, insert)
+ def remote_add(self, remote, url, verify_ssl=True, insert=None, force=None):
+ return self._registry.add(remote, url, verify_ssl, insert, force)
@api_method
def remote_remove(self, remote):
diff --git a/conans/client/conan_command_output.py b/conans/client/conan_command_output.py
index b82ffbe62..43806ffeb 100644
--- a/conans/client/conan_command_output.py
+++ b/conans/client/conan_command_output.py
@@ -47,7 +47,7 @@ class CommandOutputer(object):
json_output = os.path.join(cwd, json_output)
save(json_output, json_str)
- def json_install(self, info, json_output, cwd):
+ def json_output(self, info, json_output, cwd):
cwd = os.path.abspath(cwd or get_cwd())
if not os.path.isabs(json_output):
json_output = os.path.join(cwd, json_output)
diff --git a/conans/client/installer.py b/conans/client/installer.py
index f60ca7e54..2046fb3fd 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -4,7 +4,7 @@ import shutil
import platform
from conans.client import tools
-from conans.client.action_recorder import INSTALL_ERROR_MISSING_BUILD_FOLDER, INSTALL_ERROR_BUILDING
+from conans.client.recorder.action_recorder import INSTALL_ERROR_MISSING_BUILD_FOLDER, INSTALL_ERROR_BUILDING
from conans.model.conan_file import get_env_context_manager
from conans.model.env_info import EnvInfo
from conans.model.user_info import UserInfo
diff --git a/conans/client/package_installer.py b/conans/client/package_installer.py
index 03de7efda..1c4e298fe 100644
--- a/conans/client/package_installer.py
+++ b/conans/client/package_installer.py
@@ -1,6 +1,6 @@
import os
-from conans.client.action_recorder import INSTALL_ERROR_MISSING
+from conans.client.recorder.action_recorder import INSTALL_ERROR_MISSING
from conans.errors import (ConanException, NotFoundException, NoRemoteAvailable)
from conans.model.ref import PackageReference
from conans.util.files import rmdir, make_read_only
diff --git a/conans/client/proxy.py b/conans/client/proxy.py
index ac74424f1..f55fe3b7e 100644
--- a/conans/client/proxy.py
+++ b/conans/client/proxy.py
@@ -5,7 +5,7 @@ from requests.exceptions import RequestException
from conans.client.loader_parse import load_conanfile_class
from conans.client.output import ScopedOutput
from conans.client.remover import DiskRemover
-from conans.client.action_recorder import INSTALL_ERROR_MISSING, INSTALL_ERROR_NETWORK
+from conans.client.recorder.action_recorder import INSTALL_ERROR_MISSING, INSTALL_ERROR_NETWORK
from conans.errors import (ConanException, NotFoundException, NoRemoteAvailable)
from conans.model.ref import PackageReference
from conans.util.log import logger
diff --git a/conans/client/recorder/__init__.py b/conans/client/recorder/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/conans/client/action_recorder.py b/conans/client/recorder/action_recorder.py
similarity index 100%
rename from conans/client/action_recorder.py
rename to conans/client/recorder/action_recorder.py
diff --git a/conans/client/recorder/upload_recoder.py b/conans/client/recorder/upload_recoder.py
new file mode 100644
index 000000000..f6d712d5f
--- /dev/null
+++ b/conans/client/recorder/upload_recoder.py
@@ -0,0 +1,47 @@
+from collections import namedtuple, OrderedDict
+from datetime import datetime
+
+
+class _UploadRecipe(namedtuple("UploadRecipe", "reference, remote_name, remote_url, time")):
+
+ def __new__(cls, reference, remote_name, remote_url):
+ the_time = datetime.utcnow()
+ return super(cls, _UploadRecipe).__new__(cls, reference, remote_name, remote_url, the_time)
+
+ def to_dict(self):
+ return {"id": self.reference, "remote_name": self.remote_name,
+ "remote_url": self.remote_url, "time": self.time}
+
+
+class _UploadPackage(namedtuple("UploadPackage", "package_id, time")):
+
+ def __new__(cls, package_id):
+ the_time = datetime.utcnow()
+ return super(cls, _UploadPackage).__new__(cls, package_id, the_time)
+
+ def to_dict(self):
+ return {"id": self.package_id, "time": self.time}
+
+
+class UploadRecoder(object):
+
+ def __init__(self):
+ self.error = False
+ self._info = OrderedDict()
+
+ def add_recipe(self, reference, remote_name, remote_url):
+ self._info[reference] = {"recipe": _UploadRecipe(reference, remote_name, remote_url),
+ "packages": []}
+
+ def add_package(self, reference, package_id):
+ self._info[reference]["packages"].append(_UploadPackage(package_id))
+
+ def get_info(self):
+ info = {"error": self.error, "uploaded": []}
+
+ for item in self._info.values():
+ recipe_info = item["recipe"].to_dict()
+ packages_info = [package.to_dict() for package in item["packages"]]
+ info["uploaded"].append({"recipe": recipe_info, "packages": packages_info})
+
+ return info
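For orientation, here is a runnable sketch of the report shape that the recorder above builds — a simplified, self-contained stand-in (the class name `MiniUploadRecorder` is invented for the sketch; it is not an import of the `conans` module):

```python
from collections import OrderedDict

# Minimal stand-in for the recorder above: recipes keyed in insertion order,
# each carrying the list of package ids uploaded for it.
class MiniUploadRecorder:
    def __init__(self):
        self.error = False
        self._info = OrderedDict()

    def add_recipe(self, reference, remote_name, remote_url):
        self._info[reference] = {"recipe": {"id": reference,
                                            "remote_name": remote_name,
                                            "remote_url": remote_url},
                                 "packages": []}

    def add_package(self, reference, package_id):
        self._info[reference]["packages"].append({"id": package_id})

    def get_info(self):
        return {"error": self.error,
                "uploaded": [{"recipe": item["recipe"],
                              "packages": item["packages"]}
                             for item in self._info.values()]}

rec = MiniUploadRecorder()
rec.add_recipe("fake/0.1@user/channel", "my_remote", "https://fake_url.com")
rec.add_package("fake/0.1@user/channel", "fake_package_id")
info = rec.get_info()
```

This mirrors the `{"error": ..., "uploaded": [...]}` structure that the `--json` output of `conan upload` serializes.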
diff --git a/conans/client/remote_registry.py b/conans/client/remote_registry.py
index 20f08f0d5..bf6df2bab 100644
--- a/conans/client/remote_registry.py
+++ b/conans/client/remote_registry.py
@@ -1,8 +1,10 @@
import os
+import fasteners
+
+from collections import OrderedDict, namedtuple
+
from conans.errors import ConanException, NoRemoteAvailable
from conans.util.files import load, save
-from collections import OrderedDict, namedtuple
-import fasteners
from conans.util.config_parser import get_bool_from_text_value
from conans.util.log import logger
@@ -155,12 +157,46 @@ class RemoteRegistry(object):
refs[conan_reference] = remote
self._save(remotes, refs)
- def add(self, remote_name, remote, verify_ssl=True, insert=None):
+ def _upsert(self, remote_name, url, verify_ssl, insert):
+ self._remotes = None # invalidate cached remotes
+ with fasteners.InterProcessLock(self._filename + ".lock", logger=logger):
+ remotes, refs = self._load()
+ # Remove duplicates
+ remotes.pop(remote_name, None)
+ remotes_list = []
+ renamed = None
+ for name, r in remotes.items():
+ if r[0] != url:
+ remotes_list.append((name, r))
+ else:
+ renamed = name
+
+ if insert is not None:
+ try:
+ insert_index = int(insert)
+ except ValueError:
+ raise ConanException("insert argument must be an integer")
+ remotes_list.insert(insert_index, (remote_name, (url, verify_ssl)))
+ remotes = OrderedDict(remotes_list)
+ else:
+ remotes = OrderedDict(remotes_list)
+ remotes[remote_name] = (url, verify_ssl)
+
+ if renamed:
+ for k, v in refs.items():
+ if v == renamed:
+ refs[k] = remote_name
+ self._save(remotes, refs)
+
+ def add(self, remote_name, url, verify_ssl=True, insert=None, force=None):
+ if force:
+ return self._upsert(remote_name, url, verify_ssl, insert)
+
def exists_function(remotes):
if remote_name in remotes:
raise ConanException("Remote '%s' already exists in remotes (use update to modify)"
% remote_name)
- self._add_update(remote_name, remote, verify_ssl, exists_function, insert)
+ self._add_update(remote_name, url, verify_ssl, exists_function, insert)
def remove(self, remote_name):
self._remotes = None # invalidate cached remotes
@@ -172,11 +208,11 @@ class RemoteRegistry(object):
refs = {k: v for k, v in refs.items() if v != remote_name}
self._save(remotes, refs)
- def update(self, remote_name, remote, verify_ssl=True, insert=None):
+ def update(self, remote_name, url, verify_ssl=True, insert=None):
def exists_function(remotes):
if remote_name not in remotes:
raise ConanException("Remote '%s' not found in remotes" % remote_name)
- self._add_update(remote_name, remote, verify_ssl, exists_function, insert)
+ self._add_update(remote_name, url, verify_ssl, exists_function, insert)
def rename(self, remote_name, new_remote_name):
self._remotes = None # invalidate cached remotes
@@ -204,14 +240,14 @@ class RemoteRegistry(object):
refs = {k: v for k, v in refs.items() if v in new_remotes}
self._save(new_remotes, refs)
- def _add_update(self, remote_name, remote, verify_ssl, exists_function, insert=None):
+ def _add_update(self, remote_name, url, verify_ssl, exists_function, insert=None):
self._remotes = None # invalidate cached remotes
with fasteners.InterProcessLock(self._filename + ".lock", logger=logger):
remotes, refs = self._load()
exists_function(remotes)
urls = {r[0]: name for name, r in remotes.items() if name != remote_name}
- if remote in urls:
- raise ConanException("Remote '%s' already exists with same URL" % urls[remote])
+ if url in urls:
+ raise ConanException("Remote '%s' already exists with same URL" % urls[url])
if insert is not None:
try:
insert_index = int(insert)
@@ -219,8 +255,8 @@ class RemoteRegistry(object):
raise ConanException("insert argument must be an integer")
remotes.pop(remote_name, None) # Remove if exists (update)
remotes_list = list(remotes.items())
- remotes_list.insert(insert_index, (remote_name, (remote, verify_ssl)))
+ remotes_list.insert(insert_index, (remote_name, (url, verify_ssl)))
remotes = OrderedDict(remotes_list)
else:
- remotes[remote_name] = (remote, verify_ssl)
+ remotes[remote_name] = (url, verify_ssl)
self._save(remotes, refs)
diff --git a/conans/model/env_info.py b/conans/model/env_info.py
index f2a133b21..7a5346a75 100644
--- a/conans/model/env_info.py
+++ b/conans/model/env_info.py
@@ -142,7 +142,6 @@ class EnvValues(object):
# DepsEnvInfo. the OLD values are always kept, never overwrite,
elif isinstance(env_obj, DepsEnvInfo):
for (name, value) in env_obj.vars.items():
- name = name.upper() if name.lower() == "path" else name
self.add(name, value)
else:
raise ConanException("unknown env type: %s" % env_obj)
@@ -197,10 +196,17 @@ class EnvInfo(object):
def __init__(self):
self._values_ = {}
+ @staticmethod
+ def _adjust_casing(name):
+ """We don't want to mix "path" with "PATH", actually we don`t want to mix anything
+ with different casing. Furthermore in Windows all is uppercase, but managing all in
+ upper case will be breaking."""
+ return name.upper() if name.lower() == "path" else name
+
def __getattr__(self, name):
if name.startswith("_") and name.endswith("_"):
return super(EnvInfo, self).__getattr__(name)
-
+ name = self._adjust_casing(name)
attr = self._values_.get(name)
if not attr:
self._values_[name] = []
@@ -209,6 +215,7 @@ class EnvInfo(object):
def __setattr__(self, name, value):
if name.startswith("_") and name.endswith("_"):
return super(EnvInfo, self).__setattr__(name, value)
+ name = self._adjust_casing(name)
self._values_[name] = value
@property
| unstable/incorrect PATH variable definition when the name variable is lowercased
I finally found the time to hunt down the unstable PATH definition which I discovered while writing a test for the virtualenv generator (see #2556).
The basic setup is quite simple. I have two packages, `base` and `derived` where `derived` requires `base`. Both packages should contribute two directories to the PATH variable, `basedir/bin:samebin` and `deriveddir/bin:samebin` respectively. However, in the `base` package PATH is written in lowercase (which was a typo).
Base Package:
~~~python
import os
from conans import ConanFile
class BaseConan(ConanFile):
name = "base"
version = "0.1"
def package_info(self):
self.env_info.path.extend([os.path.join("basedir", "bin"),"samebin"])
~~~
Derived Package:
~~~python
import os
from conans import ConanFile
class DerivedConan(ConanFile):
name = "dummy"
version = "0.1"
requires = "base/0.1@lasote/testing"
def package_info(self):
self.env_info.PATH = [os.path.join("deriveddir", "bin"),"samebin"]
~~~
conanfile.txt:
~~~
[requires]
derived/0.1@lasote/testing
[generators]
virtualenv
~~~
When I now use the derived package, for example with the virtualenv generator, I randomly get one of the following two values for PATH:
* `PATH="deriveddir/bin":"samebin":"basedir/bin":"samebin":$PATH`
* `PATH="basedir/bin":"samebin":"deriveddir/bin":"samebin":$PATH`
The reason for this strange behavior is actually (an undocumented feature?) in the [update method of the EnvValues class](https://github.com/conan-io/conan/blob/8151c4c39a5ffbf42f21a10d586fe88b8f1c8f04/conans/model/env_info.py#L145), where path is uppercased automatically. However, the lowercase `path` somehow remains in one of the env dicts. Finally, depending on the random order between `PATH` and `path` in the unordered dict, either one or the other result is generated.
Interestingly, when `PATH` is used instead of `path` in the `base` package, the returned value is `PATH="deriveddir/bin":"basedir/bin":"samebin":$PATH`. Note that the [lists have been deduplicated](https://github.com/conan-io/conan/blob/8151c4c39a5ffbf42f21a10d586fe88b8f1c8f04/conans/model/env_info.py#L236), which I did not expect. Is this another undocumented feature of env_info?
I am not sure what the best fix for this situation is. On the one hand, using `path` in the `base` package was a typo and I would not recommend it in general. On the other hand, given that there is special handling in place, it kind of works sometimes, which is the absolute worst case. I currently lean towards removing the special case for PATH completely, in order to get deterministic results and to fail early when writing an incorrect recipe. However, I am clearly missing the big picture here to actually decide on the best solution. Maybe @lasote can shed some light on this issue.
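To illustrate the mechanism, a small self-contained sketch (the helper names are invented for the example; only `adjust_casing` mirrors the fix applied in `EnvInfo`): without casing normalization, `path` and `PATH` survive as two competing keys, and the merged result depends on which one is visited first.

```python
# Hypothetical model of the bug: "path" and "PATH" end up as two distinct
# keys in the env dict, so the final PATH depends on iteration order.
def merge_env(dep_vars):
    merged = {}
    for name, paths in dep_vars:
        merged.setdefault(name, []).extend(paths)
    return merged

def adjust_casing(name):
    # Mirrors the fix: normalize any casing of "path" to "PATH".
    return name.upper() if name.lower() == "path" else name

deps = [("PATH", ["deriveddir/bin", "samebin"]),   # derived package
        ("path", ["basedir/bin", "samebin"])]      # base package (typo)

buggy = merge_env(deps)
fixed = merge_env((adjust_casing(n), p) for n, p in deps)
print(sorted(buggy))  # two competing keys: 'PATH' and 'path'
print(fixed["PATH"])  # one merged list, deterministic order
```

With normalization, both contributions land under a single `PATH` key and the order is fixed by dependency traversal rather than by dict-key luck.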
Best,
Mario
| conan-io/conan | diff --git a/conans/test/command/remote_test.py b/conans/test/command/remote_test.py
index e7cb82f21..d28942a5e 100644
--- a/conans/test/command/remote_test.py
+++ b/conans/test/command/remote_test.py
@@ -41,6 +41,49 @@ class RemoteTest(unittest.TestCase):
output = str(self.client.user_io.out)
self.assertIn("remote1: http://", output.splitlines()[0])
+ def add_force_test(self):
+ client = TestClient()
+ client.run("remote add r1 https://r1")
+ client.run("remote add r2 https://r2")
+ client.run("remote add r3 https://r3")
+ client.run("remote add_ref Hello/0.1@user/testing r2")
+ client.run("remote add_ref Hello2/0.1@user/testing r1")
+
+ client.run("remote add r4 https://r4 -f")
+ client.run("remote list")
+ lines = str(client.user_io.out).splitlines()
+ self.assertIn("r1: https://r1", lines[0])
+ self.assertIn("r2: https://r2", lines[1])
+ self.assertIn("r3: https://r3", lines[2])
+ self.assertIn("r4: https://r4", lines[3])
+
+ client.run("remote add r2 https://newr2 -f")
+ client.run("remote list")
+ lines = str(client.user_io.out).splitlines()
+ self.assertIn("r1: https://r1", lines[0])
+ self.assertIn("r3: https://r3", lines[1])
+ self.assertIn("r4: https://r4", lines[2])
+ self.assertIn("r2: https://newr2", lines[3])
+
+ client.run("remote add newr1 https://r1 -f")
+ client.run("remote list")
+ lines = str(client.user_io.out).splitlines()
+ self.assertIn("r3: https://r3", lines[0])
+ self.assertIn("r4: https://r4", lines[1])
+ self.assertIn("r2: https://newr2", lines[2])
+ self.assertIn("newr1: https://r1", lines[3])
+ client.run("remote list_ref")
+ self.assertIn("Hello2/0.1@user/testing: newr1", client.out)
+ self.assertIn("Hello/0.1@user/testing: r2", client.out)
+
+ client.run("remote add newr1 https://newr1 -f -i")
+ client.run("remote list")
+ lines = str(client.user_io.out).splitlines()
+ self.assertIn("newr1: https://newr1", lines[0])
+ self.assertIn("r3: https://r3", lines[1])
+ self.assertIn("r4: https://r4", lines[2])
+ self.assertIn("r2: https://newr2", lines[3])
+
def rename_test(self):
client = TestClient()
client.run("remote add r1 https://r1")
diff --git a/conans/test/command/upload_complete_test.py b/conans/test/command/upload_complete_test.py
index 3f05a5a11..bad8447fb 100644
--- a/conans/test/command/upload_complete_test.py
+++ b/conans/test/command/upload_complete_test.py
@@ -1,3 +1,4 @@
+import json
import unittest
from conans.test.utils.tools import TestClient, TestServer, TestRequester
from conans.test.utils.test_files import hello_source_files, temp_folder,\
@@ -7,7 +8,7 @@ import os
from conans.paths import CONAN_MANIFEST, EXPORT_TGZ_NAME, CONANINFO
import platform
import stat
-from conans.util.files import save, mkdir
+from conans.util.files import load, mkdir, save
from conans.model.ref import ConanFileReference, PackageReference
from conans.model.manifest import FileTreeManifest
from conans.test.utils.test_files import uncompress_packaged_files
@@ -336,3 +337,75 @@ class TestConan(ConanFile):
self.client.run('upload %s --force' % str(self.conan_ref))
self.assertIn("Uploading %s" % str(self.conan_ref),
self.client.user_io.out)
+
+ def upload_json_test(self):
+ conanfile = """
+from conans import ConanFile
+
+class TestConan(ConanFile):
+ name = "test"
+ version = "0.1"
+
+ def package(self):
+ self.copy("mylib.so", dst="lib")
+"""
+
+ client = self._get_client()
+ client.save({"conanfile.py": conanfile,
+ "mylib.so": ""})
+ client.run("create . danimtb/testing")
+
+ # Test conflict parameter error
+ error = client.run("upload test/0.1@danimtb/* --all -p ewvfw --json upload.json",
+ ignore_error=True)
+ self.assertTrue(error)
+ json_path = os.path.join(client.current_folder, "upload.json")
+ self.assertTrue(os.path.exists(json_path))
+ json_content = load(json_path)
+ output = json.loads(json_content)
+ self.assertTrue(output["error"])
+ self.assertEqual(0, len(output["uploaded"]))
+
+ # Test invalid reference error
+ error = client.run("upload fake/0.1@danimtb/testing --all --json upload.json",
+ ignore_error=True)
+ self.assertTrue(error)
+ json_path = os.path.join(client.current_folder, "upload.json")
+ self.assertTrue(os.path.exists(json_path))
+ json_content = load(json_path)
+ output = json.loads(json_content)
+ self.assertTrue(output["error"])
+ self.assertEqual(0, len(output["uploaded"]))
+
+ # Test normal upload
+ client.run("upload test/0.1@danimtb/testing --all --json upload.json")
+ self.assertTrue(os.path.exists(json_path))
+ json_content = load(json_path)
+ output = json.loads(json_content)
+ output_expected = {"error": False,
+ "uploaded": [
+ {
+ "recipe": {
+ "id": "test/0.1@danimtb/testing",
+ "remote_url": "unknown",
+ "remote_name": "default",
+ "time": "unknown"
+ },
+ "packages": [
+ {
+ "id": "5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9",
+ "time": "unknown"
+ }
+ ]
+ }
+ ]}
+ self.assertEqual(output_expected["error"], output["error"])
+ self.assertEqual(len(output_expected["uploaded"]), len(output["uploaded"]))
+
+ for i, item in enumerate(output["uploaded"]):
+ self.assertEqual(output_expected["uploaded"][i]["recipe"]["id"], item["recipe"]["id"])
+ self.assertEqual(output_expected["uploaded"][i]["recipe"]["remote_name"],
+ item["recipe"]["remote_name"])
+ for j, subitem in enumerate(item["packages"]):
+ self.assertEqual(output_expected["uploaded"][i]["packages"][j]["id"],
+ subitem["id"])
diff --git a/conans/test/download_test.py b/conans/test/download_test.py
index 6b59b17dd..eec9147c2 100644
--- a/conans/test/download_test.py
+++ b/conans/test/download_test.py
@@ -1,6 +1,6 @@
import unittest
-from conans.client.action_recorder import ActionRecorder
+from conans.client.recorder.action_recorder import ActionRecorder
from conans.client.proxy import ConanProxy
from conans.errors import NotFoundException, ConanException
from conans.model.ref import ConanFileReference, PackageReference
diff --git a/conans/test/util/action_recorder_test.py b/conans/test/functional/action_recorder_test.py
similarity index 96%
rename from conans/test/util/action_recorder_test.py
rename to conans/test/functional/action_recorder_test.py
index ac85817b5..8519cab4b 100644
--- a/conans/test/util/action_recorder_test.py
+++ b/conans/test/functional/action_recorder_test.py
@@ -1,7 +1,7 @@
import unittest
-from conans.client.action_recorder import (ActionRecorder, INSTALL_ERROR_MISSING,
- INSTALL_ERROR_NETWORK)
+from conans.client.recorder.action_recorder import (ActionRecorder, INSTALL_ERROR_MISSING,
+ INSTALL_ERROR_NETWORK)
from conans.model.ref import ConanFileReference, PackageReference
diff --git a/conans/test/functional/upload_recorder_test.py b/conans/test/functional/upload_recorder_test.py
new file mode 100644
index 000000000..31702ab59
--- /dev/null
+++ b/conans/test/functional/upload_recorder_test.py
@@ -0,0 +1,119 @@
+import unittest
+
+from datetime import datetime
+from conans.client.recorder.upload_recoder import UploadRecoder
+
+
+class UploadRecorderTest(unittest.TestCase):
+
+ def setUp(self):
+ self.recorder = UploadRecoder()
+
+ def empty_test(self):
+ info = self.recorder.get_info()
+ expected_result = {'error': False, 'uploaded': []}
+ self.assertEqual(expected_result, info)
+
+ def sequential_test(self):
+ self.recorder.add_recipe("fake/0.1@user/channel", "my_remote", "https://fake_url.com")
+ self.recorder.add_package("fake/0.1@user/channel", "fake_package_id")
+ self.recorder.add_recipe("fakefake/0.1@user/channel", "my_remote2", "https://fake_url2.com")
+ self.recorder.add_package("fakefake/0.1@user/channel", "fakefake_package_id1")
+ self.recorder.add_package("fakefake/0.1@user/channel", "fakefake_package_id2")
+ info = self.recorder.get_info()
+ expected_result_without_time = {
+ "error": False,
+ "uploaded": [
+ {
+ "recipe": {
+ "id": "fake/0.1@user/channel",
+ "remote_name": "my_remote",
+ "remote_url": "https://fake_url.com"
+ },
+ "packages": [
+ {
+ "id": "fake_package_id"
+ }
+ ]
+ },
+ {
+ "recipe": {
+ "id": "fakefake/0.1@user/channel",
+ "remote_name": "my_remote2",
+ "remote_url": "https://fake_url2.com"
+ },
+ "packages": [
+ {
+ "id": "fakefake_package_id1"
+ },
+ {
+ "id": "fakefake_package_id2"
+ }
+ ]
+ }
+ ]
+ }
+
+ self._check_result(expected_result_without_time, info)
+
+ def unordered_test(self):
+ self.recorder.add_recipe("fake1/0.1@user/channel", "my_remote1", "https://fake_url1.com")
+ self.recorder.add_recipe("fake2/0.1@user/channel", "my_remote2", "https://fake_url2.com")
+ self.recorder.add_recipe("fake3/0.1@user/channel", "my_remote3", "https://fake_url3.com")
+ self.recorder.add_package("fake1/0.1@user/channel", "fake1_package_id1")
+ self.recorder.add_package("fake2/0.1@user/channel", "fake2_package_id1")
+ self.recorder.add_package("fake2/0.1@user/channel", "fake2_package_id2")
+ info = self.recorder.get_info()
+ expected_result_without_time = {
+ "error": False,
+ "uploaded": [
+ {
+ "recipe": {
+ "id": "fake1/0.1@user/channel",
+ "remote_name": "my_remote1",
+ "remote_url": "https://fake_url1.com"
+ },
+ "packages": [
+ {
+ "id": "fake1_package_id1"
+ }
+ ]
+ },
+ {
+ "recipe": {
+ "id": "fake2/0.1@user/channel",
+ "remote_name": "my_remote2",
+ "remote_url": "https://fake_url2.com"
+ },
+ "packages": [
+ {
+ "id": "fake2_package_id1"
+ },
+ {
+ "id": "fake2_package_id2"
+ }
+ ]
+ },
+ {
+ "recipe": {
+ "id": "fake3/0.1@user/channel",
+ "remote_name": "my_remote3",
+ "remote_url": "https://fake_url3.com"
+ },
+ "packages": [
+ ]
+ }
+ ]
+ }
+
+ self._check_result(expected_result_without_time, info)
+
+    def _check_result(self, expected, result):
+        for i, item in enumerate(result["uploaded"]):
+            assert item["recipe"]["time"]
+            del result["uploaded"][i]["recipe"]["time"]
+
+            for j, package in enumerate(item["packages"]):
+                assert package["time"], datetime
+                del result["uploaded"][i]["packages"][j]["time"]
+            self.assertEqual(expected, result)
diff --git a/conans/test/integration/conan_env_test.py b/conans/test/integration/conan_env_test.py
index 1409daba1..b09a7caf4 100644
--- a/conans/test/integration/conan_env_test.py
+++ b/conans/test/integration/conan_env_test.py
@@ -626,6 +626,54 @@ class Hello2Conan(ConanFile):
self.assertInSep("VAR2=>24:23*", client.user_io.out)
self.assertInSep("VAR3=>bestvalue*", client.user_io.out)
+ def mix_path_case_test(self):
+ client = TestClient()
+ conanfile = """
+from conans import ConanFile
+class LibConan(ConanFile):
+ name = "libB"
+ version = "1.0"
+
+ def package_info(self):
+ self.env_info.path = ["path_from_B"]
+"""
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/channel")
+
+ conanfile = """
+from conans import ConanFile
+class LibConan(ConanFile):
+ name = "libA"
+ version = "1.0"
+ requires = "libB/1.0@user/channel"
+
+ def package_info(self):
+ self.env_info.PATH.extend(["path_from_A"])
+"""
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/channel")
+
+ conanfile = """
+[requires]
+libA/1.0@user/channel
+[generators]
+virtualenv
+"""
+ client.save({"conanfile.txt": conanfile}, clean_first=True)
+ client.run("install .")
+ info = load(os.path.join(client.current_folder, "conanbuildinfo.txt"))
+ info = info.replace("\r\n", "\n")
+ self.assertIn("""
+[ENV_libA]
+PATH=["path_from_A"]
+[ENV_libB]
+PATH=["path_from_B"]""", info)
+ if platform.system() != "Windows":
+ activate = load(os.path.join(client.current_folder, "activate.sh"))
+ self.assertIn('PATH="path_from_A":"path_from_B":$PATH', activate)
+ else:
+ activate = load(os.path.join(client.current_folder, "activate.bat"))
+ self.assertIn('PATH=path_from_A;path_from_B;%PATH%', activate)
def check_conaninfo_completion_test(self):
"""
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_issue_reference",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 9
} | 1.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"nose-cov",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y cmake"
],
"python": "3.6",
"reqs_path": [
"conans/requirements.txt",
"conans/requirements_osx.txt",
"conans/requirements_server.txt",
"conans/requirements_dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asn1crypto==1.5.1
astroid==1.6.6
attrs==22.2.0
beautifulsoup4==4.12.3
bottle==0.12.25
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
colorama==0.3.9
-e git+https://github.com/conan-io/conan.git@419beea8c76ebf9271c8612339bdb0e5aa376306#egg=conan
cov-core==1.15.0
coverage==4.2
cryptography==2.1.4
deprecation==2.0.7
distro==1.1.0
fasteners==0.19
future==0.16.0
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.7.0
mock==1.3.0
ndg-httpsclient==0.4.4
node-semver==0.2.0
nose==1.3.7
nose-cov==1.6
packaging==21.3
parameterized==0.8.1
patch==1.16
pbr==6.1.1
pluggy==1.0.0
pluginbase==0.7
py==1.11.0
pyasn==1.5.0b7
pyasn1==0.5.1
pycparser==2.21
Pygments==2.14.0
PyJWT==1.7.1
pylint==1.8.4
pyOpenSSL==17.5.0
pyparsing==3.1.4
pytest==7.0.1
PyYAML==3.12
requests==2.27.1
six==1.17.0
soupsieve==2.3.2.post1
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
waitress==2.0.0
WebOb==1.8.9
WebTest==2.0.35
wrapt==1.16.0
zipp==3.6.0
| name: conan
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asn1crypto==1.5.1
- astroid==1.6.6
- attrs==22.2.0
- beautifulsoup4==4.12.3
- bottle==0.12.25
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- colorama==0.3.9
- cov-core==1.15.0
- coverage==4.2
- cryptography==2.1.4
- deprecation==2.0.7
- distro==1.1.0
- fasteners==0.19
- future==0.16.0
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isort==5.10.1
- lazy-object-proxy==1.7.1
- mccabe==0.7.0
- mock==1.3.0
- ndg-httpsclient==0.4.4
- node-semver==0.2.0
- nose==1.3.7
- nose-cov==1.6
- packaging==21.3
- parameterized==0.8.1
- patch==1.16
- pbr==6.1.1
- pluggy==1.0.0
- pluginbase==0.7
- py==1.11.0
- pyasn==1.5.0b7
- pyasn1==0.5.1
- pycparser==2.21
- pygments==2.14.0
- pyjwt==1.7.1
- pylint==1.8.4
- pyopenssl==17.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pyyaml==3.12
- requests==2.27.1
- six==1.17.0
- soupsieve==2.3.2.post1
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- waitress==2.0.0
- webob==1.8.9
- webtest==2.0.35
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/conan
| [
"conans/test/download_test.py::DownloadTest::test_returns_on_failures",
"conans/test/functional/action_recorder_test.py::ActionRecorderTest::test_install"
]
| [
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_complex_deps_propagation",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_complex_deps_propagation_append",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_complex_deps_propagation_override",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_conan_info_cache_and_priority",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_conaninfo_filtered",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_override_simple",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_override_simple2",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_package_env_working",
"conans/test/integration/conan_env_test.py::ConanEnvTest::test_run_env"
]
| []
| []
| MIT License | 2,466 | [
"conans/model/env_info.py",
"conans/client/command.py",
"conans/client/recorder/upload_recoder.py",
"conans/client/remote_registry.py",
"conans/client/cmd/create.py",
"conans/client/conan_api.py",
"conans/client/conan_command_output.py",
"conans/client/proxy.py",
"conans/client/package_installer.py",
"conans/client/cmd/uploader.py",
"conans/client/recorder/__init__.py",
"conans/client/installer.py",
"conans/client/action_recorder.py"
]
| [
"conans/model/env_info.py",
"conans/client/command.py",
"conans/client/recorder/upload_recoder.py",
"conans/client/remote_registry.py",
"conans/client/cmd/create.py",
"conans/client/conan_api.py",
"conans/client/conan_command_output.py",
"conans/client/proxy.py",
"conans/client/package_installer.py",
"conans/client/cmd/uploader.py",
"conans/client/recorder/action_recorder.py",
"conans/client/recorder/__init__.py",
"conans/client/installer.py"
]
|
|
biocommons__bioutils-11 | b9d435b7815a1ffc849cd3980e408bbed53f6bcb | 2018-04-30 22:39:24 | b9d435b7815a1ffc849cd3980e408bbed53f6bcb | diff --git a/.gitignore b/.gitignore
index b6e83da..c1fee79 100644
--- a/.gitignore
+++ b/.gitignore
@@ -94,5 +94,3 @@ ENV/
doc/_build
.eggs
bioutils/_data/assemblies/pull
-.pytest_cache
-.vscode
diff --git a/.vscode/settings.json b/.vscode/settings.json
deleted file mode 100644
index b933f71..0000000
--- a/.vscode/settings.json
+++ /dev/null
@@ -1,6 +0,0 @@
-{
- "python.formatting.provider": "yapf",
- "editor.formatOnSave": true,
- "python.venvPath": "${workspaceFolder}/venv/",
- "python.pythonPath": "${workspaceFolder}/venv/3.6/bin/python",
-}
\ No newline at end of file
diff --git a/Makefile b/Makefile
index 5ce8a7b..367f174 100644
--- a/Makefile
+++ b/Makefile
@@ -10,7 +10,7 @@ SELF:=$(firstword $(MAKEFILE_LIST))
PKG=bioutils
PKGD=$(subst .,/,${PKG})
-VEDIR=venv/3.6
+VEDIR=venv/3.5
############################################################################
diff --git a/README.rst b/README.rst
index a126048..224f8e9 100644
--- a/README.rst
+++ b/README.rst
@@ -9,6 +9,8 @@ the `hgvs <https://github.com/biocommons/hgvs/>`_ and `uta
not really intended for broader use (read: it may change without
notice).
+To use an E-Utilities API key, add it to an environment variable called `ncbi_api_key`
+and it will be used in the E-Utilities request.
.. |issues_badge| image:: https://img.shields.io/github/issues/biocommons/bioutils.png
:target: https://github.com/biocommons/bioutils/issues
diff --git a/bioutils/accessions.py b/bioutils/accessions.py
index c80a428..c10a9b5 100644
--- a/bioutils/accessions.py
+++ b/bioutils/accessions.py
@@ -5,154 +5,8 @@
from __future__ import absolute_import, division, print_function, unicode_literals
-import re
-
from six import iteritems
-from .exceptions import BioutilsError
-
-
-_ensembl_species_prefixes = "|".join("""ENS ENSACA ENSAME ENSAMX
-ENSANA ENSAPL ENSBTA ENSCAF ENSCAN ENSCAP ENSCAT ENSCCA ENSCEL ENSCGR
-ENSCGR ENSCHI ENSCHO ENSCIN ENSCJA ENSCLA ENSCPO ENSCSA ENSCSAV ENSDAR
-ENSDNO ENSDOR ENSEBU ENSECA ENSEEU ENSETE ENSFAL ENSFCA ENSFDA ENSGAC
-ENSGAL ENSGGO ENSGMO ENSHGLF ENSHGLM ENSJJA ENSLAC ENSLAF ENSLOC
-ENSMAU ENSMEU ENSMFA ENSMGA ENSMIC ENSMLE ENSMLU ENSMMU ENSMNE ENSMOC
-ENSMOD ENSMPU ENSMUS ENSNGA ENSNLE ENSOAN ENSOAR ENSOCU ENSODE ENSOGA
-ENSONI ENSOPR ENSORL ENSPAN ENSPCA ENSPCO ENSPEM ENSPFO ENSPMA ENSPPA
-ENSPPR ENSPPY ENSPSI ENSPTI ENSPTR ENSPVA ENSRBI ENSRNO ENSRRO ENSSAR
-ENSSBO ENSSCE ENSSHA ENSSSC ENSSTO ENSTBE ENSTGU ENSTNI ENSTRU ENSTSY
-ENSTTR ENSVPA ENSXET ENSXMA FB MGP_129S1SvImJ_ MGP_AJ_ MGP_AKRJ_
-MGP_BALBcJ_ MGP_C3HHeJ_ MGP_C57BL6NJ_ MGP_CAROLIEiJ_ MGP_CASTEiJ_
-MGP_CBAJ_ MGP_DBA2J_ MGP_FVBNJ_ MGP_LPJ_ MGP_NODShiLtJ_ MGP_NZOHlLtJ_
-MGP_PWKPhJ_ MGP_PahariEiJ_ MGP_SPRETEiJ_ MGP_WSBEiJ_""".split())
-_ensembl_feature_types_re = r"E|FM|G|GT|P|R|T"
-_ensembl_re = r"^(?:{})(?:{}){}$".format(
- _ensembl_species_prefixes, _ensembl_feature_types_re, r"\d{11}(?:\.\d+)?")
-
-ac_namespace_regexps = {
- # https://uswest.ensembl.org/info/genome/stable_ids/prefixes.html
- # [species prefix][feature type prefix][a unique eleven digit number]
- # N.B. The regexp at http://identifiers.org/ensembl appears broken:
- # 1) Human only; 2) escaped backslashes (\\d rather than \d).
- _ensembl_re: "Ensembl",
-
- # http://identifiers.org/insdc/
- # P12345, a UniProtKB accession matches the miriam regexp but shouldn't (I think)
- r"^([A-Z]\d{5}|[A-Z]{2}\d{6}|[A-Z]{4}\d{8}|[A-J][A-Z]{2}\d{5})(\.\d+)?$":
- "INSDC",
-
- # http://identifiers.org/refseq/
- # https://www.ncbi.nlm.nih.gov/books/NBK21091/table/ch18.T.refseq_accession_numbers_and_mole/
- r"^((AC|AP|NC|NG|NM|NP|NR|NT|NW|XM|XP|XR|YP|ZP)_\d+|(NZ\_[A-Z]{4}\d+))(\.\d+)?$":
- "RefSeq",
-
- # UniProtKB
- # http://identifiers.org/uniprot/
- # https://www.uniprot.org/help/accession_numbers
- r"^(?:[OPQ][0-9][A-Z0-9]{3}[0-9]|[A-NR-Z][0-9]([A-Z][A-Z0-9]{2}[0-9]){1,2})$":
- "UniProtKB",
-}
-
-ac_namespace_regexps = {re.compile(k): v for k, v in iteritems(ac_namespace_regexps)}
-
-
-def chr22XY(c):
- """force to name from 1..22, 23, 24, X, Y, M
- to in chr1..chr22, chrX, chrY, chrM
- str or ints accepted
-
- >>> chr22XY('1')
- 'chr1'
- >>> chr22XY(1)
- 'chr1'
- >>> chr22XY('chr1')
- 'chr1'
- >>> chr22XY(23)
- 'chrX'
- >>> chr22XY(24)
- 'chrY'
- >>> chr22XY("X")
- 'chrX'
- >>> chr22XY("23")
- 'chrX'
- >>> chr22XY("M")
- 'chrM'
-
- """
- c = str(c)
- if c[0:3] == 'chr':
- c = c[3:]
- if c == '23':
- c = 'X'
- if c == '24':
- c = 'Y'
- return 'chr' + c
-
-
-def infer_namespace(ac):
- """Infer the single namespace of the given accession
-
- This function is convenience wrapper around infer_namespaces().
- Returns:
- * None if no namespaces are inferred
- * The (single) namespace if only one namespace is inferred
- * Raises an exception if more than one namespace is inferred
-
- >>> infer_namespace("ENST00000530893.6")
- 'Ensembl'
-
- >>> infer_namespace("NM_01234.5")
- 'RefSeq'
-
- >>> infer_namespace("A2BC19")
- 'UniProtKB'
-
- >>> infer_namespace("P12345")
- Traceback (most recent call last):
- ...
- bioutils.exceptions.BioutilsError: Multiple namespaces possible for P12345
-
- >>> infer_namespace("BOGUS99") is None
- True
-
- """
-
- namespaces = infer_namespaces(ac)
- if not namespaces:
- return None
- if len(namespaces) > 1:
- raise BioutilsError("Multiple namespaces possible for {}".format(ac))
- return namespaces[0]
-
-
-def infer_namespaces(ac):
- """infer possible namespaces of given accession based on syntax
- Always returns a list, possibly empty
-
- >>> infer_namespaces("ENST00000530893.6")
- ['Ensembl']
- >>> infer_namespaces("ENST00000530893")
- ['Ensembl']
- >>> infer_namespaces("ENSQ00000530893")
- []
- >>> infer_namespaces("NM_01234")
- ['RefSeq']
- >>> infer_namespaces("NM_01234.5")
- ['RefSeq']
- >>> infer_namespaces("NQ_01234.5")
- []
- >>> infer_namespaces("A2BC19")
- ['UniProtKB']
- >>> sorted(infer_namespaces("P12345"))
- ['INSDC', 'UniProtKB']
- >>> infer_namespaces("A0A022YWF9")
- ['UniProtKB']
-
-
- """
- return [v for k, v in iteritems(ac_namespace_regexps) if k.match(ac)]
-
def prepend_chr(chr):
"""prefix chr with 'chr' if not present
@@ -184,15 +38,46 @@ def strip_chr(chr):
return chr[3:] if chr[0:3] == 'chr' else chr
+def chr22XY(c):
+ """force to name from 1..22, 23, 24, X, Y, M
+ to in chr1..chr22, chrX, chrY, chrM
+ str or ints accepted
+
+ >>> chr22XY('1')
+ 'chr1'
+ >>> chr22XY(1)
+ 'chr1'
+ >>> chr22XY('chr1')
+ 'chr1'
+ >>> chr22XY(23)
+ 'chrX'
+ >>> chr22XY(24)
+ 'chrY'
+ >>> chr22XY("X")
+ 'chrX'
+ >>> chr22XY("23")
+ 'chrX'
+ >>> chr22XY("M")
+ 'chrM'
+
+ """
+ c = str(c)
+ if c[0:3] == 'chr':
+ c = c[3:]
+ if c == '23': c = 'X'
+ if c == '24': c = 'Y'
+ return 'chr' + c
+
+
## <LICENSE>
## Copyright 2014 Bioutils Contributors (https://bitbucket.org/biocommons/bioutils)
-##
+##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
-##
+##
## http://www.apache.org/licenses/LICENSE-2.0
-##
+##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
diff --git a/bioutils/exceptions.py b/bioutils/exceptions.py
deleted file mode 100644
index f6c5aaa..0000000
--- a/bioutils/exceptions.py
+++ /dev/null
@@ -1,3 +0,0 @@
-class BioutilsError(Exception):
- """Root exception for all bioutils exceptions"""
- pass
diff --git a/bioutils/seqfetcher.py b/bioutils/seqfetcher.py
index 3658542..919997f 100644
--- a/bioutils/seqfetcher.py
+++ b/bioutils/seqfetcher.py
@@ -7,6 +7,7 @@ from __future__ import absolute_import, division, print_function, unicode_litera
import logging
import re
+import os
import requests
@@ -203,12 +204,29 @@ def _fetch_seq_ncbi(ac, start_i=None, end_i=None):
url += "&tool={tool}&email={email}".format(tool=ncbi_tool, email=ncbi_email)
+ url = _add_eutils_api_key(url)
+
resp = requests.get(url)
resp.raise_for_status()
seq = ''.join(resp.text.splitlines()[1:])
return seq
+
+def _add_eutils_api_key(url):
+ """Adds eutils api key to the query
+
+ :param url: eutils url with a query string
+ :return: url with api_key parameter set to the value of environment
+ variable 'ncbi_api_key' if available
+ """
+ apikey = os.environ.get('ncbi_api_key')
+ if apikey:
+ url += '&api_key={apikey}'.format(apikey=apikey)
+ return url
+
+
+
# So that I don't forget why I didn't use ebi too:
# $ curl 'http://www.ebi.ac.uk/ena/data/view/AM407889.1&display=fasta'
# >ENA|AM407889|AM407889.2 Medicago sativa partial mRNA ...
diff --git a/bioutils/sequences.py b/bioutils/sequences.py
index 836e04f..f216464 100644
--- a/bioutils/sequences.py
+++ b/bioutils/sequences.py
@@ -3,14 +3,10 @@
from __future__ import absolute_import, division, print_function, unicode_literals
-import logging
-import re
-
import six
+import re
-_logger = logging.getLogger(__name__)
-
aa3_to_aa1_lut = {
"Ala": "A",
"Arg": "R",
@@ -36,7 +32,6 @@ aa3_to_aa1_lut = {
"Ter": "*",
"Sec": "",
}
-
aa1_to_aa3_lut = {v: k for k, v in six.iteritems(aa3_to_aa1_lut)}
@@ -137,33 +132,6 @@ def complement(seq):
return seq.translate(complement_transtable)
-def elide_sequence(s, flank=5, elision="..."):
- """trim a sequence to include the left and right flanking sequences of
- size `flank`, with the intervening sequence elided by `elision`.
-
- >>> elide_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
- 'ABCDE...VWXYZ'
-
- >>> elide_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ", flank=3)
- 'ABC...XYZ'
-
- >>> elide_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ", elision="..")
- 'ABCDE..VWXYZ'
-
- >>> elide_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ", flank=12)
- 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
-
- >>> elide_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ", flank=12, elision=".")
- 'ABCDEFGHIJKL.OPQRSTUVWXYZ'
-
- """
-
- elided_sequence_len = flank + flank + len(elision)
- if len(s) <= elided_sequence_len:
- return s
- return s[:flank] + elision + s[-flank:]
-
-
def looks_like_aa3_p(seq):
"""string looks like a 3-letter AA string"""
return (seq is not None and (len(seq) % 3 == 0) and
@@ -185,18 +153,14 @@ def normalize_sequence(seq):
>>> normalize_sequence("ACGT1")
Traceback (most recent call last):
...
- RuntimeError: Normalized sequence contains non-alphabetic characters
+ RuntimeError: normalized sequence contains non-alphabetic characters
"""
assert isinstance(seq, six.text_type)
nseq = re.sub("[\s\*]", "", seq).upper()
- m = re.search("[^A-Z]", nseq)
- if m:
- _logger.debug("Original sequence: " + seq)
- _logger.debug("Normalized sequence: " + nseq)
- _logger.debug("First non-[A-Z] at {}".format(m.start()))
- raise RuntimeError("Normalized sequence contains non-alphabetic characters")
+ if re.search("[^A-Z]", nseq):
+ raise RuntimeError("normalized sequence contains non-alphabetic characters")
return nseq
diff --git a/circle.yml b/circle.yml
new file mode 100644
index 0000000..fdd64ec
--- /dev/null
+++ b/circle.yml
@@ -0,0 +1,3 @@
+machine:
+ post:
+ - pyenv global 2.7.12 3.5.3
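The `_add_eutils_api_key` helper introduced in the `bioutils/seqfetcher.py` hunk above can be exercised as a standalone sketch; the eutils URL below is illustrative, not taken from the patch:

```python
import os

def add_eutils_api_key(url):
    # Mirror of bioutils.seqfetcher._add_eutils_api_key from the patch above:
    # append &api_key=... only when the 'ncbi_api_key' env var is set.
    apikey = os.environ.get('ncbi_api_key')
    if apikey:
        url += '&api_key={apikey}'.format(apikey=apikey)
    return url

# Illustrative efetch-style URL (any URL that already has a query string works).
url = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=nucleotide&id=NM_000551.3'
os.environ.pop('ncbi_api_key', None)
print(add_eutils_api_key(url))            # unchanged: no key in the environment
os.environ['ncbi_api_key'] = 'test-api-key'
print(add_eutils_api_key(url))            # '&api_key=test-api-key' appended
os.environ.pop('ncbi_api_key', None)      # clean up, as the test patch also does
```

The try/finally cleanup in the accompanying `test_add_eutils_api_key` exists for the same reason as the last line here: leaking the env var would change the behavior of later requests.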
| Add support for eutils' api_key parameter
E-utils will require an api_key starting May 1, 2018.
We should support adding an api_key as an environment variable. | biocommons/bioutils | diff --git a/tests/test_seqfetcher.py b/tests/test_seqfetcher.py
index 4d44847..6154fa6 100644
--- a/tests/test_seqfetcher.py
+++ b/tests/test_seqfetcher.py
@@ -1,7 +1,8 @@
import pytest
import vcr
+import os
-from bioutils.seqfetcher import fetch_seq, _fetch_seq_ensembl, _fetch_seq_ncbi
+from bioutils.seqfetcher import fetch_seq, _fetch_seq_ensembl, _fetch_seq_ncbi, _add_eutils_api_key
@vcr.use_cassette
@@ -21,7 +22,20 @@ def test_fetch_seq():
assert 'TTTATTTATTTTAGATACTTATCTC' == fetch_seq('KB663603.1', 0, 25)
assert 'CGCCTCCCTTCCCCCTCCCCGCCCG' == fetch_seq('ENST00000288602', 0, 25)
assert 'MAALSGGGGGGAEPGQALFNGDMEP' == fetch_seq('ENSP00000288602', 0, 25)
-
+
+
+def test_add_eutils_api_key():
+ try:
+ url = 'http://test.com?boo=bar'
+ assert _add_eutils_api_key(url) == url
+ os.environ['ncbi_api_key'] = 'test-api-key'
+ assert _add_eutils_api_key(url) == url + '&api_key=test-api-key'
+ finally:
+ try:
+ os.environ.pop('ncbi_api_key')
+ except KeyError:
+ pass
+
def test_fetch_seq_errors():
# Traceback (most recent call last):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_added_files",
"has_removed_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 6
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [],
"python": "3.5",
"reqs_path": [
"etc/install.reqs",
"etc/develop.reqs"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
backcall==0.2.0
-e git+https://github.com/biocommons/bioutils.git@b9d435b7815a1ffc849cd3980e408bbed53f6bcb#egg=bioutils
certifi==2021.5.30
charset-normalizer==2.0.12
coverage==6.2
decorator==5.1.1
distlib==0.3.9
filelock==3.4.1
flake8==5.0.4
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.4.0
iniconfig==1.1.1
ipython==7.16.3
ipython-genutils==0.2.0
jedi==0.17.2
mccabe==0.7.0
multidict==5.2.0
packaging==21.3
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
platformdirs==2.4.0
pluggy==1.0.0
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
PyYAML==6.0.1
requests==2.27.1
six==1.17.0
toml==0.10.2
tomli==1.2.3
tox==3.28.0
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
vcrpy==4.1.1
virtualenv==20.16.2
wcwidth==0.2.13
wrapt==1.16.0
yapf==0.32.0
yarl==1.7.2
zipp==3.6.0
| name: bioutils
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- backcall==0.2.0
- charset-normalizer==2.0.12
- coverage==6.2
- decorator==5.1.1
- distlib==0.3.9
- filelock==3.4.1
- flake8==5.0.4
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.4.0
- iniconfig==1.1.1
- ipython==7.16.3
- ipython-genutils==0.2.0
- jedi==0.17.2
- mccabe==0.7.0
- multidict==5.2.0
- packaging==21.3
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- platformdirs==2.4.0
- pluggy==1.0.0
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pyyaml==6.0.1
- requests==2.27.1
- six==1.17.0
- toml==0.10.2
- tomli==1.2.3
- tox==3.28.0
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- vcrpy==4.1.1
- virtualenv==20.16.2
- wcwidth==0.2.13
- wrapt==1.16.0
- yapf==0.32.0
- yarl==1.7.2
- zipp==3.6.0
prefix: /opt/conda/envs/bioutils
| [
"tests/test_seqfetcher.py::test_fetch_seq",
"tests/test_seqfetcher.py::test_add_eutils_api_key",
"tests/test_seqfetcher.py::test_fetch_seq_errors"
]
| []
| []
| []
| Apache License 2.0 | 2,467 | [
"README.rst",
"Makefile",
"bioutils/sequences.py",
".vscode/settings.json",
"bioutils/exceptions.py",
"bioutils/seqfetcher.py",
".gitignore",
"bioutils/accessions.py",
"circle.yml"
]
| [
"README.rst",
"Makefile",
"bioutils/sequences.py",
".vscode/settings.json",
"bioutils/exceptions.py",
"bioutils/seqfetcher.py",
".gitignore",
"bioutils/accessions.py",
"circle.yml"
]
|
|
guykisel__inline-plz-228 | dc293c43edd1609683294660fb7c6a0840fb24ea | 2018-05-01 23:55:15 | 8aa2a144131b4c6608baaf0185295f93a7b1dbf9 | diff --git a/inlineplz/linters/__init__.py b/inlineplz/linters/__init__.py
index 7fede16..a0fd9a4 100644
--- a/inlineplz/linters/__init__.py
+++ b/inlineplz/linters/__init__.py
@@ -100,9 +100,9 @@ LINTERS = {
'install': [['npm', 'install', 'eslint']],
'help': [os.path.normpath('./node_modules/.bin/eslint'), '-h'],
'run':
- [os.path.normpath('./node_modules/.bin/eslint'), '.', '-f', 'json'],
+ [os.path.normpath('./node_modules/.bin/eslint'), '.', '-f', 'unix'],
'rundefault': [
- os.path.normpath('./node_modules/.bin/eslint'), '.', '-f', 'json',
+ os.path.normpath('./node_modules/.bin/eslint'), '.', '-f', 'unix',
'-c', '{config_dir}/.eslintrc.js', '--ignore-path', '{config_dir}/.eslintignore'
],
'dotfiles': [
diff --git a/inlineplz/linters/config/.eslintignore b/inlineplz/linters/config/.eslintignore
index 6713aaf..ce2175e 100644
--- a/inlineplz/linters/config/.eslintignore
+++ b/inlineplz/linters/config/.eslintignore
@@ -1,10 +1,10 @@
-coverage/**
-docs/**
-jsdoc/**
-templates/**
-tmp/**
-vendor/**
-src/**
-dist/**
-node_modules/**
+**/coverage/**
+**/docs/**
+**/jsdoc/**
+**/templates/**
+**/tmp/**
+**/vendor/**
+**/src/**
+**/dist/**
**/node_modules/**
+**/.tox/**
diff --git a/inlineplz/parsers/eslint.py b/inlineplz/parsers/eslint.py
index 3d0e556..972ae1e 100644
--- a/inlineplz/parsers/eslint.py
+++ b/inlineplz/parsers/eslint.py
@@ -12,14 +12,14 @@ class ESLintParser(ParserBase):
def parse(self, lint_data):
messages = set()
- for filedata in json.loads(lint_data):
- if filedata.get('messages'):
- for msgdata in filedata['messages']:
- try:
- path = filedata['filePath']
- line = msgdata['line']
- msgbody = msgdata['message']
- messages.add((path, line, msgbody))
- except (ValueError, KeyError):
- print('Invalid message: {0}'.format(msgdata))
+ for line in lint_data.split('\n'):
+ try:
+ parts = line.split(':')
+ if line.strip() and parts:
+ path = parts[0].strip()
+ line = int(parts[1].strip())
+ msgbody = ':'.join(parts[3:]).strip()
+ messages.add((path, line, msgbody))
+ except (ValueError, IndexError):
+ print('Invalid message: {0}'.format(line))
return messages
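The rewritten `ESLintParser.parse` above consumes eslint's `unix` formatter (`path:line:col: message`) instead of JSON. A standalone sketch of that line-splitting logic, with an illustrative lint line:

```python
def parse_eslint_unix(lint_data):
    # Parse eslint "unix" formatter output into (path, line, message) tuples,
    # following the parser logic in the patch above.
    messages = set()
    for raw in lint_data.split('\n'):
        try:
            parts = raw.split(':')
            if raw.strip() and parts:
                path = parts[0].strip()
                lineno = int(parts[1].strip())
                # parts[2] is the column; the message may itself contain ':'.
                msgbody = ':'.join(parts[3:]).strip()
                messages.add((path, lineno, msgbody))
        except (ValueError, IndexError):
            print('Invalid message: {0}'.format(raw))
    return messages

sample = "fullOfProblems.js:1:10: 'addOne' is defined but never used. [Error/no-unused-vars]"
print(parse_eslint_unix(sample))
```

Because each diagnostic is a single short line, this formatter sidesteps the `RangeError: Invalid string length` that `JSON.stringify` hits on very large result sets.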
| switch eslint to a different formatter
the json formatter breaks on long text: https://github.com/eslint/eslint/issues/5380
```
b'Invalid string length\nRangeError: Invalid string length\n at JSON.stringify (<anonymous>)\n at module.exports (/home/travis/build/guykisel/inline-plz/node_modules/eslint/lib/formatters/json.js:12:17)\n at printResults (/home/travis/build/guykisel/inline-plz/node_modules/eslint/lib/cli.js:91:20)\n at Object.execute (/home/travis/build/guykisel/inline-plz/node_modules/eslint/lib/cli.js:201:17)\n at Object.<anonymous> (/home/travis/build/guykisel/inline-plz/node_modules/eslint/bin/eslint.js:74:28)\n at Module._compile (module.js:635:30)\n at Object.Module._extensions..js (module.js:646:10)\n at Module.load (module.js:554:32)\n at tryModuleLoad (module.js:497:12)\n at Function.Module._load (module.js:489:3)'
Parsing of eslint took 0 seconds
``` | guykisel/inline-plz | diff --git a/tests/parsers/test_eslint.py b/tests/parsers/test_eslint.py
index 8255168..780af9f 100644
--- a/tests/parsers/test_eslint.py
+++ b/tests/parsers/test_eslint.py
@@ -18,6 +18,6 @@ eslint_path = os.path.join(
def test_eslint():
with codecs.open(eslint_path, encoding='utf-8', errors='replace') as inputfile:
messages = sorted(list(eslint.ESLintParser().parse(inputfile.read())))
- assert messages[0][2] == 'Parsing error: Illegal return statement'
- assert messages[0][1] == 17
- assert messages[0][0] == 'C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\asi.js'
+ assert messages[0][2] == "'addOne' is defined but never used. [Error/no-unused-vars]"
+ assert messages[0][1] == 1
+ assert messages[0][0] == '/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js'
diff --git a/tests/testdata/parsers/eslint.txt b/tests/testdata/parsers/eslint.txt
index 27a5040..04d345a 100644
--- a/tests/testdata/parsers/eslint.txt
+++ b/tests/testdata/parsers/eslint.txt
@@ -1,1 +1,9 @@
-[{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\data\\ascii-identifier-data.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\data\\non-ascii-identifier-part-only.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\data\\non-ascii-identifier-start.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\dist\\jshint-rhino.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\dist\\jshint.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\examples\\reporter.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\scripts\\build.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\scripts\\generate-identifier-data.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\cli.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\jshint.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\lex.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\messages.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\name-stack.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\options.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\platforms\\rhino.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reg.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reporters\\checkstyle.js","messages":[],"errorCount":0,"war
ningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reporters\\default.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reporters\\jslint_xml.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reporters\\non_error.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\reporters\\unix.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\scope-manager.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\state.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\style.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\src\\vars.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\browser.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\cli.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\helpers\\browser\\fixture-fs.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\helpers\\browser\\server.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\helpers\\fixture.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\helpers\\testhelper.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\libs\\backbone.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\libs\\codemirror3.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Document
s\\jshint\\tests\\regression\\libs\\jquery-1.7.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\libs\\json2.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\libs\\lodash.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\libs\\prototype-17.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\npm.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\regression\\thirdparty.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\core.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\envs.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\asi.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal return statement","line":17,"column":20}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\blocks.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected end of 
input","line":32,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\boss.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\browser.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\camelcase.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\caseExpressions.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\class-declaration.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected reserved word","line":1,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\comma.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected identifier","line":15,"column":7}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\const.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token const","line":16,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\curly.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal return statement","line":2,"column":12}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\curly2.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal return statement","line":2,"column":12}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\default-arguments.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token 
=","line":7,"column":28}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\destparam.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token =","line":4,"column":17}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\emptystmt.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ;","line":1,"column":5}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\enforceall.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\eqeqeq.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es5.funcexpr.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es5.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Object literal may not have data and accessor property with the same name","line":43,"column":19}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es5Reserved.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token default","line":6,"column":6}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es6-export-star-from.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal export declaration","line":1,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es6-import-export.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal import 
declaration","line":3,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es6-template-literal-tagged.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ILLEGAL","line":5,"column":18}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\es6-template-literal.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ILLEGAL","line":3,"column":15}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\exported.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\forin.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\function-declaration.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal export declaration","line":1,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\functionScopedOptions.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh-2194.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh-226.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh-334.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh-738-browser.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh-738-node.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1227.js","messages":[],"errorCount":0,"warningCount":0},{"fi
lePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1632-1.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1632-2.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1632-3.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-1.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-2.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-3.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-4.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-5.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1768-6.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh1802.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh247.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh431.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh56.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh618.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh668.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\un
it\\fixtures\\gh826.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token <","line":24,"column":6}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh870.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh878.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gh988.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\gruntComment.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\identifiers.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\ignore-w117.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\ignored.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\ignoreDelimiters.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token 
<","line":3,"column":4}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\immed.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\insideEval.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\jslintInverted.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\jslintOptions.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\jslintRenamed.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\lastsemic.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\latedef-esnext.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token let","line":1,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\latedef-inline.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\latedef.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\latedefundef.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\laxbreak.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\laxcomma.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\leak.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token 
const","line":3,"column":4}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\loopfunc.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\mappingstart.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\max-cyclomatic-complexity-per-function.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\max-nested-block-depth-per-function.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\max-parameters-per-function.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token =","line":7,"column":13}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\max-statements-per-function.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\maxlen.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\multiline-global-declarations.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\nativeobject.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\nbsp.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\nestedFunctions-locations.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\nestedFunctions.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token 
[","line":37,"column":3}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\newcap.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\noarg.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\onevar.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\parsingCommas.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ,","line":2,"column":13}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\protoiterator.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\quotes.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\quotes2.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\quotes3.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\quotes4.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ILLEGAL","line":2,"column":14}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\redef-es6.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token let","line":2,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\redef.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\regex_array.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Illegal return 
statement","line":6,"column":8}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\removeglobals.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\reserved.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token let","line":5,"column":6}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\return.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\safeasi.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token .","line":10,"column":9}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\scope-cross-blocks.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\scope-redef.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\scope.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\scripturl.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\shadow-inline.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\shelljs.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strict_incorrect.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strict_newcap.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strict_this.js","messages":[],"errorCount":0,"warningCount":0},{"filePat
h":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strict_this2.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strict_violations.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\strings.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token ILLEGAL","line":9,"column":22}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\supernew.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\switchDefaultFirst.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\switchFallThrough.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token :","line":40,"column":13}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\trycatch.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\typeofcomp.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\undef_func.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\undef.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\undefstrict.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\unignored.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\unused-cross-blocks.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fix
tures\\unused.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token const","line":34,"column":2}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\unusedglobals.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\with.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Strict mode code may not include a with statement","line":13,"column":6}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\fixtures\\yield-expressions.js","messages":[{"fatal":true,"severity":2,"message":"Parsing error: Unexpected token *","line":1,"column":10}],"errorCount":1,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\options.js","messages":[],"errorCount":0,"warningCount":0},{"filePath":"C:\\Users\\Guy\\Documents\\jshint\\tests\\unit\\parser.js","messages":[],"errorCount":0,"warningCount":0}]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:1:10: 'addOne' is defined but never used. [Error/no-unused-vars]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:2:9: Use the isNaN function to compare with NaN. [Error/use-isnan]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:3:16: Unexpected space before unary operator '++'. [Error/space-unary-ops]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:3:20: Missing semicolon. [Warning/semi]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:4:12: Unnecessary 'else' after 'return'. [Warning/no-else-return]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:5:1: Expected indentation of 8 spaces but found 6. [Warning/indent]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:5:7: Function 'addOne' expected a return value. [Error/consistent-return]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:5:13: Missing semicolon. [Warning/semi]
+/var/lib/jenkins/workspace/Releases/ESLint Release/eslint/fullOfProblems.js:7:2: Unnecessary semicolon. [Error/no-extra-semi]
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 3
} | 0.30 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
cryptography==44.0.2
dirtyjson==1.0.8
exceptiongroup==1.2.2
git-url-parse==1.2.2
github3.py==4.0.1
idna==3.10
iniconfig==2.1.0
-e git+https://github.com/guykisel/inline-plz.git@dc293c43edd1609683294660fb7c6a0840fb24ea#egg=inlineplz
packaging==24.2
pbr==6.1.1
pluggy==1.5.0
pycparser==2.22
PyJWT==2.10.1
pytest==8.3.5
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requests==2.32.3
scandir==1.10.0
six==1.17.0
tomli==2.2.1
unidiff==0.7.5
uritemplate==4.1.1
uritemplate.py==3.0.2
urllib3==2.3.0
xmltodict==0.14.2
| name: inline-plz
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- cryptography==44.0.2
- dirtyjson==1.0.8
- exceptiongroup==1.2.2
- git-url-parse==1.2.2
- github3-py==4.0.1
- idna==3.10
- iniconfig==2.1.0
- packaging==24.2
- pbr==6.1.1
- pluggy==1.5.0
- pycparser==2.22
- pyjwt==2.10.1
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- requests==2.32.3
- scandir==1.10.0
- six==1.17.0
- tomli==2.2.1
- unidiff==0.7.5
- uritemplate==4.1.1
- uritemplate-py==3.0.2
- urllib3==2.3.0
- xmltodict==0.14.2
prefix: /opt/conda/envs/inline-plz
| [
"tests/parsers/test_eslint.py::test_eslint"
]
| []
| []
| []
| ISC License | 2,468 | [
"inlineplz/linters/config/.eslintignore",
"inlineplz/parsers/eslint.py",
"inlineplz/linters/__init__.py"
]
| [
"inlineplz/linters/config/.eslintignore",
"inlineplz/parsers/eslint.py",
"inlineplz/linters/__init__.py"
]
|
|
python-useful-helpers__exec-helpers-26 | 5f107c01eb0223d63a8ba5ad28d2bedecea4a7cd | 2018-05-02 11:26:53 | 5f107c01eb0223d63a8ba5ad28d2bedecea4a7cd | coveralls: ## Pull Request Test Coverage Report for [Build 85](https://coveralls.io/builds/16794956)
* **72** of **89** **(80.9%)** changed or added relevant lines in **4** files are covered.
* **1** unchanged line in **1** file lost coverage.
* Overall coverage decreased (**-0.5%**) to **97.543%**
---
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| :-----|--------------|--------|---: |
| [exec_helpers/_ssh_client_base.py](https://coveralls.io/builds/16794956/source?filename=exec_helpers%2F_ssh_client_base.py#L564) | 15 | 16 | 93.75%
| [exec_helpers/exec_result.py](https://coveralls.io/builds/16794956/source?filename=exec_helpers%2Fexec_result.py#L76) | 2 | 3 | 66.67%
| [exec_helpers/subprocess_runner.py](https://coveralls.io/builds/16794956/source?filename=exec_helpers%2Fsubprocess_runner.py#L275) | 43 | 58 | 74.14%
<!-- | **Total:** | **72** | **89** | **80.9%** | -->
| Files with Coverage Reduction | New Missed Lines | % |
| :-----|--------------|--: |
| [exec_helpers/exec_result.py](https://coveralls.io/builds/16794956/source?filename=exec_helpers%2Fexec_result.py#L78) | 1 | 98.01% |
<!-- | **Total:** | **1** | | -->
| Totals | [](https://coveralls.io/builds/16794956) |
| :-- | --: |
| Change from base [Build 84](https://coveralls.io/builds/16778519): | -0.5% |
| Covered Lines: | 913 |
| Relevant Lines: | 936 |
---
##### 💛 - [Coveralls](https://coveralls.io)
| diff --git a/.editorconfig b/.editorconfig
index afdae29..c13093d 100644
--- a/.editorconfig
+++ b/.editorconfig
@@ -9,7 +9,7 @@ insert_final_newline = true
trim_trailing_whitespace = true
[*.{py,ini}]
-max_line_length = 79
+max_line_length = 120
[*.{yml,rst}]
indent_size = 2
diff --git a/.pylintrc b/.pylintrc
index 8bf1f73..9e6d02f 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -273,7 +273,7 @@ logging-modules=logging
[FORMAT]
# Maximum number of characters on a single line.
-max-line-length=80
+max-line-length=120
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
diff --git a/doc/source/SSHClient.rst b/doc/source/SSHClient.rst
index 28f030d..61d8834 100644
--- a/doc/source/SSHClient.rst
+++ b/doc/source/SSHClient.rst
@@ -101,24 +101,28 @@ API: SSHClient and SSHAuth.
:param enforce: Enforce sudo enabled or disabled. By default: None
:type enforce: ``typing.Optional[bool]``
- .. py:method:: execute_async(command, get_pty=False, open_stdout=True, open_stderr=True, stdin=None, **kwargs)
+ .. py:method:: execute_async(command, stdin=None, open_stdout=True, open_stderr=True, verbose=False, log_mask_re=None, **kwargs)
Execute command in async mode and return channel with IO objects.
:param command: Command for execution
:type command: ``str``
- :param get_pty: open PTY on remote machine
- :type get_pty: ``bool``
:param stdin: pass STDIN text to the process
- :type stdin: ``typing.Union[six.text_type, six.binary_type, None]``
+ :type stdin: ``typing.Union[six.text_type, six.binary_type, bytearray, None]``
:param open_stdout: open STDOUT stream for read
:type open_stdout: bool
:param open_stderr: open STDERR stream for read
:type open_stderr: bool
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
:rtype: ``typing.Tuple[paramiko.Channel, paramiko.ChannelFile, paramiko.ChannelFile, paramiko.ChannelFile]``
.. versionchanged:: 1.2.0 open_stdout and open_stderr flags
.. versionchanged:: 1.2.0 stdin data
+ .. versionchanged:: 1.2.0 get_pty moved to `**kwargs`
.. py:method:: execute(command, verbose=False, timeout=1*60*60, **kwargs)
diff --git a/doc/source/Subprocess.rst b/doc/source/Subprocess.rst
index 9aed138..eb0116b 100644
--- a/doc/source/Subprocess.rst
+++ b/doc/source/Subprocess.rst
@@ -39,6 +39,27 @@ API: Subprocess
.. versionchanged:: 1.1.0 release lock on exit
+ .. py:method:: execute_async(command, stdin=None, open_stdout=True, open_stderr=True, verbose=False, log_mask_re=None, **kwargs)
+
+ Execute command in async mode and return Popen with IO objects.
+
+ :param command: Command for execution
+ :type command: str
+ :param stdin: pass STDIN text to the process
+ :type stdin: typing.Union[six.text_type, six.binary_type, bytearray, None]
+ :param open_stdout: open STDOUT stream for read
+ :type open_stdout: bool
+ :param open_stderr: open STDERR stream for read
+ :type open_stderr: bool
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
+ :rtype: ``typing.Tuple[subprocess.Popen, None, typing.Optional[io.TextIOWrapper], typing.Optional[io.TextIOWrapper], ]``
+
+ .. versionadded:: 1.2.0
+
.. py:method:: execute(command, verbose=False, timeout=1*60*60, **kwargs)
Execute command and wait for return code.
diff --git a/exec_helpers/_api.py b/exec_helpers/_api.py
index 35aaf48..77cb01f 100644
--- a/exec_helpers/_api.py
+++ b/exec_helpers/_api.py
@@ -22,11 +22,16 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
+import logging
import re
import threading
+import typing # noqa # pylint: disable=unused-import
+
+import six # noqa # pylint: disable=unused-import
from exec_helpers import constants
from exec_helpers import exceptions
+from exec_helpers import exec_result # noqa # pylint: disable=unused-import
from exec_helpers import proc_enums
from exec_helpers import _log_templates
@@ -44,7 +49,7 @@ class ExecHelper(object):
self,
logger, # type: logging.Logger
log_mask_re=None, # type: typing.Optional[str]
- ):
+ ): # type: (...) -> None
"""ExecHelper global API.
:param log_mask_re: regex lookup rule to mask command for logger.
@@ -126,6 +131,78 @@ class ExecHelper(object):
return cmd
+ def execute_async(
+ self,
+ command, # type: str
+ stdin=None, # type: typing.Union[six.text_type, six.binary_type, bytearray, None]
+ open_stdout=True, # type: bool
+ open_stderr=True, # type: bool
+ verbose=False, # type: bool
+ log_mask_re=None, # type: typing.Optional[str]
+ **kwargs
+ ):
+ """Execute command in async mode and return remote interface with IO objects.
+
+ :param command: Command for execution
+ :type command: str
+ :param stdin: pass STDIN text to the process
+ :type stdin: typing.Union[six.text_type, six.binary_type, bytearray, None]
+ :param open_stdout: open STDOUT stream for read
+ :type open_stdout: bool
+ :param open_stderr: open STDERR stream for read
+ :type open_stderr: bool
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
+ :rtype: typing.Tuple[
+ typing.Any,
+ typing.Any,
+ typing.Any,
+ typing.Any,
+ ]
+
+ .. versionchanged:: 1.2.0 open_stdout and open_stderr flags
+ .. versionchanged:: 1.2.0 stdin data
+ """
+ raise NotImplementedError # pragma: no cover
+
+ def _exec_command(
+ self,
+ command, # type: str
+ interface, # type: typing.Any,
+ stdout, # type: typing.Any,
+ stderr, # type: typing.Any,
+ timeout, # type: int
+ verbose=False, # type: bool
+ log_mask_re=None, # type: typing.Optional[str]
+ **kwargs
+ ): # type: (...) -> exec_result.ExecResult
+ """Get exit status from channel with timeout.
+
+ :param command: Command for execution
+ :type command: str
+ :param interface: Control interface
+ :type interface: typing.Any
+ :param stdout: STDOUT pipe or file-like object
+ :type stdout: typing.Any
+ :param stderr: STDERR pipe or file-like object
+ :type stderr: typing.Any
+ :param timeout: Timeout for command execution
+ :type timeout: int
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
+ :rtype: ExecResult
+ :raises ExecHelperTimeoutError: Timeout exceeded
+
+ .. versionchanged:: 1.2.0 log_mask_re regex rule for masking cmd
+ """
+ raise NotImplementedError # pragma: no cover
+
def execute(
self,
command, # type: str
@@ -148,7 +225,33 @@ class ExecHelper(object):
.. versionchanged:: 1.2.0 default timeout 1 hour
"""
- raise NotImplementedError() # pragma: no cover
+ with self.lock:
+ (
+ iface, # type: typing.Any
+ _,
+ stderr, # type: typing.Any
+ stdout, # type: typing.Any
+ ) = self.execute_async(
+ command,
+ verbose=verbose,
+ **kwargs
+ )
+
+ result = self._exec_command(
+ command=command,
+ interface=iface,
+ stdout=stdout,
+ stderr=stderr,
+ timeout=timeout,
+ verbose=verbose,
+ **kwargs
+ )
+ message = _log_templates.CMD_RESULT.format(result=result)
+ self.logger.log(
+ level=logging.INFO if verbose else logging.DEBUG,
+ msg=message
+ )
+ return result
def check_call(
self,
diff --git a/exec_helpers/_ssh_client_base.py b/exec_helpers/_ssh_client_base.py
index 074d4ae..4e83dd2 100644
--- a/exec_helpers/_ssh_client_base.py
+++ b/exec_helpers/_ssh_client_base.py
@@ -114,7 +114,7 @@ class _MemorizedSSH(type):
.. versionadded:: 1.2.0
"""
- return collections.OrderedDict()
+ return collections.OrderedDict() # pragma: no cover
def __call__(
cls,
@@ -225,7 +225,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
self,
ssh, # type: SSHClientBase
enforce=None # type: typing.Optional[bool]
- ):
+ ): # type: (...) -> None
"""Context manager for call commands with sudo.
:type ssh: SSHClient
@@ -259,7 +259,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
password=None, # type: typing.Optional[str]
private_keys=None, # type: typing.Optional[_type_RSAKeys]
auth=None, # type: typing.Optional[ssh_auth.SSHAuth]
- ):
+ ): # type: (...) -> None
"""SSHClient helper.
:param host: remote hostname
@@ -488,24 +488,28 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
def execute_async(
self,
command, # type: str
- get_pty=False, # type: bool
- stdin=None, # type: typing.Union[six.text_type, six.binary_type, None]
+ stdin=None, # type: typing.Union[six.text_type, six.binary_type, bytearray, None]
open_stdout=True, # type: bool
open_stderr=True, # type: bool
+ verbose=False, # type: bool
+ log_mask_re=None, # type: typing.Optional[str]
**kwargs
): # type: (...) -> _type_execute_async
"""Execute command in async mode and return channel with IO objects.
:param command: Command for execution
:type command: str
- :param get_pty: open PTY on remote machine
- :type get_pty: bool
:param stdin: pass STDIN text to the process
- :type stdin: typing.Union[six.text_type, six.binary_type, None]
+ :type stdin: typing.Union[six.text_type, six.binary_type, bytearray, None]
:param open_stdout: open STDOUT stream for read
:type open_stdout: bool
:param open_stderr: open STDERR stream for read
:type open_stderr: bool
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
:rtype: typing.Tuple[
paramiko.Channel,
paramiko.ChannelFile,
@@ -515,20 +519,21 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
.. versionchanged:: 1.2.0 open_stdout and open_stderr flags
.. versionchanged:: 1.2.0 stdin data
+ .. versionchanged:: 1.2.0 get_pty moved to `**kwargs`
"""
cmd_for_log = self._mask_command(
cmd=command,
- log_mask_re=kwargs.get('log_mask_re', None)
+ log_mask_re=log_mask_re
)
self.logger.log(
- level=logging.INFO if kwargs.get('verbose') else logging.DEBUG,
+ level=logging.INFO if verbose else logging.DEBUG,
msg=_log_templates.CMD_EXEC.format(cmd=cmd_for_log)
)
chan = self._ssh.get_transport().open_session()
- if get_pty:
+ if kwargs.get('get_pty', False):
# Open PTY
chan.get_pty(
term='vt100',
@@ -536,45 +541,45 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
width_pixels=0, height_pixels=0
)
- _stdin = chan.makefile('wb')
+ _stdin = chan.makefile('wb') # type: paramiko.ChannelFile
stdout = chan.makefile('rb') if open_stdout else None
stderr = chan.makefile_stderr('rb') if open_stderr else None
+
cmd = "{command}\n".format(command=command)
if self.sudo_mode:
encoded_cmd = base64.b64encode(cmd.encode('utf-8')).decode('utf-8')
- cmd = (
- "sudo -S bash -c 'eval \"$(base64 -d <(echo \"{0}\"))\"'"
- ).format(
- encoded_cmd
- )
+ cmd = "sudo -S bash -c 'eval \"$(base64 -d <(echo \"{0}\"))\"'".format(encoded_cmd)
chan.exec_command(cmd) # nosec # Sanitize on caller side
if stdout.channel.closed is False:
self.auth.enter_password(_stdin)
_stdin.flush()
else:
chan.exec_command(cmd) # nosec # Sanitize on caller side
+
if stdin is not None:
- if not isinstance(stdin, six.binary_type):
- stdin = stdin.encode(encoding='utf-8')
- _stdin.write('{}\n'.format(stdin))
- _stdin.flush()
+ if not _stdin.channel.closed:
+ _stdin.write('{stdin}\n'.format(stdin=stdin))
+ _stdin.flush()
+ else:
+ self.logger.warning('STDIN Send failed: closed channel')
return chan, _stdin, stderr, stdout
- def __exec_command(
+ def _exec_command(
self,
command, # type: str
- channel, # type: paramiko.channel.Channel
+ interface, # type: paramiko.channel.Channel
stdout, # type: paramiko.channel.ChannelFile
stderr, # type: paramiko.channel.ChannelFile
timeout, # type: int
verbose=False, # type: bool
log_mask_re=None, # type: typing.Optional[str]
+ **kwargs
): # type: (...) -> exec_result.ExecResult
"""Get exit status from channel with timeout.
:type command: str
- :type channel: paramiko.channel.Channel
+ :type interface: paramiko.channel.Channel
:type stdout: paramiko.channel.ChannelFile
:type stderr: paramiko.channel.ChannelFile
:type timeout: int
@@ -589,18 +594,15 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
"""
def poll_streams(
result, # type: exec_result.ExecResult
- channel, # type: paramiko.channel.Channel
- stdout, # type: paramiko.channel.ChannelFile
- stderr, # type: paramiko.channel.ChannelFile
):
"""Poll FIFO buffers if data available."""
- if stdout and channel.recv_ready():
+ if stdout and interface.recv_ready():
result.read_stdout(
src=stdout,
log=self.logger,
verbose=verbose
)
- if stderr and channel.recv_stderr_ready():
+ if stderr and interface.recv_stderr_ready():
result.read_stderr(
src=stderr,
log=self.logger,
@@ -609,11 +611,8 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
@threaded.threadpooled
def poll_pipes(
- stdout, # type: paramiko.channel.ChannelFile
- stderr, # type: paramiko.channel.ChannelFile
result, # type: exec_result.ExecResult
stop, # type: threading.Event
- channel # type: paramiko.channel.Channel
):
"""Polling task for FIFO buffers.
@@ -626,14 +625,9 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
while not stop.isSet():
time.sleep(0.1)
if stdout or stderr:
- poll_streams(
- result=result,
- channel=channel,
- stdout=stdout,
- stderr=stderr,
- )
+ poll_streams(result=result)
- if channel.status_event.is_set():
+ if interface.status_event.is_set():
result.read_stdout(
src=stdout,
log=self.logger,
@@ -643,7 +637,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
log=self.logger,
verbose=verbose
)
- result.exit_code = channel.exit_status
+ result.exit_code = interface.exit_status
stop.set()
@@ -660,11 +654,8 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
# pylint: disable=assignment-from-no-return
future = poll_pipes(
- stdout=stdout,
- stderr=stderr,
result=result,
stop=stop_event,
- channel=channel
) # type: concurrent.futures.Future
# pylint: enable=assignment-from-no-return
@@ -673,11 +664,11 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
# Process closed?
if stop_event.isSet():
stop_event.clear()
- channel.close()
+ interface.close()
return result
stop_event.set()
- channel.close()
+ interface.close()
future.cancel()
wait_err_msg = _log_templates.CMD_WAIT_ERROR.format(
@@ -687,49 +678,6 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
self.logger.debug(wait_err_msg)
raise exceptions.ExecHelperTimeoutError(wait_err_msg)
- def execute(
- self,
- command, # type: str
- verbose=False, # type: bool
- timeout=constants.DEFAULT_TIMEOUT, # type: typing.Optional[int]
- **kwargs
- ): # type: (...) -> exec_result.ExecResult
- """Execute command and wait for return code.
-
- :param command: Command for execution
- :type command: str
- :param verbose: Produce log.info records for command call and output
- :type verbose: bool
- :param timeout: Timeout for command execution.
- :type timeout: typing.Optional[int]
- :rtype: ExecResult
- :raises ExecHelperTimeoutError: Timeout exceeded
-
- .. versionchanged:: 1.2.0 default timeout 1 hour
- """
- (
- chan, # type: paramiko.channel.Channel
- _,
- stderr, # type: paramiko.channel.ChannelFile
- stdout, # type: paramiko.channel.ChannelFile
- ) = self.execute_async(
- command,
- verbose=verbose,
- **kwargs
- )
-
- result = self.__exec_command(
- command, chan, stdout, stderr, timeout,
- verbose=verbose,
- log_mask_re=kwargs.get('log_mask_re', None),
- )
- message = _log_templates.CMD_RESULT.format(result=result)
- self.logger.log(
- level=logging.INFO if verbose else logging.DEBUG,
- msg=message
- )
- return result
-
def execute_through_host(
self,
hostname, # type: str
@@ -767,7 +715,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
cmd=command,
log_mask_re=kwargs.get('log_mask_re', None)
)
- logger.log(
+ self.logger.log(
level=logging.INFO if verbose else logging.DEBUG,
msg=_log_templates.CMD_EXEC.format(cmd=cmd_for_log)
)
@@ -801,7 +749,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
channel.exec_command(command) # nosec # Sanitize on caller side
# noinspection PyDictCreation
- result = self.__exec_command(
+ result = self._exec_command(
command, channel, stdout, stderr, timeout, verbose=verbose,
log_mask_re=kwargs.get('log_mask_re', None),
)
diff --git a/exec_helpers/exec_result.py b/exec_helpers/exec_result.py
index 924497d..a14499f 100644
--- a/exec_helpers/exec_result.py
+++ b/exec_helpers/exec_result.py
@@ -51,7 +51,7 @@ class ExecResult(object):
def __init__(
self,
cmd, # type: str
- stdin=None, # type: typing.Union[six.text_type, six.binary_type, None]
+ stdin=None, # type: typing.Union[six.text_type, six.binary_type, bytearray, None]
stdout=None, # type: typing.Optional[typing.Iterable[bytes]]
stderr=None, # type: typing.Optional[typing.Iterable[bytes]]
exit_code=proc_enums.ExitCodes.EX_INVALID # type: _type_exit_codes
@@ -61,7 +61,7 @@ class ExecResult(object):
:param cmd: command
:type cmd: str
:param stdin: string STDIN
- :type stdin: typing.Union[six.text_type, six.binary_type, None]
+ :type stdin: typing.Union[six.text_type, six.binary_type, bytearray, None]
:param stdout: binary STDOUT
:type stdout: typing.Optional[typing.Iterable[bytes]]
:param stderr: binary STDERR
@@ -72,7 +72,9 @@ class ExecResult(object):
self.__lock = threading.RLock()
self.__cmd = cmd
- if stdin is not None and not isinstance(stdin, six.text_type):
+ if isinstance(stdin, six.binary_type):
+ stdin = self._get_str_from_bin(bytearray(stdin))
+ elif isinstance(stdin, bytearray):
stdin = self._get_str_from_bin(stdin)
self.__stdin = stdin
self.__stdout = tuple(stdout) if stdout is not None else ()
@@ -148,7 +150,7 @@ class ExecResult(object):
return self.__cmd
@property
- def stdin(self): # type: () -> str
+ def stdin(self): # type: () -> typing.Optional[str]
"""Stdin input as string.
:rtype: str
diff --git a/exec_helpers/subprocess_runner.py b/exec_helpers/subprocess_runner.py
index 49492f2..4de232d 100644
--- a/exec_helpers/subprocess_runner.py
+++ b/exec_helpers/subprocess_runner.py
@@ -21,6 +21,8 @@ from __future__ import division
from __future__ import unicode_literals
import collections
+import errno
+import io
import logging
import os
import select
@@ -34,20 +36,23 @@ import six
import threaded
from exec_helpers import _api
-from exec_helpers import constants
from exec_helpers import exec_result
from exec_helpers import exceptions
-from exec_helpers import proc_enums
from exec_helpers import _log_templates
logger = logging.getLogger(__name__)
# noinspection PyUnresolvedReferences
devnull = open(os.devnull) # subprocess.DEVNULL is py3.3+
+_type_execute_async = typing.Tuple[
+ subprocess.Popen,
+ None,
+ typing.Optional[io.TextIOWrapper],
+ typing.Optional[io.TextIOWrapper],
+]
+
_win = sys.platform == "win32"
_posix = 'posix' in sys.builtin_module_names
-_type_exit_codes = typing.Union[int, proc_enums.ExitCodes]
-_type_expected = typing.Optional[typing.Iterable[_type_exit_codes]]
if _posix: # pragma: no cover
import fcntl # pylint: disable=import-error
@@ -89,12 +94,12 @@ class SingletonMeta(type):
.. versionadded:: 1.2.0
"""
- return collections.OrderedDict()
+ return collections.OrderedDict() # pragma: no cover
def set_nonblocking_pipe(pipe): # type: (os.pipe) -> None
"""Set PIPE unblocked to allow polling of all pipes in parallel."""
- descriptor = pipe.fileno()
+ descriptor = pipe.fileno() # pragma: no cover
if _posix: # pragma: no cover
# Get flags
@@ -126,14 +131,10 @@ def set_nonblocking_pipe(pipe): # type: (os.pipe) -> None
class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
"""Subprocess helper with timeouts and lock-free FIFO."""
- __slots__ = (
- '__process',
- )
-
def __init__(
self,
log_mask_re=None, # type: typing.Optional[str]
- ):
+ ): # type: (...) -> None
"""Subprocess helper with timeouts and lock-free FIFO.
For excluding race-conditions we allow to run 1 command simultaneously
@@ -150,55 +151,41 @@ class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
)
self.__process = None
- def __exit__(self, exc_type, exc_val, exc_tb):
- """Context manager usage."""
- if self.__process:
- self.__process.kill()
- super(Subprocess, self).__exit__(exc_type, exc_val, exc_tb)
-
- def __del__(self):
- """Destructor. Kill running subprocess, if it running."""
- if self.__process:
- self.__process.kill()
-
- def __exec_command(
+ def _exec_command(
self,
command, # type: str
- cwd=None, # type: typing.Optional[str]
- env=None, # type: typing.Optional[typing.Dict[str, typing.Any]]
- timeout=constants.DEFAULT_TIMEOUT, # type: typing.Optional[int]
+        interface,  # type: subprocess.Popen
+        stdout,  # type: typing.Optional[io.TextIOWrapper]
+        stderr,  # type: typing.Optional[io.TextIOWrapper]
+ timeout, # type: int
verbose=False, # type: bool
log_mask_re=None, # type: typing.Optional[str]
- stdin=None, # type: typing.Union[six.text_type, six.binary_type, None]
- open_stdout=True, # type: bool
- open_stderr=True, # type: bool
- ):
- """Command executor helper.
+ **kwargs
+ ): # type: (...) -> exec_result.ExecResult
+ """Get exit status from channel with timeout.
+ :param command: Command for execution
:type command: str
- :type cwd: str
- :type env: dict
+ :param interface: Control interface
+ :type interface: subprocess.Popen
+ :param stdout: STDOUT pipe or file-like object
+ :type stdout: typing.Any
+ :param stderr: STDERR pipe or file-like object
+ :type stderr: typing.Any
+ :param timeout: Timeout for command execution
:type timeout: int
- :param verbose: use INFO log level instead of DEBUG
- :type verbose: str
+ :param verbose: produce verbose log record on command call
+ :type verbose: bool
:param log_mask_re: regex lookup rule to mask command for logger.
all MATCHED groups will be replaced by '<*masked*>'
:type log_mask_re: typing.Optional[str]
- :type stdin: typing.Union[six.text_type, six.binary_type, None]
- :param open_stdout: open STDOUT stream for read
- :type open_stdout: bool
- :param open_stderr: open STDERR stream for read
- :type open_stderr: bool
:rtype: ExecResult
+ :raises ExecHelperTimeoutError: Timeout exceeded
- .. versionchanged:: 1.2.0 open_stdout and open_stderr flags
- .. versionchanged:: 1.2.0 default timeout 1 hour
- .. versionchanged:: 1.2.0 log_mask_re regex rule for masking cmd
+ .. versionadded:: 1.2.0
"""
def poll_streams(
result, # type: exec_result.ExecResult
- stdout, # type: io.TextIOWrapper
- stderr, # type: io.TextIOWrapper
):
"""Poll streams to the result object."""
if _win: # pragma: no cover
@@ -227,141 +214,163 @@ class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
@threaded.threaded(started=True, daemon=True)
def poll_pipes(
result, # type: exec_result.ExecResult
- stop # type: threading.Event
+ stop, # type: threading.Event
):
"""Polling task for FIFO buffers.
- :type result: exec_result.ExecResult
- :type stop: threading.Event
+ :type result: ExecResult
+ :type stop: Event
"""
while not stop.isSet():
time.sleep(0.1)
- if open_stdout or open_stderr:
- poll_streams(
- result=result,
- stdout=self.__process.stdout,
- stderr=self.__process.stderr,
- )
+ if stdout or stderr:
+ poll_streams(result=result)
- self.__process.poll()
+ interface.poll()
- if self.__process.returncode is not None:
+ if interface.returncode is not None:
result.read_stdout(
- src=self.__process.stdout,
+ src=stdout,
log=logger,
verbose=verbose
)
result.read_stderr(
- src=self.__process.stderr,
+ src=stderr,
log=logger,
verbose=verbose
)
- result.exit_code = self.__process.returncode
+ result.exit_code = interface.returncode
stop.set()
- # 1 Command per run
- with self.lock:
- cmd_for_log = self._mask_command(
- cmd=command,
- log_mask_re=log_mask_re
- )
-
- # Store command with hidden data
- result = exec_result.ExecResult(cmd=cmd_for_log)
- stop_event = threading.Event()
-
- logger.log(
- level=logging.INFO if verbose else logging.DEBUG,
- msg=_log_templates.CMD_EXEC.format(cmd=cmd_for_log)
- )
-
- # Run
- self.__process = subprocess.Popen(
- args=[command],
- stdout=subprocess.PIPE if open_stdout else devnull,
- stderr=subprocess.PIPE if open_stderr else devnull,
- stdin=subprocess.PIPE,
- shell=True,
- cwd=cwd,
- env=env,
- universal_newlines=False,
- )
- if stdin is not None:
- if not isinstance(stdin, six.binary_type):
- stdin = stdin.encode(encoding='utf-8')
- self.__process.stdin.write(stdin)
- self.__process.stdin.close()
-
- # Poll output
-
- if open_stdout:
- set_nonblocking_pipe(self.__process.stdout)
- if open_stderr:
- set_nonblocking_pipe(self.__process.stderr)
- # pylint: disable=assignment-from-no-return
- poll_thread = poll_pipes(
- result,
- stop_event
- ) # type: threading.Thread
- # pylint: enable=assignment-from-no-return
- # wait for process close
- stop_event.wait(timeout)
-
- # Process closed?
- if stop_event.isSet():
- stop_event.clear()
- self.__process = None
- return result
- # Kill not ended process and wait for close
- try:
- self.__process.kill() # kill -9
- stop_event.wait(5)
- # Force stop cycle if no exit code after kill
- stop_event.set()
- poll_thread.join(5)
- except OSError:
- # Nothing to kill
- logger.warning(
- u"{!s} has been completed just after timeout: "
- "please validate timeout.".format(command))
- self.__process = None
-
- wait_err_msg = _log_templates.CMD_WAIT_ERROR.format(
- result=result,
- timeout=timeout
- )
- logger.debug(wait_err_msg)
- raise exceptions.ExecHelperTimeoutError(wait_err_msg)
-
- def execute(
+ # Store command with hidden data
+ cmd_for_log = self._mask_command(
+ cmd=command,
+ log_mask_re=log_mask_re
+ )
+
+ result = exec_result.ExecResult(cmd=cmd_for_log)
+ stop_event = threading.Event()
+
+ # pylint: disable=assignment-from-no-return
+ poll_thread = poll_pipes(
+ result,
+ stop_event
+ ) # type: threading.Thread
+ # pylint: enable=assignment-from-no-return
+ # wait for process close
+ stop_event.wait(timeout)
+
+ # Process closed?
+ if stop_event.isSet():
+ stop_event.clear()
+ return result
+ # Kill not ended process and wait for close
+ try:
+ interface.kill() # kill -9
+ stop_event.wait(5)
+ # Force stop cycle if no exit code after kill
+ stop_event.set()
+ poll_thread.join(5)
+ except OSError:
+ # Nothing to kill
+ logger.warning(
+ u"{!s} has been completed just after timeout: "
+ "please validate timeout.".format(command))
+ return result
+
+ wait_err_msg = _log_templates.CMD_WAIT_ERROR.format(
+ result=result,
+ timeout=timeout
+ )
+ logger.debug(wait_err_msg)
+ raise exceptions.ExecHelperTimeoutError(wait_err_msg)
+
+ def execute_async(
self,
command, # type: str
+ stdin=None, # type: typing.Union[six.text_type, six.binary_type, bytearray, None]
+ open_stdout=True, # type: bool
+ open_stderr=True, # type: bool
verbose=False, # type: bool
- timeout=constants.DEFAULT_TIMEOUT, # type: typing.Optional[int]
+ log_mask_re=None, # type: typing.Optional[str]
**kwargs
- ): # type: (...) -> exec_result.ExecResult
- """Execute command and wait for return code.
-
- Timeout limitation: read tick is 100 ms.
+ ): # type: (...) -> _type_execute_async
+ """Execute command in async mode and return Popen with IO objects.
:param command: Command for execution
:type command: str
- :param verbose: Produce log.info records for command call and output
+ :param stdin: pass STDIN text to the process
+ :type stdin: typing.Union[six.text_type, six.binary_type, bytearray, None]
+ :param open_stdout: open STDOUT stream for read
+ :type open_stdout: bool
+ :param open_stderr: open STDERR stream for read
+ :type open_stderr: bool
+ :param verbose: produce verbose log record on command call
:type verbose: bool
- :param timeout: Timeout for command execution.
- :type timeout: typing.Optional[int]
- :rtype: ExecResult
- :raises ExecHelperTimeoutError: Timeout exceeded
+ :param log_mask_re: regex lookup rule to mask command for logger.
+ all MATCHED groups will be replaced by '<*masked*>'
+ :type log_mask_re: typing.Optional[str]
+ :rtype: typing.Tuple[
+ subprocess.Popen,
+ None,
+ typing.Optional[io.TextIOWrapper],
+ typing.Optional[io.TextIOWrapper],
+ ]
- .. versionchanged:: 1.2.0 default timeout 1 hour
+ .. versionadded:: 1.2.0
"""
- result = self.__exec_command(command=command, timeout=timeout,
- verbose=verbose, **kwargs)
- message = _log_templates.CMD_RESULT.format(result=result)
- logger.log(
+ cmd_for_log = self._mask_command(
+ cmd=command,
+ log_mask_re=log_mask_re
+ )
+
+ self.logger.log(
level=logging.INFO if verbose else logging.DEBUG,
- msg=message
+ msg=_log_templates.CMD_EXEC.format(cmd=cmd_for_log)
+ )
+
+ process = subprocess.Popen(
+ args=[command],
+ stdout=subprocess.PIPE if open_stdout else devnull,
+ stderr=subprocess.PIPE if open_stderr else devnull,
+ stdin=subprocess.PIPE,
+ shell=True,
+ cwd=kwargs.get('cwd', None),
+ env=kwargs.get('env', None),
+ universal_newlines=False,
)
- return result
+ if stdin is not None:
+ if isinstance(stdin, six.text_type):
+ stdin = stdin.encode(encoding='utf-8')
+ elif isinstance(stdin, bytearray):
+ stdin = bytes(stdin)
+ try:
+ process.stdin.write(stdin)
+ except OSError as exc:
+ if exc.errno == errno.EINVAL:
+ # bpo-19612, bpo-30418: On Windows, stdin.write() fails
+ # with EINVAL if the child process exited or if the child
+ # process is still running but closed the pipe.
+ self.logger.warning('STDIN Send failed: closed PIPE')
+ elif exc.errno in (errno.EPIPE, errno.ESHUTDOWN): # pragma: no cover
+ self.logger.warning('STDIN Send failed: broken PIPE')
+ else:
+ process.kill()
+ raise
+ try:
+ process.stdin.close()
+ except OSError as exc:
+ if exc.errno in (errno.EINVAL, errno.EPIPE, errno.ESHUTDOWN):
+ pass
+ else:
+ process.kill()
+ raise
+
+ if open_stdout:
+ set_nonblocking_pipe(process.stdout)
+ if open_stderr:
+ set_nonblocking_pipe(process.stderr)
+
+ return process, None, process.stderr, process.stdout
diff --git a/tox.ini b/tox.ini
index c1ac392..bee2e03 100644
--- a/tox.ini
+++ b/tox.ini
@@ -26,7 +26,7 @@ deps =
commands =
py.test -vv --junitxml=unit_result.xml --html=report.html --cov-config .coveragerc --cov-report html --cov=exec_helpers {posargs:test}
- coverage report --fail-under 95
+ coverage report --fail-under 97
[testenv:py27-nocov]
usedevelop = False
@@ -132,6 +132,7 @@ ignore =
show-pep8 = True
show-source = True
count = True
+max-line-length = 120
[testenv:docs]
deps =
Unify execute_async + __exec_command with Subprocess.__exec_command

Implement a lock-free `execute_async` with pipes for `Subprocess` and reuse `__exec_command` the same way `SSHClient` does.
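A minimal sketch of the unified flow (names are illustrative, not the exact exec-helpers API): each backend supplies an `execute_async` that returns the same 4-tuple shape and a `_exec_command` that collects the result, while a shared base class composes them in `execute`:

```python
import subprocess


class ExecHelperSketch(object):
    """Shared base: execute() composes the two backend hooks."""

    def execute(self, command, timeout=None, **kwargs):
        interface, _stdin, stderr, stdout = self.execute_async(command, **kwargs)
        return self._exec_command(
            command, interface, stdout, stderr, timeout, **kwargs
        )


class SubprocessSketch(ExecHelperSketch):
    """Subprocess backend returning the same 4-tuple shape as the SSH backend."""

    def execute_async(self, command, **kwargs):
        process = subprocess.Popen(
            args=[command],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            shell=True,
        )
        # (interface, stdin, stderr, stdout); stdin is None: already handled here.
        return process, None, process.stderr, process.stdout

    def _exec_command(self, command, interface, stdout, stderr, timeout, **kwargs):
        # Sketch only: the real code polls FIFO buffers in a thread with a timeout.
        interface.wait()
        return {
            'cmd': command,
            'stdout': stdout.read(),
            'stderr': stderr.read(),
            'exit_code': interface.returncode,
        }


result = SubprocessSketch().execute('echo ok')
```

With this split, `execute` never needs backend-specific branches: both backends only differ in how they spawn the process and how they read its streams.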
This will reduce the API's difficulty level and the amount of copy-paste.

Repository: python-useful-helpers/exec-helpers

diff --git a/test/test_exec_result.py b/test/test_exec_result.py
index a5fbef8..ea2c9b1 100644
--- a/test/test_exec_result.py
+++ b/test/test_exec_result.py
@@ -238,3 +238,19 @@ class TestExecResult(unittest.TestCase):
with self.assertRaises(RuntimeError):
result.read_stderr([b'err'])
+
+ def test_stdin_none(self):
+ result = exec_helpers.ExecResult(cmd, exit_code=0)
+ self.assertIsNone(result.stdin)
+
+ def test_stdin_utf(self):
+ result = exec_helpers.ExecResult(cmd, stdin=u'STDIN', exit_code=0)
+ self.assertEqual(result.stdin, u'STDIN')
+
+ def test_stdin_bytes(self):
+ result = exec_helpers.ExecResult(cmd, stdin=b'STDIN', exit_code=0)
+ self.assertEqual(result.stdin, u'STDIN')
+
+ def test_stdin_bytearray(self):
+ result = exec_helpers.ExecResult(cmd, stdin=bytearray(b'STDIN'), exit_code=0)
+ self.assertEqual(result.stdin, u'STDIN')
diff --git a/test/test_ssh_client.py b/test/test_ssh_client.py
index 1b2372a..95c338e 100644
--- a/test/test_ssh_client.py
+++ b/test/test_ssh_client.py
@@ -65,8 +65,7 @@ print_stdin = 'read line; echo "$line"'
@mock.patch('exec_helpers._ssh_client_base.logger', autospec=True)
[email protected](
- 'paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
[email protected]('paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
@mock.patch('paramiko.SSHClient', autospec=True)
class TestExecute(unittest.TestCase):
def tearDown(self):
@@ -452,6 +451,186 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
+ def test_check_stdin_str(self, client, policy, logger):
+ stdin_val = u'this is a line'
+
+ stdin = mock.Mock(name='stdin')
+ stdin_channel = mock.Mock()
+ stdin_channel.configure_mock(closed=False)
+ stdin.attach_mock(stdin_channel, 'channel')
+
+ stdout = mock.Mock(name='stdout')
+ stdout_channel = mock.Mock()
+ stdout_channel.configure_mock(closed=False)
+ stdout.attach_mock(stdout_channel, 'channel')
+
+ chan = mock.Mock()
+ chan.attach_mock(mock.Mock(side_effect=[stdin, stdout]), 'makefile')
+
+ open_session = mock.Mock(return_value=chan)
+
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ # noinspection PyTypeChecker
+ result = ssh.execute_async(command=print_stdin, stdin=stdin_val)
+
+ get_transport.assert_called_once()
+ open_session.assert_called_once()
+ stdin.assert_has_calls([
+ mock.call.write('{val}\n'.format(val=stdin_val)),
+ mock.call.flush()
+ ])
+
+ self.assertIn(chan, result)
+ chan.assert_has_calls((
+ mock.call.makefile('wb'),
+ mock.call.makefile('rb'),
+ mock.call.makefile_stderr('rb'),
+ mock.call.exec_command('{val}\n'.format(val=print_stdin))
+ ))
+
+ def test_check_stdin_bytes(self, client, policy, logger):
+ stdin_val = b'this is a line'
+
+ stdin = mock.Mock(name='stdin')
+ stdin_channel = mock.Mock()
+ stdin_channel.configure_mock(closed=False)
+ stdin.attach_mock(stdin_channel, 'channel')
+
+ stdout = mock.Mock(name='stdout')
+ stdout_channel = mock.Mock()
+ stdout_channel.configure_mock(closed=False)
+ stdout.attach_mock(stdout_channel, 'channel')
+
+ chan = mock.Mock()
+ chan.attach_mock(mock.Mock(side_effect=[stdin, stdout]), 'makefile')
+
+ open_session = mock.Mock(return_value=chan)
+
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ # noinspection PyTypeChecker
+ result = ssh.execute_async(command=print_stdin, stdin=stdin_val)
+
+ get_transport.assert_called_once()
+ open_session.assert_called_once()
+ stdin.assert_has_calls([
+ mock.call.write('{val}\n'.format(val=stdin_val)),
+ mock.call.flush()
+ ])
+
+ self.assertIn(chan, result)
+ chan.assert_has_calls((
+ mock.call.makefile('wb'),
+ mock.call.makefile('rb'),
+ mock.call.makefile_stderr('rb'),
+ mock.call.exec_command('{val}\n'.format(val=print_stdin))
+ ))
+
+ def test_check_stdin_bytearray(self, client, policy, logger):
+ stdin_val = bytearray(b'this is a line')
+
+ stdin = mock.Mock(name='stdin')
+ stdin_channel = mock.Mock()
+ stdin_channel.configure_mock(closed=False)
+ stdin.attach_mock(stdin_channel, 'channel')
+
+ stdout = mock.Mock(name='stdout')
+ stdout_channel = mock.Mock()
+ stdout_channel.configure_mock(closed=False)
+ stdout.attach_mock(stdout_channel, 'channel')
+
+ chan = mock.Mock()
+ chan.attach_mock(mock.Mock(side_effect=[stdin, stdout]), 'makefile')
+
+ open_session = mock.Mock(return_value=chan)
+
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ # noinspection PyTypeChecker
+ result = ssh.execute_async(command=print_stdin, stdin=stdin_val)
+
+ get_transport.assert_called_once()
+ open_session.assert_called_once()
+ stdin.assert_has_calls([
+ mock.call.write('{val}\n'.format(val=stdin_val)),
+ mock.call.flush()
+ ])
+
+ self.assertIn(chan, result)
+ chan.assert_has_calls((
+ mock.call.makefile('wb'),
+ mock.call.makefile('rb'),
+ mock.call.makefile_stderr('rb'),
+ mock.call.exec_command('{val}\n'.format(val=print_stdin))
+ ))
+
+ def test_check_stdin_closed(self, client, policy, logger):
+ stdin_val = 'this is a line'
+
+ stdin = mock.Mock(name='stdin')
+ stdin_channel = mock.Mock()
+ stdin_channel.configure_mock(closed=True)
+ stdin.attach_mock(stdin_channel, 'channel')
+
+ stdout = mock.Mock(name='stdout')
+ stdout_channel = mock.Mock()
+ stdout_channel.configure_mock(closed=False)
+ stdout.attach_mock(stdout_channel, 'channel')
+
+ chan = mock.Mock()
+ chan.attach_mock(mock.Mock(side_effect=[stdin, stdout]), 'makefile')
+
+ open_session = mock.Mock(return_value=chan)
+
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ # noinspection PyTypeChecker
+ result = ssh.execute_async(command=print_stdin, stdin=stdin_val)
+
+ get_transport.assert_called_once()
+ open_session.assert_called_once()
+ stdin.assert_not_called()
+
+ log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log.warning.assert_called_once_with('STDIN Send failed: closed channel')
+
+ self.assertIn(chan, result)
+ chan.assert_has_calls((
+ mock.call.makefile('wb'),
+ mock.call.makefile('rb'),
+ mock.call.makefile_stderr('rb'),
+ mock.call.exec_command('{val}\n'.format(val=print_stdin))
+ ))
+
@staticmethod
def get_patched_execute_async_retval(
ec=0,
@@ -1047,103 +1226,9 @@ class TestExecute(unittest.TestCase):
command, verbose, timeout=None,
error_info=None, raise_on_err=raise_on_err)
- @mock.patch('exec_helpers.ssh_client.SSHClient.check_call')
- def test_check_stdin_str(self, check_call, client, policy, logger):
- stdin = u'this is a line'
-
- return_value = exec_result.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[],
- exit_code=0
- )
- check_call.return_value = return_value
-
- verbose = False
- raise_on_err = True
-
- # noinspection PyTypeChecker
- result = self.get_ssh().check_call(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- check_call.assert_called_once_with(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- self.assertEqual(result, return_value)
-
- @mock.patch('exec_helpers.ssh_client.SSHClient.check_call')
- def test_check_stdin_bytes(self, check_call, client, policy, logger):
- stdin = b'this is a line'
-
- return_value = exec_result.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[],
- exit_code=0
- )
- check_call.return_value = return_value
-
- verbose = False
- raise_on_err = True
-
- # noinspection PyTypeChecker
- result = self.get_ssh().check_call(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- check_call.assert_called_once_with(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- self.assertEqual(result, return_value)
-
- @mock.patch('exec_helpers.ssh_client.SSHClient.check_call')
- def test_check_stdin_bytearray(self, check_call, client, policy, logger):
- stdin = bytearray(b'this is a line')
-
- return_value = exec_result.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[],
- exit_code=0
- )
- check_call.return_value = return_value
-
- verbose = False
- raise_on_err = True
-
- # noinspection PyTypeChecker
- result = self.get_ssh().check_call(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- check_call.assert_called_once_with(
- command=print_stdin,
- stdin=stdin,
- verbose=verbose,
- timeout=None,
- raise_on_err=raise_on_err)
- self.assertEqual(result, return_value)
-
@mock.patch('exec_helpers._ssh_client_base.logger', autospec=True)
[email protected](
- 'paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
[email protected]('paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
@mock.patch('paramiko.SSHClient', autospec=True)
@mock.patch('paramiko.Transport', autospec=True)
class TestExecuteThrowHost(unittest.TestCase):
diff --git a/test/test_subprocess_runner.py b/test/test_subprocess_runner.py
index 28f0b6d..997d2fc 100644
--- a/test/test_subprocess_runner.py
+++ b/test/test_subprocess_runner.py
@@ -20,11 +20,13 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
+import errno
import logging
import subprocess
import unittest
import mock
+import six
import exec_helpers
from exec_helpers import subprocess_runner
@@ -53,11 +55,12 @@ class FakeFileStream(object):
@mock.patch('exec_helpers.subprocess_runner.logger', autospec=True)
@mock.patch('select.select', autospec=True)
[email protected](
- 'exec_helpers.subprocess_runner.set_nonblocking_pipe', autospec=True
-)
[email protected]('exec_helpers.subprocess_runner.set_nonblocking_pipe', autospec=True)
@mock.patch('subprocess.Popen', autospec=True, name='subprocess.Popen')
class TestSubprocessRunner(unittest.TestCase):
+ def setUp(self):
+ subprocess_runner.SingletonMeta._instances.clear()
+
@staticmethod
def prepare_close(
popen,
@@ -65,11 +68,12 @@ class TestSubprocessRunner(unittest.TestCase):
stderr_val=None,
ec=0,
open_stdout=True,
+ stdout_override=None,
open_stderr=True,
cmd_in_result=None,
):
if open_stdout:
- stdout_lines = stdout_list
+ stdout_lines = stdout_list if stdout_override is None else stdout_override
stdout = FakeFileStream(*stdout_lines)
else:
stdout = stdout_lines = None
@@ -107,7 +111,13 @@ class TestSubprocessRunner(unittest.TestCase):
return ("Command exit code '{code!s}':\n{cmd!s}\n"
.format(cmd=result.cmd.rstrip(), code=result.exit_code))
- def test_call(self, popen, _, select, logger):
+ def test_001_call(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
popen_obj, exp_result = self.prepare_close(popen)
select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
@@ -154,7 +164,13 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
- def test_call_verbose(self, popen, _, select, logger):
+ def test_002_call_verbose(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
popen_obj, _ = self.prepare_close(popen)
select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
@@ -182,7 +198,13 @@ class TestSubprocessRunner(unittest.TestCase):
msg=self.gen_cmd_result_log_message(result)),
])
- def test_context_manager(self, popen, _, select, logger):
+ def test_003_context_manager(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
popen_obj, exp_result = self.prepare_close(popen)
select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
@@ -204,7 +226,7 @@ class TestSubprocessRunner(unittest.TestCase):
subprocess_runner.SingletonMeta._instances.clear()
@mock.patch('time.sleep', autospec=True)
- def test_execute_timeout_fail(
+ def test_004_execute_timeout_fail(
self,
sleep,
popen, _, select, logger
@@ -234,7 +256,7 @@ class TestSubprocessRunner(unittest.TestCase):
),
))
- def test_execute_no_stdout(self, popen, _, select, logger):
+ def test_005_execute_no_stdout(self, popen, _, select, logger):
popen_obj, exp_result = self.prepare_close(popen, open_stdout=False)
select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
@@ -272,7 +294,7 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
- def test_execute_no_stderr(self, popen, _, select, logger):
+ def test_006_execute_no_stderr(self, popen, _, select, logger):
popen_obj, exp_result = self.prepare_close(popen, open_stderr=False)
select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
@@ -311,7 +333,7 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
- def test_execute_no_stdout_stderr(self, popen, _, select, logger):
+ def test_007_execute_no_stdout_stderr(self, popen, _, select, logger):
popen_obj, exp_result = self.prepare_close(
popen,
open_stdout=False,
@@ -348,7 +370,7 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
- def test_execute_mask_global(self, popen, _, select, logger):
+ def test_008_execute_mask_global(self, popen, _, select, logger):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
@@ -406,7 +428,7 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
- def test_execute_mask_local(self, popen, _, select, logger):
+ def test_009_execute_mask_local(self, popen, _, select, logger):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
@@ -462,11 +484,402 @@ class TestSubprocessRunner(unittest.TestCase):
mock.call.poll(), popen_obj.mock_calls
)
+ def test_004_check_stdin_str(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = u'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin.encode('utf-8')])
+
+ stdin_mock = mock.Mock()
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin.encode('utf-8')),
+ mock.call.close()
+ ])
+
+ def test_005_check_stdin_bytes(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ stdin_mock = mock.Mock()
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+
+ def test_006_check_stdin_bytearray(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = bytearray(b'this is a line')
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ stdin_mock = mock.Mock()
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+
+ @unittest.skipIf(six.PY2, 'Not implemented exception')
+ def test_007_check_stdin_fail_broken_pipe(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_err = BrokenPipeError()
+ pipe_err.errno = errno.EPIPE
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_err), 'write')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+ logger.warning.assert_called_once_with('STDIN Send failed: broken PIPE')
+
+ def test_008_check_stdin_fail_closed_win(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_error = OSError()
+ pipe_error.errno = errno.EINVAL
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_error), 'write')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+ logger.warning.assert_called_once_with('STDIN Send failed: closed PIPE')
+
+ def test_009_check_stdin_fail_write(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_error = OSError()
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_error), 'write')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ with self.assertRaises(OSError):
+ # noinspection PyTypeChecker
+ runner.execute(print_stdin, stdin=stdin)
+ popen_obj.kill.assert_called_once()
+
+ @unittest.skipIf(six.PY2, 'Not implemented exception')
+ def test_010_check_stdin_fail_close_pipe(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_err = BrokenPipeError()
+ pipe_err.errno = errno.EPIPE
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_err), 'close')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+ logger.warning.assert_not_called()
+
+ def test_011_check_stdin_fail_close_pipe_win(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_error = OSError()
+ pipe_error.errno = errno.EINVAL
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_error), 'close')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+ result = runner.execute(print_stdin, stdin=stdin)
+ self.assertEqual(
+ result, exp_result
+
+ )
+ popen.assert_has_calls((
+ mock.call(
+ args=[print_stdin],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
+ stdin_mock.assert_has_calls([
+ mock.call.write(stdin),
+ mock.call.close()
+ ])
+ logger.warning.assert_not_called()
+
+ def test_012_check_stdin_fail_close(
+ self,
+ popen, # type: mock.MagicMock
+ _, # type: mock.MagicMock
+ select, # type: mock.MagicMock
+ logger # type: mock.MagicMock
+ ): # type: (...) -> None
+ stdin = b'this is a line'
+
+ popen_obj, exp_result = self.prepare_close(popen, cmd=print_stdin, stdout_override=[stdin])
+
+ pipe_error = OSError()
+
+ stdin_mock = mock.Mock()
+ stdin_mock.attach_mock(mock.Mock(side_effect=pipe_error), 'close')
+ popen_obj.attach_mock(stdin_mock, 'stdin')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ with self.assertRaises(OSError):
+ # noinspection PyTypeChecker
+ runner.execute(print_stdin, stdin=stdin)
+ popen_obj.kill.assert_called_once()
+
+ @mock.patch('time.sleep', autospec=True)
+ def test_013_execute_timeout_done(
+ self,
+ sleep,
+ popen, _, select, logger
+ ):
+ popen_obj, exp_result = self.prepare_close(popen, ec=exec_helpers.ExitCodes.EX_INVALID)
+ popen_obj.configure_mock(returncode=None)
+ popen_obj.attach_mock(mock.Mock(side_effect=OSError), 'kill')
+ select.return_value = [popen_obj.stdout, popen_obj.stderr], [], []
+
+ runner = exec_helpers.Subprocess()
+
+ # noinspection PyTypeChecker
+
+ res = runner.execute(command, timeout=1)
+
+ self.assertEqual(res, exp_result)
+
+ popen.assert_has_calls((
+ mock.call(
+ args=[command],
+ cwd=None,
+ env=None,
+ shell=True,
+ stderr=subprocess.PIPE,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ ),
+ ))
+
@mock.patch('exec_helpers.subprocess_runner.logger', autospec=True)
class TestSubprocessRunnerHelpers(unittest.TestCase):
@mock.patch('exec_helpers.subprocess_runner.Subprocess.execute')
- def test_check_call(self, execute, logger):
+ def test_001_check_call(self, execute, logger):
exit_code = 0
return_value = exec_helpers.ExecResult(
cmd=command,
@@ -501,7 +914,7 @@ class TestSubprocessRunnerHelpers(unittest.TestCase):
execute.assert_called_once_with(command, verbose, None)
@mock.patch('exec_helpers.subprocess_runner.Subprocess.execute')
- def test_check_call_expected(self, execute, logger):
+ def test_002_check_call_expected(self, execute, logger):
exit_code = 0
return_value = exec_helpers.ExecResult(
cmd=command,
@@ -539,7 +952,7 @@ class TestSubprocessRunnerHelpers(unittest.TestCase):
execute.assert_called_once_with(command, verbose, None)
@mock.patch('exec_helpers.subprocess_runner.Subprocess.check_call')
- def test_check_stderr(self, check_call, logger):
+ def test_003_check_stderr(self, check_call, logger):
return_value = exec_helpers.ExecResult(
cmd=command,
stdout=stdout_list,
@@ -578,96 +991,3 @@ class TestSubprocessRunnerHelpers(unittest.TestCase):
check_call.assert_called_once_with(
command, verbose, timeout=None,
error_info=None, raise_on_err=raise_on_err)
-
- @mock.patch('exec_helpers.subprocess_runner.Subprocess.check_call')
- def test_check_stdin_str(self, check_call, logger):
- stdin = u'this is a line'
-
- expected_result = exec_helpers.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[b''],
- exit_code=0,
- )
- check_call.return_value = expected_result
-
- verbose = False
-
- runner = exec_helpers.Subprocess()
-
- # noinspection PyTypeChecker
- result = runner.check_call(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- check_call.assert_called_once_with(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- self.assertEqual(result, expected_result)
- assert result == expected_result
-
- @mock.patch('exec_helpers.subprocess_runner.Subprocess.check_call')
- def test_check_stdin_bytes(self, check_call, logger):
- stdin = b'this is a line'
-
- expected_result = exec_helpers.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[b''],
- exit_code=0,
- )
- check_call.return_value = expected_result
-
- verbose = False
-
- runner = exec_helpers.Subprocess()
-
- # noinspection PyTypeChecker
- result = runner.check_call(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- check_call.assert_called_once_with(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- self.assertEqual(result, expected_result)
- assert result == expected_result
-
- @mock.patch('exec_helpers.subprocess_runner.Subprocess.check_call')
- def test_check_stdin_bytearray(self, check_call, logger):
- stdin = bytearray(b'this is a line')
-
- expected_result = exec_helpers.ExecResult(
- cmd=print_stdin,
- stdin=stdin,
- stdout=[stdin],
- stderr=[b''],
- exit_code=0,
- )
- check_call.return_value = expected_result
-
- verbose = False
-
- runner = exec_helpers.Subprocess()
-
- # noinspection PyTypeChecker
- result = runner.check_call(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- check_call.assert_called_once_with(
- command=print_stdin,
- verbose=verbose,
- timeout=None,
- stdin=stdin)
- self.assertEqual(result, expected_result)
- assert result == expected_result
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 3,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 9
} | 1.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-html",
"pytest-sugar",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | advanced-descriptors==4.0.3
bcrypt==4.3.0
cffi==1.17.1
coverage==7.8.0
cryptography==44.0.2
exceptiongroup==1.2.2
-e git+https://github.com/python-useful-helpers/exec-helpers.git@5f107c01eb0223d63a8ba5ad28d2bedecea4a7cd#egg=exec_helpers
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
paramiko==3.5.1
pluggy==1.5.0
pycparser==2.22
PyNaCl==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-html==4.1.1
pytest-metadata==3.1.1
pytest-sugar==1.0.0
PyYAML==6.0.2
six==1.17.0
tenacity==9.0.0
termcolor==2.5.0
threaded==4.2.0
tomli==2.2.1
| name: exec-helpers
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- advanced-descriptors==4.0.3
- bcrypt==4.3.0
- cffi==1.17.1
- coverage==7.8.0
- cryptography==44.0.2
- exceptiongroup==1.2.2
- exec-helpers==1.1.2
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- paramiko==3.5.1
- pluggy==1.5.0
- pycparser==2.22
- pynacl==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-html==4.1.1
- pytest-metadata==3.1.1
- pytest-sugar==1.0.0
- pyyaml==6.0.2
- six==1.17.0
- tenacity==9.0.0
- termcolor==2.5.0
- threaded==4.2.0
- tomli==2.2.1
prefix: /opt/conda/envs/exec-helpers
| [
"test/test_ssh_client.py::TestExecute::test_check_stdin_bytearray",
"test/test_ssh_client.py::TestExecute::test_check_stdin_closed",
"test/test_ssh_client.py::TestExecute::test_check_stdin_str",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_006_check_stdin_bytearray",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_007_check_stdin_fail_broken_pipe",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_008_check_stdin_fail_closed_win",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_009_check_stdin_fail_write",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_010_check_stdin_fail_close_pipe",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_011_check_stdin_fail_close_pipe_win",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_012_check_stdin_fail_close",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_013_execute_timeout_done"
]
| [
"test/test_exec_result.py::TestExecResult::test_json"
]
| [
"test/test_exec_result.py::TestExecResult::test_create_minimal",
"test/test_exec_result.py::TestExecResult::test_finalize",
"test/test_exec_result.py::TestExecResult::test_not_equal",
"test/test_exec_result.py::TestExecResult::test_not_implemented",
"test/test_exec_result.py::TestExecResult::test_setters",
"test/test_exec_result.py::TestExecResult::test_stdin_bytearray",
"test/test_exec_result.py::TestExecResult::test_stdin_bytes",
"test/test_exec_result.py::TestExecResult::test_stdin_none",
"test/test_exec_result.py::TestExecResult::test_stdin_utf",
"test/test_exec_result.py::TestExecResult::test_wrong_result",
"test/test_ssh_client.py::TestExecute::test_check_call",
"test/test_ssh_client.py::TestExecute::test_check_call_expected",
"test/test_ssh_client.py::TestExecute::test_check_stderr",
"test/test_ssh_client.py::TestExecute::test_check_stdin_bytes",
"test/test_ssh_client.py::TestExecute::test_execute",
"test/test_ssh_client.py::TestExecute::test_execute_async",
"test/test_ssh_client.py::TestExecute::test_execute_async_mask_command",
"test/test_ssh_client.py::TestExecute::test_execute_async_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_execute_async_pty",
"test/test_ssh_client.py::TestExecute::test_execute_async_sudo",
"test/test_ssh_client.py::TestExecute::test_execute_async_sudo_password",
"test/test_ssh_client.py::TestExecute::test_execute_async_verbose",
"test/test_ssh_client.py::TestExecute::test_execute_async_with_no_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_execute_async_with_none_enforce",
"test/test_ssh_client.py::TestExecute::test_execute_async_with_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_execute_mask_command",
"test/test_ssh_client.py::TestExecute::test_execute_no_stderr",
"test/test_ssh_client.py::TestExecute::test_execute_no_stdout",
"test/test_ssh_client.py::TestExecute::test_execute_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_execute_timeout",
"test/test_ssh_client.py::TestExecute::test_execute_timeout_fail",
"test/test_ssh_client.py::TestExecute::test_execute_together",
"test/test_ssh_client.py::TestExecute::test_execute_together_exceptions",
"test/test_ssh_client.py::TestExecute::test_execute_verbose",
"test/test_ssh_client.py::TestExecuteThrowHost::test_execute_through_host_auth",
"test/test_ssh_client.py::TestExecuteThrowHost::test_execute_through_host_no_creds",
"test/test_ssh_client.py::TestSftp::test_download",
"test/test_ssh_client.py::TestSftp::test_exists",
"test/test_ssh_client.py::TestSftp::test_isdir",
"test/test_ssh_client.py::TestSftp::test_isfile",
"test/test_ssh_client.py::TestSftp::test_mkdir",
"test/test_ssh_client.py::TestSftp::test_open",
"test/test_ssh_client.py::TestSftp::test_rm_rf",
"test/test_ssh_client.py::TestSftp::test_stat",
"test/test_ssh_client.py::TestSftp::test_upload_dir",
"test/test_ssh_client.py::TestSftp::test_upload_file",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_001_call",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_002_call_verbose",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_003_context_manager",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_004_check_stdin_str",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_004_execute_timeout_fail",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_005_check_stdin_bytes",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_005_execute_no_stdout",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_006_execute_no_stderr",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_007_execute_no_stdout_stderr",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_008_execute_mask_global",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_009_execute_mask_local",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_001_check_call",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_002_check_call_expected",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_003_check_stderr"
]
| []
| Apache License 2.0 | 2,470 | [
"doc/source/Subprocess.rst",
".pylintrc",
"doc/source/SSHClient.rst",
"exec_helpers/subprocess_runner.py",
".editorconfig",
"tox.ini",
"exec_helpers/_api.py",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/exec_result.py"
]
| [
"doc/source/Subprocess.rst",
".pylintrc",
"doc/source/SSHClient.rst",
"exec_helpers/subprocess_runner.py",
".editorconfig",
"tox.ini",
"exec_helpers/_api.py",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/exec_result.py"
]
|
dask__dask-3461 | f6c8a9c6304bb431d8e76c82efab9ea46a40138a | 2018-05-02 19:45:40 | 279fdf7a6a78a1dfaa0974598aead3e1b44f9194 | mrocklin: This seems fine to me.
It's odd seeing six in the codebase, which hasn't been used historically and isn't an explicit dependency. I notice though that it has been used once in dataframe/io/sql.py for the last year though. Thoughts on if we should add it as an explicit dependency of `dask[dataframe]` @jakirkham or work around with our own compatibility.py file?
jakirkham: Have no qualms with `six` other than its style is a bit dated. Personally would encourage using a more modern Python 2/3 compat library like [`python-future`]( http://python-future.org/ ) or [`eight`]( https://eight.readthedocs.io/en/latest/ ). Using our own compat module would be an option.
jrbourbeau: Good to know about `string_types` in `compatibility.py`, I hadn't run across that before! I've got no preference re: `six` and can switch to using `dask.compatibility.string_types` here.
mrocklin: It looks like we have string_types already in compatibility.py.
@jrbourbeau I recommend using this so that we can just avoid the question altogether.
If you felt like fixing the use of six in dask/dataframe/io/sql.py as well that would be welcome :)
mrocklin: Ah, you beat me to the comment
jrbourbeau: Getting some `OSError: [Errno 12] Cannot allocate memory` errors for one of the builds on Travis. Should be unrelated to the changes made in this PR.
mrocklin: Hrm, that's interesting. I agree that it's likely unrelated. Restarting.
We'll see if it recurs.
mrocklin: This looks good to me. Thanks @jrbourbeau ! | diff --git a/dask/dataframe/core.py b/dask/dataframe/core.py
index 0ddbf62d4..59bd653ee 100644
--- a/dask/dataframe/core.py
+++ b/dask/dataframe/core.py
@@ -22,7 +22,7 @@ from .. import core
from ..utils import partial_by_order
from .. import threaded
-from ..compatibility import apply, operator_div, bind_method
+from ..compatibility import apply, operator_div, bind_method, string_types
from ..context import globalmethod
from ..utils import (random_state_data, pseudorandom, derived_from, funcname,
memory_repr, put_lines, M, key_split, OperatorMethodMixin,
@@ -2338,7 +2338,7 @@ class DataFrame(_Frame):
def __getitem__(self, key):
name = 'getitem-%s' % tokenize(self, key)
- if np.isscalar(key) or isinstance(key, tuple):
+ if np.isscalar(key) or isinstance(key, (tuple, string_types)):
if isinstance(self._meta.index, (pd.DatetimeIndex, pd.PeriodIndex)):
if key not in self._meta.columns:
diff --git a/dask/dataframe/io/sql.py b/dask/dataframe/io/sql.py
index cf3d4e19d..592b66403 100644
--- a/dask/dataframe/io/sql.py
+++ b/dask/dataframe/io/sql.py
@@ -1,8 +1,8 @@
import numpy as np
import pandas as pd
-import six
from ... import delayed
+from ...compatibility import string_types
from .io import from_delayed, from_pandas
@@ -83,27 +83,27 @@ def read_sql_table(table, uri, index_col, divisions=None, npartitions=None,
raise ValueError("Must specify index column to partition on")
engine = sa.create_engine(uri)
m = sa.MetaData()
- if isinstance(table, six.string_types):
+ if isinstance(table, string_types):
table = sa.Table(table, m, autoload=True, autoload_with=engine,
schema=schema)
- index = (table.columns[index_col] if isinstance(index_col, six.string_types)
+ index = (table.columns[index_col] if isinstance(index_col, string_types)
else index_col)
- if not isinstance(index_col, six.string_types + (elements.Label,)):
+ if not isinstance(index_col, string_types + (elements.Label,)):
raise ValueError('Use label when passing an SQLAlchemy instance'
' as the index (%s)' % index)
if divisions and npartitions:
raise TypeError('Must supply either divisions or npartitions, not both')
- columns = ([(table.columns[c] if isinstance(c, six.string_types) else c)
+ columns = ([(table.columns[c] if isinstance(c, string_types) else c)
for c in columns]
if columns else list(table.columns))
if index_col not in columns:
columns.append(table.columns[index_col]
- if isinstance(index_col, six.string_types)
+ if isinstance(index_col, string_types)
else index_col)
- if isinstance(index_col, six.string_types):
+ if isinstance(index_col, string_types):
kwargs['index_col'] = index_col
else:
# function names get pandas auto-named
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index 351cd9369..5a1d99131 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -13,7 +13,7 @@ Array
Dataframe
+++++++++
--
+- Add support for indexing Dask DataFrames with string subclasses (:pr:`3461`) `James Bourbeau`_
Bag
+++
| dask.dataframe __getitem__ does not work with subclasses of str in python 3.6
I am upgrading my code from python 2.7 to 3.6 and found that the __getitem__ of a dask dataframe does not work as before: I use a subclass of `str` for the column names and then I want to access the columns with `df[column_name]`. This works fine with pandas and had worked with dask also in python 2.7. Here is a short example for comparing pandas and dask:
```
class MyString(str):
pass
my_s = MyString('column_1')
import pandas as pd
df = pd.DataFrame({'column_1': [1, 1]})
print(df[my_s])
from dask import dataframe as dd
ddf = dd.from_pandas(df, npartitions=1)
print(ddf[my_s])
```
The last line results in following error:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-15-731527b3e767> in <module>()
2 ddf = dd.from_pandas(df, npartitions=1)
----> 3 print(ddf[my_s])
/io/venv_debian_8_http/lib/python3.6/site-packages/dask/dataframe/core.py in __getitem__(self, key)
2310 return new_dd_object(merge(self.dask, key.dask, dsk), name,
2311 self, self.divisions)
-> 2312 raise NotImplementedError(key)
2313
2314 def __setitem__(self, key, value):
NotImplementedError: column_1
```
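The traceback above ultimately comes down to how the key's type is tested; a minimal, self-contained illustration (hypothetical, not Dask's actual code) of why a `str` subclass can slip past an exact-type style check while still passing `isinstance`, which is what the fix's widened `isinstance` tuple relies on:

```python
# Sketch: a str subclass fails an exact-type test but passes isinstance.
# Widening the isinstance tuple in __getitem__ to include string types
# therefore makes string subclasses behave like plain strings.
class MyString(str):
    pass

key = MyString('column_1')

print(type(key) is str)      # False: exact-type check rejects the subclass
print(isinstance(key, str))  # True: isinstance accepts it, like pandas does
```
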
I would expect pandas and dask dataframe to behave in the same way here. Software Versions are:
```
dask==0.17.1
pandas==0.22.0
Python is 3.6.4
``` | dask/dask
index 35e290f7c..ec1802e2a 100644
--- a/dask/dataframe/tests/test_dataframe.py
+++ b/dask/dataframe/tests/test_dataframe.py
@@ -2600,6 +2600,17 @@ def test_getitem_multilevel():
assert_eq(pdf[[('A', '0'), ('B', '1')]], ddf[[('A', '0'), ('B', '1')]])
+def test_getitem_string_subclass():
+ df = pd.DataFrame({'column_1': list(range(10))})
+ ddf = dd.from_pandas(df, npartitions=3)
+
+ class string_subclass(str):
+ pass
+ column_1 = string_subclass('column_1')
+
+ assert_eq(df[column_1], ddf[column_1])
+
+
def test_diff():
df = pd.DataFrame(np.random.randn(100, 5), columns=list('abcde'))
ddf = dd.from_pandas(df, 5)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 3
} | 0.17 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
click==8.0.4
cloudpickle==2.2.1
-e git+https://github.com/dask/dask.git@f6c8a9c6304bb431d8e76c82efab9ea46a40138a#egg=dask
distributed==1.21.8
HeapDict==1.0.1
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
locket==1.0.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
msgpack==1.0.5
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
partd==1.2.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
python-dateutil==2.9.0.post0
pytz==2025.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
toolz==0.12.0
tornado==6.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zict==2.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- click==8.0.4
- cloudpickle==2.2.1
- distributed==1.21.8
- heapdict==1.0.1
- locket==1.0.0
- msgpack==1.0.5
- numpy==1.19.5
- pandas==1.1.5
- partd==1.2.0
- psutil==7.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- toolz==0.12.0
- tornado==6.1
- zict==2.1.0
prefix: /opt/conda/envs/dask
| [
"dask/dataframe/tests/test_dataframe.py::test_getitem_string_subclass"
]
| [
"dask/dataframe/tests/test_dataframe.py::test_Dataframe",
"dask/dataframe/tests/test_dataframe.py::test_attributes",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[npartitions1]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2-5]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2.5-3.5]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_picklable",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_month",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include0-None]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[None-exclude1]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include2-exclude2]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include3-None]",
"dask/dataframe/tests/test_dataframe.py::test_to_timestamp",
"dask/dataframe/tests/test_dataframe.py::test_apply",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_mixed",
"dask/dataframe/tests/test_dataframe.py::test_apply_infer_columns",
"dask/dataframe/tests/test_dataframe.py::test_info",
"dask/dataframe/tests/test_dataframe.py::test_groupby_multilevel_info",
"dask/dataframe/tests/test_dataframe.py::test_categorize_info",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-False]",
"dask/dataframe/tests/test_dataframe.py::test_shift",
"dask/dataframe/tests/test_dataframe.py::test_shift_with_freq",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[first]",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[last]",
"dask/dataframe/tests/test_dataframe.py::test_datetime_loc_open_slicing"
]
| [
"dask/dataframe/tests/test_dataframe.py::test_head_tail",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions_warn",
"dask/dataframe/tests/test_dataframe.py::test_index_head",
"dask/dataframe/tests/test_dataframe.py::test_Series",
"dask/dataframe/tests/test_dataframe.py::test_Index",
"dask/dataframe/tests/test_dataframe.py::test_Scalar",
"dask/dataframe/tests/test_dataframe.py::test_column_names",
"dask/dataframe/tests/test_dataframe.py::test_index_names",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[1]",
"dask/dataframe/tests/test_dataframe.py::test_rename_columns",
"dask/dataframe/tests/test_dataframe.py::test_rename_series",
"dask/dataframe/tests/test_dataframe.py::test_rename_series_method",
"dask/dataframe/tests/test_dataframe.py::test_describe",
"dask/dataframe/tests/test_dataframe.py::test_describe_empty",
"dask/dataframe/tests/test_dataframe.py::test_cumulative",
"dask/dataframe/tests/test_dataframe.py::test_dropna",
"dask/dataframe/tests/test_dataframe.py::test_squeeze",
"dask/dataframe/tests/test_dataframe.py::test_where_mask",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_multi_argument",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_column_info",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_method_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_keeps_kwargs_readable",
"dask/dataframe/tests/test_dataframe.py::test_metadata_inference_single_partition_aligned_args",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates_subset",
"dask/dataframe/tests/test_dataframe.py::test_get_partition",
"dask/dataframe/tests/test_dataframe.py::test_ndim",
"dask/dataframe/tests/test_dataframe.py::test_dtype",
"dask/dataframe/tests/test_dataframe.py::test_value_counts",
"dask/dataframe/tests/test_dataframe.py::test_unique",
"dask/dataframe/tests/test_dataframe.py::test_isin",
"dask/dataframe/tests/test_dataframe.py::test_len",
"dask/dataframe/tests/test_dataframe.py::test_size",
"dask/dataframe/tests/test_dataframe.py::test_nbytes",
"dask/dataframe/tests/test_dataframe.py::test_quantile",
"dask/dataframe/tests/test_dataframe.py::test_quantile_missing",
"dask/dataframe/tests/test_dataframe.py::test_empty_quantile",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_quantile",
"dask/dataframe/tests/test_dataframe.py::test_index",
"dask/dataframe/tests/test_dataframe.py::test_assign",
"dask/dataframe/tests/test_dataframe.py::test_map",
"dask/dataframe/tests/test_dataframe.py::test_concat",
"dask/dataframe/tests/test_dataframe.py::test_args",
"dask/dataframe/tests/test_dataframe.py::test_known_divisions",
"dask/dataframe/tests/test_dataframe.py::test_unknown_divisions",
"dask/dataframe/tests/test_dataframe.py::test_align[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align[left]",
"dask/dataframe/tests/test_dataframe.py::test_align[right]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[left]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[right]",
"dask/dataframe/tests/test_dataframe.py::test_combine",
"dask/dataframe/tests/test_dataframe.py::test_combine_first",
"dask/dataframe/tests/test_dataframe.py::test_random_partitions",
"dask/dataframe/tests/test_dataframe.py::test_series_round",
"dask/dataframe/tests/test_dataframe.py::test_repartition_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_on_pandas_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions_same_limits",
"dask/dataframe/tests/test_dataframe.py::test_repartition_object_index",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_errors",
"dask/dataframe/tests/test_dataframe.py::test_embarrassingly_parallel_operations",
"dask/dataframe/tests/test_dataframe.py::test_fillna",
"dask/dataframe/tests/test_dataframe.py::test_fillna_multi_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_ffill_bfill",
"dask/dataframe/tests/test_dataframe.py::test_fillna_series_types",
"dask/dataframe/tests/test_dataframe.py::test_sample",
"dask/dataframe/tests/test_dataframe.py::test_sample_without_replacement",
"dask/dataframe/tests/test_dataframe.py::test_datetime_accessor",
"dask/dataframe/tests/test_dataframe.py::test_str_accessor",
"dask/dataframe/tests/test_dataframe.py::test_empty_max",
"dask/dataframe/tests/test_dataframe.py::test_deterministic_apply_concat_apply_names",
"dask/dataframe/tests/test_dataframe.py::test_aca_meta_infer",
"dask/dataframe/tests/test_dataframe.py::test_aca_split_every",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method_split_every",
"dask/dataframe/tests/test_dataframe.py::test_pipe",
"dask/dataframe/tests/test_dataframe.py::test_gh_517",
"dask/dataframe/tests/test_dataframe.py::test_drop_axis_1",
"dask/dataframe/tests/test_dataframe.py::test_gh580",
"dask/dataframe/tests/test_dataframe.py::test_rename_dict",
"dask/dataframe/tests/test_dataframe.py::test_rename_function",
"dask/dataframe/tests/test_dataframe.py::test_rename_index",
"dask/dataframe/tests/test_dataframe.py::test_to_frame",
"dask/dataframe/tests/test_dataframe.py::test_apply_warns",
"dask/dataframe/tests/test_dataframe.py::test_applymap",
"dask/dataframe/tests/test_dataframe.py::test_abs",
"dask/dataframe/tests/test_dataframe.py::test_round",
"dask/dataframe/tests/test_dataframe.py::test_cov",
"dask/dataframe/tests/test_dataframe.py::test_corr",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_meta",
"dask/dataframe/tests/test_dataframe.py::test_autocorr",
"dask/dataframe/tests/test_dataframe.py::test_index_time_properties",
"dask/dataframe/tests/test_dataframe.py::test_nlargest_nsmallest",
"dask/dataframe/tests/test_dataframe.py::test_reset_index",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_compute_forward_kwargs",
"dask/dataframe/tests/test_dataframe.py::test_series_iteritems",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_iterrows",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_itertuples",
"dask/dataframe/tests/test_dataframe.py::test_astype",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals_known",
"dask/dataframe/tests/test_dataframe.py::test_groupby_callable",
"dask/dataframe/tests/test_dataframe.py::test_methods_tokenize_differently",
"dask/dataframe/tests/test_dataframe.py::test_gh_1301",
"dask/dataframe/tests/test_dataframe.py::test_timeseries_sorted",
"dask/dataframe/tests/test_dataframe.py::test_column_assignment",
"dask/dataframe/tests/test_dataframe.py::test_columns_assignment",
"dask/dataframe/tests/test_dataframe.py::test_attribute_assignment",
"dask/dataframe/tests/test_dataframe.py::test_setitem_triggering_realign",
"dask/dataframe/tests/test_dataframe.py::test_inplace_operators",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_getitem_meta",
"dask/dataframe/tests/test_dataframe.py::test_getitem_multilevel",
"dask/dataframe/tests/test_dataframe.py::test_diff",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[2]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[2]",
"dask/dataframe/tests/test_dataframe.py::test_values",
"dask/dataframe/tests/test_dataframe.py::test_copy",
"dask/dataframe/tests/test_dataframe.py::test_del",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-False]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-False]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sum]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[mean]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[std]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[var]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[count]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[min]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[max]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmin]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmax]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[prod]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[all]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sem]",
"dask/dataframe/tests/test_dataframe.py::test_to_datetime",
"dask/dataframe/tests/test_dataframe.py::test_to_timedelta",
"dask/dataframe/tests/test_dataframe.py::test_isna[values0]",
"dask/dataframe/tests/test_dataframe.py::test_isna[values1]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[0]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_nonmonotonic",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-False-drop0]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-True-drop1]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-False-False-drop2]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-True-False-drop3]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-False-drop4]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-True-drop5]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1.5-None-False-True-drop6]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-False-False-drop7]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-True-False-drop8]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-2.5-False-False-drop9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index0-0-9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index1--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index2-None-10]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index3-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index4--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index5-None-2]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index6--2-3]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index7-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index8-left8-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index9-None-right9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index10-left10-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index11-None-right11]",
"dask/dataframe/tests/test_dataframe.py::test_better_errors_object_reductions",
"dask/dataframe/tests/test_dataframe.py::test_sample_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_coerce",
"dask/dataframe/tests/test_dataframe.py::test_bool",
"dask/dataframe/tests/test_dataframe.py::test_cumulative_multiple_columns",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[asarray]",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[func1]",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations_errors",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_multi_dimensional",
"dask/dataframe/tests/test_dataframe.py::test_meta_raises"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,471 | [
"dask/dataframe/core.py",
"docs/source/changelog.rst",
"dask/dataframe/io/sql.py"
]
| [
"dask/dataframe/core.py",
"docs/source/changelog.rst",
"dask/dataframe/io/sql.py"
]
|
numpy__numpydoc-175 | 8f1ac50a7267e9e1ee66141fd71561c2ca2dc713 | 2018-05-02 22:37:28 | 1f197e32a31db2280b71be183e6724f9457ce78e | timhoffm: Note: CI currently fails because of pip changes. Should be fixed by #174.
jnothman: if you've not had to modify any tests, how do we know this affects output?
timhoffm: As said above, I didn't run any tests so far myself. It's apparent that one parser for "Parameters" and "Returns" cannot get both right. I'm confident that the proposed code change itself is correct. What has to be still shown is that the calling code and tests work with that (it might be that they partly compensate for the original bug). I thought I'd use the tests in CI for that, but CI is already prematurely failing for different reasons.
timhoffm: Rebased onto master.
*Note:* The tests do currently fail. Waiting for #176 before doing any further changes to fix tests.
timhoffm: PR updated.
Single-element `Returns` entries such as:
~~~
Returns
-------
int
The return value.
~~~
were detected as names. I.e. `int` was considered a name. This logical error has been fixed such that `int` is now a type and the name is empty.
As a consequence, `int` is not formatted bold anymore. This is consistent with the formatting of types in patterns like `x : int` and a prerequisite for type references like ``:class:`MyClass` `` to work in this position.
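The revised header-parsing rule amounts to the following sketch (the function name and structure here are illustrative, mirroring the `single_element_is_type` switch introduced in the patch):

```python
def parse_header(header, single_element_is_type=False):
    """Split a 'name : type' header line into (name, type).

    A lone token is treated as a type in Returns-like sections
    (Returns/Yields/Raises/Warns) and as a name elsewhere.
    """
    if ' : ' in header:
        name, typ = header.split(' : ')[:2]
    elif single_element_is_type:
        name, typ = '', header
    else:
        name, typ = header, ''
    return name, typ

print(parse_header('int', single_element_is_type=True))  # ('', 'int')
print(parse_header('out : ndarray'))                     # ('out', 'ndarray')
print(parse_header('x'))                                 # ('x', '')
```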
larsoner: @timhoffm can you rebase? Then I can take a look and hopefully merge
rgommers: I've taken the liberty of fixing the merge conflict. The only nontrivial change was deciding where the new heading `Receives` goes; I added it to `'Returns', 'Yields', 'Raises', 'Warns'`. | diff --git a/numpydoc/docscrape.py b/numpydoc/docscrape.py
index 02afd88..32245a9 100644
--- a/numpydoc/docscrape.py
+++ b/numpydoc/docscrape.py
@@ -220,7 +220,7 @@ class NumpyDocString(Mapping):
else:
yield name, self._strip(data[2:])
- def _parse_param_list(self, content):
+ def _parse_param_list(self, content, single_element_is_type=False):
r = Reader(content)
params = []
while not r.eof():
@@ -228,7 +228,10 @@ class NumpyDocString(Mapping):
if ' : ' in header:
arg_name, arg_type = header.split(' : ')[:2]
else:
- arg_name, arg_type = header, ''
+ if single_element_is_type:
+ arg_name, arg_type = '', header
+ else:
+ arg_name, arg_type = header, ''
desc = r.read_to_next_unindented_line()
desc = dedent_lines(desc)
@@ -393,10 +396,12 @@ class NumpyDocString(Mapping):
self._error_location("The section %s appears twice"
% section)
- if section in ('Parameters', 'Returns', 'Yields', 'Receives',
- 'Raises', 'Warns', 'Other Parameters', 'Attributes',
+ if section in ('Parameters', 'Other Parameters', 'Attributes',
'Methods'):
self[section] = self._parse_param_list(content)
+ elif section in ('Returns', 'Yields', 'Raises', 'Warns', 'Receives'):
+ self[section] = self._parse_param_list(
+ content, single_element_is_type=True)
elif section.startswith('.. index::'):
self['index'] = self._parse_index(section, content)
elif section == 'See Also':
@@ -452,10 +457,12 @@ class NumpyDocString(Mapping):
if self[name]:
out += self._str_header(name)
for param in self[name]:
+ parts = []
+ if param.name:
+ parts.append(param.name)
if param.type:
- out += ['%s : %s' % (param.name, param.type)]
- else:
- out += [param.name]
+ parts.append(param.type)
+ out += [' : '.join(parts)]
if param.desc and ''.join(param.desc).strip():
out += self._str_indent(param.desc)
out += ['']
@@ -637,7 +644,7 @@ class ClassDoc(NumpyDocString):
if _members is ALL:
_members = None
_exclude = config.get('exclude-members', [])
-
+
if config.get('show_class_members', True) and _exclude is not ALL:
def splitlines_x(s):
if not s:
@@ -649,7 +656,7 @@ class ClassDoc(NumpyDocString):
if not self[field]:
doc_list = []
for name in sorted(items):
- if (name in _exclude or
+ if (name in _exclude or
(_members and name not in _members)):
continue
try:
diff --git a/numpydoc/docscrape_sphinx.py b/numpydoc/docscrape_sphinx.py
index 9b23235..aad64c7 100644
--- a/numpydoc/docscrape_sphinx.py
+++ b/numpydoc/docscrape_sphinx.py
@@ -70,19 +70,19 @@ class SphinxDocString(NumpyDocString):
return self['Extended Summary'] + ['']
def _str_returns(self, name='Returns'):
- typed_fmt = '**%s** : %s'
- untyped_fmt = '**%s**'
+ named_fmt = '**%s** : %s'
+ unnamed_fmt = '%s'
out = []
if self[name]:
out += self._str_field_list(name)
out += ['']
for param in self[name]:
- if param.type:
- out += self._str_indent([typed_fmt % (param.name.strip(),
+ if param.name:
+ out += self._str_indent([named_fmt % (param.name.strip(),
param.type)])
else:
- out += self._str_indent([untyped_fmt % param.name.strip()])
+ out += self._str_indent([unnamed_fmt % param.type.strip()])
if not param.desc:
out += self._str_indent(['..'], 8)
else:
@@ -209,12 +209,13 @@ class SphinxDocString(NumpyDocString):
display_param, desc = self._process_param(param.name,
param.desc,
fake_autosummary)
-
+ parts = []
+ if display_param:
+ parts.append(display_param)
if param.type:
- out += self._str_indent(['%s : %s' % (display_param,
- param.type)])
- else:
- out += self._str_indent([display_param])
+ parts.append(param.type)
+ out += self._str_indent([' : '.join(parts)])
+
if desc and self.use_blockquotes:
out += ['']
elif not desc:
@@ -376,8 +377,8 @@ class SphinxDocString(NumpyDocString):
'yields': self._str_returns('Yields'),
'receives': self._str_returns('Receives'),
'other_parameters': self._str_param_list('Other Parameters'),
- 'raises': self._str_param_list('Raises'),
- 'warns': self._str_param_list('Warns'),
+ 'raises': self._str_returns('Raises'),
+ 'warns': self._str_returns('Warns'),
'warnings': self._str_warnings(),
'see_also': self._str_see_also(func_role),
'notes': self._str_section('Notes'),
| Anonymous return values have their types populated in the name slot of the tuple.
I noticed an inconsistency when using numpydoc version 0.6.0 with Python 2.7 on Ubuntu. The parsed return section returns different styles of tuple depending on whether the return value is anonymous or not.
Here is a minimal working example:
```python
def mwe():
from numpydoc.docscrape import NumpyDocString
docstr = (
'Returns\n'
'----------\n'
'int\n'
        '    can return an anonymous integer\n'
'out : ndarray\n'
' can return a named value\n'
)
doc = NumpyDocString(docstr)
returns = doc._parsed_data['Returns']
print(returns)
```
This results in
```python
[(u'int', '', [u'can return an anonymous integer']),
(u'out', u'ndarray', [u'can return a named value'])]
```
However, judging by the tests (due to lack of docs), I believe it was intended that each value in the returns list should be a tuple of `(arg, arg_type, arg_desc)`. Therefore we should see this instead:
```python
[('', u'int', [u'can return an anonymous integer']),
(u'out', u'ndarray', [u'can return a named value'])]
```
My current workaround is this:
```python
for p_name, p_type, p_descr in returns:
if not p_type:
        p_name, p_type = '', p_name  # swap: the lone token is the type
```
| numpy/numpydoc | diff --git a/numpydoc/tests/test_docscrape.py b/numpydoc/tests/test_docscrape.py
index b4b7e03..e5e3f1f 100644
--- a/numpydoc/tests/test_docscrape.py
+++ b/numpydoc/tests/test_docscrape.py
@@ -211,14 +211,14 @@ def test_returns():
assert desc[-1].endswith('distribution.')
arg, arg_type, desc = doc['Returns'][1]
- assert arg == 'list of str'
- assert arg_type == ''
+ assert arg == ''
+ assert arg_type == 'list of str'
assert desc[0].startswith('This is not a real')
assert desc[-1].endswith('anonymous return values.')
arg, arg_type, desc = doc['Returns'][2]
- assert arg == 'no_description'
- assert arg_type == ''
+ assert arg == ''
+ assert arg_type == 'no_description'
assert not ''.join(desc).strip()
@@ -227,7 +227,7 @@ def test_yields():
assert len(section) == 3
truth = [('a', 'int', 'apples.'),
('b', 'int', 'bananas.'),
- ('int', '', 'unknowns.')]
+ ('', 'int', 'unknowns.')]
for (arg, arg_type, desc), (arg_, arg_type_, end) in zip(section, truth):
assert arg == arg_
assert arg_type == arg_type_
@@ -594,11 +594,11 @@ of the one-dimensional normal distribution to higher dimensions.
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
- **list of str**
+ list of str
This is not a real return value. It exists to test
anonymous return values.
- **no_description**
+ no_description
..
:Other Parameters:
@@ -608,12 +608,12 @@ of the one-dimensional normal distribution to higher dimensions.
:Raises:
- **RuntimeError**
+ RuntimeError
Some error
:Warns:
- **RuntimeWarning**
+ RuntimeWarning
Some warning
.. warning::
@@ -687,7 +687,7 @@ def test_sphinx_yields_str():
**b** : int
The number of bananas.
- **int**
+ int
The number of unknowns.
""")
@@ -754,16 +754,18 @@ doc5 = NumpyDocString(
def test_raises():
assert len(doc5['Raises']) == 1
- name, _, desc = doc5['Raises'][0]
- assert name == 'LinAlgException'
- assert desc == ['If array is singular.']
+ param = doc5['Raises'][0]
+ assert param.name == ''
+ assert param.type == 'LinAlgException'
+ assert param.desc == ['If array is singular.']
def test_warns():
assert len(doc5['Warns']) == 1
- name, _, desc = doc5['Warns'][0]
- assert name == 'SomeWarning'
- assert desc == ['If needed']
+ param = doc5['Warns'][0]
+ assert param.name == ''
+ assert param.type == 'SomeWarning'
+ assert param.desc == ['If needed']
def test_see_also():
@@ -995,7 +997,7 @@ def test_use_blockquotes():
GHI
- **JKL**
+ JKL
MNO
''')
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 2
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc texlive texlive-latex-extra latexmk"
],
"python": "3.9",
"reqs_path": [
"doc/requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
babel==2.17.0
certifi==2025.1.31
charset-normalizer==3.4.1
contourpy==1.3.0
cycler==0.12.1
docutils==0.21.2
exceptiongroup==1.2.2
fonttools==4.56.0
idna==3.10
imagesize==1.4.1
importlib_metadata==8.6.1
importlib_resources==6.5.2
iniconfig==2.1.0
Jinja2==3.1.6
kiwisolver==1.4.7
MarkupSafe==3.0.2
matplotlib==3.9.4
numpy==2.0.2
-e git+https://github.com/numpy/numpydoc.git@8f1ac50a7267e9e1ee66141fd71561c2ca2dc713#egg=numpydoc
packaging==24.2
pillow==11.1.0
pluggy==1.5.0
Pygments==2.19.1
pyparsing==3.2.3
pytest==8.3.5
python-dateutil==2.9.0.post0
requests==2.32.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tomli==2.2.1
urllib3==2.3.0
zipp==3.21.0
| name: numpydoc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- babel==2.17.0
- certifi==2025.1.31
- charset-normalizer==3.4.1
- contourpy==1.3.0
- cycler==0.12.1
- docutils==0.21.2
- exceptiongroup==1.2.2
- fonttools==4.56.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- importlib-resources==6.5.2
- iniconfig==2.1.0
- jinja2==3.1.6
- kiwisolver==1.4.7
- markupsafe==3.0.2
- matplotlib==3.9.4
- numpy==2.0.2
- packaging==24.2
- pillow==11.1.0
- pluggy==1.5.0
- pygments==2.19.1
- pyparsing==3.2.3
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- requests==2.32.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==7.4.7
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- tomli==2.2.1
- urllib3==2.3.0
- zipp==3.21.0
prefix: /opt/conda/envs/numpydoc
| [
"numpydoc/tests/test_docscrape.py::test_returns",
"numpydoc/tests/test_docscrape.py::test_yields",
"numpydoc/tests/test_docscrape.py::test_sphinx_str",
"numpydoc/tests/test_docscrape.py::test_sphinx_yields_str",
"numpydoc/tests/test_docscrape.py::test_raises",
"numpydoc/tests/test_docscrape.py::test_warns",
"numpydoc/tests/test_docscrape.py::test_use_blockquotes"
]
| []
| [
"numpydoc/tests/test_docscrape.py::test_signature",
"numpydoc/tests/test_docscrape.py::test_summary",
"numpydoc/tests/test_docscrape.py::test_extended_summary",
"numpydoc/tests/test_docscrape.py::test_parameters",
"numpydoc/tests/test_docscrape.py::test_other_parameters",
"numpydoc/tests/test_docscrape.py::test_sent",
"numpydoc/tests/test_docscrape.py::test_returnyield",
"numpydoc/tests/test_docscrape.py::test_section_twice",
"numpydoc/tests/test_docscrape.py::test_notes",
"numpydoc/tests/test_docscrape.py::test_references",
"numpydoc/tests/test_docscrape.py::test_examples",
"numpydoc/tests/test_docscrape.py::test_index",
"numpydoc/tests/test_docscrape.py::test_str",
"numpydoc/tests/test_docscrape.py::test_yield_str",
"numpydoc/tests/test_docscrape.py::test_receives_str",
"numpydoc/tests/test_docscrape.py::test_no_index_in_str",
"numpydoc/tests/test_docscrape.py::test_parameters_without_extended_description",
"numpydoc/tests/test_docscrape.py::test_escape_stars",
"numpydoc/tests/test_docscrape.py::test_empty_extended_summary",
"numpydoc/tests/test_docscrape.py::test_see_also",
"numpydoc/tests/test_docscrape.py::test_see_also_parse_error",
"numpydoc/tests/test_docscrape.py::test_see_also_print",
"numpydoc/tests/test_docscrape.py::test_unknown_section",
"numpydoc/tests/test_docscrape.py::test_empty_first_line",
"numpydoc/tests/test_docscrape.py::test_no_summary",
"numpydoc/tests/test_docscrape.py::test_unicode",
"numpydoc/tests/test_docscrape.py::test_plot_examples",
"numpydoc/tests/test_docscrape.py::test_class_members",
"numpydoc/tests/test_docscrape.py::test_duplicate_signature",
"numpydoc/tests/test_docscrape.py::test_class_members_doc",
"numpydoc/tests/test_docscrape.py::test_class_members_doc_sphinx",
"numpydoc/tests/test_docscrape.py::test_templated_sections",
"numpydoc/tests/test_docscrape.py::test_nonstandard_property",
"numpydoc/tests/test_docscrape.py::test_args_and_kwargs",
"numpydoc/tests/test_docscrape.py::test_autoclass"
]
| []
| BSD License | 2,472 | [
"numpydoc/docscrape_sphinx.py",
"numpydoc/docscrape.py"
]
| [
"numpydoc/docscrape_sphinx.py",
"numpydoc/docscrape.py"
]
|
pydata__sparse-146 | 444655cd47d990d80a8862f2adaf190db8d308e2 | 2018-05-03 10:48:25 | b03b6b9a480a10a3cf59d7994292b9c5d3015cd5 | codecov-io: # [Codecov](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=h1) Report
> Merging [#146](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=desc) into [master](https://codecov.io/gh/pydata/sparse/commit/8f2a9aebe595762eace6bc48531119462f979e21?src=pr&el=desc) will **increase** coverage by `0.65%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #146 +/- ##
==========================================
+ Coverage 95.81% 96.46% +0.65%
==========================================
Files 10 10
Lines 1195 1189 -6
==========================================
+ Hits 1145 1147 +2
+ Misses 50 42 -8
```
| [Impacted Files](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [sparse/coo/umath.py](https://codecov.io/gh/pydata/sparse/pull/146/diff?src=pr&el=tree#diff-c3BhcnNlL2Nvby91bWF0aC5weQ==) | `97.11% <ø> (+0.44%)` | :arrow_up: |
| [sparse/coo/core.py](https://codecov.io/gh/pydata/sparse/pull/146/diff?src=pr&el=tree#diff-c3BhcnNlL2Nvby9jb3JlLnB5) | `94.65% <100%> (+0.98%)` | :arrow_up: |
| [sparse/coo/indexing.py](https://codecov.io/gh/pydata/sparse/pull/146/diff?src=pr&el=tree#diff-c3BhcnNlL2Nvby9pbmRleGluZy5weQ==) | `100% <0%> (ø)` | :arrow_up: |
| [sparse/coo/common.py](https://codecov.io/gh/pydata/sparse/pull/146/diff?src=pr&el=tree#diff-c3BhcnNlL2Nvby9jb21tb24ucHk=) | `97.04% <0%> (+0.92%)` | :arrow_up: |
| [sparse/dok.py](https://codecov.io/gh/pydata/sparse/pull/146/diff?src=pr&el=tree#diff-c3BhcnNlL2Rvay5weQ==) | `95.32% <0%> (+1.74%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=footer). Last update [8f2a9ae...09edda0](https://codecov.io/gh/pydata/sparse/pull/146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
mrocklin: This looks pretty slick :)
Might want to test something like the following as well:
```python
np.sin(x, out=x)
```
hameerabbasi: Docs updated, warning added to docs that "in-place ops aren't really in-place".
Supported `out` kwarg for all ops now, including `round` and `COO.sum`, etc.
hameerabbasi: cc @mrocklin I think this is ready for a final review. :-)
hameerabbasi: Removed some unnecessary code for `astype` -- It doesn't have an `out` argument. It was what was causing coverage to fail. | diff --git a/.codecov.yml b/.codecov.yml
index a385d75..6cfd0cf 100644
--- a/.codecov.yml
+++ b/.codecov.yml
@@ -12,7 +12,6 @@ flags:
project:
enabled: yes
target: 95%
- threshold: 1%
if_no_uploads: error
if_not_found: success
if_ci_failed: error
@@ -21,7 +20,6 @@ flags:
default:
enabled: yes
target: 95%
- threshold: 1%
if_no_uploads: error
if_not_found: success
if_ci_failed: error
diff --git a/.coveragerc b/.coveragerc
index d838d37..6371c70 100644
--- a/.coveragerc
+++ b/.coveragerc
@@ -5,3 +5,9 @@ source=
omit=
sparse/_version.py
sparse/tests/*
+
+[report]
+exclude_lines =
+ pragma: no cover
+ return NotImplemented
+ raise NotImplementedError
diff --git a/docs/operations.rst b/docs/operations.rst
index a6c1a10..fa83e92 100644
--- a/docs/operations.rst
+++ b/docs/operations.rst
@@ -20,13 +20,12 @@ results for both Numpy arrays, COO arrays, or a mix of the two:
np.log(X.dot(beta.T) + 1)
-However some operations are not supported, like inplace operations,
-operations that implicitly cause dense structures,
-or numpy functions that are not yet implemented for sparse arrays
+However some operations are not supported, like operations that
+implicitly cause dense structures, or numpy functions that are not
+yet implemented for sparse arrays.
.. code-block:: python
- x += y # inplace operations not supported
x + 1 # operations that produce dense results not supported
np.svd(x) # sparse svd not implemented
@@ -34,7 +33,7 @@ or numpy functions that are not yet implemented for sparse arrays
This page describes those valid operations, and their limitations.
:obj:`elemwise`
-~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~
This function allows you to apply any arbitrary broadcasting function to any number of arguments
where the arguments can be :obj:`SparseArray` objects or :obj:`scipy.sparse.spmatrix` objects.
For example, the following will add two arrays:
@@ -155,7 +154,9 @@ be a :obj:`scipy.sparse.spmatrix` The following operators are supported:
* :obj:`operator.lshift` (:code:`x << y`)
* :obj:`operator.rshift` (:code:`x >> y`)
-.. note:: In-place operators are not supported at this time.
+.. warning::
+ While in-place operations are supported for compatibility with Numpy,
+ they are not truly in-place, and will effectively calculate the result separately.
.. _operations-elemwise:
diff --git a/sparse/coo/core.py b/sparse/coo/core.py
index 331e21f..33c427e 100644
--- a/sparse/coo/core.py
+++ b/sparse/coo/core.py
@@ -676,8 +676,7 @@ class COO(SparseArray, NDArrayOperatorsMixin):
>>> s.sum()
25
"""
- assert out is None
- return self.reduce(np.add, axis=axis, keepdims=keepdims, dtype=dtype)
+ return np.add.reduce(self, out=out, axis=axis, keepdims=keepdims, dtype=dtype)
def max(self, axis=None, keepdims=False, out=None):
"""
@@ -738,8 +737,7 @@ class COO(SparseArray, NDArrayOperatorsMixin):
>>> s.max()
8
"""
- assert out is None
- return self.reduce(np.maximum, axis=axis, keepdims=keepdims)
+ return np.maximum.reduce(self, out=out, axis=axis, keepdims=keepdims)
def min(self, axis=None, keepdims=False, out=None):
"""
@@ -800,8 +798,7 @@ class COO(SparseArray, NDArrayOperatorsMixin):
>>> s.min()
0
"""
- assert out is None
- return self.reduce(np.minimum, axis=axis, keepdims=keepdims)
+ return np.minimum.reduce(self, out=out, axis=axis, keepdims=keepdims)
def prod(self, axis=None, keepdims=False, dtype=None, out=None):
"""
@@ -867,8 +864,7 @@ class COO(SparseArray, NDArrayOperatorsMixin):
>>> s.prod()
0
"""
- assert out is None
- return self.reduce(np.multiply, axis=axis, keepdims=keepdims, dtype=dtype)
+ return np.multiply.reduce(self, out=out, axis=axis, keepdims=keepdims, dtype=dtype)
def transpose(self, axes=None):
"""
@@ -1039,16 +1035,27 @@ class COO(SparseArray, NDArrayOperatorsMixin):
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
out = kwargs.pop('out', None)
- if out is not None:
+ if out is not None and not all(isinstance(x, COO) for x in out):
return NotImplemented
if method == '__call__':
- return elemwise(ufunc, *inputs, **kwargs)
+ result = elemwise(ufunc, *inputs, **kwargs)
elif method == 'reduce':
- return COO._reduce(ufunc, *inputs, **kwargs)
+ result = COO._reduce(ufunc, *inputs, **kwargs)
else:
return NotImplemented
+ if out is not None:
+ (out,) = out
+ if out.shape != result.shape:
+ raise ValueError('non-broadcastable output operand with shape %s'
+ 'doesn\'t match the broadcast shape %s' % (out.shape, result.shape))
+
+ out._make_shallow_copy_of(result)
+ return out
+
+ return result
+
def __array__(self, dtype=None, **kwargs):
x = self.todense()
if dtype and x.dtype != dtype:
@@ -1366,10 +1373,11 @@ class COO(SparseArray, NDArrayOperatorsMixin):
The :code:`out` parameter is provided just for compatibility with Numpy and isn't
actually supported.
"""
- assert out is None
- return elemwise(np.round, self, decimals=decimals)
+ if out is not None and not isinstance(out, tuple):
+ out = (out,)
+ return self.__array_ufunc__(np.round, '__call__', self, decimals=decimals, out=out)
- def astype(self, dtype, out=None):
+ def astype(self, dtype):
"""
Copy of the array, cast to a specified type.
@@ -1385,8 +1393,7 @@ class COO(SparseArray, NDArrayOperatorsMixin):
The :code:`out` parameter is provided just for compatibility with Numpy and isn't
actually supported.
"""
- assert out is None
- return elemwise(np.ndarray.astype, self, dtype=dtype)
+ return self.__array_ufunc__(np.ndarray.astype, '__call__', self, dtype=dtype)
def maybe_densify(self, max_size=1000, min_density=0.25):
"""
diff --git a/sparse/coo/umath.py b/sparse/coo/umath.py
index 45de789..d5ebf58 100644
--- a/sparse/coo/umath.py
+++ b/sparse/coo/umath.py
@@ -64,8 +64,7 @@ def elemwise(func, *args, **kwargs):
elif isinstance(arg, SparseArray) and not isinstance(arg, COO):
args[i] = COO(arg)
elif not isinstance(arg, COO):
- raise ValueError("Performing this operation would produce "
- "a dense result: %s" % str(func))
+ return NotImplemented
# Filter out scalars as they are 'baked' into the function.
func = PositinalArgumentPartial(func, pos, posargs)
| In-place operations
I was wondering about operators such as `operator.iadd`, etc. There are a few ways we can go about this:
1. Support them by mutating the object in-place and invalidating the cache, i.e. performing the operation and then making `self` a copy of the returned object.
2. Support them only when the sparsity structure is the same, and modify `data` in-place.
3. Don't support them at all.
If we want to maintain compatibility with Numpy code (I hope to make `COO` a mostly drop-in replacement for `ndarray` with a few exceptions at some point), I would go with 1, with a warning in the docs that in-place isn't really "in-place".
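A minimal sketch of option 1 (the class and helper names here are illustrative, not sparse's actual API): the in-place operator computes a fresh result and then rebinds the object's attributes to it, which is also where any cached properties would be invalidated:

```python
class ToySparse:
    """Hypothetical stand-in for COO (list-backed for simplicity)."""
    def __init__(self, data):
        self.data = list(data)

    def _shallow_copy_from(self, other):
        # "In-place" here means rebinding attributes to the new result
        # (and dropping any caches), not mutating the existing buffers.
        self.data = other.data

    def __iadd__(self, other):
        result = ToySparse(a + b for a, b in zip(self.data, other.data))
        self._shallow_copy_from(result)
        return self

x = ToySparse([1, 2, 3])
x += ToySparse([10, 20, 30])
print(x.data)  # [11, 22, 33]
```

The object identity of `x` is preserved, so existing references still see the updated value even though the underlying storage was replaced.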
If we want to do our own thing... Then we have options 2 and 3. | pydata/sparse | diff --git a/sparse/tests/test_coo.py b/sparse/tests/test_coo.py
index 74bcce0..2163892 100644
--- a/sparse/tests/test_coo.py
+++ b/sparse/tests/test_coo.py
@@ -264,6 +264,22 @@ def test_elemwise(func):
assert_eq(func(x), fs)
[email protected]('func', [np.expm1, np.log1p, np.sin, np.tan,
+ np.sinh, np.tanh, np.floor, np.ceil,
+ np.sqrt, np.conj,
+ np.round, np.rint, np.conjugate,
+ np.conj, lambda x, out: x.round(decimals=2, out=out)])
+def test_elemwise_inplace(func):
+ s = sparse.random((2, 3, 4), density=0.5)
+ x = s.todense()
+
+ func(s, out=s)
+ func(x, out=x)
+ assert isinstance(s, COO)
+
+ assert_eq(x, s)
+
+
@pytest.mark.parametrize('func', [
operator.mul, operator.add, operator.sub, operator.gt,
operator.lt, operator.ne
@@ -279,6 +295,23 @@ def test_elemwise_binary(func, shape):
assert_eq(func(xs, ys), func(x, y))
[email protected]('func', [
+ operator.imul, operator.iadd, operator.isub
+])
[email protected]('shape', [(2,), (2, 3), (2, 3, 4), (2, 3, 4, 5)])
+def test_elemwise_binary_inplace(func, shape):
+ xs = sparse.random(shape, density=0.5)
+ ys = sparse.random(shape, density=0.5)
+
+ x = xs.todense()
+ y = ys.todense()
+
+ xs = func(xs, ys)
+ x = func(x, y)
+
+ assert_eq(xs, x)
+
+
@pytest.mark.parametrize('func', [
lambda x, y, z: x + y + z,
lambda x, y, z: x * y * z,
@@ -497,7 +530,7 @@ def test_ndarray_densification_fails():
xs = sparse.random((3, 4), density=0.5)
y = np.random.rand(3, 4)
- with pytest.raises(ValueError):
+ with pytest.raises(TypeError):
xs + y
@@ -624,6 +657,30 @@ def test_bitwise_binary(func, shape):
assert_eq(func(xs, ys), func(x, y))
[email protected]('func', [
+ operator.iand, operator.ior, operator.ixor
+])
[email protected]('shape', [
+ (2,),
+ (2, 3),
+ (2, 3, 4),
+ (2, 3, 4, 5)
+])
+def test_bitwise_binary_inplace(func, shape):
+ # Small arrays need high density to have nnz entries
+ # Casting floats to int will result in all zeros, hence the * 100
+ xs = (sparse.random(shape, density=0.5) * 100).astype(np.int_)
+ ys = (sparse.random(shape, density=0.5) * 100).astype(np.int_)
+
+ x = xs.todense()
+ y = ys.todense()
+
+ xs = func(xs, ys)
+ x = func(x, y)
+
+ assert_eq(xs, x)
+
+
@pytest.mark.parametrize('func', [
operator.lshift, operator.rshift
])
@@ -649,7 +706,7 @@ def test_bitshift_binary(func, shape):
@pytest.mark.parametrize('func', [
- operator.and_
+ operator.ilshift, operator.irshift
])
@pytest.mark.parametrize('shape', [
(2,),
@@ -657,13 +714,37 @@ def test_bitshift_binary(func, shape):
(2, 3, 4),
(2, 3, 4, 5)
])
-def test_bitwise_scalar(func, shape):
+def test_bitshift_binary_inplace(func, shape):
# Small arrays need high density to have nnz entries
# Casting floats to int will result in all zeros, hence the * 100
xs = (sparse.random(shape, density=0.5) * 100).astype(np.int_)
# Can't merge into test_bitwise_binary because left/right shifting
# with something >= 64 isn't defined.
+ ys = (sparse.random(shape, density=0.5) * 64).astype(np.int_)
+
+ x = xs.todense()
+ y = ys.todense()
+
+ xs = func(xs, ys)
+ x = func(x, y)
+
+ assert_eq(xs, x)
+
+
[email protected]('func', [
+ operator.and_
+])
[email protected]('shape', [
+ (2,),
+ (2, 3),
+ (2, 3, 4),
+ (2, 3, 4, 5)
+])
+def test_bitwise_scalar(func, shape):
+ # Small arrays need high density to have nnz entries
+ # Casting floats to int will result in all zeros, hence the * 100
+ xs = (sparse.random(shape, density=0.5) * 100).astype(np.int_)
y = np.random.randint(100)
x = xs.todense()
@@ -1376,6 +1457,17 @@ def test_two_arg_where():
sparse.where(cs, xs)
[email protected]('func', [
+ operator.imul, operator.iadd, operator.isub
+])
+def test_inplace_invalid_shape(func):
+ xs = sparse.random((3, 4), density=0.5)
+ ys = sparse.random((2, 3, 4), density=0.5)
+
+ with pytest.raises(ValueError):
+ func(xs, ys)
+
+
def test_nonzero():
s = sparse.random((2, 3, 4), density=0.5)
x = s.todense()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 5
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-flake8"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
asv==0.5.1
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
coverage==6.2
distlib==0.3.9
docutils==0.17.1
filelock==3.4.1
flake8==5.0.4
idna==3.10
imagesize==1.4.1
importlib-metadata==4.2.0
importlib-resources==5.4.0
iniconfig==1.1.1
Jinja2==3.0.3
llvmlite==0.36.0
MarkupSafe==2.0.1
mccabe==0.7.0
numba==0.53.1
numpy==1.19.5
packaging==21.3
platformdirs==2.4.0
pluggy==1.0.0
pockets==0.9.1
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
pytest-flake8==1.1.1
pytz==2025.2
requests==2.27.1
scipy==1.5.4
six==1.17.0
snowballstemmer==2.2.0
-e git+https://github.com/pydata/sparse.git@444655cd47d990d80a8862f2adaf190db8d308e2#egg=sparse
Sphinx==4.3.2
sphinx-rtd-theme==1.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-napoleon==0.7
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml==0.10.2
tomli==1.2.3
tox==3.28.0
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.16.2
zipp==3.6.0
| name: sparse
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- asv==0.5.1
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- coverage==6.2
- distlib==0.3.9
- docutils==0.17.1
- filelock==3.4.1
- flake8==5.0.4
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.2.0
- importlib-resources==5.4.0
- iniconfig==1.1.1
- jinja2==3.0.3
- llvmlite==0.36.0
- markupsafe==2.0.1
- mccabe==0.7.0
- numba==0.53.1
- numpy==1.19.5
- packaging==21.3
- platformdirs==2.4.0
- pluggy==1.0.0
- pockets==0.9.1
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-flake8==1.1.1
- pytz==2025.2
- requests==2.27.1
- scipy==1.5.4
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinx-rtd-theme==1.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-napoleon==0.7
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- toml==0.10.2
- tomli==1.2.3
- tox==3.28.0
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.16.2
- zipp==3.6.0
prefix: /opt/conda/envs/sparse
| [
"sparse/tests/test_coo.py::test_elemwise_inplace[expm1]",
"sparse/tests/test_coo.py::test_elemwise_inplace[log1p]",
"sparse/tests/test_coo.py::test_elemwise_inplace[sin]",
"sparse/tests/test_coo.py::test_elemwise_inplace[tan]",
"sparse/tests/test_coo.py::test_elemwise_inplace[sinh]",
"sparse/tests/test_coo.py::test_elemwise_inplace[tanh]",
"sparse/tests/test_coo.py::test_elemwise_inplace[floor]",
"sparse/tests/test_coo.py::test_elemwise_inplace[ceil]",
"sparse/tests/test_coo.py::test_elemwise_inplace[sqrt]",
"sparse/tests/test_coo.py::test_elemwise_inplace[conjugate0]",
"sparse/tests/test_coo.py::test_elemwise_inplace[round_]",
"sparse/tests/test_coo.py::test_elemwise_inplace[rint]",
"sparse/tests/test_coo.py::test_elemwise_inplace[conjugate1]",
"sparse/tests/test_coo.py::test_elemwise_inplace[conjugate2]",
"sparse/tests/test_coo.py::test_elemwise_inplace[<lambda>]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape0-imul]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape0-iadd]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape0-isub]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape1-imul]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape1-iadd]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape1-isub]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape2-imul]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape2-iadd]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape2-isub]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape3-imul]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape3-iadd]",
"sparse/tests/test_coo.py::test_elemwise_binary_inplace[shape3-isub]",
"sparse/tests/test_coo.py::test_ndarray_densification_fails",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape0-iand]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape0-ior]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape0-ixor]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape1-iand]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape1-ior]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape1-ixor]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape2-iand]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape2-ior]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape2-ixor]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape3-iand]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape3-ior]",
"sparse/tests/test_coo.py::test_bitwise_binary_inplace[shape3-ixor]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape0-ilshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape0-irshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape1-ilshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape1-irshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape2-ilshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape2-irshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape3-ilshift]",
"sparse/tests/test_coo.py::test_bitshift_binary_inplace[shape3-irshift]",
"sparse/tests/test_coo.py::test_inplace_invalid_shape[imul]",
"sparse/tests/test_coo.py::test_inplace_invalid_shape[iadd]",
"sparse/tests/test_coo.py::test_inplace_invalid_shape[isub]",
"sparse/tests/test_dok.py::test_setitem[shape0-index0-0.9171326073464711]",
"sparse/tests/test_dok.py::test_setitem[shape1-index1-0.25930706241679624]",
"sparse/tests/test_dok.py::test_setitem[shape3-1-0.4235816621279639]",
"sparse/tests/test_dok.py::test_setitem[shape4-index4-0.5515944453791635]",
"sparse/tests/test_dok.py::test_setitem[shape5-index5-0.893954428051718]",
"sparse/tests/test_dok.py::test_setitem[shape9-index9-0.3964879133291439]",
"sparse/tests/test_dok.py::test_setitem[shape11-index11-0.2820558363678406]",
"sparse/tests/test_dok.py::test_setitem[shape13-index13-0.8532696804595326]"
]
| [
"sparse/__init__.py::flake-8::FLAKE8",
"sparse/_version.py::flake-8::FLAKE8",
"sparse/compatibility.py::flake-8::FLAKE8",
"sparse/dok.py::flake-8::FLAKE8",
"sparse/slicing.py::flake-8::FLAKE8",
"sparse/sparse_array.py::flake-8::FLAKE8",
"sparse/utils.py::flake-8::FLAKE8",
"sparse/coo/__init__.py::flake-8::FLAKE8",
"sparse/coo/common.py::flake-8::FLAKE8",
"sparse/coo/core.py::flake-8::FLAKE8",
"sparse/coo/indexing.py::flake-8::FLAKE8",
"sparse/coo/umath.py::flake-8::FLAKE8",
"sparse/tests/test_coo.py::flake-8::FLAKE8",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func2]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func3]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func4]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func5]",
"sparse/tests/test_dok.py::flake-8::FLAKE8"
]
| [
"sparse/dok.py::sparse.dok.DOK",
"sparse/dok.py::sparse.dok.DOK.from_coo",
"sparse/dok.py::sparse.dok.DOK.from_numpy",
"sparse/dok.py::sparse.dok.DOK.nnz",
"sparse/dok.py::sparse.dok.DOK.to_coo",
"sparse/dok.py::sparse.dok.DOK.todense",
"sparse/slicing.py::sparse.slicing.check_index",
"sparse/slicing.py::sparse.slicing.clip_slice",
"sparse/slicing.py::sparse.slicing.normalize_index",
"sparse/slicing.py::sparse.slicing.posify_index",
"sparse/slicing.py::sparse.slicing.replace_ellipsis",
"sparse/slicing.py::sparse.slicing.replace_none",
"sparse/slicing.py::sparse.slicing.sanitize_index",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.density",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.ndim",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.nnz",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.size",
"sparse/utils.py::sparse.utils.random",
"sparse/coo/core.py::sparse.coo.core.COO",
"sparse/coo/core.py::sparse.coo.core.COO.T",
"sparse/coo/core.py::sparse.coo.core.COO.__len__",
"sparse/coo/core.py::sparse.coo.core.COO._sort_indices",
"sparse/coo/core.py::sparse.coo.core.COO._sum_duplicates",
"sparse/coo/core.py::sparse.coo.core.COO.dot",
"sparse/coo/core.py::sparse.coo.core.COO.dtype",
"sparse/coo/core.py::sparse.coo.core.COO.from_numpy",
"sparse/coo/core.py::sparse.coo.core.COO.from_scipy_sparse",
"sparse/coo/core.py::sparse.coo.core.COO.linear_loc",
"sparse/coo/core.py::sparse.coo.core.COO.max",
"sparse/coo/core.py::sparse.coo.core.COO.maybe_densify",
"sparse/coo/core.py::sparse.coo.core.COO.min",
"sparse/coo/core.py::sparse.coo.core.COO.nbytes",
"sparse/coo/core.py::sparse.coo.core.COO.nnz",
"sparse/coo/core.py::sparse.coo.core.COO.nonzero",
"sparse/coo/core.py::sparse.coo.core.COO.prod",
"sparse/coo/core.py::sparse.coo.core.COO.reduce",
"sparse/coo/core.py::sparse.coo.core.COO.reshape",
"sparse/coo/core.py::sparse.coo.core.COO.sum",
"sparse/coo/core.py::sparse.coo.core.COO.todense",
"sparse/coo/core.py::sparse.coo.core.COO.transpose",
"sparse/coo/indexing.py::sparse.coo.indexing._compute_mask",
"sparse/coo/indexing.py::sparse.coo.indexing._filter_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._get_mask_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._get_slice_len",
"sparse/coo/indexing.py::sparse.coo.indexing._join_adjacent_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._prune_indices",
"sparse/tests/test_coo.py::test_reductions[True-None-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-None-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-0-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-0-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-1-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-1-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-2-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-2-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True--3-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True--3-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True--3-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True--3-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True--3-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-None-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-None-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-0-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-0-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-1-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-1-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-2-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-2-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False--3-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False--3-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False--3-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False--3-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False--3-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[amax-kwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[sum-kwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[prod-kwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs5]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[None-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[None-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[0-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[0-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[1-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[1-nanmin]",
"sparse/tests/test_coo.py::test_transpose[None]",
"sparse/tests/test_coo.py::test_transpose[axis1]",
"sparse/tests/test_coo.py::test_transpose[axis2]",
"sparse/tests/test_coo.py::test_transpose[axis3]",
"sparse/tests/test_coo.py::test_transpose[axis4]",
"sparse/tests/test_coo.py::test_transpose[axis5]",
"sparse/tests/test_coo.py::test_transpose[axis6]",
"sparse/tests/test_coo.py::test_transpose_error[axis0]",
"sparse/tests/test_coo.py::test_transpose_error[axis1]",
"sparse/tests/test_coo.py::test_transpose_error[axis2]",
"sparse/tests/test_coo.py::test_transpose_error[axis3]",
"sparse/tests/test_coo.py::test_transpose_error[axis4]",
"sparse/tests/test_coo.py::test_transpose_error[axis5]",
"sparse/tests/test_coo.py::test_transpose_error[0.3]",
"sparse/tests/test_coo.py::test_transpose_error[axis7]",
"sparse/tests/test_coo.py::test_reshape[a0-b0]",
"sparse/tests/test_coo.py::test_reshape[a1-b1]",
"sparse/tests/test_coo.py::test_reshape[a2-b2]",
"sparse/tests/test_coo.py::test_reshape[a3-b3]",
"sparse/tests/test_coo.py::test_reshape[a4-b4]",
"sparse/tests/test_coo.py::test_reshape[a5-b5]",
"sparse/tests/test_coo.py::test_reshape[a6-b6]",
"sparse/tests/test_coo.py::test_reshape[a7-b7]",
"sparse/tests/test_coo.py::test_reshape[a8-b8]",
"sparse/tests/test_coo.py::test_reshape[a9-b9]",
"sparse/tests/test_coo.py::test_large_reshape",
"sparse/tests/test_coo.py::test_reshape_same",
"sparse/tests/test_coo.py::test_to_scipy_sparse",
"sparse/tests/test_coo.py::test_tensordot[a_shape0-b_shape0-axes0]",
"sparse/tests/test_coo.py::test_tensordot[a_shape1-b_shape1-axes1]",
"sparse/tests/test_coo.py::test_tensordot[a_shape2-b_shape2-axes2]",
"sparse/tests/test_coo.py::test_tensordot[a_shape3-b_shape3-axes3]",
"sparse/tests/test_coo.py::test_tensordot[a_shape4-b_shape4-axes4]",
"sparse/tests/test_coo.py::test_tensordot[a_shape5-b_shape5-axes5]",
"sparse/tests/test_coo.py::test_tensordot[a_shape6-b_shape6-axes6]",
"sparse/tests/test_coo.py::test_tensordot[a_shape7-b_shape7-axes7]",
"sparse/tests/test_coo.py::test_tensordot[a_shape8-b_shape8-axes8]",
"sparse/tests/test_coo.py::test_tensordot[a_shape9-b_shape9-0]",
"sparse/tests/test_coo.py::test_dot[a_shape0-b_shape0]",
"sparse/tests/test_coo.py::test_dot[a_shape1-b_shape1]",
"sparse/tests/test_coo.py::test_dot[a_shape2-b_shape2]",
"sparse/tests/test_coo.py::test_dot[a_shape3-b_shape3]",
"sparse/tests/test_coo.py::test_dot[a_shape4-b_shape4]",
"sparse/tests/test_coo.py::test_elemwise[expm1]",
"sparse/tests/test_coo.py::test_elemwise[log1p]",
"sparse/tests/test_coo.py::test_elemwise[sin]",
"sparse/tests/test_coo.py::test_elemwise[tan]",
"sparse/tests/test_coo.py::test_elemwise[sinh]",
"sparse/tests/test_coo.py::test_elemwise[tanh]",
"sparse/tests/test_coo.py::test_elemwise[floor]",
"sparse/tests/test_coo.py::test_elemwise[ceil]",
"sparse/tests/test_coo.py::test_elemwise[sqrt]",
"sparse/tests/test_coo.py::test_elemwise[conjugate0]",
"sparse/tests/test_coo.py::test_elemwise[round_]",
"sparse/tests/test_coo.py::test_elemwise[rint]",
"sparse/tests/test_coo.py::test_elemwise[<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise[conjugate1]",
"sparse/tests/test_coo.py::test_elemwise[conjugate2]",
"sparse/tests/test_coo.py::test_elemwise[<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise[abs]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-ne]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>3]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape10-shape20-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape10-shape20-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape11-shape21-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape11-shape21-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape12-shape22-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape12-shape22-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape13-shape23-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape13-shape23-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape14-shape24-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape14-shape24-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape15-shape25-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape15-shape25-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape16-shape26-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape16-shape26-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape17-shape27-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape17-shape27-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape18-shape28-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape18-shape28-mul]",
"sparse/tests/test_coo.py::test_broadcast_to[shape10-shape20]",
"sparse/tests/test_coo.py::test_broadcast_to[shape11-shape21]",
"sparse/tests/test_coo.py::test_broadcast_to[shape12-shape22]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_sparse_broadcasting",
"sparse/tests/test_coo.py::test_dense_broadcasting",
"sparse/tests/test_coo.py::test_sparsearray_elemwise[coo]",
"sparse/tests/test_coo.py::test_sparsearray_elemwise[dok]",
"sparse/tests/test_coo.py::test_elemwise_noargs",
"sparse/tests/test_coo.py::test_auto_densification_fails[pow]",
"sparse/tests/test_coo.py::test_auto_densification_fails[truediv]",
"sparse/tests/test_coo.py::test_auto_densification_fails[floordiv]",
"sparse/tests/test_coo.py::test_auto_densification_fails[ge]",
"sparse/tests/test_coo.py::test_auto_densification_fails[le]",
"sparse/tests/test_coo.py::test_auto_densification_fails[eq]",
"sparse/tests/test_coo.py::test_auto_densification_fails[mod]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-mul-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-add-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-sub-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-pow-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-truediv-3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-floordiv-4]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-gt-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-lt--5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-ne-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-ge-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-le--3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-eq-1]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-mod-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-mul-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-add-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-sub-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-pow-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-truediv-3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-floordiv-4]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-gt-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-lt--5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-ne-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-ge-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-le--3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-eq-1]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-mod-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-mul-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-add-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-sub-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-gt--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-lt-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-ne-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-ge--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-le-3]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-eq-1]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-mul-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-add-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-sub-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-gt--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-lt-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-ne-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-ge--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-le-3]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-eq-1]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[add-5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[sub--5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[pow--3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[truediv-0]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[floordiv-0]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[gt--5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[lt-5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[ne-1]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[ge--3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[le-3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[eq-0]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-xor]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape0-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape0-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape1-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape1-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape2-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape2-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape3-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape3-rshift]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape3-and_]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape0-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape0-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape1-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape1-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape2-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape2-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape3-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape3-rshift]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape0-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape1-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape2-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape3-invert]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape0-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape0-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape1-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape1-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape2-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape2-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape3-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape3-xor]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape0-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape0-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape1-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape1-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape2-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape2-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape3-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape3-rshift]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-xor]",
"sparse/tests/test_coo.py::test_elemwise_binary_empty",
"sparse/tests/test_coo.py::test_gt",
"sparse/tests/test_coo.py::test_slicing[0]",
"sparse/tests/test_coo.py::test_slicing[1]",
"sparse/tests/test_coo.py::test_slicing[-1]",
"sparse/tests/test_coo.py::test_slicing[index3]",
"sparse/tests/test_coo.py::test_slicing[index4]",
"sparse/tests/test_coo.py::test_slicing[index5]",
"sparse/tests/test_coo.py::test_slicing[index6]",
"sparse/tests/test_coo.py::test_slicing[index7]",
"sparse/tests/test_coo.py::test_slicing[index8]",
"sparse/tests/test_coo.py::test_slicing[index9]",
"sparse/tests/test_coo.py::test_slicing[index10]",
"sparse/tests/test_coo.py::test_slicing[index11]",
"sparse/tests/test_coo.py::test_slicing[index12]",
"sparse/tests/test_coo.py::test_slicing[index13]",
"sparse/tests/test_coo.py::test_slicing[index14]",
"sparse/tests/test_coo.py::test_slicing[index15]",
"sparse/tests/test_coo.py::test_slicing[index16]",
"sparse/tests/test_coo.py::test_slicing[index17]",
"sparse/tests/test_coo.py::test_slicing[index18]",
"sparse/tests/test_coo.py::test_slicing[index19]",
"sparse/tests/test_coo.py::test_slicing[index20]",
"sparse/tests/test_coo.py::test_slicing[index21]",
"sparse/tests/test_coo.py::test_slicing[index22]",
"sparse/tests/test_coo.py::test_slicing[index23]",
"sparse/tests/test_coo.py::test_slicing[index24]",
"sparse/tests/test_coo.py::test_slicing[index25]",
"sparse/tests/test_coo.py::test_slicing[index26]",
"sparse/tests/test_coo.py::test_slicing[index27]",
"sparse/tests/test_coo.py::test_slicing[index28]",
"sparse/tests/test_coo.py::test_slicing[index29]",
"sparse/tests/test_coo.py::test_slicing[index30]",
"sparse/tests/test_coo.py::test_slicing[index31]",
"sparse/tests/test_coo.py::test_slicing[index32]",
"sparse/tests/test_coo.py::test_slicing[index33]",
"sparse/tests/test_coo.py::test_slicing[index34]",
"sparse/tests/test_coo.py::test_slicing[index35]",
"sparse/tests/test_coo.py::test_slicing[index36]",
"sparse/tests/test_coo.py::test_slicing[index37]",
"sparse/tests/test_coo.py::test_slicing[index38]",
"sparse/tests/test_coo.py::test_slicing[index39]",
"sparse/tests/test_coo.py::test_slicing[index40]",
"sparse/tests/test_coo.py::test_slicing[index41]",
"sparse/tests/test_coo.py::test_slicing[index42]",
"sparse/tests/test_coo.py::test_slicing[index43]",
"sparse/tests/test_coo.py::test_slicing[index44]",
"sparse/tests/test_coo.py::test_slicing[index45]",
"sparse/tests/test_coo.py::test_custom_dtype_slicing",
"sparse/tests/test_coo.py::test_slicing_errors[index0]",
"sparse/tests/test_coo.py::test_slicing_errors[index1]",
"sparse/tests/test_coo.py::test_slicing_errors[index2]",
"sparse/tests/test_coo.py::test_slicing_errors[5]",
"sparse/tests/test_coo.py::test_slicing_errors[-5]",
"sparse/tests/test_coo.py::test_slicing_errors[foo]",
"sparse/tests/test_coo.py::test_slicing_errors[index6]",
"sparse/tests/test_coo.py::test_slicing_errors[0.5]",
"sparse/tests/test_coo.py::test_slicing_errors[index8]",
"sparse/tests/test_coo.py::test_slicing_errors[index9]",
"sparse/tests/test_coo.py::test_concatenate",
"sparse/tests/test_coo.py::test_concatenate_mixed[stack-0]",
"sparse/tests/test_coo.py::test_concatenate_mixed[stack-1]",
"sparse/tests/test_coo.py::test_concatenate_mixed[concatenate-0]",
"sparse/tests/test_coo.py::test_concatenate_mixed[concatenate-1]",
"sparse/tests/test_coo.py::test_stack[0-shape0]",
"sparse/tests/test_coo.py::test_stack[0-shape1]",
"sparse/tests/test_coo.py::test_stack[0-shape2]",
"sparse/tests/test_coo.py::test_stack[1-shape0]",
"sparse/tests/test_coo.py::test_stack[1-shape1]",
"sparse/tests/test_coo.py::test_stack[1-shape2]",
"sparse/tests/test_coo.py::test_stack[-1-shape0]",
"sparse/tests/test_coo.py::test_stack[-1-shape1]",
"sparse/tests/test_coo.py::test_stack[-1-shape2]",
"sparse/tests/test_coo.py::test_large_concat_stack",
"sparse/tests/test_coo.py::test_coord_dtype",
"sparse/tests/test_coo.py::test_addition",
"sparse/tests/test_coo.py::test_addition_not_ok_when_large_and_sparse",
"sparse/tests/test_coo.py::test_scalar_multiplication[2]",
"sparse/tests/test_coo.py::test_scalar_multiplication[2.5]",
"sparse/tests/test_coo.py::test_scalar_multiplication[scalar2]",
"sparse/tests/test_coo.py::test_scalar_multiplication[scalar3]",
"sparse/tests/test_coo.py::test_scalar_exponentiation",
"sparse/tests/test_coo.py::test_create_with_lists_of_tuples",
"sparse/tests/test_coo.py::test_sizeof",
"sparse/tests/test_coo.py::test_scipy_sparse_interface",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[coo]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[csr]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[dok]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[csc]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[mul]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[add]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[sub]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[gt]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[lt]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[ne]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[add]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[sub]",
"sparse/tests/test_coo.py::test_cache_csr",
"sparse/tests/test_coo.py::test_empty_shape",
"sparse/tests/test_coo.py::test_single_dimension",
"sparse/tests/test_coo.py::test_raise_dense",
"sparse/tests/test_coo.py::test_large_sum",
"sparse/tests/test_coo.py::test_add_many_sparse_arrays",
"sparse/tests/test_coo.py::test_caching",
"sparse/tests/test_coo.py::test_scalar_slicing",
"sparse/tests/test_coo.py::test_triul[shape0-0]",
"sparse/tests/test_coo.py::test_triul[shape1-1]",
"sparse/tests/test_coo.py::test_triul[shape2--1]",
"sparse/tests/test_coo.py::test_triul[shape3--2]",
"sparse/tests/test_coo.py::test_triul[shape4-1000]",
"sparse/tests/test_coo.py::test_empty_reduction",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape2]",
"sparse/tests/test_coo.py::test_two_random_unequal",
"sparse/tests/test_coo.py::test_two_random_same_seed",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_scalar_shape_construction",
"sparse/tests/test_coo.py::test_len",
"sparse/tests/test_coo.py::test_density",
"sparse/tests/test_coo.py::test_size",
"sparse/tests/test_coo.py::test_np_array",
"sparse/tests/test_coo.py::test_three_arg_where[shapes0]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes1]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes2]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes3]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes4]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes5]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes6]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes7]",
"sparse/tests/test_coo.py::test_one_arg_where",
"sparse/tests/test_coo.py::test_one_arg_where_dense",
"sparse/tests/test_coo.py::test_two_arg_where",
"sparse/tests/test_coo.py::test_nonzero",
"sparse/tests/test_coo.py::test_argwhere",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape2]",
"sparse/tests/test_dok.py::test_convert_to_coo",
"sparse/tests/test_dok.py::test_convert_from_coo",
"sparse/tests/test_dok.py::test_convert_from_numpy",
"sparse/tests/test_dok.py::test_convert_to_numpy",
"sparse/tests/test_dok.py::test_construct[2-data0]",
"sparse/tests/test_dok.py::test_construct[shape1-data1]",
"sparse/tests/test_dok.py::test_construct[shape2-data2]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape2]",
"sparse/tests/test_dok.py::test_setitem[shape2-index2-value2]",
"sparse/tests/test_dok.py::test_setitem[shape6-index6-value6]",
"sparse/tests/test_dok.py::test_setitem[shape7-index7-value7]",
"sparse/tests/test_dok.py::test_setitem[shape8-index8-value8]",
"sparse/tests/test_dok.py::test_setitem[shape10-index10-value10]",
"sparse/tests/test_dok.py::test_setitem[shape12-index12-value12]",
"sparse/tests/test_dok.py::test_default_dtype",
"sparse/tests/test_dok.py::test_int_dtype",
"sparse/tests/test_dok.py::test_float_dtype",
"sparse/tests/test_dok.py::test_set_zero"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,473 | [
".codecov.yml",
"sparse/coo/umath.py",
".coveragerc",
"docs/operations.rst",
"sparse/coo/core.py"
]
| [
".codecov.yml",
"sparse/coo/umath.py",
".coveragerc",
"docs/operations.rst",
"sparse/coo/core.py"
]
|
pydata__sparse-148 | 8f2a9aebe595762eace6bc48531119462f979e21 | 2018-05-03 18:53:16 | b03b6b9a480a10a3cf59d7994292b9c5d3015cd5 | codecov-io: # [Codecov](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=h1) Report
> Merging [#148](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=desc) into [master](https://codecov.io/gh/pydata/sparse/commit/8f2a9aebe595762eace6bc48531119462f979e21?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #148 +/- ##
==========================================
+ Coverage 95.81% 95.82% +<.01%
==========================================
Files 10 10
Lines 1195 1197 +2
==========================================
+ Hits 1145 1147 +2
Misses 50 50
```
| [Impacted Files](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [sparse/coo/core.py](https://codecov.io/gh/pydata/sparse/pull/148/diff?src=pr&el=tree#diff-c3BhcnNlL2Nvby9jb3JlLnB5) | `93.71% <100%> (+0.03%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=footer). Last update [8f2a9ae...22cc851](https://codecov.io/gh/pydata/sparse/pull/148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/docs/generated/sparse.COO.nonzero.rst b/docs/generated/sparse.COO.nonzero.rst
new file mode 100644
index 0000000..8f0cdd2
--- /dev/null
+++ b/docs/generated/sparse.COO.nonzero.rst
@@ -0,0 +1,6 @@
+COO.nonzero
+===========
+
+.. currentmodule:: sparse
+
+.. automethod:: COO.nonzero
\ No newline at end of file
diff --git a/docs/generated/sparse.COO.rst b/docs/generated/sparse.COO.rst
index 1e2452c..d2438ab 100644
--- a/docs/generated/sparse.COO.rst
+++ b/docs/generated/sparse.COO.rst
@@ -61,6 +61,7 @@ COO
COO.dot
COO.reshape
COO.transpose
+ COO.nonzero
.. rubric:: Utility functions
.. autosummary::
diff --git a/sparse/coo/core.py b/sparse/coo/core.py
index 831e6f7..331e21f 100644
--- a/sparse/coo/core.py
+++ b/sparse/coo/core.py
@@ -1437,6 +1437,27 @@ class COO(SparseArray, NDArrayOperatorsMixin):
raise ValueError("Operation would require converting "
"large sparse array to dense")
+ def nonzero(self):
+ """
+ Get the indices where this array is nonzero.
+
+ Returns
+ -------
+ idx : tuple[numpy.ndarray]
+ The indices where this array is nonzero.
+
+ See Also
+ --------
+ :obj:`numpy.ndarray.nonzero` : NumPy equivalent function
+
+ Examples
+ --------
+ >>> s = COO.from_numpy(np.eye(5))
+ >>> s.nonzero()
+ (array([0, 1, 2, 3, 4], dtype=uint8), array([0, 1, 2, 3, 4], dtype=uint8))
+ """
+ return tuple(self.coords)
+
def _keepdims(original, new, axis):
shape = list(original.shape)
| Grabbing nonzero indices
Is the preferred way to access the coordinates/indices to use something like the following?
```
i, j, k = arr.coords # for a 3d array
```
Is there a reason why `coo` arrays don't implement `nonzero` as a method? I think this would make `np.argwhere(arr)` work as well without converting to a dense array.
Huge thanks for developing this package -- it will be very useful to me. | pydata/sparse | diff --git a/sparse/tests/test_coo.py b/sparse/tests/test_coo.py
index 4a3e8c8..74bcce0 100644
--- a/sparse/tests/test_coo.py
+++ b/sparse/tests/test_coo.py
@@ -1374,3 +1374,24 @@ def test_two_arg_where():
with pytest.raises(ValueError):
sparse.where(cs, xs)
+
+
+def test_nonzero():
+ s = sparse.random((2, 3, 4), density=0.5)
+ x = s.todense()
+
+ expected = x.nonzero()
+ actual = s.nonzero()
+
+ assert isinstance(actual, tuple)
+ assert len(expected) == len(actual)
+
+ for e, a in zip(expected, actual):
+ assert_eq(e, a, compare_dtype=False)
+
+
+def test_argwhere():
+ s = sparse.random((2, 3, 4), density=0.5)
+ x = s.todense()
+
+ assert_eq(np.argwhere(s), np.argwhere(x), compare_dtype=False)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 2
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-flake8"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
asv==0.5.1
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
coverage==6.2
distlib==0.3.9
docutils==0.17.1
filelock==3.4.1
flake8==5.0.4
idna==3.10
imagesize==1.4.1
importlib-metadata==4.2.0
importlib-resources==5.4.0
iniconfig==1.1.1
Jinja2==3.0.3
llvmlite==0.36.0
MarkupSafe==2.0.1
mccabe==0.7.0
numba==0.53.1
numpy==1.19.5
packaging==21.3
platformdirs==2.4.0
pluggy==1.0.0
pockets==0.9.1
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
pytest-flake8==1.1.1
pytz==2025.2
requests==2.27.1
scipy==1.5.4
six==1.17.0
snowballstemmer==2.2.0
-e git+https://github.com/pydata/sparse.git@8f2a9aebe595762eace6bc48531119462f979e21#egg=sparse
Sphinx==4.3.2
sphinx-rtd-theme==1.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-napoleon==0.7
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml==0.10.2
tomli==1.2.3
tox==3.28.0
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.16.2
zipp==3.6.0
| name: sparse
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- asv==0.5.1
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- coverage==6.2
- distlib==0.3.9
- docutils==0.17.1
- filelock==3.4.1
- flake8==5.0.4
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.2.0
- importlib-resources==5.4.0
- iniconfig==1.1.1
- jinja2==3.0.3
- llvmlite==0.36.0
- markupsafe==2.0.1
- mccabe==0.7.0
- numba==0.53.1
- numpy==1.19.5
- packaging==21.3
- platformdirs==2.4.0
- pluggy==1.0.0
- pockets==0.9.1
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-flake8==1.1.1
- pytz==2025.2
- requests==2.27.1
- scipy==1.5.4
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinx-rtd-theme==1.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-napoleon==0.7
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- toml==0.10.2
- tomli==1.2.3
- tox==3.28.0
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.16.2
- zipp==3.6.0
prefix: /opt/conda/envs/sparse
| [
"sparse/coo/core.py::sparse.coo.core.COO.nonzero",
"sparse/tests/test_coo.py::test_nonzero",
"sparse/tests/test_dok.py::test_setitem[shape0-index0-0.3821439909639458]",
"sparse/tests/test_dok.py::test_setitem[shape1-index1-0.4288669685410029]",
"sparse/tests/test_dok.py::test_setitem[shape3-1-0.2193180657362609]",
"sparse/tests/test_dok.py::test_setitem[shape4-index4-0.34491241690174457]",
"sparse/tests/test_dok.py::test_setitem[shape5-index5-0.9899349623609871]",
"sparse/tests/test_dok.py::test_setitem[shape9-index9-0.9740207178457831]",
"sparse/tests/test_dok.py::test_setitem[shape11-index11-0.749350512496903]",
"sparse/tests/test_dok.py::test_setitem[shape13-index13-0.335354199815068]"
]
| [
"sparse/__init__.py::flake-8::FLAKE8",
"sparse/_version.py::flake-8::FLAKE8",
"sparse/compatibility.py::flake-8::FLAKE8",
"sparse/dok.py::flake-8::FLAKE8",
"sparse/slicing.py::flake-8::FLAKE8",
"sparse/sparse_array.py::flake-8::FLAKE8",
"sparse/utils.py::flake-8::FLAKE8",
"sparse/coo/__init__.py::flake-8::FLAKE8",
"sparse/coo/common.py::flake-8::FLAKE8",
"sparse/coo/core.py::flake-8::FLAKE8",
"sparse/coo/indexing.py::flake-8::FLAKE8",
"sparse/coo/umath.py::flake-8::FLAKE8",
"sparse/tests/test_coo.py::flake-8::FLAKE8",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func2]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func3]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func4]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[func5]",
"sparse/tests/test_dok.py::flake-8::FLAKE8"
]
| [
"sparse/dok.py::sparse.dok.DOK",
"sparse/dok.py::sparse.dok.DOK.from_coo",
"sparse/dok.py::sparse.dok.DOK.from_numpy",
"sparse/dok.py::sparse.dok.DOK.nnz",
"sparse/dok.py::sparse.dok.DOK.to_coo",
"sparse/dok.py::sparse.dok.DOK.todense",
"sparse/slicing.py::sparse.slicing.check_index",
"sparse/slicing.py::sparse.slicing.clip_slice",
"sparse/slicing.py::sparse.slicing.normalize_index",
"sparse/slicing.py::sparse.slicing.posify_index",
"sparse/slicing.py::sparse.slicing.replace_ellipsis",
"sparse/slicing.py::sparse.slicing.replace_none",
"sparse/slicing.py::sparse.slicing.sanitize_index",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.density",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.ndim",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.nnz",
"sparse/sparse_array.py::sparse.sparse_array.SparseArray.size",
"sparse/utils.py::sparse.utils.random",
"sparse/coo/core.py::sparse.coo.core.COO",
"sparse/coo/core.py::sparse.coo.core.COO.T",
"sparse/coo/core.py::sparse.coo.core.COO.__len__",
"sparse/coo/core.py::sparse.coo.core.COO._sort_indices",
"sparse/coo/core.py::sparse.coo.core.COO._sum_duplicates",
"sparse/coo/core.py::sparse.coo.core.COO.dot",
"sparse/coo/core.py::sparse.coo.core.COO.dtype",
"sparse/coo/core.py::sparse.coo.core.COO.from_numpy",
"sparse/coo/core.py::sparse.coo.core.COO.from_scipy_sparse",
"sparse/coo/core.py::sparse.coo.core.COO.linear_loc",
"sparse/coo/core.py::sparse.coo.core.COO.max",
"sparse/coo/core.py::sparse.coo.core.COO.maybe_densify",
"sparse/coo/core.py::sparse.coo.core.COO.min",
"sparse/coo/core.py::sparse.coo.core.COO.nbytes",
"sparse/coo/core.py::sparse.coo.core.COO.nnz",
"sparse/coo/core.py::sparse.coo.core.COO.prod",
"sparse/coo/core.py::sparse.coo.core.COO.reduce",
"sparse/coo/core.py::sparse.coo.core.COO.reshape",
"sparse/coo/core.py::sparse.coo.core.COO.sum",
"sparse/coo/core.py::sparse.coo.core.COO.todense",
"sparse/coo/core.py::sparse.coo.core.COO.transpose",
"sparse/coo/indexing.py::sparse.coo.indexing._compute_mask",
"sparse/coo/indexing.py::sparse.coo.indexing._filter_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._get_mask_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._get_slice_len",
"sparse/coo/indexing.py::sparse.coo.indexing._join_adjacent_pairs",
"sparse/coo/indexing.py::sparse.coo.indexing._prune_indices",
"sparse/tests/test_coo.py::test_reductions[True-None-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-None-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-0-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-0-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-1-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-1-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-2-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-2-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-axis4-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True--3-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True--3-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True--3-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True--3-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True--3-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[True-axis6-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-None-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-None-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-0-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-0-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-1-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-1-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-2-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-2-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-axis4-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False--3-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False--3-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False--3-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False--3-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False--3-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-max-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_reductions[False-axis6-min-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-None-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-0-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-2-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis4-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True--1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[True-axis6-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-None-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-0-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-2-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis4-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False--1-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-amax-kwargs0-eqkwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-sum-kwargs1-eqkwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-sum-kwargs2-eqkwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-prod-kwargs3-eqkwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions[False-axis6-amin-kwargs4-eqkwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[amax-kwargs0]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[sum-kwargs1]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[prod-kwargs2]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs3]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs4]",
"sparse/tests/test_coo.py::test_ufunc_reductions_kwargs[reduce-kwargs5]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.25-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.5-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[0.75-False-1-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-None-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-0-nanmin]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nansum]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanprod]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanmax]",
"sparse/tests/test_coo.py::test_nan_reductions[1.0-False-1-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[None-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[None-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[0-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[0-nanmin]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[1-nanmax]",
"sparse/tests/test_coo.py::test_all_nan_reduction_warning[1-nanmin]",
"sparse/tests/test_coo.py::test_transpose[None]",
"sparse/tests/test_coo.py::test_transpose[axis1]",
"sparse/tests/test_coo.py::test_transpose[axis2]",
"sparse/tests/test_coo.py::test_transpose[axis3]",
"sparse/tests/test_coo.py::test_transpose[axis4]",
"sparse/tests/test_coo.py::test_transpose[axis5]",
"sparse/tests/test_coo.py::test_transpose[axis6]",
"sparse/tests/test_coo.py::test_transpose_error[axis0]",
"sparse/tests/test_coo.py::test_transpose_error[axis1]",
"sparse/tests/test_coo.py::test_transpose_error[axis2]",
"sparse/tests/test_coo.py::test_transpose_error[axis3]",
"sparse/tests/test_coo.py::test_transpose_error[axis4]",
"sparse/tests/test_coo.py::test_transpose_error[axis5]",
"sparse/tests/test_coo.py::test_transpose_error[0.3]",
"sparse/tests/test_coo.py::test_transpose_error[axis7]",
"sparse/tests/test_coo.py::test_reshape[a0-b0]",
"sparse/tests/test_coo.py::test_reshape[a1-b1]",
"sparse/tests/test_coo.py::test_reshape[a2-b2]",
"sparse/tests/test_coo.py::test_reshape[a3-b3]",
"sparse/tests/test_coo.py::test_reshape[a4-b4]",
"sparse/tests/test_coo.py::test_reshape[a5-b5]",
"sparse/tests/test_coo.py::test_reshape[a6-b6]",
"sparse/tests/test_coo.py::test_reshape[a7-b7]",
"sparse/tests/test_coo.py::test_reshape[a8-b8]",
"sparse/tests/test_coo.py::test_reshape[a9-b9]",
"sparse/tests/test_coo.py::test_large_reshape",
"sparse/tests/test_coo.py::test_reshape_same",
"sparse/tests/test_coo.py::test_to_scipy_sparse",
"sparse/tests/test_coo.py::test_tensordot[a_shape0-b_shape0-axes0]",
"sparse/tests/test_coo.py::test_tensordot[a_shape1-b_shape1-axes1]",
"sparse/tests/test_coo.py::test_tensordot[a_shape2-b_shape2-axes2]",
"sparse/tests/test_coo.py::test_tensordot[a_shape3-b_shape3-axes3]",
"sparse/tests/test_coo.py::test_tensordot[a_shape4-b_shape4-axes4]",
"sparse/tests/test_coo.py::test_tensordot[a_shape5-b_shape5-axes5]",
"sparse/tests/test_coo.py::test_tensordot[a_shape6-b_shape6-axes6]",
"sparse/tests/test_coo.py::test_tensordot[a_shape7-b_shape7-axes7]",
"sparse/tests/test_coo.py::test_tensordot[a_shape8-b_shape8-axes8]",
"sparse/tests/test_coo.py::test_tensordot[a_shape9-b_shape9-0]",
"sparse/tests/test_coo.py::test_dot[a_shape0-b_shape0]",
"sparse/tests/test_coo.py::test_dot[a_shape1-b_shape1]",
"sparse/tests/test_coo.py::test_dot[a_shape2-b_shape2]",
"sparse/tests/test_coo.py::test_dot[a_shape3-b_shape3]",
"sparse/tests/test_coo.py::test_dot[a_shape4-b_shape4]",
"sparse/tests/test_coo.py::test_elemwise[expm1]",
"sparse/tests/test_coo.py::test_elemwise[log1p]",
"sparse/tests/test_coo.py::test_elemwise[sin]",
"sparse/tests/test_coo.py::test_elemwise[tan]",
"sparse/tests/test_coo.py::test_elemwise[sinh]",
"sparse/tests/test_coo.py::test_elemwise[tanh]",
"sparse/tests/test_coo.py::test_elemwise[floor]",
"sparse/tests/test_coo.py::test_elemwise[ceil]",
"sparse/tests/test_coo.py::test_elemwise[sqrt]",
"sparse/tests/test_coo.py::test_elemwise[conjugate0]",
"sparse/tests/test_coo.py::test_elemwise[round_]",
"sparse/tests/test_coo.py::test_elemwise[rint]",
"sparse/tests/test_coo.py::test_elemwise[<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise[conjugate1]",
"sparse/tests/test_coo.py::test_elemwise[conjugate2]",
"sparse/tests/test_coo.py::test_elemwise[<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise[abs]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape0-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape1-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape2-ne]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-mul]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-add]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-sub]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-gt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-lt]",
"sparse/tests/test_coo.py::test_elemwise_binary[shape3-ne]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape0-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape1-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape2-<lambda>3]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>0]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>1]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>2]",
"sparse/tests/test_coo.py::test_elemwise_trinary[shape3-<lambda>3]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape10-shape20-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape10-shape20-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape11-shape21-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape11-shape21-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape12-shape22-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape12-shape22-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape13-shape23-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape13-shape23-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape14-shape24-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape14-shape24-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape15-shape25-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape15-shape25-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape16-shape26-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape16-shape26-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape17-shape27-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape17-shape27-mul]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape18-shape28-add]",
"sparse/tests/test_coo.py::test_binary_broadcasting[shape18-shape28-mul]",
"sparse/tests/test_coo.py::test_broadcast_to[shape10-shape20]",
"sparse/tests/test_coo.py::test_broadcast_to[shape11-shape21]",
"sparse/tests/test_coo.py::test_broadcast_to[shape12-shape22]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>0-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>1-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>2-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>3-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>4-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes0]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes1]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes2]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes3]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes4]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes5]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes6]",
"sparse/tests/test_coo.py::test_trinary_broadcasting[<lambda>5-shapes7]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.25--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.5--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[0.75--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-nan-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0-inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes0-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes1-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes2-<lambda>]",
"sparse/tests/test_coo.py::test_trinary_broadcasting_pathological[1.0--inf-shapes3-<lambda>]",
"sparse/tests/test_coo.py::test_sparse_broadcasting",
"sparse/tests/test_coo.py::test_dense_broadcasting",
"sparse/tests/test_coo.py::test_sparsearray_elemwise[coo]",
"sparse/tests/test_coo.py::test_sparsearray_elemwise[dok]",
"sparse/tests/test_coo.py::test_ndarray_densification_fails",
"sparse/tests/test_coo.py::test_elemwise_noargs",
"sparse/tests/test_coo.py::test_auto_densification_fails[pow]",
"sparse/tests/test_coo.py::test_auto_densification_fails[truediv]",
"sparse/tests/test_coo.py::test_auto_densification_fails[floordiv]",
"sparse/tests/test_coo.py::test_auto_densification_fails[ge]",
"sparse/tests/test_coo.py::test_auto_densification_fails[le]",
"sparse/tests/test_coo.py::test_auto_densification_fails[eq]",
"sparse/tests/test_coo.py::test_auto_densification_fails[mod]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-mul-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-add-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-sub-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-pow-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-truediv-3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-floordiv-4]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-gt-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-lt--5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-ne-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-ge-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-le--3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-eq-1]",
"sparse/tests/test_coo.py::test_elemwise_scalar[True-mod-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-mul-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-add-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-sub-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-pow-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-truediv-3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-floordiv-4]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-gt-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-lt--5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-ne-0]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-ge-5]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-le--3]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-eq-1]",
"sparse/tests/test_coo.py::test_elemwise_scalar[False-mod-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-mul-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-add-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-sub-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-gt--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-lt-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-ne-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-ge--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-le-3]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[True-eq-1]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-mul-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-add-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-sub-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-gt--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-lt-5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-ne-0]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-ge--5]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-le-3]",
"sparse/tests/test_coo.py::test_leftside_elemwise_scalar[False-eq-1]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[add-5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[sub--5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[pow--3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[truediv-0]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[floordiv-0]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[gt--5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[lt-5]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[ne-1]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[ge--3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[le-3]",
"sparse/tests/test_coo.py::test_scalar_densification_fails[eq-0]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape0-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape1-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape2-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary[shape3-xor]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape0-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape0-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape1-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape1-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape2-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape2-rshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape3-lshift]",
"sparse/tests/test_coo.py::test_bitshift_binary[shape3-rshift]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_scalar[shape3-and_]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape0-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape0-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape1-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape1-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape2-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape2-rshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape3-lshift]",
"sparse/tests/test_coo.py::test_bitshift_scalar[shape3-rshift]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape0-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape1-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape2-invert]",
"sparse/tests/test_coo.py::test_unary_bitwise_densification_fails[shape3-invert]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape0-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape0-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape1-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape1-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape2-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape2-xor]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape3-or_]",
"sparse/tests/test_coo.py::test_binary_bitwise_densification_fails[shape3-xor]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape0-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape0-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape1-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape1-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape2-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape2-rshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape3-lshift]",
"sparse/tests/test_coo.py::test_binary_bitshift_densification_fails[shape3-rshift]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape0-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape1-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape2-xor]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-and_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-or_]",
"sparse/tests/test_coo.py::test_bitwise_binary_bool[shape3-xor]",
"sparse/tests/test_coo.py::test_elemwise_binary_empty",
"sparse/tests/test_coo.py::test_gt",
"sparse/tests/test_coo.py::test_slicing[0]",
"sparse/tests/test_coo.py::test_slicing[1]",
"sparse/tests/test_coo.py::test_slicing[-1]",
"sparse/tests/test_coo.py::test_slicing[index3]",
"sparse/tests/test_coo.py::test_slicing[index4]",
"sparse/tests/test_coo.py::test_slicing[index5]",
"sparse/tests/test_coo.py::test_slicing[index6]",
"sparse/tests/test_coo.py::test_slicing[index7]",
"sparse/tests/test_coo.py::test_slicing[index8]",
"sparse/tests/test_coo.py::test_slicing[index9]",
"sparse/tests/test_coo.py::test_slicing[index10]",
"sparse/tests/test_coo.py::test_slicing[index11]",
"sparse/tests/test_coo.py::test_slicing[index12]",
"sparse/tests/test_coo.py::test_slicing[index13]",
"sparse/tests/test_coo.py::test_slicing[index14]",
"sparse/tests/test_coo.py::test_slicing[index15]",
"sparse/tests/test_coo.py::test_slicing[index16]",
"sparse/tests/test_coo.py::test_slicing[index17]",
"sparse/tests/test_coo.py::test_slicing[index18]",
"sparse/tests/test_coo.py::test_slicing[index19]",
"sparse/tests/test_coo.py::test_slicing[index20]",
"sparse/tests/test_coo.py::test_slicing[index21]",
"sparse/tests/test_coo.py::test_slicing[index22]",
"sparse/tests/test_coo.py::test_slicing[index23]",
"sparse/tests/test_coo.py::test_slicing[index24]",
"sparse/tests/test_coo.py::test_slicing[index25]",
"sparse/tests/test_coo.py::test_slicing[index26]",
"sparse/tests/test_coo.py::test_slicing[index27]",
"sparse/tests/test_coo.py::test_slicing[index28]",
"sparse/tests/test_coo.py::test_slicing[index29]",
"sparse/tests/test_coo.py::test_slicing[index30]",
"sparse/tests/test_coo.py::test_slicing[index31]",
"sparse/tests/test_coo.py::test_slicing[index32]",
"sparse/tests/test_coo.py::test_slicing[index33]",
"sparse/tests/test_coo.py::test_slicing[index34]",
"sparse/tests/test_coo.py::test_slicing[index35]",
"sparse/tests/test_coo.py::test_slicing[index36]",
"sparse/tests/test_coo.py::test_slicing[index37]",
"sparse/tests/test_coo.py::test_slicing[index38]",
"sparse/tests/test_coo.py::test_slicing[index39]",
"sparse/tests/test_coo.py::test_slicing[index40]",
"sparse/tests/test_coo.py::test_slicing[index41]",
"sparse/tests/test_coo.py::test_slicing[index42]",
"sparse/tests/test_coo.py::test_slicing[index43]",
"sparse/tests/test_coo.py::test_slicing[index44]",
"sparse/tests/test_coo.py::test_slicing[index45]",
"sparse/tests/test_coo.py::test_custom_dtype_slicing",
"sparse/tests/test_coo.py::test_slicing_errors[index0]",
"sparse/tests/test_coo.py::test_slicing_errors[index1]",
"sparse/tests/test_coo.py::test_slicing_errors[index2]",
"sparse/tests/test_coo.py::test_slicing_errors[5]",
"sparse/tests/test_coo.py::test_slicing_errors[-5]",
"sparse/tests/test_coo.py::test_slicing_errors[foo]",
"sparse/tests/test_coo.py::test_slicing_errors[index6]",
"sparse/tests/test_coo.py::test_slicing_errors[0.5]",
"sparse/tests/test_coo.py::test_slicing_errors[index8]",
"sparse/tests/test_coo.py::test_slicing_errors[index9]",
"sparse/tests/test_coo.py::test_concatenate",
"sparse/tests/test_coo.py::test_concatenate_mixed[stack-0]",
"sparse/tests/test_coo.py::test_concatenate_mixed[stack-1]",
"sparse/tests/test_coo.py::test_concatenate_mixed[concatenate-0]",
"sparse/tests/test_coo.py::test_concatenate_mixed[concatenate-1]",
"sparse/tests/test_coo.py::test_stack[0-shape0]",
"sparse/tests/test_coo.py::test_stack[0-shape1]",
"sparse/tests/test_coo.py::test_stack[0-shape2]",
"sparse/tests/test_coo.py::test_stack[1-shape0]",
"sparse/tests/test_coo.py::test_stack[1-shape1]",
"sparse/tests/test_coo.py::test_stack[1-shape2]",
"sparse/tests/test_coo.py::test_stack[-1-shape0]",
"sparse/tests/test_coo.py::test_stack[-1-shape1]",
"sparse/tests/test_coo.py::test_stack[-1-shape2]",
"sparse/tests/test_coo.py::test_large_concat_stack",
"sparse/tests/test_coo.py::test_coord_dtype",
"sparse/tests/test_coo.py::test_addition",
"sparse/tests/test_coo.py::test_addition_not_ok_when_large_and_sparse",
"sparse/tests/test_coo.py::test_scalar_multiplication[2]",
"sparse/tests/test_coo.py::test_scalar_multiplication[2.5]",
"sparse/tests/test_coo.py::test_scalar_multiplication[scalar2]",
"sparse/tests/test_coo.py::test_scalar_multiplication[scalar3]",
"sparse/tests/test_coo.py::test_scalar_exponentiation",
"sparse/tests/test_coo.py::test_create_with_lists_of_tuples",
"sparse/tests/test_coo.py::test_sizeof",
"sparse/tests/test_coo.py::test_scipy_sparse_interface",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[coo]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[csr]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[dok]",
"sparse/tests/test_coo.py::test_scipy_sparse_interaction[csc]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[mul]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[add]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[sub]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[gt]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[lt]",
"sparse/tests/test_coo.py::test_op_scipy_sparse[ne]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[add]",
"sparse/tests/test_coo.py::test_op_scipy_sparse_left[sub]",
"sparse/tests/test_coo.py::test_cache_csr",
"sparse/tests/test_coo.py::test_empty_shape",
"sparse/tests/test_coo.py::test_single_dimension",
"sparse/tests/test_coo.py::test_raise_dense",
"sparse/tests/test_coo.py::test_large_sum",
"sparse/tests/test_coo.py::test_add_many_sparse_arrays",
"sparse/tests/test_coo.py::test_caching",
"sparse/tests/test_coo.py::test_scalar_slicing",
"sparse/tests/test_coo.py::test_triul[shape0-0]",
"sparse/tests/test_coo.py::test_triul[shape1-1]",
"sparse/tests/test_coo.py::test_triul[shape2--1]",
"sparse/tests/test_coo.py::test_triul[shape3--2]",
"sparse/tests/test_coo.py::test_triul[shape4-1000]",
"sparse/tests/test_coo.py::test_empty_reduction",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.1-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.3-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.5-shape2]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape0]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape1]",
"sparse/tests/test_coo.py::test_random_shape[0.7-shape2]",
"sparse/tests/test_coo.py::test_two_random_unequal",
"sparse/tests/test_coo.py::test_two_random_same_seed",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.0-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.01-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.1-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape0-<lambda>-bool]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-None-float64]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-rvs-int]",
"sparse/tests/test_coo.py::test_random_rvs[0.2-shape1-<lambda>-bool]",
"sparse/tests/test_coo.py::test_scalar_shape_construction",
"sparse/tests/test_coo.py::test_len",
"sparse/tests/test_coo.py::test_density",
"sparse/tests/test_coo.py::test_size",
"sparse/tests/test_coo.py::test_np_array",
"sparse/tests/test_coo.py::test_three_arg_where[shapes0]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes1]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes2]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes3]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes4]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes5]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes6]",
"sparse/tests/test_coo.py::test_three_arg_where[shapes7]",
"sparse/tests/test_coo.py::test_one_arg_where",
"sparse/tests/test_coo.py::test_one_arg_where_dense",
"sparse/tests/test_coo.py::test_two_arg_where",
"sparse/tests/test_coo.py::test_argwhere",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.1-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.3-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.5-shape2]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape0]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape1]",
"sparse/tests/test_dok.py::test_random_shape_nnz[0.7-shape2]",
"sparse/tests/test_dok.py::test_convert_to_coo",
"sparse/tests/test_dok.py::test_convert_from_coo",
"sparse/tests/test_dok.py::test_convert_from_numpy",
"sparse/tests/test_dok.py::test_convert_to_numpy",
"sparse/tests/test_dok.py::test_construct[2-data0]",
"sparse/tests/test_dok.py::test_construct[shape1-data1]",
"sparse/tests/test_dok.py::test_construct[shape2-data2]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.1-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.3-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.5-shape2]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape0]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape1]",
"sparse/tests/test_dok.py::test_getitem[0.7-shape2]",
"sparse/tests/test_dok.py::test_setitem[shape2-index2-value2]",
"sparse/tests/test_dok.py::test_setitem[shape6-index6-value6]",
"sparse/tests/test_dok.py::test_setitem[shape7-index7-value7]",
"sparse/tests/test_dok.py::test_setitem[shape8-index8-value8]",
"sparse/tests/test_dok.py::test_setitem[shape10-index10-value10]",
"sparse/tests/test_dok.py::test_setitem[shape12-index12-value12]",
"sparse/tests/test_dok.py::test_default_dtype",
"sparse/tests/test_dok.py::test_int_dtype",
"sparse/tests/test_dok.py::test_float_dtype",
"sparse/tests/test_dok.py::test_set_zero"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,474 | [
"sparse/coo/core.py",
"docs/generated/sparse.COO.rst",
"docs/generated/sparse.COO.nonzero.rst"
]
| [
"sparse/coo/core.py",
"docs/generated/sparse.COO.rst",
"docs/generated/sparse.COO.nonzero.rst"
]
|
jupyter__nbgrader-954 | bbc694e8ee4c1aa4eeaee0936491ff19b20bad60 | 2018-05-03 21:33:50 | 5bc6f37c39c8b10b8f60440b2e6d9487e63ef3f1 | diff --git a/nbgrader/utils.py b/nbgrader/utils.py
index 55824f3f..55f440ab 100644
--- a/nbgrader/utils.py
+++ b/nbgrader/utils.py
@@ -194,8 +194,10 @@ def find_all_files(path, exclude=None):
"""Recursively finds all filenames rooted at `path`, optionally excluding
some based on filename globs."""
files = []
+ to_skip = []
for dirname, dirnames, filenames in os.walk(path):
- if is_ignored(dirname, exclude):
+ if is_ignored(dirname, exclude) or dirname in to_skip:
+ to_skip.extend([os.path.join(dirname, x) for x in dirnames])
continue
for filename in filenames:
fullpath = os.path.join(dirname, filename)
| Unexpected behaviour of utils.find_all_files
### Operating system
Ubuntu 16.04
### `nbgrader --version`
nbgrader version 0.5.4
### `jupyterhub --version` (if used with JupyterHub)
0.8.1
### `jupyter notebook --version`
5.4.1
### Expected behavior
By including '.git' or '.git/**' in CourseDirectory.ignore, anything under the git directory should be ignored.
### Actual behavior
Anything in subdirectories of '.git' is included.
### Steps to reproduce the behavior
```
$ mkdir -p foo/bar/qwe
$ touch foo/bar/qwe/file.py
$ /opt/conda/bin/python -c "from nbgrader.utils import find_all_files;print(find_all_files('foo', ['bar']))"
['foo/bar/qwe/file.py']
```
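The pruning the reporter expects can be sketched as below — a simplified stand-in for nbgrader's `find_all_files` that mirrors the `to_skip` propagation added in the patch above (assumption: exact directory-name matching here instead of nbgrader's glob-based `is_ignored`):

```python
import os
import tempfile

def find_all_files(path, exclude=None):
    """Recursively find files under `path`, skipping any directory whose
    name is in `exclude` -- including everything below it (sketch of the
    fix; glob handling from nbgrader's is_ignored is omitted)."""
    exclude = set(exclude or [])
    files = []
    to_skip = set()
    for dirname, dirnames, filenames in os.walk(path):
        if os.path.basename(dirname) in exclude or dirname in to_skip:
            # Propagate the skip to all children, so grandchildren of an
            # ignored directory are not picked up on later iterations.
            to_skip.update(os.path.join(dirname, d) for d in dirnames)
            continue
        files.extend(os.path.join(dirname, f) for f in filenames)
    return files

# Reproduce the report: foo/bar/qwe/file.py must be excluded by ['bar'].
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "foo", "bar", "qwe"))
open(os.path.join(root, "foo", "bar", "qwe", "file.py"), "w").close()
open(os.path.join(root, "foo", "baz.txt"), "w").close()
found = find_all_files(os.path.join(root, "foo"), ["bar"])
print([os.path.relpath(p, root) for p in found])  # ['foo/baz.txt'] on POSIX
```

Because `os.walk` is top-down, a parent is always visited before its children, so marking children of an ignored directory in `to_skip` is sufficient to prune the whole subtree.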
I'm sorry if this is expected behaviour but I found it surprising. | jupyter/nbgrader | diff --git a/nbgrader/tests/utils/test_utils.py b/nbgrader/tests/utils/test_utils.py
index 2814ea5c..ca76e83f 100644
--- a/nbgrader/tests/utils/test_utils.py
+++ b/nbgrader/tests/utils/test_utils.py
@@ -272,18 +272,34 @@ def test_is_ignored(temp_cwd):
def test_find_all_files(temp_cwd):
- os.makedirs(join("foo", "bar"))
+ os.makedirs(join("foo", "bar", "quux"))
with open(join("foo", "baz.txt"), "w") as fh:
fh.write("baz")
with open(join("foo", "bar", "baz.txt"), "w") as fh:
fh.write("baz")
+ with open(join("foo", "bar", "quux", "baz.txt"), "w") as fh:
+ fh.write("baz")
- assert utils.find_all_files("foo") == [join("foo", "baz.txt"), join("foo", "bar", "baz.txt")]
+ assert utils.find_all_files("foo") == [
+ join("foo", "baz.txt"),
+ join("foo", "bar", "baz.txt"),
+ join("foo", "bar", "quux", "baz.txt")]
assert utils.find_all_files("foo", ["bar"]) == [join("foo", "baz.txt")]
- assert utils.find_all_files(join("foo", "bar")) == [join("foo", "bar", "baz.txt")]
+ assert utils.find_all_files("foo", ["quux"]) == [
+ join("foo", "baz.txt"),
+ join("foo", "bar", "baz.txt")]
+ assert utils.find_all_files(join("foo", "bar")) == [
+ join("foo", "bar", "baz.txt"),
+ join("foo", "bar", "quux", "baz.txt")]
assert utils.find_all_files(join("foo", "bar"), ["*.txt"]) == []
- assert utils.find_all_files(".") == [join(".", "foo", "baz.txt"), join(".", "foo", "bar", "baz.txt")]
+ assert utils.find_all_files(".") == [
+ join(".", "foo", "baz.txt"),
+ join(".", "foo", "bar", "baz.txt"),
+ join(".", "foo", "bar", "quux", "baz.txt")]
assert utils.find_all_files(".", ["bar"]) == [join(".", "foo", "baz.txt")]
+ assert utils.find_all_files(".", ["quux"]) == [
+ join(".", "foo", "baz.txt"),
+ join(".", "foo", "bar", "baz.txt")]
def test_unzip_invalid_ext(temp_cwd):
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 1
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pyenchant",
"sphinxcontrib-spelling",
"sphinx_rtd_theme",
"nbval",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.5",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
alembic==1.7.7
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-generator==1.10
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
comm==0.1.4
contextvars==2.4
coverage==6.2
dataclasses==0.8
decorator==5.1.1
defusedxml==0.7.1
docutils==0.18.1
entrypoints==0.4
greenlet==2.0.2
idna==3.10
imagesize==1.4.1
immutables==0.19
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
ipykernel==5.5.6
ipython==7.16.3
ipython-genutils==0.2.0
ipywidgets==7.8.5
jedi==0.17.2
Jinja2==3.0.3
json5==0.9.16
jsonschema==3.2.0
jupyter==1.1.1
jupyter-client==7.1.2
jupyter-console==6.4.3
jupyter-core==4.9.2
jupyter-server==1.13.1
jupyterlab==3.2.9
jupyterlab-pygments==0.1.2
jupyterlab-server==2.10.3
jupyterlab_widgets==1.1.11
Mako==1.1.6
MarkupSafe==2.0.1
mistune==0.8.4
nbclassic==0.3.5
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
-e git+https://github.com/jupyter/nbgrader.git@bbc694e8ee4c1aa4eeaee0936491ff19b20bad60#egg=nbgrader
nbval==0.10.0
nest-asyncio==1.6.0
notebook==6.4.10
packaging==21.3
pandocfilters==1.5.1
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
pluggy==1.0.0
prometheus-client==0.17.1
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
pycparser==2.21
pyenchant==3.2.2
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
pyzmq==25.1.2
requests==2.27.1
Send2Trash==1.8.3
six==1.17.0
sniffio==1.2.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-spelling==7.7.0
SQLAlchemy==1.4.54
terminado==0.12.1
testpath==0.6.0
tomli==1.2.3
tornado==6.1
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
wcwidth==0.2.13
webencodings==0.5.1
websocket-client==1.3.1
widgetsnbextension==3.6.10
zipp==3.6.0
| name: nbgrader
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- alembic==1.7.7
- anyio==3.6.2
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- async-generator==1.10
- attrs==22.2.0
- babel==2.11.0
- backcall==0.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- comm==0.1.4
- contextvars==2.4
- coverage==6.2
- dataclasses==0.8
- decorator==5.1.1
- defusedxml==0.7.1
- docutils==0.18.1
- entrypoints==0.4
- greenlet==2.0.2
- idna==3.10
- imagesize==1.4.1
- immutables==0.19
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- ipykernel==5.5.6
- ipython==7.16.3
- ipython-genutils==0.2.0
- ipywidgets==7.8.5
- jedi==0.17.2
- jinja2==3.0.3
- json5==0.9.16
- jsonschema==3.2.0
- jupyter==1.1.1
- jupyter-client==7.1.2
- jupyter-console==6.4.3
- jupyter-core==4.9.2
- jupyter-server==1.13.1
- jupyterlab==3.2.9
- jupyterlab-pygments==0.1.2
- jupyterlab-server==2.10.3
- jupyterlab-widgets==1.1.11
- mako==1.1.6
- markupsafe==2.0.1
- mistune==0.8.4
- nbclassic==0.3.5
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nbval==0.10.0
- nest-asyncio==1.6.0
- notebook==6.4.10
- packaging==21.3
- pandocfilters==1.5.1
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- pluggy==1.0.0
- prometheus-client==0.17.1
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pycparser==2.21
- pyenchant==3.2.2
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyzmq==25.1.2
- requests==2.27.1
- send2trash==1.8.3
- six==1.17.0
- sniffio==1.2.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- sphinxcontrib-spelling==7.7.0
- sqlalchemy==1.4.54
- terminado==0.12.1
- testpath==0.6.0
- tomli==1.2.3
- tornado==6.1
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- wcwidth==0.2.13
- webencodings==0.5.1
- websocket-client==1.3.1
- widgetsnbextension==3.6.10
- zipp==3.6.0
prefix: /opt/conda/envs/nbgrader
| [
"nbgrader/tests/utils/test_utils.py::test_find_all_files"
]
| []
| [
"nbgrader/tests/utils/test_utils.py::test_is_grade",
"nbgrader/tests/utils/test_utils.py::test_is_solution",
"nbgrader/tests/utils/test_utils.py::test_is_locked",
"nbgrader/tests/utils/test_utils.py::test_determine_grade_code_grade",
"nbgrader/tests/utils/test_utils.py::test_determine_grade_markdown_grade",
"nbgrader/tests/utils/test_utils.py::test_determine_grade_solution",
"nbgrader/tests/utils/test_utils.py::test_determine_grade_code_grade_and_solution",
"nbgrader/tests/utils/test_utils.py::test_determine_grade_markdown_grade_and_solution",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_identical",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_cell_type",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_whitespace",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_source",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_points",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_grade_id",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_grade_cell",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_solution_cell",
"nbgrader/tests/utils/test_utils.py::test_compute_checksum_utf8",
"nbgrader/tests/utils/test_utils.py::test_is_ignored",
"nbgrader/tests/utils/test_utils.py::test_unzip_invalid_ext",
"nbgrader/tests/utils/test_utils.py::test_unzip_bad_zip",
"nbgrader/tests/utils/test_utils.py::test_unzip_no_output_path",
"nbgrader/tests/utils/test_utils.py::test_unzip_create_own_folder",
"nbgrader/tests/utils/test_utils.py::test_unzip_tree"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,475 | [
"nbgrader/utils.py"
]
| [
"nbgrader/utils.py"
]
|
|
Azure__WALinuxAgent-1148 | 423dc18485e4c8d506bd07f77f7612b17bda27eb | 2018-05-03 23:30:54 | 6e9b985c1d7d564253a1c344bab01b45093103cd | boumenot: I opened #1161 to address the telemetry issue. There is some sort of circular dependency issue that manifest on CI, but not locally. I will debug it later, and add the necessary event. | diff --git a/azurelinuxagent/common/protocol/wire.py b/azurelinuxagent/common/protocol/wire.py
index 841f9b72..265b1f6f 100644
--- a/azurelinuxagent/common/protocol/wire.py
+++ b/azurelinuxagent/common/protocol/wire.py
@@ -600,6 +600,12 @@ class WireClient(object):
random.shuffle(version_uris_shuffled)
for version in version_uris_shuffled:
+ # GA expects a location and failoverLocation in ExtensionsConfig, but
+ # this is not always the case. See #1147.
+ if version.uri is None:
+ logger.verbose('The specified manifest URL is empty, ignored.')
+ continue
+
response = None
if not HostPluginProtocol.is_default_channel():
response = self.fetch(version.uri)
diff --git a/azurelinuxagent/common/utils/restutil.py b/azurelinuxagent/common/utils/restutil.py
index 5ceb4c94..fc9aac93 100644
--- a/azurelinuxagent/common/utils/restutil.py
+++ b/azurelinuxagent/common/utils/restutil.py
@@ -170,8 +170,6 @@ def _http_request(method, host, rel_uri, port=None, data=None, secure=False,
headers=None, proxy_host=None, proxy_port=None):
headers = {} if headers is None else headers
- headers['Connection'] = 'close'
-
use_proxy = proxy_host is not None and proxy_port is not None
if port is None:
| ExtensionsConfig May Not Contain a failoverLocation Attribute
The agent expects ExtensionsConfig to have a location and failoverLocation for each plugin. This has been proven to not be true for all regions. I consider this to be a bug upstream, but the agent should be robust enough to handle this case.
| Azure/WALinuxAgent | diff --git a/tests/utils/test_rest_util.py b/tests/utils/test_rest_util.py
index adeb8141..a864884a 100644
--- a/tests/utils/test_rest_util.py
+++ b/tests/utils/test_rest_util.py
@@ -195,7 +195,7 @@ class TestHttpOperations(AgentTestCase):
])
HTTPSConnection.assert_not_called()
mock_conn.request.assert_has_calls([
- call(method="GET", url="/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
+ call(method="GET", url="/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT})
])
self.assertEqual(1, mock_conn.getresponse.call_count)
self.assertNotEquals(None, resp)
@@ -218,7 +218,7 @@ class TestHttpOperations(AgentTestCase):
call("foo", 443, timeout=10)
])
mock_conn.request.assert_has_calls([
- call(method="GET", url="/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
+ call(method="GET", url="/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT})
])
self.assertEqual(1, mock_conn.getresponse.call_count)
self.assertNotEquals(None, resp)
@@ -242,7 +242,7 @@ class TestHttpOperations(AgentTestCase):
])
HTTPSConnection.assert_not_called()
mock_conn.request.assert_has_calls([
- call(method="GET", url="http://foo:80/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
+ call(method="GET", url="http://foo:80/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT})
])
self.assertEqual(1, mock_conn.getresponse.call_count)
self.assertNotEquals(None, resp)
@@ -267,7 +267,7 @@ class TestHttpOperations(AgentTestCase):
call("foo.bar", 23333, timeout=10)
])
mock_conn.request.assert_has_calls([
- call(method="GET", url="https://foo:443/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
+ call(method="GET", url="https://foo:443/bar", body=None, headers={'User-Agent': HTTP_USER_AGENT})
])
self.assertEqual(1, mock_conn.getresponse.call_count)
self.assertNotEquals(None, resp)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 2
} | 2.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio",
"distro"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
distro==1.9.0
exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
execnet==2.1.1
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
pytest-asyncio==0.26.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-xdist==3.6.1
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions==4.13.0
-e git+https://github.com/Azure/WALinuxAgent.git@423dc18485e4c8d506bd07f77f7612b17bda27eb#egg=WALinuxAgent
| name: WALinuxAgent
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- distro==1.9.0
- execnet==2.1.1
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-xdist==3.6.1
- typing-extensions==4.13.0
prefix: /opt/conda/envs/WALinuxAgent
| [
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_direct",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_direct_secure",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_proxy",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_proxy_secure"
]
| []
| [
"tests/utils/test_rest_util.py::TestIOErrorCounter::test_get_and_reset",
"tests/utils/test_rest_util.py::TestIOErrorCounter::test_increment_hostplugin",
"tests/utils/test_rest_util.py::TestIOErrorCounter::test_increment_other",
"tests/utils/test_rest_util.py::TestIOErrorCounter::test_increment_protocol",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_configuration_overrides_env",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_configuration_requires_host",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_http_uses_httpproxy",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_https_uses_httpsproxy",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_ignores_user_in_httpproxy",
"tests/utils/test_rest_util.py::TestHttpOperations::test_get_http_proxy_none_is_default",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_raises_for_bad_request",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_raises_for_resource_gone",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_exceptions",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_for_safe_minimum_number_when_throttled",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_ioerrors",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_passed_status_codes",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_status_codes",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_with_constant_delay_when_throttled",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_retries_with_fibonacci_delay",
"tests/utils/test_rest_util.py::TestHttpOperations::test_http_request_with_retry",
"tests/utils/test_rest_util.py::TestHttpOperations::test_parse_url",
"tests/utils/test_rest_util.py::TestHttpOperations::test_read_response_bytes",
"tests/utils/test_rest_util.py::TestHttpOperations::test_read_response_error",
"tests/utils/test_rest_util.py::TestHttpOperations::test_request_failed",
"tests/utils/test_rest_util.py::TestHttpOperations::test_request_succeeded"
]
| []
| Apache License 2.0 | 2,476 | [
"azurelinuxagent/common/protocol/wire.py",
"azurelinuxagent/common/utils/restutil.py"
]
| [
"azurelinuxagent/common/protocol/wire.py",
"azurelinuxagent/common/utils/restutil.py"
]
|
python-useful-helpers__exec-helpers-32 | ce473df58deee47c9e327b507f8fc1ac121ba19b | 2018-05-04 10:37:30 | 814d435b7eda2b00fa1559d5a94103f1e888ab52 | coveralls: ## Pull Request Test Coverage Report for [Build 101](https://coveralls.io/builds/16832947)
* **34** of **34** **(100.0%)** changed or added relevant lines in **2** files are covered.
* **1** unchanged line in **1** file lost coverage.
* Overall coverage increased (+**0.1%**) to **99.688%**
---
| Files with Coverage Reduction | New Missed Lines | % |
| :-----|--------------|--: |
| [exec_helpers/_ssh_client_base.py](https://coveralls.io/builds/16832947/source?filename=exec_helpers%2F_ssh_client_base.py#L829) | 1 | 126.92% |
<!-- | **Total:** | **1** | | -->
| Totals | [](https://coveralls.io/builds/16832947) |
| :-- | --: |
| Change from base [Build 99](https://coveralls.io/builds/16802155): | 0.1% |
| Covered Lines: | 957 |
| Relevant Lines: | 960 |
---
##### 💛 - [Coveralls](https://coveralls.io)
| diff --git a/doc/source/SSHClient.rst b/doc/source/SSHClient.rst
index 4875ea9..b976786 100644
--- a/doc/source/SSHClient.rst
+++ b/doc/source/SSHClient.rst
@@ -69,6 +69,11 @@ API: SSHClient and SSHAuth.
``bool``
Use sudo for all calls, except wrapped in connection.sudo context manager.
+ .. py:attribute:: keepalive_mode
+
+ ``bool``
+ Use keepalive mode for context manager. If `False` - close connection on exit from context manager.
+
.. py:method:: close()
Close connection
@@ -93,6 +98,7 @@ API: SSHClient and SSHAuth.
.. versionchanged:: 1.0.0 disconnect enforced on close
.. versionchanged:: 1.1.0 release lock on exit
+ .. versionchanged:: 1.2.1 disconnect enforced on close only not in keepalive mode
.. py:method:: sudo(enforce=None)
@@ -101,6 +107,16 @@ API: SSHClient and SSHAuth.
:param enforce: Enforce sudo enabled or disabled. By default: None
:type enforce: ``typing.Optional[bool]``
+ .. py:method:: keepalive(enforce=None)
+
+ Context manager getter for keepalive operation.
+
+ :param enforce: Enforce keepalive enabled or disabled. By default: True
+ :type enforce: ``typing.bool``
+
+ .. Note:: Enter and exit ssh context manager is produced as well.
+ .. versionadded:: 1.2.1
+
.. py:method:: execute_async(command, stdin=None, open_stdout=True, open_stderr=True, verbose=False, log_mask_re=None, **kwargs)
Execute command in async mode and return channel with IO objects.
diff --git a/exec_helpers/_ssh_client_base.py b/exec_helpers/_ssh_client_base.py
index d9a24a7..6fb90e6 100644
--- a/exec_helpers/_ssh_client_base.py
+++ b/exec_helpers/_ssh_client_base.py
@@ -215,12 +215,19 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
"""SSH Client helper."""
__slots__ = (
- '__hostname', '__port', '__auth', '__ssh', '__sftp', 'sudo_mode',
+ '__hostname', '__port', '__auth', '__ssh', '__sftp',
+ '__sudo_mode', '__keepalive_mode',
)
class __get_sudo(object):
"""Context manager for call commands with sudo."""
+ __slots__ = (
+ '__ssh',
+ '__sudo_status',
+ '__enforce',
+ )
+
def __init__(
self,
ssh, # type: SSHClientBase
@@ -243,6 +250,40 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
def __exit__(self, exc_type, exc_val, exc_tb):
self.__ssh.sudo_mode = self.__sudo_status
+ class __get_keepalive(object):
+ """Context manager for keepalive management."""
+
+ __slots__ = (
+ '__ssh',
+ '__keepalive_status',
+ '__enforce',
+ )
+
+ def __init__(
+ self,
+ ssh, # type: SSHClientBase
+ enforce=True # type: bool
+ ): # type: (...) -> None
+ """Context manager for keepalive management.
+
+ :type ssh: SSHClient
+ :type enforce: bool
+ :param enforce: Keep connection alive after context manager exit
+ """
+ self.__ssh = ssh
+ self.__keepalive_status = ssh.keepalive_mode
+ self.__enforce = enforce
+
+ def __enter__(self):
+ self.__keepalive_status = self.__ssh.keepalive_mode
+ if self.__enforce is not None:
+ self.__ssh.keepalive_mode = self.__enforce
+ self.__ssh.__enter__()
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.__ssh.__exit__(exc_type=exc_type, exc_val=exc_val, exc_tb=exc_tb)
+ self.__ssh.keepalive_mode = self.__keepalive_status
+
def __hash__(self):
"""Hash for usage as dict keys."""
return hash((
@@ -286,7 +327,9 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
self.__hostname = host
self.__port = port
- self.sudo_mode = False
+ self.__sudo_mode = False
+ self.__keepalive_mode = True
+
self.__ssh = paramiko.SSHClient()
self.__ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.__sftp = None
@@ -460,10 +503,44 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
.. versionchanged:: 1.0.0 disconnect enforced on close
.. versionchanged:: 1.1.0 release lock on exit
+ .. versionchanged:: 1.2.1 disconnect enforced on close only not in keepalive mode
"""
- self.close()
+ if not self.__keepalive_mode:
+ self.close()
super(SSHClientBase, self).__exit__(exc_type, exc_val, exc_tb)
+ @property
+ def sudo_mode(self): # type: () -> bool
+ """Persistent sudo mode for connection object.
+
+ :rtype: bool
+ """
+ return self.__sudo_mode
+
+ @sudo_mode.setter
+ def sudo_mode(self, mode): # type: (bool) -> None
+ """Persistent sudo mode change for connection object.
+
+ :type mode: bool
+ """
+ self.__sudo_mode = bool(mode)
+
+ @property
+ def keepalive_mode(self): # type: () -> bool
+ """Persistent keepalive mode for connection object.
+
+ :rtype: bool
+ """
+ return self.__keepalive_mode
+
+ @keepalive_mode.setter
+ def keepalive_mode(self, mode): # type: (bool) -> None
+ """Persistent keepalive mode change for connection object.
+
+ :type mode: bool
+ """
+ self.__keepalive_mode = bool(mode)
+
def reconnect(self): # type: () -> None
"""Reconnect SSH session."""
with self.lock:
@@ -485,6 +562,20 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
"""
return self.__get_sudo(ssh=self, enforce=enforce)
+ def keepalive(
+ self,
+ enforce=True # type: bool
+ ):
+ """Call contextmanager with keepalive mode change.
+
+ :param enforce: Enforce keepalive enabled or disabled.
+ :type enforce: bool
+
+ .. Note:: Enter and exit ssh context manager is produced as well.
+ .. versionadded:: 1.2.1
+ """
+ return self.__get_keepalive(ssh=self, enforce=enforce)
+
def execute_async(
self,
command, # type: str
@@ -839,7 +930,8 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
list(futures.values()),
timeout=timeout
) # type: typing.Set[concurrent.futures.Future], typing.Set[concurrent.futures.Future]
- for future in not_done:
+
+ for future in not_done: # pragma: no cover
future.cancel()
for (
diff --git a/exec_helpers/subprocess_runner.py b/exec_helpers/subprocess_runner.py
index 1059707..24fa873 100644
--- a/exec_helpers/subprocess_runner.py
+++ b/exec_helpers/subprocess_runner.py
@@ -21,6 +21,8 @@ from __future__ import division
from __future__ import unicode_literals
import collections
+# noinspection PyCompatibility
+import concurrent.futures
import errno
import logging
import os
@@ -203,7 +205,7 @@ class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
verbose=verbose
)
- @threaded.threaded(started=True)
+ @threaded.threadpooled()
def poll_pipes(
result, # type: exec_result.ExecResult
stop, # type: threading.Event
@@ -245,17 +247,17 @@ class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
stop_event = threading.Event()
# pylint: disable=assignment-from-no-return
- poll_thread = poll_pipes(
+ future = poll_pipes(
result,
stop_event
- ) # type: threading.Thread
+ ) # type: concurrent.futures.Future
# pylint: enable=assignment-from-no-return
# wait for process close
- stop_event.wait(timeout)
+
+ concurrent.futures.wait([future], timeout)
# Process closed?
if stop_event.is_set():
- poll_thread.join(0.1)
stop_event.clear()
return result
# Kill not ended process and wait for close
@@ -264,7 +266,7 @@ class Subprocess(six.with_metaclass(SingletonMeta, _api.ExecHelper)):
stop_event.wait(5)
# Force stop cycle if no exit code after kill
stop_event.set()
- poll_thread.join(5)
+ future.cancel()
except OSError:
# Nothing to kill
logger.warning(
| Add 'keepalive' option to SSHClientBase()
SSHClientBase can re-use an existing connection to the node if one exists, or create a new one.
But it doesn't close the opened connections when the class is used as a context manager. This allows many different ssh commands to the same nodes without re-connecting for each command.
The existing approach is suitable for a small number of nodes when many different ssh commands are using the same connections.
For cases when there are many different nodes, it will be necessary to close the connections after leaving the SSHClient context.
Let's add a new option to SSHClientBase(): 'keepalive', defaulting to True, which will leave the existing connection open if True, and close the existing connection after leaving the context manager if False.
For example:
```
remote = SSHClient(....)
with remote.keepalive(enforce=True): # default
# connection will be preserved at the context exit
...
with remote.keepalive(enforce=False):
# connection will be closed at the context exit
...
``` | python-useful-helpers/exec-helpers | diff --git a/test/test_ssh_client.py b/test/test_ssh_client.py
index d466a2f..65064cb 100644
--- a/test/test_ssh_client.py
+++ b/test/test_ssh_client.py
@@ -92,7 +92,7 @@ class TestExecute(unittest.TestCase):
return (u"Command exit code '{code!s}':\n{cmd!s}\n"
.format(cmd=result.cmd.rstrip(), code=result.exit_code))
- def test_execute_async(self, client, policy, logger):
+ def test_001_execute_async(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -122,7 +122,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_pty(self, client, policy, logger):
+ def test_002_execute_async_pty(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -157,7 +157,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_no_stdout_stderr(self, client, policy, logger):
+ def test_003_execute_async_no_stdout_stderr(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -208,7 +208,7 @@ class TestExecute(unittest.TestCase):
mock.call.exec_command('{}\n'.format(command))
))
- def test_execute_async_sudo(self, client, policy, logger):
+ def test_004_execute_async_sudo(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -241,7 +241,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_with_sudo_enforce(self, client, policy, logger):
+ def test_005_execute_async_with_sudo_enforce(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -277,7 +277,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_with_no_sudo_enforce(self, client, policy, logger):
+ def test_006_execute_async_with_no_sudo_enforce(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -309,7 +309,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_with_none_enforce(self, client, policy, logger):
+ def test_007_execute_async_with_sudo_none_enforce(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -342,7 +342,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_auth.SSHAuth.enter_password')
- def test_execute_async_sudo_password(
+ def test_008_execute_async_sudo_password(
self, enter_password, client, policy, logger):
stdin = mock.Mock(name='stdin')
stdout = mock.Mock(name='stdout')
@@ -386,7 +386,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_verbose(self, client, policy, logger):
+ def test_009_execute_async_verbose(self, client, policy, logger):
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
transport = mock.Mock()
@@ -416,7 +416,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_execute_async_mask_command(self, client, policy, logger):
+ def test_010_execute_async_mask_command(self, client, policy, logger):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
@@ -451,7 +451,7 @@ class TestExecute(unittest.TestCase):
log.mock_calls
)
- def test_check_stdin_str(self, client, policy, logger):
+ def test_011_check_stdin_str(self, client, policy, logger):
stdin_val = u'this is a line'
stdin = mock.Mock(name='stdin')
@@ -496,7 +496,7 @@ class TestExecute(unittest.TestCase):
mock.call.exec_command('{val}\n'.format(val=print_stdin))
))
- def test_check_stdin_bytes(self, client, policy, logger):
+ def test_012_check_stdin_bytes(self, client, policy, logger):
stdin_val = b'this is a line'
stdin = mock.Mock(name='stdin')
@@ -541,7 +541,7 @@ class TestExecute(unittest.TestCase):
mock.call.exec_command('{val}\n'.format(val=print_stdin))
))
- def test_check_stdin_bytearray(self, client, policy, logger):
+ def test_013_check_stdin_bytearray(self, client, policy, logger):
stdin_val = bytearray(b'this is a line')
stdin = mock.Mock(name='stdin')
@@ -586,7 +586,7 @@ class TestExecute(unittest.TestCase):
mock.call.exec_command('{val}\n'.format(val=print_stdin))
))
- def test_check_stdin_closed(self, client, policy, logger):
+ def test_014_check_stdin_closed(self, client, policy, logger):
stdin_val = 'this is a line'
stdin = mock.Mock(name='stdin')
@@ -631,6 +631,76 @@ class TestExecute(unittest.TestCase):
mock.call.exec_command('{val}\n'.format(val=print_stdin))
))
+ def test_015_keepalive(self, client, policy, logger):
+ chan = mock.Mock()
+ open_session = mock.Mock(return_value=chan)
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ with ssh:
+ pass
+
+ _ssh.close.assert_not_called()
+
+ def test_016_no_keepalive(self, client, policy, logger):
+ chan = mock.Mock()
+ open_session = mock.Mock(return_value=chan)
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+ ssh.keepalive_mode = False
+
+ with ssh:
+ pass
+
+ _ssh.close.assert_called_once()
+
+ def test_017_keepalive_enforced(self, client, policy, logger):
+ chan = mock.Mock()
+ open_session = mock.Mock(return_value=chan)
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+ ssh.keepalive_mode = False
+
+ with ssh.keepalive():
+ pass
+
+ _ssh.close.assert_not_called()
+
+ def test_018_no_keepalive_enforced(self, client, policy, logger):
+ chan = mock.Mock()
+ open_session = mock.Mock(return_value=chan)
+ transport = mock.Mock()
+ transport.attach_mock(open_session, 'open_session')
+ get_transport = mock.Mock(return_value=transport)
+ _ssh = mock.Mock()
+ _ssh.attach_mock(get_transport, 'get_transport')
+ client.return_value = _ssh
+
+ ssh = self.get_ssh()
+
+ with ssh.keepalive(enforce=False):
+ pass
+
+ _ssh.close.assert_called_once()
+
@staticmethod
def get_patched_execute_async_retval(
ec=0,
@@ -680,7 +750,7 @@ class TestExecute(unittest.TestCase):
return chan, '', exp_result, stderr, stdout
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute(
+ def test_019_execute(
self,
execute_async,
client, policy, logger
@@ -727,7 +797,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_verbose(
+ def test_020_execute_verbose(
self,
execute_async,
client, policy, logger):
@@ -772,7 +842,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_no_stdout(
+ def test_021_execute_no_stdout(
self,
execute_async,
client, policy, logger
@@ -816,7 +886,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_no_stderr(
+ def test_022_execute_no_stderr(
self,
execute_async,
client, policy, logger
@@ -860,7 +930,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_no_stdout_stderr(
+ def test_023_execute_no_stdout_stderr(
self,
execute_async,
client, policy, logger
@@ -907,7 +977,7 @@ class TestExecute(unittest.TestCase):
@mock.patch('time.sleep', autospec=True)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_timeout(
+ def test_024_execute_timeout(
self,
execute_async, sleep,
client, policy, logger):
@@ -941,7 +1011,7 @@ class TestExecute(unittest.TestCase):
@mock.patch('time.sleep', autospec=True)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_timeout_fail(
+ def test_025_execute_timeout_fail(
self,
execute_async, sleep,
client, policy, logger):
@@ -966,7 +1036,7 @@ class TestExecute(unittest.TestCase):
chan.assert_has_calls((mock.call.status_event.is_set(), ))
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_mask_command(
+ def test_026_execute_mask_command(
self,
execute_async,
client, policy, logger
@@ -1019,7 +1089,7 @@ class TestExecute(unittest.TestCase):
)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_together(self, execute_async, client, policy, logger):
+ def test_027_execute_together(self, execute_async, client, policy, logger):
(
chan, _stdin, _, stderr, stdout
) = self.get_patched_execute_async_retval()
@@ -1070,7 +1140,7 @@ class TestExecute(unittest.TestCase):
remotes=remotes, command=command, expected=[1])
@mock.patch('exec_helpers.ssh_client.SSHClient.execute_async')
- def test_execute_together_exceptions(
+ def test_028_execute_together_exceptions(
self,
execute_async, # type: mock.Mock
client,
@@ -1108,7 +1178,7 @@ class TestExecute(unittest.TestCase):
self.assertIsInstance(exception, RuntimeError)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute')
- def test_check_call(self, execute, client, policy, logger):
+ def test_029_check_call(self, execute, client, policy, logger):
exit_code = 0
return_value = exec_result.ExecResult(
cmd=command,
@@ -1147,7 +1217,7 @@ class TestExecute(unittest.TestCase):
execute.assert_called_once_with(command, verbose, None)
@mock.patch('exec_helpers.ssh_client.SSHClient.execute')
- def test_check_call_expected(self, execute, client, policy, logger):
+ def test_030_check_call_expected(self, execute, client, policy, logger):
exit_code = 0
return_value = exec_result.ExecResult(
cmd=command,
@@ -1185,7 +1255,7 @@ class TestExecute(unittest.TestCase):
execute.assert_called_once_with(command, verbose, None)
@mock.patch('exec_helpers.ssh_client.SSHClient.check_call')
- def test_check_stderr(self, check_call, client, policy, logger):
+ def test_031_check_stderr(self, check_call, client, policy, logger):
return_value = exec_result.ExecResult(
cmd=command,
stdout=stdout_list,
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 3
} | 1.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-mock",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | advanced-descriptors==4.0.3
bcrypt==4.3.0
cffi==1.17.1
coverage==7.8.0
cryptography==44.0.2
exceptiongroup==1.2.2
-e git+https://github.com/python-useful-helpers/exec-helpers.git@ce473df58deee47c9e327b507f8fc1ac121ba19b#egg=exec_helpers
iniconfig==2.1.0
mock==5.2.0
packaging==24.2
paramiko==3.5.1
pluggy==1.5.0
pycparser==2.22
PyNaCl==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
PyYAML==6.0.2
six==1.17.0
tenacity==9.0.0
threaded==4.2.0
tomli==2.2.1
| name: exec-helpers
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- advanced-descriptors==4.0.3
- bcrypt==4.3.0
- cffi==1.17.1
- coverage==7.8.0
- cryptography==44.0.2
- exceptiongroup==1.2.2
- exec-helpers==1.2.0
- iniconfig==2.1.0
- mock==5.2.0
- packaging==24.2
- paramiko==3.5.1
- pluggy==1.5.0
- pycparser==2.22
- pynacl==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pyyaml==6.0.2
- six==1.17.0
- tenacity==9.0.0
- threaded==4.2.0
- tomli==2.2.1
prefix: /opt/conda/envs/exec-helpers
| [
"test/test_ssh_client.py::TestExecute::test_015_keepalive",
"test/test_ssh_client.py::TestExecute::test_016_no_keepalive",
"test/test_ssh_client.py::TestExecute::test_017_keepalive_enforced",
"test/test_ssh_client.py::TestExecute::test_018_no_keepalive_enforced"
]
| []
| [
"test/test_ssh_client.py::TestExecute::test_001_execute_async",
"test/test_ssh_client.py::TestExecute::test_002_execute_async_pty",
"test/test_ssh_client.py::TestExecute::test_003_execute_async_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_004_execute_async_sudo",
"test/test_ssh_client.py::TestExecute::test_005_execute_async_with_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_006_execute_async_with_no_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_007_execute_async_with_sudo_none_enforce",
"test/test_ssh_client.py::TestExecute::test_008_execute_async_sudo_password",
"test/test_ssh_client.py::TestExecute::test_009_execute_async_verbose",
"test/test_ssh_client.py::TestExecute::test_010_execute_async_mask_command",
"test/test_ssh_client.py::TestExecute::test_011_check_stdin_str",
"test/test_ssh_client.py::TestExecute::test_012_check_stdin_bytes",
"test/test_ssh_client.py::TestExecute::test_013_check_stdin_bytearray",
"test/test_ssh_client.py::TestExecute::test_014_check_stdin_closed",
"test/test_ssh_client.py::TestExecute::test_019_execute",
"test/test_ssh_client.py::TestExecute::test_020_execute_verbose",
"test/test_ssh_client.py::TestExecute::test_021_execute_no_stdout",
"test/test_ssh_client.py::TestExecute::test_022_execute_no_stderr",
"test/test_ssh_client.py::TestExecute::test_023_execute_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_024_execute_timeout",
"test/test_ssh_client.py::TestExecute::test_025_execute_timeout_fail",
"test/test_ssh_client.py::TestExecute::test_026_execute_mask_command",
"test/test_ssh_client.py::TestExecute::test_027_execute_together",
"test/test_ssh_client.py::TestExecute::test_028_execute_together_exceptions",
"test/test_ssh_client.py::TestExecute::test_029_check_call",
"test/test_ssh_client.py::TestExecute::test_030_check_call_expected",
"test/test_ssh_client.py::TestExecute::test_031_check_stderr",
"test/test_ssh_client.py::TestExecuteThrowHost::test_execute_through_host_auth",
"test/test_ssh_client.py::TestExecuteThrowHost::test_execute_through_host_no_creds",
"test/test_ssh_client.py::TestSftp::test_download",
"test/test_ssh_client.py::TestSftp::test_exists",
"test/test_ssh_client.py::TestSftp::test_isdir",
"test/test_ssh_client.py::TestSftp::test_isfile",
"test/test_ssh_client.py::TestSftp::test_mkdir",
"test/test_ssh_client.py::TestSftp::test_open",
"test/test_ssh_client.py::TestSftp::test_rm_rf",
"test/test_ssh_client.py::TestSftp::test_stat",
"test/test_ssh_client.py::TestSftp::test_upload_dir",
"test/test_ssh_client.py::TestSftp::test_upload_file"
]
| []
| Apache License 2.0 | 2,477 | [
"doc/source/SSHClient.rst",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/subprocess_runner.py"
]
| [
"doc/source/SSHClient.rst",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/subprocess_runner.py"
]
|
berkerpeksag__astor-103 | b47718fa095e456c064d3d222f296fccfe36266b | 2018-05-04 10:53:55 | 991e6e9436c2512241e036464f99114438932d85 | radomirbosak: The only failing environment is `pypy3.3-5.2-alpha1` which fails for `master` branch too, and is fixed in https://github.com/berkerpeksag/astor/pull/102
berkerpeksag: > The only failing environment is pypy3.3-5.2-alpha1 which fails for master branch too, and is fixed in #102
This one is almost ready to be merged. We can move that change into this one and rebase #102 after we merge this.
Also, could you add a note in `docs/changelog.rst`? This is a bugfix so you'll need to add a new "Bug fixes" subsection.
berkerpeksag: Oh, and please add your name to https://github.com/berkerpeksag/astor/blob/master/AUTHORS
radomirbosak: Authors and changelog updated; the tox/travis environment-change commit from #102 was moved here. | diff --git a/.travis.yml b/.travis.yml
index df42c87..f789743 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,7 +8,7 @@ python:
- 3.5
- 3.6
- pypy
- - pypy3.3-5.2-alpha1
+ - pypy3.5
- 3.7-dev
matrix:
allow_failures:
diff --git a/AUTHORS b/AUTHORS
index 39d96d5..9949ccc 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -12,3 +12,4 @@ And with some modifications based on Armin's code:
* Zack M. Davis <[email protected]>
* Ryan Gonzalez <[email protected]>
* Lenny Truong <[email protected]>
+* Radomír Bosák <[email protected]>
diff --git a/astor/code_gen.py b/astor/code_gen.py
index 47d6acc..c5c1ad6 100644
--- a/astor/code_gen.py
+++ b/astor/code_gen.py
@@ -580,6 +580,9 @@ class SourceGenerator(ExplicitNodeVisitor):
index = len(result)
recurse(node)
+
+ # Flush trailing newlines (so that they are part of mystr)
+ self.write('')
mystr = ''.join(result[index:])
del result[index:]
self.colinfo = res_index, str_index # Put it back like we found it
diff --git a/docs/changelog.rst b/docs/changelog.rst
index 781f39d..0faff36 100644
--- a/docs/changelog.rst
+++ b/docs/changelog.rst
@@ -22,6 +22,16 @@ New features
.. _`Issue 86`: https://github.com/berkerpeksag/astor/issues/86
+Bug fixes
+~~~~~~~~~
+
+* Fixed a bug where newlines would be inserted to a wrong place during
+ printing f-strings with trailing newlines.
+ (Reported by Felix Yan and contributed by Radomír Bosák in
+ `Issue 89`_.)
+
+.. _`Issue 89`: https://github.com/berkerpeksag/astor/issues/89
+
0.6.2 - 2017-11-11
------------------
diff --git a/tox.ini b/tox.ini
index 5149f5c..e364485 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = py26, py27, py33, py34, py35, py36, pypy, pypy3.3-5.2-alpha1
+envlist = py26, py27, py33, py34, py35, py36, pypy, pypy3.5
skipsdist = True
[testenv]
| Test failure in Python 3.6.3
Looks like a test-only failure, though.
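For context, a stdlib-only sketch (no astor required) of how the failing construct parses — `host` and `port` are just the placeholder names from the test case:

```python
import ast

# The round-trip case added in the test patch: an f-string whose literal
# text ends with a newline. The parser folds that trailing "\n" into the
# final string piece of a JoinedStr node, so a code generator must emit
# it inside the f-string quotes rather than as a source-level line break.
src = 'x = f"""{host}\\n\\t{port}\\n"""\n'
joined = ast.parse(src).body[0].value
print(type(joined).__name__)  # JoinedStr
```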
```
======================================================================
FAIL: test_convert_stdlib (tests.test_rtrip.RtripTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/python-astor/src/astor-0.6/tests/test_rtrip.py", line 24, in test_convert_stdlib
self.assertEqual(result, [])
AssertionError: Lists differ: ['/usr/lib/python3.6/test/test_fstring.py'[34 chars].py'] != []
First list contains 2 additional elements.
First extra element 0:
'/usr/lib/python3.6/test/test_fstring.py'
+ []
- ['/usr/lib/python3.6/test/test_fstring.py',
- '/usr/lib/python3.6/idlelib/grep.py']
``` | berkerpeksag/astor | diff --git a/tests/test_code_gen.py b/tests/test_code_gen.py
index 0638d9a..1a80445 100644
--- a/tests/test_code_gen.py
+++ b/tests/test_code_gen.py
@@ -476,6 +476,12 @@ class CodegenTestCase(unittest.TestCase, Comparisons):
'''
self.assertAstRoundtrips(source)
+ def test_fstring_trailing_newline(self):
+ source = '''
+ x = f"""{host}\n\t{port}\n"""
+ '''
+ self.assertSrcRoundtripsGtVer(source, (3, 6))
+
if __name__ == '__main__':
unittest.main()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 5
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"nose",
"unittest2",
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements-tox.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/berkerpeksag/astor.git@b47718fa095e456c064d3d222f296fccfe36266b#egg=astor
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
linecache2==1.0.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
nose==1.3.7
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
six==1.17.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
traceback2==1.4.0
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
unittest2==1.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: astor
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- argparse==1.4.0
- linecache2==1.0.0
- nose==1.3.7
- six==1.17.0
- traceback2==1.4.0
- unittest2==1.1.0
prefix: /opt/conda/envs/astor
| [
"tests/test_code_gen.py::CodegenTestCase::test_fstring_trailing_newline"
]
| []
| [
"tests/test_code_gen.py::CodegenTestCase::test_annassign",
"tests/test_code_gen.py::CodegenTestCase::test_arguments",
"tests/test_code_gen.py::CodegenTestCase::test_async_comprehension",
"tests/test_code_gen.py::CodegenTestCase::test_async_def_with_for",
"tests/test_code_gen.py::CodegenTestCase::test_class_definition_with_starbases_and_kwargs",
"tests/test_code_gen.py::CodegenTestCase::test_compile_types",
"tests/test_code_gen.py::CodegenTestCase::test_comprehension",
"tests/test_code_gen.py::CodegenTestCase::test_del_statement",
"tests/test_code_gen.py::CodegenTestCase::test_dictionary_literals",
"tests/test_code_gen.py::CodegenTestCase::test_double_await",
"tests/test_code_gen.py::CodegenTestCase::test_elif",
"tests/test_code_gen.py::CodegenTestCase::test_fstrings",
"tests/test_code_gen.py::CodegenTestCase::test_imports",
"tests/test_code_gen.py::CodegenTestCase::test_inf",
"tests/test_code_gen.py::CodegenTestCase::test_matrix_multiplication",
"tests/test_code_gen.py::CodegenTestCase::test_multiple_call_unpackings",
"tests/test_code_gen.py::CodegenTestCase::test_non_string_leakage",
"tests/test_code_gen.py::CodegenTestCase::test_output_formatting",
"tests/test_code_gen.py::CodegenTestCase::test_pass_arguments_node",
"tests/test_code_gen.py::CodegenTestCase::test_pow",
"tests/test_code_gen.py::CodegenTestCase::test_right_hand_side_dictionary_unpacking",
"tests/test_code_gen.py::CodegenTestCase::test_slicing",
"tests/test_code_gen.py::CodegenTestCase::test_try_expect",
"tests/test_code_gen.py::CodegenTestCase::test_tuple_corner_cases",
"tests/test_code_gen.py::CodegenTestCase::test_unary",
"tests/test_code_gen.py::CodegenTestCase::test_unicode_literals",
"tests/test_code_gen.py::CodegenTestCase::test_with",
"tests/test_code_gen.py::CodegenTestCase::test_yield"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,478 | [
"docs/changelog.rst",
".travis.yml",
"tox.ini",
"AUTHORS",
"astor/code_gen.py"
]
| [
"docs/changelog.rst",
".travis.yml",
"tox.ini",
"AUTHORS",
"astor/code_gen.py"
]
|
cekit__cekit-220 | da759627a7591993df8eb24778e2dd9ea39ae917 | 2018-05-04 12:31:36 | c871246da5035e070cf5f79f486283fabd5bfc46 | diff --git a/bash_completion/cekit b/bash_completion/cekit
index efdb39c..e1465c3 100755
--- a/bash_completion/cekit
+++ b/bash_completion/cekit
@@ -20,6 +20,7 @@ _cekit_build_options()
options+='--build-osbs-user '
options+='--build-osbs-nowait '
options+='--build-osbs-stage '
+ options+='--build-osbs-target '
options+='--build-tech-preview '
echo "$options"
}
@@ -73,6 +74,7 @@ _cekit_complete()
options+='--verbose '
options+='--version '
options+='--config '
+ options+='--redhat '
COMPREPLY=( $( compgen -W "$options" -- $cur ) )
return
diff --git a/cekit/builders/osbs.py b/cekit/builders/osbs.py
index b61a34c..f4c1372 100644
--- a/cekit/builders/osbs.py
+++ b/cekit/builders/osbs.py
@@ -18,6 +18,7 @@ class OSBSBuilder(Builder):
self._user = params.get('user')
self._nowait = params.get('nowait', False)
self._release = params.get('release', False)
+ self._target = params.get('target')
self._stage = params.get('stage', False)
@@ -119,9 +120,14 @@ class OSBSBuilder(Builder):
def build(self):
cmd = [self._rhpkg]
+
if self._user:
cmd += ['--user', self._user]
cmd.append("container-build")
+
+ if self._target:
+ cmd += ['--target', self._target]
+
if self._nowait:
cmd += ['--nowait']
diff --git a/cekit/cli.py b/cekit/cli.py
index 54646be..55f9dd6 100644
--- a/cekit/cli.py
+++ b/cekit/cli.py
@@ -104,6 +104,10 @@ class Cekit(object):
action='store_true',
help='use rhpkg-stage instead of rhpkg')
+ build_group.add_argument('--build-osbs-target',
+ dest='build_osbs_target',
+ help='overrides the default rhpkg target')
+
build_group.add_argument('--build-tech-preview',
action='store_true',
help='perform tech preview build')
@@ -200,7 +204,8 @@ class Cekit(object):
'release': self.args.build_osbs_release,
'tags': self.args.tags,
'pull': self.args.build_pull,
- 'redhat': tools.cfg['common']['redhat']
+ 'redhat': tools.cfg['common']['redhat'],
+ 'target': self.args.build_osbs_target
}
builder = Builder(self.args.build_engine,
diff --git a/cekit/generator/base.py b/cekit/generator/base.py
index e55f7dc..b39cb7e 100644
--- a/cekit/generator/base.py
+++ b/cekit/generator/base.py
@@ -51,6 +51,7 @@ class Generator(object):
self.image = Image(descriptor, os.path.dirname(os.path.abspath(descriptor_path)))
self.target = target
+ self._params = params
if overrides:
self.image = self.override(overrides)
diff --git a/docs/build.rst b/docs/build.rst
index 8702b58..78d1f6d 100644
--- a/docs/build.rst
+++ b/docs/build.rst
@@ -25,6 +25,7 @@ You can execute an container image build by running:
* ``--build-osbs-stage`` -- use ``rhpkg-stage`` tool instead of ``rhpkg``
* ``--build-osbs-release`` [#f2]_ -- perform a OSBS release build
* ``--build-osbs-user`` -- alternative user passed to `rhpkg --user`
+* ``--build-osbs-target`` -- overrides the default ``rhpkg`` target
* ``--build-osbs-nowait`` -- run `rhpkg container-build` with `--nowait` option specified
* ``--build-tech-preview`` [#f2]_ -- updates image descriptor ``name`` key to contain ``-tech-preview`` suffix in family part of the image name
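As a standalone sketch (a hypothetical stand-in for `OSBSBuilder.build()`, not the real class — option order taken from the `check_call` expectations in the test patch), the flag plumbing added here looks like:

```python
# Simplified command assembly: shows where the new `target` option lands
# in the rhpkg invocation relative to the existing user/nowait/scratch
# handling.
def build_cmd(user=None, target=None, nowait=False, release=False):
    cmd = ['rhpkg']
    if user:
        cmd += ['--user', user]
    cmd.append('container-build')
    if target:
        cmd += ['--target', target]
    if nowait:
        cmd += ['--nowait']
    if not release:
        cmd.append('--scratch')
    return cmd

print(build_cmd(target='Foo'))
# ['rhpkg', 'container-build', '--target', 'Foo', '--scratch']
```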
| Add support for overriding target in OSBS builder
We should be able to override the build target using the `--target` parameter to `rhpkg`. | cekit/cekit | diff --git a/tests/test_builder.py b/tests/test_builder.py
index 88ba3ec..b3dd769 100644
--- a/tests/test_builder.py
+++ b/tests/test_builder.py
@@ -114,6 +114,17 @@ def test_osbs_builder_run_rhpkg_user(mocker):
check_call.assert_called_once_with(['rhpkg', '--user', 'Foo', 'container-build', '--scratch'])
+def test_osbs_builder_run_rhpkg_target(mocker):
+ params = {'target': 'Foo',
+ 'redhat': True}
+
+ check_call = mocker.patch.object(subprocess, 'check_call')
+ builder = create_osbs_build_object(mocker, 'osbs', params)
+ builder.build()
+
+ check_call.assert_called_once_with(['rhpkg', 'container-build', '--target', 'Foo', '--scratch'])
+
+
def test_docker_builder_defaults():
params = {'tags': ['foo', 'bar']}
builder = Builder('docker', 'tmp', params)
diff --git a/tests/test_unit_args.py b/tests/test_unit_args.py
index 1e221f1..fdb2042 100644
--- a/tests/test_unit_args.py
+++ b/tests/test_unit_args.py
@@ -90,6 +90,15 @@ def test_args_config(mocker):
assert Cekit().parse().args.config == 'whatever'
+def test_args_target(mocker):
+ mocker.patch.object(sys, 'argv', ['cekit',
+ 'build',
+ '--target',
+ 'foo'])
+
+ assert Cekit().parse().args.target == 'foo'
+
+
def test_args_redhat(mocker):
mocker.patch.object(sys, 'argv', ['cekit',
'--redhat',
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 5
} | 1.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"behave",
"docker",
"lxml",
"mock",
"pytest",
"pytest-cov",
"pytest-mock",
"pykwalify"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | behave==1.2.6
-e git+https://github.com/cekit/cekit.git@da759627a7591993df8eb24778e2dd9ea39ae917#egg=cekit
certifi==2025.1.31
charset-normalizer==3.4.1
colorlog==6.9.0
coverage==7.8.0
docker==7.1.0
docopt==0.6.2
exceptiongroup==1.2.2
idna==3.10
iniconfig==2.1.0
Jinja2==3.1.6
lxml==5.3.1
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
parse==1.20.2
parse_type==0.6.4
pluggy==1.5.0
pykwalify==1.8.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requests==2.32.3
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.12
six==1.17.0
tomli==2.2.1
urllib3==2.3.0
| name: cekit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- behave==1.2.6
- certifi==2025.1.31
- charset-normalizer==3.4.1
- colorlog==6.9.0
- coverage==7.8.0
- docker==7.1.0
- docopt==0.6.2
- exceptiongroup==1.2.2
- idna==3.10
- iniconfig==2.1.0
- jinja2==3.1.6
- lxml==5.3.1
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- parse==1.20.2
- parse-type==0.6.4
- pluggy==1.5.0
- pykwalify==1.8.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- requests==2.32.3
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.12
- six==1.17.0
- tomli==2.2.1
- urllib3==2.3.0
prefix: /opt/conda/envs/cekit
| [
"tests/test_builder.py::test_osbs_builder_run_rhpkg_target"
]
| [
"tests/test_builder.py::test_docker_builder_defaults"
]
| [
"tests/test_builder.py::test_osbs_builder_defaults",
"tests/test_builder.py::test_osbs_builder_redhat",
"tests/test_builder.py::test_osbs_builder_use_rhpkg_staget",
"tests/test_builder.py::test_osbs_builder_nowait",
"tests/test_builder.py::test_osbs_builder_user",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_stage",
"tests/test_builder.py::test_osbs_builder_run_rhpkg",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_nowait",
"tests/test_builder.py::test_osbs_builder_run_rhpkg_user",
"tests/test_builder.py::test_docker_builder_run",
"tests/test_builder.py::test_buildah_builder_run",
"tests/test_builder.py::test_buildah_builder_run_pull",
"tests/test_unit_args.py::test_args_command[generate]",
"tests/test_unit_args.py::test_args_command[build]",
"tests/test_unit_args.py::test_args_command[test]",
"tests/test_unit_args.py::test_args_not_valid_command",
"tests/test_unit_args.py::test_args_tags[tags0-build_tags0-expected0]",
"tests/test_unit_args.py::test_args_tags[tags1-build_tags1-expected1]",
"tests/test_unit_args.py::test_args_tags[tags2-build_tags2-expected2]",
"tests/test_unit_args.py::test_args_tags[tags3-build_tags3-expected3]",
"tests/test_unit_args.py::test_args_build_pull",
"tests/test_unit_args.py::test_args_build_engine[osbs]",
"tests/test_unit_args.py::test_args_build_engine[docker]",
"tests/test_unit_args.py::test_args_build_engine[buildah]",
"tests/test_unit_args.py::test_args_osbs_stage",
"tests/test_unit_args.py::test_args_osbs_stage_false",
"tests/test_unit_args.py::test_args_invalid_build_engine",
"tests/test_unit_args.py::test_args_osbs_user",
"tests/test_unit_args.py::test_args_config_default",
"tests/test_unit_args.py::test_args_config",
"tests/test_unit_args.py::test_args_target",
"tests/test_unit_args.py::test_args_redhat",
"tests/test_unit_args.py::test_args_redhat_default",
"tests/test_unit_args.py::test_args_osbs_nowait",
"tests/test_unit_args.py::test_args_osbs_no_nowait"
]
| []
| MIT License | 2,480 | [
"cekit/builders/osbs.py",
"cekit/generator/base.py",
"bash_completion/cekit",
"docs/build.rst",
"cekit/cli.py"
]
| [
"cekit/builders/osbs.py",
"cekit/generator/base.py",
"bash_completion/cekit",
"docs/build.rst",
"cekit/cli.py"
]
|
|
dask__dask-3472 | 7c419580037f552befc2650cb13967dd6bdef86a | 2018-05-04 22:37:21 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | crusaderky: ```
import dask.array as da
a = da.ones((5, 40), chunks=10)
b = da.ones((40, ), chunks=10)
da.einsum('...i,...i', a, b, split_every=2).visualize()
```
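A numpy-only illustration (hypothetical shapes) of the chunk-level trick this PR uses in `dask/array/chunk.py` — appending a singleton axis per contracted index so `atop` can sum the per-chunk partials rather than concatenate them:

```python
import numpy as np

# One chunk's worth of the contraction '...i,...i': contract with
# np.einsum, then pad the result with a length-1 axis per contracted
# index so every partial keeps one axis per input index.
a = np.ones((5, 10))
b = np.ones((10,))
ncontract_inds = 1                      # only 'i' is contracted
partial = np.einsum('...i,...i', a, b)  # shape (5,)
partial = partial.reshape(partial.shape + (1,) * ncontract_inds)
print(partial.shape)  # (5, 1)
```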

crusaderky: Ready for final review and merge
crusaderky: @shoyer @mrocklin apologies for the delays. Explicit kwargs done as requested. | diff --git a/dask/array/chunk.py b/dask/array/chunk.py
index 2879f38e5..fe4d64f76 100644
--- a/dask/array/chunk.py
+++ b/dask/array/chunk.py
@@ -235,3 +235,14 @@ def view(x, dtype, order='C'):
else:
x = np.asfortranarray(x)
return x.T.view(dtype).T
+
+
+def einsum(*operands, **kwargs):
+ subscripts = kwargs.pop('subscripts')
+ ncontract_inds = kwargs.pop('ncontract_inds')
+ dtype = kwargs.pop('kernel_dtype')
+ chunk = np.einsum(subscripts, *operands, dtype=dtype, **kwargs)
+
+ # Avoid concatenate=True in atop by adding 1's
+ # for the contracted dimensions
+ return chunk.reshape(chunk.shape + (1,) * ncontract_inds)
diff --git a/dask/array/einsumfuncs.py b/dask/array/einsumfuncs.py
index 483335a1d..f32a5b7c7 100644
--- a/dask/array/einsumfuncs.py
+++ b/dask/array/einsumfuncs.py
@@ -7,6 +7,7 @@ import numpy as np
from numpy.compat import basestring
from .core import (atop, asarray)
+from . import chunk
einsum_symbols = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
einsum_symbols_set = set(einsum_symbols)
@@ -182,24 +183,20 @@ def parse_einsum_input(operands):
return (input_subscripts, output_subscript, operands)
-def _einsum_kernel(*operands, **kwargs):
- subscripts = kwargs.pop('subscripts')
- ncontract_inds = kwargs.pop('ncontract_inds')
- dtype = kwargs.pop('kernel_dtype')
- chunk = np.einsum(subscripts, *operands, dtype=dtype, **kwargs)
-
- # Avoid concatenate=True in atop by adding 1's
- # for the contracted dimensions
- return chunk.reshape(chunk.shape + (1,) * ncontract_inds)
-
-
einsum_can_optimize = LooseVersion(np.__version__) >= LooseVersion("1.12.0")
@wraps(np.einsum)
def einsum(*operands, **kwargs):
- dtype = kwargs.get('dtype')
- optimize = kwargs.get('optimize')
+ casting = kwargs.pop('casting', 'safe')
+ dtype = kwargs.pop('dtype', None)
+ optimize = kwargs.pop('optimize', False)
+ order = kwargs.pop('order', 'K')
+ split_every = kwargs.pop('split_every', None)
+ if kwargs:
+ raise TypeError("einsum() got unexpected keyword "
+ "argument(s) %s" % ",".join(kwargs))
+
einsum_dtype = dtype
inputs, outputs, ops = parse_einsum_input(operands)
@@ -209,16 +206,18 @@ def einsum(*operands, **kwargs):
if dtype is None:
dtype = np.result_type(*[o.dtype for o in ops])
- if optimize is None:
- optimize = False
-
- if einsum_can_optimize and optimize is not False:
- # Avoid computation of dask arrays within np.einsum_path
- # by passing in small numpy arrays broadcasted
- # up to the right shape
- fake_ops = [np.broadcast_to(o.dtype.type(0), shape=o.shape)
- for o in ops]
- optimize, _ = np.einsum_path(subscripts, *fake_ops, optimize=optimize)
+ if einsum_can_optimize:
+ if optimize is not False:
+ # Avoid computation of dask arrays within np.einsum_path
+ # by passing in small numpy arrays broadcasted
+ # up to the right shape
+ fake_ops = [np.broadcast_to(o.dtype.type(0), shape=o.shape)
+ for o in ops]
+ optimize, _ = np.einsum_path(subscripts, *fake_ops,
+ optimize=optimize)
+ kwargs = {'optimize': optimize}
+ else:
+ kwargs = {}
inputs = [tuple(i) for i in inputs.split(",")]
@@ -229,27 +228,21 @@ def einsum(*operands, **kwargs):
contract_inds = all_inds - set(outputs)
ncontract_inds = len(contract_inds)
- # Update kwargs with np.einsum parameters
- kwargs['subscripts'] = subscripts
- kwargs['kernel_dtype'] = einsum_dtype
- kwargs['ncontract_inds'] = ncontract_inds
-
- if einsum_can_optimize:
- kwargs['optimize'] = optimize
-
- # Update kwargs with atop parameters
- kwargs['adjust_chunks'] = {ind: 1 for ind in contract_inds}
- kwargs['dtype'] = dtype
-
# Introduce the contracted indices into the atop product
# so that we get numpy arrays, not lists
- result = atop(_einsum_kernel, tuple(outputs) + tuple(contract_inds),
+ result = atop(chunk.einsum, tuple(outputs) + tuple(contract_inds),
*(a for ap in zip(ops, inputs) for a in ap),
- **kwargs)
+ # atop parameters
+ adjust_chunks={ind: 1 for ind in contract_inds}, dtype=dtype,
+ # np.einsum parameters
+ subscripts=subscripts, kernel_dtype=einsum_dtype,
+ ncontract_inds=ncontract_inds, order=order,
+ casting=casting, **kwargs)
# Now reduce over any extra contraction dimensions
if ncontract_inds > 0:
size = len(outputs)
- return result.sum(axis=list(range(size, size + ncontract_inds)))
+ return result.sum(axis=list(range(size, size + ncontract_inds)),
+ split_every=split_every)
return result
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index ae9245a15..1d1657aa7 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -9,6 +9,7 @@ Array
+++++
- Fix ``rechunk`` with chunksize of -1 in a dict (:pr:`3469`) `Stephan Hoyer`_
+- ``einsum`` now accepts the ``split_every`` parameter (:pr:`3396`) `Guido Imperiale`_
Dataframe
+++++++++
diff --git a/docs/source/delayed.rst b/docs/source/delayed.rst
index 4cfd86f86..63bff2b9b 100644
--- a/docs/source/delayed.rst
+++ b/docs/source/delayed.rst
@@ -69,7 +69,7 @@ execution, placing the function and its arguments into a task graph.
.. autosummary::
delayed
-We slightly modify our code our code by wrapping functions in ``delayed``.
+We slightly modify our code by wrapping functions in ``delayed``.
This delays the execution of the function and generates a dask graph instead.
.. code-block:: python
| da.einsum ignores split_every
As of git head (495a3611c1ccc12ccc37cf8a56ec3a88743815f5), da.einsum accepts, but ignores, the split_every parameter:
```
import dask.array as da
a = da.ones((5, 40), chunks=10)
b = da.ones((40, ), chunks=10)
da.einsum('...i,...i', a, b, split_every=2).visualize()
```

| dask/dask | diff --git a/dask/array/tests/test_routines.py b/dask/array/tests/test_routines.py
index a5a214eab..f90291102 100644
--- a/dask/array/tests/test_routines.py
+++ b/dask/array/tests/test_routines.py
@@ -1407,6 +1407,19 @@ def test_einsum_casting(casting):
da.einsum(sig, *np_inputs, casting=casting))
[email protected]('split_every', [None, 2])
+def test_einsum_split_every(split_every):
+ np_inputs, da_inputs = _numpy_and_dask_inputs('a')
+ assert_eq(np.einsum('a', *np_inputs),
+ da.einsum('a', *da_inputs, split_every=split_every))
+
+
+def test_einsum_invalid_args():
+ _, da_inputs = _numpy_and_dask_inputs('a')
+ with pytest.raises(TypeError):
+ da.einsum('a', *da_inputs, foo=1, bar=2)
+
+
def test_einsum_broadcasting_contraction():
a = np.random.rand(1, 5, 4)
b = np.random.rand(4, 6)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_git_commit_hash",
"has_media",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 4
} | 1.21 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8",
"pytest-xdist",
"moto"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
boto3==1.23.10
botocore==1.26.10
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==2.2.1
cryptography==40.0.2
-e git+https://github.com/dask/dask.git@7c419580037f552befc2650cb13967dd6bdef86a#egg=dask
dataclasses==0.8
distributed==1.21.8
execnet==1.9.0
flake8==5.0.4
HeapDict==1.0.1
idna==3.10
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
jmespath==0.10.0
locket==1.0.0
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
moto==4.0.13
msgpack==1.0.5
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
partd==1.2.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-xdist==3.0.2
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.27.1
responses==0.17.0
s3transfer==0.5.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
toolz==0.12.0
tornado==6.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
Werkzeug==2.0.3
xmltodict==0.14.2
zict==2.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.23.10
- botocore==1.26.10
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.2.1
- cryptography==40.0.2
- dataclasses==0.8
- distributed==1.21.8
- execnet==1.9.0
- flake8==5.0.4
- heapdict==1.0.1
- idna==3.10
- importlib-metadata==4.2.0
- jinja2==3.0.3
- jmespath==0.10.0
- locket==1.0.0
- markupsafe==2.0.1
- mccabe==0.7.0
- moto==4.0.13
- msgpack==1.0.5
- numpy==1.19.5
- pandas==1.1.5
- partd==1.2.0
- psutil==7.0.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pytest-xdist==3.0.2
- python-dateutil==2.9.0.post0
- pytz==2025.2
- requests==2.27.1
- responses==0.17.0
- s3transfer==0.5.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- toolz==0.12.0
- tornado==6.1
- urllib3==1.26.20
- werkzeug==2.0.3
- xmltodict==0.14.2
- zict==2.1.0
prefix: /opt/conda/envs/dask
| [
"dask/array/tests/test_routines.py::test_einsum_split_every[None]",
"dask/array/tests/test_routines.py::test_einsum_split_every[2]",
"dask/array/tests/test_routines.py::test_einsum_invalid_args"
]
| []
| [
"dask/array/tests/test_routines.py::test_array",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_no_args[atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape0-chunks0-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape1-chunks1-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape2-chunks2-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape3-chunks3-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_one_arg[shape4-chunks4-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape10-shape20-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape11-shape21-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape12-shape22-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape13-shape23-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape14-shape24-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape15-shape25-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape16-shape26-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape17-shape27-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape18-shape28-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape19-shape29-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape110-shape210-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape111-shape211-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape112-shape212-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape113-shape213-atleast_3d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_1d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_2d]",
"dask/array/tests/test_routines.py::test_atleast_nd_two_args[shape114-shape214-atleast_3d]",
"dask/array/tests/test_routines.py::test_transpose",
"dask/array/tests/test_routines.py::test_transpose_negative_axes",
"dask/array/tests/test_routines.py::test_swapaxes",
"dask/array/tests/test_routines.py::test_flip[shape0-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape0-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape0-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape1-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape1-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape1-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape2-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape2-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape2-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape3-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape3-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape3-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_flip[shape4-flipud-kwargs0]",
"dask/array/tests/test_routines.py::test_flip[shape4-fliplr-kwargs1]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs2]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs3]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs4]",
"dask/array/tests/test_routines.py::test_flip[shape4-flip-kwargs5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape0-y_shape0]",
"dask/array/tests/test_routines.py::test_matmul[x_shape1-y_shape1]",
"dask/array/tests/test_routines.py::test_matmul[x_shape2-y_shape2]",
"dask/array/tests/test_routines.py::test_matmul[x_shape3-y_shape3]",
"dask/array/tests/test_routines.py::test_matmul[x_shape4-y_shape4]",
"dask/array/tests/test_routines.py::test_matmul[x_shape5-y_shape5]",
"dask/array/tests/test_routines.py::test_matmul[x_shape6-y_shape6]",
"dask/array/tests/test_routines.py::test_matmul[x_shape7-y_shape7]",
"dask/array/tests/test_routines.py::test_matmul[x_shape8-y_shape8]",
"dask/array/tests/test_routines.py::test_matmul[x_shape9-y_shape9]",
"dask/array/tests/test_routines.py::test_matmul[x_shape10-y_shape10]",
"dask/array/tests/test_routines.py::test_matmul[x_shape11-y_shape11]",
"dask/array/tests/test_routines.py::test_matmul[x_shape12-y_shape12]",
"dask/array/tests/test_routines.py::test_matmul[x_shape13-y_shape13]",
"dask/array/tests/test_routines.py::test_matmul[x_shape14-y_shape14]",
"dask/array/tests/test_routines.py::test_matmul[x_shape15-y_shape15]",
"dask/array/tests/test_routines.py::test_matmul[x_shape16-y_shape16]",
"dask/array/tests/test_routines.py::test_matmul[x_shape17-y_shape17]",
"dask/array/tests/test_routines.py::test_matmul[x_shape18-y_shape18]",
"dask/array/tests/test_routines.py::test_matmul[x_shape19-y_shape19]",
"dask/array/tests/test_routines.py::test_matmul[x_shape20-y_shape20]",
"dask/array/tests/test_routines.py::test_matmul[x_shape21-y_shape21]",
"dask/array/tests/test_routines.py::test_matmul[x_shape22-y_shape22]",
"dask/array/tests/test_routines.py::test_matmul[x_shape23-y_shape23]",
"dask/array/tests/test_routines.py::test_matmul[x_shape24-y_shape24]",
"dask/array/tests/test_routines.py::test_tensordot",
"dask/array/tests/test_routines.py::test_tensordot_2[0]",
"dask/array/tests/test_routines.py::test_tensordot_2[1]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes2]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes3]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes4]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes5]",
"dask/array/tests/test_routines.py::test_tensordot_2[axes6]",
"dask/array/tests/test_routines.py::test_dot_method",
"dask/array/tests/test_routines.py::test_vdot[shape0-chunks0]",
"dask/array/tests/test_routines.py::test_vdot[shape1-chunks1]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape0-0-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape1-1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape2-2-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-ndim-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-sum-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_along_axis[shape3--1-range2-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape0-axes0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape1-0-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape2-axes2-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape3-axes3-range-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum0-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-sum1-<lambda>]",
"dask/array/tests/test_routines.py::test_apply_over_axes[shape4-axes4-range-<lambda>]",
"dask/array/tests/test_routines.py::test_ptp[shape0-None]",
"dask/array/tests/test_routines.py::test_ptp[shape1-0]",
"dask/array/tests/test_routines.py::test_ptp[shape2-1]",
"dask/array/tests/test_routines.py::test_ptp[shape3-2]",
"dask/array/tests/test_routines.py::test_ptp[shape4--1]",
"dask/array/tests/test_routines.py::test_diff[0-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[0-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[0-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[0-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[1-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[1-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[1-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[1-shape3--1]",
"dask/array/tests/test_routines.py::test_diff[2-shape0-0]",
"dask/array/tests/test_routines.py::test_diff[2-shape1-1]",
"dask/array/tests/test_routines.py::test_diff[2-shape2-2]",
"dask/array/tests/test_routines.py::test_diff[2-shape3--1]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[None-None-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[0-0-shape1]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape0]",
"dask/array/tests/test_routines.py::test_ediff1d[to_end2-to_begin2-shape1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[1-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[1-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[1-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[1-shape7-varargs7-axis7]",
"dask/array/tests/test_routines.py::test_gradient[2-shape0-varargs0-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape1-varargs1-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape2-varargs2-None]",
"dask/array/tests/test_routines.py::test_gradient[2-shape3-varargs3-0]",
"dask/array/tests/test_routines.py::test_gradient[2-shape4-varargs4-1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape5-varargs5-2]",
"dask/array/tests/test_routines.py::test_gradient[2-shape6-varargs6--1]",
"dask/array/tests/test_routines.py::test_gradient[2-shape7-varargs7-axis7]",
"dask/array/tests/test_routines.py::test_bincount",
"dask/array/tests/test_routines.py::test_bincount_with_weights",
"dask/array/tests/test_routines.py::test_bincount_raises_informative_error_on_missing_minlength_kwarg",
"dask/array/tests/test_routines.py::test_digitize",
"dask/array/tests/test_routines.py::test_histogram",
"dask/array/tests/test_routines.py::test_histogram_alternative_bins_range",
"dask/array/tests/test_routines.py::test_histogram_return_type",
"dask/array/tests/test_routines.py::test_histogram_extra_args_and_shapes",
"dask/array/tests/test_routines.py::test_cov",
"dask/array/tests/test_routines.py::test_corrcoef",
"dask/array/tests/test_routines.py::test_round",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[False-True-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-False-True]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-False]",
"dask/array/tests/test_routines.py::test_unique_kwargs[True-True-True]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape0-chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape1-chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape2-chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_unique_rand[shape3-chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[True-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape0-test_chunks0-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape1-test_chunks1-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape2-test_chunks2-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape0-elements_chunks0-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape1-elements_chunks1-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape2-elements_chunks2-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-23]",
"dask/array/tests/test_routines.py::test_isin_rand[False-test_shape3-test_chunks3-elements_shape3-elements_chunks3-0-10-796]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[True]",
"dask/array/tests/test_routines.py::test_isin_assume_unique[False]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[None-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[0-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[-1-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis4-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-7-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-9-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift3-chunks1]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks0]",
"dask/array/tests/test_routines.py::test_roll[axis5-shift4-chunks1]",
"dask/array/tests/test_routines.py::test_ravel",
"dask/array/tests/test_routines.py::test_squeeze[None-True]",
"dask/array/tests/test_routines.py::test_squeeze[None-False]",
"dask/array/tests/test_routines.py::test_squeeze[0-True]",
"dask/array/tests/test_routines.py::test_squeeze[0-False]",
"dask/array/tests/test_routines.py::test_squeeze[-1-True]",
"dask/array/tests/test_routines.py::test_squeeze[-1-False]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-True]",
"dask/array/tests/test_routines.py::test_squeeze[axis3-False]",
"dask/array/tests/test_routines.py::test_vstack",
"dask/array/tests/test_routines.py::test_hstack",
"dask/array/tests/test_routines.py::test_dstack",
"dask/array/tests/test_routines.py::test_take",
"dask/array/tests/test_routines.py::test_take_dask_from_numpy",
"dask/array/tests/test_routines.py::test_compress",
"dask/array/tests/test_routines.py::test_extract",
"dask/array/tests/test_routines.py::test_isnull",
"dask/array/tests/test_routines.py::test_isclose",
"dask/array/tests/test_routines.py::test_allclose",
"dask/array/tests/test_routines.py::test_choose",
"dask/array/tests/test_routines.py::test_piecewise",
"dask/array/tests/test_routines.py::test_piecewise_otherwise",
"dask/array/tests/test_routines.py::test_argwhere",
"dask/array/tests/test_routines.py::test_argwhere_obj",
"dask/array/tests/test_routines.py::test_argwhere_str",
"dask/array/tests/test_routines.py::test_where",
"dask/array/tests/test_routines.py::test_where_scalar_dtype",
"dask/array/tests/test_routines.py::test_where_bool_optimization",
"dask/array/tests/test_routines.py::test_where_nonzero",
"dask/array/tests/test_routines.py::test_where_incorrect_args",
"dask/array/tests/test_routines.py::test_count_nonzero",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[None]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[0]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis2]",
"dask/array/tests/test_routines.py::test_count_nonzero_obj_axis[axis3]",
"dask/array/tests/test_routines.py::test_count_nonzero_str",
"dask/array/tests/test_routines.py::test_flatnonzero",
"dask/array/tests/test_routines.py::test_nonzero",
"dask/array/tests/test_routines.py::test_nonzero_method",
"dask/array/tests/test_routines.py::test_coarsen",
"dask/array/tests/test_routines.py::test_coarsen_with_excess",
"dask/array/tests/test_routines.py::test_insert",
"dask/array/tests/test_routines.py::test_multi_insert",
"dask/array/tests/test_routines.py::test_result_type",
"dask/array/tests/test_routines.py::test_einsum[abc,bad->abcd]",
"dask/array/tests/test_routines.py::test_einsum[abcdef,bcdfg->abcdeg]",
"dask/array/tests/test_routines.py::test_einsum[ea,fb,abcd,gc,hd->efgh]",
"dask/array/tests/test_routines.py::test_einsum[ab,b]",
"dask/array/tests/test_routines.py::test_einsum[aa]",
"dask/array/tests/test_routines.py::test_einsum[a,a->]",
"dask/array/tests/test_routines.py::test_einsum[a,a->a]",
"dask/array/tests/test_routines.py::test_einsum[a,a]",
"dask/array/tests/test_routines.py::test_einsum[a,b]",
"dask/array/tests/test_routines.py::test_einsum[a,b,c]",
"dask/array/tests/test_routines.py::test_einsum[a]",
"dask/array/tests/test_routines.py::test_einsum[ba,b]",
"dask/array/tests/test_routines.py::test_einsum[ba,b->]",
"dask/array/tests/test_routines.py::test_einsum[defab,fedbc->defac]",
"dask/array/tests/test_routines.py::test_einsum[ab...,bc...->ac...]",
"dask/array/tests/test_routines.py::test_einsum[a...a]",
"dask/array/tests/test_routines.py::test_einsum[abc...->cba...]",
"dask/array/tests/test_routines.py::test_einsum[...ab->...a]",
"dask/array/tests/test_routines.py::test_einsum[a...a->a...]",
"dask/array/tests/test_routines.py::test_einsum[...abc,...abcd->...d]",
"dask/array/tests/test_routines.py::test_einsum[ab...,b->ab...]",
"dask/array/tests/test_routines.py::test_einsum[aa->a]",
"dask/array/tests/test_routines.py::test_einsum[ab,ab,c->c]",
"dask/array/tests/test_routines.py::test_einsum[aab,bc->ac]",
"dask/array/tests/test_routines.py::test_einsum[aab,bcc->ac]",
"dask/array/tests/test_routines.py::test_einsum[fdf,cdd,ccd,afe->ae]",
"dask/array/tests/test_routines.py::test_einsum[fff,fae,bef,def->abd]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts0]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts1]",
"dask/array/tests/test_routines.py::test_einsum_optimize[optimize_opts2]",
"dask/array/tests/test_routines.py::test_einsum_order[C]",
"dask/array/tests/test_routines.py::test_einsum_order[F]",
"dask/array/tests/test_routines.py::test_einsum_order[A]",
"dask/array/tests/test_routines.py::test_einsum_order[K]",
"dask/array/tests/test_routines.py::test_einsum_casting[no]",
"dask/array/tests/test_routines.py::test_einsum_casting[equiv]",
"dask/array/tests/test_routines.py::test_einsum_casting[safe]",
"dask/array/tests/test_routines.py::test_einsum_casting[same_kind]",
"dask/array/tests/test_routines.py::test_einsum_casting[unsafe]",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction2",
"dask/array/tests/test_routines.py::test_einsum_broadcasting_contraction3"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,482 | [
"dask/array/chunk.py",
"docs/source/changelog.rst",
"dask/array/einsumfuncs.py",
"docs/source/delayed.rst"
]
| [
"dask/array/chunk.py",
"docs/source/changelog.rst",
"dask/array/einsumfuncs.py",
"docs/source/delayed.rst"
]
|
EdinburghGenomics__clarity_scripts-52 | 5b85e1e462c701d9e01f9a786d82688cfab4d391 | 2018-05-04 22:38:02 | c2eec150467a3cd2185408cd44a6a773b8b6ee99 | diff --git a/scripts/populate_review_step.py b/scripts/populate_review_step.py
index 9cccfd0..3c0e723 100644
--- a/scripts/populate_review_step.py
+++ b/scripts/populate_review_step.py
@@ -206,7 +206,7 @@ class PullSampleInfo(PullInfo):
('SR % Mapped', 'aggregated.pc_mapped_reads'),
('SR % Duplicates', 'aggregated.pc_duplicate_reads'),
('SR Mean Coverage', 'aggregated.mean_coverage'),
- ('SR Species Found', 'matching_species'),
+ ('SR Species Found', 'aggregated.matching_species'),
('SR Sex Check Match', 'aggregated.gender_match'),
('SR Genotyping Match', 'aggregated.genotype_match'),
('SR Freemix', 'sample_contamination.freemix'),
@@ -237,12 +237,10 @@ class PullSampleInfo(PullInfo):
return artifacts
def field_from_entity(self, entity, api_field):
- # TODO: remove once Rest API has a sensible field for species found
- if api_field == 'matching_species':
- species = entity[api_field]
- return ', '.join(species)
-
- return super().field_from_entity(entity, api_field)
+ field = super().field_from_entity(entity, api_field)
+ if api_field == 'aggregated.matching_species':
+ return ', '.join(field)
+ return field
class PushInfo(StepPopulator):
| Matching species field is missing aggregated
https://github.com/EdinburghGenomics/clarity_scripts/blob/5b85e1e462c701d9e01f9a786d82688cfab4d391/scripts/populate_review_step.py#L209
Need to add the `aggregated` in the field name | EdinburghGenomics/clarity_scripts | diff --git a/tests/test_populate_review_step.py b/tests/test_populate_review_step.py
index 6e6c8e0..d6b8ec2 100644
--- a/tests/test_populate_review_step.py
+++ b/tests/test_populate_review_step.py
@@ -150,13 +150,15 @@ class TestPullSampleInfo(TestPopulator):
'sample_id': 'a_sample',
'user_sample_id': 'a_user_sample_id',
'clean_yield_in_gb': 5,
- 'aggregated': {'clean_pc_q30': 70,
- 'pc_mapped_reads': 75,
- 'pc_duplicate_reads': 5,
- 'mean_coverage': 30,
- 'gender_match': 'Match',
- 'genotype_match': 'Match'},
- 'matching_species': ['Homo sapiens', 'Thingius thingy'],
+ 'aggregated': {
+ 'clean_pc_q30': 70,
+ 'pc_mapped_reads': 75,
+ 'pc_duplicate_reads': 5,
+ 'mean_coverage': 30,
+ 'gender_match': 'Match',
+ 'genotype_match': 'Match',
+ 'matching_species': ['Homo sapiens', 'Thingius thingy'],
+ },
'sample_contamination': {'freemix': 0.1},
'reviewed': 'pass',
'review_comments': 'alright',
@@ -197,7 +199,7 @@ class TestPullSampleInfo(TestPopulator):
assert poa.return_value[1].udf['SR Useable Comments'] == 'AR: Review failed'
def test_field_from_entity(self):
- obs = self.epp.field_from_entity(self.fake_rest_entity, 'matching_species')
+ obs = self.epp.field_from_entity(self.fake_rest_entity, 'aggregated.matching_species')
assert obs == 'Homo sapiens, Thingius thingy'
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asana==0.6.7
attrs==22.2.0
cached-property==1.5.2
certifi==2021.5.30
-e git+https://github.com/EdinburghGenomics/clarity_scripts.git@5b85e1e462c701d9e01f9a786d82688cfab4d391#egg=clarity_scripts
coverage==6.2
EGCG-Core==0.8.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==2.8
MarkupSafe==2.0.1
oauthlib==3.2.2
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyclarity-lims==0.4.8
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
PyYAML==6.0.1
requests==2.14.2
requests-oauthlib==0.8.0
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: clarity_scripts
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asana==0.6.7
- attrs==22.2.0
- cached-property==1.5.2
- coverage==6.2
- egcg-core==0.8.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==2.8
- markupsafe==2.0.1
- oauthlib==3.2.2
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyclarity-lims==0.4.8
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pyyaml==6.0.1
- requests==2.14.2
- requests-oauthlib==0.8.0
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/clarity_scripts
| [
"tests/test_populate_review_step.py::TestPullSampleInfo::test_field_from_entity"
]
| []
| [
"tests/test_populate_review_step.py::TestEPP::test_init",
"tests/test_populate_review_step.py::TestPopulator::test_init",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_assess_sample",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_field_from_entity",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_init",
"tests/test_populate_review_step.py::TestPullRunElementInfo::test_pull",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_assess_sample",
"tests/test_populate_review_step.py::TestPullSampleInfo::test_init",
"tests/test_populate_review_step.py::TestPushRunElementInfo::test_init",
"tests/test_populate_review_step.py::TestPushRunElementInfo::test_push",
"tests/test_populate_review_step.py::TestPushSampleInfo::test_init",
"tests/test_populate_review_step.py::TestPushSampleInfo::test_push"
]
| []
| MIT License | 2,483 | [
"scripts/populate_review_step.py"
]
| [
"scripts/populate_review_step.py"
]
|
|
mgerst__flag-slurper-19 | fe4dca622d1a393844e2588056f9e8e535a9a5ad | 2018-05-06 19:34:02 | dca0114c619f7a091d852d4f23cacb73c3c93ed4 | diff --git a/README.md b/README.md
index 667cfb6..f10a106 100644
--- a/README.md
+++ b/README.md
@@ -117,7 +117,7 @@ You can specify as many flags as you want. All of the following fields are requi
Here's an example of an Auto PWN run that obtained flags:
-[](https://asciinema.org/a/V1NNowGyb2wGmEnhcMEgp6phq)
+[](https://asciinema.org/a/SZK8Ma0lUzX8H1CE02sLOjVIT)
Credentials
-----------
diff --git a/flag_slurper/autolib/exploit.py b/flag_slurper/autolib/exploit.py
index b99b247..b48b9b8 100644
--- a/flag_slurper/autolib/exploit.py
+++ b/flag_slurper/autolib/exploit.py
@@ -1,13 +1,16 @@
-import os
+import logging
import textwrap
-from typing import List, Tuple, Union, Dict, Any
+import time
+from typing import List, Tuple, Union, Dict, Any, Optional
+import os
import paramiko
FlagConf = Dict[str, Any]
+logger = logging.getLogger(__name__)
-def find_flags(ssh: paramiko.SSHClient, base_dir='/root') -> List[Tuple[str, str]]:
+def find_flags(ssh: paramiko.SSHClient, base_dir: str = '/root') -> List[Tuple[str, str]]:
search_glob = os.path.join(base_dir, '*flag*')
_, stdout, stderr = ssh.exec_command('ls {}'.format(search_glob))
@@ -25,20 +28,48 @@ def find_flags(ssh: paramiko.SSHClient, base_dir='/root') -> List[Tuple[str, str
return found
-def get_file_contents(ssh: paramiko.SSHClient, file: str) -> Union[str, bool]:
+def get_file_contents(ssh: paramiko.SSHClient, file: str, sudo: Optional[str] = None) -> Union[str, bool]:
"""
Retrieve the contents of file from the remote server.
This does ***NOT*** use SFTP since teams might disallow SFTP. Granted if you are
smart enough to disallow SFTP, you probably don't have default creds either, but
sometimes people mess up.
+
+ :param ssh: The current ssh session to use
+ :param file: The full path to the file to retrieve
+ :param sudo: The sudo password to use if the current session can sudo
+ """
+ ret = get_file(ssh, file, sudo)
+ if ret:
+ ret = ret.decode('utf-8').strip()
+ return ret
+
+
+def get_file(ssh: paramiko.SSHClient, file: str, sudo: Optional[str] = None) -> Union[bytes, bool]:
+ """
+ Retrieve a file from the remote server.
+
+ This does ***NOT*** use SFTP since teams might disallow SFTP. Granted if you are
+ smart enough to disallow SFTP, you probably don't have default creds either, but
+ sometimes people mess up.
+
+ :param ssh: The current ssh session to use
+ :param file: The full path to the file to retrieve
+ :param sudo: The sudo password to use if the current session can sudo
"""
- _, stdout, stderr = ssh.exec_command('cat {}'.format(file))
- err = stderr.read()
+ if sudo:
+ logger.debug("Using sudo")
+ _, stdout, stderr = run_sudo(ssh, 'cat {}'.format(file), sudo)
+ else:
+ _, stdout, stderr = ssh.exec_command('cat {}'.format(file))
+
+ err = stderr.read().decode('utf-8').strip()
if len(err) > 0:
+ logger.error("There was an error getting the file %s: %s", file, err)
return False
- return stdout.read().decode('utf-8').strip()
+ return stdout.read()
def _run_command(ssh: paramiko.SSHClient, command: str) -> str:
@@ -61,3 +92,26 @@ def get_system_info(ssh: paramiko.SSHClient) -> str:
if len(lsb_release) > 0:
sysinfo.append('LSB Release:\n{}'.format(textwrap.indent(lsb_release, ' > ')))
return '\n'.join(sysinfo)
+
+
+def can_sudo(ssh: paramiko.SSHClient, password: str) -> bool:
+ logger.debug("Attempting sudo")
+ command = "sudo -S -p '' whoami"
+ stdin, stdout, _ = ssh.exec_command(command)
+ stdin.write(password + '\n')
+ stdin.flush()
+ user = stdout.read().decode('utf-8').strip()
+ ret = user == "root"
+ if ret:
+ logger.debug("<<<SUDO FOUND>>>")
+ return ret
+
+
+CHAN_FILE_T = Tuple[paramiko.ChannelFile, paramiko.ChannelFile, paramiko.ChannelFile]
+
+
+def run_sudo(ssh: paramiko.SSHClient, command: str, password: str) -> CHAN_FILE_T:
+ stdin, stdout, stderr = ssh.exec_command("sudo -S -p ' ' {}".format(command))
+ stdin.write(password + '\n')
+ stdin.flush()
+ return stdin, stdout, stderr
diff --git a/flag_slurper/autolib/models.py b/flag_slurper/autolib/models.py
index 78b2e1a..16ef36b 100644
--- a/flag_slurper/autolib/models.py
+++ b/flag_slurper/autolib/models.py
@@ -1,8 +1,10 @@
+import click
import peewee
import playhouse.db_url
# We want to allow setting up the database connection from .flagrc
database_proxy = peewee.Proxy()
+SUDO_FLAG = click.style('!', fg='red', bold=True)
def initialize(database_url: str):
@@ -54,9 +56,15 @@ class Credential(BaseModel):
state = peewee.CharField(choices=[WORKS, REJECT])
bag = peewee.ForeignKeyField(CredentialBag, backref='credentials')
service = peewee.ForeignKeyField(Service, backref='credentials')
+ sudo = peewee.BooleanField(default=False)
def __str__(self):
- return "{}:{}".format(self.bag.username, self.bag.password)
+ flags = ""
+
+ if self.sudo:
+ flags += SUDO_FLAG
+
+ return "{}:{}{}".format(self.bag.username, self.bag.password, flags)
def __repr__(self):
return "<Credential {}>".format(self.__str__())
@@ -78,7 +86,12 @@ class CaptureNote(BaseModel):
searched = peewee.BooleanField(default=False)
def __str__(self):
- return "{} -> {}".format(self.location, self.data)
+ flags = ""
+
+ if "Used Sudo" in self.notes:
+ flags += SUDO_FLAG
+
+ return "{} -> {}{}".format(self.location, self.data, flags)
def create(): # pragma: no cover
@@ -86,9 +99,9 @@ def create(): # pragma: no cover
def delete(): # pragma: no cover
- CredentialBag.delete()
- Team.delete()
- Service.delete()
- Credential.delete()
- Flag.delete()
- CaptureNote.delete()
+ CredentialBag.delete().execute()
+ Team.delete().execute()
+ Service.delete().execute()
+ Credential.delete().execute()
+ Flag.delete().execute()
+ CaptureNote.delete().execute()
diff --git a/flag_slurper/autolib/protocols.py b/flag_slurper/autolib/protocols.py
index c70571e..ef67e80 100644
--- a/flag_slurper/autolib/protocols.py
+++ b/flag_slurper/autolib/protocols.py
@@ -6,7 +6,7 @@ import requests
import paramiko
from flag_slurper.autolib.exploit import get_file_contents, get_system_info
-from .exploit import find_flags, FlagConf
+from .exploit import find_flags, FlagConf, can_sudo
from .models import Service, CredentialBag, Credential, Flag, CaptureNote
logger = logging.getLogger(__name__)
@@ -37,17 +37,28 @@ def pwn_ssh(url: str, port: int, service: Service, flag_conf: FlagConf) -> Tuple
ssh.connect(url, port=port, username=credential.username, password=credential.password,
look_for_keys=False)
cred.state = Credential.WORKS
+
+ # Root doesn't need sudo
+ sudo = False
+ if credential.username != "root":
+ sudo = can_sudo(ssh, credential.password)
+ if sudo:
+ cred.sudo = True
+
cred.save()
working.add(cred)
sysinfo = get_system_info(ssh)
+ if sudo:
+ sysinfo += "\nUsed Sudo"
flag_obj, _ = Flag.get_or_create(team=service.team, name=flag_conf['name'])
if flag_conf:
location = flag_conf['name']
full_location = os.path.join(base_dir, location)
- flag = get_file_contents(ssh, full_location)
+ sudo_cred = credential.password if sudo else None
+ flag = get_file_contents(ssh, full_location, sudo=sudo_cred)
if flag:
enable_search = False
note, created = CaptureNote.get_or_create(flag=flag_obj, data=flag, location=full_location,
@@ -58,8 +69,10 @@ def pwn_ssh(url: str, port: int, service: Service, flag_conf: FlagConf) -> Tuple
for flag in flags:
CaptureNote.get_or_create(flag=flag_obj, data=flag[1], location=flag[0], notes=str(sysinfo),
searched=True, service=service)
- except Exception:
+ except paramiko.ssh_exception.AuthenticationException:
continue
+ except Exception:
+ logger.exception("There was an error pwning this service: %s", url)
if working:
return "Found credentials: {}".format(working), True, False
diff --git a/flag_slurper/autopwn.py b/flag_slurper/autopwn.py
index 42b1426..b0a625d 100644
--- a/flag_slurper/autopwn.py
+++ b/flag_slurper/autopwn.py
@@ -4,6 +4,7 @@ from multiprocessing import Pool
import click
+from flag_slurper.autolib.models import SUDO_FLAG
from . import utils, autolib
from .autolib import models
from .config import Config
@@ -128,7 +129,8 @@ def results():
p.connect_database()
- utils.report_status("Found the following credentials")
+ utils.report_status("Found the following flags")
+ utils.report_status("Key: {} Used Sudo".format(SUDO_FLAG))
flags = models.Flag.select()
if len(flags) == 0:
utils.report_warning('No Flags Found')
@@ -141,12 +143,13 @@ def results():
"{}/{}: {} -> {}".format(flag.team.number, note.service.service_name, note.location, note.data))
elif len(notes) > 1:
data = "\n\t".join(map(str, notes))
- utils.report_success("{}/{}:\n{}".format(flag.team.number, notes[0].service.service_name, data))
+ utils.report_success("{}/{}:\n\t{}".format(flag.team.number, notes[0].service.service_name, data))
else:
continue
click.echo()
utils.report_status("Found the following credentials")
+ utils.report_status("Key: {} Sudo".format(SUDO_FLAG))
services = models.Service.select()
for service in services:
diff --git a/provision/host_vars/team3.yml b/provision/host_vars/team3.yml
index 8bf81c6..4d0b64a 100644
--- a/provision/host_vars/team3.yml
+++ b/provision/host_vars/team3.yml
@@ -1,4 +1,3 @@
# root:other
root_password: "$6$.d8jGpv1krw3G$s5nA6ydHLsL9L2Isj5nMqtb.Zo8oW4JtexzA2jEVna9uG6oZkEjWwoBfsHspD/4hrH6qBx//aSSrhbKSRp3ls/"
-
-
+nosudo: true
diff --git a/provision/main.yml b/provision/main.yml
index b11306a..cb23641 100644
--- a/provision/main.yml
+++ b/provision/main.yml
@@ -28,6 +28,21 @@
- plugdev
- netdev
+ - name: "Create non-sudo user: nosudo:cdc"
+ user:
+ name: nosudo
+ password: "$1$kejaef$s8Y0EuYOIiDSSiItk8zLv1"
+ shell: /bin/bash
+ groups:
+ - cdrom
+ - floppy
+ - audio
+ - dip
+ - video
+ - plugdev
+ - netdev
+ when: nosudo is defined
+
- name: Change root password
user: name=root password="{{ root_password }}"
@@ -45,7 +60,7 @@
line: 'PermitRootLogin yes'
notify: restart ssh
- - name: Create flag for {{ inventory_hostname }}
+ - name: Create flag for each host
copy:
dest: /root/{{ inventory_hostname }}_www_root.flag
content: "{{ lookup('password', 'flags/' + inventory_hostname + '_www_root.flag length=50') }}"
| Attempt sudo when looking for flags
When attempting to grab a flag (like `/root/...`) as a non-root user, we should attempt to do privilege escalation with sudo in addition to grabbing it as-is.
The exact method used to obtain the flag should be recorded for capture notes. | mgerst/flag-slurper | diff --git a/tests/autolib/test_models.py b/tests/autolib/test_models.py
index a3f1859..4302a52 100644
--- a/tests/autolib/test_models.py
+++ b/tests/autolib/test_models.py
@@ -1,6 +1,9 @@
+import click
import pytest
-from flag_slurper.autolib.models import CredentialBag, Credential
+from flag_slurper.autolib.models import CredentialBag, Credential, CaptureNote
+
+SUDO_FLAG = click.style('!', fg='red', bold=True)
@pytest.fixture
@@ -8,11 +11,21 @@ def bag():
return CredentialBag(username='root', password='cdc')
[email protected]
+def sudobag():
+ return CredentialBag(username='cdc', password='cdc')
+
+
@pytest.fixture
def credential(bag, service):
yield Credential(bag=bag, state=Credential.WORKS, service=service)
[email protected]
+def sudocred(sudobag, service):
+ yield Credential(bag=sudobag, state=Credential.WORKS, service=service, sudo=True)
+
+
def test_cred_bag__str__(bag):
assert bag.__str__() == "root:cdc"
@@ -27,3 +40,21 @@ def test_cred__str__(credential):
def test_cred__repr__(credential):
assert credential.__repr__() == "<Credential root:cdc>"
+
+
+def test_cred_sudo__str__(sudocred):
+ assert sudocred.__str__() == "cdc:cdc{}".format(SUDO_FLAG)
+
+
+def test_cred_sudo__repr__(sudocred):
+ assert sudocred.__repr__() == "<Credential cdc:cdc{}>".format(SUDO_FLAG)
+
+
+def test_capture_note__str__(flag, service):
+ note = CaptureNote(flag=flag, service=service, data='abcd', location='/root/test.flag', notes='did stuff')
+ assert note.__str__() == "/root/test.flag -> abcd"
+
+
+def test_capture_note_sudo__str__(flag, service):
+ note = CaptureNote(flag=flag, service=service, data='abcd', location='/root/test.flag', notes='did stuff\nUsed Sudo')
+ assert note.__str__() == "/root/test.flag -> abcd{}".format(SUDO_FLAG)
diff --git a/tests/conftest.py b/tests/conftest.py
index 02bafe2..ddc3a8f 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -65,3 +65,8 @@ def invalid_service(team):
yield models.Service.create(remote_id=2, service_id=2, service_name='WWW Custom', service_port=10391,
service_url='www.team1.isucdc.com', admin_status=None, high_target=0, low_target=0,
is_rand=False, team=team)
+
+
[email protected]
+def flag(team):
+ yield models.Flag.create(id=1, team=team, name='Test Team')
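The sudo-fallback flow requested in the flag-slurper issue above can be sketched as follows. This is a hypothetical illustration, not flag-slurper's actual API: `grab_flag`, `run_command`, and `fake_shell` are invented names, and the real tool would execute these commands over SSH against the target service while recording the method used in its capture notes.

```python
def grab_flag(run_command, path):
    """Try to read a flag file as-is, then retry with sudo.

    Returns (contents, method) so the escalation method can be recorded
    for capture notes, as the issue requests. `run_command` is a stand-in
    for however commands are executed on the target (e.g. over SSH); it
    returns the command output, or None on failure.
    """
    contents = run_command("cat {}".format(path))
    if contents is not None:
        return contents, "direct"
    contents = run_command("sudo cat {}".format(path))
    if contents is not None:
        return contents, "sudo"
    return None, "failed"

# Toy executor simulating a non-root shell where only sudo succeeds:
def fake_shell(cmd):
    return "FLAG{abcd}" if cmd.startswith("sudo ") else None

assert grab_flag(fake_shell, "/root/test.flag") == ("FLAG{abcd}", "sudo")
```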
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 3,
"test_score": 2
},
"num_modified_files": 7
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[remote,parallel]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-sugar",
"pytest-mock",
"pytest-xdist",
"tox",
"vcrpy",
"responses",
"pretend"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
bcrypt==4.0.1
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
cryptography==40.0.2
distlib==0.3.9
execnet==1.9.0
filelock==3.4.1
-e git+https://github.com/mgerst/flag-slurper.git@fe4dca622d1a393844e2588056f9e8e535a9a5ad#egg=flag_slurper
idna==3.10
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
multidict==5.2.0
packaging==21.3
paramiko==3.5.1
peewee==3.17.9
platformdirs==2.4.0
pluggy==1.0.0
pretend==1.0.9
psycopg2-binary==2.9.5
py==1.11.0
pycparser==2.21
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-sugar==0.9.6
pytest-xdist==3.0.2
PyYAML==6.0.1
requests==2.27.1
responses==0.17.0
schema==0.7.7
six==1.17.0
termcolor==1.1.0
terminaltables==3.1.10
toml==0.10.2
tomli==1.2.3
tox==3.28.0
typing_extensions==4.1.1
urllib3==1.26.20
vcrpy==4.1.1
virtualenv==20.17.1
wrapt==1.16.0
yarl==1.7.2
zipp==3.6.0
| name: flag-slurper
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- bcrypt==4.0.1
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- cryptography==40.0.2
- distlib==0.3.9
- execnet==1.9.0
- filelock==3.4.1
- idna==3.10
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- multidict==5.2.0
- packaging==21.3
- paramiko==3.5.1
- peewee==3.17.9
- platformdirs==2.4.0
- pluggy==1.0.0
- pretend==1.0.9
- psycopg2-binary==2.9.5
- py==1.11.0
- pycparser==2.21
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-sugar==0.9.6
- pytest-xdist==3.0.2
- pyyaml==6.0.1
- requests==2.27.1
- responses==0.17.0
- schema==0.7.7
- six==1.17.0
- termcolor==1.1.0
- terminaltables==3.1.10
- toml==0.10.2
- tomli==1.2.3
- tox==3.28.0
- typing-extensions==4.1.1
- urllib3==1.26.20
- vcrpy==4.1.1
- virtualenv==20.17.1
- wrapt==1.16.0
- yarl==1.7.2
- zipp==3.6.0
prefix: /opt/conda/envs/flag-slurper
| [
"tests/autolib/test_models.py::test_cred_sudo__str__",
"tests/autolib/test_models.py::test_cred_sudo__repr__",
"tests/autolib/test_models.py::test_capture_note_sudo__str__"
]
| []
| [
"tests/autolib/test_models.py::test_cred_bag__str__",
"tests/autolib/test_models.py::test_cred_bag__repr__",
"tests/autolib/test_models.py::test_cred__str__",
"tests/autolib/test_models.py::test_cred__repr__",
"tests/autolib/test_models.py::test_capture_note__str__"
]
| []
| MIT License | 2,484 | [
"provision/host_vars/team3.yml",
"flag_slurper/autolib/models.py",
"provision/main.yml",
"README.md",
"flag_slurper/autolib/exploit.py",
"flag_slurper/autopwn.py",
"flag_slurper/autolib/protocols.py"
]
| [
"provision/host_vars/team3.yml",
"flag_slurper/autolib/models.py",
"provision/main.yml",
"README.md",
"flag_slurper/autolib/exploit.py",
"flag_slurper/autopwn.py",
"flag_slurper/autolib/protocols.py"
]
|
|
buildout__buildout-452 | 9161c43da9aa3c4d8394230b85e4a34762b933e1 | 2018-05-07 15:50:24 | cfd54b80966d2b4d7937e3753ba2733cb1c47339 | diff --git a/CHANGES.rst b/CHANGES.rst
index 835e061..af2d6ca 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -4,7 +4,8 @@ Change History
2.11.4 (unreleased)
===================
-- Nothing changed yet.
+- Fix `issue 451 <https://github.com/buildout/buildout/issues/451>`:
+ distributions with a two-digit version can be installed.
2.11.3 (2018-04-13)
diff --git a/src/zc/buildout/easy_install.py b/src/zc/buildout/easy_install.py
index ba7da0a..7568363 100644
--- a/src/zc/buildout/easy_install.py
+++ b/src/zc/buildout/easy_install.py
@@ -1655,13 +1655,16 @@ def _get_matching_dist_in_location(dist, location):
Check if `locations` contain only the one intended dist.
Return the dist with metadata in the new location.
"""
- # Getting the dist from the environment causes the
- # distribution meta data to be read. Cloning isn't
- # good enough.
+ # Getting the dist from the environment causes the distribution
+ # meta data to be read. Cloning isn't good enough. We must compare
+ # dist.parsed_version, not dist.version, because one or the other
+ # may be normalized (e.g., 3.3 becomes 3.3.0 when downloaded from
+ # PyPI.)
+
env = pkg_resources.Environment([location])
dists = [ d for project_name in env for d in env[project_name] ]
- dist_infos = [ (d.project_name.lower(), d.version) for d in dists ]
- if dist_infos == [(dist.project_name.lower(), dist.version)]:
+ dist_infos = [ (d.project_name.lower(), d.parsed_version) for d in dists ]
+ if dist_infos == [(dist.project_name.lower(), dist.parsed_version)]:
return dists.pop()
def _move_to_eggs_dir_and_compile(dist, dest):
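The comment added in the patch above turns on version normalization: `'3.3'` and `'3.3.0'` are different strings but the same release once parsed, which is why comparing `dist.version` rejected a matching egg while `dist.parsed_version` accepts it. A minimal stdlib-only sketch of that padding idea (the real `pkg_resources.parse_version` implements full PEP 440, including pre/post/dev segments):

```python
def parsed(version):
    """Toy stand-in for pkg_resources.parse_version: split into integer
    release parts and drop insignificant trailing zeros, so 3.3.0 == 3.3."""
    parts = [int(p) for p in version.split(".")]
    while parts and parts[-1] == 0:
        parts.pop()
    return tuple(parts)

# The bug: comparing version strings rejects the downloaded egg.
assert "3.3" != "3.3.0"
# The fix: comparing parsed versions accepts it.
assert parsed("3.3") == parsed("3.3.0") == (3, 3)
```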
| assert newdist is not None # newloc above is missing our dist?!
I am getting the error below while installing nltk. The egg is actually retrieved so the second time the recipe is run, it completes successfully.
FWIW doing easy_install-2.7 -U nltk works fine.
Carlos
I am running the latest setuptools
Getting distribution for 'nltk==3.3.0'.
warning: no files found matching 'README.txt'
warning: no files found matching 'Makefile' under directory '*.txt'
warning: no previously-included files matching '*~' found anywhere in distribution
While:
Installing eggs.
Getting distribution for 'nltk==3.3.0'.
An internal error occurred due to a bug in either zc.buildout or in a
recipe being used:
Traceback (most recent call last):
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/buildout.py", line 2127, in main
getattr(buildout, command)(args)
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/buildout.py", line 797, in install
installed_files = self[part]._call(recipe.install)
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/buildout.py", line 1557, in _call
return f()
File "/home/ntiuser/buildout/eggs/zc.recipe.egg-2.0.5-py2.7.egg/zc/recipe/egg/egg.py", line 221, in install
reqs, ws = self.working_set()
File "/home/ntiuser/buildout/eggs/zc.recipe.egg-2.0.5-py2.7.egg/zc/recipe/egg/egg.py", line 84, in working_set
allow_hosts=self.allow_hosts,
File "/home/ntiuser/buildout/eggs/zc.recipe.egg-2.0.5-py2.7.egg/zc/recipe/egg/egg.py", line 162, in _working_set
allow_hosts=allow_hosts)
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/easy_install.py", line 924, in install
return installer.install(specs, working_set)
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/easy_install.py", line 726, in install
for dist in self._get_dist(req, ws):
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/easy_install.py", line 570, in _get_dist
dists = [_move_to_eggs_dir_and_compile(dist, self._dest)]
File "/home/ntiuser/buildout/eggs/zc.buildout-2.11.3-py2.7.egg/zc/buildout/easy_install.py", line 1735, in _move_to_eggs_dir_and_compile
assert newdist is not None # newloc above is missing our dist?! | buildout/buildout | diff --git a/src/zc/buildout/tests.py b/src/zc/buildout/tests.py
index 004f39c..dc6bcb1 100644
--- a/src/zc/buildout/tests.py
+++ b/src/zc/buildout/tests.py
@@ -12,6 +12,9 @@
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
+from __future__ import print_function
+import unittest
+
from zc.buildout.buildout import print_
from zope.testing import renormalizing, setupstack
@@ -35,6 +38,116 @@ if os_path_sep == '\\':
os_path_sep *= 2
+class TestEasyInstall(unittest.TestCase):
+
+ # The contents of a zipped egg, created by setuptools:
+ # from setuptools import setup
+ # setup(
+ # name='TheProject',
+ # version='3.3',
+ # )
+ #
+ # (we can't run setuptools at runtime, it may not be installed)
+ EGG_DATA = (
+ b'PK\x03\x04\x14\x00\x00\x00\x08\x00q8\xa8Lg0\xb7ix\x00\x00\x00\xb6\x00'
+ b'\x00\x00\x11\x00\x00\x00EGG-INFO/PKG-INFO\xf3M-ILI,I\xd4\rK-*'
+ b'\xce\xcc\xcf\xb3R0\xd43\xe0\xf2K\xccM\xb5R\x08\xc9H\r(\xca\xcfJM'
+ b'.\xe1\x82\xcb\x1a\xeb\x19s\x05\x97\xe6\xe6&\x16UZ)\x84\xfay\xfb\xf9\x87\xfb'
+ b'qy\xe4\xe7\xa6\xea\x16$\xa6\xa7"\x84\x1cKK2\xf2\x8b\xd0\xf9\xba\xa9\xb9\x89'
+ b'\x999\x08Q\x9f\xcc\xe4\xd4\xbcb$m.\xa9\xc5\xc9E\x99\x05%`\xbb`\x82\x019\x89%'
+ b'i\xf9E\xb9\x08\x11\x00PK\x03\x04\x14\x00\x00\x00\x08\x00q8\xa8L61\xa1'
+ b'XL\x00\x00\x00\x87\x00\x00\x00\x14\x00\x00\x00EGG-INFO/SOURCES.txt\x0b\xc9H'
+ b'\r(\xca\xcfJM.\xd1KMO\xd7\xcd\xccK\xcb\xd7\x0f\xf0v\xd7\xf5\xf4s'
+ b'\xf3\xe7\n\xc1"\x19\xec\x1f\x1a\xe4\xec\x1a\xacWRQ\x82U>%\xb5 5/%5/\xb92>\'3'
+ b'/\xbb\x18\xa7\xc2\x92\xfc\x82\xf8\x9c\xd4\xb2\xd4\x1c\x90\n\x00PK\x03'
+ b'\x04\x14\x00\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00\x00\x01'
+ b'\x00\x00\x00\x1d\x00\x00\x00EGG-INFO/dependency_links.txt\xe3\x02\x00P'
+ b'K\x03\x04\x14\x00\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00'
+ b'\x00\x01\x00\x00\x00\x16\x00\x00\x00EGG-INFO/top_level.txt\xe3\x02\x00PK'
+ b'\x03\x04\x14\x00\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00\x00'
+ b'\x01\x00\x00\x00\x11\x00\x00\x00EGG-INFO/zip-safe\xe3\x02\x00PK\x01\x02'
+ b'\x14\x03\x14\x00\x00\x00\x08\x00q8\xa8Lg0\xb7ix\x00\x00\x00\xb6\x00\x00\x00'
+ b'\x11\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa4\x81\x00\x00\x00\x00EG'
+ b'G-INFO/PKG-INFOPK\x01\x02\x14\x03\x14\x00\x00\x00\x08\x00q8\xa8L61\xa1XL'
+ b'\x00\x00\x00\x87\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00'
+ b'\x00\x00\x00\xa4\x81\xa7\x00\x00\x00EGG-INFO/SOURCES.txtPK\x01'
+ b'\x02\x14\x03\x14\x00\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00'
+ b'\x00\x01\x00\x00\x00\x1d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
+ b'\x00\xa4\x81%\x01\x00\x00EGG-INFO/dependency_links.txtPK\x01\x02'
+ b'\x14\x03\x14\x00\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00\x00'
+ b'\x01\x00\x00\x00\x16\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
+ b'\xa4\x81c\x01\x00\x00EGG-INFO/top_level.txtPK\x01\x02\x14\x03\x14\x00'
+ b'\x00\x00\x08\x00q8\xa8L\x93\x06\xd72\x03\x00\x00\x00\x01\x00\x00\x00'
+ b'\x11\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa4\x81\x9a\x01\x00\x00EG'
+ b'G-INFO/zip-safePK\x05\x06\x00\x00\x00\x00\x05\x00\x05\x00O\x01\x00\x00\xcc'
+ b'\x01\x00\x00\x00\x00'
+ )
+
+ def setUp(self):
+ self.cwd = os.getcwd()
+ self.temp_dir = tempfile.mkdtemp('.buildouttest')
+ self.project_dir = os.path.join(self.temp_dir, 'TheProject')
+ self.project_dist_dir = os.path.join(self.temp_dir, 'dist')
+ os.mkdir(self.project_dist_dir)
+ self.egg_path = os.path.join(self.project_dist_dir, 'TheProject.egg')
+ os.mkdir(self.project_dir)
+ self.setup_path = os.path.join(self.project_dir, 'setup.py')
+ os.chdir(self.temp_dir)
+
+ def tearDown(self):
+ os.chdir(self.cwd)
+ shutil.rmtree(self.temp_dir)
+
+ def _make_egg(self):
+ with open(self.egg_path, 'wb') as f:
+ f.write(self.EGG_DATA)
+
+
+ def _get_distro_and_egg_path(self):
+ # Returns a distribution with a version of '3.3.0',
+ # but an egg with a version of '3.3'
+ self._make_egg()
+ from distutils.dist import Distribution
+ dist = Distribution()
+ dist.project_name = 'TheProject'
+ dist.version = '3.3.0'
+ dist.parsed_version = pkg_resources.parse_version(dist.version)
+
+ return dist, self.egg_path
+
+ def test_get_matching_dist_in_location_uses_parsed_version(self):
+ # https://github.com/buildout/buildout/pull/452
+ # An egg built with the version '3.3' should match a distribution
+ # looking for '3.3.0'
+ dist, location = self._get_distro_and_egg_path()
+
+ result = zc.buildout.easy_install._get_matching_dist_in_location(
+ dist,
+ self.project_dist_dir
+ )
+ self.assertIsNotNone(result)
+ self.assertEqual(result.version, '3.3')
+
+ def test_move_to_eggs_dir_and_compile(self):
+ # https://github.com/buildout/buildout/pull/452
+ # An egg built with the version '3.3' should match a distribution
+ # looking for '3.3.0'
+
+ dist, location = self._get_distro_and_egg_path()
+ dist.location = location
+
+ dest = os.path.join(self.temp_dir, 'NewLoc')
+
+ result = zc.buildout.easy_install._move_to_eggs_dir_and_compile(
+ dist,
+ dest
+ )
+
+ self.assertIsNotNone(result)
+ self.assertEqual(result.version, '3.3')
+ self.assertIn(dest, result.location)
+
+
def develop_w_non_setuptools_setup_scripts():
"""
We should be able to deal with setup scripts that aren't setuptools based.
@@ -3788,4 +3901,9 @@ def test_suite():
]),
))
+ test_suite.append(unittest.defaultTestLoader.loadTestsFromName(__name__))
+
return unittest.TestSuite(test_suite)
+
+if __name__ == '__main__':
+ unittest.main()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 2
} | 2.11 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"coverage",
"zc.buildout",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
bobo==2.3.0
certifi==2021.5.30
coverage==6.2
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
manuel==1.13.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
six==1.17.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
WebOb==1.8.9
-e git+https://github.com/buildout/buildout.git@9161c43da9aa3c4d8394230b85e4a34762b933e1#egg=zc.buildout
zc.recipe.deployment==1.3.0
zc.recipe.egg==2.0.6
zc.zdaemonrecipe==1.0.0
ZConfig==3.6.1
zdaemon==4.4
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
zope.testing==5.0.1
| name: buildout
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- bobo==2.3.0
- coverage==6.2
- manuel==1.13.0
- six==1.17.0
- webob==1.8.9
- zc-recipe-deployment==1.3.0
- zc-recipe-egg==2.0.6
- zc-zdaemonrecipe==1.0.0
- zconfig==3.6.1
- zdaemon==4.4
- zope-testing==5.0.1
prefix: /opt/conda/envs/buildout
| [
"src/zc/buildout/tests.py::TestEasyInstall::test_get_matching_dist_in_location_uses_parsed_version",
"src/zc/buildout/tests.py::TestEasyInstall::test_move_to_eggs_dir_and_compile"
]
| []
| [
"src/zc/buildout/tests.py::test_comparing_saved_options_with_funny_characters",
"src/zc/buildout/tests.py::test_help",
"src/zc/buildout/tests.py::test_version",
"src/zc/buildout/tests.py::test_bootstrap_with_extension",
"src/zc/buildout/tests.py::test_exit_codes",
"src/zc/buildout/tests.py::test_constrained_requirement",
"src/zc/buildout/tests.py::test_distutils_scripts_using_import_are_properly_parsed",
"src/zc/buildout/tests.py::test_distutils_scripts_using_from_are_properly_parsed",
"src/zc/buildout/tests.py::test_buildout_section_shorthand_for_command_line_assignments",
"src/zc/buildout/tests.py::test_abi_tag_eggs",
"src/zc/buildout/tests.py::test_buildout_doesnt_keep_adding_itself_to_versions",
"src/zc/buildout/tests.py::test_suite"
]
| []
| Zope Public License 2.1 | 2,486 | [
"src/zc/buildout/easy_install.py",
"CHANGES.rst"
]
| [
"src/zc/buildout/easy_install.py",
"CHANGES.rst"
]
|
|
missionpinball__mpf-1169 | 879c4891d320e089ef7bb9a7b8aba0873c287afa | 2018-05-07 18:51:58 | 2c1bb3aa1e25674916bc4e0d17ccb6c3c87bd01b | diff --git a/mpf/assets/show.py b/mpf/assets/show.py
index 341ccac7c..1ab70ade1 100644
--- a/mpf/assets/show.py
+++ b/mpf/assets/show.py
@@ -274,7 +274,7 @@ class Show(Asset):
events_when_looped=None, events_when_paused=None,
events_when_resumed=None, events_when_advanced=None,
events_when_stepped_back=None, events_when_updated=None,
- events_when_completed=None, start_time=None) -> "RunningShow":
+ events_when_completed=None, start_time=None, start_callback=None) -> "RunningShow":
"""Play a Show.
There are many parameters you can use here which
@@ -382,7 +382,8 @@ class Show(Asset):
events_when_advanced=events_when_advanced,
events_when_stepped_back=events_when_stepped_back,
events_when_updated=events_when_updated,
- events_when_completed=events_when_completed)
+ events_when_completed=events_when_completed,
+ start_callback=start_callback)
if not self.loaded:
self.load(callback=running_show.show_loaded, priority=priority)
@@ -418,7 +419,7 @@ class RunningShow(object):
events_when_paused, events_when_resumed,
events_when_advanced, events_when_stepped_back,
events_when_updated, events_when_completed,
- start_time):
+ start_time, start_callback):
"""Initialise an instance of a show."""
self.machine = machine
self.show = show
@@ -430,6 +431,7 @@ class RunningShow(object):
self.start_step = start_step
self.sync_ms = sync_ms
self.show_tokens = show_tokens
+ self.start_callback = start_callback
self.events = dict(play=events_when_played,
stop=events_when_stopped,
@@ -500,12 +502,14 @@ class RunningShow(object):
# Figure out the show start time
if self.sync_ms:
- delay_secs = (self.sync_ms / 1000.0) - (self.next_step_time % (self.sync_ms / 1000.0))
- self.next_step_time += delay_secs
+ # calculate next step based on synchronized start time
+ self.next_step_time += (self.sync_ms / 1000.0) - (self.next_step_time % (self.sync_ms / 1000.0))
+ # but wait relative to real time
+ delay_secs = self.next_step_time - self.machine.clock.get_time()
self._delay_handler = self.machine.clock.schedule_once(
- partial(self._run_next_step, post_events='play'), delay_secs)
+ self._start_now, delay_secs)
else: # run now
- self._run_next_step(post_events='play')
+ self._start_now()
def _post_events(self, action):
if self.events[action]: # Should make sure this is a list? todo
@@ -575,8 +579,10 @@ class RunningShow(object):
self._stopped = True
- if not self._show_loaded:
- return
+ # if the start callback has never been called then call it now
+ if self.start_callback:
+ self.start_callback()
+ self.start_callback = None
self._remove_delay_handler()
@@ -642,6 +648,13 @@ class RunningShow(object):
if self._show_loaded:
self._run_next_step(post_events='step_back')
+ def _start_now(self) -> None:
+ """Start playing the show."""
+ if self.start_callback:
+ self.start_callback()
+ self.start_callback = None
+ self._run_next_step(post_events='play')
+
def _run_next_step(self, post_events=None) -> None:
"""Run the next show step."""
if post_events:
diff --git a/mpf/config_players/show_player.py b/mpf/config_players/show_player.py
index 367d87592..09c1bd4af 100644
--- a/mpf/config_players/show_player.py
+++ b/mpf/config_players/show_player.py
@@ -63,6 +63,7 @@ class ShowPlayer(DeviceConfigPlayer):
callback = queue.clear
start_step = show_settings['start_step'].evaluate(placeholder_args)
+ start_callback = None
if key in instance_dict and not instance_dict[key].stopped:
# this is an optimization for the case where we only advance a show or do not change it at all
@@ -94,7 +95,12 @@ class ShowPlayer(DeviceConfigPlayer):
instance_dict[key].advance()
return
# in all other cases stop the current show
- instance_dict[key].stop()
+ if show_settings["sync_ms"]:
+ # stop current show in sync with new show
+ start_callback = instance_dict[key].stop
+ else:
+ # stop the current show instantly
+ instance_dict[key].stop()
try:
show_obj = self.machine.shows[show]
except KeyError:
@@ -120,7 +126,8 @@ class ShowPlayer(DeviceConfigPlayer):
events_when_stepped_back=show_settings[
'events_when_stepped_back'],
events_when_updated=show_settings['events_when_updated'],
- events_when_completed=show_settings['events_when_completed']
+ events_when_completed=show_settings['events_when_completed'],
+ start_callback=start_callback
)
@staticmethod
| Stop shows with sync_ms
When playing a show with sync_ms, any previous show with the same key will be stopped instantly, but the new show will play in sync. Delay the stop until the new show starts. | missionpinball/mpf
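The fix above aligns both the old show's stop and the new show's first step to the same sync_ms boundary. The boundary arithmetic used in `RunningShow._start_show` can be sketched as:

```python
def next_sync_time(now, sync_ms):
    """Return the next sync_ms boundary (in seconds) strictly after `now`.

    Mirrors the expression in the patch:
    next_step_time += (sync_ms / 1000.0) - (next_step_time % (sync_ms / 1000.0))
    """
    interval = sync_ms / 1000.0
    return now + (interval - now % interval)

# With sync_ms=250, a show started at t=10.1 s waits until the 10.25 s
# boundary -- which is now also when the previous show gets stopped.
assert abs(next_sync_time(10.1, 250) - 10.25) < 1e-6
```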
new file mode 100644
index 000000000..af4367d37
--- /dev/null
+++ b/mpf/tests/machine_files/shows/config/test_sync_ms.yaml
@@ -0,0 +1,26 @@
+#config_version=5
+lights:
+ light:
+ number: 1
+
+shows:
+ my_show1:
+ - duration: -1
+ lights:
+ light: red
+ my_show2:
+ - duration: -1
+ lights:
+ light: blue
+
+show_player:
+ play_show_sync_ms1:
+ my_show1:
+ key: sync_show
+ sync_ms: 250
+ play_show_sync_ms2:
+ my_show2:
+ key: sync_show
+ sync_ms: 250
+ stop_show:
+ sync_show: stop
diff --git a/mpf/tests/test_Shows.py b/mpf/tests/test_Shows.py
index 869aa1bc0..2a0f73460 100644
--- a/mpf/tests/test_Shows.py
+++ b/mpf/tests/test_Shows.py
@@ -9,7 +9,10 @@ from mpf.tests.MpfTestCase import MpfTestCase
class TestShows(MpfTestCase):
def getConfigFile(self):
- return 'test_shows.yaml'
+ if self._testMethodName == "test_sync_ms":
+ return "test_sync_ms.yaml"
+ else:
+ return 'test_shows.yaml'
def getMachinePath(self):
return 'tests/machine_files/shows/'
@@ -589,6 +592,59 @@ class TestShows(MpfTestCase):
self.advance_time_and_run(10)
self.assertEqual(1, self.machine.show_player.instances['_global']['show_player']['test_show1'].next_step_index)
+ def advance_to_sync_ms(self, ms):
+ current_time = self.clock.get_time()
+ ms = float(ms)
+ next_full_second = current_time + ((ms - (current_time * 1000) % ms) / 1000.0)
+ self.advance_time_and_run(next_full_second - current_time)
+ delta = (self.clock.get_time() * 1000) % ms
+ self.assertTrue(delta == ms or delta < 1, "Delta {} too large".format(delta))
+
+ def test_sync_ms(self):
+ self.advance_to_sync_ms(250)
+ # shortly after sync point. start initial show
+ self.advance_time_and_run(.01)
+ self.post_event("play_show_sync_ms1")
+ self.assertLightColor("light", "off")
+ # wait for first sync point
+ self.advance_to_sync_ms(250)
+ self.advance_time_and_run(.01)
+ self.assertLightColor("light", "red")
+ # shortly after sync point again
+ self.post_event("play_show_sync_ms2")
+ # old show still playing
+ self.advance_time_and_run(.1)
+ self.assertLightColor("light", "red")
+ # until next sync point is reached
+ self.advance_time_and_run(.15)
+ self.assertLightColor("light", "blue")
+ self.post_event("stop_show")
+ self.assertLightColor("light", "off")
+
+ # play show and start second show before first was synced
+ self.advance_to_sync_ms(250)
+ self.advance_time_and_run(.01)
+ self.post_event("play_show_sync_ms1")
+ self.advance_time_and_run(.01)
+ self.post_event("play_show_sync_ms2")
+ self.assertLightColor("light", "off")
+ self.advance_to_sync_ms(250)
+ self.assertLightColor("light", "blue")
+ self.post_event("stop_show")
+ self.assertLightColor("light", "off")
+
+ # play show and start second show before first was synced. stop before second is synced
+ self.advance_to_sync_ms(250)
+ self.advance_time_and_run(.01)
+ self.post_event("play_show_sync_ms1")
+ self.advance_time_and_run(.01)
+ self.post_event("play_show_sync_ms2")
+ self.assertLightColor("light", "off")
+ self.post_event("stop_show")
+ self.assertLightColor("light", "off")
+ self.advance_time_and_run(.5)
+ self.assertLightColor("light", "off")
+
def test_pause_resume_shows(self):
self.machine.events.post('play_test_show1')
# make sure show is advancing
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 0.33 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asciimatics==1.14.0
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
future==1.0.0
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
-e git+https://github.com/missionpinball/mpf.git@879c4891d320e089ef7bb9a7b8aba0873c287afa#egg=mpf
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==8.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyfiglet==0.8.post1
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pyserial==3.5
pyserial-asyncio==0.6
pytest==6.2.4
ruamel.base==1.0.0
ruamel.yaml==0.10.23
terminaltables==3.1.10
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing==3.7.4.3
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
wcwidth==0.2.13
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: mpf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- asciimatics==1.14.0
- future==1.0.0
- pillow==8.4.0
- psutil==7.0.0
- pyfiglet==0.8.post1
- pyserial==3.5
- pyserial-asyncio==0.6
- ruamel-base==1.0.0
- ruamel-yaml==0.10.23
- terminaltables==3.1.10
- typing==3.7.4.3
- wcwidth==0.2.13
prefix: /opt/conda/envs/mpf
| [
"mpf/tests/test_Shows.py::TestShows::test_sync_ms"
]
| []
| [
"mpf/tests/test_Shows.py::TestShows::test_default_shows",
"mpf/tests/test_Shows.py::TestShows::test_duration_in_shows",
"mpf/tests/test_Shows.py::TestShows::test_get_show_copy",
"mpf/tests/test_Shows.py::TestShows::test_keys_in_show_player",
"mpf/tests/test_Shows.py::TestShows::test_manual_advance",
"mpf/tests/test_Shows.py::TestShows::test_nested_shows",
"mpf/tests/test_Shows.py::TestShows::test_nested_shows_stop_before_load",
"mpf/tests/test_Shows.py::TestShows::test_pause_resume_shows",
"mpf/tests/test_Shows.py::TestShows::test_queue_event",
"mpf/tests/test_Shows.py::TestShows::test_show_from_mode_config",
"mpf/tests/test_Shows.py::TestShows::test_show_player",
"mpf/tests/test_Shows.py::TestShows::test_show_player_completed_events",
"mpf/tests/test_Shows.py::TestShows::test_show_player_emitted_events",
"mpf/tests/test_Shows.py::TestShows::test_shows",
"mpf/tests/test_Shows.py::TestShows::test_token_in_keys",
"mpf/tests/test_Shows.py::TestShows::test_tokens_in_shows",
"mpf/tests/test_Shows.py::TestShows::test_too_few_tokens",
"mpf/tests/test_Shows.py::TestShows::test_too_many_tokens"
]
| []
| MIT License | 2,487 | [
"mpf/assets/show.py",
"mpf/config_players/show_player.py"
]
| [
"mpf/assets/show.py",
"mpf/config_players/show_player.py"
]
|
|
Duke-GCB__datadelivery-cli-3 | cc1bf019c81cc7d6e488a3e79adf0a1d1a3c34d2 | 2018-05-08 16:51:20 | cc1bf019c81cc7d6e488a3e79adf0a1d1a3c34d2 | diff --git a/.circleci/config.yml b/.circleci/config.yml
new file mode 100644
index 0000000..3f54311
--- /dev/null
+++ b/.circleci/config.yml
@@ -0,0 +1,42 @@
+version: 2
+jobs:
+ build:
+ docker:
+ - image: circleci/python:2.7
+
+ working_directory: ~/repo
+
+ steps:
+ - checkout
+
+ # Download and cache dependencies
+ - restore_cache:
+ keys:
+ - v1-dependencies-{{ checksum "setup.py" }}-{{ checksum "devRequirements.txt" }}
+ # fallback to using the latest cache if no exact match is found
+ - v1-dependencies-
+
+ - run:
+ name: install dependencies
+ command: |
+ pip install virtualenv
+ virtualenv venv
+ . venv/bin/activate
+ python setup.py install
+ pip install -r devRequirements.txt
+
+ - save_cache:
+ paths:
+ - ./venv
+ key: v1-dependencies-{{ checksum "setup.py" }}-{{ checksum "devRequirements.txt" }}
+
+ - run:
+ name: run tests
+ command: |
+ . venv/bin/activate
+ python setup.py test
+
+ - store_artifacts:
+ path: test-reports
+ destination: test-reports
+
diff --git a/datadelivery/__main__.py b/datadelivery/__main__.py
index ca7ccaa..4235b04 100644
--- a/datadelivery/__main__.py
+++ b/datadelivery/__main__.py
@@ -1,7 +1,10 @@
#!/usr/bin/env python
+from __future__ import absolute_import
+import sys
from datadelivery.commands import Commands, APP_NAME
from datadelivery.argparser import ArgParser
from datadelivery.config import ConfigSetupAbandoned
+from datadelivery.s3 import S3Exception
import pkg_resources
@@ -12,6 +15,9 @@ def main():
arg_parser.parse_and_run_commands()
except ConfigSetupAbandoned:
pass
+ except S3Exception as e:
+ print("Error: {}".format(e))
+ sys.exit(1)
if __name__ == '__main__':
diff --git a/datadelivery/commands.py b/datadelivery/commands.py
index 0bf4654..902bfaa 100644
--- a/datadelivery/commands.py
+++ b/datadelivery/commands.py
@@ -1,4 +1,4 @@
-from __future__ import print_function
+from __future__ import print_function, absolute_import
from datadelivery.config import ConfigFile
from datadelivery.s3 import S3, NotFoundException
diff --git a/datadelivery/config.py b/datadelivery/config.py
index 7d838b1..1e33583 100644
--- a/datadelivery/config.py
+++ b/datadelivery/config.py
@@ -1,4 +1,4 @@
-from __future__ import print_function
+from __future__ import print_function, absolute_import
import os
import yaml
from six.moves import input
@@ -19,13 +19,14 @@ class ConfigFile(object):
self.filename = os.path.expanduser(filename)
def read_or_create_config(self):
+ config = Config({})
if os.path.exists(self.filename):
- return self.read_config()
- else:
- token = self._prompt_user_for_token()
- print("Writing new config file at {}".format(self.filename))
- self.write_new_config(token)
- return self.read_config()
+ config = self.read_config()
+ if not config.token:
+ config.token = self._prompt_user_for_token()
+ print("Writing config file at {}".format(self.filename))
+ self.write_config(config)
+ return config
def _prompt_user_for_token(self):
token = self.prompt_user(ENTER_DATA_DELIVERY_TOKEN_PROMPT)
@@ -47,18 +48,38 @@ class ConfigFile(object):
with open(self.filename, 'r') as stream:
return Config(yaml.safe_load(stream))
- def write_new_config(self, token):
+ def write_config(self, config):
with open(self.filename, 'w+') as stream:
- yaml.safe_dump({
- 'token': token
- }, stream)
+ yaml.safe_dump(config.to_dict(), stream)
class Config(object):
def __init__(self, data):
- self.token = data['token']
- self.url = data.get('url', DEFAULT_DATA_DELIVERY_URL)
- self.endpoint_name = data.get('endpoint_name', DEFAULT_ENDPOINT_NAME)
+ self.token = data.get('token')
+ self._url = data.get('url')
+ self._endpoint_name = data.get('endpoint_name')
+
+ @property
+ def url(self):
+ if not self._url:
+ return DEFAULT_DATA_DELIVERY_URL
+ return self._url
+
+ @property
+ def endpoint_name(self):
+ if not self._endpoint_name:
+ return DEFAULT_ENDPOINT_NAME
+ return self._endpoint_name
+
+ def to_dict(self):
+ data = {}
+ if self.token:
+ data['token'] = self.token
+ if self._url:
+ data['url'] = self._url
+ if self._endpoint_name:
+ data['endpoint_name'] = self._endpoint_name
+ return data
class ConfigSetupAbandoned(Exception):
diff --git a/datadelivery/s3.py b/datadelivery/s3.py
index d6822ed..50ca962 100644
--- a/datadelivery/s3.py
+++ b/datadelivery/s3.py
@@ -24,7 +24,10 @@ class S3(object):
def _get_request(self, url_suffix):
url = self._build_url(url_suffix)
headers = self._build_headers()
- response = requests.get(url, headers=headers)
+ try:
+ response = requests.get(url, headers=headers)
+ except requests.exceptions.ConnectionError as ex:
+ raise S3Exception("Failed to connect to {}\n{}".format(self.config.url, ex))
self._check_response(response)
return response.json()
@@ -40,7 +43,18 @@ class S3(object):
try:
response.raise_for_status()
except requests.HTTPError:
- raise S3Exception(response.text)
+ raise S3Exception(S3.make_message_for_http_error(response))
+
+ @staticmethod
+ def make_message_for_http_error(response):
+ message = response.text
+ try:
+ data = response.json()
+ if 'detail' in data:
+ message = data['detail']
+ except ValueError:
+ pass # response was not JSON
+ return message
def _get_current_endpoint(self):
"""
@@ -50,7 +64,7 @@ class S3(object):
url_suffix = 's3-endpoints/?name={}'.format(self.config.endpoint_name)
for endpoint_response in self._get_request(url_suffix):
return S3Endpoint(endpoint_response)
- raise NotFoundException("No endpoint found for s3 url: {}".format(self.config.s3_url))
+ raise NotFoundException("No endpoint found for s3 url: {}".format(self.config.url))
def get_current_user(self):
"""
| Improve error messages with bad or missing token
When an invalid token is in `.datadelivery.yml`:
```
$ datadelivery deliver -b handover-demo-no-symlinks --email [email protected]
Traceback (most recent call last):
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/s3.py", line 41, in _check_response
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/requests-2.18.4-py3.6.egg/requests/models.py", line 935, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://127.0.0.1:8000/api/v2/s3-endpoints/?name=default
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dcl9/Code/python/datadelivery-cli/venv/bin/datadelivery", line 11, in <module>
load_entry_point('datadelivery==0.0.1', 'console_scripts', 'datadelivery')()
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/__main__.py", line 12, in main
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/argparser.py", line 26, in parse_and_run_commands
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/argparser.py", line 73, in _run_deliver
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/commands.py", line 24, in deliver
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/commands.py", line 14, in _create_s3
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/s3.py", line 11, in __init__
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/s3.py", line 51, in _get_current_endpoint
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/s3.py", line 28, in _get_request
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/s3.py", line 43, in _check_response
datadelivery.s3.S3Exception: {"detail":"Invalid token."}
```
When no token is in `.datadelivery.yml`:
```
$ datadelivery deliver -b handover-demo-no-symlinks --email [email protected]
Traceback (most recent call last):
File "/Users/dcl9/Code/python/datadelivery-cli/venv/bin/datadelivery", line 11, in <module>
load_entry_point('datadelivery==0.0.1', 'console_scripts', 'datadelivery')()
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/__main__.py", line 12, in main
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/argparser.py", line 26, in parse_and_run_commands
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/argparser.py", line 73, in _run_deliver
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/commands.py", line 24, in deliver
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/commands.py", line 13, in _create_s3
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/config.py", line 23, in read_or_create_config
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/config.py", line 48, in read_config
File "/Users/dcl9/Code/python/datadelivery-cli/venv/lib/python3.6/site-packages/datadelivery-0.0.1-py3.6.egg/datadelivery/config.py", line 59, in __init__
KeyError: 'token'
``` | Duke-GCB/datadelivery-cli | diff --git a/datadelivery/test_argparser.py b/datadelivery/test_argparser.py
index 658e748..054f76a 100644
--- a/datadelivery/test_argparser.py
+++ b/datadelivery/test_argparser.py
@@ -1,3 +1,4 @@
+from __future__ import absolute_import
from unittest import TestCase
from mock import MagicMock, patch, call
from datadelivery.argparser import ArgParser
diff --git a/datadelivery/test_commands.py b/datadelivery/test_commands.py
index da4612e..10c5d81 100644
--- a/datadelivery/test_commands.py
+++ b/datadelivery/test_commands.py
@@ -1,3 +1,4 @@
+from __future__ import absolute_import
from unittest import TestCase
from mock import MagicMock, patch, call
from datadelivery.commands import Commands
diff --git a/datadelivery/test_config.py b/datadelivery/test_config.py
index c333e05..0275908 100644
--- a/datadelivery/test_config.py
+++ b/datadelivery/test_config.py
@@ -1,3 +1,4 @@
+from __future__ import absolute_import
from unittest import TestCase
from mock import MagicMock, patch, call, mock_open
from datadelivery.config import ConfigFile, Config, ConfigSetupAbandoned, \
@@ -14,10 +15,10 @@ class ConfigFileTestCase(TestCase):
mock_config.assert_called_with("data")
@patch('datadelivery.config.Config')
- def test_write_new_config(self, mock_config):
+ def test_write_config(self, mock_config):
config_file = ConfigFile()
with patch("__builtin__.open", mock_open()) as mock_file:
- config_file.write_new_config(token='secret')
+ config_file.write_config(Config({'token': 'secret'}))
write_call_args_list = mock_file.return_value.write.call_args_list
written = ''.join([write_call_args[0][0] for write_call_args in write_call_args_list])
self.assertEqual(written.strip(), '{token: secret}')
@@ -45,10 +46,37 @@ class ConfigFileTestCase(TestCase):
config_file.read_config = mock_read_config
config_file.write_new_config = mock_write_new_config
- config = config_file.read_or_create_config()
+ with patch("__builtin__.open", mock_open()) as mock_file:
+ config = config_file.read_or_create_config()
mock_prompt_user.assert_called_with(ENTER_DATA_DELIVERY_TOKEN_PROMPT)
- mock_write_new_config.assert_called_with('secretToken')
- self.assertEqual(config, mock_read_config.return_value)
+ written_text = ''.join([call_args[0][0] for call_args in mock_file.return_value.write.call_args_list])
+ self.assertEqual(written_text.strip(), '{token: secretToken}')
+ self.assertEqual(config.token, 'secretToken')
+ self.assertEqual(config.url, DEFAULT_DATA_DELIVERY_URL)
+ self.assertEqual(config.endpoint_name, DEFAULT_ENDPOINT_NAME)
+
+ @patch('datadelivery.config.os')
+ def test_read_or_create_config_no_token_in_file(self, mock_os):
+ mock_os.path.exists.return_value = True
+ mock_prompt_user = MagicMock()
+ mock_prompt_user.return_value = 'secretToken'
+ mock_read_config = MagicMock()
+ mock_read_config.return_value = Config({})
+ mock_write_new_config = MagicMock()
+
+ config_file = ConfigFile()
+ config_file.prompt_user = mock_prompt_user
+ config_file.read_config = mock_read_config
+ config_file.write_new_config = mock_write_new_config
+
+ with patch("__builtin__.open", mock_open()) as mock_file:
+ config = config_file.read_or_create_config()
+ mock_prompt_user.assert_called_with(ENTER_DATA_DELIVERY_TOKEN_PROMPT)
+ written_text = ''.join([call_args[0][0] for call_args in mock_file.return_value.write.call_args_list])
+ self.assertEqual(written_text.strip(), '{token: secretToken}')
+ self.assertEqual(config.token, 'secretToken')
+ self.assertEqual(config.url, DEFAULT_DATA_DELIVERY_URL)
+ self.assertEqual(config.endpoint_name, DEFAULT_ENDPOINT_NAME)
@patch('datadelivery.config.os')
def test_read_or_create_config_when_user_doesnt_enter_token(self, mock_os):
diff --git a/datadelivery/test_s3.py b/datadelivery/test_s3.py
index d9d8792..d6fdc30 100644
--- a/datadelivery/test_s3.py
+++ b/datadelivery/test_s3.py
@@ -1,3 +1,4 @@
+from __future__ import absolute_import
from unittest import TestCase
from mock import MagicMock, patch, call
from datadelivery.s3 import S3, NotFoundException
@@ -285,3 +286,27 @@ class S3TestCase(TestCase):
mock_requests.post.assert_has_calls([
call('someurl/s3-deliveries/888/send/?force=true', headers=self.expected_headers, json={}),
])
+
+ @patch('datadelivery.s3.requests')
+ def test_make_message_for_http_error(self, mock_requests):
+ response = MagicMock()
+ response.text = None
+ response.json.return_value = {'detail': 'Invalid bucket name'}
+ msg = S3.make_message_for_http_error(response)
+ self.assertEqual('Invalid bucket name', msg)
+
+ @patch('datadelivery.s3.requests')
+ def test_make_message_for_http_error_no_detail(self, mock_requests):
+ response = MagicMock()
+ response.text = '{ unexpected: json }'
+ response.json.return_value = {}
+ msg = S3.make_message_for_http_error(response)
+ self.assertEqual('{ unexpected: json }', msg)
+
+ @patch('datadelivery.s3.requests')
+ def test_make_message_for_http_error_no_json(self, mock_requests):
+ response = MagicMock()
+ response.text = 'Invalid bucket name'
+ response.json.side_effect = ValueError("No JSON object could be decoded")
+ msg = S3.make_message_for_http_error(response)
+ self.assertEqual('Invalid bucket name', msg)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 4
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
charset-normalizer==3.4.1
-e git+https://github.com/Duke-GCB/datadelivery-cli.git@cc1bf019c81cc7d6e488a3e79adf0a1d1a3c34d2#egg=datadelivery
exceptiongroup==1.2.2
idna==3.10
iniconfig==2.1.0
mock==5.2.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
PyYAML==6.0.2
requests==2.32.3
six==1.17.0
tomli==2.2.1
urllib3==2.3.0
| name: datadelivery-cli
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- charset-normalizer==3.4.1
- exceptiongroup==1.2.2
- idna==3.10
- iniconfig==2.1.0
- mock==5.2.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pyyaml==6.0.2
- requests==2.32.3
- six==1.17.0
- tomli==2.2.1
- urllib3==2.3.0
prefix: /opt/conda/envs/datadelivery-cli
| [
"datadelivery/test_s3.py::S3TestCase::test_make_message_for_http_error",
"datadelivery/test_s3.py::S3TestCase::test_make_message_for_http_error_no_detail",
"datadelivery/test_s3.py::S3TestCase::test_make_message_for_http_error_no_json"
]
| [
"datadelivery/test_config.py::ConfigFileTestCase::test_read_config",
"datadelivery/test_config.py::ConfigFileTestCase::test_read_or_create_config_no_token_in_file",
"datadelivery/test_config.py::ConfigFileTestCase::test_read_or_create_config_when_user_enters_token",
"datadelivery/test_config.py::ConfigFileTestCase::test_write_config"
]
| [
"datadelivery/test_argparser.py::ArgParserTestCase::test_parser_description_contains_version",
"datadelivery/test_argparser.py::ArgParserTestCase::test_simple_deliver_command",
"datadelivery/test_argparser.py::ArgParserTestCase::test_simple_deliver_command_resend",
"datadelivery/test_argparser.py::ArgParserTestCase::test_simple_deliver_ddsclient_project_flag",
"datadelivery/test_argparser.py::ArgParserTestCase::test_simple_deliver_with_user_message",
"datadelivery/test_commands.py::CommandsTestCase::test_deliver_bucket",
"datadelivery/test_commands.py::CommandsTestCase::test_deliver_bucket_create_bucket",
"datadelivery/test_commands.py::CommandsTestCase::test_deliver_bucket_resend",
"datadelivery/test_config.py::ConfigFileTestCase::test_read_or_create_config_when_file_exists",
"datadelivery/test_config.py::ConfigFileTestCase::test_read_or_create_config_when_user_doesnt_enter_token",
"datadelivery/test_config.py::ConfigTestCase::test_constructor",
"datadelivery/test_config.py::ConfigTestCase::test_constructor_defaults",
"datadelivery/test_s3.py::S3TestCase::test_constructor_finds_current_endpoint_and_user",
"datadelivery/test_s3.py::S3TestCase::test_create_bucket",
"datadelivery/test_s3.py::S3TestCase::test_create_delivery_and_send",
"datadelivery/test_s3.py::S3TestCase::test_get_bucket_by_name",
"datadelivery/test_s3.py::S3TestCase::test_get_bucket_by_name_not_found",
"datadelivery/test_s3.py::S3TestCase::test_get_s3user_by_email",
"datadelivery/test_s3.py::S3TestCase::test_get_user_by_email_not_found",
"datadelivery/test_s3.py::S3TestCase::test_send_delivery",
"datadelivery/test_s3.py::S3TestCase::test_send_delivery_resend"
]
| []
| MIT License | 2,488 | [
"datadelivery/s3.py",
"datadelivery/__main__.py",
".circleci/config.yml",
"datadelivery/config.py",
"datadelivery/commands.py"
]
| [
"datadelivery/s3.py",
"datadelivery/__main__.py",
".circleci/config.yml",
"datadelivery/config.py",
"datadelivery/commands.py"
]
|
|
jupyter__nbgrader-959 | 1c823a4410ef3abcdd1a9f50aab5a546c994e4e8 | 2018-05-08 21:39:10 | 5bc6f37c39c8b10b8f60440b2e6d9487e63ef3f1 | diff --git a/nbgrader/apps/assignapp.py b/nbgrader/apps/assignapp.py
index 85fcf0e8..ebac9e3b 100644
--- a/nbgrader/apps/assignapp.py
+++ b/nbgrader/apps/assignapp.py
@@ -40,6 +40,10 @@ flags.update({
{'BaseConverter': {'force': True}},
"Overwrite an assignment/submission if it already exists."
),
+ 'f': (
+ {'BaseConverter': {'force': True}},
+ "Overwrite an assignment/submission if it already exists."
+ ),
})
diff --git a/nbgrader/apps/autogradeapp.py b/nbgrader/apps/autogradeapp.py
index 64ef3320..187df53b 100644
--- a/nbgrader/apps/autogradeapp.py
+++ b/nbgrader/apps/autogradeapp.py
@@ -30,6 +30,10 @@ flags.update({
{'BaseConverter': {'force': True}},
"Overwrite an assignment/submission if it already exists."
),
+ 'f': (
+ {'BaseConverter': {'force': True}},
+ "Overwrite an assignment/submission if it already exists."
+ ),
})
diff --git a/nbgrader/apps/dbapp.py b/nbgrader/apps/dbapp.py
index fa0e2c50..0ac5e83c 100644
--- a/nbgrader/apps/dbapp.py
+++ b/nbgrader/apps/dbapp.py
@@ -78,6 +78,10 @@ student_remove_flags.update({
{'DbStudentRemoveApp': {'force': True}},
"Complete the operation, even if it means grades will be deleted."
),
+ 'f': (
+ {'DbStudentRemoveApp': {'force': True}},
+ "Complete the operation, even if it means grades will be deleted."
+ ),
})
class DbStudentRemoveApp(NbGrader):
@@ -314,6 +318,10 @@ assignment_remove_flags.update({
{'DbAssignmentRemoveApp': {'force': True}},
"Complete the operation, even if it means grades will be deleted."
),
+ 'f': (
+ {'DbAssignmentRemoveApp': {'force': True}},
+ "Complete the operation, even if it means grades will be deleted."
+ ),
})
class DbAssignmentRemoveApp(NbGrader):
diff --git a/nbgrader/apps/feedbackapp.py b/nbgrader/apps/feedbackapp.py
index f4bde288..b25a9578 100644
--- a/nbgrader/apps/feedbackapp.py
+++ b/nbgrader/apps/feedbackapp.py
@@ -19,6 +19,10 @@ flags.update({
{'BaseConverter': {'force': True}},
"Overwrite an assignment/submission if it already exists."
),
+ 'f': (
+ {'BaseConverter': {'force': True}},
+ "Overwrite an assignment/submission if it already exists."
+ ),
})
class FeedbackApp(NbGrader):
diff --git a/nbgrader/apps/quickstartapp.py b/nbgrader/apps/quickstartapp.py
index 77154df3..462e1cd7 100644
--- a/nbgrader/apps/quickstartapp.py
+++ b/nbgrader/apps/quickstartapp.py
@@ -26,6 +26,20 @@ flags = {
"""
)
),
+ 'f': (
+ {'QuickStartApp': {'force': True}},
+ dedent(
+ """
+ Overwrite existing files if they already exist. WARNING: this is
+ equivalent to doing:
+
+ rm -r <course_id>
+ nbgrader quickstart <course_id>
+
+ So be careful when using this flag!
+ """
+ )
+ ),
}
class QuickStartApp(NbGrader):
diff --git a/nbgrader/apps/releaseapp.py b/nbgrader/apps/releaseapp.py
index 0968ef4b..c44270cd 100644
--- a/nbgrader/apps/releaseapp.py
+++ b/nbgrader/apps/releaseapp.py
@@ -20,6 +20,10 @@ flags.update({
{'ExchangeRelease' : {'force' : True}},
"Force overwrite of existing files in the exchange."
),
+ 'f': (
+ {'ExchangeRelease' : {'force' : True}},
+ "Force overwrite of existing files in the exchange."
+ ),
})
class ReleaseApp(NbGrader):
diff --git a/nbgrader/apps/zipcollectapp.py b/nbgrader/apps/zipcollectapp.py
index 1183667f..2cac325e 100644
--- a/nbgrader/apps/zipcollectapp.py
+++ b/nbgrader/apps/zipcollectapp.py
@@ -35,6 +35,13 @@ flags = {
},
"Force overwrite of existing files."
),
+ 'f': (
+ {
+ 'ZipCollectApp': {'force': True},
+ 'ExtractorPlugin': {'force': True}
+ },
+ "Force overwrite of existing files."
+ ),
'strict': (
{'ZipCollectApp': {'strict': True}},
"Skip submitted notebooks with invalid names."
| Allow nbgrader apps to use -f and --force
Currently only --force is supported, which means you have to do:
```
nbgrader autograde ps1 --force
```
rather than
```
nbgrader autograde ps1 -f
```
Both should be supported flags.
| jupyter/nbgrader | diff --git a/nbgrader/tests/apps/test_nbgrader_assign.py b/nbgrader/tests/apps/test_nbgrader_assign.py
index c39d91db..0af3eb6a 100644
--- a/nbgrader/tests/apps/test_nbgrader_assign.py
+++ b/nbgrader/tests/apps/test_nbgrader_assign.py
@@ -126,6 +126,38 @@ class TestNbGraderAssign(BaseTestApp):
assert not os.path.isfile(join(course_dir, "release", "ps1", "foo.txt"))
assert not os.path.isfile(join(course_dir, "release", "ps1", "blah.pyc"))
+ def test_force_f(self, course_dir):
+ """Ensure the force option works properly"""
+ self._copy_file(join('files', 'test.ipynb'), join(course_dir, 'source', 'ps1', 'test.ipynb'))
+ self._make_file(join(course_dir, 'source', 'ps1', 'foo.txt'), "foo")
+ self._make_file(join(course_dir, 'source', 'ps1', 'data', 'bar.txt'), "bar")
+ self._make_file(join(course_dir, 'source', 'ps1', 'blah.pyc'), "asdf")
+ with open("nbgrader_config.py", "a") as fh:
+ fh.write("""c.CourseDirectory.db_assignments = [dict(name="ps1")]\n""")
+
+ run_nbgrader(["assign", "ps1"])
+ assert os.path.isfile(join(course_dir, 'release', 'ps1', 'test.ipynb'))
+ assert os.path.isfile(join(course_dir, 'release', 'ps1', 'foo.txt'))
+ assert os.path.isfile(join(course_dir, 'release', 'ps1', 'data', 'bar.txt'))
+ assert not os.path.isfile(join(course_dir, 'release', 'ps1', 'blah.pyc'))
+
+ # check that it skips the existing directory
+ os.remove(join(course_dir, 'release', 'ps1', 'foo.txt'))
+ run_nbgrader(["assign", "ps1"])
+ assert not os.path.isfile(join(course_dir, 'release', 'ps1', 'foo.txt'))
+
+ # force overwrite the supplemental files
+ run_nbgrader(["assign", "ps1", "-f"])
+ assert os.path.isfile(join(course_dir, 'release', 'ps1', 'foo.txt'))
+
+ # force overwrite
+ os.remove(join(course_dir, 'source', 'ps1', 'foo.txt'))
+ run_nbgrader(["assign", "ps1", "-f"])
+ assert os.path.isfile(join(course_dir, "release", "ps1", "test.ipynb"))
+ assert os.path.isfile(join(course_dir, "release", "ps1", "data", "bar.txt"))
+ assert not os.path.isfile(join(course_dir, "release", "ps1", "foo.txt"))
+ assert not os.path.isfile(join(course_dir, "release", "ps1", "blah.pyc"))
+
def test_permissions(self, course_dir):
"""Are permissions properly set?"""
self._empty_notebook(join(course_dir, 'source', 'ps1', 'foo.ipynb'))
diff --git a/nbgrader/tests/apps/test_nbgrader_autograde.py b/nbgrader/tests/apps/test_nbgrader_autograde.py
index ba44d44b..02cfbcbd 100644
--- a/nbgrader/tests/apps/test_nbgrader_autograde.py
+++ b/nbgrader/tests/apps/test_nbgrader_autograde.py
@@ -335,6 +335,46 @@ class TestNbGraderAutograde(BaseTestApp):
assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "data", "bar.txt"))
assert not os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "blah.pyc"))
+ def test_force_f(self, db, course_dir):
+ """Ensure the force option works properly"""
+ with open("nbgrader_config.py", "a") as fh:
+ fh.write("""c.CourseDirectory.db_assignments = [dict(name='ps1', duedate='2015-02-02 14:58:23.948203 PST')]\n""")
+ fh.write("""c.CourseDirectory.db_students = [dict(id="foo"), dict(id="bar")]""")
+
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
+ self._make_file(join(course_dir, "source", "ps1", "foo.txt"), "foo")
+ self._make_file(join(course_dir, "source", "ps1", "data", "bar.txt"), "bar")
+ run_nbgrader(["assign", "ps1", "--db", db])
+
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
+ self._make_file(join(course_dir, "submitted", "foo", "ps1", "foo.txt"), "foo")
+ self._make_file(join(course_dir, "submitted", "foo", "ps1", "data", "bar.txt"), "bar")
+ self._make_file(join(course_dir, "submitted", "foo", "ps1", "blah.pyc"), "asdf")
+ run_nbgrader(["autograde", "ps1", "--db", db])
+
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "p1.ipynb"))
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "data", "bar.txt"))
+ assert not os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "blah.pyc"))
+
+ # check that it skips the existing directory
+ remove(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+ run_nbgrader(["autograde", "ps1", "--db", db])
+ assert not os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+
+ # force overwrite the supplemental files
+ run_nbgrader(["autograde", "ps1", "--db", db, "-f"])
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+
+ # force overwrite
+ remove(join(course_dir, "source", "ps1", "foo.txt"))
+ remove(join(course_dir, "submitted", "foo", "ps1", "foo.txt"))
+ run_nbgrader(["autograde", "ps1", "--db", db, "-f"])
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "p1.ipynb"))
+ assert not os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+ assert os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "data", "bar.txt"))
+ assert not os.path.isfile(join(course_dir, "autograded", "foo", "ps1", "blah.pyc"))
+
def test_filter_notebook(self, db, course_dir):
"""Does autograding filter by notebook properly?"""
with open("nbgrader_config.py", "a") as fh:
diff --git a/nbgrader/tests/apps/test_nbgrader_db.py b/nbgrader/tests/apps/test_nbgrader_db.py
index 5b7789da..9576ecae 100644
--- a/nbgrader/tests/apps/test_nbgrader_db.py
+++ b/nbgrader/tests/apps/test_nbgrader_db.py
@@ -105,7 +105,33 @@ class TestNbGraderDb(BaseTestApp):
# now force it to complete
run_nbgrader(["db", "student", "remove", "foo", "--force", "--db", db])
- # student should be gone
+ # student should be gone
+ with Gradebook(db) as gb:
+ with pytest.raises(MissingEntry):
+ gb.find_student("foo")
+
+ def test_student_remove_with_submissions_f(self, db, course_dir):
+ run_nbgrader(["db", "student", "add", "foo", "--db", db])
+ run_nbgrader(["db", "assignment", "add", "ps1", "--db", db])
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
+ run_nbgrader(["assign", "ps1", "--db", db])
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
+ run_nbgrader(["autograde", "ps1", "--db", db])
+
+ with Gradebook(db) as gb:
+ gb.find_student("foo")
+
+ # it should fail if we don't run with --force
+ run_nbgrader(["db", "student", "remove", "foo", "--db", db], retcode=1)
+
+ # make sure we can still find the student
+ with Gradebook(db) as gb:
+ gb.find_student("foo")
+
+ # now force it to complete
+ run_nbgrader(["db", "student", "remove", "foo", "-f", "--db", db])
+
+ # student should be gone
with Gradebook(db) as gb:
with pytest.raises(MissingEntry):
gb.find_student("foo")
@@ -249,6 +275,32 @@ class TestNbGraderDb(BaseTestApp):
with pytest.raises(MissingEntry):
gb.find_assignment("ps1")
+ def test_assignment_remove_with_submissions_f(self, db, course_dir):
+ run_nbgrader(["db", "student", "add", "foo", "--db", db])
+ run_nbgrader(["db", "assignment", "add", "ps1", "--db", db])
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
+ run_nbgrader(["assign", "ps1", "--db", db])
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
+ run_nbgrader(["autograde", "ps1", "--db", db])
+
+ with Gradebook(db) as gb:
+ gb.find_assignment("ps1")
+
+ # it should fail if we don't run with --force
+ run_nbgrader(["db", "assignment", "remove", "ps1", "--db", db], retcode=1)
+
+ # make sure we can still find the assignment
+ with Gradebook(db) as gb:
+ gb.find_assignment("ps1")
+
+ # now force it to complete
+ run_nbgrader(["db", "assignment", "remove", "ps1", "-f", "--db", db])
+
+ # assignment should be gone
+ with Gradebook(db) as gb:
+ with pytest.raises(MissingEntry):
+ gb.find_assignment("ps1")
+
def test_assignment_list(self, db):
run_nbgrader(["db", "assignment", "add", "foo", '--duedate="Sun Jan 8 2017 4:31:22 PM"', "--db", db])
run_nbgrader(["db", "assignment", "add", "bar", "--db", db])
diff --git a/nbgrader/tests/apps/test_nbgrader_feedback.py b/nbgrader/tests/apps/test_nbgrader_feedback.py
index 637f11d7..20fb7a75 100644
--- a/nbgrader/tests/apps/test_nbgrader_feedback.py
+++ b/nbgrader/tests/apps/test_nbgrader_feedback.py
@@ -67,6 +67,46 @@ class TestNbGraderFeedback(BaseTestApp):
assert isfile(join(course_dir, "feedback", "foo", "ps1", "data", "bar.txt"))
assert not isfile(join(course_dir, "feedback", "foo", "ps1", "blah.pyc"))
+ def test_force_f(self, db, course_dir):
+ """Ensure the force option works properly"""
+ with open("nbgrader_config.py", "a") as fh:
+ fh.write("""c.CourseDirectory.db_assignments = [dict(name="ps1")]\n""")
+ fh.write("""c.CourseDirectory.db_students = [dict(id="foo")]\n""")
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
+ self._make_file(join(course_dir, "source", "ps1", "foo.txt"), "foo")
+ self._make_file(join(course_dir, "source", "ps1", "data", "bar.txt"), "bar")
+ run_nbgrader(["assign", "ps1", "--db", db])
+
+ self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
+ self._make_file(join(course_dir, "submitted", "foo", "ps1", "foo.txt"), "foo")
+ self._make_file(join(course_dir, "submitted", "foo", "ps1", "data", "bar.txt"), "bar")
+ run_nbgrader(["autograde", "ps1", "--db", db])
+
+ self._make_file(join(course_dir, "autograded", "foo", "ps1", "blah.pyc"), "asdf")
+ run_nbgrader(["feedback", "ps1", "--db", db])
+
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "p1.html"))
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "foo.txt"))
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "data", "bar.txt"))
+ assert not isfile(join(course_dir, "feedback", "foo", "ps1", "blah.pyc"))
+
+ # check that it skips the existing directory
+ remove(join(course_dir, "feedback", "foo", "ps1", "foo.txt"))
+ run_nbgrader(["feedback", "ps1", "--db", db])
+ assert not isfile(join(course_dir, "feedback", "foo", "ps1", "foo.txt"))
+
+ # force overwrite the supplemental files
+ run_nbgrader(["feedback", "ps1", "--db", db, "-f"])
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "foo.txt"))
+
+ # force overwrite
+ remove(join(course_dir, "autograded", "foo", "ps1", "foo.txt"))
+ run_nbgrader(["feedback", "ps1", "--db", db, "--force"])
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "p1.html"))
+ assert not isfile(join(course_dir, "feedback", "foo", "ps1", "foo.txt"))
+ assert isfile(join(course_dir, "feedback", "foo", "ps1", "data", "bar.txt"))
+ assert not isfile(join(course_dir, "feedback", "foo", "ps1", "blah.pyc"))
+
def test_filter_notebook(self, db, course_dir):
"""Does feedback filter by notebook properly?"""
with open("nbgrader_config.py", "a") as fh:
diff --git a/nbgrader/tests/apps/test_nbgrader_quickstart.py b/nbgrader/tests/apps/test_nbgrader_quickstart.py
index 1189933c..d9a9f705 100644
--- a/nbgrader/tests/apps/test_nbgrader_quickstart.py
+++ b/nbgrader/tests/apps/test_nbgrader_quickstart.py
@@ -39,3 +39,28 @@ class TestNbGraderQuickStart(BaseTestApp):
# nbgrader assign should work
run_nbgrader(["assign", "ps1"])
+ def test_quickstart_f(self):
+ """Does the quickstart --force flag work properly?"""
+
+ run_nbgrader(["quickstart", "example"])
+
+ # it should fail if it already exists
+ run_nbgrader(["quickstart", "example"], retcode=1)
+
+ # it should succeed if --force is given
+ os.remove(os.path.join("example", "nbgrader_config.py"))
+ run_nbgrader(["quickstart", "example", "-f"])
+ assert os.path.exists(os.path.join("example", "nbgrader_config.py"))
+
+ # nbgrader validate should work
+ os.chdir("example")
+ for nb in os.listdir(os.path.join("source", "ps1")):
+ if not nb.endswith(".ipynb"):
+ continue
+ output = run_nbgrader(["validate", os.path.join("source", "ps1", nb)], stdout=True)
+ assert output.strip() == "Success! Your notebook passes all the tests."
+
+ # nbgrader assign should work
+ run_nbgrader(["assign", "ps1"])
+
+
diff --git a/nbgrader/tests/apps/test_nbgrader_release.py b/nbgrader/tests/apps/test_nbgrader_release.py
index 0d8bf2dc..830f5955 100644
--- a/nbgrader/tests/apps/test_nbgrader_release.py
+++ b/nbgrader/tests/apps/test_nbgrader_release.py
@@ -53,6 +53,19 @@ class TestNbGraderRelease(BaseTestApp):
self._release("ps1", exchange, flags=["--force"])
assert os.path.isfile(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
+ def test_force_release_f(self, exchange, course_dir):
+ self._copy_file(join("files", "test.ipynb"), join(course_dir, "release", "ps1", "p1.ipynb"))
+ self._release("ps1", exchange)
+ assert os.path.isfile(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
+
+ self._release("ps1", exchange, retcode=1)
+
+ os.remove(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
+ self._release("ps1", exchange, retcode=1)
+
+ self._release("ps1", exchange, flags=["-f"])
+ assert os.path.isfile(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
+
def test_release_with_assignment_flag(self, exchange, course_dir):
self._copy_file(join("files", "test.ipynb"), join(course_dir, "release", "ps1", "p1.ipynb"))
self._release("--assignment=ps1", exchange)
diff --git a/nbgrader/tests/apps/test_nbgrader_zip_collect.py b/nbgrader/tests/apps/test_nbgrader_zip_collect.py
index 58343a55..9f5dca14 100644
--- a/nbgrader/tests/apps/test_nbgrader_zip_collect.py
+++ b/nbgrader/tests/apps/test_nbgrader_zip_collect.py
@@ -72,6 +72,25 @@ class TestNbGraderZipCollect(BaseTestApp):
assert os.path.isdir(extracted_dir)
assert len(os.listdir(extracted_dir)) == 1
+ def test_extract_single_notebook_f(self, course_dir, archive_dir):
+ extracted_dir = join(archive_dir, "..", "extracted")
+ self._make_notebook(archive_dir,
+ 'ps1', 'hacker', '2016-01-30-15-30-10', 'problem1')
+
+ run_nbgrader(["zip_collect", "ps1"])
+ assert os.path.isdir(extracted_dir)
+ assert len(os.listdir(extracted_dir)) == 1
+
+ # Running again without --force should fail
+ run_nbgrader(["zip_collect", "ps1"], retcode=1)
+ assert os.path.isdir(extracted_dir)
+ assert len(os.listdir(extracted_dir)) == 1
+
+ # Running again with the --force flag should pass
+ run_nbgrader(["zip_collect", "-f", "ps1"])
+ assert os.path.isdir(extracted_dir)
+ assert len(os.listdir(extracted_dir)) == 1
+
def test_extract_sub_dir_single_notebook(self, course_dir, archive_dir):
extracted_dir = join(archive_dir, "..", "extracted")
self._make_notebook(join(archive_dir, 'hacker'),
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 7
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -r dev-requirements.txt -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-rerunfailures",
"coverage",
"selenium",
"invoke",
"sphinx",
"codecov",
"cov-core",
"nbval"
],
"pre_install": [
"pip install -U pip wheel setuptools"
],
"python": "3.5",
"reqs_path": [
"dev-requirements.txt",
"dev-requirements-windows.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
alembic==1.7.7
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-generator==1.10
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
comm==0.1.4
contextvars==2.4
cov-core==1.15.0
coverage==6.2
dataclasses==0.8
decorator==5.1.1
defusedxml==0.7.1
docutils==0.18.1
entrypoints==0.4
greenlet==2.0.2
idna==3.10
imagesize==1.4.1
immutables==0.19
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
invoke==2.2.0
ipykernel==5.5.6
ipython==7.16.3
ipython-genutils==0.2.0
ipywidgets==7.8.5
jedi==0.17.2
Jinja2==3.0.3
json5==0.9.16
jsonschema==3.2.0
jupyter==1.1.1
jupyter-client==7.1.2
jupyter-console==6.4.3
jupyter-core==4.9.2
jupyter-server==1.13.1
jupyterlab==3.2.9
jupyterlab-pygments==0.1.2
jupyterlab-server==2.10.3
jupyterlab_widgets==1.1.11
Mako==1.1.6
MarkupSafe==2.0.1
mistune==0.8.4
nbclassic==0.3.5
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
-e git+https://github.com/jupyter/nbgrader.git@1c823a4410ef3abcdd1a9f50aab5a546c994e4e8#egg=nbgrader
nbval==0.10.0
nest-asyncio==1.6.0
notebook==6.4.10
packaging==21.3
pandocfilters==1.5.1
parso==0.7.1
pexpect==4.9.0
pickleshare==0.7.5
pluggy==1.0.0
prometheus-client==0.17.1
prompt-toolkit==3.0.36
ptyprocess==0.7.0
py==1.11.0
pycparser==2.21
pyenchant==3.2.2
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
pytest-cov==4.0.0
pytest-rerunfailures==10.3
python-dateutil==2.9.0.post0
pytz==2025.2
pyzmq==25.1.2
requests==2.27.1
selenium==3.141.0
Send2Trash==1.8.3
six==1.17.0
sniffio==1.2.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-spelling==7.7.0
SQLAlchemy==1.4.54
terminado==0.12.1
testpath==0.6.0
tomli==1.2.3
tornado==6.1
traitlets==4.3.3
typing_extensions==4.1.1
urllib3==1.26.20
wcwidth==0.2.13
webencodings==0.5.1
websocket-client==1.3.1
widgetsnbextension==3.6.10
zipp==3.6.0
| name: nbgrader
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- alembic==1.7.7
- anyio==3.6.2
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- async-generator==1.10
- attrs==22.2.0
- babel==2.11.0
- backcall==0.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- comm==0.1.4
- contextvars==2.4
- cov-core==1.15.0
- coverage==6.2
- dataclasses==0.8
- decorator==5.1.1
- defusedxml==0.7.1
- docutils==0.18.1
- entrypoints==0.4
- greenlet==2.0.2
- idna==3.10
- imagesize==1.4.1
- immutables==0.19
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- invoke==2.2.0
- ipykernel==5.5.6
- ipython==7.16.3
- ipython-genutils==0.2.0
- ipywidgets==7.8.5
- jedi==0.17.2
- jinja2==3.0.3
- json5==0.9.16
- jsonschema==3.2.0
- jupyter==1.1.1
- jupyter-client==7.1.2
- jupyter-console==6.4.3
- jupyter-core==4.9.2
- jupyter-server==1.13.1
- jupyterlab==3.2.9
- jupyterlab-pygments==0.1.2
- jupyterlab-server==2.10.3
- jupyterlab-widgets==1.1.11
- mako==1.1.6
- markupsafe==2.0.1
- mistune==0.8.4
- nbclassic==0.3.5
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nbval==0.10.0
- nest-asyncio==1.6.0
- notebook==6.4.10
- packaging==21.3
- pandocfilters==1.5.1
- parso==0.7.1
- pexpect==4.9.0
- pickleshare==0.7.5
- pip==21.3.1
- pluggy==1.0.0
- prometheus-client==0.17.1
- prompt-toolkit==3.0.36
- ptyprocess==0.7.0
- py==1.11.0
- pycparser==2.21
- pyenchant==3.2.2
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-rerunfailures==10.3
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyzmq==25.1.2
- requests==2.27.1
- selenium==3.141.0
- send2trash==1.8.3
- setuptools==59.6.0
- six==1.17.0
- sniffio==1.2.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-rtd-theme==2.0.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- sphinxcontrib-spelling==7.7.0
- sqlalchemy==1.4.54
- terminado==0.12.1
- testpath==0.6.0
- tomli==1.2.3
- tornado==6.1
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- wcwidth==0.2.13
- webencodings==0.5.1
- websocket-client==1.3.1
- widgetsnbextension==3.6.10
- zipp==3.6.0
prefix: /opt/conda/envs/nbgrader
| [
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_force_f",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_force_f",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_remove_with_submissions_f",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_remove_with_submissions_f",
"nbgrader/tests/apps/test_nbgrader_quickstart.py::TestNbGraderQuickStart::test_quickstart_f",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_force_release_f",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_extract_single_notebook_f"
]
| [
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_force_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_update_newer_single_notebook",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_single_file",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_force",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_force_f",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_filter_notebook",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_permissions",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_custom_permissions",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_force_single_notebook",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_update_newer",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_update_newer_single_notebook"
]
| [
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_help",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_no_args",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_conflicting_args",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_multiple_args",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_no_assignment",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_single_file",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_single_file_bad_assignment_name",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_multiple_files",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_dependent_files",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_save_cells",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_force",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_permissions",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_custom_permissions",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_add_remove_extra_notebooks",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_add_extra_notebooks_with_submissions",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_remove_extra_notebooks_with_submissions",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_same_notebooks_with_submissions",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_force_single_notebook",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_fail_no_notebooks",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_no_metadata",
"nbgrader/tests/apps/test_nbgrader_assign.py::TestNbGraderAssign::test_header",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_help",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_student",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_assignment",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_timestamp",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_empty_timestamp",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_none",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_zero",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_late_submission_penalty_plugin",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_force",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_filter_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_overwrite_files",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_overwrite_files_subdirs",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_side_effects",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_skip_extra_notebooks",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_permissions",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_custom_permissions",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_update_newer",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_hidden_tests_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_handle_failure",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_handle_failure_single_notebook",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_source_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_incorrect_source_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_incorrect_submitted_kernelspec",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_no_execute",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_infinite_loop",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_missing_files",
"nbgrader/tests/apps/test_nbgrader_autograde.py::TestNbGraderAutograde::test_grade_missing_notebook",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_help",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_no_args",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_add",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_remove",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_remove_with_submissions",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_list",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_import",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_student_import_csv_spaces",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_add",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_remove",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_remove_with_submissions",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_list",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_import",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_assignment_import_csv_spaces",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_upgrade_nodb",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_upgrade_current_db",
"nbgrader/tests/apps/test_nbgrader_db.py::TestNbGraderDb::test_upgrade_old_db",
"nbgrader/tests/apps/test_nbgrader_feedback.py::TestNbGraderFeedback::test_help",
"nbgrader/tests/apps/test_nbgrader_quickstart.py::TestNbGraderQuickStart::test_help",
"nbgrader/tests/apps/test_nbgrader_quickstart.py::TestNbGraderQuickStart::test_no_course_id",
"nbgrader/tests/apps/test_nbgrader_quickstart.py::TestNbGraderQuickStart::test_quickstart",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_help",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_no_course_id",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_release",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_force_release",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_release_with_assignment_flag",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_no_exchange",
"nbgrader/tests/apps/test_nbgrader_release.py::TestNbGraderRelease::test_exchange_bad_perms",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_help",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_args",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_no_archive_dir",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_empty_folders",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_extract_single_notebook",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_extract_sub_dir_single_notebook",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_extract_archive",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_extract_archive_copies",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_no_regexp",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_bad_regexp",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_regexp_missing_student_id",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_regexp_bad_student_id_type",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_single_notebook",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_single_notebook_attempts",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_multiple_notebooks",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_sub_dir_single_notebook",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_invalid_notebook",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_none",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_empty_str",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_bad_str",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_skip_older",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_replace_newer",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_timestamp_file",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_preserve_sub_dir",
"nbgrader/tests/apps/test_nbgrader_zip_collect.py::TestNbGraderZipCollect::test_collect_duplicate_fail"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,489 | [
"nbgrader/apps/autogradeapp.py",
"nbgrader/apps/quickstartapp.py",
"nbgrader/apps/zipcollectapp.py",
"nbgrader/apps/dbapp.py",
"nbgrader/apps/assignapp.py",
"nbgrader/apps/releaseapp.py",
"nbgrader/apps/feedbackapp.py"
]
| [
"nbgrader/apps/autogradeapp.py",
"nbgrader/apps/quickstartapp.py",
"nbgrader/apps/zipcollectapp.py",
"nbgrader/apps/dbapp.py",
"nbgrader/apps/assignapp.py",
"nbgrader/apps/releaseapp.py",
"nbgrader/apps/feedbackapp.py"
]
|
|
fatiando__verde-31 | 4eb0999faa1993dd473d30954aacbab4b6a4feb0 | 2018-05-09 05:23:08 | 4eb0999faa1993dd473d30954aacbab4b6a4feb0 | diff --git a/Makefile b/Makefile
index e2a65aa..c57faf7 100644
--- a/Makefile
+++ b/Makefile
@@ -32,7 +32,7 @@ coverage:
rm -r $(TESTDIR)
pep8:
- flake8 $(PROJECT) setup.py
+ flake8 $(PROJECT) setup.py examples
lint:
pylint $(PROJECT) setup.py
diff --git a/examples/scipygridder.py b/examples/scipygridder.py
new file mode 100644
index 0000000..4531d15
--- /dev/null
+++ b/examples/scipygridder.py
@@ -0,0 +1,84 @@
+"""
+Gridding with Scipy
+===================
+
+Scipy offers a range of interpolation methods in :mod:`scipy.interpolate` and 3
+specifically for 2D data (linear, nearest neighbors, and bicubic). Verde offers
+an interface for these 3 scipy interpolators in :class:`verde.ScipyGridder`.
+
+All of these interpolations work on Cartesian data, so if we want to grid
+geographic data (like our Baja California bathymetry) we need to project them
+into a Cartesian system. We'll use
+`pyproj <https://github.com/jswhit/pyproj>`__ to calculate a Mercator
+projection for the data.
+
+For convenience, Verde still allows us to make geographic grids by passing the
+``projection`` argument to :meth:`verde.ScipyGridder.grid` and the like. When
+doing so, the grid will be generated using geographic coordinates which will be
+projected prior to interpolation.
+"""
+import matplotlib.pyplot as plt
+import cartopy.crs as ccrs
+import cartopy.feature as cfeature
+import pyproj
+import numpy as np
+import verde as vd
+
+# We'll test this on the Baja California shipborne bathymetry data
+data = vd.datasets.fetch_baja_bathymetry()
+
+# Before gridding, we need to decimate the data to avoid aliasing because of
+# the oversampling along the ship tracks. We'll use a blocked median with 5
+# arc-minute blocks.
+spacing = 5/60
+lon, lat, bathymetry = vd.block_reduce(data.longitude, data.latitude,
+ data.bathymetry_m, reduction=np.median,
+ spacing=spacing)
+
+# Project the data using pyproj so that we can use it as input for the gridder.
+# We'll set the latitude of true scale to the mean latitude of the data.
+projection = pyproj.Proj(proj='merc', lat_ts=data.latitude.mean())
+easting, northing = projection(lon, lat)
+
+# Now we can set up a gridder for the decimated data
+grd = vd.ScipyGridder(method='cubic').fit(easting, northing, bathymetry)
+print("Gridder used:", grd)
+
+# Get the grid region in geographic coordinates
+region = vd.get_region(data.longitude, data.latitude)
+print("Data region:", region)
+
+# The 'grid' method can still make a geographic grid if we pass in a projection
+# function that converts lon, lat into the easting, northing coordinates that
+# we used in 'fit'. This can be any function that takes lon, lat and returns x,
+# y. In our case, it'll be the 'projection' variable that we created above.
+# We'll also set the names of the grid dimensions and the name the data
+# variable in our grid (the default would be 'scalars', which isn't very
+# informative).
+grid = grd.grid(region=region, spacing=spacing, projection=projection,
+ dims=['latitude', 'longitude'], data_names=['bathymetry_m'])
+print("Generated geographic grid:")
+print(grid)
+
+# Cartopy requires setting the coordinate reference system (CRS) of the
+# original data through the transform argument. Their docs say to use
+# PlateCarree to represent geographic data.
+crs = ccrs.PlateCarree()
+
+plt.figure(figsize=(7, 6))
+# Make a Mercator map of our gridded bathymetry
+ax = plt.axes(projection=ccrs.Mercator())
+ax.set_title("Gridded Bathymetry Using Scipy", pad=25)
+# Plot the gridded bathymetry
+pc = ax.pcolormesh(grid.longitude, grid.latitude, grid.bathymetry_m,
+ transform=crs, vmax=0)
+cb = plt.colorbar(pc, pad=0.08)
+cb.set_label('bathymetry [m]')
+# Plot the locations of the decimated data
+ax.plot(lon, lat, '.k', markersize=0.5, transform=crs)
+# Plot the land as a solid color
+ax.add_feature(cfeature.LAND, edgecolor='black', zorder=2)
+ax.set_extent(region, crs=crs)
+ax.gridlines(draw_labels=True)
+plt.tight_layout()
+plt.show()
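The `projection` callable used in the example above is simply any function with the signature `(longitude, latitude) -> (easting, northing)`. As a standalone illustration of that interface, here is a minimal spherical Mercator sketch (the formula and the `EARTH_RADIUS_M` constant below are an approximation written for this illustration, not pyproj itself):

```python
import math

EARTH_RADIUS_M = 6378137  # WGS84 semi-major axis, used here for illustration

def mercator(longitude, latitude, lat_ts=0):
    """Project geographic coordinates to Mercator easting/northing in meters.

    A stand-in for the pyproj.Proj(proj='merc', lat_ts=...) call in the
    example: any callable with this (lon, lat) -> (east, north) signature
    can be passed as the `projection` argument.
    """
    # Scale factor at the latitude of true scale
    k = math.cos(math.radians(lat_ts))
    east = EARTH_RADIUS_M * k * math.radians(longitude)
    north = EARTH_RADIUS_M * k * math.log(
        math.tan(math.pi / 4 + math.radians(latitude) / 2))
    return east, north

# At the equator with lat_ts=0, one degree of longitude is ~111.3 km
east, north = mercator(1.0, 0.0)
```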
diff --git a/verde/__init__.py b/verde/__init__.py
index 6c4e8f6..505fb9c 100644
--- a/verde/__init__.py
+++ b/verde/__init__.py
@@ -5,7 +5,7 @@ from ._version import get_versions as _get_versions
# Import functions/classes to make the API
from .utils import scatter_points, grid_coordinates, profile_coordinates, \
- block_reduce, block_region, inside
+ block_reduce, block_region, inside, get_region
from . import datasets
from .scipy_bridge import ScipyGridder
diff --git a/verde/base/gridder.py b/verde/base/gridder.py
index 58b7ad1..5789398 100644
--- a/verde/base/gridder.py
+++ b/verde/base/gridder.py
@@ -212,7 +212,7 @@ class BaseGridder(BaseEstimator):
raise NotImplementedError()
def grid(self, region=None, shape=None, spacing=None, adjust='spacing',
- dims=None, data_names=None):
+ dims=None, data_names=None, projection=None):
"""
Interpolate the data onto a regular grid.
@@ -278,7 +278,10 @@ class BaseGridder(BaseEstimator):
region = get_region(self, region)
easting, northing = grid_coordinates(region, shape=shape,
spacing=spacing, adjust=adjust)
- data = check_data(self.predict(easting, northing))
+ if projection is None:
+ data = check_data(self.predict(easting, northing))
+ else:
+ data = check_data(self.predict(*projection(easting, northing)))
coords = {dims[1]: easting[0, :], dims[0]: northing[:, 0]}
attrs = {'metadata': 'Generated by {}'.format(repr(self))}
data_vars = {name: (dims, value, attrs)
@@ -286,7 +289,7 @@ class BaseGridder(BaseEstimator):
return xr.Dataset(data_vars, coords=coords, attrs=attrs)
def scatter(self, region=None, size=300, random_state=0, dims=None,
- data_names=None):
+ data_names=None, projection=None):
"""
Interpolate values onto a random scatter of points.
@@ -331,17 +334,16 @@ class BaseGridder(BaseEstimator):
data_names = get_data_names(self, data_names)
region = get_region(self, region)
east, north = scatter_points(region, size, random_state)
- data = check_data(self.predict(east, north))
- column_names = [dim for dim in dims]
- column_names.extend(data_names)
- columns = [north, east]
- columns.extend(data)
- table = pd.DataFrame(
- {name: value for name, value in zip(column_names, columns)},
- columns=column_names)
- return table
-
- def profile(self, point1, point2, size, dims=None, data_names=None):
+ if projection is None:
+ data = check_data(self.predict(east, north))
+ else:
+ data = check_data(self.predict(*projection(east, north)))
+ columns = [(dims[0], north), (dims[1], east)]
+ columns.extend(zip(data_names, data))
+ return pd.DataFrame(dict(columns), columns=[c[0] for c in columns])
+
+ def profile(self, point1, point2, size, dims=None, data_names=None,
+ projection=None):
"""
Interpolate data along a profile between two points.
@@ -388,13 +390,10 @@ class BaseGridder(BaseEstimator):
data_names = get_data_names(self, data_names)
east, north, distances = profile_coordinates(
point1, point2, size, coordinate_system=coordsys)
- data = check_data(self.predict(east, north))
- column_names = [dim for dim in dims]
- column_names.append('distance')
- column_names.extend(data_names)
- columns = [north, east, distances]
- columns.extend(data)
- table = pd.DataFrame(
- {name: value for name, value in zip(column_names, columns)},
- columns=column_names)
- return table
+ if projection is None:
+ data = check_data(self.predict(east, north))
+ else:
+ data = check_data(self.predict(*projection(east, north)))
+ columns = [(dims[0], north), (dims[1], east), ('distance', distances)]
+ columns.extend(zip(data_names, data))
+ return pd.DataFrame(dict(columns), columns=[c[0] for c in columns])
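The pattern repeated in `grid`, `scatter`, and `profile` above — call `predict` on the raw coordinates unless a projection callable is supplied, in which case project first — can be sketched in isolation (the function name and lambdas below are illustrative, not the Verde API):

```python
def predict_maybe_projected(predict, easting, northing, projection=None):
    """Dispatch to predict directly, or through a projection callable.

    `projection` follows the (lon, lat) -> (east, north) convention, so the
    projected pair can be unpacked straight into predict.
    """
    if projection is None:
        return predict(easting, northing)
    return predict(*projection(easting, northing))

# A predictor that sums its inputs, and a projection that doubles them
result_plain = predict_maybe_projected(lambda e, n: e + n, 1, 2)
result_proj = predict_maybe_projected(
    lambda e, n: e + n, 1, 2,
    projection=lambda e, n: (2 * e, 2 * n))
```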
| Option to project coordinates before gridding
The methods in `BaseGridder` should take an optional `projection(lon, lat) -> east, north` argument to project coordinates before passing them to `predict`. This allows Cartesian gridders to create geographic grids without a lot of work. The grid output should still be in the original coordinates. Pyproj projections can also be used through this interface. | fatiando/verde | diff --git a/verde/tests/test_base.py b/verde/tests/test_base.py
index 98ed528..d39c4b1 100644
--- a/verde/tests/test_base.py
+++ b/verde/tests/test_base.py
@@ -1,4 +1,4 @@
-# pylint: disable=unused-argument
+# pylint: disable=unused-argument,too-many-locals
"""
Test the base classes and their utility functions.
"""
@@ -8,7 +8,7 @@ import numpy.testing as npt
from ..base import BaseGridder
from ..base.gridder import get_dims, get_data_names, get_region
-from .. import grid_coordinates
+from .. import grid_coordinates, scatter_points
def test_get_dims():
@@ -95,46 +95,89 @@ def test_get_region():
assert get_region(grd, region=(1, 2, 3, 4)) == (1, 2, 3, 4)
-def test_basegridder():
- "Test basic functionality of BaseGridder"
+class PolyGridder(BaseGridder):
+ "A test gridder"
- with pytest.raises(NotImplementedError):
- BaseGridder().predict(None, None)
+ def __init__(self, degree=1):
+ self.degree = degree
- class TestGridder(BaseGridder):
- "A test gridder"
+ def fit(self, easting, northing, data):
+ "Fit an easting polynomial"
+ ndata = data.size
+ nparams = self.degree + 1
+ jac = np.zeros((ndata, nparams))
+ for j in range(nparams):
+ jac[:, j] = easting.ravel()**j
+ self.coefs_ = np.linalg.solve(jac.T.dot(jac), jac.T.dot(data.ravel()))
+ return self
- def __init__(self, constant=0):
- self.constant = constant
+ def predict(self, easting, northing):
+ "Predict the data"
+ return np.sum(cof*easting**deg for deg, cof in enumerate(self.coefs_))
- def fit(self, easting, northing, data):
- "Get the data mean"
- self.mean_ = data.mean()
- return self
- def predict(self, easting, northing):
- "Predict the data mean"
- return np.ones_like(easting)*self.mean_ + self.constant
+def test_basegridder():
+ "Test basic functionality of BaseGridder"
- grd = TestGridder()
- assert repr(grd) == 'TestGridder(constant=0)'
- grd.constant = 1000
- assert repr(grd) == 'TestGridder(constant=1000)'
+ with pytest.raises(NotImplementedError):
+ BaseGridder().predict(None, None)
+
+ grd = PolyGridder()
+ assert repr(grd) == 'PolyGridder(degree=1)'
+ grd.degree = 2
+ assert repr(grd) == 'PolyGridder(degree=2)'
region = (0, 10, -10, -5)
shape = (50, 30)
- east, north = grid_coordinates(region, shape)
- data = np.ones_like(east)
- grd = TestGridder().fit(east, north, data)
+ angular, linear = 2, 100
+ east, north = scatter_points(region, 1000, random_state=0)
+ data = angular*east + linear
+ grd = PolyGridder().fit(east, north, data)
with pytest.raises(ValueError):
- # A region should be given because it hasn't been assigned by
- # TestGridder
+ # A region should be given because it hasn't been assigned
grd.grid()
+ east_true, north_true = grid_coordinates(region, shape)
+ data_true = angular*east_true + linear
grid = grd.grid(region, shape)
- npt.assert_allclose(grid.scalars.values, data)
- npt.assert_allclose(grid.easting.values, east[0, :])
- npt.assert_allclose(grid.northing.values, north[:, 0])
- npt.assert_allclose(grd.scatter(region, 100).scalars, 1)
- npt.assert_allclose(grd.profile((0, 100), (20, 10), 100).scalars, 1)
+
+ npt.assert_allclose(grd.coefs_, [linear, angular])
+ npt.assert_allclose(grid.scalars.values, data_true)
+ npt.assert_allclose(grid.easting.values, east_true[0, :])
+ npt.assert_allclose(grid.northing.values, north_true[:, 0])
+ npt.assert_allclose(grd.scatter(region, 1000, random_state=0).scalars,
+ data)
+ npt.assert_allclose(grd.profile((0, 0), (10, 0), 30).scalars,
+ angular*east_true[0, :] + linear)
+
+
+def test_basegridder_projection():
+ "Test basic functionality of BaseGridder when passing in a projection"
+
+ region = (0, 10, -10, -5)
+ shape = (50, 30)
+ angular, linear = 2, 100
+ east, north = scatter_points(region, 1000, random_state=0)
+ data = angular*east + linear
+ east_true, north_true = grid_coordinates(region, shape)
+ data_true = angular*east_true + linear
+ grd = PolyGridder().fit(east, north, data)
+
+ # Lets say we want to specify the region for a grid using a coordinate
+ # system that is lon/2, lat/2.
+ def proj(lon, lat):
+ "Project from the new coordinates to the original"
+ return (lon*2, lat*2)
+
+ proj_region = [i/2 for i in region]
+ grid = grd.grid(proj_region, shape, projection=proj)
+ scat = grd.scatter(proj_region, 1000, random_state=0, projection=proj)
+ prof = grd.profile((0, 0), (5, 0), 30, projection=proj)
+
+ npt.assert_allclose(grd.coefs_, [linear, angular])
+ npt.assert_allclose(grid.scalars.values, data_true)
+ npt.assert_allclose(grid.easting.values, east_true[0, :]/2)
+ npt.assert_allclose(grid.northing.values, north_true[:, 0]/2)
+ npt.assert_allclose(scat.scalars, data)
+ npt.assert_allclose(prof.scalars, angular*east_true[0, :] + linear)
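The `projection(lon, lat) -> east, north` interface from the issue, and the "project only for prediction, keep output in original coordinates" behavior exercised by the test above, can be sketched with a plain callable. This is a minimal illustration, not Verde's actual implementation; the `projection` and `grid` names below are hypothetical stand-ins:

```python
import numpy as np

def projection(lon, lat):
    # Hypothetical projection callable: geographic -> Cartesian.
    # A real workflow could pass a pyproj projection with the same
    # (lon, lat) -> (east, north) call signature.
    return lon * 2.0, lat * 2.0

def grid(region, shape, predict, projection=None):
    # Sketch of a gridder method accepting an optional projection:
    # build coordinates in the requested (original) system, project
    # them only when calling predict, and return the grid in the
    # original coordinates.
    west, east, south, north = region
    lon = np.linspace(west, east, shape[1])
    lat = np.linspace(south, north, shape[0])
    lon_grid, lat_grid = np.meshgrid(lon, lat)
    if projection is None:
        data = predict(lon_grid, lat_grid)
    else:
        data = predict(*projection(lon_grid, lat_grid))
    return lon, lat, data

# Toy predictor that only depends on easting: data = 2*east + 100
lon, lat, data = grid((0, 5, -5, 0), (3, 3),
                      predict=lambda e, n: 2 * e + 100,
                      projection=projection)
```

Note that the output coordinates (`lon`, `lat`) stay in the system the caller specified the region in, matching the assertions like `grid.easting.values == east_true[0, :]/2` in the test patch.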
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 3
} | unknown | {
"env_vars": null,
"env_yml_path": [
"environment.yml"
],
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": true,
"packages": "environment.yml",
"pip_packages": [
"pytest pytest-cov pytest-xdist pytest-mock pytest-asyncio",
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster @ file:///home/conda/feedstock_root/build_artifacts/alabaster_1673645646525/work
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1633990451307/work
astroid @ file:///home/conda/feedstock_root/build_artifacts/astroid_1631641568157/work
async_generator @ file:///home/conda/feedstock_root/build_artifacts/async_generator_1722652753231/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1671632566681/work
Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1667688356751/work
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1702571698061/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work
brotlipy==0.7.0
Cartopy @ file:///home/conda/feedstock_root/build_artifacts/cartopy_1630680837223/work
certifi==2021.5.30
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1625835287197/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work
cmarkgfm @ file:///home/conda/feedstock_root/build_artifacts/cmarkgfm_1625147428696/work
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1655412516417/work
coverage @ file:///home/conda/feedstock_root/build_artifacts/coverage_1633450575846/work
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1634230300355/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
docutils @ file:///home/conda/feedstock_root/build_artifacts/docutils_1618676244774/work
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
execnet==1.9.0
flake8 @ file:///home/conda/feedstock_root/build_artifacts/flake8_1646781084538/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1726459485162/work
imagesize @ file:///home/conda/feedstock_root/build_artifacts/imagesize_1656939531508/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1630267465156/work
iniconfig @ file:///home/conda/feedstock_root/build_artifacts/iniconfig_1603384189793/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1620912934572/work/dist/ipykernel-5.5.5-py3-none-any.whl
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1609697613279/work
ipython_genutils @ file:///home/conda/feedstock_root/build_artifacts/ipython_genutils_1716278396992/work
ipywidgets @ file:///home/conda/feedstock_root/build_artifacts/ipywidgets_1679421482533/work
isort @ file:///home/conda/feedstock_root/build_artifacts/isort_1636447814597/work
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1605054537831/work
jeepney @ file:///home/conda/feedstock_root/build_artifacts/jeepney_1627546597665/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1636510082894/work
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema_1634752161479/work
jupyter @ file:///home/conda/feedstock_root/build_artifacts/jupyter_1696255489086/work
jupyter-client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1642858610849/work
jupyter-console @ file:///home/conda/feedstock_root/build_artifacts/jupyter_console_1676328545892/work
jupyter-core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1631852698933/work
jupyterlab-pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1601375948261/work
jupyterlab-widgets @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_widgets_1655961217661/work
keyring @ file:///home/conda/feedstock_root/build_artifacts/keyring_1631478318457/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1610099771815/work
lazy-object-proxy @ file:///home/conda/feedstock_root/build_artifacts/lazy-object-proxy_1616506793265/work
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1621455668064/work
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1611858699142/work
mccabe==0.6.1
mistune @ file:///home/conda/feedstock_root/build_artifacts/mistune_1673904152039/work
more-itertools @ file:///home/conda/feedstock_root/build_artifacts/more-itertools_1690211628840/work
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1637327213451/work
nbconvert @ file:///home/conda/feedstock_root/build_artifacts/nbconvert_1605401832871/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1617383142101/work
nbsphinx @ file:///home/conda/feedstock_root/build_artifacts/nbsphinx_1741075436613/work
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1705850609492/work
notebook @ file:///home/conda/feedstock_root/build_artifacts/notebook_1616419146127/work
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1626681920064/work
numpydoc @ file:///home/conda/feedstock_root/build_artifacts/numpydoc_1648619272706/work
olefile @ file:///home/conda/feedstock_root/build_artifacts/olefile_1602866521163/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.1.5
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1595548966091/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///home/conda/feedstock_root/build_artifacts/pillow_1630696616009/work
pkginfo @ file:///home/conda/feedstock_root/build_artifacts/pkginfo_1673281726124/work
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1645298319244/work
pluggy @ file:///home/conda/feedstock_root/build_artifacts/pluggy_1631522669284/work
prometheus-client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1689032443210/work
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1670414775770/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
py @ file:///home/conda/feedstock_root/build_artifacts/py_1636301881863/work
pycodestyle @ file:///home/conda/feedstock_root/build_artifacts/pycodestyle_1633982426610/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pyflakes @ file:///home/conda/feedstock_root/build_artifacts/pyflakes_1633634815271/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1672682006896/work
pylint @ file:///home/conda/feedstock_root/build_artifacts/pylint_1631892890219/work
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1663846997386/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1724616129934/work
PyQt5==5.12.3
PyQt5_sip==4.19.18
PyQtChart==5.12
PyQtWebEngine==5.12.1
pyrsistent @ file:///home/conda/feedstock_root/build_artifacts/pyrsistent_1610146795286/work
pyshp @ file:///home/conda/feedstock_root/build_artifacts/pyshp_1659002966020/work
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1610291458349/work
pytest==6.2.5
pytest-asyncio==0.16.0
pytest-cov @ file:///home/conda/feedstock_root/build_artifacts/pytest-cov_1664412836798/work
pytest-mock==3.6.1
pytest-xdist==3.0.2
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1693930252784/work
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1631793305981/work
qtconsole @ file:///home/conda/feedstock_root/build_artifacts/qtconsole-base_1640876679830/work
QtPy @ file:///home/conda/feedstock_root/build_artifacts/qtpy_1643828301492/work
readme-renderer @ file:///home/conda/feedstock_root/build_artifacts/readme_renderer_1602693010535/work
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1656534056640/work
requests-toolbelt @ file:///home/conda/feedstock_root/build_artifacts/requests-toolbelt_1682953341151/work
rfc3986 @ file:///home/conda/feedstock_root/build_artifacts/rfc3986_1641825045899/work
scikit-learn @ file:///home/conda/feedstock_root/build_artifacts/scikit-learn_1630910533947/work
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1629411471490/work
SecretStorage @ file:///home/conda/feedstock_root/build_artifacts/secretstorage_1612911548807/work
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1682601222253/work
Shapely @ file:///home/conda/feedstock_root/build_artifacts/shapely_1628205367507/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
snowballstemmer @ file:///home/conda/feedstock_root/build_artifacts/snowballstemmer_1637143057757/work
Sphinx @ file:///home/conda/feedstock_root/build_artifacts/sphinx_1658872348413/work
sphinx-gallery @ file:///home/conda/feedstock_root/build_artifacts/sphinx-gallery_1700542355088/work
sphinx-rtd-theme @ file:///home/conda/feedstock_root/build_artifacts/sphinx_rtd_theme_1701183475238/work
sphinxcontrib-applehelp @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-applehelp_1674487779667/work
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-htmlhelp_1675256494457/work
sphinxcontrib-jquery @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-jquery_1678808969227/work
sphinxcontrib-jsmath @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-jsmath_1691604704163/work
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml @ file:///home/conda/feedstock_root/build_artifacts/sphinxcontrib-serializinghtml_1649380998999/work
terminado @ file:///home/conda/feedstock_root/build_artifacts/terminado_1631128154882/work
testpath @ file:///home/conda/feedstock_root/build_artifacts/testpath_1645693042223/work
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1635181214134/work
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1610094701020/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1677887202771/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1631041982274/work
twine @ file:///home/conda/feedstock_root/build_artifacts/twine_1643895093454/work
typed-ast @ file:///home/conda/feedstock_root/build_artifacts/typed-ast_1618337594445/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1644850595256/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1678635778344/work
-e git+https://github.com/fatiando/verde.git@4eb0999faa1993dd473d30954aacbab4b6a4feb0#egg=verde
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1699959196938/work
webencodings @ file:///home/conda/feedstock_root/build_artifacts/webencodings_1694681268211/work
widgetsnbextension @ file:///home/conda/feedstock_root/build_artifacts/widgetsnbextension_1655939017940/work
wrapt @ file:///home/conda/feedstock_root/build_artifacts/wrapt_1610094846427/work
xarray @ file:///home/conda/feedstock_root/build_artifacts/xarray_1621474818012/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1633302054558/work
| name: verde
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- alabaster=0.7.13=pyhd8ed1ab_0
- alsa-lib=1.2.3.2=h166bdaf_0
- argon2-cffi=21.1.0=py36h8f6f2f9_0
- astroid=2.8.0=py36h5fab9bb_0
- async_generator=1.10=pyhd8ed1ab_1
- attrs=22.2.0=pyh71513ae_0
- babel=2.11.0=pyhd8ed1ab_0
- backcall=0.2.0=pyh9f0ad1d_0
- backports=1.0=pyhd8ed1ab_4
- backports.functools_lru_cache=2.0.0=pyhd8ed1ab_0
- bleach=6.1.0=pyhd8ed1ab_0
- brotlipy=0.7.0=py36h8f6f2f9_1001
- c-ares=1.18.1=h7f98852_0
- ca-certificates=2025.2.25=h06a4308_0
- cartopy=0.19.0.post1=py36hbcbf2fa_1
- certifi=2021.5.30=py36h06a4308_0
- cffi=1.14.6=py36hc120d54_0
- charset-normalizer=2.1.1=pyhd8ed1ab_0
- cmarkgfm=0.6.0=py36h8f6f2f9_0
- colorama=0.4.5=pyhd8ed1ab_0
- coverage=6.0=py36h8f6f2f9_1
- cryptography=35.0.0=py36hb60f036_0
- cycler=0.11.0=pyhd8ed1ab_0
- dbus=1.13.6=h48d8840_2
- decorator=5.1.1=pyhd8ed1ab_0
- defusedxml=0.7.1=pyhd8ed1ab_0
- docutils=0.17.1=py36h5fab9bb_0
- entrypoints=0.4=pyhd8ed1ab_0
- expat=2.4.8=h27087fc_0
- flake8=4.0.1=pyhd8ed1ab_2
- fontconfig=2.14.0=h8e229c2_0
- freetype=2.10.4=h0708190_1
- geos=3.9.1=h9c3ff4c_2
- gettext=0.19.8.1=h0b5b191_1005
- glib=2.68.4=h9c3ff4c_0
- glib-tools=2.68.4=h9c3ff4c_0
- gst-plugins-base=1.18.5=hf529b03_0
- gstreamer=1.18.5=h76c114f_0
- icu=68.2=h9c3ff4c_0
- idna=3.10=pyhd8ed1ab_0
- imagesize=1.4.1=pyhd8ed1ab_0
- importlib-metadata=4.8.1=py36h5fab9bb_0
- importlib_metadata=4.8.1=hd8ed1ab_1
- iniconfig=1.1.1=pyh9f0ad1d_0
- ipykernel=5.5.5=py36hcb3619a_0
- ipython=7.16.1=py36he448a4c_2
- ipython_genutils=0.2.0=pyhd8ed1ab_1
- ipywidgets=7.7.4=pyhd8ed1ab_0
- isort=5.10.1=pyhd8ed1ab_0
- jbig=2.1=h7f98852_2003
- jedi=0.17.2=py36h5fab9bb_1
- jeepney=0.7.1=pyhd8ed1ab_0
- jinja2=3.0.3=pyhd8ed1ab_0
- joblib=1.2.0=pyhd8ed1ab_0
- jpeg=9e=h166bdaf_1
- jsonschema=4.1.2=pyhd8ed1ab_0
- jupyter=1.0.0=pyhd8ed1ab_10
- jupyter_client=7.1.2=pyhd8ed1ab_0
- jupyter_console=6.5.1=pyhd8ed1ab_0
- jupyter_core=4.8.1=py36h5fab9bb_0
- jupyterlab_pygments=0.1.2=pyh9f0ad1d_0
- jupyterlab_widgets=1.1.1=pyhd8ed1ab_0
- keyring=23.2.1=py36h5fab9bb_0
- keyutils=1.6.1=h166bdaf_0
- kiwisolver=1.3.1=py36h605e78d_1
- krb5=1.19.3=h3790be6_0
- lazy-object-proxy=1.6.0=py36h8f6f2f9_0
- lcms2=2.12=hddcbb42_0
- ld_impl_linux-64=2.40=h12ee557_0
- lerc=2.2.1=h9c3ff4c_0
- libblas=3.9.0=16_linux64_openblas
- libcblas=3.9.0=16_linux64_openblas
- libclang=11.1.0=default_ha53f305_1
- libcurl=7.79.1=h2574ce0_1
- libdeflate=1.7=h7f98852_5
- libedit=3.1.20191231=he28a2e2_2
- libev=4.33=h516909a_1
- libevent=2.1.10=h9b69904_4
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgfortran-ng=13.2.0=h69a702a_0
- libgfortran5=13.2.0=ha4646dd_0
- libglib=2.68.4=h3e27bee_0
- libgomp=11.2.0=h1234567_1
- libiconv=1.17=h166bdaf_0
- liblapack=3.9.0=16_linux64_openblas
- libllvm11=11.1.0=hf817b99_2
- libnghttp2=1.43.0=h812cca2_1
- libogg=1.3.4=h7f98852_1
- libopenblas=0.3.21=h043d6bf_0
- libopus=1.3.1=h7f98852_1
- libpng=1.6.37=h21135ba_2
- libpq=13.3=hd57d9b9_0
- libsodium=1.0.18=h36c2ea0_1
- libssh2=1.10.0=ha56f1ee_2
- libstdcxx-ng=11.2.0=h1234567_1
- libtiff=4.3.0=hf544144_1
- libuuid=2.32.1=h7f98852_1000
- libvorbis=1.3.7=h9c3ff4c_0
- libwebp-base=1.2.2=h7f98852_1
- libxcb=1.13=h7f98852_1004
- libxkbcommon=1.0.3=he3ba5ed_0
- libxml2=2.9.12=h72842e0_0
- lz4-c=1.9.3=h9c3ff4c_1
- markupsafe=2.0.1=py36h8f6f2f9_0
- matplotlib=3.3.4=py36h5fab9bb_0
- matplotlib-base=3.3.4=py36hd391965_0
- mccabe=0.6.1=py_1
- mistune=0.8.4=pyh1a96a4e_1006
- more-itertools=10.0.0=pyhd8ed1ab_0
- mysql-common=8.0.25=ha770c72_2
- mysql-libs=8.0.25=hfa10184_2
- nbclient=0.5.9=pyhd8ed1ab_0
- nbconvert=6.0.7=py36h5fab9bb_3
- nbformat=5.1.3=pyhd8ed1ab_0
- nbsphinx=0.9.7=pyhd8ed1ab_0
- ncurses=6.4=h6a678d5_0
- nest-asyncio=1.6.0=pyhd8ed1ab_0
- notebook=6.3.0=py36h5fab9bb_0
- nspr=4.32=h9c3ff4c_1
- nss=3.69=hb5efdd6_1
- numpy=1.19.5=py36hfc0c790_2
- numpydoc=1.2.1=pyhd8ed1ab_0
- olefile=0.46=pyh9f0ad1d_1
- openjpeg=2.4.0=hb52868f_1
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd8ed1ab_0
- pandas=1.1.5=py36h284efc9_0
- pandoc=2.19.2=ha770c72_0
- pandocfilters=1.5.0=pyhd8ed1ab_0
- parso=0.7.1=pyh9f0ad1d_0
- pcre=8.45=h9c3ff4c_0
- pexpect=4.8.0=pyh1a96a4e_2
- pickleshare=0.7.5=py_1003
- pillow=8.3.2=py36h676a545_0
- pip=21.3.1=pyhd8ed1ab_0
- pkginfo=1.9.6=pyhd8ed1ab_0
- platformdirs=2.5.1=pyhd8ed1ab_0
- pluggy=1.0.0=py36h5fab9bb_1
- proj=7.2.0=h277dcde_2
- prometheus_client=0.17.1=pyhd8ed1ab_0
- prompt-toolkit=3.0.36=pyha770c72_0
- prompt_toolkit=3.0.36=hd8ed1ab_0
- pthread-stubs=0.4=h36c2ea0_1001
- ptyprocess=0.7.0=pyhd3deb0d_0
- py=1.11.0=pyh6c4a22f_0
- pycodestyle=2.8.0=pyhd8ed1ab_0
- pycparser=2.21=pyhd8ed1ab_0
- pyflakes=2.4.0=pyhd8ed1ab_0
- pygments=2.14.0=pyhd8ed1ab_0
- pylint=2.11.1=pyhd8ed1ab_0
- pyopenssl=22.0.0=pyhd8ed1ab_1
- pyparsing=3.1.4=pyhd8ed1ab_0
- pyqt=5.12.3=py36h5fab9bb_7
- pyqt-impl=5.12.3=py36h7ec31b9_7
- pyqt5-sip=4.19.18=py36hc4f0c31_7
- pyqtchart=5.12=py36h7ec31b9_7
- pyqtwebengine=5.12.1=py36h7ec31b9_7
- pyrsistent=0.17.3=py36h8f6f2f9_2
- pyshp=2.3.1=pyhd8ed1ab_0
- pysocks=1.7.1=py36h5fab9bb_3
- pytest=6.2.5=py36h5fab9bb_0
- pytest-cov=4.0.0=pyhd8ed1ab_0
- python=3.6.13=h12debd9_1
- python-dateutil=2.8.2=pyhd8ed1ab_0
- python_abi=3.6=2_cp36m
- pytz=2023.3.post1=pyhd8ed1ab_0
- pyzmq=22.3.0=py36h7068817_0
- qt=5.12.9=hda022c4_4
- qtconsole-base=5.2.2=pyhd8ed1ab_1
- qtpy=2.0.1=pyhd8ed1ab_0
- readline=8.2=h5eee18b_0
- readme_renderer=27.0=pyh9f0ad1d_0
- requests=2.28.1=pyhd8ed1ab_0
- requests-toolbelt=1.0.0=pyhd8ed1ab_0
- rfc3986=2.0.0=pyhd8ed1ab_0
- scikit-learn=0.24.2=py36hc89565f_1
- scipy=1.5.3=py36h81d768a_1
- secretstorage=3.3.1=py36h5fab9bb_0
- send2trash=1.8.2=pyh41d4057_0
- setuptools=58.0.4=py36h06a4308_0
- shapely=1.7.1=py36hff28ebb_5
- six=1.16.0=pyh6c4a22f_0
- snowballstemmer=2.2.0=pyhd8ed1ab_0
- sphinx=5.1.1=pyh6c4a22f_0
- sphinx-gallery=0.15.0=pyhd8ed1ab_0
- sphinx_rtd_theme=2.0.0=pyha770c72_0
- sphinxcontrib-applehelp=1.0.4=pyhd8ed1ab_0
- sphinxcontrib-devhelp=1.0.2=py_0
- sphinxcontrib-htmlhelp=2.0.1=pyhd8ed1ab_0
- sphinxcontrib-jquery=4.1=pyhd8ed1ab_0
- sphinxcontrib-jsmath=1.0.1=pyhd8ed1ab_0
- sphinxcontrib-qthelp=1.0.3=py_0
- sphinxcontrib-serializinghtml=1.1.5=pyhd8ed1ab_2
- sqlite=3.45.3=h5eee18b_0
- terminado=0.12.1=py36h5fab9bb_0
- testpath=0.6.0=pyhd8ed1ab_0
- threadpoolctl=3.1.0=pyh8a188c0_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd8ed1ab_0
- tomli=1.2.2=pyhd8ed1ab_0
- tornado=6.1=py36h8f6f2f9_1
- tqdm=4.65.0=pyhd8ed1ab_0
- traitlets=4.3.3=pyhd8ed1ab_2
- twine=3.8.0=pyhd8ed1ab_0
- typed-ast=1.4.3=py36h8f6f2f9_0
- typing-extensions=4.1.1=hd8ed1ab_0
- typing_extensions=4.1.1=pyha770c72_0
- urllib3=1.26.15=pyhd8ed1ab_0
- wcwidth=0.2.10=pyhd8ed1ab_0
- webencodings=0.5.1=pyhd8ed1ab_2
- wheel=0.37.1=pyhd3eb1b0_0
- widgetsnbextension=3.6.1=pyha770c72_0
- wrapt=1.12.1=py36h8f6f2f9_3
- xarray=0.18.2=pyhd8ed1ab_0
- xorg-libxau=1.0.9=h7f98852_0
- xorg-libxdmcp=1.1.3=h7f98852_0
- xz=5.6.4=h5eee18b_1
- zeromq=4.3.4=h9c3ff4c_1
- zipp=3.6.0=pyhd8ed1ab_0
- zlib=1.2.13=h5eee18b_1
- zstd=1.5.0=ha95c52a_0
- pip:
- execnet==1.9.0
- pytest-asyncio==0.16.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
prefix: /opt/conda/envs/verde
| [
"verde/tests/test_base.py::test_basegridder_projection"
]
| [
"verde/tests/test_base.py::test_basegridder"
]
| [
"verde/tests/test_base.py::test_get_dims",
"verde/tests/test_base.py::test_get_dims_fails",
"verde/tests/test_base.py::test_get_data_names",
"verde/tests/test_base.py::test_get_data_names_fails",
"verde/tests/test_base.py::test_get_region"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,490 | [
"Makefile",
"verde/__init__.py",
"examples/scipygridder.py",
"verde/base/gridder.py"
]
| [
"Makefile",
"verde/__init__.py",
"examples/scipygridder.py",
"verde/base/gridder.py"
]
|
|
pika__pika-1041 | d9a1baa40b162cc4c638f95a8e4e9ab666af4288 | 2018-05-09 06:29:34 | 4c904dea651caaf2a54b0fca0b9e908dec18a4f8 | diff --git a/README.rst b/README.rst
index a7a52be..1fcb7c7 100644
--- a/README.rst
+++ b/README.rst
@@ -12,9 +12,9 @@ extensions.
- Python 2.7 and 3.4+ are supported.
- Since threads aren't appropriate to every situation, it doesn't
- require threads. It takes care not to forbid them, either. The same
- goes for greenlets, callbacks, continuations and generators. It is
- not necessarily thread-safe however, and your mileage will vary.
+ require threads. Pika core takes care not to forbid them, either. The same
+ goes for greenlets, callbacks, continuations, and generators. An instance of
+ Pika's built-in connection adapters is not thread-safe, however.
- People may be using direct sockets, plain old `select()`,
or any of the wide variety of ways of getting network events to and from a
diff --git a/pika/heartbeat.py b/pika/heartbeat.py
index af6d93c..a3822a4 100644
--- a/pika/heartbeat.py
+++ b/pika/heartbeat.py
@@ -50,7 +50,7 @@ class HeartbeatChecker(object):
:rtype True
"""
- return self._connection.heartbeat is self
+ return self._connection._heartbeat_checker is self
@property
def bytes_received_on_connection(self):
| BlockingConnection heartbeat attribute can be undefined
I've been testing connection recovery docs and managed to run into the following exceptions several times:
```
Traceback (most recent call last):
File "./blocking_consumer_recovery2.py", line 31, in <module>
channel.start_consuming()
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/blocking_connection.py", line 1878, in start_consuming
self._process_data_events(time_limit=None)
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/blocking_connection.py", line 2040, in _process_data_events
self.connection.process_data_events(time_limit=time_limit)
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/blocking_connection.py", line 814, in process_data_events
self._flush_output(common_terminator)
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/blocking_connection.py", line 525, in _flush_output
self._impl.ioloop.process_timeouts()
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/select_connection.py", line 462, in process_timeouts
self._timer.process_timeouts()
File "/Users/antares/Development/RabbitMQ/pika.git/pika/adapters/select_connection.py", line 297, in process_timeouts
timeout.callback()
File "/Users/antares/Development/RabbitMQ/pika.git/pika/heartbeat.py", line 104, in send_and_check
self._start_timer()
File "/Users/antares/Development/RabbitMQ/pika.git/pika/heartbeat.py", line 167, in _start_timer
if self.active:
File "/Users/antares/Development/RabbitMQ/pika.git/pika/heartbeat.py", line 53, in active
return self._connection.heartbeat is self
AttributeError: 'SelectConnection' object has no attribute 'heartbeat'
```
unfortunately I don't have a 100% clear idea of how to reproduce this but all I did was restarting down nodes cleanly. At some point during recovery attempt the above exception would make the process exit. Perhaps the code that checks for `self._connection.heartbeat` should be more defensive. | pika/pika | diff --git a/tests/unit/heartbeat_tests.py b/tests/unit/heartbeat_tests.py
index c9e8b63..2ff98a7 100644
--- a/tests/unit/heartbeat_tests.py
+++ b/tests/unit/heartbeat_tests.py
@@ -20,15 +20,39 @@ import pika.exceptions
# pylint: disable=C0103
+class ConstructableConnection(connection.Connection):
+ """Adds dummy overrides for `Connection`'s abstract methods so
+ that we can instantiate and test it.
+
+ """
+ def _adapter_connect_stream(self):
+ pass
+
+ def _adapter_disconnect_stream(self):
+ raise NotImplementedError
+
+ def add_timeout(self, deadline, callback):
+ raise NotImplementedError
+
+ def remove_timeout(self, timeout_id):
+ raise NotImplementedError
+
+ def _adapter_emit_data(self, data):
+ raise NotImplementedError
+
+ def _adapter_get_write_buffer_size(self):
+ raise NotImplementedError
+
+
class HeartbeatTests(unittest.TestCase):
INTERVAL = 5
def setUp(self):
- self.mock_conn = mock.Mock(spec=connection.Connection)
+ self.mock_conn = mock.Mock(spec_set=ConstructableConnection())
self.mock_conn.bytes_received = 100
self.mock_conn.bytes_sent = 100
- self.mock_conn.heartbeat = mock.Mock(spec=heartbeat.HeartbeatChecker)
+ self.mock_conn._heartbeat_checker = mock.Mock(spec=heartbeat.HeartbeatChecker)
self.obj = heartbeat.HeartbeatChecker(self.mock_conn, self.INTERVAL)
def tearDown(self):
@@ -65,11 +89,11 @@ class HeartbeatTests(unittest.TestCase):
timer.assert_called_once_with()
def test_active_true(self):
- self.mock_conn.heartbeat = self.obj
+ self.mock_conn._heartbeat_checker = self.obj
self.assertTrue(self.obj.active)
def test_active_false(self):
- self.mock_conn.heartbeat = mock.Mock()
+ self.mock_conn._heartbeat_checker = mock.Mock()
self.assertFalse(self.obj.active)
def test_bytes_received_on_connection(self):
@@ -178,7 +202,7 @@ class HeartbeatTests(unittest.TestCase):
@mock.patch('pika.heartbeat.HeartbeatChecker._setup_timer')
def test_start_timer_active(self, setup_timer):
- self.mock_conn.heartbeat = self.obj
+ self.mock_conn._heartbeat_checker = self.obj
self.obj._start_timer()
self.assertTrue(setup_timer.called)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 2
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"test-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==25.3.0
Automat==24.8.1
certifi==2025.1.31
charset-normalizer==3.4.1
codecov==2.1.13
constantly==23.10.4
coverage==7.8.0
exceptiongroup==1.2.2
hyperlink==21.0.0
idna==3.10
incremental==24.7.2
iniconfig==2.1.0
mock==5.2.0
nose==1.3.7
packaging==24.2
-e git+https://github.com/pika/pika.git@d9a1baa40b162cc4c638f95a8e4e9ab666af4288#egg=pika
pluggy==1.5.0
pytest==8.3.5
requests==2.32.3
tomli==2.2.1
tornado==6.4.2
Twisted==24.11.0
typing_extensions==4.13.0
urllib3==2.3.0
zope.interface==7.2
| name: pika
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==25.3.0
- automat==24.8.1
- certifi==2025.1.31
- charset-normalizer==3.4.1
- codecov==2.1.13
- constantly==23.10.4
- coverage==7.8.0
- exceptiongroup==1.2.2
- hyperlink==21.0.0
- idna==3.10
- incremental==24.7.2
- iniconfig==2.1.0
- mock==5.2.0
- nose==1.3.7
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- requests==2.32.3
- tomli==2.2.1
- tornado==6.4.2
- twisted==24.11.0
- typing-extensions==4.13.0
- urllib3==2.3.0
- zope-interface==7.2
prefix: /opt/conda/envs/pika
| [
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_active_false",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_active_true",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_increment_bytes",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_increment_no_bytes",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_not_closed",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_send_heartbeat_frame",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_update_counters",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_start_timer_active",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_start_timer_not_active"
]
| []
| [
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_bytes_received_on_connection",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_connection_close",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_connection_is_idle_false",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_connection_is_idle_true",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_assignment_connection",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_assignment_heartbeat_interval",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_called_setup_timer",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_initial_bytes_received",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_initial_bytes_sent",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_initial_heartbeat_frames_received",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_initial_heartbeat_frames_sent",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_constructor_initial_idle_byte_intervals",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_default_initialization_max_idle_count",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_has_received_data_false",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_has_received_data_true",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_new_heartbeat_frame",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_received",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_missed_bytes",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_and_check_start_timer",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_heartbeat_counter_incremented",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_send_heartbeat_send_frame_called",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_setup_timer_called",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_update_counters_bytes_received",
"tests/unit/heartbeat_tests.py::HeartbeatTests::test_update_counters_bytes_sent"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,491 | [
"README.rst",
"pika/heartbeat.py"
]
| [
"README.rst",
"pika/heartbeat.py"
]
|
|
linkedin__shiv-27 | 3c5c81fdd2c060e540e76c5df52424fc92980f37 | 2018-05-09 17:44:36 | 3c5c81fdd2c060e540e76c5df52424fc92980f37 | coveralls: ## Pull Request Test Coverage Report for [Build 56](https://coveralls.io/builds/16907439)
* **0** of **0** **(NaN%)** changed or added relevant lines in **0** files are covered.
* **2** unchanged lines in **1** file lost coverage.
* Overall coverage increased (+**0.9%**) to **70.652%**
---
| Files with Coverage Reduction | New Missed Lines | % |
| :-----|--------------|--: |
| [.tox/py36/lib/python3.6/site-packages/shiv/pip.py](https://coveralls.io/builds/16907439/source?filename=.tox%2Fpy36%2Flib%2Fpython3.6%2Fsite-packages%2Fshiv%2Fpip.py#L36) | 2 | 92.86% |
| Totals | [](https://coveralls.io/builds/16907439) |
| :-- | --: |
| Change from base [Build 55](https://coveralls.io/builds/16906049): | 0.9% |
| Covered Lines: | 195 |
| Relevant Lines: | 276 |
---
##### 💛 - [Coveralls](https://coveralls.io)
sixninetynine: Thanks @rouge8 !
Couple things:
According to [distutils](https://github.com/python/cpython/blob/master/Lib/distutils/dist.py#L339-L343) there are _three_ places this config file can butt its head in and ruin stuff:
There are three possible config files: distutils.cfg in the
Distutils installation directory (ie. where the top-level
Distutils __inst__.py file lives), a file in the user's home
directory named .pydistutils.cfg on Unix and pydistutils.cfg
on Windows/Mac; and setup.cfg in the current directory.
A modified `distutils.cfg` is what Homebrew ships,
so I think if you update this PR to check for _either_ `pydistutils.cfg` or `.pydistutils.cfg` we'll be in good shape.
Maybe we could kill two birds with one stone (existence check and getting a Path object):
```python
from pathlib import Path
from typing import Optional

def pydistutils_cfg() -> Optional[Path]:
    # match either pydistutils.cfg or .pydistutils.cfg in the home directory
    try:
        return next(Path.home().glob('*pydistutils.cfg'))
    except StopIteration:
        return None

pydistutils = pydistutils_cfg()
if pydistutils is not None and pydistutils.exists():
    ...  # stuff
```
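For context, the approach that was eventually merged (visible in the `src/shiv/pip.py` patch below) picks the platform-specific filename via `os.name` and writes an empty-prefix config only when the user has none. A minimal self-contained sketch of that idea — the helper name here is hypothetical, not shiv's actual API:

```python
import os
from contextlib import contextmanager
from pathlib import Path

# Illustrative sketch of the merged workaround (helper name is invented):
# create an empty-prefix pydistutils config only if the user has none, and
# remove it afterwards so a real user config is never clobbered.
DISTUTILS_CFG_NO_PREFIX = "[install]\nprefix="

@contextmanager
def temporary_pydistutils_cfg(home=None):
    home = Path(home) if home is not None else Path.home()
    # distutils reads ~/.pydistutils.cfg on POSIX, ~/pydistutils.cfg elsewhere
    cfg = home / (".pydistutils.cfg" if os.name == "posix" else "pydistutils.cfg")
    already_existed = cfg.exists()
    if not already_existed:
        cfg.write_text(DISTUTILS_CFG_NO_PREFIX)
    try:
        yield cfg
    finally:
        if not already_existed:
            cfg.unlink()
```

A pre-existing config passes through untouched, mirroring the `pydistutils_already_existed` guard in the real patch.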
rouge8: yikes. updated ✅ | diff --git a/src/shiv/constants.py b/src/shiv/constants.py
index 1b82477..33d742b 100644
--- a/src/shiv/constants.py
+++ b/src/shiv/constants.py
@@ -18,3 +18,4 @@ BLACKLISTED_ARGS: Dict[Tuple[str, ...], str] = {
("-d", "--download"): "Shiv needs to actually perform an install, not merely a download.",
("--user", "--root", "--prefix"): "Which conflicts with Shiv's internal use of '--target'.",
}
+DISTUTILS_CFG_NO_PREFIX = "[install]\nprefix="
diff --git a/src/shiv/pip.py b/src/shiv/pip.py
index 3be1339..9deab31 100644
--- a/src/shiv/pip.py
+++ b/src/shiv/pip.py
@@ -3,9 +3,10 @@ import os
import subprocess
import sys
+from pathlib import Path
from typing import Generator, List
-from .constants import PIP_REQUIRE_VIRTUALENV, PIP_INSTALL_ERROR
+from .constants import PIP_REQUIRE_VIRTUALENV, PIP_INSTALL_ERROR, DISTUTILS_CFG_NO_PREFIX
@contextlib.contextmanager
@@ -17,12 +18,28 @@ def clean_pip_env() -> Generator[None, None, None]:
"""
require_venv = os.environ.pop(PIP_REQUIRE_VIRTUALENV, None)
+ # based on
+ # https://github.com/python/cpython/blob/8cf4b34b3665b8bb39ea7111e6b5c3410899d3e4/Lib/distutils/dist.py#L333-L363
+ pydistutils = Path.home() / (".pydistutils.cfg" if os.name == "posix" else "pydistutils.cfg")
+ pydistutils_already_existed = pydistutils.exists()
+
+ if not pydistutils_already_existed:
+ # distutils doesn't support using --target if there's a config file
+ # specifying --prefix. Homebrew's Pythons include a distutils.cfg that
+ # breaks `pip install --target` with any non-wheel packages. We can
+ # work around that by creating a temporary ~/.pydistutils.cfg
+ # specifying an empty prefix.
+ pydistutils.write_text(DISTUTILS_CFG_NO_PREFIX)
+
try:
yield
finally:
if require_venv is not None:
os.environ[PIP_REQUIRE_VIRTUALENV] = require_venv
+ if not pydistutils_already_existed:
+ # remove the temporary ~/.pydistutils.cfg
+ pydistutils.unlink()
def install(interpreter_path: str, args: List[str]) -> None:
| distutils.errors.DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
With shiv 0.0.14 from PyPI:
```console
$ shiv aws -c aws -p $(which python3.6) -o blergh
shiv! 🔪
Collecting aws
Collecting fabric>=1.6 (from aws)
Collecting boto (from aws)
Using cached https://files.pythonhosted.org/packages/bd/b7/a88a67002b1185ed9a8e8a6ef15266728c2361fcb4f1d02ea331e4c7741d/boto-2.48.0-py2.py3-none-any.whl
Collecting prettytable>=0.7 (from aws)
Collecting paramiko<3.0,>=1.10 (from fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/3e/db/cb7b6656e0e7387637ce850689084dc0b94b44df31cc52e5fc5c2c4fd2c1/paramiko-2.4.1-py2.py3-none-any.whl
Collecting pyasn1>=0.1.7 (from paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/ba/fe/02e3e2ee243966b143657fb8bd6bc97595841163b6d8c26820944acaec4d/pyasn1-0.4.2-py2.py3-none-any.whl
Collecting pynacl>=1.0.1 (from paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/74/8e/a6c0d340972d9e2f1a405aaa3f2460950b4c0337f92db0291a4355974529/PyNaCl-1.2.1-cp36-cp36m-macosx_10_6_intel.whl
Collecting bcrypt>=3.1.3 (from paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/7e/59/d48fd712941da1a5d6490964a37bb3de2e526965b6766273f6a7049ee590/bcrypt-3.1.4-cp36-cp36m-macosx_10_6_intel.whl
Collecting cryptography>=1.5 (from paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/40/87/acdcf84ce6d25a7db1c113f4b9b614fd8d707b7ab56fbf17cf18cd26a627/cryptography-2.2.2-cp34-abi3-macosx_10_6_intel.whl
Collecting cffi>=1.4.1 (from pynacl>=1.0.1->paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/8e/be/40b1bc2c3221acdefeb9dab6773d43cda7543ed0d8c8df8768f05af2d01e/cffi-1.11.5-cp36-cp36m-macosx_10_6_intel.whl
Collecting six (from pynacl>=1.0.1->paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting idna>=2.1 (from cryptography>=1.5->paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl
Collecting asn1crypto>=0.21.0 (from cryptography>=1.5->paramiko<3.0,>=1.10->fabric>=1.6->aws)
Using cached https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl
Collecting pycparser (from cffi>=1.4.1->pynacl>=1.0.1->paramiko<3.0,>=1.10->fabric>=1.6->aws)
Installing collected packages: pyasn1, pycparser, cffi, six, pynacl, bcrypt, idna, asn1crypto, cryptography, paramiko, fabric, boto, prettytable, aws
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/pip/_internal/basecommand.py", line 228, in main
status = self.run(options, args)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 335, in run
use_user_site=options.use_user_site,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/__init__.py", line 49, in install_given_reqs
**kwargs
File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 748, in install
use_user_site=use_user_site, pycompile=pycompile,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 961, in move_wheel_files
warn_script_location=warn_script_location,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/wheel.py", line 216, in move_wheel_files
prefix=prefix,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/locations.py", line 165, in distutils_scheme
i.finalize_options()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/distutils/command/install.py", line 248, in finalize_options
"must supply either home or prefix/exec-prefix -- not both")
distutils.errors.DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
Pip install failed!
```
Here's the packages installed systemwide alongside shiv:
```
click==6.7
importlib-resources==0.5
pip==10.0.1
setuptools==39.1.0
shiv==0.0.14
wheel==0.31.0
```
OS X, Python 3.6.5 from Homebrew. | linkedin/shiv | diff --git a/test/conftest.py b/test/conftest.py
index 9229abb..3454d5f 100644
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -1,11 +1,23 @@
+import os
+
from pathlib import Path
import pytest
-@pytest.fixture
-def package_location():
- return Path(__file__).absolute().parent / 'package'
+@pytest.fixture(params=[True, False], ids=['.', 'absolute-path'])
+def package_location(request):
+ package_location = Path(__file__).absolute().parent / 'package'
+
+ if request.param is True:
+ # test building from the current directory
+ cwd = os.getcwd()
+ os.chdir(package_location)
+ yield Path('.')
+ os.chdir(cwd)
+ else:
+ # test building an absolute path
+ yield package_location
@pytest.fixture
diff --git a/test/test_cli.py b/test/test_cli.py
index ad2f6bb..a38017c 100644
--- a/test/test_cli.py
+++ b/test/test_cli.py
@@ -45,11 +45,20 @@ class TestCLI:
# assert we got the correct reason
assert strip_header(result.output) == DISALLOWED_PIP_ARGS.format(arg=arg, reason=reason)
- def test_hello_world(self, tmpdir, runner, package_location):
+ # /usr/local/bin/python3.6 is a test for https://github.com/linkedin/shiv/issues/16
+ @pytest.mark.parametrize('interpreter', [None, Path('/usr/local/bin/python3.6')])
+ def test_hello_world(self, tmpdir, runner, package_location, interpreter):
+ if interpreter is not None and not interpreter.exists():
+ pytest.skip(f'Interpreter "{interpreter}" does not exist')
+
with tempfile.TemporaryDirectory(dir=tmpdir) as tmpdir:
output_file = Path(tmpdir, 'test.pyz')
- result = runner(['-e', 'hello:main', '-o', output_file.as_posix(), package_location.as_posix()])
+ args = ['-e', 'hello:main', '-o', output_file.as_posix(), package_location.as_posix()]
+ if interpreter is not None:
+ args = ['-p', interpreter.as_posix()] + args
+
+ result = runner(args)
# check that the command successfully completed
assert result.exit_code == 0
diff --git a/test/test_pip.py b/test/test_pip.py
new file mode 100644
index 0000000..aba1721
--- /dev/null
+++ b/test/test_pip.py
@@ -0,0 +1,48 @@
+import os
+
+from pathlib import Path
+
+import pytest
+
+from shiv.constants import PIP_REQUIRE_VIRTUALENV, DISTUTILS_CFG_NO_PREFIX
+from shiv.pip import clean_pip_env
+
+
+@pytest.mark.parametrize("pydistutils_path, os_name", [
+ ("pydistutils.cfg", "nt"),
+ (".pydistutils.cfg", "posix"),
+ (None, os.name),
+])
+def test_clean_pip_env(monkeypatch, tmpdir, pydistutils_path, os_name):
+ home = tmpdir.join("home").ensure(dir=True)
+ monkeypatch.setenv("HOME", home)
+
+ # patch os.name so distutils will use `pydistutils_path` for its config
+ monkeypatch.setattr(os, 'name', os.name)
+
+ if pydistutils_path:
+ pydistutils = Path.home() / pydistutils_path
+ pydistutils_contents = "foobar"
+ pydistutils.write_text(pydistutils_contents)
+ else:
+ pydistutils = Path.home() / ".pydistutils.cfg"
+ pydistutils_contents = None
+
+ before_env_var = "foo"
+ monkeypatch.setenv(PIP_REQUIRE_VIRTUALENV, before_env_var)
+
+ with clean_pip_env():
+ assert PIP_REQUIRE_VIRTUALENV not in os.environ
+
+ if not pydistutils_path:
+ # ~/.pydistutils.cfg was created
+ assert pydistutils.read_text() == DISTUTILS_CFG_NO_PREFIX
+ else:
+ # ~/.pydistutils.cfg was not modified
+ assert pydistutils.read_text() == pydistutils_contents
+
+ assert os.environ.get(PIP_REQUIRE_VIRTUALENV) == before_env_var
+
+ # If a temporary ~/.pydistutils.cfg was created, it was deleted. If
+ # ~/.pydistutils.cfg already existed, it still exists.
+ assert pydistutils.exists() == bool(pydistutils_path)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 3,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "click==6.7 pip>=9.0.1 importlib_resources>=0.4",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
click==6.7
importlib-metadata==4.8.3
importlib-resources @ file:///tmp/build/80754af9/importlib_resources_1625135880749/work
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
-e git+https://github.com/linkedin/shiv.git@3c5c81fdd2c060e540e76c5df52424fc92980f37#egg=shiv
tomli==1.2.3
typing_extensions==4.1.1
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: shiv
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- click=6.7=py36_0
- importlib_resources=5.2.0=pyhd3eb1b0_1
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- tomli==1.2.3
- typing-extensions==4.1.1
prefix: /opt/conda/envs/shiv
| [
"test/test_bootstrap.py::TestBootstrap::test_various_imports",
"test/test_builder.py::TestBuilder::test_file_prefix",
"test/test_builder.py::TestBuilder::test_create_archive",
"test/test_builder.py::TestBuilder::test_archive_permissions",
"test/test_cli.py::TestCLI::test_no_args",
"test/test_cli.py::TestCLI::test_no_outfile",
"test/test_cli.py::TestCLI::test_blacklisted_args[-t]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--target]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--editable]",
"test/test_cli.py::TestCLI::test_blacklisted_args[-d]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--download]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--user]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--root]",
"test/test_cli.py::TestCLI::test_blacklisted_args[--prefix]",
"test/test_cli.py::TestCLI::test_hello_world[.-None]",
"test/test_cli.py::TestCLI::test_hello_world[absolute-path-None]",
"test/test_cli.py::TestCLI::test_interpreter",
"test/test_cli.py::TestCLI::test_real_interpreter",
"test/test_pip.py::test_clean_pip_env[pydistutils.cfg-nt]",
"test/test_pip.py::test_clean_pip_env[.pydistutils.cfg-posix]",
"test/test_pip.py::test_clean_pip_env[None-posix]"
]
| []
| []
| []
| BSD 2-Clause "Simplified" License | 2,492 | [
"src/shiv/pip.py",
"src/shiv/constants.py"
]
| [
"src/shiv/pip.py",
"src/shiv/constants.py"
]
|
adamchainz__apig-wsgi-12 | 4fc2d8dc685a8091931d01cb58be17a2e0fe9d38 | 2018-05-10 09:34:52 | 4fc2d8dc685a8091931d01cb58be17a2e0fe9d38 | adamchainz: @kthhrv you'll like this | diff --git a/HISTORY.rst b/HISTORY.rst
index 7cbf671..9db579a 100644
--- a/HISTORY.rst
+++ b/HISTORY.rst
@@ -6,6 +6,9 @@ Pending Release
.. Insert new release notes below this line
+* Add ``binary_support`` flag to enable sending binary responses, if enabled on
+ API Gateway.
+
1.0.0 (2018-03-08)
------------------
diff --git a/README.rst b/README.rst
index e4ddb10..c5842c0 100644
--- a/README.rst
+++ b/README.rst
@@ -32,3 +32,22 @@ Use **pip**:
pip install apig-wsgi
Tested on Python 2.7 and Python 3.6.
+
+Usage
+=====
+
+``make_lambda_handler(app, binary_support=False)``
+--------------------------------------------------
+
+``app`` should be a WSGI app, for example from Django's ``wsgi.py`` or Flask's
+``Flask()`` object.
+
+If you want to support sending binary responses, set ``binary_support`` to
+``True`` and make sure you have ``'*/*'`` in the 'binary media types'
+configuration on your Rest API on API Gateway. Note, whilst API Gateway
+supports a list of media types, using '*/*' is the best way to do it, since it
+is used to match the request 'Accept' header as well.
+
+Note that binary responses aren't sent if your response has a 'Content-Type'
+starting 'text/html' or 'application/json' - this is to support sending larger
+text responses.
diff --git a/apig_wsgi.py b/apig_wsgi.py
index daadc9f..0e2fea1 100644
--- a/apig_wsgi.py
+++ b/apig_wsgi.py
@@ -2,6 +2,7 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
+from base64 import b64encode
from io import BytesIO
import six
@@ -14,14 +15,14 @@ __version__ = '1.0.0'
__all__ = ('make_lambda_handler',)
-def make_lambda_handler(wsgi_app):
+def make_lambda_handler(wsgi_app, binary_support=False):
"""
Turn a WSGI app callable into a Lambda handler function suitable for
running on API Gateway.
"""
def handler(event, context):
environ = get_environ(event)
- response = Response()
+ response = Response(binary_support=binary_support)
result = wsgi_app(environ, response.start_response)
response.consume(result)
return response.as_apig_response()
@@ -70,10 +71,11 @@ def get_environ(event):
class Response(object):
- def __init__(self):
+ def __init__(self, binary_support):
self.status_code = '500'
self.headers = []
self.body = BytesIO()
+ self.binary_support = binary_support
def start_response(self, status, response_headers, exc_info=None):
self.status_code = status.split()[0]
@@ -90,8 +92,24 @@ class Response(object):
result.close()
def as_apig_response(self):
- return {
+ response = {
'statusCode': self.status_code,
'headers': dict(self.headers),
- 'body': self.body.getvalue().decode('utf-8'),
}
+
+ content_type = self._get_content_type()
+
+ if self.binary_support and not content_type.startswith(('text/', 'application/json')):
+ response['isBase64Encoded'] = True
+ response['body'] = b64encode(self.body.getvalue()).decode('utf-8')
+ else:
+ response['body'] = self.body.getvalue().decode('utf-8')
+
+ print(response)
+ return response
+
+ def _get_content_type(self):
+ content_type_headers = [v for k, v in self.headers if k.lower() == 'content-type']
+ if len(content_type_headers):
+ return content_type_headers[-1]
+ return None
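The decision the patch above makes in `as_apig_response` — base64-encode the body unless the content type looks textual — can be exercised in isolation. This is a standalone re-implementation for illustration only (the function name is invented; `apig_wsgi` itself does this inside `Response.as_apig_response`), with a guard for a missing Content-Type added:

```python
from base64 import b64encode

def encode_body(body, content_type, binary_support):
    # Mirrors the patch's logic: textual bodies pass through as UTF-8 text,
    # everything else is base64-encoded when binary support is enabled.
    is_textual = content_type is not None and content_type.startswith(
        ("text/", "application/json")
    )
    if binary_support and not is_textual:
        return b64encode(body).decode("utf-8"), True
    return body.decode("utf-8"), False
```

The boolean in the returned pair corresponds to API Gateway's `isBase64Encoded` response field.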
| Support sending binary body in response
E.g. assets served through whitenoise | adamchainz/apig-wsgi | diff --git a/test_apig_wsgi.py b/test_apig_wsgi.py
index a176fa0..a12e90e 100644
--- a/test_apig_wsgi.py
+++ b/test_apig_wsgi.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
+from base64 import b64encode
from io import BytesIO
import pytest
@@ -46,6 +47,45 @@ def test_get(simple_app):
}
+def test_get_missing_content_type(simple_app):
+ simple_app.headers = []
+
+ response = simple_app.handler(make_event(), None)
+
+ assert response == {
+ 'statusCode': '200',
+ 'headers': {},
+ 'body': 'Hello World\n',
+ }
+
+
+def test_get_binary_support_text(simple_app):
+ simple_app.handler = make_lambda_handler(simple_app, binary_support=True)
+
+ response = simple_app.handler(make_event(), None)
+
+ assert response == {
+ 'statusCode': '200',
+ 'headers': {'Content-Type': 'text/plain'},
+ 'body': 'Hello World\n',
+ }
+
+
+def test_get_binary_support_binary(simple_app):
+ simple_app.handler = make_lambda_handler(simple_app, binary_support=True)
+ simple_app.headers = [('Content-Type', 'application/octet-stream')]
+ simple_app.response = b'\x13\x37'
+
+ response = simple_app.handler(make_event(), None)
+
+ assert response == {
+ 'statusCode': '200',
+ 'headers': {'Content-Type': 'application/octet-stream'},
+ 'body': b64encode(b'\x13\x37').decode('utf-8'),
+ 'isBase64Encoded': True,
+ }
+
+
def test_post(simple_app):
event = make_event(method='POST', body='The World is Large')
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 3
} | 1.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/adamchainz/apig-wsgi.git@4fc2d8dc685a8091931d01cb58be17a2e0fe9d38#egg=apig_wsgi
attrs==22.2.0
certifi==2021.5.30
coverage==6.2
importlib-metadata==4.8.3
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: apig-wsgi
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- coverage==6.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/apig-wsgi
| [
"test_apig_wsgi.py::test_get_binary_support_text",
"test_apig_wsgi.py::test_get_binary_support_binary"
]
| []
| [
"test_apig_wsgi.py::test_get",
"test_apig_wsgi.py::test_get_missing_content_type",
"test_apig_wsgi.py::test_post",
"test_apig_wsgi.py::test_querystring_none",
"test_apig_wsgi.py::test_querystring_empty",
"test_apig_wsgi.py::test_querystring_one",
"test_apig_wsgi.py::test_plain_header",
"test_apig_wsgi.py::test_special_headers",
"test_apig_wsgi.py::test_no_headers",
"test_apig_wsgi.py::test_headers_None"
]
| []
| MIT License | 2,493 | [
"HISTORY.rst",
"apig_wsgi.py",
"README.rst"
]
| [
"HISTORY.rst",
"apig_wsgi.py",
"README.rst"
]
|
HECBioSim__Longbow-93 | 145a985cb0b3eb18fc3dd1f0dc74a9ee4e9c236c | 2018-05-10 10:37:37 | c81fcaccfa7fb2dc147e40970ef806dc6d6b22a4 | diff --git a/longbow/applications.py b/longbow/applications.py
index c693b42..8f99c1e 100755
--- a/longbow/applications.py
+++ b/longbow/applications.py
@@ -353,13 +353,15 @@ def _procfiles(job, arg, filelist, foundflags, substitution):
# Otherwise we have a replicate job so check these.
else:
- # Add the repX dir
- if ("rep" + str(rep)) not in filelist:
+ repx = str(job["replicate-naming"]) + str(rep)
- filelist.append("rep" + str(rep))
+ # Add the repx dir
+ if (repx) not in filelist:
+
+ filelist.append(repx)
fileitem = _procfilesreplicatejobs(
- app, arg, job["localworkdir"], initargs, rep)
+ app, arg, job["localworkdir"], initargs, repx)
job["executableargs"] = initargs
@@ -407,21 +409,21 @@ def _procfilessinglejob(app, arg, cwd):
return fileitem
-def _procfilesreplicatejobs(app, arg, cwd, initargs, rep):
+def _procfilesreplicatejobs(app, arg, cwd, initargs, repx):
"""Processor for replicate jobs."""
fileitem = ""
tmpitem = ""
# We should check that the replicate directory structure exists.
- if os.path.isdir(os.path.join(cwd, "rep" + str(rep))) is False:
+ if os.path.isdir(os.path.join(cwd, repx)) is False:
- os.mkdir(os.path.join(cwd, "rep" + str(rep)))
+ os.mkdir(os.path.join(cwd, repx))
# If we have a replicate job then we should check if the file resides
# within ./rep{i} or if it is a global (common to each replicate) file.
- if os.path.isfile(os.path.join(cwd, "rep" + str(rep), arg)):
+ if os.path.isfile(os.path.join(cwd, repx, arg)):
- fileitem = os.path.join("rep" + str(rep), arg)
+ fileitem = os.path.join(repx, arg)
# Otherwise do we have a file in cwd
elif os.path.isfile(os.path.join(cwd, arg)):
@@ -440,7 +442,7 @@ def _procfilesreplicatejobs(app, arg, cwd, initargs, rep):
try:
tmpitem, _ = getattr(apps, app.lower()).defaultfilename(
- cwd, os.path.join("rep" + str(rep), arg), "")
+ cwd, os.path.join(repx, arg), "")
except AttributeError:
diff --git a/longbow/configuration.py b/longbow/configuration.py
index 2e82db1..b35c5c8 100755
--- a/longbow/configuration.py
+++ b/longbow/configuration.py
@@ -103,6 +103,7 @@ JOBTEMPLATE = {
"remoteworkdir": "",
"resource": "",
"replicates": "1",
+ "replicate-naming": "rep",
"scheduler": "",
"scripts": "",
"slurm-gres": "",
| Allow replicate directory naming schemes
At the moment users have to follow a specific fixed directory structure for replicates, in which each directory is named rep[x]: the "rep" part is fixed and the number is incremented. Users have requested that the rep part be made flexible so Longbow can be chained more easily with other tools. | HECBioSim/Longbow | diff --git a/tests/unit/applications/test_procfiles.py b/tests/unit/applications/test_procfiles.py
index 01542ab..a3a27f7 100644
--- a/tests/unit/applications/test_procfiles.py
+++ b/tests/unit/applications/test_procfiles.py
@@ -35,6 +35,7 @@ This testing module contains the tests for the applications module methods.
"""
from longbow.applications import _procfiles
+from longbow.configuration import JOBTEMPLATE
def test_procfiles_amber():
@@ -43,16 +44,16 @@ def test_procfiles_amber():
Test to make sure that the file and flag is picked up for an amber-like
command-line.
"""
+ job = JOBTEMPLATE.copy()
arg = "coords"
filelist = []
foundflags = []
- job = {
- "executable": "pmemd.MPI",
- "replicates": "1",
- "localworkdir": "tests/standards/jobs/single",
- "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
- }
+
+ job["executable"] = "pmemd.MPI"
+ job["localworkdir"] = "tests/standards/jobs/single"
+ job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -68,15 +69,15 @@ def test_procfiles_charmm():
command-line.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "topol"
filelist = []
foundflags = []
- job = {
- "executable": "charmm",
- "replicates": "1",
- "localworkdir": "tests/standards/jobs/single",
- "executableargs": ["<", "topol"]
- }
+ job["executable"] = "charmm"
+ job["localworkdir"] = "tests/standards/jobs/single"
+ job["executableargs"] = ["<", "topol"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -92,15 +93,15 @@ def test_procfiles_gromacs():
command-line.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "test"
filelist = []
foundflags = []
- job = {
- "executable": "mdrun_mpi",
- "replicates": "1",
- "localworkdir": "tests/standards/jobs/single",
- "executableargs": ["-deffnm", "test"]
- }
+ job["executable"] = "mdrun_mpi"
+ job["localworkdir"] = "tests/standards/jobs/single"
+ job["executableargs"] = ["-deffnm", "test"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -116,15 +117,15 @@ def test_procfiles_namd1():
command-line.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "input"
filelist = []
foundflags = []
- job = {
- "executable": "namd2",
- "replicates": "1",
- "localworkdir": "tests/standards/jobs/single",
- "executableargs": ["input"]
- }
+ job["executable"] = "namd2"
+ job["localworkdir"] = "tests/standards/jobs/single"
+ job["executableargs"] = ["input"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -140,15 +141,15 @@ def test_procfiles_namd2():
command-line.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "input"
filelist = []
foundflags = []
- job = {
- "executable": "namd2",
- "replicates": "1",
- "localworkdir": "tests/standards/jobs/single",
- "executableargs": ["input", ">", "output"]
- }
+ job["executable"] = "namd2"
+ job["localworkdir"] = "tests/standards/jobs/single"
+ job["executableargs"] = ["input", ">", "output"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -163,15 +164,16 @@ def test_procfiles_reps1():
Test for replicate variant.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "coords"
filelist = []
foundflags = []
- job = {
- "executable": "pmemd.MPI",
- "replicates": "3",
- "localworkdir": "tests/standards/jobs/replicate",
- "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
- }
+ job["executable"] = "pmemd.MPI"
+ job["replicates"] = "3"
+ job["localworkdir"] = "tests/standards/jobs/replicate"
+ job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -187,15 +189,16 @@ def test_procfiles_reps2():
Test for replicate variant with global.
"""
+ job = JOBTEMPLATE.copy()
+
arg = "topol"
filelist = []
foundflags = []
- job = {
- "executable": "pmemd.MPI",
- "replicates": "3",
- "localworkdir": "tests/standards/jobs/replicate",
- "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
- }
+ job["executable"] = "pmemd.MPI"
+ job["replicates"] = "3"
+ job["localworkdir"] = "tests/standards/jobs/replicate"
+ job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
+
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
diff --git a/tests/unit/applications/test_procfilesreplicatejobs.py b/tests/unit/applications/test_procfilesreplicatejobs.py
index ca2e0a5..03c6253 100644
--- a/tests/unit/applications/test_procfilesreplicatejobs.py
+++ b/tests/unit/applications/test_procfilesreplicatejobs.py
@@ -57,7 +57,7 @@ def test_procfilesreplicatejobs_t1():
arg = "input"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = 1
+ rep = "rep1"
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -75,7 +75,7 @@ def test_procfilesreplicatejobs_t2():
arg = "topol"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = 1
+ rep = "rep1"
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -93,7 +93,7 @@ def test_procfilesreplicatejobs_t3():
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = 1
+ rep = "rep1"
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -110,7 +110,7 @@ def test_procfilesreplicatejobs_t4():
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-deffnm", "test"]
- rep = 2
+ rep = "rep2"
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -128,7 +128,7 @@ def test_procfilesreplicatejobs_t5(m_mkdir):
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-deffnm", "test"]
- rep = 4
+ rep = "rep4"
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
diff --git a/tests/unit/configuration/test_processconfigsresource.py b/tests/unit/configuration/test_processconfigsresource.py
index 04e5618..4a6bc8f 100644
--- a/tests/unit/configuration/test_processconfigsresource.py
+++ b/tests/unit/configuration/test_processconfigsresource.py
@@ -130,6 +130,7 @@ def test_processconfigsresource1():
"remoteworkdir": "",
"resource": "host1",
"replicates": "1",
+ "replicate-naming": "rep",
"scheduler": "",
"user": "",
"upload-exclude": "",
@@ -236,6 +237,7 @@ def test_processconfigsresource2():
"remoteworkdir": "",
"resource": "host2",
"replicates": "1",
+ "replicate-naming": "rep",
"scheduler": "",
"user": "",
"upload-exclude": "",
@@ -343,6 +345,7 @@ def test_processconfigsresource3():
"remoteworkdir": "",
"resource": "host1",
"replicates": "1",
+ "replicate-naming": "rep",
"scheduler": "",
"user": "",
"upload-exclude": "",
@@ -449,6 +452,7 @@ def test_processconfigsresource4():
"remoteworkdir": "",
"resource": "host3",
"replicates": "1",
+ "replicate-naming": "rep",
"scheduler": "",
"user": "",
"upload-exclude": "",
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 2
} | .1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/HECBioSim/Longbow.git@145a985cb0b3eb18fc3dd1f0dc74a9ee4e9c236c#egg=Longbow
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: Longbow
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/Longbow
| [
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t1",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t4"
]
| []
| [
"tests/unit/applications/test_procfiles.py::test_procfiles_amber",
"tests/unit/applications/test_procfiles.py::test_procfiles_charmm",
"tests/unit/applications/test_procfiles.py::test_procfiles_gromacs",
"tests/unit/applications/test_procfiles.py::test_procfiles_namd1",
"tests/unit/applications/test_procfiles.py::test_procfiles_namd2",
"tests/unit/applications/test_procfiles.py::test_procfiles_reps1",
"tests/unit/applications/test_procfiles.py::test_procfiles_reps2",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t2",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t3",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t5",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource1",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource2",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource3",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource4",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource5"
]
| []
| BSD 3-Clause License | 2,494 | [
"longbow/configuration.py",
"longbow/applications.py"
]
| [
"longbow/configuration.py",
"longbow/applications.py"
]
|
|
HECBioSim__Longbow-94 | 99ce093c8b3daab24c4cb4f64b3f7e3f22690073 | 2018-05-10 13:27:57 | c81fcaccfa7fb2dc147e40970ef806dc6d6b22a4 | diff --git a/longbow/applications.py b/longbow/applications.py
index 8f99c1e..c693b42 100755
--- a/longbow/applications.py
+++ b/longbow/applications.py
@@ -353,15 +353,13 @@ def _procfiles(job, arg, filelist, foundflags, substitution):
# Otherwise we have a replicate job so check these.
else:
- repx = str(job["replicate-naming"]) + str(rep)
+ # Add the repX dir
+ if ("rep" + str(rep)) not in filelist:
- # Add the repx dir
- if (repx) not in filelist:
-
- filelist.append(repx)
+ filelist.append("rep" + str(rep))
fileitem = _procfilesreplicatejobs(
- app, arg, job["localworkdir"], initargs, repx)
+ app, arg, job["localworkdir"], initargs, rep)
job["executableargs"] = initargs
@@ -409,21 +407,21 @@ def _procfilessinglejob(app, arg, cwd):
return fileitem
-def _procfilesreplicatejobs(app, arg, cwd, initargs, repx):
+def _procfilesreplicatejobs(app, arg, cwd, initargs, rep):
"""Processor for replicate jobs."""
fileitem = ""
tmpitem = ""
# We should check that the replicate directory structure exists.
- if os.path.isdir(os.path.join(cwd, repx)) is False:
+ if os.path.isdir(os.path.join(cwd, "rep" + str(rep))) is False:
- os.mkdir(os.path.join(cwd, repx))
+ os.mkdir(os.path.join(cwd, "rep" + str(rep)))
# If we have a replicate job then we should check if the file resides
# within ./rep{i} or if it is a global (common to each replicate) file.
- if os.path.isfile(os.path.join(cwd, repx, arg)):
+ if os.path.isfile(os.path.join(cwd, "rep" + str(rep), arg)):
- fileitem = os.path.join(repx, arg)
+ fileitem = os.path.join("rep" + str(rep), arg)
# Otherwise do we have a file in cwd
elif os.path.isfile(os.path.join(cwd, arg)):
@@ -442,7 +440,7 @@ def _procfilesreplicatejobs(app, arg, cwd, initargs, repx):
try:
tmpitem, _ = getattr(apps, app.lower()).defaultfilename(
- cwd, os.path.join(repx, arg), "")
+ cwd, os.path.join("rep" + str(rep), arg), "")
except AttributeError:
diff --git a/longbow/configuration.py b/longbow/configuration.py
index b35c5c8..fde6c01 100755
--- a/longbow/configuration.py
+++ b/longbow/configuration.py
@@ -103,13 +103,13 @@ JOBTEMPLATE = {
"remoteworkdir": "",
"resource": "",
"replicates": "1",
- "replicate-naming": "rep",
"scheduler": "",
"scripts": "",
"slurm-gres": "",
"staging-frequency": "300",
"sge-peflag": "mpi",
"sge-peoverride": "false",
+ "subfile": "",
"user": "",
"upload-exclude": "",
"upload-include": ""
diff --git a/longbow/scheduling.py b/longbow/scheduling.py
index 1917d77..93c7f2d 100755
--- a/longbow/scheduling.py
+++ b/longbow/scheduling.py
@@ -317,11 +317,18 @@ def prepare(jobs):
try:
- LOG.info("Creating submit file for job '%s'", item)
+ if job["subfile"] == "":
- getattr(schedulers, scheduler.lower()).prepare(job)
+ LOG.info("Creating submit file for job '%s'", item)
- LOG.info("Submit file created successfully")
+ getattr(schedulers, scheduler.lower()).prepare(job)
+
+ LOG.info("Submit file created successfully")
+
+ else:
+
+ LOG.info("For job '%s' user has supplied their own job submit "
+ "script - skipping creation.", item)
except AttributeError:
| Using pre-existing queue scripts
Hi James,
I was wondering if it would be possible to include the option for Longbow to use an already existing queue script instead of creating one on the fly. That way I would not need to struggle with Longbow if I want to write non-standard commands in the queue script, and I could still easily use it to transfer the files, launch the job, monitor it and download the results when it is done. It would make my life a lot easier for what we are trying to do at the moment. Do you think that is a feature you would like to add?
Thanks a lot! | HECBioSim/Longbow | diff --git a/tests/unit/applications/test_procfiles.py b/tests/unit/applications/test_procfiles.py
index a3a27f7..01542ab 100644
--- a/tests/unit/applications/test_procfiles.py
+++ b/tests/unit/applications/test_procfiles.py
@@ -35,7 +35,6 @@ This testing module contains the tests for the applications module methods.
"""
from longbow.applications import _procfiles
-from longbow.configuration import JOBTEMPLATE
def test_procfiles_amber():
@@ -44,16 +43,16 @@ def test_procfiles_amber():
Test to make sure that the file and flag is picked up for an amber-like
command-line.
"""
- job = JOBTEMPLATE.copy()
arg = "coords"
filelist = []
foundflags = []
-
- job["executable"] = "pmemd.MPI"
- job["localworkdir"] = "tests/standards/jobs/single"
- job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
-
+ job = {
+ "executable": "pmemd.MPI",
+ "replicates": "1",
+ "localworkdir": "tests/standards/jobs/single",
+ "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -69,15 +68,15 @@ def test_procfiles_charmm():
command-line.
"""
- job = JOBTEMPLATE.copy()
-
arg = "topol"
filelist = []
foundflags = []
- job["executable"] = "charmm"
- job["localworkdir"] = "tests/standards/jobs/single"
- job["executableargs"] = ["<", "topol"]
-
+ job = {
+ "executable": "charmm",
+ "replicates": "1",
+ "localworkdir": "tests/standards/jobs/single",
+ "executableargs": ["<", "topol"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -93,15 +92,15 @@ def test_procfiles_gromacs():
command-line.
"""
- job = JOBTEMPLATE.copy()
-
arg = "test"
filelist = []
foundflags = []
- job["executable"] = "mdrun_mpi"
- job["localworkdir"] = "tests/standards/jobs/single"
- job["executableargs"] = ["-deffnm", "test"]
-
+ job = {
+ "executable": "mdrun_mpi",
+ "replicates": "1",
+ "localworkdir": "tests/standards/jobs/single",
+ "executableargs": ["-deffnm", "test"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -117,15 +116,15 @@ def test_procfiles_namd1():
command-line.
"""
- job = JOBTEMPLATE.copy()
-
arg = "input"
filelist = []
foundflags = []
- job["executable"] = "namd2"
- job["localworkdir"] = "tests/standards/jobs/single"
- job["executableargs"] = ["input"]
-
+ job = {
+ "executable": "namd2",
+ "replicates": "1",
+ "localworkdir": "tests/standards/jobs/single",
+ "executableargs": ["input"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -141,15 +140,15 @@ def test_procfiles_namd2():
command-line.
"""
- job = JOBTEMPLATE.copy()
-
arg = "input"
filelist = []
foundflags = []
- job["executable"] = "namd2"
- job["localworkdir"] = "tests/standards/jobs/single"
- job["executableargs"] = ["input", ">", "output"]
-
+ job = {
+ "executable": "namd2",
+ "replicates": "1",
+ "localworkdir": "tests/standards/jobs/single",
+ "executableargs": ["input", ">", "output"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -164,16 +163,15 @@ def test_procfiles_reps1():
Test for replicate variant.
"""
- job = JOBTEMPLATE.copy()
-
arg = "coords"
filelist = []
foundflags = []
- job["executable"] = "pmemd.MPI"
- job["replicates"] = "3"
- job["localworkdir"] = "tests/standards/jobs/replicate"
- job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
-
+ job = {
+ "executable": "pmemd.MPI",
+ "replicates": "3",
+ "localworkdir": "tests/standards/jobs/replicate",
+ "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
@@ -189,16 +187,15 @@ def test_procfiles_reps2():
Test for replicate variant with global.
"""
- job = JOBTEMPLATE.copy()
-
arg = "topol"
filelist = []
foundflags = []
- job["executable"] = "pmemd.MPI"
- job["replicates"] = "3"
- job["localworkdir"] = "tests/standards/jobs/replicate"
- job["executableargs"] = ["-i", "input", "-c", "coords", "-p", "topol"]
-
+ job = {
+ "executable": "pmemd.MPI",
+ "replicates": "3",
+ "localworkdir": "tests/standards/jobs/replicate",
+ "executableargs": ["-i", "input", "-c", "coords", "-p", "topol"]
+ }
substitution = {}
foundflags = _procfiles(job, arg, filelist, foundflags, substitution)
diff --git a/tests/unit/applications/test_procfilesreplicatejobs.py b/tests/unit/applications/test_procfilesreplicatejobs.py
index 03c6253..ca2e0a5 100644
--- a/tests/unit/applications/test_procfilesreplicatejobs.py
+++ b/tests/unit/applications/test_procfilesreplicatejobs.py
@@ -57,7 +57,7 @@ def test_procfilesreplicatejobs_t1():
arg = "input"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = "rep1"
+ rep = 1
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -75,7 +75,7 @@ def test_procfilesreplicatejobs_t2():
arg = "topol"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = "rep1"
+ rep = 1
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -93,7 +93,7 @@ def test_procfilesreplicatejobs_t3():
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-i", "input", "-c", "coords", "-p", "topol"]
- rep = "rep1"
+ rep = 1
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -110,7 +110,7 @@ def test_procfilesreplicatejobs_t4():
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-deffnm", "test"]
- rep = "rep2"
+ rep = 2
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
@@ -128,7 +128,7 @@ def test_procfilesreplicatejobs_t5(m_mkdir):
arg = "test"
cwd = os.path.join(os.getcwd(), "tests/standards/jobs/replicate")
initargs = ["-deffnm", "test"]
- rep = "rep4"
+ rep = 4
fileitem = _procfilesreplicatejobs(app, arg, cwd, initargs, rep)
diff --git a/tests/unit/configuration/test_processconfigsresource.py b/tests/unit/configuration/test_processconfigsresource.py
index 4a6bc8f..05423a2 100644
--- a/tests/unit/configuration/test_processconfigsresource.py
+++ b/tests/unit/configuration/test_processconfigsresource.py
@@ -130,8 +130,8 @@ def test_processconfigsresource1():
"remoteworkdir": "",
"resource": "host1",
"replicates": "1",
- "replicate-naming": "rep",
"scheduler": "",
+ "subfile": "",
"user": "",
"upload-exclude": "",
"upload-include": ""
@@ -237,8 +237,8 @@ def test_processconfigsresource2():
"remoteworkdir": "",
"resource": "host2",
"replicates": "1",
- "replicate-naming": "rep",
"scheduler": "",
+ "subfile": "",
"user": "",
"upload-exclude": "",
"upload-include": ""
@@ -345,8 +345,8 @@ def test_processconfigsresource3():
"remoteworkdir": "",
"resource": "host1",
"replicates": "1",
- "replicate-naming": "rep",
"scheduler": "",
+ "subfile": "",
"user": "",
"upload-exclude": "",
"upload-include": ""
@@ -452,8 +452,8 @@ def test_processconfigsresource4():
"remoteworkdir": "",
"resource": "host3",
"replicates": "1",
- "replicate-naming": "rep",
"scheduler": "",
+ "subfile": "",
"user": "",
"upload-exclude": "",
"upload-include": ""
diff --git a/tests/unit/scheduling/test_prepare.py b/tests/unit/scheduling/test_prepare.py
index 1e5be06..dff69d5 100644
--- a/tests/unit/scheduling/test_prepare.py
+++ b/tests/unit/scheduling/test_prepare.py
@@ -60,7 +60,8 @@ def test_prepare_single(mock_prepare):
"job-one": {
"resource": "test-machine",
"scheduler": "LSF",
- "jobid": "test456"
+ "jobid": "test456",
+ "subfile": ""
}
}
@@ -82,17 +83,20 @@ def test_prepare_multiple(mock_prepare):
"job-one": {
"resource": "test-machine",
"scheduler": "LSF",
- "jobid": "test123"
+ "jobid": "test123",
+ "subfile": ""
},
"job-two": {
"resource": "test-machine",
"scheduler": "LSF",
- "jobid": "test456"
+ "jobid": "test456",
+ "subfile": ""
},
"job-three": {
"resource": "test-machine",
"scheduler": "LSF",
- "jobid": "test789"
+ "jobid": "test789",
+ "subfile": ""
}
}
@@ -113,7 +117,8 @@ def test_prepare_attrexcept(mock_prepare):
"job-one": {
"resource": "test-machine",
"scheduler": "LSF",
- "jobid": "test456"
+ "jobid": "test456",
+ "subfile": ""
}
}
@@ -122,3 +127,25 @@ def test_prepare_attrexcept(mock_prepare):
with pytest.raises(exceptions.PluginattributeError):
prepare(jobs)
+
+
[email protected]('longbow.schedulers.lsf.prepare')
+def test_prepare_ownscript(mock_prepare):
+
+ """
+ Test that if user supplies a script that longbow doesn't create one.
+ """
+
+ jobs = {
+ "job-one": {
+ "resource": "test-machine",
+ "scheduler": "LSF",
+ "jobid": "test456",
+ "subfile": "test.lsf"
+ }
+ }
+
+ prepare(jobs)
+
+ assert mock_prepare.call_count == 0, \
+ "This method shouldn't be called at all in this case."
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 3
} | .1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
coverage==6.2
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/HECBioSim/Longbow.git@99ce093c8b3daab24c4cb4f64b3f7e3f22690073#egg=Longbow
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: Longbow
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==6.2
- pytest-cov==4.0.0
- tomli==1.2.3
prefix: /opt/conda/envs/Longbow
| [
"tests/unit/applications/test_procfiles.py::test_procfiles_reps1",
"tests/unit/applications/test_procfiles.py::test_procfiles_reps2",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t1",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t2",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t3",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t4",
"tests/unit/applications/test_procfilesreplicatejobs.py::test_procfilesreplicatejobs_t5",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource1",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource2",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource3",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource4",
"tests/unit/scheduling/test_prepare.py::test_prepare_ownscript"
]
| []
| [
"tests/unit/applications/test_procfiles.py::test_procfiles_amber",
"tests/unit/applications/test_procfiles.py::test_procfiles_charmm",
"tests/unit/applications/test_procfiles.py::test_procfiles_gromacs",
"tests/unit/applications/test_procfiles.py::test_procfiles_namd1",
"tests/unit/applications/test_procfiles.py::test_procfiles_namd2",
"tests/unit/configuration/test_processconfigsresource.py::test_processconfigsresource5",
"tests/unit/scheduling/test_prepare.py::test_prepare_single",
"tests/unit/scheduling/test_prepare.py::test_prepare_multiple",
"tests/unit/scheduling/test_prepare.py::test_prepare_attrexcept"
]
| []
| BSD 3-Clause License | 2,495 | [
"longbow/scheduling.py",
"longbow/configuration.py",
"longbow/applications.py"
]
| [
"longbow/scheduling.py",
"longbow/configuration.py",
"longbow/applications.py"
]
|
|
conan-io__conan-2883 | ed6f652b917dd973a92e91866681e3663b2e04f2 | 2018-05-10 14:23:08 | c3a6ed5dc7b5e27ac69191e36aa7592e47ce7759 | diff --git a/conans/client/build/cmake.py b/conans/client/build/cmake.py
index 2702d2b9e..3a2494873 100644
--- a/conans/client/build/cmake.py
+++ b/conans/client/build/cmake.py
@@ -115,8 +115,11 @@ class CMake(object):
return os.environ["CONAN_CMAKE_GENERATOR"]
if not self._compiler or not self._compiler_version or not self._arch:
- raise ConanException("You must specify compiler, compiler.version and arch in "
- "your settings to use a CMake generator")
+ if self._os_build == "Windows":
+ raise ConanException("You must specify compiler, compiler.version and arch in "
+ "your settings to use a CMake generator. You can also declare "
+ "the env variable CONAN_CMAKE_GENERATOR.")
+ return "Unix Makefiles"
if self._compiler == "Visual Studio":
_visuals = {'8': '8 2005',
| Settings friction review
I've been experimenting with uncommon settings for embedded devices and I've found a couple of issues:
1. We have to review why the CMake() build helper raises when it doesn't have a compiler or architecture. It is very uncomfortable to have to specify the settings and then delete them later for the package_id just to avoid the error, and it makes no sense.
2. If I install a `conanfile.txt` without specifying the `os` setting, it crashes in the settings.py validate() method, because somehow it has the os in the fields but not in the data.
| conan-io/conan | diff --git a/conans/test/build_helpers/cmake_test.py b/conans/test/build_helpers/cmake_test.py
index 85d1ba227..ff1802915 100644
--- a/conans/test/build_helpers/cmake_test.py
+++ b/conans/test/build_helpers/cmake_test.py
@@ -773,6 +773,29 @@ build_type: [ Release]
cmake = CMake(conan_file)
self.assertIn('-T "v140"', cmake.command_line)
+
+ def test_missing_settings(self):
+ def instance_with_os_build(os_build):
+ settings = Settings.loads(default_settings_yml)
+ settings.os_build = os_build
+ conan_file = ConanFileMock()
+ conan_file.settings = settings
+ return CMake(conan_file)
+
+ cmake = instance_with_os_build("Linux")
+ self.assertEquals(cmake.generator, "Unix Makefiles")
+
+ cmake = instance_with_os_build("Macos")
+ self.assertEquals(cmake.generator, "Unix Makefiles")
+
+ with self.assertRaisesRegexp(ConanException, "You must specify compiler, "
+ "compiler.version and arch"):
+ instance_with_os_build("Windows")
+
+ with tools.environment_append({"CONAN_CMAKE_GENERATOR": "MyCoolGenerator"}):
+ cmake = instance_with_os_build("Windows")
+ self.assertEquals(cmake.generator, "MyCoolGenerator")
+
def test_cmake_system_version_android(self):
with tools.environment_append({"CONAN_CMAKE_SYSTEM_NAME": "SomeSystem",
"CONAN_CMAKE_GENERATOR": "SomeGenerator"}):
diff --git a/conans/test/integration/cmake_flags_test.py b/conans/test/integration/cmake_flags_test.py
index a74a320ba..2af97aeff 100644
--- a/conans/test/integration/cmake_flags_test.py
+++ b/conans/test/integration/cmake_flags_test.py
@@ -254,10 +254,13 @@ class MyLib(ConanFile):
client = TestClient()
client.save({"conanfile.py": conanfile % settings_line})
client.run("install .")
- client.run("build .", ignore_error=True)
-
- self.assertIn("You must specify compiler, compiler.version and arch in "
- "your settings to use a CMake generator", client.user_io.out,)
+ error = client.run("build .", ignore_error=True)
+ if platform.system() == "Windows":
+ self.assertTrue(error)
+ self.assertIn("You must specify compiler, compiler.version and arch in "
+ "your settings to use a CMake generator", client.user_io.out,)
+ else:
+ self.assertFalse(error)
def cmake_shared_flag_test(self):
conanfile = """
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc",
"apt-get install -y cmake",
"apt-get install -y golang"
],
"python": "3.6",
"reqs_path": [
"conans/requirements.txt",
"conans/requirements_osx.txt",
"conans/requirements_server.txt",
"conans/requirements_dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asn1crypto==1.5.1
astroid==1.6.6
attrs==22.2.0
beautifulsoup4==4.12.3
bottle==0.12.25
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
colorama==0.3.9
-e git+https://github.com/conan-io/conan.git@ed6f652b917dd973a92e91866681e3663b2e04f2#egg=conan
coverage==4.2
cryptography==2.1.4
deprecation==2.0.7
distro==1.1.0
execnet==1.9.0
fasteners==0.19
future==0.16.0
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.7.0
mock==1.3.0
ndg-httpsclient==0.4.4
node-semver==0.2.0
nose==1.3.7
packaging==21.3
parameterized==0.8.1
patch==1.16
pbr==6.1.1
pluggy==1.0.0
pluginbase==0.7
py==1.11.0
pyasn==1.5.0b7
pyasn1==0.5.1
pycparser==2.21
Pygments==2.14.0
PyJWT==1.7.1
pylint==1.8.4
pyOpenSSL==17.5.0
pyparsing==3.1.4
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
PyYAML==3.12
requests==2.27.1
six==1.17.0
soupsieve==2.3.2.post1
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
waitress==2.0.0
WebOb==1.8.9
WebTest==2.0.35
wrapt==1.16.0
zipp==3.6.0
| name: conan
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asn1crypto==1.5.1
- astroid==1.6.6
- attrs==22.2.0
- beautifulsoup4==4.12.3
- bottle==0.12.25
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- colorama==0.3.9
- coverage==4.2
- cryptography==2.1.4
- deprecation==2.0.7
- distro==1.1.0
- execnet==1.9.0
- fasteners==0.19
- future==0.16.0
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isort==5.10.1
- lazy-object-proxy==1.7.1
- mccabe==0.7.0
- mock==1.3.0
- ndg-httpsclient==0.4.4
- node-semver==0.2.0
- nose==1.3.7
- packaging==21.3
- parameterized==0.8.1
- patch==1.16
- pbr==6.1.1
- pluggy==1.0.0
- pluginbase==0.7
- py==1.11.0
- pyasn==1.5.0b7
- pyasn1==0.5.1
- pycparser==2.21
- pygments==2.14.0
- pyjwt==1.7.1
- pylint==1.8.4
- pyopenssl==17.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- pyyaml==3.12
- requests==2.27.1
- six==1.17.0
- soupsieve==2.3.2.post1
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- waitress==2.0.0
- webob==1.8.9
- webtest==2.0.35
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/conan
| [
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_missing_settings"
]
| []
| [
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_clean_sh_path",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_cmake_system_version_android",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_cores_ancient_visual",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_deprecated_behaviour",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_run_tests",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_shared",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_sysroot",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_verbose"
]
| []
| MIT License | 2,496 | [
"conans/client/build/cmake.py"
]
| [
"conans/client/build/cmake.py"
]
|
|
conan-io__conan-2884 | dac34c4c1c19b94969f7b0d2aef44d0650285e3a | 2018-05-10 15:28:49 | c3a6ed5dc7b5e27ac69191e36aa7592e47ce7759 | diff --git a/conans/client/build/cmake.py b/conans/client/build/cmake.py
index babb8f0e7..2702d2b9e 100644
--- a/conans/client/build/cmake.py
+++ b/conans/client/build/cmake.py
@@ -53,7 +53,10 @@ class CMake(object):
self._compiler = self._settings.get_safe("compiler")
self._compiler_version = self._settings.get_safe("compiler.version")
self._arch = self._settings.get_safe("arch")
- self._op_system_version = self._settings.get_safe("os.version")
+
+ os_ver_str = "os.api_level" if self._os == "Android" else "os.version"
+ self._op_system_version = self._settings.get_safe(os_ver_str)
+
self._libcxx = self._settings.get_safe("compiler.libcxx")
self._runtime = self._settings.get_safe("compiler.runtime")
self._build_type = self._settings.get_safe("build_type")
| Conan does not set define `CMAKE_SYSTEM_VERSION` when "os.api_level" specified | conan-io/conan | diff --git a/conans/test/build_helpers/cmake_test.py b/conans/test/build_helpers/cmake_test.py
index b7f6cd6c3..85d1ba227 100644
--- a/conans/test/build_helpers/cmake_test.py
+++ b/conans/test/build_helpers/cmake_test.py
@@ -773,6 +773,27 @@ build_type: [ Release]
cmake = CMake(conan_file)
self.assertIn('-T "v140"', cmake.command_line)
+ def test_cmake_system_version_android(self):
+ with tools.environment_append({"CONAN_CMAKE_SYSTEM_NAME": "SomeSystem",
+ "CONAN_CMAKE_GENERATOR": "SomeGenerator"}):
+ settings = Settings.loads(default_settings_yml)
+ settings.os = "WindowsStore"
+ settings.os.version = "8.1"
+
+ conan_file = ConanFileMock()
+ conan_file.settings = settings
+ cmake = CMake(conan_file)
+ self.assertEquals(cmake.definitions["CMAKE_SYSTEM_VERSION"], "8.1")
+
+ settings = Settings.loads(default_settings_yml)
+ settings.os = "Android"
+ settings.os.api_level = "32"
+
+ conan_file = ConanFileMock()
+ conan_file.settings = settings
+ cmake = CMake(conan_file)
+ self.assertEquals(cmake.definitions["CMAKE_SYSTEM_VERSION"], "32")
+
@staticmethod
def scape(args):
pattern = "%s" if sys.platform == "win32" else r"'%s'"
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y cmake",
"apt-get install -y golang"
],
"python": "3.6",
"reqs_path": [
"conans/requirements.txt",
"conans/requirements_osx.txt",
"conans/requirements_server.txt",
"conans/requirements_dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asn1crypto==1.5.1
astroid==1.6.6
attrs==22.2.0
beautifulsoup4==4.12.3
bottle==0.12.25
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
colorama==0.3.9
-e git+https://github.com/conan-io/conan.git@dac34c4c1c19b94969f7b0d2aef44d0650285e3a#egg=conan
coverage==4.2
cryptography==2.1.4
deprecation==2.0.7
distro==1.1.0
fasteners==0.19
future==0.16.0
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.7.0
mock==1.3.0
ndg-httpsclient==0.4.4
node-semver==0.2.0
nose==1.3.7
packaging==21.3
parameterized==0.8.1
patch==1.16
pbr==6.1.1
pluggy==1.0.0
pluginbase==0.7
py==1.11.0
pyasn==1.5.0b7
pyasn1==0.5.1
pycparser==2.21
Pygments==2.14.0
PyJWT==1.7.1
pylint==1.8.4
pyOpenSSL==17.5.0
pyparsing==3.1.4
pytest==7.0.1
PyYAML==3.12
requests==2.27.1
six==1.17.0
soupsieve==2.3.2.post1
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
waitress==2.0.0
WebOb==1.8.9
WebTest==2.0.35
wrapt==1.16.0
zipp==3.6.0
| name: conan
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asn1crypto==1.5.1
- astroid==1.6.6
- attrs==22.2.0
- beautifulsoup4==4.12.3
- bottle==0.12.25
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- colorama==0.3.9
- coverage==4.2
- cryptography==2.1.4
- deprecation==2.0.7
- distro==1.1.0
- fasteners==0.19
- future==0.16.0
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isort==5.10.1
- lazy-object-proxy==1.7.1
- mccabe==0.7.0
- mock==1.3.0
- ndg-httpsclient==0.4.4
- node-semver==0.2.0
- nose==1.3.7
- packaging==21.3
- parameterized==0.8.1
- patch==1.16
- pbr==6.1.1
- pluggy==1.0.0
- pluginbase==0.7
- py==1.11.0
- pyasn==1.5.0b7
- pyasn1==0.5.1
- pycparser==2.21
- pygments==2.14.0
- pyjwt==1.7.1
- pylint==1.8.4
- pyopenssl==17.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pyyaml==3.12
- requests==2.27.1
- six==1.17.0
- soupsieve==2.3.2.post1
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- waitress==2.0.0
- webob==1.8.9
- webtest==2.0.35
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/conan
| [
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_cmake_system_version_android"
]
| []
| [
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_clean_sh_path",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_cores_ancient_visual",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_deprecated_behaviour",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_run_tests",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_shared",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_sysroot",
"conans/test/build_helpers/cmake_test.py::CMakeTest::test_verbose"
]
| []
| MIT License | 2,497 | [
"conans/client/build/cmake.py"
]
| [
"conans/client/build/cmake.py"
]
|
|
conan-io__conan-2885 | 5df3b0134bd6c85185c6b8a0a8574dfad54aa17e | 2018-05-10 16:29:36 | c3a6ed5dc7b5e27ac69191e36aa7592e47ce7759 | diff --git a/conans/client/build/cppstd_flags.py b/conans/client/build/cppstd_flags.py
index 5435d63e6..7711a442f 100644
--- a/conans/client/build/cppstd_flags.py
+++ b/conans/client/build/cppstd_flags.py
@@ -32,11 +32,12 @@ def cppstd_default(compiler, compiler_version):
def _clang_cppstd_default(compiler_version):
- return "gnu98" if Version(compiler_version) < "6.0" else "gnu14"
+ # Official docs are wrong, in 6.0 the default is gnu14 to follow gcc's choice
+ return "gnu98" if Version(compiler_version) < "6" else "gnu14"
def _gcc_cppstd_default(compiler_version):
- return "gnu98" if Version(compiler_version) < "6.1" else "gnu14"
+ return "gnu98" if Version(compiler_version) < "6" else "gnu14"
def _visual_cppstd_default(compiler_version):
@@ -46,17 +47,19 @@ def _visual_cppstd_default(compiler_version):
def _cppstd_visualstudio(visual_version, cppstd):
-
+ # https://docs.microsoft.com/en-us/cpp/build/reference/std-specify-language-standard-version
v14 = None
v17 = None
+ v20 = None
if Version(visual_version) >= "14":
v14 = "c++14"
v17 = "c++latest"
if Version(visual_version) >= "15":
v17 = "c++17"
+ v20 = "c++latest"
- flag = {"14": v14, "17": v17}.get(str(cppstd), None)
+ flag = {"14": v14, "17": v17, "20": v20}.get(str(cppstd), None)
return "/std:%s" % flag if flag else None
@@ -66,7 +69,7 @@ def _cppstd_apple_clang(clang_version, cppstd):
https://github.com/Kitware/CMake/blob/master/Modules/Compiler/AppleClang-CXX.cmake
"""
- v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = None
+ v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
if Version(clang_version) >= "4.0":
v98 = "c++98"
@@ -93,7 +96,8 @@ def _cppstd_apple_clang(clang_version, cppstd):
flag = {"98": v98, "gnu98": vgnu98,
"11": v11, "gnu11": vgnu11,
"14": v14, "gnu14": vgnu14,
- "17": v17, "gnu17": vgnu17}.get(cppstd, None)
+ "17": v17, "gnu17": vgnu17,
+ "20": v20, "gnu20": vgnu20}.get(cppstd, None)
return "-std=%s" % flag if flag else None
@@ -103,8 +107,10 @@ def _cppstd_clang(clang_version, cppstd):
Inspired in:
https://github.com/Kitware/CMake/blob/
1fe2dc5ef2a1f262b125a2ba6a85f624ce150dd2/Modules/Compiler/Clang-CXX.cmake
+
+ https://clang.llvm.org/cxx_status.html
"""
- v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = None
+ v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
if Version(clang_version) >= "2.1":
v98 = "c++98"
@@ -131,17 +137,22 @@ def _cppstd_clang(clang_version, cppstd):
v17 = "c++1z"
vgnu17 = "gnu++1z"
+ if Version(clang_version) >= "6":
+ v20 = "c++2a"
+ vgnu20 = "gnu++2a"
+
flag = {"98": v98, "gnu98": vgnu98,
"11": v11, "gnu11": vgnu11,
"14": v14, "gnu14": vgnu14,
- "17": v17, "gnu17": vgnu17}.get(cppstd, None)
+ "17": v17, "gnu17": vgnu17,
+ "20": v20, "gnu20": vgnu20}.get(cppstd, None)
return "-std=%s" % flag if flag else None
def _cppstd_gcc(gcc_version, cppstd):
"""https://github.com/Kitware/CMake/blob/master/Modules/Compiler/GNU-CXX.cmake"""
- # https://gcc.gnu.org/projects/cxx-status.html#cxx98
- v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = None
+ # https://gcc.gnu.org/projects/cxx-status.html
+ v98 = vgnu98 = v11 = vgnu11 = v14 = vgnu14 = v17 = vgnu17 = v20 = vgnu20 = None
if Version(gcc_version) >= "3.4":
v98 = "c++98"
@@ -165,8 +176,17 @@ def _cppstd_gcc(gcc_version, cppstd):
v17 = "c++1z"
vgnu17 = "gnu++1z"
+ if Version(gcc_version) >= "5.2": # Not sure if even in 5.1 gnu17 is valid, but gnu1z is
+ v17 = "c++17"
+ vgnu17 = "gnu++17"
+
+ if Version(gcc_version) >= "8":
+ v20 = "c++2a"
+ vgnu20 = "gnu++2a"
+
flag = {"98": v98, "gnu98": vgnu98,
"11": v11, "gnu11": vgnu11,
"14": v14, "gnu14": vgnu14,
- "17": v17, "gnu17": vgnu17}.get(cppstd)
+ "17": v17, "gnu17": vgnu17,
+ "20": v20, "gnu20": vgnu20}.get(cppstd)
return "-std=%s" % flag if flag else None
diff --git a/conans/client/build/msbuild.py b/conans/client/build/msbuild.py
index 9477fb964..d15bf4b57 100644
--- a/conans/client/build/msbuild.py
+++ b/conans/client/build/msbuild.py
@@ -117,14 +117,21 @@ class MSBuild(object):
flags = vs_build_type_flags(self._settings)
flags.append(vs_std_cpp(self._settings))
+ flags_str = " ".join(list(filter(None, flags))) # Removes empty and None elements
+ additional_node = "<AdditionalOptions>" \
+ "{} %(AdditionalOptions)" \
+ "</AdditionalOptions>".format(flags_str) if flags_str else ""
+ runtime_node = "<RuntimeLibrary>" \
+ "{}" \
+ "</RuntimeLibrary>".format(runtime_library) if runtime_library else ""
template = """<?xml version="1.0" encoding="utf-8"?>
- <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
- <ItemDefinitionGroup>
- <ClCompile>
- <RuntimeLibrary>{runtime}</RuntimeLibrary>
- <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
- </ClCompile>
- </ItemDefinitionGroup>
- </Project>""".format(**{"runtime": runtime_library,
- "compiler_flags": " ".join([flag for flag in flags if flag])})
+<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+ <ItemDefinitionGroup>
+ <ClCompile>
+ {runtime_node}
+ {additional_node}
+ </ClCompile>
+ </ItemDefinitionGroup>
+</Project>""".format(**{"runtime_node": runtime_node,
+ "additional_node": additional_node})
return template
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index dfb96b9e6..2a760693c 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -53,7 +53,8 @@ compiler:
version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
"5", "5.1", "5.2", "5.3", "5.4", "5.5",
"6", "6.1", "6.2", "6.3", "6.4",
- "7", "7.1", "7.2", "7.3"]
+ "7", "7.1", "7.2", "7.3",
+ "8", "8.1"]
libcxx: [libstdc++, libstdc++11]
threads: [None, posix, win32] # Windows MinGW
exception: [None, dwarf2, sjlj, seh] # Windows MinGW
@@ -62,7 +63,7 @@ compiler:
version: ["8", "9", "10", "11", "12", "14", "15"]
toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp, v140, v140_xp, v140_clang_c2, LLVM-vs2014, LLVM-vs2014_xp, v141, v141_xp, v141_clang_c2]
clang:
- version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "5.0", "6.0"]
+ version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "5.0", "6.0", "7.0"]
libcxx: [libstdc++, libstdc++11, libc++]
apple-clang:
version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1"]
diff --git a/conans/client/migrations.py b/conans/client/migrations.py
index 27a86af1f..bbb121bcf 100644
--- a/conans/client/migrations.py
+++ b/conans/client/migrations.py
@@ -41,17 +41,15 @@ class ClientMigrator(Migrator):
# VERSION 0.1
if old_version is None:
return
- if old_version < Version("1.2.1"):
+ if old_version < Version("1.4.0"):
old_settings = """
# Only for cross building, 'os_build/arch_build' is the system that runs Conan
os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
arch_build: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
-
# Only for building cross compilation tools, 'os_target/arch_target' is the system for
# which the tools generate code
os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
arch_target: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
-
# Rest of the settings are "host" settings:
# - For native building/cross building: Where the library/program will run.
# - For building cross compilation tools: Where the cross compiler will run.
@@ -96,11 +94,10 @@ compiler:
version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "5.0", "6.0"]
libcxx: [libstdc++, libstdc++11, libc++]
apple-clang:
- version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0"]
+ version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1"]
libcxx: [libstdc++, libc++]
-
-build_type: [None, Debug, Release]
-cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17]
+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
"""
self._update_settings_yml(old_settings)
diff --git a/conans/client/tools/win.py b/conans/client/tools/win.py
index d025ec4dd..bdfa9cc67 100644
--- a/conans/client/tools/win.py
+++ b/conans/client/tools/win.py
@@ -300,12 +300,18 @@ def vcvars_dict(settings, arch=None, compiler_version=None, force=False, filter_
new_env = {}
start_reached = False
for line in ret.splitlines():
+ line = line.strip()
if not start_reached:
if "__BEGINS__" in line:
start_reached = True
continue
- name_var, value = line.split("=", 1)
- new_env[name_var] = value
+ if line == "\n" or not line:
+ continue
+ try:
+ name_var, value = line.split("=", 1)
+ new_env[name_var] = value
+ except ValueError:
+ pass
if filter_known_paths:
def relevant_path(path):
| Add C++2a standard
GCC 8 was released a few days ago and added some useful C++2a features such as `std::endian`. I really want to migrate all my Conan packages to `-std=c++2a`.
I think `2a` and `gnu2a` should be added to `cppstd` and then be renamed to `20` and `gnu20` after the publication. This would require support for aliases so packages' hashes won't be changed but we still have several years to develop this feature.
| conan-io/conan | diff --git a/conans/test/build_helpers/autotools_configure_test.py b/conans/test/build_helpers/autotools_configure_test.py
index 4134d9bee..4b93b7529 100644
--- a/conans/test/build_helpers/autotools_configure_test.py
+++ b/conans/test/build_helpers/autotools_configure_test.py
@@ -57,7 +57,7 @@ class AutoToolsConfigureTest(unittest.TestCase):
conanfile = MockConanfile(settings, options)
be = AutoToolsBuildEnvironment(conanfile)
expected = be.vars["CXXFLAGS"]
- self.assertIn("-std=c++1z", expected)
+ self.assertIn("-std=c++17", expected)
# Invalid one for GCC
settings = MockSettings({"build_type": "Release",
diff --git a/conans/test/build_helpers/cpp_std_flags_test.py b/conans/test/build_helpers/cpp_std_flags_test.py
index 7e7c1c5e4..6d80880d2 100644
--- a/conans/test/build_helpers/cpp_std_flags_test.py
+++ b/conans/test/build_helpers/cpp_std_flags_test.py
@@ -41,14 +41,20 @@ class CompilerFlagsTest(unittest.TestCase):
self.assertEquals(cppstd_flag("gcc", "7", "11"), '-std=c++11')
self.assertEquals(cppstd_flag("gcc", "7", "14"), '-std=c++14')
- self.assertEquals(cppstd_flag("gcc", "7", "17"), '-std=c++1z')
+ self.assertEquals(cppstd_flag("gcc", "7", "17"), '-std=c++17')
+
+ self.assertEquals(cppstd_flag("gcc", "8", "11"), '-std=c++11')
+ self.assertEquals(cppstd_flag("gcc", "8", "14"), '-std=c++14')
+ self.assertEquals(cppstd_flag("gcc", "8", "17"), '-std=c++17')
+ self.assertEquals(cppstd_flag("gcc", "8", "20"), '-std=c++2a')
def test_gcc_cppstd_defaults(self):
self.assertEquals(cppstd_default("gcc", "4"), "gnu98")
self.assertEquals(cppstd_default("gcc", "5"), "gnu98")
- self.assertEquals(cppstd_default("gcc", "6"), "gnu98")
+ self.assertEquals(cppstd_default("gcc", "6"), "gnu14")
self.assertEquals(cppstd_default("gcc", "6.1"), "gnu14")
self.assertEquals(cppstd_default("gcc", "7.3"), "gnu14")
+ self.assertEquals(cppstd_default("gcc", "8.1"), "gnu14")
def test_clang_cppstd_flags(self):
self.assertEquals(cppstd_flag("clang", "2.0", "98"), None)
@@ -84,9 +90,10 @@ class CompilerFlagsTest(unittest.TestCase):
self.assertEquals(cppstd_flag("clang", "5.1", "14"), '-std=c++14')
self.assertEquals(cppstd_flag("clang", "5.1", "17"), '-std=c++17')
- self.assertEquals(cppstd_flag("clang", "7", "11"), '-std=c++11')
- self.assertEquals(cppstd_flag("clang", "7", "14"), '-std=c++14')
- self.assertEquals(cppstd_flag("clang", "7", "17"), '-std=c++17')
+ self.assertEquals(cppstd_flag("clang", "6", "11"), '-std=c++11')
+ self.assertEquals(cppstd_flag("clang", "6", "14"), '-std=c++14')
+ self.assertEquals(cppstd_flag("clang", "6", "17"), '-std=c++17')
+ self.assertEquals(cppstd_flag("clang", "6", "20"), '-std=c++2a')
def test_clang_cppstd_defaults(self):
self.assertEquals(cppstd_default("clang", "2"), "gnu98")
@@ -97,6 +104,7 @@ class CompilerFlagsTest(unittest.TestCase):
self.assertEquals(cppstd_default("clang", "3.5"), "gnu98")
self.assertEquals(cppstd_default("clang", "5"), "gnu98")
self.assertEquals(cppstd_default("clang", "5.1"), "gnu98")
+ self.assertEquals(cppstd_default("clang", "6"), "gnu14")
self.assertEquals(cppstd_default("clang", "7"), "gnu14")
def test_apple_clang_cppstd_flags(self):
@@ -162,6 +170,7 @@ class CompilerFlagsTest(unittest.TestCase):
self.assertEquals(cppstd_flag("Visual Studio", "17", "11"), None)
self.assertEquals(cppstd_flag("Visual Studio", "17", "14"), '/std:c++14')
self.assertEquals(cppstd_flag("Visual Studio", "17", "17"), '/std:c++17')
+ self.assertEquals(cppstd_flag("Visual Studio", "17", "20"), '/std:c++latest')
def test_visual_cppstd_defaults(self):
self.assertEquals(cppstd_default("Visual Studio", "11"), None)
diff --git a/conans/test/build_helpers/msbuild_test.py b/conans/test/build_helpers/msbuild_test.py
index 7294136d0..d9792d0f2 100644
--- a/conans/test/build_helpers/msbuild_test.py
+++ b/conans/test/build_helpers/msbuild_test.py
@@ -16,7 +16,8 @@ class MSBuildTest(unittest.TestCase):
def dont_mess_with_build_type_test(self):
settings = MockSettings({"build_type": "Debug",
"compiler": "Visual Studio",
- "arch": "x86_64"})
+ "arch": "x86_64",
+ "compiler.runtime": "MDd"})
conanfile = MockConanfile(settings)
msbuild = MSBuild(conanfile)
self.assertEquals(msbuild.build_env.flags, ["-Zi", "-Ob0", "-Od"])
@@ -30,6 +31,16 @@ class MSBuildTest(unittest.TestCase):
self.assertNotIn("-Ob0", template)
self.assertNotIn("-Od", template)
+ self.assertIn("<RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>", template)
+
+ def without_runtime_test(self):
+ settings = MockSettings({"build_type": "Debug",
+ "compiler": "Visual Studio",
+ "arch": "x86_64"})
+ conanfile = MockConanfile(settings)
+ msbuild = MSBuild(conanfile)
+ template = msbuild._get_props_file_contents()
+ self.assertNotIn("<RuntimeLibrary>", template)
@attr('slow')
def build_vs_project_test(self):
diff --git a/conans/test/generators/compiler_args_test.py b/conans/test/generators/compiler_args_test.py
index 7a6c1415c..96d14ffbf 100644
--- a/conans/test/generators/compiler_args_test.py
+++ b/conans/test/generators/compiler_args_test.py
@@ -46,7 +46,7 @@ class CompilerArgsTest(unittest.TestCase):
gcc = GCCGenerator(conan_file)
self.assertEquals('-Dmydefine1 -Ipath/to/include1 cxx_flag1 c_flag1 -m32 -O3 -s -DNDEBUG '
'-Wl,-rpath="path/to/lib1" '
- '-Lpath/to/lib1 -lmylib -std=gnu++1z', gcc.content)
+ '-Lpath/to/lib1 -lmylib -std=gnu++17', gcc.content)
settings.arch = "x86_64"
settings.build_type = "Debug"
@@ -55,14 +55,14 @@ class CompilerArgsTest(unittest.TestCase):
gcc = GCCGenerator(conan_file)
self.assertEquals('-Dmydefine1 -Ipath/to/include1 cxx_flag1 c_flag1 -m64 -g '
'-Wl,-rpath="path/to/lib1" -Lpath/to/lib1 -lmylib '
- '-D_GLIBCXX_USE_CXX11_ABI=1 -std=gnu++1z',
+ '-D_GLIBCXX_USE_CXX11_ABI=1 -std=gnu++17',
gcc.content)
settings.compiler.libcxx = "libstdc++"
gcc = GCCGenerator(conan_file)
self.assertEquals('-Dmydefine1 -Ipath/to/include1 cxx_flag1 c_flag1 -m64 -g '
'-Wl,-rpath="path/to/lib1" -Lpath/to/lib1 -lmylib '
- '-D_GLIBCXX_USE_CXX11_ABI=0 -std=gnu++1z',
+ '-D_GLIBCXX_USE_CXX11_ABI=0 -std=gnu++17',
gcc.content)
settings.os = "Windows"
diff --git a/conans/test/util/tools_test.py b/conans/test/util/tools_test.py
index a8984e93d..1334edce9 100644
--- a/conans/test/util/tools_test.py
+++ b/conans/test/util/tools_test.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+import mock
import os
import platform
import unittest
@@ -625,6 +626,43 @@ class MyConan(ConanFile):
client.run("create . conan/testing")
self.assertIn("VCINSTALLDIR set to: None", client.out)
+ def vcvars_dict_test(self):
+ # https://github.com/conan-io/conan/issues/2904
+ output_with_newline_and_spaces = """__BEGINS__
+ PROCESSOR_ARCHITECTURE=AMD64
+
+PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
+
+
+ PROCESSOR_LEVEL=6
+
+PROCESSOR_REVISION=9e09
+
+
+set nl=^
+env_var=
+without_equals_sign
+
+ProgramFiles(x86)=C:\Program Files (x86)
+
+""".encode("utf-8")
+
+ def vcvars_command_mock(settings, arch, compiler_version, force): # @UnusedVariable
+ return "unused command"
+
+ def subprocess_check_output_mock(cmd, shell):
+ self.assertIn("unused command", cmd)
+ return output_with_newline_and_spaces
+
+ with mock.patch('conans.client.tools.win.vcvars_command', new=vcvars_command_mock):
+ with mock.patch('subprocess.check_output', new=subprocess_check_output_mock):
+ vars = tools.vcvars_dict(None)
+ self.assertEqual(vars["PROCESSOR_ARCHITECTURE"], "AMD64")
+ self.assertEqual(vars["PROCESSOR_IDENTIFIER"], "Intel64 Family 6 Model 158 Stepping 9, GenuineIntel")
+ self.assertEqual(vars["PROCESSOR_LEVEL"], "6")
+ self.assertEqual(vars["PROCESSOR_REVISION"], "9e09")
+ self.assertEqual(vars["ProgramFiles(x86)"], "C:\Program Files (x86)")
+
def run_in_bash_test(self):
if platform.system() != "Windows":
return
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 5
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"conans/requirements.txt",
"conans/requirements_osx.txt",
"conans/requirements_server.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asn1crypto==1.5.1
astroid==1.6.6
attrs==22.2.0
beautifulsoup4==4.12.3
bottle==0.12.25
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
codecov==2.1.13
colorama==0.3.9
-e git+https://github.com/conan-io/conan.git@5df3b0134bd6c85185c6b8a0a8574dfad54aa17e#egg=conan
coverage==4.2
cryptography==2.1.4
deprecation==2.0.7
distro==1.1.0
fasteners==0.19
future==0.16.0
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.7.0
mock==1.3.0
ndg-httpsclient==0.4.4
node-semver==0.2.0
nose==1.3.7
packaging==21.3
parameterized==0.8.1
patch==1.16
pbr==6.1.1
pluggy==1.0.0
pluginbase==0.7
py==1.11.0
pyasn==1.5.0b7
pyasn1==0.5.1
pycparser==2.21
Pygments==2.14.0
PyJWT==1.7.1
pylint==1.8.4
pyOpenSSL==17.5.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
PyYAML==3.12
requests==2.27.1
six==1.17.0
soupsieve==2.3.2.post1
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
waitress==2.0.0
WebOb==1.8.9
WebTest==2.0.35
wrapt==1.16.0
zipp==3.6.0
| name: conan
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asn1crypto==1.5.1
- astroid==1.6.6
- attrs==22.2.0
- beautifulsoup4==4.12.3
- bottle==0.12.25
- cffi==1.15.1
- charset-normalizer==2.0.12
- codecov==2.1.13
- colorama==0.3.9
- coverage==4.2
- cryptography==2.1.4
- deprecation==2.0.7
- distro==1.1.0
- fasteners==0.19
- future==0.16.0
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isort==5.10.1
- lazy-object-proxy==1.7.1
- mccabe==0.7.0
- mock==1.3.0
- ndg-httpsclient==0.4.4
- node-semver==0.2.0
- nose==1.3.7
- packaging==21.3
- parameterized==0.8.1
- patch==1.16
- pbr==6.1.1
- pluggy==1.0.0
- pluginbase==0.7
- py==1.11.0
- pyasn==1.5.0b7
- pyasn1==0.5.1
- pycparser==2.21
- pygments==2.14.0
- pyjwt==1.7.1
- pylint==1.8.4
- pyopenssl==17.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- pyyaml==3.12
- requests==2.27.1
- six==1.17.0
- soupsieve==2.3.2.post1
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- waitress==2.0.0
- webob==1.8.9
- webtest==2.0.35
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/conan
| [
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_cppstd",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_clang_cppstd_defaults",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_clang_cppstd_flags",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_gcc_cppstd_defaults",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_gcc_cppstd_flags",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_visual_cppstd_flags"
]
| [
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_pkg_config_paths",
"conans/test/util/tools_test.py::ToolsTest::test_get_env_in_conanfile",
"conans/test/util/tools_test.py::ToolsTest::test_global_tools_overrided"
]
| [
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_make_targets_install",
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_mocked_methods",
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_previous_env",
"conans/test/build_helpers/autotools_configure_test.py::AutoToolsConfigureTest::test_variables",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_apple_clang_cppstd_defaults",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_apple_clang_cppstd_flags",
"conans/test/build_helpers/cpp_std_flags_test.py::CompilerFlagsTest::test_visual_cppstd_defaults",
"conans/test/util/tools_test.py::ReplaceInFileTest::test_replace_in_file",
"conans/test/util/tools_test.py::ToolsTest::test_environment_nested"
]
| []
| MIT License | 2,498 | [
"conans/client/tools/win.py",
"conans/client/build/cppstd_flags.py",
"conans/client/migrations.py",
"conans/client/build/msbuild.py",
"conans/client/conf/__init__.py"
]
| [
"conans/client/tools/win.py",
"conans/client/build/cppstd_flags.py",
"conans/client/migrations.py",
"conans/client/build/msbuild.py",
"conans/client/conf/__init__.py"
]
|
|
python-cmd2__cmd2-398 | 9d4d929709ffbcfcbd0974d8193c44d514f5a9b4 | 2018-05-10 17:18:31 | 8f88f819fae7508066a81a8d961a7115f2ec4bed | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 503f15e0..f9627194 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -29,6 +29,7 @@
* Deleted ``cmd_with_subs_completer``, ``get_subcommands``, and ``get_subcommand_completer``
* Replaced by default AutoCompleter implementation for all commands using argparse
* Deleted support for old method of calling application commands with ``cmd()`` and ``self``
+ * ``cmd2.redirector`` is no longer supported. Output redirection can only be done with '>' or '>>'
* Python 2 no longer supported
* ``cmd2`` now supports Python 3.4+
diff --git a/cmd2/cmd2.py b/cmd2/cmd2.py
index 02ae96fe..43fd99ec 100755
--- a/cmd2/cmd2.py
+++ b/cmd2/cmd2.py
@@ -338,7 +338,6 @@ class Cmd(cmd.Cmd):
# Attributes used to configure the StatementParser, best not to change these at runtime
blankLinesAllowed = False
multiline_commands = []
- redirector = '>' # for sending output to file
shortcuts = {'?': 'help', '!': 'shell', '@': 'load', '@@': '_relative_load'}
aliases = dict()
terminators = [';']
@@ -1149,29 +1148,26 @@ class Cmd(cmd.Cmd):
if len(raw_tokens) > 1:
- # Build a list of all redirection tokens
- all_redirects = constants.REDIRECTION_CHARS + ['>>']
-
# Check if there are redirection strings prior to the token being completed
seen_pipe = False
has_redirection = False
for cur_token in raw_tokens[:-1]:
- if cur_token in all_redirects:
+ if cur_token in constants.REDIRECTION_TOKENS:
has_redirection = True
- if cur_token == '|':
+ if cur_token == constants.REDIRECTION_PIPE:
seen_pipe = True
# Get token prior to the one being completed
prior_token = raw_tokens[-2]
# If a pipe is right before the token being completed, complete a shell command as the piped process
- if prior_token == '|':
+ if prior_token == constants.REDIRECTION_PIPE:
return self.shell_cmd_complete(text, line, begidx, endidx)
# Otherwise do path completion either as files to redirectors or arguments to the piped process
- elif prior_token in all_redirects or seen_pipe:
+ elif prior_token in constants.REDIRECTION_TOKENS or seen_pipe:
return self.path_complete(text, line, begidx, endidx)
# If there were redirection strings anywhere on the command line, then we
@@ -1820,7 +1816,7 @@ class Cmd(cmd.Cmd):
# We want Popen to raise an exception if it fails to open the process. Thus we don't set shell to True.
try:
- self.pipe_proc = subprocess.Popen(shlex.split(statement.pipe_to), stdin=subproc_stdin)
+ self.pipe_proc = subprocess.Popen(statement.pipe_to, stdin=subproc_stdin)
except Exception as ex:
# Restore stdout to what it was and close the pipe
self.stdout.close()
@@ -1834,24 +1830,30 @@ class Cmd(cmd.Cmd):
raise ex
elif statement.output:
if (not statement.output_to) and (not can_clip):
- raise EnvironmentError('Cannot redirect to paste buffer; install ``xclip`` and re-run to enable')
+ raise EnvironmentError("Cannot redirect to paste buffer; install 'pyperclip' and re-run to enable")
self.kept_state = Statekeeper(self, ('stdout',))
self.kept_sys = Statekeeper(sys, ('stdout',))
self.redirecting = True
if statement.output_to:
+ # going to a file
mode = 'w'
- if statement.output == 2 * self.redirector:
+ # statement.output can only contain
+ # REDIRECTION_APPEND or REDIRECTION_OUTPUT
+ if statement.output == constants.REDIRECTION_APPEND:
mode = 'a'
sys.stdout = self.stdout = open(os.path.expanduser(statement.output_to), mode)
else:
+ # going to a paste buffer
sys.stdout = self.stdout = tempfile.TemporaryFile(mode="w+")
- if statement.output == '>>':
+ if statement.output == constants.REDIRECTION_APPEND:
self.poutput(get_paste_buffer())
def _restore_output(self, statement):
- """Handles restoring state after output redirection as well as the actual pipe operation if present.
+ """Handles restoring state after output redirection as well as
+ the actual pipe operation if present.
- :param statement: Statement object which contains the parsed input from the user
+ :param statement: Statement object which contains the parsed
+ input from the user
"""
# If we have redirected output to a file or the clipboard or piped it to a shell command, then restore state
if self.kept_state is not None:
diff --git a/cmd2/constants.py b/cmd2/constants.py
index 838650e5..b829000f 100644
--- a/cmd2/constants.py
+++ b/cmd2/constants.py
@@ -4,9 +4,14 @@
import re
-# Used for command parsing, tab completion and word breaks. Do not change.
+# Used for command parsing, output redirection, tab completion and word
+# breaks. Do not change.
QUOTES = ['"', "'"]
-REDIRECTION_CHARS = ['|', '>']
+REDIRECTION_PIPE = '|'
+REDIRECTION_OUTPUT = '>'
+REDIRECTION_APPEND = '>>'
+REDIRECTION_CHARS = [REDIRECTION_PIPE, REDIRECTION_OUTPUT]
+REDIRECTION_TOKENS = [REDIRECTION_PIPE, REDIRECTION_OUTPUT, REDIRECTION_APPEND]
# Regular expression to match ANSI escape codes
ANSI_ESCAPE_RE = re.compile(r'\x1b[^m]*m')
diff --git a/cmd2/parsing.py b/cmd2/parsing.py
index 3a9b390b..ce15bd38 100644
--- a/cmd2/parsing.py
+++ b/cmd2/parsing.py
@@ -45,7 +45,8 @@ class Statement(str):
redirection, if any
:type suffix: str or None
:var pipe_to: if output was piped to a shell command, the shell command
- :type pipe_to: str or None
+ as a list of tokens
+ :type pipe_to: list
:var output: if output was redirected, the redirection token, i.e. '>>'
:type output: str or None
:var output_to: if output was redirected, the destination, usually a filename
@@ -283,12 +284,27 @@ class StatementParser:
argv = tokens
tokens = []
+ # check for a pipe to a shell process
+ # if there is a pipe, everything after the pipe needs to be passed
+ # to the shell, even redirected output
+ # this allows '(Cmd) say hello | wc > countit.txt'
+ try:
+ # find the first pipe if it exists
+ pipe_pos = tokens.index(constants.REDIRECTION_PIPE)
+ # save everything after the first pipe as tokens
+ pipe_to = tokens[pipe_pos+1:]
+ # remove all the tokens after the pipe
+ tokens = tokens[:pipe_pos]
+ except ValueError:
+ # no pipe in the tokens
+ pipe_to = None
+
# check for output redirect
output = None
output_to = None
try:
- output_pos = tokens.index('>')
- output = '>'
+ output_pos = tokens.index(constants.REDIRECTION_OUTPUT)
+ output = constants.REDIRECTION_OUTPUT
output_to = ' '.join(tokens[output_pos+1:])
# remove all the tokens after the output redirect
tokens = tokens[:output_pos]
@@ -296,26 +312,14 @@ class StatementParser:
pass
try:
- output_pos = tokens.index('>>')
- output = '>>'
+ output_pos = tokens.index(constants.REDIRECTION_APPEND)
+ output = constants.REDIRECTION_APPEND
output_to = ' '.join(tokens[output_pos+1:])
# remove all tokens after the output redirect
tokens = tokens[:output_pos]
except ValueError:
pass
- # check for pipes
- try:
- # find the first pipe if it exists
- pipe_pos = tokens.index('|')
- # save everything after the first pipe
- pipe_to = ' '.join(tokens[pipe_pos+1:])
- # remove all the tokens after the pipe
- tokens = tokens[:pipe_pos]
- except ValueError:
- # no pipe in the tokens
- pipe_to = None
-
if terminator:
# whatever is left is the suffix
suffix = ' '.join(tokens)
diff --git a/docs/freefeatures.rst b/docs/freefeatures.rst
index 95ae127c..a03a1d08 100644
--- a/docs/freefeatures.rst
+++ b/docs/freefeatures.rst
@@ -100,26 +100,8 @@ As in a Unix shell, output of a command can be redirected:
- appended to a file with ``>>``, as in ``mycommand args >> filename.txt``
- piped (``|``) as input to operating-system commands, as in
``mycommand args | wc``
- - sent to the paste buffer, ready for the next Copy operation, by
- ending with a bare ``>``, as in ``mycommand args >``.. Redirecting
- to paste buffer requires software to be installed on the operating
- system, pywin32_ on Windows or xclip_ on \*nix.
+ - sent to the operating system paste buffer, by ending with a bare ``>``, as in ``mycommand args >``. You can even append output to the current contents of the paste buffer by ending your command with ``>>``.
-If your application depends on mathematical syntax, ``>`` may be a bad
-choice for redirecting output - it will prevent you from using the
-greater-than sign in your actual user commands. You can override your
-app's value of ``self.redirector`` to use a different string for output redirection::
-
- class MyApp(cmd2.Cmd):
- redirector = '->'
-
-::
-
- (Cmd) say line1 -> out.txt
- (Cmd) say line2 ->-> out.txt
- (Cmd) !cat out.txt
- line1
- line2
.. note::
@@ -136,8 +118,8 @@ app's value of ``self.redirector`` to use a different string for output redirect
arguments after them from the command line arguments accordingly. But output from a command will not be redirected
to a file or piped to a shell command.
-.. _pywin32: http://sourceforge.net/projects/pywin32/
-.. _xclip: http://www.cyberciti.biz/faq/xclip-linux-insert-files-command-output-intoclipboard/
+If you need to include any of these redirection characters in your command,
+you can enclose them in quotation marks, ``mycommand 'with > in the argument'``.
Python
======
diff --git a/docs/unfreefeatures.rst b/docs/unfreefeatures.rst
index a4776a53..41144c8f 100644
--- a/docs/unfreefeatures.rst
+++ b/docs/unfreefeatures.rst
@@ -10,13 +10,17 @@ commands whose names are listed in the
parameter ``app.multiline_commands``. These
commands will be executed only
after the user has entered a *terminator*.
-By default, the command terminators is
+By default, the command terminator is
``;``; replacing or appending to the list
``app.terminators`` allows different
terminators. A blank line
is *always* considered a command terminator
(cannot be overridden).
+In multiline commands, output redirection characters
+like ``>`` and ``|`` are part of the command
+arguments unless they appear after the terminator.
+
Parsed statements
=================
| Cmd2.redirector isn't honored any more, either make it work or deprecate it
With the merge of PR #370, `Cmd2.redirector` is no longer honored: the redirector symbol is currently hard-coded as '>'. The documentation still states that you can set `Cmd2.redirector` to something like '->' and have the parsing logic honor that change. We need to either fix the code to match the documentation, or deprecate the attribute and fix the documentation to say that you can't change the redirector.
Related to #392. | python-cmd2/cmd2 | diff --git a/tests/test_cmd2.py b/tests/test_cmd2.py
index bc76505f..6e4a5a3e 100644
--- a/tests/test_cmd2.py
+++ b/tests/test_cmd2.py
@@ -1430,7 +1430,7 @@ def test_clipboard_failure(capsys):
# Make sure we got the error output
out, err = capsys.readouterr()
assert out == ''
- assert 'Cannot redirect to paste buffer; install ``xclip`` and re-run to enable' in err
+ assert "Cannot redirect to paste buffer; install 'pyperclip' and re-run to enable" in err
class CmdResultApp(cmd2.Cmd):
diff --git a/tests/test_parsing.py b/tests/test_parsing.py
index bfb55b23..41966c71 100644
--- a/tests/test_parsing.py
+++ b/tests/test_parsing.py
@@ -159,7 +159,7 @@ def test_parse_simple_pipe(parser, line):
assert statement.command == 'simple'
assert not statement.args
assert statement.argv == ['simple']
- assert statement.pipe_to == 'piped'
+ assert statement.pipe_to == ['piped']
def test_parse_double_pipe_is_not_a_pipe(parser):
line = 'double-pipe || is not a pipe'
@@ -177,7 +177,7 @@ def test_parse_complex_pipe(parser):
assert statement.argv == ['command', 'with', 'args,', 'terminator']
assert statement.terminator == '&'
assert statement.suffix == 'sufx'
- assert statement.pipe_to == 'piped'
+ assert statement.pipe_to == ['piped']
@pytest.mark.parametrize('line,output', [
('help > out.txt', '>'),
@@ -227,9 +227,9 @@ def test_parse_pipe_and_redirect(parser):
assert statement.argv == ['output', 'into']
assert statement.terminator == ';'
assert statement.suffix == 'sufx'
- assert statement.pipe_to == 'pipethrume plz'
- assert statement.output == '>'
- assert statement.output_to == 'afile.txt'
+ assert statement.pipe_to == ['pipethrume', 'plz', '>', 'afile.txt']
+ assert not statement.output
+ assert not statement.output_to
def test_parse_output_to_paste_buffer(parser):
line = 'output to paste buffer >> '
@@ -240,8 +240,9 @@ def test_parse_output_to_paste_buffer(parser):
assert statement.output == '>>'
def test_parse_redirect_inside_terminator(parser):
- """The terminator designates the end of the commmand/arguments portion. If a redirector
- occurs before a terminator, then it will be treated as part of the arguments and not as a redirector."""
+    """The terminator designates the end of the command/arguments portion.
+ If a redirector occurs before a terminator, then it will be treated as
+ part of the arguments and not as a redirector."""
line = 'has > inside;'
statement = parser.parse(line)
assert statement.command == 'has'
@@ -385,7 +386,7 @@ def test_parse_alias_pipe(parser, line):
statement = parser.parse(line)
assert statement.command == 'help'
assert not statement.args
- assert statement.pipe_to == 'less'
+ assert statement.pipe_to == ['less']
def test_parse_alias_terminator_no_whitespace(parser):
line = 'helpalias;'
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_issue_reference",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 6
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pyperclip>=1.6",
"pytest",
"sphinx",
"sphinx-rtd-theme",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
babel==2.17.0
certifi==2025.1.31
charset-normalizer==3.4.1
-e git+https://github.com/python-cmd2/cmd2.git@9d4d929709ffbcfcbd0974d8193c44d514f5a9b4#egg=cmd2
colorama==0.4.6
coverage==7.8.0
docutils==0.21.2
exceptiongroup==1.2.2
idna==3.10
imagesize==1.4.1
importlib_metadata==8.6.1
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==3.0.2
packaging==24.2
pluggy==1.5.0
Pygments==2.19.1
pyperclip==1.9.0
pytest==8.3.5
pytest-cov==6.0.0
requests==2.32.3
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinx-rtd-theme==3.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tomli==2.2.1
urllib3==2.3.0
wcwidth==0.2.13
zipp==3.21.0
| name: cmd2
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- babel==2.17.0
- certifi==2025.1.31
- charset-normalizer==3.4.1
- colorama==0.4.6
- coverage==7.8.0
- docutils==0.21.2
- exceptiongroup==1.2.2
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==3.0.2
- packaging==24.2
- pluggy==1.5.0
- pygments==2.19.1
- pyperclip==1.9.0
- pytest==8.3.5
- pytest-cov==6.0.0
- requests==2.32.3
- snowballstemmer==2.2.0
- sphinx==7.4.7
- sphinx-rtd-theme==3.0.2
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- tomli==2.2.1
- urllib3==2.3.0
- wcwidth==0.2.13
- zipp==3.21.0
prefix: /opt/conda/envs/cmd2
| [
"tests/test_cmd2.py::test_clipboard_failure",
"tests/test_parsing.py::test_parse_simple_pipe[simple",
"tests/test_parsing.py::test_parse_simple_pipe[simple|piped]",
"tests/test_parsing.py::test_parse_complex_pipe",
"tests/test_parsing.py::test_parse_pipe_and_redirect",
"tests/test_parsing.py::test_parse_alias_pipe[helpalias",
"tests/test_parsing.py::test_parse_alias_pipe[helpalias|less]"
]
| [
"tests/test_cmd2.py::test_base_invalid_option",
"tests/test_cmd2.py::test_which_editor_good"
]
| [
"tests/test_cmd2.py::test_ver",
"tests/test_cmd2.py::test_empty_statement",
"tests/test_cmd2.py::test_base_help",
"tests/test_cmd2.py::test_base_help_verbose",
"tests/test_cmd2.py::test_base_help_history",
"tests/test_cmd2.py::test_base_argparse_help",
"tests/test_cmd2.py::test_base_shortcuts",
"tests/test_cmd2.py::test_base_show",
"tests/test_cmd2.py::test_base_show_long",
"tests/test_cmd2.py::test_base_show_readonly",
"tests/test_cmd2.py::test_cast",
"tests/test_cmd2.py::test_cast_problems",
"tests/test_cmd2.py::test_base_set",
"tests/test_cmd2.py::test_set_not_supported",
"tests/test_cmd2.py::test_set_quiet",
"tests/test_cmd2.py::test_base_shell",
"tests/test_cmd2.py::test_base_py",
"tests/test_cmd2.py::test_base_run_python_script",
"tests/test_cmd2.py::test_base_run_pyscript",
"tests/test_cmd2.py::test_recursive_pyscript_not_allowed",
"tests/test_cmd2.py::test_pyscript_with_nonexist_file",
"tests/test_cmd2.py::test_pyscript_with_exception",
"tests/test_cmd2.py::test_pyscript_requires_an_argument",
"tests/test_cmd2.py::test_base_error",
"tests/test_cmd2.py::test_history_span",
"tests/test_cmd2.py::test_history_get",
"tests/test_cmd2.py::test_base_history",
"tests/test_cmd2.py::test_history_script_format",
"tests/test_cmd2.py::test_history_with_string_argument",
"tests/test_cmd2.py::test_history_with_integer_argument",
"tests/test_cmd2.py::test_history_with_integer_span",
"tests/test_cmd2.py::test_history_with_span_start",
"tests/test_cmd2.py::test_history_with_span_end",
"tests/test_cmd2.py::test_history_with_span_index_error",
"tests/test_cmd2.py::test_history_output_file",
"tests/test_cmd2.py::test_history_edit",
"tests/test_cmd2.py::test_history_run_all_commands",
"tests/test_cmd2.py::test_history_run_one_command",
"tests/test_cmd2.py::test_base_load",
"tests/test_cmd2.py::test_load_with_empty_args",
"tests/test_cmd2.py::test_load_with_nonexistent_file",
"tests/test_cmd2.py::test_load_with_empty_file",
"tests/test_cmd2.py::test_load_with_binary_file",
"tests/test_cmd2.py::test_load_with_utf8_file",
"tests/test_cmd2.py::test_load_nested_loads",
"tests/test_cmd2.py::test_base_runcmds_plus_hooks",
"tests/test_cmd2.py::test_base_relative_load",
"tests/test_cmd2.py::test_relative_load_requires_an_argument",
"tests/test_cmd2.py::test_output_redirection",
"tests/test_cmd2.py::test_feedback_to_output_true",
"tests/test_cmd2.py::test_feedback_to_output_false",
"tests/test_cmd2.py::test_allow_redirection",
"tests/test_cmd2.py::test_pipe_to_shell",
"tests/test_cmd2.py::test_pipe_to_shell_error",
"tests/test_cmd2.py::test_base_timing",
"tests/test_cmd2.py::test_base_debug",
"tests/test_cmd2.py::test_base_colorize",
"tests/test_cmd2.py::test_edit_no_editor",
"tests/test_cmd2.py::test_edit_file",
"tests/test_cmd2.py::test_edit_file_with_spaces",
"tests/test_cmd2.py::test_edit_blank",
"tests/test_cmd2.py::test_base_py_interactive",
"tests/test_cmd2.py::test_exclude_from_history",
"tests/test_cmd2.py::test_base_cmdloop_with_queue",
"tests/test_cmd2.py::test_base_cmdloop_without_queue",
"tests/test_cmd2.py::test_cmdloop_without_rawinput",
"tests/test_cmd2.py::test_precmd_hook_success",
"tests/test_cmd2.py::test_precmd_hook_failure",
"tests/test_cmd2.py::test_interrupt_quit",
"tests/test_cmd2.py::test_interrupt_noquit",
"tests/test_cmd2.py::test_default_to_shell_unknown",
"tests/test_cmd2.py::test_default_to_shell_good",
"tests/test_cmd2.py::test_default_to_shell_failure",
"tests/test_cmd2.py::test_ansi_prompt_not_esacped",
"tests/test_cmd2.py::test_ansi_prompt_escaped",
"tests/test_cmd2.py::test_custom_command_help",
"tests/test_cmd2.py::test_custom_help_menu",
"tests/test_cmd2.py::test_help_undocumented",
"tests/test_cmd2.py::test_help_overridden_method",
"tests/test_cmd2.py::test_help_cat_base",
"tests/test_cmd2.py::test_help_cat_verbose",
"tests/test_cmd2.py::test_select_options",
"tests/test_cmd2.py::test_select_invalid_option",
"tests/test_cmd2.py::test_select_list_of_strings",
"tests/test_cmd2.py::test_select_list_of_tuples",
"tests/test_cmd2.py::test_select_uneven_list_of_tuples",
"tests/test_cmd2.py::test_help_with_no_docstring",
"tests/test_cmd2.py::test_which_editor_bad",
"tests/test_cmd2.py::test_multiline_complete_empty_statement_raises_exception",
"tests/test_cmd2.py::test_multiline_complete_statement_without_terminator",
"tests/test_cmd2.py::test_cmdresult",
"tests/test_cmd2.py::test_is_text_file_bad_input",
"tests/test_cmd2.py::test_eof",
"tests/test_cmd2.py::test_eos",
"tests/test_cmd2.py::test_echo",
"tests/test_cmd2.py::test_pseudo_raw_input_tty_rawinput_true",
"tests/test_cmd2.py::test_pseudo_raw_input_tty_rawinput_false",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_true_echo_true",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_true_echo_false",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_false_echo_true",
"tests/test_cmd2.py::test_pseudo_raw_input_piped_rawinput_false_echo_false",
"tests/test_cmd2.py::test_raw_input",
"tests/test_cmd2.py::test_stdin_input",
"tests/test_cmd2.py::test_empty_stdin_input",
"tests/test_cmd2.py::test_poutput_string",
"tests/test_cmd2.py::test_poutput_zero",
"tests/test_cmd2.py::test_poutput_empty_string",
"tests/test_cmd2.py::test_poutput_none",
"tests/test_cmd2.py::test_alias",
"tests/test_cmd2.py::test_alias_lookup_invalid_alias",
"tests/test_cmd2.py::test_unalias",
"tests/test_cmd2.py::test_unalias_all",
"tests/test_cmd2.py::test_unalias_non_existing",
"tests/test_cmd2.py::test_create_invalid_alias[\">\"]",
"tests/test_cmd2.py::test_create_invalid_alias[\"no>pe\"]",
"tests/test_cmd2.py::test_create_invalid_alias[\"no",
"tests/test_cmd2.py::test_create_invalid_alias[\"nopipe|\"]",
"tests/test_cmd2.py::test_create_invalid_alias[\"noterm;\"]",
"tests/test_cmd2.py::test_create_invalid_alias[noembedded\"quotes]",
"tests/test_cmd2.py::test_ppaged",
"tests/test_cmd2.py::test_parseline_empty",
"tests/test_cmd2.py::test_parseline",
"tests/test_parsing.py::test_parse_empty_string",
"tests/test_parsing.py::test_tokenize[command-tokens0]",
"tests/test_parsing.py::test_tokenize[command",
"tests/test_parsing.py::test_tokenize[42",
"tests/test_parsing.py::test_tokenize[l-tokens4]",
"tests/test_parsing.py::test_tokenize[termbare",
"tests/test_parsing.py::test_tokenize[termbare;",
"tests/test_parsing.py::test_tokenize[termbare&",
"tests/test_parsing.py::test_tokenize[help|less-tokens9]",
"tests/test_parsing.py::test_tokenize[l|less-tokens10]",
"tests/test_parsing.py::test_tokenize_unclosed_quotes",
"tests/test_parsing.py::test_command_and_args[tokens0-None-None]",
"tests/test_parsing.py::test_command_and_args[tokens1-command-None]",
"tests/test_parsing.py::test_command_and_args[tokens2-command-arg1",
"tests/test_parsing.py::test_parse_single_word[plainword]",
"tests/test_parsing.py::test_parse_single_word[\"one",
"tests/test_parsing.py::test_parse_single_word['one",
"tests/test_parsing.py::test_parse_word_plus_terminator[termbare;-;]",
"tests/test_parsing.py::test_parse_word_plus_terminator[termbare",
"tests/test_parsing.py::test_parse_word_plus_terminator[termbare&-&]",
"tests/test_parsing.py::test_parse_suffix_after_terminator[termbare;",
"tests/test_parsing.py::test_parse_suffix_after_terminator[termbare",
"tests/test_parsing.py::test_parse_suffix_after_terminator[termbare&",
"tests/test_parsing.py::test_parse_command_with_args",
"tests/test_parsing.py::test_parse_command_with_quoted_args",
"tests/test_parsing.py::test_parse_command_with_args_terminator_and_suffix",
"tests/test_parsing.py::test_parse_hashcomment",
"tests/test_parsing.py::test_parse_c_comment",
"tests/test_parsing.py::test_parse_c_comment_empty",
"tests/test_parsing.py::test_parse_what_if_quoted_strings_seem_to_start_comments",
"tests/test_parsing.py::test_parse_double_pipe_is_not_a_pipe",
"tests/test_parsing.py::test_parse_redirect[help",
"tests/test_parsing.py::test_parse_redirect[help>out.txt->]",
"tests/test_parsing.py::test_parse_redirect[help>>out.txt->>]",
"tests/test_parsing.py::test_parse_redirect_with_args",
"tests/test_parsing.py::test_parse_redirect_with_dash_in_path",
"tests/test_parsing.py::test_parse_redirect_append",
"tests/test_parsing.py::test_parse_output_to_paste_buffer",
"tests/test_parsing.py::test_parse_redirect_inside_terminator",
"tests/test_parsing.py::test_parse_unfinished_multiliine_command",
"tests/test_parsing.py::test_parse_multiline_command_ignores_redirectors_within_it[multiline",
"tests/test_parsing.py::test_parse_multiline_with_incomplete_comment",
"tests/test_parsing.py::test_parse_multiline_with_complete_comment",
"tests/test_parsing.py::test_parse_multiline_termninated_by_empty_line",
"tests/test_parsing.py::test_parse_multiline_ignores_terminators_in_comments",
"tests/test_parsing.py::test_parse_command_with_unicode_args",
"tests/test_parsing.py::test_parse_unicode_command",
"tests/test_parsing.py::test_parse_redirect_to_unicode_filename",
"tests/test_parsing.py::test_parse_unclosed_quotes",
"tests/test_parsing.py::test_empty_statement_raises_exception",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[helpalias-help-None]",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[helpalias",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[42-theanswer-None]",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[42",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[!ls-shell-ls]",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[!ls",
"tests/test_parsing.py::test_parse_alias_and_shortcut_expansion[l-shell-ls",
"tests/test_parsing.py::test_parse_alias_on_multiline_command",
"tests/test_parsing.py::test_parse_alias_redirection[helpalias",
"tests/test_parsing.py::test_parse_alias_redirection[helpalias>out.txt->]",
"tests/test_parsing.py::test_parse_alias_redirection[helpalias>>out.txt->>]",
"tests/test_parsing.py::test_parse_alias_terminator_no_whitespace",
"tests/test_parsing.py::test_parse_command_only_command_and_args",
"tests/test_parsing.py::test_parse_command_only_emptyline",
"tests/test_parsing.py::test_parse_command_only_strips_line",
"tests/test_parsing.py::test_parse_command_only_expands_alias",
"tests/test_parsing.py::test_parse_command_only_expands_shortcuts",
"tests/test_parsing.py::test_parse_command_only_quoted_args",
"tests/test_parsing.py::test_parse_command_only_specialchars[helpalias",
"tests/test_parsing.py::test_parse_command_only_specialchars[helpalias>out.txt]",
"tests/test_parsing.py::test_parse_command_only_specialchars[helpalias>>out.txt]",
"tests/test_parsing.py::test_parse_command_only_specialchars[help|less]",
"tests/test_parsing.py::test_parse_command_only_specialchars[helpalias;]",
"tests/test_parsing.py::test_parse_command_only_none[;]",
"tests/test_parsing.py::test_parse_command_only_none[>]",
"tests/test_parsing.py::test_parse_command_only_none[']",
"tests/test_parsing.py::test_parse_command_only_none[\"]",
"tests/test_parsing.py::test_parse_command_only_none[|]"
]
| []
| MIT License | 2,499 | [
"cmd2/cmd2.py",
"cmd2/parsing.py",
"CHANGELOG.md",
"cmd2/constants.py",
"docs/freefeatures.rst",
"docs/unfreefeatures.rst"
]
| [
"cmd2/cmd2.py",
"cmd2/parsing.py",
"CHANGELOG.md",
"cmd2/constants.py",
"docs/freefeatures.rst",
"docs/unfreefeatures.rst"
]
|
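The parsing order the patch above introduces — split at the first pipe *before* looking for output redirection, so that `say hello | wc > countit.txt` hands the redirect to the piped shell command instead of to cmd2 — can be sketched as a standalone function. This is an illustrative reimplementation, not the actual `cmd2` code; the constant names mirror those added to `cmd2/constants.py`.

```python
# Illustrative sketch of the token-splitting order: pipe first, then
# output redirection on whatever tokens remain before the pipe.

REDIRECTION_PIPE = '|'
REDIRECTION_OUTPUT = '>'
REDIRECTION_APPEND = '>>'


def split_redirection(tokens):
    """Return (tokens, pipe_to, output, output_to) for a tokenized command line."""
    pipe_to = None
    try:
        pipe_pos = tokens.index(REDIRECTION_PIPE)
        # Everything after the first pipe goes to the shell, as a token list,
        # including any redirection tokens it contains.
        pipe_to = tokens[pipe_pos + 1:]
        tokens = tokens[:pipe_pos]
    except ValueError:
        pass  # no pipe present

    output = output_to = None
    for redirect in (REDIRECTION_OUTPUT, REDIRECTION_APPEND):
        try:
            pos = tokens.index(redirect)
            output = redirect
            output_to = ' '.join(tokens[pos + 1:])
            tokens = tokens[:pos]
        except ValueError:
            pass
    return tokens, pipe_to, output, output_to
```

With this ordering, `['say', 'hello', '|', 'wc', '>', 'countit.txt']` yields `pipe_to == ['wc', '>', 'countit.txt']` and no cmd2-level redirection, which is exactly the behavior the changed tests assert.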
|
fniessink__next-action-22 | 958d1058cdde5b17f9cf224a77b94044b6f0cef2 | 2018-05-10 20:56:59 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6b32444..4fbf629 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
### Added
+- Allow for limiting the next action to a specific context. Closes #5.
- Version number command line argument (`--version`) to display Next-action's version number.
### Fixed
diff --git a/README.md b/README.md
index 35543d7..2d235e0 100644
--- a/README.md
+++ b/README.md
@@ -23,16 +23,17 @@ Next-action requires Python 3.6 or newer.
```console
$ next_action --help
-
-usage: next_action [-h] [-f FILE] [--version]
+usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]
Show the next action in your todo.txt
+positional arguments:
+ @CONTEXT Show the next action in the specified context (default: None)
+
optional arguments:
-h, --help show this help message and exit
- -f FILE, --file FILE filename of the todo.txt file to read (default:
- todo.txt)
--version show program's version number and exit
+ -f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
```
## Develop
diff --git a/next_action/__init__.py b/next_action/__init__.py
index 7fe93c7..bdbeff0 100644
--- a/next_action/__init__.py
+++ b/next_action/__init__.py
@@ -1,3 +1,4 @@
from .pick_action import next_action_based_on_priority
+__title__ = "next-action"
__version__ = "0.0.3"
diff --git a/next_action/arguments.py b/next_action/arguments.py
index 90f69bf..8485bf5 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -1,13 +1,31 @@
import argparse
+import os
+import shutil
import next_action
+class ContextAction(argparse.Action):
+ """ A context action that checks whether contexts start with an @. """
+ def __call__(self, parser, namespace, values, option_string=None):
+ if not values:
+ return
+ if values.startswith("@"):
+ setattr(namespace, self.dest, values)
+ else:
+ parser.error("Contexts should start with an @.")
+
+
def parse_arguments() -> argparse.Namespace:
""" Parse the command line arguments. """
+ # Ensure that the help info is printed using all columns available
+ os.environ['COLUMNS'] = str(shutil.get_terminal_size().columns)
+
parser = argparse.ArgumentParser(description="Show the next action in your todo.txt",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument("--version", action="version", version="Next-action {0}".format(next_action.__version__))
parser.add_argument("-f", "--file", help="filename of the todo.txt file to read",
type=str, default="todo.txt")
- parser.add_argument("--version", action="version", version="Next-action {0}".format(next_action.__version__))
+ parser.add_argument("context", metavar="@CONTEXT", help="Show the next action in the specified context", nargs="?",
+ type=str, action=ContextAction)
return parser.parse_args()
diff --git a/next_action/cli.py b/next_action/cli.py
index baebe98..45dc8d8 100644
--- a/next_action/cli.py
+++ b/next_action/cli.py
@@ -4,7 +4,8 @@ from next_action.arguments import parse_arguments
def next_action() -> None:
- filename: str = parse_arguments().file
+ arguments = parse_arguments()
+ filename: str = arguments.file
try:
todotxt_file = open(filename, "r")
except FileNotFoundError:
@@ -12,5 +13,5 @@ def next_action() -> None:
return
with todotxt_file:
tasks = [Task(line.strip()) for line in todotxt_file.readlines()]
- action = next_action_based_on_priority(tasks)
+ action = next_action_based_on_priority(tasks, arguments.context)
print(action.text if action else "Nothing to do!")
diff --git a/next_action/pick_action.py b/next_action/pick_action.py
index 7366341..35dd9fd 100644
--- a/next_action/pick_action.py
+++ b/next_action/pick_action.py
@@ -3,8 +3,12 @@ from typing import Optional, Iterable
from .todotxt import Task
-def next_action_based_on_priority(tasks: Iterable[Task]) -> Optional[Task]:
+def next_action_based_on_priority(tasks: Iterable[Task], context: str = "") -> Optional[Task]:
""" Return the next action from the collection of tasks. """
uncompleted_tasks = [task for task in tasks if not task.is_completed()]
- sorted_tasks = sorted(uncompleted_tasks, key=lambda task: task.priority() or "ZZZ")
+ if context:
+ tasks_in_context = [task for task in tasks if context.strip("@") in task.contexts()]
+ else:
+ tasks_in_context = uncompleted_tasks
+ sorted_tasks = sorted(tasks_in_context, key=lambda task: task.priority() or "ZZZ")
return sorted_tasks[0] if sorted_tasks else None
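Stated outside the diff, the selection logic amounts to: drop completed tasks, optionally keep only tasks whose contexts include the requested one, and pick the highest-priority task that remains. A minimal stand-in follows — the `Task` class here is a hypothetical simplification of the real todotxt `Task`, and the sketch applies the context filter to the uncompleted tasks rather than to the full list.

```python
# Minimal stand-in for the selection logic in pick_action.py; the Task
# class below is a hypothetical simplification of the real todotxt Task.

class Task:
    def __init__(self, text):
        self.text = text

    def is_completed(self):
        # Completed todo.txt tasks start with 'x '
        return self.text.startswith('x ')

    def priority(self):
        # '(A) ...' style priority marker at the start of the line
        if len(self.text) > 2 and self.text[0] == '(' and self.text[2] == ')':
            return self.text[1]
        return None

    def contexts(self):
        return {word[1:] for word in self.text.split() if word.startswith('@')}


def next_action_based_on_priority(tasks, context=''):
    candidates = [task for task in tasks if not task.is_completed()]
    if context:
        candidates = [task for task in candidates
                      if context.strip('@') in task.contexts()]
    # Tasks without a priority sort after any prioritized task
    candidates.sort(key=lambda task: task.priority() or 'ZZZ')
    return candidates[0] if candidates else None
```

For example, with `(A) Call mom @phone` and `(B) Install smoke detectors @home` in the list, the unfiltered call returns the `(A)` task, while passing `@home` returns the `(B)` task — the behavior the issue asks for.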
diff --git a/setup.py b/setup.py
index a7d4775..46d8260 100644
--- a/setup.py
+++ b/setup.py
@@ -4,15 +4,15 @@ import next_action
setup(
- name="next-action",
+ name=next_action.__title__,
version=next_action.__version__,
- description="Show the next action to work on from a todo.txt file",
+ description="Command-line application to show the next action to work on from a todo.txt file",
long_description="""Show the next action to work on from a todo.txt file, based on context, priority,
and more.""",
- author='Frank Niessink',
- author_email='[email protected]',
- url='https://github.com/fniessink/next-action',
- license='Apache License, Version 2.0',
+ author="Frank Niessink",
+ author_email="[email protected]",
+ url="https://github.com/fniessink/next-action",
+ license="Apache License, Version 2.0",
python_requires=">=3.6",
packages=find_packages(),
entry_points={
| Allow for specifying a context
`$ next-action @home`
(A) Install smoke detectors @home | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index 48c5f17..3156a25 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -1,6 +1,7 @@
-import unittest
-from unittest.mock import patch
+import os
import sys
+import unittest
+from unittest.mock import patch, call
from next_action.arguments import parse_arguments
@@ -22,3 +23,23 @@ class ArgumentParserTest(unittest.TestCase):
def test_long_filename_argument(self):
""" Test that the argument parser accepts a filename. """
self.assertEqual("my_other_todo.txt", parse_arguments().file)
+
+ @patch.object(sys, "argv", ["next_action"])
+ def test_no_context(self):
+ """ Test that the argument parser returns no context if the user doesn't pass one. """
+ self.assertEqual(None, parse_arguments().context)
+
+ @patch.object(sys, "argv", ["next_action", "@home"])
+ def test_one_context(self):
+ """ Test that the argument parser returns the context if the user passes one. """
+ self.assertEqual("@home", parse_arguments().context)
+
+ @patch.object(sys, "argv", ["next_action", "home"])
+ @patch.object(sys.stderr, "write")
+ def test_faulty_context(self, mock_stderr_write):
+ """ Test that the argument parser exits if the context is faulty. """
+ os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
+ self.assertRaises(SystemExit, parse_arguments)
+ self.assertEqual([call("usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]\n"),
+ call("next_action: error: Contexts should start with an @.\n")],
+ mock_stderr_write.call_args_list)
\ No newline at end of file
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index b84f948..3cdefe1 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -1,6 +1,7 @@
+import os
+import sys
import unittest
from unittest.mock import patch, mock_open, call
-import sys
from next_action.cli import next_action
from next_action import __version__
@@ -37,20 +38,20 @@ class CLITest(unittest.TestCase):
@patch.object(sys, "argv", ["next_action", "--help"])
@patch.object(sys.stdout, "write")
def test_help(self, mock_stdout_write):
- """ Test that the help contains the default filename. """
- try:
- next_action()
- except SystemExit:
- pass
- self.assertEqual(call("""usage: next_action [-h] [-f FILE] [--version]
+ """ Test the help message. """
+ os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
+ self.assertRaises(SystemExit, next_action)
+ self.assertEqual(call("""usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]
Show the next action in your todo.txt
+positional arguments:
+ @CONTEXT Show the next action in the specified context (default: None)
+
optional arguments:
-h, --help show this help message and exit
- -f FILE, --file FILE filename of the todo.txt file to read (default:
- todo.txt)
--version show program's version number and exit
+ -f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
"""),
mock_stdout_write.call_args_list[0])
@@ -58,8 +59,5 @@ optional arguments:
@patch.object(sys.stdout, "write")
def test_version(self, mock_stdout_write):
""" Test that --version shows the version number. """
- try:
- next_action()
- except SystemExit:
- pass
+ self.assertRaises(SystemExit, next_action)
self.assertEqual([call("Next-action {0}\n".format(__version__))], mock_stdout_write.call_args_list)
\ No newline at end of file
diff --git a/tests/unittests/test_pick_action.py b/tests/unittests/test_pick_action.py
index 8910ff4..16f37f6 100644
--- a/tests/unittests/test_pick_action.py
+++ b/tests/unittests/test_pick_action.py
@@ -40,4 +40,11 @@ class PickActionTest(unittest.TestCase):
completed_task = todotxt.Task("x Completed")
uncompleted_task = todotxt.Task("Todo")
self.assertEqual(uncompleted_task,
- pick_action.next_action_based_on_priority([completed_task, uncompleted_task]))
\ No newline at end of file
+ pick_action.next_action_based_on_priority([completed_task, uncompleted_task]))
+
+ def test_next_action_limited_to_context(self):
+ """ Test that the next action can be limited to a specific context. """
+ task1 = todotxt.Task("Todo 1 @work")
+ task2 = todotxt.Task("(B) Todo 2 @work")
+ task3 = todotxt.Task("(A) Todo 3 @home")
+ self.assertEqual(task2, pick_action.next_action_based_on_priority([task1, task2, task3], context="work"))
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 1
},
"num_modified_files": 7
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@958d1058cdde5b17f9cf224a77b94044b6f0cef2#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_cli.py::CLITest::test_help",
"tests/unittests/test_pick_action.py::PickActionTest::test_next_action_limited_to_context"
]
| []
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_one_task",
"tests/unittests/test_cli.py::CLITest::test_version",
"tests/unittests/test_pick_action.py::PickActionTest::test_completed_task_is_not_next_action_based_on_priority",
"tests/unittests/test_pick_action.py::PickActionTest::test_completed_tasks_are_not_next_action_based_on_priority",
"tests/unittests/test_pick_action.py::PickActionTest::test_higher_prio_goes_first",
"tests/unittests/test_pick_action.py::PickActionTest::test_multiple_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_no_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_one_task"
]
| []
| Apache License 2.0 | 2,500 | [
"setup.py",
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
| [
"setup.py",
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
|
|
fniessink__next-action-24 | 5c8b2ad17a806bae876c00b86ac4ee8402ecdc84 | 2018-05-11 10:13:48 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index f34f269..0f35cce 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,13 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+### Changed
+
+- Renamed Next-action's binary from `next_action` to `next-action` for consistency with the application and project name.
+
## [0.0.4] - 2018-05-10
### Added
diff --git a/README.md b/README.md
index 2d235e0..df935dd 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@
[](https://www.codacy.com/app/frank_10/next-action?utm_source=github.com&utm_medium=referral&utm_content=fniessink/next-action&utm_campaign=Badge_Grade)
[](https://www.codacy.com/app/frank_10/next-action?utm_source=github.com&utm_medium=referral&utm_content=fniessink/next-action&utm_campaign=Badge_Coverage)
-Determine the next action to work on from a list of actions in a todo.txt file.
+Determine the next action to work on from a list of actions in a todo.txt file. Next-action is alpha-stage at the moment, so its options are rather limited.
Don't know what todo.txt is? See <https://github.com/todotxt/todo.txt> for the todo.txt specification.
@@ -22,8 +22,8 @@ Next-action requires Python 3.6 or newer.
## Usage
```console
-$ next_action --help
-usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]
+$ next-action --help
+usage: next-action [-h] [--version] [-f FILE] [@CONTEXT]
Show the next action in your todo.txt
@@ -36,6 +36,22 @@ optional arguments:
-f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
```
+Assuming your todo.txt file is in the current folder, running Next-action without arguments will show the next action you should do based on your tasks' priorities:
+
+```console
+$ next-action
+(A) Call mom @phone
+```
+
+You can limit the tasks from which Next-action picks the next action by passing a context:
+
+```console
+$ next-action @work
+(C) Finish proposal for important client @work
+```
+
+Since Next-action is still alpha-stage, this is it for the moment. Stay tuned for more options.
+
## Develop
Clone the repository and run the unit tests with `python setup.py test`.
diff --git a/next_action/arguments.py b/next_action/arguments.py
index 8485bf5..6889cd5 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -23,7 +23,7 @@ def parse_arguments() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Show the next action in your todo.txt",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument("--version", action="version", version="Next-action {0}".format(next_action.__version__))
+ parser.add_argument("--version", action="version", version="%(prog)s {0}".format(next_action.__version__))
parser.add_argument("-f", "--file", help="filename of the todo.txt file to read",
type=str, default="todo.txt")
parser.add_argument("context", metavar="@CONTEXT", help="Show the next action in the specified context", nargs="?",
diff --git a/setup.py b/setup.py
index 46d8260..4bb8d0d 100644
--- a/setup.py
+++ b/setup.py
@@ -17,7 +17,7 @@ and more.""",
packages=find_packages(),
entry_points={
"console_scripts": [
- "next_action=next_action.cli:next_action",
+ "next-action=next_action.cli:next_action",
],
},
test_suite="tests",
| Change binary to next-action for consistency with application name | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index 3156a25..3c60285 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -9,37 +9,37 @@ from next_action.arguments import parse_arguments
class ArgumentParserTest(unittest.TestCase):
""" Unit tests for the argument parses. """
- @patch.object(sys, "argv", ["next_action"])
+ @patch.object(sys, "argv", ["next-action"])
def test_default_filename(self):
""" Test that the argument parser has a default filename. """
self.assertEqual("todo.txt", parse_arguments().file)
- @patch.object(sys, "argv", ["next_action", "-f", "my_todo.txt"])
+ @patch.object(sys, "argv", ["next-action", "-f", "my_todo.txt"])
def test_filename_argument(self):
""" Test that the argument parser accepts a filename. """
self.assertEqual("my_todo.txt", parse_arguments().file)
- @patch.object(sys, "argv", ["next_action", "--file", "my_other_todo.txt"])
+ @patch.object(sys, "argv", ["next-action", "--file", "my_other_todo.txt"])
def test_long_filename_argument(self):
""" Test that the argument parser accepts a filename. """
self.assertEqual("my_other_todo.txt", parse_arguments().file)
- @patch.object(sys, "argv", ["next_action"])
+ @patch.object(sys, "argv", ["next-action"])
def test_no_context(self):
""" Test that the argument parser returns no context if the user doesn't pass one. """
self.assertEqual(None, parse_arguments().context)
- @patch.object(sys, "argv", ["next_action", "@home"])
+ @patch.object(sys, "argv", ["next-action", "@home"])
def test_one_context(self):
""" Test that the argument parser returns the context if the user passes one. """
self.assertEqual("@home", parse_arguments().context)
- @patch.object(sys, "argv", ["next_action", "home"])
+ @patch.object(sys, "argv", ["next-action", "home"])
@patch.object(sys.stderr, "write")
def test_faulty_context(self, mock_stderr_write):
""" Test that the argument parser exits if the context is faulty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call("usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]\n"),
- call("next_action: error: Contexts should start with an @.\n")],
+ self.assertEqual([call("usage: next-action [-h] [--version] [-f FILE] [@CONTEXT]\n"),
+ call("next-action: error: Contexts should start with an @.\n")],
mock_stderr_write.call_args_list)
\ No newline at end of file
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 3cdefe1..5b20ad6 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -10,7 +10,7 @@ from next_action import __version__
class CLITest(unittest.TestCase):
""" Unit tests for the command-line interface. """
- @patch.object(sys, "argv", ["next_action"])
+ @patch.object(sys, "argv", ["next-action"])
@patch("next_action.cli.open", mock_open(read_data=""))
@patch.object(sys.stdout, "write")
def test_empty_task_file(self, mock_stdout_write):
@@ -18,7 +18,7 @@ class CLITest(unittest.TestCase):
next_action()
self.assertEqual([call("Nothing to do!"), call("\n")], mock_stdout_write.call_args_list)
- @patch.object(sys, "argv", ["next_action"])
+ @patch.object(sys, "argv", ["next-action"])
@patch("next_action.cli.open", mock_open(read_data="Todo\n"))
@patch.object(sys.stdout, "write")
def test_one_task(self, mock_stdout_write):
@@ -26,7 +26,7 @@ class CLITest(unittest.TestCase):
next_action()
self.assertEqual([call("Todo"), call("\n")], mock_stdout_write.call_args_list)
- @patch.object(sys, "argv", ["next_action"])
+ @patch.object(sys, "argv", ["next-action"])
@patch("next_action.cli.open")
@patch.object(sys.stdout, "write")
def test_missing_file(self, mock_stdout_write, mock_open):
@@ -35,13 +35,13 @@ class CLITest(unittest.TestCase):
next_action()
self.assertEqual([call("Can't find todo.txt"), call("\n")], mock_stdout_write.call_args_list)
- @patch.object(sys, "argv", ["next_action", "--help"])
+ @patch.object(sys, "argv", ["next-action", "--help"])
@patch.object(sys.stdout, "write")
def test_help(self, mock_stdout_write):
""" Test the help message. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, next_action)
- self.assertEqual(call("""usage: next_action [-h] [--version] [-f FILE] [@CONTEXT]
+ self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [@CONTEXT]
Show the next action in your todo.txt
@@ -55,9 +55,9 @@ optional arguments:
"""),
mock_stdout_write.call_args_list[0])
- @patch.object(sys, "argv", ["next_action", "--version"])
+ @patch.object(sys, "argv", ["next-action", "--version"])
@patch.object(sys.stdout, "write")
def test_version(self, mock_stdout_write):
""" Test that --version shows the version number. """
self.assertRaises(SystemExit, next_action)
- self.assertEqual([call("Next-action {0}\n".format(__version__))], mock_stdout_write.call_args_list)
\ No newline at end of file
+ self.assertEqual([call("next-action {0}\n".format(__version__))], mock_stdout_write.call_args_list)
\ No newline at end of file
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 4
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@5c8b2ad17a806bae876c00b86ac4ee8402ecdc84#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_cli.py::CLITest::test_version"
]
| []
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_help",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_one_task"
]
| []
| Apache License 2.0 | 2,501 | [
"setup.py",
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
| [
"setup.py",
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
|
|
EdinburghGenomics__pyclarity-lims-33 | d73f5b7d76f0d65b4fe51fbc80e4bf9f49903a6c | 2018-05-11 11:29:17 | a03be6eda34f0d8adaf776d2286198a34e40ecf5 | diff --git a/pyclarity_lims/lims.py b/pyclarity_lims/lims.py
index c00b1a1..532b315 100644
--- a/pyclarity_lims/lims.py
+++ b/pyclarity_lims/lims.py
@@ -210,7 +210,8 @@ class Lims(object):
root = ElementTree.fromstring(response.content)
return root
- def get_udfs(self, name=None, attach_to_name=None, attach_to_category=None, start_index=None, add_info=False):
+ def get_udfs(self, name=None, attach_to_name=None, attach_to_category=None, start_index=None, nb_pages=-1,
+ add_info=False):
"""Get a list of udfs, filtered by keyword arguments.
:param name: name of udf
@@ -218,7 +219,9 @@ class Lims(object):
Sample, Project, Container, or the name of a process.
:param attach_to_category: If 'attach_to_name' is the name of a process, such as 'CaliperGX QC (DNA)',
then you need to set attach_to_category='ProcessType'. Must not be provided otherwise.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
"""
@@ -226,21 +229,23 @@ class Lims(object):
attach_to_name=attach_to_name,
attach_to_category=attach_to_category,
start_index=start_index)
- return self._get_instances(Udfconfig, add_info=add_info, params=params)
+ return self._get_instances(Udfconfig, add_info=add_info, nb_pages=nb_pages, params=params)
- def get_reagent_types(self, name=None, start_index=None):
+ def get_reagent_types(self, name=None, start_index=None, nb_pages=-1):
"""
Get a list of reagent types, filtered by keyword arguments.
:param name: Reagent type name, or list of names.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
"""
params = self._get_params(name=name,
start_index=start_index)
- return self._get_instances(ReagentType, params=params)
+ return self._get_instances(ReagentType, nb_pages=nb_pages, params=params)
def get_labs(self, name=None, last_modified=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None, add_info=False):
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1, add_info=False):
"""Get a list of labs, filtered by keyword arguments.
:param name: Lab name, or list of names.
@@ -249,7 +254,9 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
"""
@@ -257,11 +264,11 @@ class Lims(object):
last_modified=last_modified,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Lab, add_info=add_info, params=params)
+ return self._get_instances(Lab, add_info=add_info, nb_pages=nb_pages, params=params)
def get_researchers(self, firstname=None, lastname=None, username=None,
last_modified=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None,
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1,
add_info=False):
"""Get a list of researchers, filtered by keyword arguments.
@@ -273,7 +280,9 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
@@ -284,10 +293,10 @@ class Lims(object):
last_modified=last_modified,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Researcher, add_info=add_info, params=params)
+ return self._get_instances(Researcher, add_info=add_info, nb_pages=nb_pages, params=params)
def get_projects(self, name=None, open_date=None, last_modified=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None,
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1,
add_info=False):
"""Get a list of projects, filtered by keyword arguments.
@@ -298,7 +307,9 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
@@ -308,14 +319,16 @@ class Lims(object):
last_modified=last_modified,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Project, add_info=add_info, params=params)
+ return self._get_instances(Project, add_info=add_info, nb_pages=nb_pages, params=params)
def get_sample_number(self, name=None, projectname=None, projectlimsid=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None):
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1):
"""
Gets the number of samples matching the query without fetching every
sample, so it should be faster than len(get_samples())
"""
+ # TODO: I doubt that this makes any difference in terms of speed since the only thing it saves is the Sample
+ # construction. We should test and replace it with len(get_samples())
params = self._get_params(name=name,
projectname=projectname,
projectlimsid=projectlimsid,
@@ -331,7 +344,7 @@ class Lims(object):
return total
def get_samples(self, name=None, projectname=None, projectlimsid=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None):
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1):
"""Get a list of samples, filtered by keyword arguments.
:param name: Sample name, or list of names.
@@ -341,21 +354,22 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
-
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
"""
params = self._get_params(name=name,
projectname=projectname,
projectlimsid=projectlimsid,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Sample, params=params)
+ return self._get_instances(Sample, nb_pages=nb_pages, params=params)
def get_artifacts(self, name=None, type=None, process_type=None,
artifact_flag_name=None, working_flag=None, qc_flag=None,
sample_name=None, samplelimsid=None, artifactgroup=None, containername=None,
containerlimsid=None, reagent_label=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None,
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1,
resolve=False):
"""Get a list of artifacts, filtered by keyword arguments.
@@ -375,9 +389,10 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param resolve: Send a batch query to the lims to get the content of all artifacts retrieved
-
"""
params = self._get_params(name=name,
type=type,
@@ -394,13 +409,13 @@ class Lims(object):
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
if resolve:
- return self.get_batch(self._get_instances(Artifact, params=params))
+ return self.get_batch(self._get_instances(Artifact, nb_pages=nb_pages, params=params))
else:
- return self._get_instances(Artifact, params=params)
+ return self._get_instances(Artifact, nb_pages=nb_pages, params=params)
def get_containers(self, name=None, type=None,
state=None, last_modified=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None,
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1,
add_info=False):
"""Get a list of containers, filtered by keyword arguments.
@@ -412,10 +427,11 @@ class Lims(object):
:param udtname: UDT name, or list of names.
:param udt: dictionary of UDT UDFs with 'UDTNAME.UDFNAME[OPERATOR]' as keys
and a string or list of strings as value.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
-
"""
params = self._get_params(name=name,
type=type,
@@ -423,24 +439,25 @@ class Lims(object):
last_modified=last_modified,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Container, add_info=add_info, params=params)
+ return self._get_instances(Container, add_info=add_info, nb_pages=nb_pages, params=params)
- def get_container_types(self, name=None, start_index=None, add_info=False):
+ def get_container_types(self, name=None, start_index=None, nb_pages=-1, add_info=False):
"""Get a list of container types, filtered by keyword arguments.
:param name: name of the container type or list of names.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
-
"""
params = self._get_params(name=name, start_index=start_index)
- return self._get_instances(Containertype, add_info=add_info, params=params)
+ return self._get_instances(Containertype, add_info=add_info, nb_pages=nb_pages, params=params)
def get_processes(self, last_modified=None, type=None,
inputartifactlimsid=None,
techfirstname=None, techlastname=None, projectname=None,
- udf=dict(), udtname=None, udt=dict(), start_index=None):
+ udf=dict(), udtname=None, udt=dict(), start_index=None, nb_pages=-1):
"""Get a list of processes, filtered by keyword arguments.
:param last_modified: Since the given ISO format datetime.
@@ -453,7 +470,9 @@ class Lims(object):
:param techfirstname: First name of researcher, or list of.
:param techlastname: Last name of researcher, or list of.
:param projectname: Name of project, or list of.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of page to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers returns all pages.
"""
params = self._get_params(last_modified=last_modified,
type=type,
@@ -463,7 +482,7 @@ class Lims(object):
projectname=projectname,
start_index=start_index)
params.update(self._get_params_udf(udf=udf, udtname=udtname, udt=udt))
- return self._get_instances(Process, params=params)
+ return self._get_instances(Process, nb_pages=nb_pages, params=params)
def get_workflows(self, name=None, add_info=False):
"""
@@ -513,32 +532,35 @@ class Lims(object):
params = self._get_params(name=name)
return self._get_instances(Protocol, add_info=add_info, params=params)
- def get_reagent_kits(self, name=None, start_index=None, add_info=False):
+ def get_reagent_kits(self, name=None, start_index=None, nb_pages=-1, add_info=False):
"""Get a list of reagent kits, filtered by keyword arguments.
:param name: reagent kit name, or list of names.
- :param start_index: Page to retrieve; all if None.
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of pages to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers return all pages.
:param add_info: Change the return type to a tuple where the first element is normal return and
the second is a dict of additional information provided in the query.
"""
params = self._get_params(name=name,
start_index=start_index)
- return self._get_instances(ReagentKit, add_info=add_info, params=params)
+ return self._get_instances(ReagentKit, add_info=add_info, nb_pages=nb_pages, params=params)
def get_reagent_lots(self, name=None, kitname=None, number=None,
- start_index=None):
+ start_index=None, nb_pages=-1):
"""Get a list of reagent lots, filtered by keyword arguments.
:param name: reagent kit name, or list of names.
:param kitname: name of the kit this lots belong to
:param number: lot number or list of lot number
- :param start_index: Page to retrieve; all if None.
-
+ :param start_index: first element to retrieve; start at first element if None.
+ :param nb_pages: number of pages to iterate over. The page size is 500 by default unless configured otherwise
+ in your LIMS. 0 or negative numbers return all pages.
"""
params = self._get_params(name=name, kitname=kitname, number=number,
start_index=start_index)
- return self._get_instances(ReagentLot, params=params)
+ return self._get_instances(ReagentLot, nb_pages=nb_pages, params=params)
def _get_params(self, **kwargs):
"""Convert keyword arguments to a kwargs dictionary."""
@@ -560,14 +582,15 @@ class Lims(object):
result["udt.%s" % key] = value
return result
- def _get_instances(self, klass, add_info=None, params=dict()):
+ def _get_instances(self, klass, add_info=None, nb_pages=-1, params=dict()):
results = []
additionnal_info_dicts = []
tag = klass._TAG
if tag is None:
tag = klass.__name__.lower()
root = self.get(self.get_uri(klass._URI), params=params)
- while params.get('start-index') is None: # Loop over all pages.
+ while root: # Loop over all requested pages.
+ nb_pages -= 1
for node in root.findall(tag):
results.append(klass(self, uri=node.attrib['uri']))
info_dict = {}
@@ -577,9 +600,10 @@ class Lims(object):
info_dict[subnode.tag] = subnode.text
additionnal_info_dicts.append(info_dict)
node = root.find('next-page')
- if node is None:
- break
- root = self.get(node.attrib['uri'], params=params)
+ if node is None or nb_pages == 0:
+ root = None
+ else:
+ root = self.get(node.attrib['uri'], params=params)
if add_info:
return results, additionnal_info_dicts
else:
| Lims _get_instances() returns empty array when start_index is set.
An empty array is always returned when `start_index` is set while retrieving a list of entities using the `get_*` methods of the `Lims` class. For example:
```
samples = l.get_samples(start_index=500)
# samples == []
```
The problem is in the [_get_instances()](https://github.com/EdinburghGenomics/pyclarity-lims/blob/master/pyclarity_lims/lims.py#L563-L586) method. The response is only parsed when `start_index` is `None`. | EdinburghGenomics/pyclarity-lims | diff --git a/tests/test_lims.py b/tests/test_lims.py
index 58fe1a7..0bd29c5 100644
--- a/tests/test_lims.py
+++ b/tests/test_lims.py
@@ -1,5 +1,7 @@
from unittest import TestCase
from requests.exceptions import HTTPError
+
+from pyclarity_lims.entities import Sample
from pyclarity_lims.lims import Lims
try:
callable(1)
@@ -143,3 +145,63 @@ class TestLims(TestCase):
assert lims.get_file_contents(id='an_id', encoding='utf-16', crlf=True) == 'some data\n'
assert lims.request_session.get.return_value.encoding == 'utf-16'
lims.request_session.get.assert_called_with(exp_url, auth=(self.username, self.password), timeout=16)
+
+ def test_get_instances(self):
+ lims = Lims(self.url, username=self.username, password=self.password)
+ sample_xml_template = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+ <smp:samples xmlns:smp="http://pyclarity_lims.com/ri/sample">
+ <sample uri="{url}/api/v2/samples/{s1}" limsid="{s1}"/>
+ <sample uri="{url}/api/v2/samples/{s2}" limsid="{s2}"/>
+ {next_page}
+ </smp:samples>
+ """
+ sample_xml1 = sample_xml_template.format(
+ s1='sample1', s2='sample2',
+ url=self.url,
+ next_page='<next-page uri="{url}/api/v2/samples?start-index=3"/>'.format(url=self.url)
+ )
+ sample_xml2 = sample_xml_template.format(
+ s1='sample3', s2='sample4',
+ url=self.url,
+ next_page='<next-page uri="{url}/api/v2/samples?start-index=5"/>'.format(url=self.url)
+ )
+ sample_xml3 = sample_xml_template.format(
+ s1='sample5', s2='sample6',
+ url=self.url,
+ next_page=''
+ )
+ get_returns = [
+ Mock(content=sample_xml1, status_code=200),
+ Mock(content=sample_xml2, status_code=200),
+ Mock(content=sample_xml3, status_code=200)
+ ]
+
+ with patch('requests.Session.get', side_effect=get_returns) as mget:
+ samples = lims._get_instances(Sample, nb_pages=2, params={'projectname': 'p1'})
+ assert len(samples) == 4
+ assert mget.call_count == 2
+ mget.assert_any_call(
+ 'http://testgenologics.com:4040/api/v2/samples',
+ auth=('test', 'password'),
+ headers={'accept': 'application/xml'},
+ params={'projectname': 'p1'},
+ timeout=16
+ )
+ mget.assert_called_with(
+ 'http://testgenologics.com:4040/api/v2/samples?start-index=3',
+ auth=('test', 'password'),
+ headers={'accept': 'application/xml'},
+ params={'projectname': 'p1'},
+ timeout=16
+ )
+
+ with patch('requests.Session.get', side_effect=get_returns) as mget:
+ samples = lims._get_instances(Sample, nb_pages=0)
+ assert len(samples) == 6
+ assert mget.call_count == 3
+
+ with patch('requests.Session.get', side_effect=get_returns) as mget:
+ samples = lims._get_instances(Sample, nb_pages=-1)
+ assert len(samples) == 6
+ assert mget.call_count == 3
+
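The page-limit rule these tests exercise can be shown in isolation. The sketch below is a minimal illustration of the loop logic from the patch, not pyclarity-lims itself: `collect_pages` and the dict-of-pages representation (items plus a "next page" index, `None` when there is no next page) are invented for this example.

```python
def collect_pages(pages, nb_pages=-1):
    """Walk a linked list of result pages, honoring a page limit.

    Mirrors the rule described in the patch docstrings: a zero or
    negative nb_pages means "fetch every page"; otherwise stop after
    nb_pages pages or when there is no next page, whichever is first.
    """
    results = []
    index = 0  # start at the first page
    while index is not None:  # loop over all requested pages
        nb_pages -= 1
        items, next_index = pages[index]
        results.extend(items)
        if next_index is None or nb_pages == 0:
            index = None
        else:
            index = next_index
    return results


# Three pages of two items each, linked by "next page" indices.
pages = {
    0: (['sample1', 'sample2'], 1),
    1: (['sample3', 'sample4'], 2),
    2: (['sample5', 'sample6'], None),
}
print(collect_pages(pages, nb_pages=2))        # first two pages only
print(len(collect_pages(pages, nb_pages=-1)))  # all pages
```

Note that decrementing the counter before the stop check is what makes `nb_pages=2` fetch exactly two pages while `0` and negative values never hit the `nb_pages == 0` condition.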
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/EdinburghGenomics/pyclarity-lims.git@d73f5b7d76f0d65b4fe51fbc80e4bf9f49903a6c#egg=pyclarity_lims
pyparsing==3.1.4
pytest==7.0.1
requests==2.27.1
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: pyclarity-lims
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- requests==2.27.1
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/pyclarity-lims
| [
"tests/test_lims.py::TestLims::test_get_instances"
]
| []
| [
"tests/test_lims.py::TestLims::test_get",
"tests/test_lims.py::TestLims::test_get_file_contents",
"tests/test_lims.py::TestLims::test_get_uri",
"tests/test_lims.py::TestLims::test_parse_response",
"tests/test_lims.py::TestLims::test_post",
"tests/test_lims.py::TestLims::test_put",
"tests/test_lims.py::TestLims::test_route_artifact",
"tests/test_lims.py::TestLims::test_tostring",
"tests/test_lims.py::TestLims::test_upload_new_file"
]
| []
| MIT License | 2,502 | [
"pyclarity_lims/lims.py"
]
| [
"pyclarity_lims/lims.py"
]
|
|
google__mobly-444 | 8caa5c387b2df47a180e0349fbebe7838b099b83 | 2018-05-11 15:02:16 | 95286a01a566e056d44acfa9577a45bc7f37f51d | diff --git a/mobly/controllers/android_device_lib/adb.py b/mobly/controllers/android_device_lib/adb.py
index 12c14bd..cb5b6b6 100644
--- a/mobly/controllers/android_device_lib/adb.py
+++ b/mobly/controllers/android_device_lib/adb.py
@@ -237,7 +237,8 @@ class AdbProxy(object):
def forward(self, args=None, shell=False):
with ADB_PORT_LOCK:
- return self._exec_adb_cmd('forward', args, shell, timeout=None)
+ return self._exec_adb_cmd(
+ 'forward', args, shell, timeout=None, stderr=None)
def instrument(self, package, options=None, runner=None):
"""Runs an instrumentation command on the device.
| `current_test_info` should exist between `setup_class` and `setup_test`
Right now `current_test_info` is None between `setup_class` and `setup_test`, which makes it difficult to use this field consistently.
E.g. if a test relies on this field in `on_fail` and `setup_class` fails, the logic in `on_fail` would raise an exception for any call to `current_test_info`. | google/mobly | diff --git a/mobly/base_test.py b/mobly/base_test.py
index e4e047b..13a79b0 100644
--- a/mobly/base_test.py
+++ b/mobly/base_test.py
@@ -624,6 +624,10 @@ class BaseTestClass(object):
tests = self._get_test_methods(test_names)
try:
# Setup for the class.
+ class_record = records.TestResultRecord('setup_class', self.TAG)
+ class_record.test_begin()
+ self.current_test_info = runtime_test_info.RuntimeTestInfo(
+ 'setup_class', self.log_path, class_record)
try:
self._setup_class()
except signals.TestAbortSignal:
@@ -633,9 +637,6 @@ class BaseTestClass(object):
# Setup class failed for unknown reasons.
# Fail the class and skip all tests.
logging.exception('Error in setup_class %s.', self.TAG)
- class_record = records.TestResultRecord(
- 'setup_class', self.TAG)
- class_record.test_begin()
class_record.test_error(e)
self._exec_procedure_func(self._on_fail, class_record)
self.results.add_class_error(class_record)
diff --git a/mobly/runtime_test_info.py b/mobly/runtime_test_info.py
index f4eea99..57b0742 100644
--- a/mobly/runtime_test_info.py
+++ b/mobly/runtime_test_info.py
@@ -19,10 +19,13 @@ from mobly import utils
class RuntimeTestInfo(object):
- """Container class for runtime information of a test.
+ """Container class for runtime information of a test or test stage.
One object corresponds to one test. This is meant to be a read-only class.
+ This also applies to test stages like `setup_class`, which has its own
+ runtime info but is not part of any single test.
+
Attributes:
name: string, name of the test.
signature: string, an identifier of the test, a combination of test
diff --git a/tests/mobly/base_test_test.py b/tests/mobly/base_test_test.py
index d78a640..a38b532 100755
--- a/tests/mobly/base_test_test.py
+++ b/tests/mobly/base_test_test.py
@@ -91,6 +91,25 @@ class BaseTestTest(unittest.TestCase):
self.assertIsNone(actual_record.details)
self.assertIsNone(actual_record.extras)
+ def test_current_test_info_in_setup_class(self):
+ class MockBaseTest(base_test.BaseTestClass):
+ def setup_class(self):
+ asserts.assert_true(
+ self.current_test_info.name == 'setup_class',
+ 'Got unexpected test name %s.' %
+ self.current_test_info.name)
+ output_path = self.current_test_info.output_path
+ asserts.assert_true(
+ os.path.exists(output_path), 'test output path missing')
+ raise Exception(MSG_EXPECTED_EXCEPTION)
+
+ bt_cls = MockBaseTest(self.mock_test_cls_configs)
+ bt_cls.run()
+ actual_record = bt_cls.results.error[0]
+ self.assertEqual(actual_record.test_name, 'setup_class')
+ self.assertEqual(actual_record.details, MSG_EXPECTED_EXCEPTION)
+ self.assertIsNone(actual_record.extras)
+
def test_self_tests_list(self):
class MockBaseTest(base_test.BaseTestClass):
def __init__(self, controllers):
diff --git a/tests/mobly/controllers/android_device_lib/adb_test.py b/tests/mobly/controllers/android_device_lib/adb_test.py
index 7bf61ab..cf699ce 100755
--- a/tests/mobly/controllers/android_device_lib/adb_test.py
+++ b/tests/mobly/controllers/android_device_lib/adb_test.py
@@ -173,6 +173,10 @@ class AdbTest(unittest.TestCase):
self.assertEqual(MOCK_DEFAULT_STDERR,
stderr_redirect.getvalue().decode('utf-8'))
+ def test_forward(self):
+ with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ adb.AdbProxy().forward(MOCK_SHELL_COMMAND)
+
def test_instrument_without_parameters(self):
"""Verifies the AndroidDevice object's instrument command is correct in
the basic case.
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 1
} | 1.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
exceptiongroup==1.2.2
future==1.0.0
importlib-metadata==6.7.0
iniconfig==2.0.0
-e git+https://github.com/google/mobly.git@8caa5c387b2df47a180e0349fbebe7838b099b83#egg=mobly
mock==1.0.1
packaging==24.0
pluggy==1.2.0
portpicker==1.6.0
psutil==7.0.0
pyserial==3.5
pytest==7.4.4
pytz==2025.2
PyYAML==6.0.1
timeout-decorator==0.5.0
tomli==2.0.1
typing_extensions==4.7.1
zipp==3.15.0
| name: mobly
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- future==1.0.0
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- mock==1.0.1
- packaging==24.0
- pluggy==1.2.0
- portpicker==1.6.0
- psutil==7.0.0
- pyserial==3.5
- pytest==7.4.4
- pytz==2025.2
- pyyaml==6.0.1
- timeout-decorator==0.5.0
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/mobly
| [
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_forward"
]
| [
"tests/mobly/base_test_test.py::BaseTestTest::test_write_user_data"
]
| [
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_on_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_on_fail_from_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_on_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_fail_with_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_noop",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_wrong_error",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_wrong_regex",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_fail_with_noop",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_fail_with_wrong_error",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_both_teardown_and_test_body_raise_exceptions",
"tests/mobly/base_test_test.py::BaseTestTest::test_cli_test_selection_fail_by_convention",
"tests/mobly/base_test_test.py::BaseTestTest::test_cli_test_selection_override_self_tests_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_info",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_info_in_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_name",
"tests/mobly/base_test_test.py::BaseTestTest::test_default_execution_of_all_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_exception_objects_in_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_equal",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_false",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_in_teardown_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_multiple_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_op",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_raises_custom_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_raises_default_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_true_and_assert_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_two_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_explicit_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_explicit_pass_but_teardown_test_raises_an_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_failure_in_procedure_functions_is_recorded",
"tests/mobly/base_test_test.py::BaseTestTest::test_failure_to_call_procedure_function_is_recorded",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_call_outside_of_setup_generated_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_dup_test_name",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_run",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_selected_run",
"tests/mobly/base_test_test.py::BaseTestTest::test_implicit_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_missing_requested_test_func",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_cannot_modify_original_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_both_test_and_teardown_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_teardown_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_test_setup_fails_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_pass_cannot_modify_original_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_pass_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_procedure_function_gets_correct_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_promote_extra_errors_to_termination_signal",
"tests/mobly/base_test_test.py::BaseTestTest::test_self_tests_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_self_tests_list_fail_by_convention",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_and_teardown_execution_count",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_class_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_test_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_test_fail_by_test_signal",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip_if",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_class_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_assert_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_setup_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_test_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_uncaught_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_basic",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_None",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite_by_optional_param_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite_by_required_param_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional_missing",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional_with_default",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_required",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_required_missing",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_cli_cmd_to_string",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_stderr_pipe",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_error_no_timeout",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_no_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_timed_out",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_negative_timeout_value",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_called_correctly",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_existing_command",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_newer_devices",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_older_devices",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_options",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_runner",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_without_parameters"
]
| []
| Apache License 2.0 | 2,505 | [
"mobly/controllers/android_device_lib/adb.py"
]
| [
"mobly/controllers/android_device_lib/adb.py"
]
|
|
google__mobly-445 | 8caa5c387b2df47a180e0349fbebe7838b099b83 | 2018-05-11 15:20:50 | 95286a01a566e056d44acfa9577a45bc7f37f51d | diff --git a/docs/mobly.rst b/docs/mobly.rst
index ee4b412..91ecb1b 100644
--- a/docs/mobly.rst
+++ b/docs/mobly.rst
@@ -35,6 +35,14 @@ mobly.config_parser module
:undoc-members:
:show-inheritance:
+mobly.expects module
+--------------------------
+
+.. automodule:: mobly.expects
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
mobly.keys module
-----------------
@@ -59,6 +67,14 @@ mobly.records module
:undoc-members:
:show-inheritance:
+mobly.runtime_test_info module
+------------------------------
+
+.. automodule:: mobly.runtime_test_info
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
mobly.signals module
--------------------
@@ -67,6 +83,14 @@ mobly.signals module
:undoc-members:
:show-inheritance:
+mobly.suite_runner module
+-------------------------
+
+.. automodule:: mobly.suite_runner
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
mobly.test_runner module
------------------------
diff --git a/mobly/controllers/android_device.py b/mobly/controllers/android_device.py
index 14828a4..746f819 100644
--- a/mobly/controllers/android_device.py
+++ b/mobly/controllers/android_device.py
@@ -436,8 +436,9 @@ class AndroidDevice(object):
self._log_path = os.path.join(self._log_path_base,
'AndroidDevice%s' % self._serial)
self._debug_tag = self._serial
- self.log = AndroidDeviceLoggerAdapter(logging.getLogger(),
- {'tag': self.debug_tag})
+ self.log = AndroidDeviceLoggerAdapter(logging.getLogger(), {
+ 'tag': self.debug_tag
+ })
self.sl4a = None
self.ed = None
self._adb_logcat_process = None
@@ -680,6 +681,9 @@ class AndroidDevice(object):
execution result after device got reconnected.
Example Usage:
+
+ .. code-block:: python
+
with ad.handle_usb_disconnect():
try:
# User action that triggers USB disconnect, could throw
@@ -842,9 +846,12 @@ class AndroidDevice(object):
"""Starts the snippet apk with the given package name and connects.
Examples:
- >>> ad.load_snippet(
+
+ .. code-block:: python
+
+ ad.load_snippet(
name='maps', package='com.google.maps.snippets')
- >>> ad.maps.activateZoom('3')
+ ad.maps.activateZoom('3')
Args:
name: The attribute name to which to attach the snippet server.
diff --git a/mobly/controllers/android_device_lib/adb.py b/mobly/controllers/android_device_lib/adb.py
index 12c14bd..db705e7 100644
--- a/mobly/controllers/android_device_lib/adb.py
+++ b/mobly/controllers/android_device_lib/adb.py
@@ -237,7 +237,8 @@ class AdbProxy(object):
def forward(self, args=None, shell=False):
with ADB_PORT_LOCK:
- return self._exec_adb_cmd('forward', args, shell, timeout=None)
+ return self._exec_adb_cmd(
+ 'forward', args, shell, timeout=None, stderr=None)
def instrument(self, package, options=None, runner=None):
"""Runs an instrumentation command on the device.
@@ -245,6 +246,9 @@ class AdbProxy(object):
This is a convenience wrapper to avoid parameter formatting.
Example:
+
+ .. code-block:: python
+
device.instrument(
'com.my.package.test',
options = {
diff --git a/mobly/suite_runner.py b/mobly/suite_runner.py
index a7e5f16..468b7e7 100644
--- a/mobly/suite_runner.py
+++ b/mobly/suite_runner.py
@@ -16,6 +16,8 @@
To create a test suite, call suite_runner.run_suite() with one or more
individual test classes. For example:
+.. code-block:: python
+
from mobly import suite_runner
from my.test.lib import foo_test
@@ -103,7 +105,7 @@ def run_suite(test_classes, argv=None):
sys.exit(1)
-def _compute_selected_tests(test_classes, selected_tests):
+def compute_selected_tests(test_classes, selected_tests):
"""Computes tests to run for each class from selector strings.
This function transforms a list of selector strings (such as FooTest or
@@ -112,24 +114,34 @@ def _compute_selected_tests(test_classes, selected_tests):
that class are selected.
Args:
- test_classes: (list of class) all classes that are part of this suite.
- selected_tests: (list of string) list of tests to execute, eg:
- [
- 'FooTest',
- 'BarTest',
- 'BazTest.test_method_a',
- 'BazTest.test_method_b'
- ].
- May be empty, in which case all tests in the class are selected.
+ test_classes: list of strings, names of all the classes that are part
+ of a suite.
+ selected_tests: list of strings, list of tests to execute. If empty,
+ all classes `test_classes` are selected. E.g.
+
+ .. code-block:: python
+
+ [
+ 'FooTest',
+ 'BarTest',
+ 'BazTest.test_method_a',
+ 'BazTest.test_method_b'
+ ]
Returns:
- dict: test_name class -> list(test_name name):
- identifiers for TestRunner. For the above example:
- {
- FooTest: None,
- BarTest: None,
- BazTest: ['test_method_a', 'test_method_b'],
- }
+ dict: Identifiers for TestRunner. Keys are test class names; values
+ are lists of test names within class. E.g. the example in
+ `selected_tests` would translate to:
+
+ .. code-block:: python
+
+ {
+ FooTest: None,
+ BarTest: None,
+ BazTest: ['test_method_a', 'test_method_b']
+ }
+
+ This dict is easy to consume for `TestRunner`.
"""
class_to_tests = collections.OrderedDict()
if not selected_tests:
| Modules missing from API docs
Our [API docs](http://mobly.readthedocs.io/en/stable/)
are missing some modules, like `mobly.runtime_test_info`. | google/mobly | diff --git a/tests/mobly/suite_runner_test.py b/tests/mobly/suite_runner_test.py
index e4e047b..13a79b0 100644
--- a/mobly/base_test.py
+++ b/mobly/base_test.py
@@ -624,6 +624,10 @@ class BaseTestClass(object):
tests = self._get_test_methods(test_names)
try:
# Setup for the class.
+ class_record = records.TestResultRecord('setup_class', self.TAG)
+ class_record.test_begin()
+ self.current_test_info = runtime_test_info.RuntimeTestInfo(
+ 'setup_class', self.log_path, class_record)
try:
self._setup_class()
except signals.TestAbortSignal:
@@ -633,9 +637,6 @@ class BaseTestClass(object):
# Setup class failed for unknown reasons.
# Fail the class and skip all tests.
logging.exception('Error in setup_class %s.', self.TAG)
- class_record = records.TestResultRecord(
- 'setup_class', self.TAG)
- class_record.test_begin()
class_record.test_error(e)
self._exec_procedure_func(self._on_fail, class_record)
self.results.add_class_error(class_record)
diff --git a/mobly/runtime_test_info.py b/mobly/runtime_test_info.py
index f4eea99..57b0742 100644
--- a/mobly/runtime_test_info.py
+++ b/mobly/runtime_test_info.py
@@ -19,10 +19,13 @@ from mobly import utils
class RuntimeTestInfo(object):
- """Container class for runtime information of a test.
+ """Container class for runtime information of a test or test stage.
One object corresponds to one test. This is meant to be a read-only class.
+ This also applies to test stages like `setup_class`, which has its own
+ runtime info but is not part of any single test.
+
Attributes:
name: string, name of the test.
signature: string, an identifier of the test, a combination of test
diff --git a/tests/mobly/base_test_test.py b/tests/mobly/base_test_test.py
index d78a640..a38b532 100755
--- a/tests/mobly/base_test_test.py
+++ b/tests/mobly/base_test_test.py
@@ -91,6 +91,25 @@ class BaseTestTest(unittest.TestCase):
self.assertIsNone(actual_record.details)
self.assertIsNone(actual_record.extras)
+ def test_current_test_info_in_setup_class(self):
+ class MockBaseTest(base_test.BaseTestClass):
+ def setup_class(self):
+ asserts.assert_true(
+ self.current_test_info.name == 'setup_class',
+ 'Got unexpected test name %s.' %
+ self.current_test_info.name)
+ output_path = self.current_test_info.output_path
+ asserts.assert_true(
+ os.path.exists(output_path), 'test output path missing')
+ raise Exception(MSG_EXPECTED_EXCEPTION)
+
+ bt_cls = MockBaseTest(self.mock_test_cls_configs)
+ bt_cls.run()
+ actual_record = bt_cls.results.error[0]
+ self.assertEqual(actual_record.test_name, 'setup_class')
+ self.assertEqual(actual_record.details, MSG_EXPECTED_EXCEPTION)
+ self.assertIsNone(actual_record.extras)
+
def test_self_tests_list(self):
class MockBaseTest(base_test.BaseTestClass):
def __init__(self, controllers):
diff --git a/tests/mobly/controllers/android_device_lib/adb_test.py b/tests/mobly/controllers/android_device_lib/adb_test.py
index 7bf61ab..cf699ce 100755
--- a/tests/mobly/controllers/android_device_lib/adb_test.py
+++ b/tests/mobly/controllers/android_device_lib/adb_test.py
@@ -173,6 +173,10 @@ class AdbTest(unittest.TestCase):
self.assertEqual(MOCK_DEFAULT_STDERR,
stderr_redirect.getvalue().decode('utf-8'))
+ def test_forward(self):
+ with mock.patch.object(adb.AdbProxy, '_exec_cmd') as mock_exec_cmd:
+ adb.AdbProxy().forward(MOCK_SHELL_COMMAND)
+
def test_instrument_without_parameters(self):
"""Verifies the AndroidDevice object's instrument command is correct in
the basic case.
diff --git a/tests/mobly/suite_runner_test.py b/tests/mobly/suite_runner_test.py
index dacd754..d0a9be4 100755
--- a/tests/mobly/suite_runner_test.py
+++ b/tests/mobly/suite_runner_test.py
@@ -21,7 +21,7 @@ from tests.lib import integration2_test
class SuiteRunnerTest(unittest.TestCase):
def test_select_no_args(self):
- identifiers = suite_runner._compute_selected_tests(
+ identifiers = suite_runner.compute_selected_tests(
test_classes=[
integration_test.IntegrationTest,
integration2_test.Integration2Test
@@ -33,7 +33,7 @@ class SuiteRunnerTest(unittest.TestCase):
}, identifiers)
def test_select_by_class(self):
- identifiers = suite_runner._compute_selected_tests(
+ identifiers = suite_runner.compute_selected_tests(
test_classes=[
integration_test.IntegrationTest,
integration2_test.Integration2Test
@@ -42,7 +42,7 @@ class SuiteRunnerTest(unittest.TestCase):
self.assertEqual({integration_test.IntegrationTest: None}, identifiers)
def test_select_by_method(self):
- identifiers = suite_runner._compute_selected_tests(
+ identifiers = suite_runner.compute_selected_tests(
test_classes=[
integration_test.IntegrationTest,
integration2_test.Integration2Test
@@ -55,7 +55,7 @@ class SuiteRunnerTest(unittest.TestCase):
}, identifiers)
def test_select_all_clobbers_method(self):
- identifiers = suite_runner._compute_selected_tests(
+ identifiers = suite_runner.compute_selected_tests(
test_classes=[
integration_test.IntegrationTest,
integration2_test.Integration2Test
@@ -63,7 +63,7 @@ class SuiteRunnerTest(unittest.TestCase):
selected_tests=['IntegrationTest.test_a', 'IntegrationTest'])
self.assertEqual({integration_test.IntegrationTest: None}, identifiers)
- identifiers = suite_runner._compute_selected_tests(
+ identifiers = suite_runner.compute_selected_tests(
test_classes=[
integration_test.IntegrationTest,
integration2_test.Integration2Test
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 4
} | 1.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
execnet==2.1.1
future==1.0.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/google/mobly.git@8caa5c387b2df47a180e0349fbebe7838b099b83#egg=mobly
mock==1.0.1
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
portpicker==1.6.0
psutil==7.0.0
pyserial==3.5
pytest @ file:///croot/pytest_1738938843180/work
pytest-asyncio==0.26.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-xdist==3.6.1
pytz==2025.2
PyYAML==6.0.2
timeout-decorator==0.5.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions==4.13.0
| name: mobly
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- execnet==2.1.1
- future==1.0.0
- mock==1.0.1
- portpicker==1.6.0
- psutil==7.0.0
- pyserial==3.5
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-xdist==3.6.1
- pytz==2025.2
- pyyaml==6.0.2
- timeout-decorator==0.5.0
- typing-extensions==4.13.0
prefix: /opt/conda/envs/mobly
| [
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_forward",
"tests/mobly/suite_runner_test.py::SuiteRunnerTest::test_select_all_clobbers_method",
"tests/mobly/suite_runner_test.py::SuiteRunnerTest::test_select_by_class",
"tests/mobly/suite_runner_test.py::SuiteRunnerTest::test_select_by_method",
"tests/mobly/suite_runner_test.py::SuiteRunnerTest::test_select_no_args"
]
| [
"tests/mobly/base_test_test.py::BaseTestTest::test_write_user_data"
]
| [
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_on_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_on_fail_from_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_in_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_all_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_on_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_in_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_abort_class_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_fail_with_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_equal_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_noop",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_wrong_error",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_fail_with_wrong_regex",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_fail_with_noop",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_fail_with_wrong_error",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_raises_regex_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_assert_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_both_teardown_and_test_body_raise_exceptions",
"tests/mobly/base_test_test.py::BaseTestTest::test_cli_test_selection_fail_by_convention",
"tests/mobly/base_test_test.py::BaseTestTest::test_cli_test_selection_override_self_tests_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_info",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_info_in_setup_class",
"tests/mobly/base_test_test.py::BaseTestTest::test_current_test_name",
"tests/mobly/base_test_test.py::BaseTestTest::test_default_execution_of_all_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_exception_objects_in_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_equal",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_false",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_in_teardown_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_multiple_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_op",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_raises_custom_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_no_raises_default_msg",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_true_and_assert_true",
"tests/mobly/base_test_test.py::BaseTestTest::test_expect_two_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_explicit_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_explicit_pass_but_teardown_test_raises_an_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_failure_in_procedure_functions_is_recorded",
"tests/mobly/base_test_test.py::BaseTestTest::test_failure_to_call_procedure_function_is_recorded",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_call_outside_of_setup_generated_tests",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_dup_test_name",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_run",
"tests/mobly/base_test_test.py::BaseTestTest::test_generate_tests_selected_run",
"tests/mobly/base_test_test.py::BaseTestTest::test_implicit_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_missing_requested_test_func",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_cannot_modify_original_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_both_test_and_teardown_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_teardown_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_executed_if_test_setup_fails_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_fail_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_pass_cannot_modify_original_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_on_pass_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_procedure_function_gets_correct_record",
"tests/mobly/base_test_test.py::BaseTestTest::test_promote_extra_errors_to_termination_signal",
"tests/mobly/base_test_test.py::BaseTestTest::test_self_tests_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_self_tests_list_fail_by_convention",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_and_teardown_execution_count",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_class_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_test_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_setup_test_fail_by_test_signal",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip_if",
"tests/mobly/base_test_test.py::BaseTestTest::test_skip_in_setup_test",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_class_fail_by_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_assert_fail",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_setup_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_test_fails",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_executed_if_test_pass",
"tests/mobly/base_test_test.py::BaseTestTest::test_teardown_test_raise_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_uncaught_exception",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_basic",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_None",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite_by_optional_param_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_default_overwrite_by_required_param_list",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional_missing",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_optional_with_default",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_required",
"tests/mobly/base_test_test.py::BaseTestTest::test_unpack_userparams_required_missing",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_cli_cmd_to_string",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_shell_true_with_serial",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_adb_cmd_with_stderr_pipe",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_error_no_timeout",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_no_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_timed_out",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_negative_timeout_value",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_exec_cmd_with_timeout_success",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_called_correctly",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_existing_command",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_newer_devices",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_has_shell_command_with_missing_command_on_older_devices",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_options",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_with_runner",
"tests/mobly/controllers/android_device_lib/adb_test.py::AdbTest::test_instrument_without_parameters"
]
| []
| Apache License 2.0 | 2,506 | [
"mobly/controllers/android_device.py",
"docs/mobly.rst",
"mobly/suite_runner.py",
"mobly/controllers/android_device_lib/adb.py"
]
| [
"mobly/controllers/android_device.py",
"docs/mobly.rst",
"mobly/suite_runner.py",
"mobly/controllers/android_device_lib/adb.py"
]
|
|
firebase__firebase-admin-python-162 | 351d624a91acb0babfaee019998173555a967755 | 2018-05-11 17:40:50 | 9444c9507c6df68eeb9c2829c52d9f62a3e065e1 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index b886d51..26877b5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,7 @@
# Unreleased
--
+- [fixed] The `db.Reference.update()` function now accepts dictionaries with
+ `None` values. This can be used to delete child keys from a reference.
# v2.10.0
diff --git a/firebase_admin/db.py b/firebase_admin/db.py
index c29c30d..5613a1e 100644
--- a/firebase_admin/db.py
+++ b/firebase_admin/db.py
@@ -269,8 +269,8 @@ class Reference(object):
"""
if not value or not isinstance(value, dict):
raise ValueError('Value argument must be a non-empty dictionary.')
- if None in value.keys() or None in value.values():
- raise ValueError('Dictionary must not contain None keys or values.')
+ if None in value.keys():
+ raise ValueError('Dictionary must not contain None keys.')
self._client.request('patch', self._add_suffix(), json=value, params='print=silent')
def delete(self):
| Skip validation flag for None in db.update()
### [REQUIRED] Step 2: Describe your environment
* Operating System version: Ubuntu 16.04
* Firebase SDK version: 2.10.0
* Library version: _____
* Firebase Product: database
### [REQUIRED] Step 3: Describe the problem
Updating a key with the value null is a valid way to delete data in the Node and Java SDKs.
However, you currently cannot do this in Python, as it fails validation (`Dictionary must not contain None keys or values.`).
This is problematic because multi-path deletes don't work in Python right now.
Is it possible to add a flag to the .update method to make it skip the validation?
#### Steps to reproduce:
```
update_statement = prepare_multipath_update(args)
db.reference().update(update_statement)
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "", line 60, in <module>
db.reference().update(update_statement)
File "../lib/python3.5/site-packages/firebase_admin/db.py", line 273, in update
raise ValueError('Dictionary must not contain None keys or values.')
ValueError: Dictionary must not contain None keys or values.
```
I managed to get around the validation by using the internal methods of `db.Reference`:
```
db_ref = db.reference()
db_ref._client.request('patch', db_ref._add_suffix(), json=update_statement)
```
It would be nice if there is a flag that can be provided to the update statement that skips the None validation.
E.g: `db.reference().update(update_statement, skip_none_validation=True)` | firebase/firebase-admin-python | diff --git a/integration/test_db.py b/integration/test_db.py
index c3ba2e4..7d2726a 100644
--- a/integration/test_db.py
+++ b/integration/test_db.py
@@ -155,9 +155,14 @@ class TestWriteOperations(object):
def test_update_children_with_existing_values(self, testref):
python = testref.parent
- ref = python.child('users').push({'name' : 'Edwin Colbert', 'since' : 1900})
+ value = {'name' : 'Edwin Colbert', 'since' : 1900, 'temp': True}
+ ref = python.child('users').push(value)
ref.update({'since' : 1905})
- assert ref.get() == {'name' : 'Edwin Colbert', 'since' : 1905}
+ value['since'] = 1905
+ assert ref.get() == value
+ ref.update({'temp': None})
+ del value['temp']
+ assert ref.get() == value
def test_update_nested_children(self, testref):
python = testref.parent
diff --git a/tests/test_db.py b/tests/test_db.py
index 145480f..1bbd4c7 100644
--- a/tests/test_db.py
+++ b/tests/test_db.py
@@ -259,9 +259,9 @@ class TestReference(object):
with pytest.raises(TypeError):
ref.set(value)
- def test_update_children(self):
+ @pytest.mark.parametrize('data', [{'foo': 'bar'}, {'foo': None}])
+ def test_update_children(self, data):
ref = db.reference('/test')
- data = {'foo' : 'bar'}
recorder = self.instrument(ref, json.dumps(data))
ref.update(data)
assert len(recorder) == 1
@@ -317,21 +317,15 @@ class TestReference(object):
with pytest.raises(TypeError):
ref.set_if_unchanged(MockAdapter.ETAG, value)
- def test_update_children_default(self):
- ref = db.reference('/test')
- recorder = self.instrument(ref, '')
- with pytest.raises(ValueError):
- ref.update({})
- assert len(recorder) is 0
-
@pytest.mark.parametrize('update', [
- None, {}, {None:'foo'}, {'foo': None}, '', 'foo', 0, 1, list(), tuple(), _Object()
+ None, {}, {None:'foo'}, '', 'foo', 0, 1, list(), tuple(), _Object()
])
def test_set_invalid_update(self, update):
ref = db.reference('/test')
- self.instrument(ref, '')
+ recorder = self.instrument(ref, '')
with pytest.raises(ValueError):
ref.update(update)
+ assert len(recorder) is 0
@pytest.mark.parametrize('data', valid_values)
def test_push(self, data):
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 2
} | 2.10 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | astroid==1.4.9
CacheControl==0.14.2
cachetools==5.5.2
certifi==2025.1.31
chardet==5.2.0
charset-normalizer==3.4.1
colorama==0.4.6
coverage==7.8.0
distlib==0.3.9
exceptiongroup==1.2.2
filelock==3.18.0
-e git+https://github.com/firebase/firebase-admin-python.git@351d624a91acb0babfaee019998173555a967755#egg=firebase_admin
google-api-core==2.24.2
google-auth==2.38.0
google-cloud-core==2.4.3
google-cloud-firestore==2.20.1
google-cloud-storage==3.1.0
google-crc32c==1.7.1
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grpcio==1.71.0
grpcio-status==1.71.0
idna==3.10
iniconfig==2.1.0
isort==6.0.1
lazy-object-proxy==1.10.0
MarkupSafe==3.0.2
mccabe==0.7.0
msgpack==1.1.0
packaging==24.2
platformdirs==4.3.7
pluggy==1.5.0
proto-plus==1.26.1
protobuf==5.29.4
pyasn1==0.6.1
pyasn1_modules==0.4.2
pylint==1.6.4
pyproject-api==1.9.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-localserver==0.9.0.post0
requests==2.32.3
rsa==4.9
six==1.17.0
tomli==2.2.1
tox==4.25.0
typing_extensions==4.13.0
urllib3==2.3.0
virtualenv==20.29.3
Werkzeug==3.1.3
wrapt==1.17.2
| name: firebase-admin-python
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- astroid==1.4.9
- cachecontrol==0.14.2
- cachetools==5.5.2
- certifi==2025.1.31
- chardet==5.2.0
- charset-normalizer==3.4.1
- colorama==0.4.6
- coverage==7.8.0
- distlib==0.3.9
- exceptiongroup==1.2.2
- filelock==3.18.0
- google-api-core==2.24.2
- google-auth==2.38.0
- google-cloud-core==2.4.3
- google-cloud-firestore==2.20.1
- google-cloud-storage==3.1.0
- google-crc32c==1.7.1
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grpcio==1.71.0
- grpcio-status==1.71.0
- idna==3.10
- iniconfig==2.1.0
- isort==6.0.1
- lazy-object-proxy==1.10.0
- markupsafe==3.0.2
- mccabe==0.7.0
- msgpack==1.1.0
- packaging==24.2
- platformdirs==4.3.7
- pluggy==1.5.0
- proto-plus==1.26.1
- protobuf==5.29.4
- pyasn1==0.6.1
- pyasn1-modules==0.4.2
- pylint==1.6.4
- pyproject-api==1.9.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-localserver==0.9.0.post0
- requests==2.32.3
- rsa==4.9
- six==1.17.0
- tomli==2.2.1
- tox==4.25.0
- typing-extensions==4.13.0
- urllib3==2.3.0
- virtualenv==20.29.3
- werkzeug==3.1.3
- wrapt==1.17.2
prefix: /opt/conda/envs/firebase-admin-python
| [
"tests/test_db.py::TestReference::test_update_children[data1]"
]
| [
"tests/test_db.py::TestDatabseInitialization::test_valid_db_url[https://test.firebaseio.com]",
"tests/test_db.py::TestDatabseInitialization::test_valid_db_url[https://test.firebaseio.com/]"
]
| [
"tests/test_db.py::TestReferencePath::test_valid_path[/-expected0]",
"tests/test_db.py::TestReferencePath::test_valid_path[-expected1]",
"tests/test_db.py::TestReferencePath::test_valid_path[/foo-expected2]",
"tests/test_db.py::TestReferencePath::test_valid_path[foo-expected3]",
"tests/test_db.py::TestReferencePath::test_valid_path[/foo/bar-expected4]",
"tests/test_db.py::TestReferencePath::test_valid_path[foo/bar-expected5]",
"tests/test_db.py::TestReferencePath::test_valid_path[/foo/bar/-expected6]",
"tests/test_db.py::TestReferencePath::test_invalid_key[None]",
"tests/test_db.py::TestReferencePath::test_invalid_key[True]",
"tests/test_db.py::TestReferencePath::test_invalid_key[False]",
"tests/test_db.py::TestReferencePath::test_invalid_key[0]",
"tests/test_db.py::TestReferencePath::test_invalid_key[1]",
"tests/test_db.py::TestReferencePath::test_invalid_key[path5]",
"tests/test_db.py::TestReferencePath::test_invalid_key[path6]",
"tests/test_db.py::TestReferencePath::test_invalid_key[path7]",
"tests/test_db.py::TestReferencePath::test_invalid_key[path8]",
"tests/test_db.py::TestReferencePath::test_invalid_key[foo#]",
"tests/test_db.py::TestReferencePath::test_invalid_key[foo.]",
"tests/test_db.py::TestReferencePath::test_invalid_key[foo$]",
"tests/test_db.py::TestReferencePath::test_invalid_key[foo[]",
"tests/test_db.py::TestReferencePath::test_invalid_key[foo]]",
"tests/test_db.py::TestReferencePath::test_valid_child[foo-expected0]",
"tests/test_db.py::TestReferencePath::test_valid_child[foo/bar-expected1]",
"tests/test_db.py::TestReferencePath::test_valid_child[foo/bar/-expected2]",
"tests/test_db.py::TestReferencePath::test_invalid_child[None]",
"tests/test_db.py::TestReferencePath::test_invalid_child[]",
"tests/test_db.py::TestReferencePath::test_invalid_child[/foo]",
"tests/test_db.py::TestReferencePath::test_invalid_child[/foo/bar]",
"tests/test_db.py::TestReferencePath::test_invalid_child[True]",
"tests/test_db.py::TestReferencePath::test_invalid_child[False]",
"tests/test_db.py::TestReferencePath::test_invalid_child[0]",
"tests/test_db.py::TestReferencePath::test_invalid_child[1]",
"tests/test_db.py::TestReferencePath::test_invalid_child[child8]",
"tests/test_db.py::TestReferencePath::test_invalid_child[child9]",
"tests/test_db.py::TestReferencePath::test_invalid_child[child10]",
"tests/test_db.py::TestReferencePath::test_invalid_child[foo#]",
"tests/test_db.py::TestReferencePath::test_invalid_child[foo.]",
"tests/test_db.py::TestReferencePath::test_invalid_child[foo$]",
"tests/test_db.py::TestReferencePath::test_invalid_child[foo[]",
"tests/test_db.py::TestReferencePath::test_invalid_child[foo]]",
"tests/test_db.py::TestReferencePath::test_invalid_child[child16]",
"tests/test_db.py::TestReference::test_get_value[]",
"tests/test_db.py::TestReference::test_get_value[foo]",
"tests/test_db.py::TestReference::test_get_value[0]",
"tests/test_db.py::TestReference::test_get_value[1]",
"tests/test_db.py::TestReference::test_get_value[100]",
"tests/test_db.py::TestReference::test_get_value[1.2]",
"tests/test_db.py::TestReference::test_get_value[True]",
"tests/test_db.py::TestReference::test_get_value[False]",
"tests/test_db.py::TestReference::test_get_value[data8]",
"tests/test_db.py::TestReference::test_get_value[data9]",
"tests/test_db.py::TestReference::test_get_value[data10]",
"tests/test_db.py::TestReference::test_get_value[data11]",
"tests/test_db.py::TestReference::test_get_with_etag[]",
"tests/test_db.py::TestReference::test_get_with_etag[foo]",
"tests/test_db.py::TestReference::test_get_with_etag[0]",
"tests/test_db.py::TestReference::test_get_with_etag[1]",
"tests/test_db.py::TestReference::test_get_with_etag[100]",
"tests/test_db.py::TestReference::test_get_with_etag[1.2]",
"tests/test_db.py::TestReference::test_get_with_etag[True]",
"tests/test_db.py::TestReference::test_get_with_etag[False]",
"tests/test_db.py::TestReference::test_get_with_etag[data8]",
"tests/test_db.py::TestReference::test_get_with_etag[data9]",
"tests/test_db.py::TestReference::test_get_with_etag[data10]",
"tests/test_db.py::TestReference::test_get_with_etag[data11]",
"tests/test_db.py::TestReference::test_get_shallow[]",
"tests/test_db.py::TestReference::test_get_shallow[foo]",
"tests/test_db.py::TestReference::test_get_shallow[0]",
"tests/test_db.py::TestReference::test_get_shallow[1]",
"tests/test_db.py::TestReference::test_get_shallow[100]",
"tests/test_db.py::TestReference::test_get_shallow[1.2]",
"tests/test_db.py::TestReference::test_get_shallow[True]",
"tests/test_db.py::TestReference::test_get_shallow[False]",
"tests/test_db.py::TestReference::test_get_shallow[data8]",
"tests/test_db.py::TestReference::test_get_shallow[data9]",
"tests/test_db.py::TestReference::test_get_shallow[data10]",
"tests/test_db.py::TestReference::test_get_shallow[data11]",
"tests/test_db.py::TestReference::test_get_with_etag_and_shallow",
"tests/test_db.py::TestReference::test_get_if_changed[]",
"tests/test_db.py::TestReference::test_get_if_changed[foo]",
"tests/test_db.py::TestReference::test_get_if_changed[0]",
"tests/test_db.py::TestReference::test_get_if_changed[1]",
"tests/test_db.py::TestReference::test_get_if_changed[100]",
"tests/test_db.py::TestReference::test_get_if_changed[1.2]",
"tests/test_db.py::TestReference::test_get_if_changed[True]",
"tests/test_db.py::TestReference::test_get_if_changed[False]",
"tests/test_db.py::TestReference::test_get_if_changed[data8]",
"tests/test_db.py::TestReference::test_get_if_changed[data9]",
"tests/test_db.py::TestReference::test_get_if_changed[data10]",
"tests/test_db.py::TestReference::test_get_if_changed[data11]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[0]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[1]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[True]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[False]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[etag4]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[etag5]",
"tests/test_db.py::TestReference::test_get_if_changed_invalid_etag[etag6]",
"tests/test_db.py::TestReference::test_order_by_query[]",
"tests/test_db.py::TestReference::test_order_by_query[foo]",
"tests/test_db.py::TestReference::test_order_by_query[0]",
"tests/test_db.py::TestReference::test_order_by_query[1]",
"tests/test_db.py::TestReference::test_order_by_query[100]",
"tests/test_db.py::TestReference::test_order_by_query[1.2]",
"tests/test_db.py::TestReference::test_order_by_query[True]",
"tests/test_db.py::TestReference::test_order_by_query[False]",
"tests/test_db.py::TestReference::test_order_by_query[data8]",
"tests/test_db.py::TestReference::test_order_by_query[data9]",
"tests/test_db.py::TestReference::test_order_by_query[data10]",
"tests/test_db.py::TestReference::test_order_by_query[data11]",
"tests/test_db.py::TestReference::test_limit_query[]",
"tests/test_db.py::TestReference::test_limit_query[foo]",
"tests/test_db.py::TestReference::test_limit_query[0]",
"tests/test_db.py::TestReference::test_limit_query[1]",
"tests/test_db.py::TestReference::test_limit_query[100]",
"tests/test_db.py::TestReference::test_limit_query[1.2]",
"tests/test_db.py::TestReference::test_limit_query[True]",
"tests/test_db.py::TestReference::test_limit_query[False]",
"tests/test_db.py::TestReference::test_limit_query[data8]",
"tests/test_db.py::TestReference::test_limit_query[data9]",
"tests/test_db.py::TestReference::test_limit_query[data10]",
"tests/test_db.py::TestReference::test_limit_query[data11]",
"tests/test_db.py::TestReference::test_range_query[]",
"tests/test_db.py::TestReference::test_range_query[foo]",
"tests/test_db.py::TestReference::test_range_query[0]",
"tests/test_db.py::TestReference::test_range_query[1]",
"tests/test_db.py::TestReference::test_range_query[100]",
"tests/test_db.py::TestReference::test_range_query[1.2]",
"tests/test_db.py::TestReference::test_range_query[True]",
"tests/test_db.py::TestReference::test_range_query[False]",
"tests/test_db.py::TestReference::test_range_query[data8]",
"tests/test_db.py::TestReference::test_range_query[data9]",
"tests/test_db.py::TestReference::test_range_query[data10]",
"tests/test_db.py::TestReference::test_range_query[data11]",
"tests/test_db.py::TestReference::test_set_value[]",
"tests/test_db.py::TestReference::test_set_value[foo]",
"tests/test_db.py::TestReference::test_set_value[0]",
"tests/test_db.py::TestReference::test_set_value[1]",
"tests/test_db.py::TestReference::test_set_value[100]",
"tests/test_db.py::TestReference::test_set_value[1.2]",
"tests/test_db.py::TestReference::test_set_value[True]",
"tests/test_db.py::TestReference::test_set_value[False]",
"tests/test_db.py::TestReference::test_set_value[data8]",
"tests/test_db.py::TestReference::test_set_value[data9]",
"tests/test_db.py::TestReference::test_set_value[data10]",
"tests/test_db.py::TestReference::test_set_value[data11]",
"tests/test_db.py::TestReference::test_set_none_value",
"tests/test_db.py::TestReference::test_set_non_json_value[value0]",
"tests/test_db.py::TestReference::test_set_non_json_value[value1]",
"tests/test_db.py::TestReference::test_set_non_json_value[value2]",
"tests/test_db.py::TestReference::test_update_children[data0]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[foo]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[0]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[1]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[100]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[1.2]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[True]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[False]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[data8]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[data9]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[data10]",
"tests/test_db.py::TestReference::test_set_if_unchanged_success[data11]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[foo]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[0]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[1]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[100]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[1.2]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[True]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[False]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[data8]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[data9]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[data10]",
"tests/test_db.py::TestReference::test_set_if_unchanged_failure[data11]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[0]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[1]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[True]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[False]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[etag4]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[etag5]",
"tests/test_db.py::TestReference::test_set_if_unchanged_invalid_etag[etag6]",
"tests/test_db.py::TestReference::test_set_if_unchanged_none_value",
"tests/test_db.py::TestReference::test_set_if_unchanged_non_json_value[value0]",
"tests/test_db.py::TestReference::test_set_if_unchanged_non_json_value[value1]",
"tests/test_db.py::TestReference::test_set_if_unchanged_non_json_value[value2]",
"tests/test_db.py::TestReference::test_set_invalid_update[None]",
"tests/test_db.py::TestReference::test_set_invalid_update[update1]",
"tests/test_db.py::TestReference::test_set_invalid_update[update2]",
"tests/test_db.py::TestReference::test_set_invalid_update[]",
"tests/test_db.py::TestReference::test_set_invalid_update[foo]",
"tests/test_db.py::TestReference::test_set_invalid_update[0]",
"tests/test_db.py::TestReference::test_set_invalid_update[1]",
"tests/test_db.py::TestReference::test_set_invalid_update[update7]",
"tests/test_db.py::TestReference::test_set_invalid_update[update8]",
"tests/test_db.py::TestReference::test_set_invalid_update[update9]",
"tests/test_db.py::TestReference::test_push[]",
"tests/test_db.py::TestReference::test_push[foo]",
"tests/test_db.py::TestReference::test_push[0]",
"tests/test_db.py::TestReference::test_push[1]",
"tests/test_db.py::TestReference::test_push[100]",
"tests/test_db.py::TestReference::test_push[1.2]",
"tests/test_db.py::TestReference::test_push[True]",
"tests/test_db.py::TestReference::test_push[False]",
"tests/test_db.py::TestReference::test_push[data8]",
"tests/test_db.py::TestReference::test_push[data9]",
"tests/test_db.py::TestReference::test_push[data10]",
"tests/test_db.py::TestReference::test_push[data11]",
"tests/test_db.py::TestReference::test_push_default",
"tests/test_db.py::TestReference::test_push_none_value",
"tests/test_db.py::TestReference::test_delete",
"tests/test_db.py::TestReference::test_transaction",
"tests/test_db.py::TestReference::test_transaction_scalar",
"tests/test_db.py::TestReference::test_transaction_error",
"tests/test_db.py::TestReference::test_transaction_invalid_function[None]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[0]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[1]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[True]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[False]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[foo]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[func6]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[func7]",
"tests/test_db.py::TestReference::test_transaction_invalid_function[func8]",
"tests/test_db.py::TestReference::test_get_root_reference",
"tests/test_db.py::TestReference::test_get_reference[/-expected0]",
"tests/test_db.py::TestReference::test_get_reference[-expected1]",
"tests/test_db.py::TestReference::test_get_reference[/foo-expected2]",
"tests/test_db.py::TestReference::test_get_reference[foo-expected3]",
"tests/test_db.py::TestReference::test_get_reference[/foo/bar-expected4]",
"tests/test_db.py::TestReference::test_get_reference[foo/bar-expected5]",
"tests/test_db.py::TestReference::test_get_reference[/foo/bar/-expected6]",
"tests/test_db.py::TestReference::test_server_error[400]",
"tests/test_db.py::TestReference::test_server_error[401]",
"tests/test_db.py::TestReference::test_server_error[500]",
"tests/test_db.py::TestReference::test_other_error[400]",
"tests/test_db.py::TestReference::test_other_error[401]",
"tests/test_db.py::TestReference::test_other_error[500]",
"tests/test_db.py::TestReferenceWithAuthOverride::test_get_value",
"tests/test_db.py::TestReferenceWithAuthOverride::test_set_value",
"tests/test_db.py::TestReferenceWithAuthOverride::test_order_by_query",
"tests/test_db.py::TestReferenceWithAuthOverride::test_range_query",
"tests/test_db.py::TestDatabseInitialization::test_no_app",
"tests/test_db.py::TestDatabseInitialization::test_no_db_url",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[None]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[foo]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[http://test.firebaseio.com]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[https://google.com]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[True]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[False]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[1]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[0]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[url9]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[url10]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[url11]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_db_url[url12]",
"tests/test_db.py::TestDatabseInitialization::test_valid_auth_override[override0]",
"tests/test_db.py::TestDatabseInitialization::test_valid_auth_override[override1]",
"tests/test_db.py::TestDatabseInitialization::test_valid_auth_override[None]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[foo]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[0]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[1]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[True]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[False]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[override6]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[override7]",
"tests/test_db.py::TestDatabseInitialization::test_invalid_auth_override[override8]",
"tests/test_db.py::TestDatabseInitialization::test_http_timeout",
"tests/test_db.py::TestDatabseInitialization::test_app_delete",
"tests/test_db.py::TestDatabseInitialization::test_user_agent_format",
"tests/test_db.py::TestQuery::test_invalid_path[]",
"tests/test_db.py::TestQuery::test_invalid_path[None]",
"tests/test_db.py::TestQuery::test_invalid_path[/]",
"tests/test_db.py::TestQuery::test_invalid_path[/foo]",
"tests/test_db.py::TestQuery::test_invalid_path[0]",
"tests/test_db.py::TestQuery::test_invalid_path[1]",
"tests/test_db.py::TestQuery::test_invalid_path[True]",
"tests/test_db.py::TestQuery::test_invalid_path[False]",
"tests/test_db.py::TestQuery::test_invalid_path[path8]",
"tests/test_db.py::TestQuery::test_invalid_path[path9]",
"tests/test_db.py::TestQuery::test_invalid_path[path10]",
"tests/test_db.py::TestQuery::test_invalid_path[path11]",
"tests/test_db.py::TestQuery::test_invalid_path[$foo]",
"tests/test_db.py::TestQuery::test_invalid_path[.foo]",
"tests/test_db.py::TestQuery::test_invalid_path[#foo]",
"tests/test_db.py::TestQuery::test_invalid_path[[foo]",
"tests/test_db.py::TestQuery::test_invalid_path[foo]]",
"tests/test_db.py::TestQuery::test_invalid_path[$key]",
"tests/test_db.py::TestQuery::test_invalid_path[$value]",
"tests/test_db.py::TestQuery::test_invalid_path[$priority]",
"tests/test_db.py::TestQuery::test_order_by_valid_path[foo-foo]",
"tests/test_db.py::TestQuery::test_order_by_valid_path[foo/bar-foo/bar]",
"tests/test_db.py::TestQuery::test_order_by_valid_path[foo/bar/-foo/bar]",
"tests/test_db.py::TestQuery::test_filter_by_valid_path[foo-foo]",
"tests/test_db.py::TestQuery::test_filter_by_valid_path[foo/bar-foo/bar]",
"tests/test_db.py::TestQuery::test_filter_by_valid_path[foo/bar/-foo/bar]",
"tests/test_db.py::TestQuery::test_order_by_key",
"tests/test_db.py::TestQuery::test_key_filter",
"tests/test_db.py::TestQuery::test_order_by_value",
"tests/test_db.py::TestQuery::test_value_filter",
"tests/test_db.py::TestQuery::test_multiple_limits",
"tests/test_db.py::TestQuery::test_invalid_limit[None]",
"tests/test_db.py::TestQuery::test_invalid_limit[-1]",
"tests/test_db.py::TestQuery::test_invalid_limit[foo]",
"tests/test_db.py::TestQuery::test_invalid_limit[1.2]",
"tests/test_db.py::TestQuery::test_invalid_limit[limit4]",
"tests/test_db.py::TestQuery::test_invalid_limit[limit5]",
"tests/test_db.py::TestQuery::test_invalid_limit[limit6]",
"tests/test_db.py::TestQuery::test_invalid_limit[limit7]",
"tests/test_db.py::TestQuery::test_start_at_none",
"tests/test_db.py::TestQuery::test_valid_start_at[]",
"tests/test_db.py::TestQuery::test_valid_start_at[foo]",
"tests/test_db.py::TestQuery::test_valid_start_at[True]",
"tests/test_db.py::TestQuery::test_valid_start_at[False]",
"tests/test_db.py::TestQuery::test_valid_start_at[0]",
"tests/test_db.py::TestQuery::test_valid_start_at[1]",
"tests/test_db.py::TestQuery::test_valid_start_at[arg6]",
"tests/test_db.py::TestQuery::test_end_at_none",
"tests/test_db.py::TestQuery::test_valid_end_at[]",
"tests/test_db.py::TestQuery::test_valid_end_at[foo]",
"tests/test_db.py::TestQuery::test_valid_end_at[True]",
"tests/test_db.py::TestQuery::test_valid_end_at[False]",
"tests/test_db.py::TestQuery::test_valid_end_at[0]",
"tests/test_db.py::TestQuery::test_valid_end_at[1]",
"tests/test_db.py::TestQuery::test_valid_end_at[arg6]",
"tests/test_db.py::TestQuery::test_equal_to_none",
"tests/test_db.py::TestQuery::test_valid_equal_to[]",
"tests/test_db.py::TestQuery::test_valid_equal_to[foo]",
"tests/test_db.py::TestQuery::test_valid_equal_to[True]",
"tests/test_db.py::TestQuery::test_valid_equal_to[False]",
"tests/test_db.py::TestQuery::test_valid_equal_to[0]",
"tests/test_db.py::TestQuery::test_valid_equal_to[1]",
"tests/test_db.py::TestQuery::test_valid_equal_to[arg6]",
"tests/test_db.py::TestQuery::test_range_query[foo]",
"tests/test_db.py::TestQuery::test_range_query[$key]",
"tests/test_db.py::TestQuery::test_range_query[$value]",
"tests/test_db.py::TestQuery::test_limit_first_query[foo]",
"tests/test_db.py::TestQuery::test_limit_first_query[$key]",
"tests/test_db.py::TestQuery::test_limit_first_query[$value]",
"tests/test_db.py::TestQuery::test_limit_last_query[foo]",
"tests/test_db.py::TestQuery::test_limit_last_query[$key]",
"tests/test_db.py::TestQuery::test_limit_last_query[$value]",
"tests/test_db.py::TestQuery::test_all_in[foo]",
"tests/test_db.py::TestQuery::test_all_in[$key]",
"tests/test_db.py::TestQuery::test_all_in[$value]",
"tests/test_db.py::TestQuery::test_invalid_query_args",
"tests/test_db.py::TestSorter::test_order_by_value[result0-expected0]",
"tests/test_db.py::TestSorter::test_order_by_value[result1-expected1]",
"tests/test_db.py::TestSorter::test_order_by_value[result2-expected2]",
"tests/test_db.py::TestSorter::test_order_by_value[result3-expected3]",
"tests/test_db.py::TestSorter::test_order_by_value[result4-expected4]",
"tests/test_db.py::TestSorter::test_order_by_value[result5-expected5]",
"tests/test_db.py::TestSorter::test_order_by_value[result6-expected6]",
"tests/test_db.py::TestSorter::test_order_by_value[result7-expected7]",
"tests/test_db.py::TestSorter::test_order_by_value[result8-expected8]",
"tests/test_db.py::TestSorter::test_order_by_value[result9-expected9]",
"tests/test_db.py::TestSorter::test_order_by_value[result10-expected10]",
"tests/test_db.py::TestSorter::test_order_by_value[result11-expected11]",
"tests/test_db.py::TestSorter::test_order_by_value[result12-expected12]",
"tests/test_db.py::TestSorter::test_order_by_value[result13-expected13]",
"tests/test_db.py::TestSorter::test_order_by_value[result14-expected14]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result0-expected0]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result1-expected1]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result2-expected2]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result3-expected3]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result4-expected4]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result5-expected5]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result6-expected6]",
"tests/test_db.py::TestSorter::test_order_by_value_with_list[result7-expected7]",
"tests/test_db.py::TestSorter::test_invalid_sort[None]",
"tests/test_db.py::TestSorter::test_invalid_sort[False]",
"tests/test_db.py::TestSorter::test_invalid_sort[True]",
"tests/test_db.py::TestSorter::test_invalid_sort[0]",
"tests/test_db.py::TestSorter::test_invalid_sort[1]",
"tests/test_db.py::TestSorter::test_invalid_sort[foo]",
"tests/test_db.py::TestSorter::test_order_by_key[result0-expected0]",
"tests/test_db.py::TestSorter::test_order_by_key[result1-expected1]",
"tests/test_db.py::TestSorter::test_order_by_key[result2-expected2]",
"tests/test_db.py::TestSorter::test_order_by_child[result0-expected0]",
"tests/test_db.py::TestSorter::test_order_by_child[result1-expected1]",
"tests/test_db.py::TestSorter::test_order_by_child[result2-expected2]",
"tests/test_db.py::TestSorter::test_order_by_child[result3-expected3]",
"tests/test_db.py::TestSorter::test_order_by_child[result4-expected4]",
"tests/test_db.py::TestSorter::test_order_by_child[result5-expected5]",
"tests/test_db.py::TestSorter::test_order_by_child[result6-expected6]",
"tests/test_db.py::TestSorter::test_order_by_child[result7-expected7]",
"tests/test_db.py::TestSorter::test_order_by_child[result8-expected8]",
"tests/test_db.py::TestSorter::test_order_by_child[result9-expected9]",
"tests/test_db.py::TestSorter::test_order_by_child[result10-expected10]",
"tests/test_db.py::TestSorter::test_order_by_child[result11-expected11]",
"tests/test_db.py::TestSorter::test_order_by_child[result12-expected12]",
"tests/test_db.py::TestSorter::test_order_by_child[result13-expected13]",
"tests/test_db.py::TestSorter::test_order_by_child[result14-expected14]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result0-expected0]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result1-expected1]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result2-expected2]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result3-expected3]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result4-expected4]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result5-expected5]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result6-expected6]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result7-expected7]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result8-expected8]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result9-expected9]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result10-expected10]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result11-expected11]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result12-expected12]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result13-expected13]",
"tests/test_db.py::TestSorter::test_order_by_grand_child[result14-expected14]",
"tests/test_db.py::TestSorter::test_child_path_resolution[result0-expected0]",
"tests/test_db.py::TestSorter::test_child_path_resolution[result1-expected1]",
"tests/test_db.py::TestSorter::test_child_path_resolution[result2-expected2]"
]
| []
| Apache License 2.0 | 2,507 | [
"firebase_admin/db.py",
"CHANGELOG.md"
]
| [
"firebase_admin/db.py",
"CHANGELOG.md"
]
|
|
python-visualization__folium-866 | c228c261be42d801809e0ba037dbe14b8229fb4b | 2018-05-12 10:41:08 | cb3987ad598278d9ed2c8de164f2dee07c4562fb | diff --git a/CHANGES.txt b/CHANGES.txt
index fee3bfb2..f845af64 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,8 @@
0.6.0
~~~~~
+- `Popup` accepts new arguments `show` (render open on page load) and `sticky` (popups
+ only close when explicitly clicked) (jwhendy #778)
- Added leaflet-search plugin (ghandic #759)
- Improved Vector Layers docs, notebooks, and optional arguments (ocefpaf #731)
- Implemented `export=False/True` option to the Draw plugin layer for saving
@@ -24,6 +26,7 @@ Bug Fixes
- Fixed numpy array bug (#749) in _flatten
- Unify `get_bounds` routine to avoid wrong responses
- If Path option `fill_color` is present it will override `fill=False`
+- Fix disappearing layer control when using FastMarkerCluster (conengmo #866)
0.5.0
~~~~~
diff --git a/folium/__init__.py b/folium/__init__.py
index 2574ff8e..e03ff878 100644
--- a/folium/__init__.py
+++ b/folium/__init__.py
@@ -23,6 +23,11 @@ from folium.map import (
from folium.vector_layers import Circle, CircleMarker, PolyLine, Polygon, Rectangle # noqa
+import branca
+if tuple(int(x) for x in branca.__version__.split('.')) < (0, 3, 0):
+ raise ImportError('branca version 0.3.0 or higher is required. '
+ 'Update branca with e.g. `pip install branca --upgrade`.')
+
__version__ = get_versions()['version']
del get_versions
diff --git a/folium/map.py b/folium/map.py
index 4680510c..c2fb7261 100644
--- a/folium/map.py
+++ b/folium/map.py
@@ -276,23 +276,30 @@ class Popup(Element):
True if the popup is a template that needs to the rendered first.
max_width: int, default 300
The maximal width of the popup.
+ show: bool, default False
+ True renders the popup open on page load.
+ sticky: bool, default False
+ True prevents map and other popup clicks from closing.
"""
_template = Template(u"""
- var {{this.get_name()}} = L.popup({maxWidth: '{{this.max_width}}'});
+ var {{this.get_name()}} = L.popup({maxWidth: '{{this.max_width}}'
+ {% if this.show or this.sticky %}, autoClose: false{% endif %}
+ {% if this.sticky %}, closeOnClick: false{% endif %}});
{% for name, element in this.html._children.items() %}
var {{name}} = $('{{element.render(**kwargs).replace('\\n',' ')}}')[0];
{{this.get_name()}}.setContent({{name}});
{% endfor %}
- {{this._parent.get_name()}}.bindPopup({{this.get_name()}});
+ {{this._parent.get_name()}}.bindPopup({{this.get_name()}})
+ {% if this.show %}.openPopup(){% endif %};
{% for name, element in this.script._children.items() %}
{{element.render()}}
{% endfor %}
""") # noqa
- def __init__(self, html=None, parse_html=False, max_width=300):
+ def __init__(self, html=None, parse_html=False, max_width=300, show=False, sticky=False):
super(Popup, self).__init__()
self._name = 'Popup'
self.header = Element()
@@ -311,6 +318,8 @@ class Popup(Element):
self.html.add_child(Html(text_type(html), script=script))
self.max_width = max_width
+ self.show = show
+ self.sticky = sticky
def render(self, **kwargs):
"""Renders the HTML representation of the element."""
diff --git a/folium/plugins/fast_marker_cluster.py b/folium/plugins/fast_marker_cluster.py
index b42cd281..d4a8f756 100644
--- a/folium/plugins/fast_marker_cluster.py
+++ b/folium/plugins/fast_marker_cluster.py
@@ -41,11 +41,11 @@ class FastMarkerCluster(MarkerCluster):
"""
_template = Template(u"""
{% macro script(this, kwargs) %}
- {{this._callback}}
- (function(){
- var data = {{this._data}};
- var map = {{this._parent.get_name()}};
+ var {{ this.get_name() }} = (function(){
+ {{this._callback}}
+
+ var data = {{ this._data }};
var cluster = L.markerClusterGroup();
for (var i = 0; i < data.length; i++) {
@@ -54,7 +54,8 @@ class FastMarkerCluster(MarkerCluster):
marker.addTo(cluster);
}
- cluster.addTo(map);
+ cluster.addTo({{ this._parent.get_name() }});
+ return cluster;
})();
{% endmacro %}""")
@@ -66,15 +67,12 @@ class FastMarkerCluster(MarkerCluster):
self._data = _validate_coordinates(data)
if callback is None:
- self._callback = ('var callback;\n' +
- 'callback = function (row) {\n' +
- '\tvar icon, marker;\n' +
- '\t// Returns a L.marker object\n' +
- '\ticon = L.AwesomeMarkers.icon();\n' +
- '\tmarker = L.marker(new L.LatLng(row[0], ' +
- 'row[1]));\n' +
- '\tmarker.setIcon(icon);\n' +
- '\treturn marker;\n' +
- '};')
+ self._callback = """
+ var callback = function (row) {
+ var icon = L.AwesomeMarkers.icon();
+ var marker = L.marker(new L.LatLng(row[0], row[1]));
+ marker.setIcon(icon);
+ return marker;
+ };"""
else:
self._callback = 'var callback = {};'.format(callback)
diff --git a/requirements.txt b/requirements.txt
index f2c4eb37..69e1a836 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
-branca
+branca>=0.3.0
jinja2
numpy
requests
| Layer selector disappears when using FastMarkerCluster()
When I try to add a FastMarkerCluster() layer to my multilayer map, the layer selector in the top right of the browser window disappears.
#### Code Sample:
```python
import pandas as pd
import folium
df = pd.read_csv(r'raw_data.csv')
data = df[['lat', 'long']].values.tolist()
m = folium.Map()
folium.plugins.HeatMap(data, name="layer1").add_to(m)
folium.plugins.HeatMap(data, name="layer2").add_to(m)
#folium.plugins.FastMarkerCluster(data, name="layer3").add_to(m)
folium.LayerControl().add_to(m)
m.save("map.html")
```
#### Problem description
When I run the above as is, I get a layer selector in the top right which I can use to switch on/off the two heatmap layers. When I uncomment the FastMarkerCluster() line and run the code, the cluster layer is added to the map, however the layer selector is no longer present.
#### Expected Output
I expect the layer selector in the top right corner of the browser window to be present when adding layers using FastMarkerCluster().
#### Output of ``folium.__version__``
'0.5.0+111.gc228c26' | python-visualization/folium | diff --git a/tests/test_map.py b/tests/test_map.py
index 7a2d49ae..846a2a36 100644
--- a/tests/test_map.py
+++ b/tests/test_map.py
@@ -8,6 +8,7 @@ Folium map Tests
from __future__ import (absolute_import, division, print_function)
+from folium import Map
from folium.map import Popup
@@ -18,6 +19,10 @@ tmpl = u"""
""".format
+def _normalize(rendered):
+ return ''.join(rendered.split())
+
+
def test_popup_ascii():
popup = Popup('Some text.')
_id = list(popup.html._children.keys())[0]
@@ -52,3 +57,33 @@ def test_popup_unicode():
'text': u'Ça c'est chouette',
}
assert ''.join(popup.html.render().split()) == ''.join(tmpl(**kw).split())
+
+
+def test_popup_sticky():
+ m = Map()
+ popup = Popup('Some text.', sticky=True).add_to(m)
+ rendered = popup._template.render(this=popup, kwargs={})
+ expected = """
+ var {popup_name} = L.popup({{maxWidth: \'300\', autoClose: false, closeOnClick: false}});
+ var {html_name} = $(\'<div id="{html_name}" style="width: 100.0%; height: 100.0%;">Some text.</div>\')[0];
+ {popup_name}.setContent({html_name});
+ {map_name}.bindPopup({popup_name});
+ """.format(popup_name=popup.get_name(),
+ html_name=list(popup.html._children.keys())[0],
+ map_name=m.get_name())
+ assert _normalize(rendered) == _normalize(expected)
+
+
+def test_popup_show():
+ m = Map()
+ popup = Popup('Some text.', show=True).add_to(m)
+ rendered = popup._template.render(this=popup, kwargs={})
+ expected = """
+ var {popup_name} = L.popup({{maxWidth: \'300\' , autoClose: false}});
+ var {html_name} = $(\'<div id="{html_name}" style="width: 100.0%; height: 100.0%;">Some text.</div>\')[0];
+ {popup_name}.setContent({html_name});
+ {map_name}.bindPopup({popup_name}).openPopup();
+ """.format(popup_name=popup.get_name(),
+ html_name=list(popup.html._children.keys())[0],
+ map_name=m.get_name())
+ assert _normalize(rendered) == _normalize(expected)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 5
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
branca==0.5.0
certifi==2021.5.30
charset-normalizer==2.0.12
-e git+https://github.com/python-visualization/folium.git@c228c261be42d801809e0ba037dbe14b8229fb4b#egg=folium
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
requests==2.27.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: folium
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- branca==0.5.0
- charset-normalizer==2.0.12
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- requests==2.27.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/folium
| [
"tests/test_map.py::test_popup_sticky",
"tests/test_map.py::test_popup_show"
]
| []
| [
"tests/test_map.py::test_popup_ascii",
"tests/test_map.py::test_popup_quotes",
"tests/test_map.py::test_popup_unicode"
]
| []
| MIT License | 2,508 | [
"CHANGES.txt",
"folium/map.py",
"folium/plugins/fast_marker_cluster.py",
"requirements.txt",
"folium/__init__.py"
]
| [
"CHANGES.txt",
"folium/map.py",
"folium/plugins/fast_marker_cluster.py",
"requirements.txt",
"folium/__init__.py"
]
|
|
fniessink__next-action-30 | b9546fdd3eb1f6b43bbfd0f92fc7947c7bc9575d | 2018-05-12 18:32:09 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 68770d2..684bdb0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
+## [Unreleased]
+
+### Added
+
+- Specify the number of next actions to show: `next-action --number 3`. Closes #7.
+
## [0.0.7] - 2018-05-12
### Added
diff --git a/README.md b/README.md
index 1454a95..1c06d62 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ Don't know what *Todo.txt* is? See <https://github.com/todotxt/todo.txt> for the
```console
$ next-action --help
-usage: next-action [-h] [--version] [-f FILE] [@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]]
+usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]]
Show the next action in your todo.txt
@@ -35,6 +35,7 @@ optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
+ -n N, --number N number of next actions to show (default: 1)
```
Assuming your todo.txt file is in the current folder, running *Next-action* without arguments will show the next action you should do based on your tasks' priorities:
@@ -64,6 +65,15 @@ $ next-action +DogHouse +PaintHouse @store @weekend
(B) Buy paint to +PaintHouse @store @weekend
```
+To show more than one next action, supply the number you think you can handle:
+
+```console
+$ next-action --number 3
+(A) Call mom @phone
+(B) Buy paint to +PaintHouse @store @weekend
+(C) Finish proposal for important client @work
+```
+
Since *Next-action* is still pre-alpha-stage, this is it for the moment. Stay tuned for more options.
## Develop
diff --git a/next_action/__init__.py b/next_action/__init__.py
index ca056b5..43f2b3c 100644
--- a/next_action/__init__.py
+++ b/next_action/__init__.py
@@ -1,7 +1,4 @@
""" Main Next-action package. """
-from .pick_action import next_action_based_on_priority
-
-
__title__ = "next-action"
__version__ = "0.0.7"
diff --git a/next_action/arguments.py b/next_action/arguments.py
index 8a8dd6a..51249fc 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -47,8 +47,8 @@ def parse_arguments() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Show the next action in your todo.txt",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--version", action="version", version="%(prog)s {0}".format(next_action.__version__))
- parser.add_argument("-f", "--file", help="filename of the todo.txt file to read",
- type=str, default="todo.txt")
+ parser.add_argument("-f", "--file", help="filename of the todo.txt file to read", type=str, default="todo.txt")
+ parser.add_argument("-n", "--number", metavar="N", help="number of next actions to show", type=int, default=1)
parser.add_argument("contexts", metavar="@CONTEXT", help="show the next action in the specified contexts",
nargs="*", type=str, default=None, action=ContextProjectAction)
parser.add_argument("projects", metavar="+PROJECT", help="show the next action for the specified projects",
diff --git a/next_action/cli.py b/next_action/cli.py
index 4618e10..93a7d57 100644
--- a/next_action/cli.py
+++ b/next_action/cli.py
@@ -1,7 +1,7 @@
""" Entry point for Next-action's command-line interface. """
from next_action.todotxt import Task
-from next_action.pick_action import next_action_based_on_priority
+from next_action.pick_action import next_actions
from next_action.arguments import parse_arguments
@@ -11,7 +11,7 @@ def next_action() -> None:
Basic recipe:
1) parse command-line arguments,
2) read todo.txt file,
- 3) determine the next action and display it.
+ 3) determine the next action(s) and display them.
"""
arguments = parse_arguments()
filename: str = arguments.file
@@ -21,6 +21,6 @@ def next_action() -> None:
print("Can't find {0}".format(filename))
return
with todotxt_file:
- tasks = [Task(line.strip()) for line in todotxt_file.readlines()]
- action = next_action_based_on_priority(tasks, set(arguments.contexts), set(arguments.projects))
- print(action.text if action else "Nothing to do!")
+ tasks = [Task(line.strip()) for line in todotxt_file.readlines() if line.strip()]
+ actions = next_actions(tasks, set(arguments.contexts), set(arguments.projects))
+ print("\n".join(action.text for action in actions[:arguments.number]) if actions else "Nothing to do!")
diff --git a/next_action/pick_action.py b/next_action/pick_action.py
index e565186..68584cb 100644
--- a/next_action/pick_action.py
+++ b/next_action/pick_action.py
@@ -1,17 +1,13 @@
-""" Algorithm for deciding the next action. """
+""" Algorithm for deciding the next action(s). """
-from typing import Optional, Set, Sequence
+from typing import Set, Sequence
from .todotxt import Task
-def next_action_based_on_priority(tasks: Sequence[Task], contexts: Set[str] = None,
- projects: Set[str] = None) -> Optional[Task]:
- """ Return the next action from the collection of tasks. """
- contexts = contexts or set()
- projects = projects or set()
+def next_actions(tasks: Sequence[Task], contexts: Set[str] = None, projects: Set[str] = None) -> Sequence[Task]:
+ """ Return the next action(s) from the collection of tasks. """
uncompleted_tasks = [task for task in tasks if not task.is_completed()]
- tasks_in_context = filter(lambda task: contexts <= task.contexts(), uncompleted_tasks)
+ tasks_in_context = filter(lambda task: contexts <= task.contexts() if contexts else True, uncompleted_tasks)
tasks_in_project = filter(lambda task: projects & task.projects() if projects else True, tasks_in_context)
- sorted_tasks = sorted(tasks_in_project, key=lambda task: task.priority() or "ZZZ")
- return sorted_tasks[0] if sorted_tasks else None
+ return sorted(tasks_in_project, key=lambda task: task.priority() or "ZZZ")
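
The rewritten `next_actions` sorts uncompleted tasks by priority, using the sentinel `"ZZZ"` so unprioritized tasks sort last. A standalone sketch of that idea (simplified: tasks here are plain strings and `priority` is a hypothetical helper, not the package's real `Task` class):

```python
import re

def priority(task_text):
    """Return the todo.txt priority letter, e.g. 'A' from '(A) ...', or None."""
    match = re.match(r"\(([A-Z])\) ", task_text)
    return match.group(1) if match else None

def next_actions(tasks):
    """Drop completed tasks, then sort by priority; tasks without one go last."""
    uncompleted = [task for task in tasks if not task.startswith("x ")]
    return sorted(uncompleted, key=lambda task: priority(task) or "ZZZ")

tasks = ["Walk the dog @home", "(B) Call mom", "x Done already", "(A) Buy wood"]
print(next_actions(tasks))  # ['(A) Buy wood', '(B) Call mom', 'Walk the dog @home']
```

Because the full sorted list is returned, the CLI can simply slice it (`actions[:arguments.number]`) to honor `--number`.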
| Allow for showing multiple next actions
$ `next-action -n3 # Or --number 3`
(A) World peace
(B) End hunger
Call mom @home | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index a737329..25d20ee 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -11,6 +11,9 @@ from next_action.arguments import parse_arguments
class ArgumentParserTest(unittest.TestCase):
""" Unit tests for the argument parses. """
+ usage_message = "usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] " \
+ "[+PROJECT [+PROJECT ...]]\n"
+
@patch.object(sys, "argv", ["next-action"])
def test_default_filename(self):
""" Test that the argument parser has a default filename. """
@@ -47,9 +50,7 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the context is empty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call("usage: next-action [-h] [--version] [-f FILE] [@CONTEXT [@CONTEXT ...]] "
- "[+PROJECT [+PROJECT ...]]\n"),
- call("next-action: error: Context name cannot be empty.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: Context name cannot be empty.\n")],
mock_stderr_write.call_args_list)
@patch.object(sys, "argv", ["next-action", "+DogHouse"])
@@ -68,9 +69,7 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the project is empty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call("usage: next-action [-h] [--version] [-f FILE] [@CONTEXT [@CONTEXT ...]] "
- "[+PROJECT [+PROJECT ...]]\n"),
- call("next-action: error: Project name cannot be empty.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: Project name cannot be empty.\n")],
mock_stderr_write.call_args_list)
@patch.object(sys, "argv", ["next-action", "+DogHouse", "@home", "+PaintHouse", "@weekend"])
@@ -85,7 +84,25 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the option is faulty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call("usage: next-action [-h] [--version] [-f FILE] [@CONTEXT [@CONTEXT ...]] "
- "[+PROJECT [+PROJECT ...]]\n"),
- call("next-action: error: Unrecognized argument 'home'.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: Unrecognized argument 'home'.\n")],
+ mock_stderr_write.call_args_list)
+
+ @patch.object(sys, "argv", ["next-action"])
+ def test_default_number(self):
+ """ Test that the argument parser has a default number of actions to return. """
+ self.assertEqual(1, parse_arguments().number)
+
+ @patch.object(sys, "argv", ["next-action", "--number", "3"])
+ def test_number(self):
+ """ Test that a number of actions to be shown can be passed. """
+ self.assertEqual(3, parse_arguments().number)
+
+ @patch.object(sys, "argv", ["next-action", "--number", "not_a_number"])
+ @patch.object(sys.stderr, "write")
+ def test_faulty_number(self, mock_stderr_write):
+ """ Test that the argument parser exits if the option is faulty. """
+ os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
+ self.assertRaises(SystemExit, parse_arguments)
+ self.assertEqual([call(self.usage_message),
+ call("next-action: error: argument -n/--number: invalid int value: 'not_a_number'\n")],
mock_stderr_write.call_args_list)
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index b517f34..8f8e68f 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -59,7 +59,7 @@ class CLITest(unittest.TestCase):
""" Test the help message. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, next_action)
- self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [@CONTEXT [@CONTEXT ...]] \
+ self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] \
[+PROJECT [+PROJECT ...]]
Show the next action in your todo.txt
@@ -72,6 +72,7 @@ optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
+ -n N, --number N number of next actions to show (default: 1)
"""),
mock_stdout_write.call_args_list[0])
@@ -81,3 +82,19 @@ optional arguments:
""" Test that --version shows the version number. """
self.assertRaises(SystemExit, next_action)
self.assertEqual([call("next-action {0}\n".format(__version__))], mock_stdout_write.call_args_list)
+
+ @patch.object(sys, "argv", ["next-action", "--number", "2"])
+ @patch("next_action.cli.open", mock_open(read_data="Walk the dog @home\n(A) Buy wood +DogHouse\n(B) Call mom\n"))
+ @patch.object(sys.stdout, "write")
+ def test_number(self, mock_stdout_write):
+ """ Test that the number of next actions can be specified. """
+ next_action()
+ self.assertEqual([call("(A) Buy wood +DogHouse\n(B) Call mom"), call("\n")], mock_stdout_write.call_args_list)
+
+ @patch.object(sys, "argv", ["next-action", "--number", "3"])
+ @patch("next_action.cli.open", mock_open(read_data="\nWalk the dog @home\n \n(B) Call mom\n"))
+ @patch.object(sys.stdout, "write")
+ def test_ignore_empty_lines(self, mock_stdout_write):
+ """ Test that empty lines in the todo.txt file are ignored. """
+ next_action()
+ self.assertEqual([call("(B) Call mom\nWalk the dog @home"), call("\n")], mock_stdout_write.call_args_list)
diff --git a/tests/unittests/test_pick_action.py b/tests/unittests/test_pick_action.py
index cf62887..9ad1364 100644
--- a/tests/unittests/test_pick_action.py
+++ b/tests/unittests/test_pick_action.py
@@ -10,18 +10,18 @@ class PickActionTest(unittest.TestCase):
def test_no_tasks(self):
""" Test that no tasks means no next action. """
- self.assertEqual(None, pick_action.next_action_based_on_priority([]))
+ self.assertEqual([], pick_action.next_actions([]))
def test_one_task(self):
""" If there is one task, that one is the next action. """
task = todotxt.Task("Todo")
- self.assertEqual(task, pick_action.next_action_based_on_priority([task]))
+ self.assertEqual([task], pick_action.next_actions([task]))
def test_multiple_tasks(self):
""" If there are multiple tasks, the first is the next action. """
task1 = todotxt.Task("Todo 1")
task2 = todotxt.Task("Todo 2")
- self.assertEqual(task1, pick_action.next_action_based_on_priority([task1, task2]))
+ self.assertEqual([task1, task2], pick_action.next_actions([task1, task2]))
def test_higher_prio_goes_first(self):
""" If there are multiple tasks with different priorities, the task with the
@@ -29,47 +29,45 @@ class PickActionTest(unittest.TestCase):
task1 = todotxt.Task("Todo 1")
task2 = todotxt.Task("(B) Todo 2")
task3 = todotxt.Task("(A) Todo 3")
- self.assertEqual(task3, pick_action.next_action_based_on_priority([task1, task2, task3]))
+ self.assertEqual([task3, task2, task1], pick_action.next_actions([task1, task2, task3]))
def test_completed_task_is_ignored(self):
""" If there's one completed and one uncompleted task, the uncompleted one is the next action. """
completed_task = todotxt.Task("x Completed")
uncompleted_task = todotxt.Task("Todo")
- self.assertEqual(uncompleted_task,
- pick_action.next_action_based_on_priority([completed_task, uncompleted_task]))
+ self.assertEqual([uncompleted_task], pick_action.next_actions([completed_task, uncompleted_task]))
def test_completed_tasks_only(self):
""" If all tasks are completed, there's no next action. """
completed_task1 = todotxt.Task("x Completed")
completed_task2 = todotxt.Task("x Completed too")
- self.assertEqual(None, pick_action.next_action_based_on_priority([completed_task1, completed_task2]))
+ self.assertEqual([], pick_action.next_actions([completed_task1, completed_task2]))
def test_context(self):
""" Test that the next action can be limited to a specific context. """
task1 = todotxt.Task("Todo 1 @work")
task2 = todotxt.Task("(B) Todo 2 @work")
task3 = todotxt.Task("(A) Todo 3 @home")
- self.assertEqual(task2, pick_action.next_action_based_on_priority([task1, task2, task3], contexts={"work"}))
+ self.assertEqual([task2, task1], pick_action.next_actions([task1, task2, task3], contexts={"work"}))
def test_contexts(self):
""" Test that the next action can be limited to a set of contexts. """
task1 = todotxt.Task("Todo 1 @work @computer")
task2 = todotxt.Task("(B) Todo 2 @work @computer")
task3 = todotxt.Task("(A) Todo 3 @home @computer")
- self.assertEqual(task2, pick_action.next_action_based_on_priority([task1, task2, task3],
- contexts={"work", "computer"}))
+ self.assertEqual([task2, task1], pick_action.next_actions([task1, task2, task3], contexts={"work", "computer"}))
def test_project(self):
""" Test that the next action can be limited to a specific project. """
task1 = todotxt.Task("Todo 1 +ProjectX")
task2 = todotxt.Task("(B) Todo 2 +ProjectX")
task3 = todotxt.Task("(A) Todo 3 +ProjectY")
- self.assertEqual(task2, pick_action.next_action_based_on_priority([task1, task2, task3], projects={"ProjectX"}))
+ self.assertEqual([task2, task1], pick_action.next_actions([task1, task2, task3], projects={"ProjectX"}))
def test_project_and_context(self):
""" Test that the next action can be limited to a specific project and context. """
task1 = todotxt.Task("Todo 1 +ProjectX @office")
task2 = todotxt.Task("(B) Todo 2 +ProjectX")
task3 = todotxt.Task("(A) Todo 3 +ProjectY")
- self.assertEqual(task1, pick_action.next_action_based_on_priority([task1, task2, task3], projects={"ProjectX"},
- contexts={"office"}))
+ self.assertEqual([task1],
+ pick_action.next_actions([task1, task2, task3], projects={"ProjectX"}, contexts={"office"}))
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 6
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@b9546fdd3eb1f6b43bbfd0f92fc7947c7bc9575d#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_number",
"tests/unittests/test_cli.py::CLITest::test_ignore_empty_lines",
"tests/unittests/test_cli.py::CLITest::test_number",
"tests/unittests/test_pick_action.py::PickActionTest::test_completed_task_is_ignored",
"tests/unittests/test_pick_action.py::PickActionTest::test_completed_tasks_only",
"tests/unittests/test_pick_action.py::PickActionTest::test_context",
"tests/unittests/test_pick_action.py::PickActionTest::test_contexts",
"tests/unittests/test_pick_action.py::PickActionTest::test_higher_prio_goes_first",
"tests/unittests/test_pick_action.py::PickActionTest::test_multiple_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_no_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_one_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_project",
"tests/unittests/test_pick_action.py::PickActionTest::test_project_and_context"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_project",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_option",
"tests/unittests/test_cli.py::CLITest::test_help"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_contexts_and_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_contexts",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_project",
"tests/unittests/test_cli.py::CLITest::test_context",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_one_task",
"tests/unittests/test_cli.py::CLITest::test_project",
"tests/unittests/test_cli.py::CLITest::test_version"
]
| []
| Apache License 2.0 | 2,509 | [
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
| [
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
|
|
fniessink__next-action-31 | 41a4b998821978a9a7ed832d8b6865a8e07a46c1 | 2018-05-12 22:00:47 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 684bdb0..0c3dbfe 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
### Added
- Specify the number of next actions to show: `next-action --number 3`. Closes #7.
+- Show all next actions: `next-action --all`. Closes #29.
## [0.0.7] - 2018-05-12
diff --git a/README.md b/README.md
index 1c06d62..4876ffb 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ Don't know what *Todo.txt* is? See <https://github.com/todotxt/todo.txt> for the
```console
$ next-action --help
-usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]]
+usage: next-action [-h] [--version] [-f FILE] [-n N | -a] [@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]]
Show the next action in your todo.txt
@@ -36,6 +36,7 @@ optional arguments:
--version show program's version number and exit
-f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
-n N, --number N number of next actions to show (default: 1)
+ -a, --all show all next actions (default: False)
```
Assuming your todo.txt file is in the current folder, running *Next-action* without arguments will show the next action you should do based on your tasks' priorities:
@@ -74,6 +75,19 @@ $ next-action --number 3
(C) Finish proposal for important client @work
```
+Or even show all next actions:
+
+```console
+$ next-action --all
+(A) Call mom @phone
+(B) Buy paint to +PaintHouse @store @weekend
+(C) Finish proposal for important client @work
+(G) Buy wood for new +DogHouse @store
+...
+```
+
+Note that completed tasks are never shown since they can't be a next action.
+
Since *Next-action* is still pre-alpha-stage, this is it for the moment. Stay tuned for more options.
## Develop
diff --git a/next_action/arguments.py b/next_action/arguments.py
index 51249fc..ec87cb1 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -3,6 +3,7 @@
import argparse
import os
import shutil
+import sys
from typing import Any, Sequence, Union
import next_action
@@ -17,12 +18,12 @@ class ContextProjectAction(argparse.Action): # pylint: disable=too-few-public-m
contexts = []
projects = []
for value in values:
- if self.__is_valid("Context", "@", value, parser):
+ if self.__is_valid("context", "@", value, parser):
contexts.append(value.strip("@"))
- elif self.__is_valid("Project", "+", value, parser):
+ elif self.__is_valid("project", "+", value, parser):
projects.append(value.strip("+"))
else:
- parser.error("Unrecognized argument '{0}'.".format(value))
+ parser.error("unrecognized argument: {0}".format(value))
if namespace.contexts is None:
namespace.contexts = contexts
if namespace.projects is None:
@@ -35,7 +36,7 @@ class ContextProjectAction(argparse.Action): # pylint: disable=too-few-public-m
if len(value) > 1:
return True
else:
- parser.error("{0} name cannot be empty.".format(argument_type.capitalize()))
+ parser.error("{0} name cannot be empty".format(argument_type))
return False
@@ -48,9 +49,14 @@ def parse_arguments() -> argparse.Namespace:
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--version", action="version", version="%(prog)s {0}".format(next_action.__version__))
parser.add_argument("-f", "--file", help="filename of the todo.txt file to read", type=str, default="todo.txt")
- parser.add_argument("-n", "--number", metavar="N", help="number of next actions to show", type=int, default=1)
+ group = parser.add_mutually_exclusive_group()
+ group.add_argument("-n", "--number", metavar="N", help="number of next actions to show", type=int, default=1)
+ group.add_argument("-a", "--all", help="show all next actions", action="store_true")
parser.add_argument("contexts", metavar="@CONTEXT", help="show the next action in the specified contexts",
nargs="*", type=str, default=None, action=ContextProjectAction)
parser.add_argument("projects", metavar="+PROJECT", help="show the next action for the specified projects",
nargs="*", type=str, default=None, action=ContextProjectAction)
- return parser.parse_args()
+ namespace = parser.parse_args()
+ if namespace.all:
+ namespace.number = sys.maxsize
+ return namespace
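
The `-n`/`-a` conflict handling above uses argparse's mutually exclusive groups; passing both makes argparse exit with a "not allowed with" error. A minimal standalone sketch of that pattern (options reduced to the two that matter; `--all` is mapped to `sys.maxsize` just as in the patch):

```python
import argparse
import sys

def parse_arguments(argv):
    """Sketch of the mutually exclusive -n/--number vs -a/--all options."""
    parser = argparse.ArgumentParser(prog="next-action")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-n", "--number", metavar="N", type=int, default=1,
                       help="number of next actions to show")
    group.add_argument("-a", "--all", action="store_true",
                       help="show all next actions")
    namespace = parser.parse_args(argv)
    if namespace.all:
        # --all is implemented by requesting an effectively unlimited number
        namespace.number = sys.maxsize
    return namespace

print(parse_arguments(["--all"]).number)          # sys.maxsize
print(parse_arguments(["--number", "3"]).number)  # 3
```

Combining them (`parse_arguments(["--all", "-n", "3"])`) raises `SystemExit` via `parser.error`, which is exactly what `test_all_and_number` in the test patch asserts.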
| Allow for showing all (next) actions
```console
$ next-action --all
(A) World peace
(B) End hunger
(C) Call mom @home
Make a pizza @home
``` | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index 25d20ee..6fa5f60 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -11,7 +11,7 @@ from next_action.arguments import parse_arguments
class ArgumentParserTest(unittest.TestCase):
""" Unit tests for the argument parses. """
- usage_message = "usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] " \
+ usage_message = "usage: next-action [-h] [--version] [-f FILE] [-n N | -a] [@CONTEXT [@CONTEXT ...]] " \
"[+PROJECT [+PROJECT ...]]\n"
@patch.object(sys, "argv", ["next-action"])
@@ -50,7 +50,7 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the context is empty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call(self.usage_message), call("next-action: error: Context name cannot be empty.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: context name cannot be empty\n")],
mock_stderr_write.call_args_list)
@patch.object(sys, "argv", ["next-action", "+DogHouse"])
@@ -69,7 +69,7 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the project is empty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call(self.usage_message), call("next-action: error: Project name cannot be empty.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: project name cannot be empty\n")],
mock_stderr_write.call_args_list)
@patch.object(sys, "argv", ["next-action", "+DogHouse", "@home", "+PaintHouse", "@weekend"])
@@ -84,7 +84,7 @@ class ArgumentParserTest(unittest.TestCase):
""" Test that the argument parser exits if the option is faulty. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, parse_arguments)
- self.assertEqual([call(self.usage_message), call("next-action: error: Unrecognized argument 'home'.\n")],
+ self.assertEqual([call(self.usage_message), call("next-action: error: unrecognized argument: home\n")],
mock_stderr_write.call_args_list)
@patch.object(sys, "argv", ["next-action"])
@@ -106,3 +106,19 @@ class ArgumentParserTest(unittest.TestCase):
self.assertEqual([call(self.usage_message),
call("next-action: error: argument -n/--number: invalid int value: 'not_a_number'\n")],
mock_stderr_write.call_args_list)
+
+ @patch.object(sys, "argv", ["next-action", "--all"])
+ def test_all_actions(self):
+ """ Test that --all option also sets the number of actions to show to a very big number. """
+ self.assertTrue(parse_arguments().all)
+ self.assertEqual(sys.maxsize, parse_arguments().number)
+
+ @patch.object(sys, "argv", ["next-action", "--all", "--number", "3"])
+ @patch.object(sys.stderr, "write")
+ def test_all_and_number(self, mock_stderr_write):
+ """ Test that the argument parser exits if the both --all and --number are used. """
+ os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
+ self.assertRaises(SystemExit, parse_arguments)
+ self.assertEqual([call(self.usage_message),
+ call("next-action: error: argument -n/--number: not allowed with argument -a/--all\n")],
+ mock_stderr_write.call_args_list)
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 8f8e68f..1b80920 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -59,7 +59,7 @@ class CLITest(unittest.TestCase):
""" Test the help message. """
os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
self.assertRaises(SystemExit, next_action)
- self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [-n N] [@CONTEXT [@CONTEXT ...]] \
+ self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [-n N | -a] [@CONTEXT [@CONTEXT ...]] \
[+PROJECT [+PROJECT ...]]
Show the next action in your todo.txt
@@ -73,6 +73,7 @@ optional arguments:
--version show program's version number and exit
-f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
-n N, --number N number of next actions to show (default: 1)
+ -a, --all show all next actions (default: False)
"""),
mock_stdout_write.call_args_list[0])
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 3
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@41a4b998821978a9a7ed832d8b6865a8e07a46c1#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_actions"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_and_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_project",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_option",
"tests/unittests/test_cli.py::CLITest::test_help"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_contexts_and_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_contexts",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_project",
"tests/unittests/test_cli.py::CLITest::test_context",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_ignore_empty_lines",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_number",
"tests/unittests/test_cli.py::CLITest::test_one_task",
"tests/unittests/test_cli.py::CLITest::test_project",
"tests/unittests/test_cli.py::CLITest::test_version"
]
| []
| Apache License 2.0 | 2,510 | [
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
| [
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
|
|
altair-viz__altair-830 | e6bdaf4ca09fbc83f121e7b0e835a43f665e8694 | 2018-05-13 05:16:59 | f676bd0875da1b51ee65661e8e6441d9c5e0c3dc | diff --git a/altair/sphinxext/altairgallery.py b/altair/sphinxext/altairgallery.py
index a413bf98..6309cda2 100644
--- a/altair/sphinxext/altairgallery.py
+++ b/altair/sphinxext/altairgallery.py
@@ -34,12 +34,6 @@ This gallery contains a selection of examples of the plots Altair can create.
Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.
-Many draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.
-
-.. code-block::
-
- $ pip install vega_datasets
-
{% for grouper, group in examples %}
.. _gallery-category-{{ grouper }}:
diff --git a/altair/vegalite/v1/schema/channels.py b/altair/vegalite/v1/schema/channels.py
index 046d99d9..500a36d7 100644
--- a/altair/vegalite/v1/schema/channels.py
+++ b/altair/vegalite/v1/schema/channels.py
@@ -15,6 +15,12 @@ class FieldChannelMixin(object):
context = context or {}
if self.shorthand is Undefined:
kwds = {}
+ elif isinstance(self.shorthand, (tuple, list)):
+ # If given a list of shorthands, then transform it to a list of classes
+ kwds = self._kwds.copy()
+ kwds.pop('shorthand')
+ return [self.__class__(shorthand, **kwds).to_dict()
+ for shorthand in self.shorthand]
elif isinstance(self.shorthand, six.string_types):
kwds = parse_shorthand(self.shorthand, data=context.get('data', None))
type_defined = self._kwds.get('type', Undefined) is not Undefined
diff --git a/altair/vegalite/v2/api.py b/altair/vegalite/v2/api.py
index 557fe897..5a94eb4b 100644
--- a/altair/vegalite/v2/api.py
+++ b/altair/vegalite/v2/api.py
@@ -860,9 +860,14 @@ class EncodingMixin(object):
def encode(self, *args, **kwargs):
# First convert args to kwargs by inferring the class from the argument
if args:
- mapping = _get_channels_mapping()
+ channels_mapping = _get_channels_mapping()
for arg in args:
- encoding = mapping.get(type(arg), None)
+ if isinstance(arg, (list, tuple)) and len(arg) > 0:
+ type_ = type(arg[0])
+ else:
+ type_ = type(arg)
+
+ encoding = channels_mapping.get(type_, None)
if encoding is None:
raise NotImplementedError("non-keyword arg of type {0}"
"".format(type(arg)))
@@ -880,6 +885,9 @@ class EncodingMixin(object):
if isinstance(obj, six.string_types):
obj = {'shorthand': obj}
+ if isinstance(obj, (list, tuple)):
+ return [_wrap_in_channel_class(subobj, prop) for subobj in obj]
+
if 'value' in obj:
clsname += 'Value'
diff --git a/altair/vegalite/v2/examples/simple_scatter_with_errorbars.py b/altair/vegalite/v2/examples/simple_scatter_with_errorbars.py
deleted file mode 100644
index 9118e497..00000000
--- a/altair/vegalite/v2/examples/simple_scatter_with_errorbars.py
+++ /dev/null
@@ -1,46 +0,0 @@
-"""
-Simple Scatter Plot with Errorbars
-----------------------------------
-
-A simple scatter plot of a data set with errorbars.
-
-"""
-# category: scatter plots
-
-import altair as alt
-
-import numpy as np
-import pandas as pd
-
-# generate some data points with uncertainties
-np.random.seed(0)
-x = [1, 2, 3, 4, 5]
-y = np.random.normal(10, 0.5, size=len(x))
-yerr = 0.2
-
-# set up data frame
-data = pd.DataFrame({"x":x, "y":y, "yerr":yerr})
-
-# generate the points
-points = alt.Chart(data).mark_point(filled=True, size=50).encode(
- alt.X("x",
- scale=alt.Scale(domain=(0,6)),
- axis=alt.Axis(title='x')
- ),
- y=alt.Y('y',
- scale=alt.Scale(zero=False, domain=(10, 11)),
- axis=alt.Axis(title="y")),
- color=alt.value('black')
-)
-
-# generate the error bars
-errorbars = alt.Chart(data).mark_rule().encode(
- x=alt.X("x"),
- y="ymin:Q",
- y2="ymax:Q"
-).transform_calculate(
- ymin="datum.y-datum.yerr",
- ymax="datum.y+datum.yerr"
-)
-
-points + errorbars
diff --git a/altair/vegalite/v2/examples/stem_and_leaf.py b/altair/vegalite/v2/examples/stem_and_leaf.py
index dd3d328d..604fe837 100644
--- a/altair/vegalite/v2/examples/stem_and_leaf.py
+++ b/altair/vegalite/v2/examples/stem_and_leaf.py
@@ -1,7 +1,7 @@
"""
-Stem and Leaf Plot
-------------------
-This example shows how to make a stem and leaf plot.
+Steam and Leaf Plot
+-------------------
+This example shows how to make a steam and leaf plot.
"""
# category: other charts
import altair as alt
@@ -12,7 +12,7 @@ np.random.seed(42)
# Generating random data
original_data = pd.DataFrame({'samples': np.array(np.random.normal(50, 15, 100), dtype=np.int)})
-# Splitting stem and leaf
+# Splitting steam and leaf
original_data['stem'] = original_data['samples'].apply(lambda x: str(x)[:-1])
original_data['leaf'] = original_data['samples'].apply(lambda x: str(x)[-1])
diff --git a/altair/vegalite/v2/schema/channels.py b/altair/vegalite/v2/schema/channels.py
index bc9c40e7..b780dd88 100644
--- a/altair/vegalite/v2/schema/channels.py
+++ b/altair/vegalite/v2/schema/channels.py
@@ -15,6 +15,12 @@ class FieldChannelMixin(object):
context = context or {}
if self.shorthand is Undefined:
kwds = {}
+ elif isinstance(self.shorthand, (tuple, list)):
+ # If given a list of shorthands, then transform it to a list of classes
+ kwds = self._kwds.copy()
+ kwds.pop('shorthand')
+ return [self.__class__(shorthand, **kwds).to_dict()
+ for shorthand in self.shorthand]
elif isinstance(self.shorthand, six.string_types):
kwds = parse_shorthand(self.shorthand, data=context.get('data', None))
type_defined = self._kwds.get('type', Undefined) is not Undefined
diff --git a/doc/user_guide/saving_charts.rst b/doc/user_guide/saving_charts.rst
index cddaefa4..308530a6 100644
--- a/doc/user_guide/saving_charts.rst
+++ b/doc/user_guide/saving_charts.rst
@@ -74,7 +74,7 @@ For example, here we save a simple scatter-plot to JSON:
chart.save('chart.json')
-The contents of the resulting file will look something like this:
+The contetns of the resulting file will look something like this:
.. code-block:: json
diff --git a/setup.py b/setup.py
index 696329a4..92c40447 100644
--- a/setup.py
+++ b/setup.py
@@ -126,13 +126,12 @@ setup(name=NAME,
'dev': DEV_REQUIRES
},
classifiers=[
- 'Development Status :: 5 - Production/Stable',
+ 'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.4',
- 'Programming Language :: Python :: 3.5',
- 'Programming Language :: Python :: 3.6'],
+ 'Programming Language :: Python :: 3.5'],
)
diff --git a/tools/generate_schema_wrapper.py b/tools/generate_schema_wrapper.py
index 0caae349..c0ad0aad 100644
--- a/tools/generate_schema_wrapper.py
+++ b/tools/generate_schema_wrapper.py
@@ -68,6 +68,12 @@ class FieldChannelMixin(object):
context = context or {}
if self.shorthand is Undefined:
kwds = {}
+ elif isinstance(self.shorthand, (tuple, list)):
+ # If given a list of shorthands, then transform it to a list of classes
+ kwds = self._kwds.copy()
+ kwds.pop('shorthand')
+ return [self.__class__(shorthand, **kwds).to_dict()
+ for shorthand in self.shorthand]
elif isinstance(self.shorthand, six.string_types):
kwds = parse_shorthand(self.shorthand, data=context.get('data', None))
type_defined = self._kwds.get('type', Undefined) is not Undefined
| ENH: encodings with multiple fields (e.g. tooltip)
We need to update the ``encode()`` method so that it will handle multiple encoding fields in cases where it is supported.
For example, in the most recent vega-lite release, it is possible to pass multiple fields to the tooltip encoding:
```python
from vega_datasets import data
cars = data.cars()
chart = alt.Chart(cars).mark_point().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin'
)
chart.encoding.tooltip = [{'field': 'Name', 'type': 'nominal'},
{'field': 'Origin', 'type': 'nominal'}]
chart
```
<img width="534" alt="screen shot 2018-05-08 at 10 37 35 am" src="https://user-images.githubusercontent.com/781659/39773111-e8c2f8dc-52ab-11e8-9d4b-55c7eb4f31b5.png">
Altair should make this available via a simpler API, such as
```python
from vega_datasets import data
cars = data.cars()
chart = alt.Chart(cars).mark_point().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Origin']
)
``` | altair-viz/altair | diff --git a/altair/vegalite/v2/tests/test_api.py b/altair/vegalite/v2/tests/test_api.py
index 1b78f112..b78e6963 100644
--- a/altair/vegalite/v2/tests/test_api.py
+++ b/altair/vegalite/v2/tests/test_api.py
@@ -90,6 +90,30 @@ def test_chart_infer_types():
assert dct['encoding']['y']['type'] == 'ordinal'
+def test_multiple_encodings():
+ encoding_dct = [{'field': 'value', 'type': 'quantitative'},
+ {'field': 'name', 'type': 'nominal'}]
+ chart1 = alt.Chart('data.csv').mark_point().encode(
+ detail=['value:Q', 'name:N'],
+ tooltip=['value:Q', 'name:N']
+ )
+
+ chart2 = alt.Chart('data.csv').mark_point().encode(
+ alt.Detail(['value:Q', 'name:N']),
+ alt.Tooltip(['value:Q', 'name:N'])
+ )
+
+ chart3 = alt.Chart('data.csv').mark_point().encode(
+ [alt.Detail('value:Q'), alt.Detail('name:N')],
+ [alt.Tooltip('value:Q'), alt.Tooltip('name:N')]
+ )
+
+ for chart in [chart1, chart2, chart3]:
+ dct = chart.to_dict()
+ assert dct['encoding']['detail'] == encoding_dct
+ assert dct['encoding']['tooltip'] == encoding_dct
+
+
def test_chart_operations():
data = pd.DataFrame({'x': pd.date_range('2012', periods=10, freq='Y'),
'y': range(10),
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_media",
"has_removed_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 8
} | 2.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
-e git+https://github.com/altair-viz/altair.git@e6bdaf4ca09fbc83f121e7b0e835a43f665e8694#egg=altair
asttokens==3.0.0
attrs==25.3.0
babel==2.17.0
certifi==2025.1.31
charset-normalizer==3.4.1
decorator==5.2.1
docutils==0.21.2
entrypoints==0.4
exceptiongroup==1.2.2
executing==2.2.0
flake8==7.2.0
idna==3.10
imagesize==1.4.1
importlib_metadata==8.6.1
iniconfig==2.1.0
ipython==8.18.1
jedi==0.19.2
Jinja2==3.1.6
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
m2r==0.3.1
MarkupSafe==3.0.2
matplotlib-inline==0.1.7
mccabe==0.7.0
mistune==0.8.4
numpy==2.0.2
packaging==24.2
pandas==2.2.3
parso==0.8.4
pexpect==4.9.0
pluggy==1.5.0
prompt_toolkit==3.0.50
ptyprocess==0.7.0
pure_eval==0.2.3
pycodestyle==2.13.0
pyflakes==3.3.1
Pygments==2.19.1
pytest==8.3.5
python-dateutil==2.9.0.post0
pytz==2025.2
referencing==0.36.2
requests==2.32.3
rpds-py==0.24.0
six==1.17.0
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
stack-data==0.6.3
tomli==2.2.1
toolz==1.0.0
traitlets==5.14.3
typing==3.7.4.3
typing_extensions==4.13.0
tzdata==2025.2
urllib3==2.3.0
vega-datasets==0.9.0
wcwidth==0.2.13
zipp==3.21.0
| name: altair
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- asttokens==3.0.0
- attrs==25.3.0
- babel==2.17.0
- certifi==2025.1.31
- charset-normalizer==3.4.1
- decorator==5.2.1
- docutils==0.21.2
- entrypoints==0.4
- exceptiongroup==1.2.2
- executing==2.2.0
- flake8==7.2.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- ipython==8.18.1
- jedi==0.19.2
- jinja2==3.1.6
- jsonschema==4.23.0
- jsonschema-specifications==2024.10.1
- m2r==0.3.1
- markupsafe==3.0.2
- matplotlib-inline==0.1.7
- mccabe==0.7.0
- mistune==0.8.4
- numpy==2.0.2
- packaging==24.2
- pandas==2.2.3
- parso==0.8.4
- pexpect==4.9.0
- pluggy==1.5.0
- prompt-toolkit==3.0.50
- ptyprocess==0.7.0
- pure-eval==0.2.3
- pycodestyle==2.13.0
- pyflakes==3.3.1
- pygments==2.19.1
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- pytz==2025.2
- referencing==0.36.2
- requests==2.32.3
- rpds-py==0.24.0
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==7.4.7
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- stack-data==0.6.3
- tomli==2.2.1
- toolz==1.0.0
- traitlets==5.14.3
- typing==3.7.4.3
- typing-extensions==4.13.0
- tzdata==2025.2
- urllib3==2.3.0
- vega-datasets==0.9.0
- wcwidth==0.2.13
- zipp==3.21.0
prefix: /opt/conda/envs/altair
| [
"altair/vegalite/v2/tests/test_api.py::test_multiple_encodings"
]
| [
"altair/vegalite/v2/tests/test_api.py::test_chart_data_types",
"altair/vegalite/v2/tests/test_api.py::test_chart_infer_types",
"altair/vegalite/v2/tests/test_api.py::test_facet_parse_data",
"altair/vegalite/v2/tests/test_api.py::test_LookupData"
]
| [
"altair/vegalite/v2/tests/test_api.py::test_chart_operations",
"altair/vegalite/v2/tests/test_api.py::test_selection_to_dict",
"altair/vegalite/v2/tests/test_api.py::test_facet_parse",
"altair/vegalite/v2/tests/test_api.py::test_SelectionMapping",
"altair/vegalite/v2/tests/test_api.py::test_transforms",
"altair/vegalite/v2/tests/test_api.py::test_resolve_methods",
"altair/vegalite/v2/tests/test_api.py::test_themes",
"altair/vegalite/v2/tests/test_api.py::test_chart_from_dict"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,511 | [
"altair/vegalite/v2/examples/simple_scatter_with_errorbars.py",
"altair/vegalite/v2/schema/channels.py",
"altair/vegalite/v2/examples/stem_and_leaf.py",
"setup.py",
"altair/vegalite/v1/schema/channels.py",
"altair/vegalite/v2/api.py",
"tools/generate_schema_wrapper.py",
"altair/sphinxext/altairgallery.py",
"doc/user_guide/saving_charts.rst"
]
| [
"altair/vegalite/v2/examples/simple_scatter_with_errorbars.py",
"altair/vegalite/v2/schema/channels.py",
"altair/vegalite/v2/examples/stem_and_leaf.py",
"setup.py",
"altair/vegalite/v1/schema/channels.py",
"altair/vegalite/v2/api.py",
"tools/generate_schema_wrapper.py",
"altair/sphinxext/altairgallery.py",
"doc/user_guide/saving_charts.rst"
]
|
|
sernst__pipper-5 | 3c6bec29901aa6f4e65e7024edd8ded3f85f9a3c | 2018-05-13 14:44:13 | 3c6bec29901aa6f4e65e7024edd8ded3f85f9a3c | codecov-io: # [Codecov](https://codecov.io/gh/sernst/pipper/pull/5?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@6ff102b`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `90.19%`.
```diff
@@ Coverage Diff @@
## master #5 +/- ##
=========================================
Coverage ? 48.19%
=========================================
Files ? 16
Lines ? 884
Branches ? 0
=========================================
Hits ? 426
Misses ? 458
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/sernst/pipper/pull/5?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [pipper/test/versioning/test\_serialize\_prefix.py](https://codecov.io/gh/sernst/pipper/pull/5/diff?src=pr&el=tree#diff-cGlwcGVyL3Rlc3QvdmVyc2lvbmluZy90ZXN0X3NlcmlhbGl6ZV9wcmVmaXgucHk=) | `100% <100%> (ø)` | |
| [pipper/test/utils.py](https://codecov.io/gh/sernst/pipper/pull/5/diff?src=pr&el=tree#diff-cGlwcGVyL3Rlc3QvdXRpbHMucHk=) | `100% <100%> (ø)` | |
| [pipper/test/test\_commands.py](https://codecov.io/gh/sernst/pipper/pull/5/diff?src=pr&el=tree#diff-cGlwcGVyL3Rlc3QvdGVzdF9jb21tYW5kcy5weQ==) | `100% <100%> (ø)` | |
| [pipper/s3.py](https://codecov.io/gh/sernst/pipper/pull/5/diff?src=pr&el=tree#diff-cGlwcGVyL3MzLnB5) | `34.78% <66.66%> (ø)` | |
| [pipper/versioning.py](https://codecov.io/gh/sernst/pipper/pull/5/diff?src=pr&el=tree#diff-cGlwcGVyL3ZlcnNpb25pbmcucHk=) | `57.25% <81.25%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/sernst/pipper/pull/5?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/sernst/pipper/pull/5?src=pr&el=footer). Last update [6ff102b...894d4e6](https://codecov.io/gh/sernst/pipper/pull/5?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/.coveragerc b/.coveragerc
new file mode 100644
index 0000000..3880345
--- /dev/null
+++ b/.coveragerc
@@ -0,0 +1,3 @@
+[run]
+omit =
+ setup.py
diff --git a/conda.dockerfile b/conda.dockerfile
new file mode 100644
index 0000000..383fdef
--- /dev/null
+++ b/conda.dockerfile
@@ -0,0 +1,11 @@
+FROM continuumio/miniconda3:latest
+
+RUN apt-get update \
+ && apt-get install nano \
+ && pip install pip --upgrade
+
+COPY requirements.txt /root/requirements.txt
+
+RUN pip install -r /root/requirements.txt
+
+WORKDIR "/root/"
diff --git a/docker-compose.yaml b/docker-compose.yaml
index f07622f..7c6db5b 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -1,8 +1,23 @@
version: "3"
services:
- dev:
- build: .
+ vanilla:
+ build:
+ context: .
+ dockerfile: vanilla.dockerfile
entrypoint: /bin/bash
+ stdin_open: true
+ tty: true
+ volumes:
+ - ~/.aws:/root/.aws
+ - ./:/root/libraries
+ - ~/.pipper:/root/.pipper
+ conda:
+ build:
+ context: .
+ dockerfile: conda.dockerfile
+ entrypoint: /bin/bash
+ stdin_open: true
+ tty: true
volumes:
- ~/.aws:/root/.aws
- ./:/root/libraries
diff --git a/pipper/__init__.py b/pipper/__init__.py
index 12fc0f7..e69de29 100644
--- a/pipper/__init__.py
+++ b/pipper/__init__.py
@@ -1,14 +0,0 @@
-"""
-This package requires that pip be available for import or many
-operations will fail. So a test import of pip is tried immediately.
-
-The pip library also has a bug (9.0.2) that requires it be imported
-before the requests library is imported or it will fail. This initial
-import serves to maintain order as well until that bug is fixed.
-"""
-
-try:
- import pip # noqa
-except ImportError:
- print('Pipper requires that the pip package be available for import')
- raise
diff --git a/pipper/authorizer.py b/pipper/authorizer.py
index d054c51..40b93bb 100644
--- a/pipper/authorizer.py
+++ b/pipper/authorizer.py
@@ -12,7 +12,7 @@ DELTA_REGEX = re.compile('(?P<number>[0-9]+)\s*(?P<unit>[a-zA-Z]+)')
def to_time_delta(age: str) -> timedelta:
- """
+ """
Converts an age string into a timedelta object, parsing the number and
units of the age. Valid units are:
* hour, hrs, hr, h
@@ -24,7 +24,6 @@ def to_time_delta(age: str) -> timedelta:
24mins -> 24 minutes
3hours -> 3 hours
"""
-
try:
result = DELTA_REGEX.search(age)
unit = result.group('unit').lower()
@@ -41,7 +40,7 @@ def to_time_delta(age: str) -> timedelta:
def parse_url(pipper_url: str) -> dict:
- """
+ """
Parses a wheel url into a dictionary containing information about the wheel
file
"""
diff --git a/pipper/bundler.py b/pipper/bundler.py
index c302749..1072ca6 100644
--- a/pipper/bundler.py
+++ b/pipper/bundler.py
@@ -18,7 +18,7 @@ def zip_bundle(
output_directory: str,
distribution_data: dict
) -> str:
- """
+ """
Creates a pipper zip file from the temporarily stored meta data and wheel
files and saves that zip file to the output directory location with the
pipper extension.
@@ -56,7 +56,7 @@ def create_meta(
bundle_directory: str,
distribution_data: dict
) -> str:
- """
+ """
Creates a JSON-formatted metadata file with information about the package
being bundled that is saved into the specified output directory.
@@ -100,7 +100,7 @@ def create_meta(
def create_wheel(package_directory: str, bundle_directory: str) -> dict:
- """
+ """
Creates a universally wheel distribution of the specified package and
saves that to the bundle directory.
@@ -146,7 +146,7 @@ def create_wheel(package_directory: str, bundle_directory: str) -> dict:
def run(env: Environment):
- """
+ """
Executes the bundling process on the specified package directory and saves
the pipper bundle file in the specified output directory.
diff --git a/pipper/command.py b/pipper/command.py
index 7ed073a..7f2a1d4 100644
--- a/pipper/command.py
+++ b/pipper/command.py
@@ -41,7 +41,6 @@ def show_version(env: Environment):
def run(cli_args: list = None):
""" """
-
args = parser.parse(cli_args)
env = Environment(args)
diff --git a/pipper/downloader.py b/pipper/downloader.py
index bab9daa..ad18a03 100644
--- a/pipper/downloader.py
+++ b/pipper/downloader.py
@@ -1,42 +1,24 @@
-import os
import json
-import zipfile
+import os
import shutil
-from urllib.parse import urlparse
+import zipfile
from contextlib import closing
import requests
+
from pipper import environment
-from pipper import info
from pipper import versioning
from pipper import wrapper
from pipper.environment import Environment
-def parse_package_url(package_url: str) -> dict:
- """ """
-
- url_data = urlparse(package_url)
- parts = url_data.path.strip('/').split('/')
- filename = parts[-1]
- safe_version = filename.rsplit('.', 1)[0]
-
- return dict(
- url=package_url,
- bucket=parts[0],
- name=parts[-2],
- safe_version=safe_version,
- version=versioning.deserialize(safe_version),
- key='/'.join(parts[1:])
- )
-
-
def parse_package_id(
env: Environment,
package_id: str,
- use_latest_version: bool = False
+ use_latest_version: bool = False,
+ include_prereleases: bool = False
) -> dict:
- """
+ """
Parses a package id into its constituent name and version information. If
the version is not specified as part of the identifier, a version will
be determined. If the package is already installed and the upgrade flag is
@@ -51,6 +33,9 @@ def parse_package_id(
name, or a package name and version (NAME:VERSION) combination.
:param use_latest_version:
Whether or not to use the latest version.
+ :param include_prereleases:
+ Whether or not to include prelease versions in the list of available
+ packages.
:return:
A dictionary containing installation information for the specified
package. The dictionary has the following fields:
@@ -60,24 +45,37 @@ def parse_package_id(
- key: S3 key for the remote pipper file where the specified
package name and version reside
"""
-
if package_id.startswith('https://'):
- return parse_package_url(package_id)
+ r = versioning.parse_package_url(package_id)
+ return dict(
+ url=r.url,
+ bucket=r.bucket,
+ name=r.package_name,
+ safe_version=r.safe_version,
+ version=r.version,
+ key=r.key
+ )
package_parts = package_id.split(':')
name = package_parts[0]
upgrade = use_latest_version or env.args.get('upgrade')
+ unstable = include_prereleases or env.args.get('unstable')
def possible_versions():
- yield (
- versioning.find_latest_match(env, *package_parts).version
- if len(package_parts) > 1 else
- None
- )
+ if len(package_parts) > 1:
+ yield versioning.find_latest_match(
+ env,
+ *package_parts,
+ include_prereleases=unstable
+ ).version
if not upgrade:
existing = wrapper.status(name)
yield existing.version if existing else None
- yield versioning.find_latest_match(env, name).version
+ yield versioning.find_latest_match(
+ env,
+ name,
+ include_prereleases=unstable
+ ).version
try:
version = next((v for v in possible_versions() if v is not None))
@@ -96,9 +94,7 @@ def parse_package_id(
def save(url: str, local_path: str) -> str:
- """
- """
-
+ """..."""
with closing(requests.get(url, stream=True)) as response:
if response.status_code != 200:
print((
@@ -185,24 +181,18 @@ def download_package(env: Environment, package_id: str) -> str:
def download_many(env: Environment, package_ids: list) -> dict:
- """
- """
-
+ """..."""
return {pid: download_package(env, pid) for pid in package_ids}
def download_from_configs(env: Environment, configs_path: str = None) -> dict:
- """
- """
-
+ """..."""
configs = environment.load_configs(configs_path)
return download_many(env, configs.get('dependencies') or [])
def run(env: Environment):
- """
- """
-
+ """..."""
package_ids = env.args.get('packages')
if not package_ids:
return download_from_configs(env, env.args.get('configs_path'))
diff --git a/pipper/environment.py b/pipper/environment.py
index 8d3b033..be24ccc 100644
--- a/pipper/environment.py
+++ b/pipper/environment.py
@@ -70,7 +70,6 @@ def load_repositories() -> dict:
def save_repositories(config_data: dict) -> dict:
""" """
-
directory = os.path.dirname(REPOSITORY_CONFIGS_PATH)
if not os.path.exists(directory):
os.makedirs(directory)
@@ -86,7 +85,6 @@ def load_repository(
allow_default: bool = False
) -> dict:
""" """
-
results = load_repositories()
try:
@@ -103,7 +101,6 @@ def load_repository(
def load_configs(configs_path: str = None):
""" """
-
path = os.path.realpath(
configs_path or
os.path.join(os.curdir, 'pipper.json')
@@ -121,11 +118,10 @@ def get_session(
repository: dict,
default_repository: dict
) -> Session:
- """
+ """
Creates an S3 session using AWS credentials, which can be specified in a
myriad of potential ways.
"""
-
aws_profile = args.get('aws_profile')
command_credentials = args.get('aws_credentials') or []
diff --git a/pipper/info.py b/pipper/info.py
index 90e81f2..0ccefd5 100644
--- a/pipper/info.py
+++ b/pipper/info.py
@@ -9,55 +9,21 @@ from pipper import versioning
from pipper.environment import Environment
-def list_remote_package_keys(
- env: Environment,
- package_name: str
-) -> typing.List[str]:
- """ """
- return [v.key for v in versioning.list_versions(env, package_name)]
-
-
-def list_remote_version_info(env: Environment, package_name: str) -> list:
- """ """
-
- def from_key(key: str) -> dict:
- filename = key.strip('/').split('/')[-1]
- safe_version = filename.rsplit('.', 1)[0]
- return dict(
- name=package_name,
- safe_version=safe_version,
- version=versioning.deserialize(safe_version)
- )
-
- versions = [
- from_key(key)
- for key in list_remote_package_keys(env, package_name)
- ]
-
- def compare_versions(a: dict, b: dict) -> int:
- return semver.compare(a['version'], b['version'])
-
- return sorted(versions, key=functools.cmp_to_key(compare_versions))
-
-
def get_package_metadata(
env: Environment,
package_name: str,
package_version: str
):
""" """
-
response = env.s3_client.head_object(
Bucket=env.bucket,
Key=versioning.make_s3_key(package_name, package_version)
)
-
return {key: value for key, value in response['Metadata'].items()}
def print_local_only(package_name: str):
""" """
-
print('[PACKAGE]: {}'.format(package_name))
local_data = wrapper.status(package_name)
@@ -73,14 +39,13 @@ def print_local_only(package_name: str):
def print_with_remote(env: Environment, package_name: str):
""" """
-
- remote_versions = list_remote_version_info(env, package_name)
+ remote_versions = versioning.list_versions(env, package_name)
try:
latest = get_package_metadata(
env,
package_name,
- remote_versions[-1]['version']
+ remote_versions[-1].version
)
except IndexError:
latest = dict(
@@ -125,13 +90,11 @@ def print_with_remote(env: Environment, package_name: str):
def run(env: Environment):
- """ """
-
+ """Executes an info command for the specified environment."""
local_only = env.args.get('local_only')
package_name = env.args.get('package_name')
if local_only:
return print_local_only(package_name)
-
return print_with_remote(env, package_name)
diff --git a/pipper/installer.py b/pipper/installer.py
index c722473..54efba2 100644
--- a/pipper/installer.py
+++ b/pipper/installer.py
@@ -13,9 +13,9 @@ from pipper.environment import Environment
def install_pipper_file(
local_source_path: str,
to_user: bool = False,
- target: str = None,
+ target_directory: str = None
) -> dict:
- """
+ """
Installs the specified local pipper bundle file.
:param local_source_path:
@@ -23,27 +23,32 @@ def install_pipper_file(
:param to_user:
Whether or not to install the package for the user or not. If not a
user package, the package will be installed globally.
- :param target:
+ :param target_directory:
Alternate installation location if specified.
:return
The package metadata from the pipper bundle
"""
-
directory = tempfile.mkdtemp(prefix='pipper-install-')
- extracted = downloader.extract_pipper_file(
- local_source_path,
- directory
- )
-
- wrapper.install_wheel(extracted['wheel_path'], to_user, target)
- shutil.rmtree(directory)
-
- return extracted['metadata']
+ try:
+ extracted = downloader.extract_pipper_file(
+ local_source_path,
+ directory
+ )
+ wrapper.install_wheel(
+ wheel_path=extracted['wheel_path'],
+ to_user=to_user,
+ target_directory=target_directory
+ )
+ return extracted['metadata']
+ except Exception:
+ raise
+ finally:
+ shutil.rmtree(directory)
def install_dependencies(env: Environment, dependencies: typing.List[str]):
- """
+ """
:param env:
Command environment in which this function is being executed
@@ -53,7 +58,6 @@ def install_dependencies(env: Environment, dependencies: typing.List[str]):
(NAME:VERSION) combination, but this version information is ignored for
dependencies.
"""
-
def do_install(package_name: str):
try:
data = downloader.parse_package_id(env, package_name)
@@ -77,7 +81,6 @@ def install(env: Environment, package_id: str):
Identifier for the package to be loaded. This can be either a package
name, or a package name and version (NAME:VERSION) combination.
"""
-
upgrade = env.args.get('upgrade')
data = downloader.parse_package_id(env, package_id)
is_url = 'url' in data
@@ -98,8 +101,8 @@ def install(env: Environment, package_id: str):
return
remote_version_exists = (
- is_url or
- s3.key_exists(env.s3_client, data['bucket'], data['key'])
+ is_url
+ or s3.key_exists(env.s3_client, data['bucket'], data['key'])
)
if not remote_version_exists:
@@ -125,9 +128,9 @@ def install(env: Environment, package_id: str):
try:
metadata = install_pipper_file(
- path,
+ local_source_path=path,
to_user=env.args.get('pip_user'),
- target=env.args.get('target'),
+ target_directory=env.args.get('target_directory')
)
except Exception:
raise
@@ -139,7 +142,7 @@ def install(env: Environment, package_id: str):
def install_many(env: Environment, package_ids: typing.List[str]):
- """
+ """
Installs a list of package identifiers, which can be either package names
or package name and version combinations.
@@ -149,13 +152,12 @@ def install_many(env: Environment, package_ids: typing.List[str]):
A list of package names or package name and version combinations to
install
"""
-
- for package_id in package_ids:
+ for package_id in (package_ids or []):
install(env, package_id)
def install_from_configs(env: Environment, configs_path: str = None):
- """
+ """
Installs pipper dependencies specified in a pipper configs file. If the
path to the configs file is not specified, the default path will be used
instead. The default location is a pipper.json file in the current
@@ -168,22 +170,30 @@ def install_from_configs(env: Environment, configs_path: str = None):
path will be used instead
"""
to_user = env.args.get('pip_user')
- target = env.args.get('target')
+ target_directory = env.args.get('target_directory')
configs = environment.load_configs(configs_path)
for package in configs.get('pypi', []):
print('\n=== PYPI {} ==='.format(package))
- wrapper.install_pypi(package, to_user=to_user, target=target)
+ wrapper.install_pypi(
+ package_name=package,
+ to_user=to_user,
+ target_directory=target_directory
+ )
for package in configs.get('conda', []):
print('\n=== CONDA {} ==='.format(package))
- wrapper.install_conda(package, to_user=to_user, target=target)
+ wrapper.install_conda(
+ package=package,
+ to_user=to_user,
+ target_directory=target_directory
+ )
return install_many(env, configs.get('dependencies'))
def run(env: Environment):
- """
+ """
Executes an installation command action under the given environmental
conditions. If a packages argument is specified and contains one or more
package IDs, they will be installed. If a path to a JSON pipper configs
@@ -194,7 +204,6 @@ def run(env: Environment):
:param env:
Command environment in which this function is being executed
"""
-
packages = env.args.get('packages')
if packages:
return install_many(env, packages)
diff --git a/pipper/parser.py b/pipper/parser.py
index b943ebb..f879b67 100644
--- a/pipper/parser.py
+++ b/pipper/parser.py
@@ -1,5 +1,5 @@
-import os
import argparse
+import os
from argparse import ArgumentParser
package_directory = os.path.dirname(os.path.realpath(__file__))
@@ -7,7 +7,6 @@ package_directory = os.path.dirname(os.path.realpath(__file__))
def required_length(minimum: int, maximum: int, optional: bool = False):
"""Returns a custom required length class"""
-
class RequiredLength(argparse.Action):
def __call__(self, parser, args, values, option_string=None):
is_allowed = (
@@ -15,7 +14,7 @@ def required_length(minimum: int, maximum: int, optional: bool = False):
(optional and not len(values))
)
- if is_allowed:
+ if is_allowed:
setattr(args, self.dest, values)
return
@@ -32,7 +31,6 @@ def required_length(minimum: int, maximum: int, optional: bool = False):
def read_file(*args) -> str:
""" """
-
path = os.path.join(package_directory, *args)
with open(path, 'r') as f:
return f.read()
@@ -40,7 +38,6 @@ def read_file(*args) -> str:
def populate_with_common(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.add_argument(
'-q', '--quiet',
dest='quiet',
@@ -54,7 +51,6 @@ def populate_with_common(parser: ArgumentParser) -> ArgumentParser:
def populate_with_credentials(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.add_argument(
'-r', '--repository',
dest='repository_name',
@@ -86,7 +82,6 @@ def populate_with_credentials(parser: ArgumentParser) -> ArgumentParser:
def populate_install(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.description = read_file('resources', 'install_action.txt')
parser.add_argument(
@@ -108,12 +103,21 @@ def populate_install(parser: ArgumentParser) -> ArgumentParser:
parser.add_argument(
'-t', '--target',
- dest='target',
- help='Install packages into the specified directory.',
+ dest='target_directory',
+ metavar='<dir>',
+ help=(
+ 'Install packages into <dir>. By default this '
+ 'will not replace existing files/folders in '
+ '<dir>. Use --upgrade to replace existing '
+ 'packages in <dir> with new versions. '
+ 'When applied to conda packages, this should '
+ 'be the root directory of the conda environment '
+ 'where the libraries will be installed.'
+ )
)
parser.add_argument(
- '-u', '--upgrade',
+ '-u', '--upgrade', '--update',
dest='upgrade',
action='store_true',
default=False,
@@ -125,7 +129,6 @@ def populate_install(parser: ArgumentParser) -> ArgumentParser:
def populate_bundle(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.description = read_file('resources', 'bundle_action.txt')
parser.add_argument('package_directory')
@@ -139,7 +142,6 @@ def populate_bundle(parser: ArgumentParser) -> ArgumentParser:
def populate_publish(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.description = read_file('resources', 'publish_action.txt')
parser.add_argument(
@@ -171,7 +173,6 @@ def populate_publish(parser: ArgumentParser) -> ArgumentParser:
def populate_info(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.description = read_file('resources', 'info_action.txt')
parser.add_argument(
@@ -192,7 +193,6 @@ def populate_info(parser: ArgumentParser) -> ArgumentParser:
def populate_download(parser: ArgumentParser) -> ArgumentParser:
""" """
-
parser.description = read_file('resources', 'download_action.txt')
parser.add_argument(
@@ -297,8 +297,14 @@ def populate_authorize(parser: ArgumentParser) -> ArgumentParser:
def parse(cli_args: list = None) -> dict:
- """ """
-
+ """
+ Parses command line arguments for consumption by the invoked action
+ and returns the parsed arguments as a dictionary.
+
+ :param cli_args:
+ Overrides the command line argument inputs. By default the sys args
+ will be used.
+ """
parser = ArgumentParser(
description=read_file('resources', 'command_description.txt'),
add_help=True
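The `required_length` factory cleaned up in this file follows a common argparse pattern: a closure returning a custom `Action` class that enforces a value-count range. A minimal standalone sketch of that pattern (the error message wording is illustrative, not the package's exact text):

```python
import argparse


def required_length(minimum: int, maximum: int, optional: bool = False):
    """Returns a custom argparse Action class enforcing a value-count range."""

    class RequiredLength(argparse.Action):
        def __call__(self, parser, args, values, option_string=None):
            is_allowed = (
                minimum <= len(values) <= maximum
                or (optional and not len(values))
            )
            if is_allowed:
                # Store the validated values on the parsed-arguments object.
                setattr(args, self.dest, values)
                return
            parser.error(
                'argument "{}" takes {} to {} values'
                .format(self.dest, minimum, maximum)
            )

    return RequiredLength


parser = argparse.ArgumentParser()
parser.add_argument('packages', nargs='*', action=required_length(1, 2))
```

Calling `parser.parse_args(['foo', 'bar'])` accepts the two values, while an out-of-range count routes through `parser.error`.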
diff --git a/pipper/resources/command_description.txt b/pipper/resources/command_description.txt
index 6547d5b..6da1cf3 100644
--- a/pipper/resources/command_description.txt
+++ b/pipper/resources/command_description.txt
@@ -1,3 +1,4 @@
Pipper is a package manager built on Python's Pip package installer that
provides a lightweight package management system for custom packages that
-you do not want distributed through pypi.
\ No newline at end of file
+you do not want distributed through pypi that requires only an S3 bucket
+for hosting.
diff --git a/pipper/s3.py b/pipper/s3.py
index 9cafacd..4f83d41 100644
--- a/pipper/s3.py
+++ b/pipper/s3.py
@@ -1,14 +1,13 @@
-import os
-
import typing
+
from boto3.session import Session
+from botocore.client import BaseClient
def session_from_credentials_list(
credentials: list
) -> typing.Union[Session, None]:
""" """
-
is_valid = (
credentials and
len(credentials) > 1 and
@@ -42,7 +41,6 @@ def session_from_profile_name(
def key_exists(s3_client, bucket: str, key: str) -> bool:
""" """
-
try:
response = s3_client.list_objects(
Bucket=bucket,
@@ -52,3 +50,18 @@ def key_exists(s3_client, bucket: str, key: str) -> bool:
return len(response['Contents']) > 0
except Exception:
return False
+
+
+def list_objects(
+ execution_identifier: str,
+ s3_client: BaseClient,
+ bucket: str,
+ prefix: str,
+ **kwargs
+) -> dict:
+ """..."""
+ return s3_client.list_objects_v2(
+ Bucket=bucket,
+ Prefix=prefix,
+ **kwargs
+ )
diff --git a/pipper/settings.json b/pipper/settings.json
index ff99585..156772d 100644
--- a/pipper/settings.json
+++ b/pipper/settings.json
@@ -1,3 +1,3 @@
{
- "version": "0.4.0"
+ "version": "0.5.0"
}
diff --git a/pipper/versioning.py b/pipper/versioning.py
deleted file mode 100644
index 027436a..0000000
--- a/pipper/versioning.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import typing
-import pkg_resources
-
-import semver
-
-from pipper.environment import Environment
-
-
-class RemoteVersion(object):
- """
- Data structure for storing information about remote data sources
- """
-
- def __init__(self, bucket: str, key: str):
- """_ doc..."""
- self._key = key
- self._bucket = bucket
-
- @property
- def key(self) -> str:
- return self._key
-
- @property
- def bucket(self) -> str:
- return self._bucket
-
- @property
- def package_name(self) -> str:
- return self._key.strip('/').split('/')[1]
-
- @property
- def filename(self) -> str:
- return self.key.rsplit('/', 1)[-1]
-
- @property
- def version(self) -> str:
- return deserialize(self.key.rsplit('/', 1)[-1].rsplit('.', 1)[0])
-
- @property
- def safe_version(self) -> str:
- return self.key.rsplit('/', 1)[-1].rsplit('.', 1)[0]
-
- def __lt__(self, other):
- return semver.compare(self.version, other.version) < 0
-
- def __le__(self, other):
- return semver.compare(self.version, other.version) != 1
-
- def __gt__(self, other):
- return semver.compare(self.version, other.version) > 0
-
- def __ge__(self, other):
- return semver.compare(self.version, other.version) != -1
-
- def __eq__(self, other):
- return semver.compare(self.version, other.version) == 0
-
- def __repr__(self):
- return '<{} {}:{}>'.format(
- self.__class__.__name__,
- self.package_name,
- self.version
- )
-
-
-def serialize(version: str) -> str:
- """ """
- try:
- info = semver.parse_version_info(version)
- except ValueError:
- raise ValueError('Invalid semantic version "{}"'.format(version))
-
- pre = info.prerelease.replace('.', '_') if info.prerelease else None
- build = info.build.replace('.', '_') if info.build else None
-
- return 'v{}-{}-{}{}{}'.format(
- info.major,
- info.minor,
- info.patch,
- '-p-{}'.format(pre) if pre else '',
- '-b-{}'.format(build) if build else ''
- )
-
-
-def deserialize(version: str) -> str:
- """ """
- return (
- version
- .lstrip('v')
- .replace('-', '.', 2)
- .replace('-p-', '-')
- .replace('-b-', '+')
- )
-
-
-def make_s3_key(package_name: str, package_version: str) -> str:
- """ """
-
- safe_version = (
- serialize(package_version)
- if not package_version.startswith('v') else
- package_version
- )
-
- return 'pipper/{}/{}.pipper'.format(package_name, safe_version)
-
-
-def parse_s3_key(key: str) -> dict:
- """ """
-
- parts = key.strip('/').split('/')
- safe_version = parts[2].rsplit('.', 1)[0]
-
- return dict(
- name=parts[1],
- safe_version=safe_version,
- version=deserialize(safe_version)
- )
-
-
-def list_versions(
- environment: Environment,
- package_name: str,
- version_prefix: str = None
-) -> typing.List[RemoteVersion]:
- """..."""
- client = environment.aws_session.client('s3')
- prefix = (
- (version_prefix or '')
- .lstrip('v')
- .replace('.', '-')
- .split('*')[0]
- )
- key_prefix = 'pipper/{}/v{}'.format(package_name, prefix)
-
- responses = []
- while not responses or responses[-1].get('NextContinuationToken'):
- continuation_kwargs = (
- {'ContinuationToken': responses[-1].get('NextContinuationToken')}
- if responses else
- {}
- )
- responses.append(client.list_objects_v2(
- Bucket=environment.bucket,
- Prefix=key_prefix,
- **continuation_kwargs
- ))
-
- results = [
- RemoteVersion(key=entry['Key'], bucket=environment.bucket)
- for response in responses
- for entry in response.get('Contents')
- if entry['Key'].endswith('.pipper')
- ]
-
- return sorted(results)
-
-
-def compare_constraint(version: str, constraint: str) -> int:
- """Returns comparison between versions"""
- if version == constraint:
- return 0
-
- version_parts = version.split('.')
- constraint_parts = constraint.split('.')
-
- def compare_part(a: str, b: str) -> int:
- if a == b or a == '*' or b == '*':
- return 0
- return -1 if int(a) < int(b) else 1
-
- comparisons = [
- compare_part(v, c)
- for v, c in zip(version_parts, constraint_parts)
- ]
-
- return next((c for c in comparisons if c != 0), 0)
-
-
-def find_latest_match(
- environment: Environment,
- package_name: str,
- version_constraint: str = None
-) -> typing.Union[RemoteVersion, None]:
- """..."""
-
- available = list_versions(environment, package_name)
- available.reverse()
-
- if not version_constraint:
- return available[0]
-
- comparison = version_constraint.strip('=<>')
-
- if version_constraint.startswith('<='):
- choices = (
- a for a in available
- if compare_constraint(a.version, comparison) != 1
- )
- elif version_constraint.startswith('<'):
- choices = (
- a for a in available
- if compare_constraint(a.version, comparison) < 0
- )
- elif version_constraint.startswith('>='):
- choices = (
- a for a in available
- if compare_constraint(a.version, comparison) != -1
- )
- elif version_constraint.startswith('>'):
- choices = (
- a for a in available
- if compare_constraint(a.version, comparison) > 0
- )
- else:
- choices = (
- a for a in available
- if compare_constraint(a.version, comparison) == 0
- )
-
- return next(choices, None)
diff --git a/pipper/versioning/__init__.py b/pipper/versioning/__init__.py
new file mode 100644
index 0000000..7536078
--- /dev/null
+++ b/pipper/versioning/__init__.py
@@ -0,0 +1,224 @@
+import typing
+from urllib.parse import urlparse
+
+from pipper import s3
+from pipper.environment import Environment
+from pipper.versioning.definitions import RemoteVersion
+from pipper.versioning.serde import deserialize
+from pipper.versioning.serde import deserialize_prefix
+from pipper.versioning.serde import explode
+from pipper.versioning.serde import serialize
+from pipper.versioning.serde import serialize_prefix
+
+
+def to_remote_version(
+ package_name: str,
+ package_version: str,
+ bucket: str
+) -> RemoteVersion:
+ """
+ Converts the constituent properties of a pipper remote file into a
+ RemoteVersion object.
+ """
+ return RemoteVersion(
+ bucket=bucket,
+ key=make_s3_key(package_name, package_version)
+ )
+
+
+def parse_package_url(package_url: str) -> RemoteVersion:
+ """
+ Parses a standard S3 URL of the format:
+
+ `https://s3.amazonaws.com/bucket-name/pipper/package-name/v0-0-18.pipper`
+
+ into a RemoteVersion object.
+ """
+ url_data = urlparse(package_url)
+ parts = url_data.path.strip('/').split('/', 1)
+ return RemoteVersion(bucket=parts[0], key=parts[1], url=package_url)
+
+
+def make_s3_key(package_name: str, package_version: str) -> str:
+ """
+ Converts a package name and version into a fully-qualified S3 key to the
+ location where the file resides in the hosting S3 bucket. The package
+ version must be a complete semantic version but can be serialized or not.
+ """
+ safe_version = (
+ serialize(package_version)
+ if not package_version.startswith('v') else
+ package_version
+ )
+ return 'pipper/{}/{}.pipper'.format(package_name, safe_version)
+
+
+def list_versions(
+ environment: Environment,
+ package_name: str,
+ version_prefix: str = None,
+ include_prereleases: bool = False,
+ reverse: bool = False
+) -> typing.List[RemoteVersion]:
+ """
+ Lists the available versions of the specified package by querying the
+ remote S3 storage and returns those as keys. The results are sorted in
+ order of increasing version unless `reverse` is True in which case the
+ returned list is sorted from highest version to lowest one.
+
+ By default, only stable releases are returned, but pre-releases can be
+ included as well if the `include_prereleases` argument is set to True.
+
+ :param environment:
+ Context object for the currently running command invocation.
+ :param package_name:
+ Name of the pipper package to list versions of.
+ :param version_prefix:
+ A constraining version prefix, which may include wildcard characters.
+ Constraints are hierarchical, which means satisfying the highest
+ level constraint automatically satisfies the subsequent ones.
+ Therefore, a constraint like `1.*.4` would ignore the `4` patch value.
+ :param reverse:
+ Whether or not to reverse the order of the returned results.
+ :param include_prereleases:
+ Whether or not to include pre-release versions in the results.
+ """
+ prefix = serialize_prefix(version_prefix or '').split('*')[0]
+ key_prefix = 'pipper/{}/v{}'.format(package_name, prefix)
+
+ responses = []
+ while not responses or responses[-1].get('NextContinuationToken'):
+ continuation_kwargs = (
+ {'ContinuationToken': responses[-1].get('NextContinuationToken')}
+ if responses else
+ {}
+ )
+ responses.append(s3.list_objects(
+ execution_identifier='list_versions',
+ s3_client=environment.s3_client,
+ bucket=environment.bucket,
+ prefix=key_prefix,
+ **continuation_kwargs
+ ))
+
+ results = [
+ RemoteVersion(key=entry['Key'], bucket=environment.bucket)
+ for response in responses
+ for entry in response.get('Contents')
+ if entry['Key'].endswith('.pipper')
+ ]
+
+ return [
+ r for r in sorted(results, reverse=reverse)
+ if not r.is_prerelease or include_prereleases
+ ]
+
+
+def compare_constraint(version: str, constraint: str) -> int:
+ """
+ Returns an integer representing the sortable comparison between two
+ versions using standard sorting values:
+ -1 (version is less than constraint)
+ 0 (version is equal to constraint)
+ 1 (version is greater than constraint)
+ The use-case is to compare a version against a version
+ constraint to determine how the version satisfies the constraint.
+ """
+ if version == constraint:
+ return 0
+
+ version_parts = explode(version)
+ constraint_parts = explode(constraint)
+
+ def compare_part(v: str, c: str) -> int:
+ is_equal = (
+ v == c # direct match
+ or '*' in [v, c] # one is a wildcard
+ or c == '' # no constraint specified
+ )
+ if is_equal:
+ return 0
+
+ if v == '':
+ return -1
+
+ a = ''.join([x.zfill(32) for x in v.split('.')])
+ b = ''.join([x.zfill(32) for x in c.split('.')])
+ items = sorted([a, b])
+ return -1 if items.index(a) == 0 else 1
+
+ comparisons = (
+ compare_part(v, c)
+ for v, c in zip(version_parts, constraint_parts)
+ )
+
+ return next((c for c in comparisons if c != 0), 0)
+
+
+def find_latest_match(
+ environment: Environment,
+ package_name: str,
+ version_constraint: str = None,
+ include_prereleases: bool = False
+) -> typing.Union[RemoteVersion, None]:
+ """
+ Searches through available remote versions of the specified package and
+ returns the highest version that satisfies the specified version
+ constraint. If no constraint is specified, the highest version available
+ is returned. If no match is found, a `None` value is returned instead.
+
+ :param environment:
+ Context object for the currently running command invocation.
+ :param package_name:
+ Name of the pipper package to list versions of.
+ :param version_constraint:
+ A constraining version or partial version, which may include wildcard
+ characters. Constraints are hierarchical, which means satisfying the
+ highest level constraint automatically satisfies the subsequent ones.
+ Therefore, a constraint like `=1.*.4` would ignore the `4` patch value.
+ Constraints should be prefixed by an equality such as `<`, `<=`, `=`,
+ `>=` or `>`.
+ :param include_prereleases:
+ Whether or not to include pre-release versions when looking for a
+ match.
+ """
+ available = list_versions(
+ environment=environment,
+ package_name=package_name,
+ reverse=True,
+ include_prereleases=include_prereleases
+ )
+
+ if not version_constraint:
+ return available[0]
+
+ comparison = version_constraint.strip('=<>')
+
+ if version_constraint.startswith('<='):
+ choices = (
+ a for a in available
+ if compare_constraint(a.version, comparison) != 1
+ )
+ elif version_constraint.startswith('<'):
+ choices = (
+ a for a in available
+ if compare_constraint(a.version, comparison) < 0
+ )
+ elif version_constraint.startswith('>='):
+ choices = (
+ a for a in available
+ if compare_constraint(a.version, comparison) != -1
+ )
+ elif version_constraint.startswith('>'):
+ choices = (
+ a for a in available
+ if compare_constraint(a.version, comparison) > 0
+ )
+ else:
+ choices = (
+ a for a in available
+ if compare_constraint(a.version, comparison) == 0
+ )
+
+ return next(choices, None)
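The numeric-versus-lexicographic pitfall that `compare_constraint` has to avoid (as strings, `'10' < '2'`) is handled by zero-padding each dotted element before comparison. A sketch of that inner comparison, mirroring the `compare_part` helper above:

```python
def compare_part(v: str, c: str) -> int:
    """Compares one exploded version section against a constraint section."""
    if v == c or '*' in (v, c) or c == '':
        return 0  # Direct match, wildcard, or unconstrained section.
    if v == '':
        return -1
    # Zero-pad dotted elements so lexicographic order matches numeric order.
    a = ''.join(x.zfill(32) for x in v.split('.'))
    b = ''.join(x.zfill(32) for x in c.split('.'))
    return -1 if sorted([a, b]).index(a) == 0 else 1
```

With the padding, `compare_part('2', '10')` correctly reports `-1` where a plain string comparison would report the opposite.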
diff --git a/pipper/versioning/definitions.py b/pipper/versioning/definitions.py
new file mode 100644
index 0000000..9957c15
--- /dev/null
+++ b/pipper/versioning/definitions.py
@@ -0,0 +1,80 @@
+import semver
+
+from pipper.versioning import serde
+
+
+class RemoteVersion(object):
+ """
+ Data structure for storing information about remote data sources.
+ """
+
+ def __init__(self, bucket: str, key: str, url: str = None):
+ """_ doc..."""
+ self._key = key
+ self._bucket = bucket
+ self._url = url
+
+ @property
+ def key(self) -> str:
+ return self._key
+
+ @property
+ def bucket(self) -> str:
+ return self._bucket
+
+ @property
+ def package_name(self) -> str:
+ return self._key.strip('/').split('/')[1]
+
+ @property
+ def filename(self) -> str:
+ return self.key.rsplit('/', 1)[-1]
+
+ @property
+ def version(self) -> str:
+ return serde.deserialize(self.key.rsplit('/', 1)[-1].rsplit('.', 1)[0])
+
+ @property
+ def is_url_based(self) -> bool:
+ return bool(self._url is not None)
+
+ @property
+ def safe_version(self) -> str:
+ return self.key.rsplit('/', 1)[-1].rsplit('.', 1)[0]
+
+ @property
+ def url(self) -> str:
+ standard_url = '/'.join([
+ 'https://s3.amazonaws.com',
+ self.bucket,
+ 'pipper',
+ self.package_name,
+ '{}.pipper'.format(self.safe_version)
+ ])
+ return self._url or standard_url
+
+ @property
+ def is_prerelease(self) -> bool:
+ return self.version.find('-') != -1
+
+ def __lt__(self, other):
+ return semver.compare(self.version, other.version) < 0
+
+ def __le__(self, other):
+ return semver.compare(self.version, other.version) != 1
+
+ def __gt__(self, other):
+ return semver.compare(self.version, other.version) > 0
+
+ def __ge__(self, other):
+ return semver.compare(self.version, other.version) != -1
+
+ def __eq__(self, other):
+ return semver.compare(self.version, other.version) == 0
+
+ def __repr__(self):
+ return '<{} {}:{}>'.format(
+ self.__class__.__name__,
+ self.package_name,
+ self.version
+ )
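The key layout `RemoteVersion` parses is `pipper/<package>/<safe-version>.pipper`. A stripped-down sketch of the derived properties (the class name `RemoteFile` is illustrative; the real class additionally orders instances with `semver`):

```python
class RemoteFile:
    """Minimal sketch of the key-parsing properties on RemoteVersion."""

    def __init__(self, bucket: str, key: str):
        self.bucket = bucket
        self.key = key

    @property
    def package_name(self) -> str:
        # Keys look like pipper/<package>/<safe-version>.pipper
        return self.key.strip('/').split('/')[1]

    @property
    def filename(self) -> str:
        return self.key.rsplit('/', 1)[-1]

    @property
    def safe_version(self) -> str:
        # Strip the trailing .pipper extension from the filename.
        return self.filename.rsplit('.', 1)[0]

    @property
    def url(self) -> str:
        return 'https://s3.amazonaws.com/{}/{}'.format(self.bucket, self.key)
```

Every property is a pure string manipulation of the stored key, so the object needs no network access until the URL is actually fetched.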
diff --git a/pipper/versioning/serde.py b/pipper/versioning/serde.py
new file mode 100644
index 0000000..d1a53a5
--- /dev/null
+++ b/pipper/versioning/serde.py
@@ -0,0 +1,115 @@
+import semver
+
+
+def explode(version_prefix: str) -> tuple:
+ """
+ Breaks apart a semantic version or partial semantic version string into
+ its constituent elements and returns them as a tuple of strings. Any
+ missing elements will be returned as empty strings.
+
+ :param version_prefix:
+ A semantic version or part of a semantic version, which can include
+ wildcard characters.
+ """
+ sections = []
+ remainder = version_prefix.rstrip('.')
+ for separator in ('+', '-'):
+ parts = remainder.split(separator, 1)
+ remainder = parts[0]
+ section = parts[1] if len(parts) == 2 else ''
+ sections.insert(0, section)
+
+ parts = remainder.split('.')
+ parts.extend(['', ''])
+ sections = parts[:3] + sections
+
+ return tuple(sections)
+
+
+def serialize(version: str) -> str:
+ """
+ Converts the specified semantic version into a URL/filesystem safe
+ version. If the version argument is not a valid semantic version a
+ ValueError will be raised.
+ """
+ try:
+ semver.parse_version_info(version)
+ except ValueError:
+ raise ValueError('Invalid semantic version "{}"'.format(version))
+
+ return serialize_prefix(version)
+
+
+def serialize_prefix(version_prefix: str) -> str:
+ """
+ Serializes the specified prefix into a URL/filesystem safe version that
+ can be used as a filename to store the versioned bundle.
+
+ :param version_prefix:
+ A partial or complete semantic version to be converted into its
+ URL/filesystem equivalent.
+ """
+ if version_prefix.startswith('v'):
+ return version_prefix
+
+ sections = [part.replace('.', '_') for part in explode(version_prefix)]
+ prefix = '-'.join([section for section in sections[:3] if section])
+ if sections[3]:
+ prefix += '__pre_{}'.format(sections[3])
+ if sections[4]:
+ prefix += '__build_{}'.format(sections[4])
+
+ return 'v{}'.format(prefix) if prefix else ''
+
+
+def deserialize_prefix(safe_version_prefix: str) -> str:
+ """
+ Deserializes the specified prefix from a URL/filesystem safe version into
+ its standard semantic version equivalent.
+
+ :param safe_version_prefix:
+ A partial or complete URL/filesystem safe version prefix to convert
+ into a standard semantic version prefix.
+ """
+ if not safe_version_prefix.startswith('v'):
+ return safe_version_prefix
+
+ searches = [
+ ('__build_', 'split'),
+ ('__pre_', 'split'),
+ ('-', 'rsplit'),
+ ('-', 'rsplit')
+ ]
+
+ sections = []
+ remainder = safe_version_prefix.strip('v').rstrip('_')
+ for separator, operator in searches:
+ parts = getattr(remainder, operator)(separator, 1)
+ remainder = parts[0]
+ section = parts[1] if len(parts) == 2 else ''
+ sections.insert(0, section.replace('_', '.'))
+ sections.insert(0, remainder)
+
+ prefix = '.'.join([section for section in sections[:3] if section])
+ if sections[3]:
+ prefix += '-{}'.format(sections[3])
+ if sections[4]:
+ prefix += '+{}'.format(sections[4])
+
+ return prefix
+
+
+def deserialize(safe_version: str) -> str:
+ """
+ Converts the specified URL/filesystem safe version into a standard semantic
+ version. If the converted output is not a valid semantic version a
+ ValueError will be raised.
+ """
+ result = deserialize_prefix(safe_version)
+
+ try:
+ semver.parse_version_info(result)
+ except ValueError:
+ raise ValueError('Invalid semantic version "{}"'.format(result))
+
+ return result
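Without pulling in `semver`, the explode/serialize pipeline above can be exercised in isolation. This sketch mirrors `explode` and `serialize_prefix` so the URL/filesystem-safe version format is visible end to end:

```python
def explode(version_prefix: str) -> tuple:
    """Splits a (partial) semver string into (major, minor, patch, pre, build)."""
    sections = []
    remainder = version_prefix.rstrip('.')
    for separator in ('+', '-'):
        parts = remainder.split(separator, 1)
        remainder = parts[0]
        # Missing pre-release/build sections become empty strings.
        sections.insert(0, parts[1] if len(parts) == 2 else '')
    parts = remainder.split('.')
    parts.extend(['', ''])
    return tuple(parts[:3] + sections)


def serialize_prefix(version_prefix: str) -> str:
    """Converts a version prefix into its URL/filesystem-safe form."""
    if version_prefix.startswith('v'):
        return version_prefix
    sections = [part.replace('.', '_') for part in explode(version_prefix)]
    prefix = '-'.join(section for section in sections[:3] if section)
    if sections[3]:
        prefix += '__pre_{}'.format(sections[3])
    if sections[4]:
        prefix += '__build_{}'.format(sections[4])
    return 'v{}'.format(prefix) if prefix else ''
```

So `0.0.1` serializes to `v0-0-1`, and a pre-release like `0.0.1-alpha.1` becomes `v0-0-1__pre_alpha_1`, matching the expectations in the accompanying `test_explode` cases.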
diff --git a/pipper/wrapper.py b/pipper/wrapper.py
index b53cd93..439108a 100644
--- a/pipper/wrapper.py
+++ b/pipper/wrapper.py
@@ -1,3 +1,4 @@
+import os
import subprocess
import sys
import typing
@@ -25,6 +26,12 @@ def update_required(package_name: str, install_version: str) -> bool:
return current != target
+def clean_path(path: str) -> str:
+ """Returns a fully-qualified absolute path for the source"""
+ path = os.path.expanduser(path) if path.startswith('~') else path
+ return os.path.realpath(path)
+
+
def status(package_name: str):
""" """
try:
@@ -38,7 +45,7 @@ def status(package_name: str):
def install_wheel(
wheel_path: str,
to_user: bool = False,
- target: str = None,
+ target_directory: str = None
):
"""
Installs the specified wheel using the pip associated with the
@@ -48,11 +55,14 @@ def install_wheel(
sys.executable,
'-m', 'pip',
'install', wheel_path,
- '--user' if to_user else None,
- '--target={}'.format(target) if target else None,
]
- cmd = [c for c in cmd if c is not None]
- print('COMMAND:', ' '.join(cmd))
+ cmd += ['--user'] if to_user else []
+ cmd += (
+ ['--target={}'.format(clean_path(target_directory))]
+ if target_directory else
+ []
+ )
+ print('[COMMAND]:\n', ' '.join(cmd).replace(' --', '\n --'))
result = subprocess.run(cmd)
result.check_returncode()
@@ -61,7 +71,7 @@ def install_wheel(
def install_pypi(
package_name: str,
to_user: bool = False,
- target: str = None,
+ target_directory: str = None
):
"""
Installs the specified package from pypi using pip.
@@ -70,11 +80,14 @@ def install_pypi(
sys.executable,
'-m', 'pip',
'install', package_name,
- '--user' if to_user else None,
- '--target={}'.format(target) if target else None,
]
- cmd = [c for c in cmd if c is not None]
- print('COMMAND:', ' '.join(cmd))
+ cmd += ['--user'] if to_user else []
+ cmd += (
+ ['--target={}'.format(clean_path(target_directory))]
+ if target_directory else
+ []
+ )
+ print('[COMMAND]:\n', ' '.join(cmd).replace(' --', '\n --'))
result = subprocess.run(cmd)
result.check_returncode()
@@ -83,7 +96,7 @@ def install_pypi(
def install_conda(
package: typing.Union[str, dict],
to_user: bool = False,
- target: str = None,
+ target_directory: str = None
):
"""
Installs the specified package using conda.
@@ -102,8 +115,12 @@ def install_conda(
]
cmd += ['--channel', channel] if channel else []
cmd += ['--user'] if to_user else []
- cmd += ['--target', target] if target else []
- print('COMMAND:', ' '.join(cmd))
+ cmd += (
+ ['--prefix={}'.format(clean_path(target_directory))]
+ if target_directory else
+ []
+ )
+ print('[COMMAND]:\n', ' '.join(cmd).replace(' --', '\n --'))
result = subprocess.run(cmd)
result.check_returncode()
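After this change, `install_wheel`, `install_pypi`, and `install_conda` share the same argument-assembly shape: start with a base command list, then conditionally append `--user` and a cleaned target path. A side-effect-free sketch of that shape (the function name `build_install_command` is illustrative; the package runs the list via `subprocess.run` instead of returning it):

```python
import os
import sys


def clean_path(path: str) -> str:
    """Expands a leading ~ and resolves the path to an absolute location."""
    path = os.path.expanduser(path) if path.startswith('~') else path
    return os.path.realpath(path)


def build_install_command(
    package: str,
    to_user: bool = False,
    target_directory: str = None
) -> list:
    """Assembles the pip install argument list without executing it."""
    cmd = [sys.executable, '-m', 'pip', 'install', package]
    cmd += ['--user'] if to_user else []
    cmd += (
        ['--target={}'.format(clean_path(target_directory))]
        if target_directory else
        []
    )
    return cmd
```

Keeping the optional flags as list concatenations avoids the earlier pattern of inserting `None` placeholders and filtering them out afterwards.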
diff --git a/requirements.txt b/requirements.txt
index a1054cd..f39df9e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,8 @@
+pip
requests
wheel
setuptools
semver
boto3
+pytest
+pytest-runner
diff --git a/setup.py b/setup.py
index 20114ef..94b74ec 100644
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,6 @@ def populate_extra_files():
"""
Creates a list of non-python data files to include in package distribution
"""
-
glob_path = os.path.join(MY_DIRECTORY, PACKAGE_NAME, '**', '*.txt')
return (
[SETTINGS_PATH] +
Enhanced Versioning

Enhance versioning to better support pre-release and build options in semantic versioning.

Repository: sernst/pipper

diff --git a/pipper/test/__init__.py b/pipper/test/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/pipper/test/__init__.py
@@ -0,0 +1,1 @@
+
diff --git a/pipper/test/commands/__init__.py b/pipper/test/commands/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/pipper/test/commands/test_install.py b/pipper/test/commands/test_install.py
new file mode 100644
index 0000000..b106549
--- /dev/null
+++ b/pipper/test/commands/test_install.py
@@ -0,0 +1,15 @@
+from unittest.mock import MagicMock
+from unittest.mock import patch
+
+from pipper import command
+from pipper.test import utils
+
+
+@patch('pipper.installer.install')
+@utils.PatchSession()
+def test_install(
+ boto_mocks: utils.BotoMocks,
+ install: MagicMock
+):
+ """..."""
+ command.run(['install', 'foo'])
diff --git a/pipper/test/test_authorizer.py b/pipper/test/test_authorizer.py
new file mode 100644
index 0000000..2f8e2a3
--- /dev/null
+++ b/pipper/test/test_authorizer.py
@@ -0,0 +1,19 @@
+import pytest
+
+
+scenarios = [
+ ('72s', 72),
+ ('85sec', 85),
+ ('92Seconds', 92),
+ ('1m', 60),
+ ('5min', 300),
+ ('2MINUTES', 120),
+ ('1hr', 3600),
+ ('10hrs', 36000),
+ ('100hours', 360000)
+]
+
+
+@pytest.mark.parametrize('age,total_seconds', scenarios)
+def test_to_time_delta(age: str, total_seconds: int):
+ """Should convert the age to the expected number of seconds"""
\ No newline at end of file
diff --git a/pipper/test/test_commands.py b/pipper/test/test_commands.py
new file mode 100644
index 0000000..6a631e4
--- /dev/null
+++ b/pipper/test/test_commands.py
@@ -0,0 +1,25 @@
+from unittest.mock import MagicMock
+from unittest.mock import patch
+
+from pipper import command
+from pipper.test import utils
+
+
+@patch('pipper.s3.list_objects')
+@utils.PatchSession()
+def test_info(boto_mocks: utils.BotoMocks, list_objects: MagicMock):
+ """..."""
+ list_objects.side_effect = utils.affect_by_identifier(
+ list_versions=utils.make_list_objects_response(contents=[])
+ )
+ command.run(['info', 'fake-package'])
+
+
+@patch('pipper.s3.list_objects')
+@utils.PatchSession()
+def test_info_local(boto_mocks: utils.BotoMocks, list_objects: MagicMock):
+ """..."""
+ list_objects.side_effect = utils.affect_by_identifier(
+ list_versions=utils.make_list_objects_response(contents=[])
+ )
+ command.run(['info', 'fake-package', '--local'])
diff --git a/pipper/test/utils.py b/pipper/test/utils.py
new file mode 100644
index 0000000..2276c71
--- /dev/null
+++ b/pipper/test/utils.py
@@ -0,0 +1,70 @@
+import typing
+import functools
+from unittest.mock import MagicMock
+from unittest.mock import patch
+
+
+class BotoMocks(typing.NamedTuple):
+ """Data structure for boto3 mocked objects"""
+
+ session: MagicMock
+ s3_client: MagicMock
+
+
+class PatchSession:
+ """A Patch function for the boto3 session"""
+
+ def __init__(self, *args, **kwargs):
+ """Create decorator with arguments"""
+ pass
+
+ def __call__(self, test_function):
+ """
+ Decorates the specified test function by returning a new function
+ that wraps it with patching in place for mocked phalanx functions.
+ """
+ @patch('pipper.environment.get_session')
+ def patch_session(get_session: MagicMock, *args, **kwargs):
+ boto_mocks = _create_boto_mocks()
+ get_session.return_value = boto_mocks.session
+ test_function(boto_mocks, *args, **kwargs)
+
+ return patch_session
+
+
+def affect_by_identifier(**identifiers):
+ """..."""
+ def side_effect(execution_identifier: str, *args, **kwargs):
+ return identifiers.get(execution_identifier)
+ return side_effect
+
+
+def make_list_objects_response(
+ contents: list = None,
+ next_continuation_token: str = None
+) -> dict:
+ """..."""
+ return dict(
+ Contents=contents or [],
+ NextContinuationToken=next_continuation_token
+ )
+
+
+def _get_client(
+ mocked_clients: typing.Dict[str, MagicMock],
+ identifier: str,
+ **kwargs
+) -> MagicMock:
+ """..."""
+ return mocked_clients.get(identifier) or MagicMock()
+
+
+def _create_boto_mocks() -> BotoMocks:
+ """..."""
+ s3_client = MagicMock()
+ session = MagicMock()
+ session.client.side_effect = functools.partial(
+ _get_client,
+ {'s3': s3_client}
+ )
+ return BotoMocks(session=session, s3_client=s3_client) # noqa
diff --git a/pipper/test/versioning/__init__.py b/pipper/test/versioning/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/pipper/test/versioning/__init__.py
@@ -0,0 +1,1 @@
+
diff --git a/pipper/test/versioning/test_compare_constraint.py b/pipper/test/versioning/test_compare_constraint.py
new file mode 100644
index 0000000..6f1dc3d
--- /dev/null
+++ b/pipper/test/versioning/test_compare_constraint.py
@@ -0,0 +1,25 @@
+from pipper import versioning
+import pytest
+
+comparisons = [
+ ('0.0.1', '0.0.1', 0),
+ ('0.0.1', '0.0.2', -1),
+ ('0.0.1', '0.0.1-alpha.1', -1),
+ ('0.0.1', '0.0.1-alpha.1+build.2', -1),
+ ('0.0.1', '0.0.*', 0)
+]
+
+
[email protected]('version,constraint,expected', comparisons)
+def test_compare_constraint(version, constraint, expected):
+ """Should correctly compare between two versions"""
+ result = versioning.compare_constraint(version, constraint)
+ assert expected == result, """
+ Expect comparison of "{version}" with "{constraint}" to produce
+ a {expected} result instead of a {result} result.
+ """.format(
+ version=version,
+ constraint=constraint,
+ expected=expected,
+ result=result
+ )
diff --git a/pipper/test/versioning/test_explode.py b/pipper/test/versioning/test_explode.py
new file mode 100644
index 0000000..ec9a9e8
--- /dev/null
+++ b/pipper/test/versioning/test_explode.py
@@ -0,0 +1,20 @@
+import pytest
+
+from pipper.versioning import serde
+
+
+checks = [
+ ('0.0.1', ('0', '0', '1', '', '')),
+ ('0.1.12+build.1', ('0', '1', '12', '', 'build.1')),
+ ('0.1.*-beta.4', ('0', '1', '*', 'beta.4', '')),
+ ('1.*.*-beta.4+build.12', ('1', '*', '*', 'beta.4', 'build.12')),
+ ('1.12-beta.4', ('1', '12', '', 'beta.4', ''))
+]
+
+
[email protected]('version,expected', checks)
+def test_explode(version, expected):
+    """Should explode a version string into its component parts."""
+ observed = serde.explode(version)
+ assert expected == observed
+
diff --git a/pipper/test/versioning/test_find_latest_match.py b/pipper/test/versioning/test_find_latest_match.py
new file mode 100644
index 0000000..21e85c2
--- /dev/null
+++ b/pipper/test/versioning/test_find_latest_match.py
@@ -0,0 +1,65 @@
+from unittest.mock import MagicMock
+from unittest.mock import patch
+
+import pytest
+
+from pipper import versioning
+
+listed_versions = list(reversed([
+ versioning.to_remote_version('test', '0.0.1', 'FAKE'),
+ versioning.to_remote_version('test', '0.0.1-alpha.1', 'FAKE'),
+ versioning.to_remote_version('test', '0.0.1-alpha.2', 'FAKE'),
+ versioning.to_remote_version('test', '0.0.1-alpha.2+build.122', 'FAKE'),
+ versioning.to_remote_version('test', '0.0.1-alpha.2+build.123', 'FAKE'),
+ versioning.to_remote_version('test', '0.0.2', 'FAKE'),
+ versioning.to_remote_version('test', '0.1.0', 'FAKE'),
+ versioning.to_remote_version('test', '0.1.1', 'FAKE'),
+ versioning.to_remote_version('test', '1.0.0', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-alpha.1', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-alpha.2', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-beta.1', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-beta.2', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-rc.1+build.2', 'FAKE'),
+ versioning.to_remote_version('test', '2.0.0-rc.1+build.3', 'FAKE')
+]))
+
+validations = [
+ ('=0.0.1', False, '0.0.1'),
+ ('=0.0.1', True, '0.0.1-alpha.2+build.123'),
+ ('=0.0.1+build.122', True, '0.0.1-alpha.2+build.122'),
+ ('<0.0.1+build.123', True, '0.0.1-alpha.2+build.122'),
+ ('=0.0.1-alpha.1', True, '0.0.1-alpha.1'),
+ ('<0.0.2', False, '0.0.1'),
+ ('<=0.0.2', False, '0.0.2'),
+ ('=0.0.*', False, '0.0.2'),
+ ('=0.*', False, '0.1.1'),
+ ('=*', False, '1.0.0'),
+ ('=*', True, '2.0.0-rc.1+build.3'),
+ ('>0.1.1', False, '1.0.0'),
+ ('>=1.0.0', False, '1.0.0'),
+ ('', False, '1.0.0'),
+ ('', True, '2.0.0-rc.1+build.3'),
+]
+
+
[email protected]('constraint,unstable,expected', validations)
+@patch('pipper.versioning.list_versions')
+def test_find_latest_match(
+ list_versions: MagicMock,
+ constraint: str,
+ expected: str,
+ unstable: bool
+):
+ """Should find correct latest match for given constraint"""
+ list_versions.return_value = [
+ v for v in listed_versions
+ if not v.is_prerelease or unstable
+ ]
+
+ result = versioning.find_latest_match(
+ environment=MagicMock(),
+ package_name='test',
+ version_constraint=constraint,
+ include_prereleases=unstable
+ )
+ assert expected == result.version
diff --git a/pipper/test/versioning/test_parse_package_url.py b/pipper/test/versioning/test_parse_package_url.py
new file mode 100644
index 0000000..666e882
--- /dev/null
+++ b/pipper/test/versioning/test_parse_package_url.py
@@ -0,0 +1,9 @@
+
+from pipper import versioning
+
+
+def test_parse_package_url():
+ """Should parse package URL"""
+ rv = versioning.to_remote_version('fake', '1.0.1-alpha.1', 'fake-bucket')
+ rv_url = versioning.parse_package_url(rv.url)
+ assert rv == rv_url, 'Expect URL parsing to be consistent.'
diff --git a/pipper/test/versioning/test_remote_version.py b/pipper/test/versioning/test_remote_version.py
new file mode 100644
index 0000000..e7d70ba
--- /dev/null
+++ b/pipper/test/versioning/test_remote_version.py
@@ -0,0 +1,28 @@
+from pipper import versioning
+
+
+def test_attributes():
+ """Should have correct attributes specified in assignment"""
+ bucket = 'FAKE'
+ package = 'test'
+ version = '0.0.1'
+ rv = versioning.to_remote_version(package, version, bucket)
+ assert rv.bucket == bucket
+ assert rv.key.find(package) > 0
+ assert str(rv).find(package) > 0
+ assert str(rv).find(version) > 0
+ assert not rv.is_url_based
+
+
+def test_comparison():
+ """Should compare two remote versions correctly"""
+ rv1 = versioning.to_remote_version('test', '0.0.1-alpha.1', 'FAKE')
+ rv2 = versioning.to_remote_version('test', '0.0.1', 'FAKE')
+ rv3 = versioning.to_remote_version('test', '0.1.0', 'FAKE')
+
+ assert rv1 < rv2 < rv3
+ assert rv1 <= rv2 <= rv3
+ assert not rv1 >= rv2
+ assert not rv1 > rv2
+ assert not rv1 == rv2
+ assert rv1 == rv1
diff --git a/pipper/test/versioning/test_serialize.py b/pipper/test/versioning/test_serialize.py
new file mode 100644
index 0000000..d923368
--- /dev/null
+++ b/pipper/test/versioning/test_serialize.py
@@ -0,0 +1,40 @@
+import pytest
+
+from pipper import versioning
+
+TEST_PARAMETERS = [
+ ('1.2.3-alpha-beta.1', 'v1-2-3__pre_alpha-beta_1'),
+ ('1.2.3-alpha.1+2-test', 'v1-2-3__pre_alpha_1__build_2-test'),
+ ('1.2.3+2-test', 'v1-2-3__build_2-test'),
+ ('1.2.0-2+2-test', 'v1-2-0__pre_2__build_2-test')
+]
+
+
[email protected]('source,expected', TEST_PARAMETERS)
+def test_serialize(source: str, expected: str):
+ """Should serialize the source version to the expected value"""
+ result = versioning.serialize(source)
+ assert result == expected, """
+ Expected "{}" to be serialized as "{}" and not "{}".
+ """.format(source, expected, result)
+
+
[email protected]('expected,source', TEST_PARAMETERS)
+def test_deserialize(source: str, expected: str):
+ """Should deserialize the source version to the expected value"""
+ result = versioning.deserialize(source)
+ assert result == expected, """
+ Expected "{}" to be deserialized as "{}" and not "{}".
+ """.format(source, expected, result)
+
+
+def test_serialize_invalid():
+ """Should raise error trying to serialize invalid value."""
+ with pytest.raises(ValueError):
+ versioning.serialize('1.2')
+
+
+def test_deserialize_invalid():
+ """Should raise error trying to deserialize invalid value."""
+ with pytest.raises(ValueError):
+ versioning.deserialize('v1-2')
diff --git a/pipper/test/versioning/test_serialize_prefix.py b/pipper/test/versioning/test_serialize_prefix.py
new file mode 100644
index 0000000..1d0b182
--- /dev/null
+++ b/pipper/test/versioning/test_serialize_prefix.py
@@ -0,0 +1,45 @@
+import pytest
+
+from pipper import versioning
+
+TEST_PARAMETERS = [
+ ('', ''),
+ ('1.2', 'v1-2'),
+ ('0.4.', 'v0-4'),
+ ('1.2.3-alpha-beta.1', 'v1-2-3__pre_alpha-beta_1'),
+ ('1.2.3-alpha.1+2-test', 'v1-2-3__pre_alpha_1__build_2-test'),
+ ('1.2.3+2-test', 'v1-2-3__build_2-test'),
+ ('1.2-2+2-test', 'v1-2__pre_2__build_2-test'),
+ ('1+2-test', 'v1__build_2-test'),
+]
+
+
[email protected]('source,expected', TEST_PARAMETERS)
+def test_serialize_prefix(source: str, expected: str):
+ """Should serialize the source prefix to the expected value"""
+ result = versioning.serialize_prefix(source)
+ assert result == expected, """
+ Expected "{}" to be serialized as "{}" and not "{}".
+ """.format(source, expected, result)
+
+
[email protected]('expected,source', TEST_PARAMETERS)
+def test_deserialize_prefix(source: str, expected: str):
+ """Should deserialize the source prefix to the expected value"""
+ expected = expected.rstrip('.)+-_')
+ result = versioning.deserialize_prefix(source)
+ assert result == expected, """
+ Expected "{}" to be deserialized as "{}" and not "{}".
+ """.format(source, expected, result)
+
+
+def test_serialize_prefix_already():
+ """Should not modify a prefix that is already serialized."""
+ result = versioning.serialize_prefix('v1-2')
+ assert 'v1-2' == result, 'Expected prefix to remain unchanged.'
+
+
+def test_deserialize_prefix_already():
+ """Should not modify a prefix that has already been deserialized."""
+ result = versioning.deserialize_prefix('1.2')
+ assert '1.2' == result, 'Expected prefix to remain unchanged.'
diff --git a/pipper/test/versioning/test_versioning.py b/pipper/test/versioning/test_versioning.py
new file mode 100644
index 0000000..96bdb07
--- /dev/null
+++ b/pipper/test/versioning/test_versioning.py
@@ -0,0 +1,49 @@
+from pipper import versioning
+
+
+def test_make_s3_key():
+ """Should convert package name and version to an associated S3 key."""
+ name = 'fake-package'
+ version = '1.2.3-alpha.1+2-test'
+ key = versioning.make_s3_key(name, version)
+ remote = versioning.RemoteVersion('FAKE', key)
+
+ assert key.endswith('.pipper'), 'Expected a pipper file key.'
+ assert key.find(f'/{name}/'), 'Expected the package name as a folder.'
+ assert name == remote.package_name
+ assert version == remote.version
+ assert key.endswith('{}.pipper'.format(remote.safe_version)), """
+ Expected the key to end with the safe version as the file name.
+ """
+
+
+def test_to_remote_version():
+ """Should convert the package information into a RemoteVersion object."""
+ name = 'fake-package'
+ version = '1.2.3'
+ bucket = 'FAKE'
+ remote = versioning.to_remote_version(name, version, bucket)
+ assert version == remote.version
+ assert 'v1-2-3' == remote.safe_version
+ assert name == remote.package_name
+ assert f'pipper/{name}/v1-2-3.pipper' == remote.key
+ assert 'v1-2-3.pipper' == remote.filename
+ assert remote == remote, 'Should be equal to itself when compared'
+
+
+def test_remote_sorting():
+ """Should sort remote packages correctly according to version."""
+ remotes = [
+ versioning.to_remote_version('f', '1.0.1', 'FAKE'),
+ versioning.to_remote_version('a', '0.1.0', 'FAKE'),
+ versioning.to_remote_version('c', '0.1.1-alpha.2+1-test', 'FAKE'),
+ versioning.to_remote_version('d', '0.1.1-alpha.2+2-test', 'FAKE'),
+ versioning.to_remote_version('e', '0.1.1-alpha.2', 'FAKE'),
+ versioning.to_remote_version('b', '0.1.1-alpha.1', 'FAKE'),
+ ]
+ result = sorted(remotes)
+ comparison = sorted(remotes, key=lambda r: r.package_name)
+ assert comparison == result, """
+ Expected the RemoteVersions to be sorted by version such that their
+ package names are sorted alphabetically.
+ """
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_added_files",
"has_removed_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 15
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | boto3==1.33.13
botocore==1.33.13
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
coverage==7.2.7
exceptiongroup==1.2.2
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
jmespath==1.0.1
packaging==24.0
-e git+https://github.com/sernst/pipper.git@3c6bec29901aa6f4e65e7024edd8ded3f85f9a3c#egg=pipper
pluggy==1.2.0
pytest==7.4.4
pytest-cov==4.1.0
python-dateutil==2.9.0.post0
requests==2.31.0
s3transfer==0.8.2
semver==3.0.4
six==1.17.0
tomli==2.0.1
typing_extensions==4.7.1
urllib3==1.26.20
zipp==3.15.0
| name: pipper
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.33.13
- botocore==1.33.13
- charset-normalizer==3.4.1
- coverage==7.2.7
- exceptiongroup==1.2.2
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- jmespath==1.0.1
- packaging==24.0
- pluggy==1.2.0
- pytest==7.4.4
- pytest-cov==4.1.0
- python-dateutil==2.9.0.post0
- requests==2.31.0
- s3transfer==0.8.2
- semver==3.0.4
- six==1.17.0
- tomli==2.0.1
- typing-extensions==4.7.1
- urllib3==1.26.20
- zipp==3.15.0
prefix: /opt/conda/envs/pipper
| [
"pipper/test/commands/test_install.py::test_install",
"pipper/test/test_commands.py::test_info",
"pipper/test/test_commands.py::test_info_local",
"pipper/test/versioning/test_compare_constraint.py::test_compare_constraint[0.0.1-0.0.1-0]",
"pipper/test/versioning/test_compare_constraint.py::test_compare_constraint[0.0.1-0.0.2--1]",
"pipper/test/versioning/test_compare_constraint.py::test_compare_constraint[0.0.1-0.0.1-alpha.1--1]",
"pipper/test/versioning/test_compare_constraint.py::test_compare_constraint[0.0.1-0.0.1-alpha.1+build.2--1]",
"pipper/test/versioning/test_compare_constraint.py::test_compare_constraint[0.0.1-0.0.*-0]",
"pipper/test/versioning/test_explode.py::test_explode[0.0.1-expected0]",
"pipper/test/versioning/test_explode.py::test_explode[0.1.12+build.1-expected1]",
"pipper/test/versioning/test_explode.py::test_explode[0.1.*-beta.4-expected2]",
"pipper/test/versioning/test_explode.py::test_explode[1.*.*-beta.4+build.12-expected3]",
"pipper/test/versioning/test_explode.py::test_explode[1.12-beta.4-expected4]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.0.1-False-0.0.1]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.0.1-True-0.0.1-alpha.2+build.123]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.0.1+build.122-True-0.0.1-alpha.2+build.122]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[<0.0.1+build.123-True-0.0.1-alpha.2+build.122]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.0.1-alpha.1-True-0.0.1-alpha.1]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[<0.0.2-False-0.0.1]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[<=0.0.2-False-0.0.2]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.0.*-False-0.0.2]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=0.*-False-0.1.1]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=*-False-1.0.0]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[=*-True-2.0.0-rc.1+build.3]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[>0.1.1-False-1.0.0]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[>=1.0.0-False-1.0.0]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[-False-1.0.0]",
"pipper/test/versioning/test_find_latest_match.py::test_find_latest_match[-True-2.0.0-rc.1+build.3]",
"pipper/test/versioning/test_parse_package_url.py::test_parse_package_url",
"pipper/test/versioning/test_remote_version.py::test_attributes",
"pipper/test/versioning/test_remote_version.py::test_comparison",
"pipper/test/versioning/test_serialize.py::test_serialize[1.2.3-alpha-beta.1-v1-2-3__pre_alpha-beta_1]",
"pipper/test/versioning/test_serialize.py::test_serialize[1.2.3-alpha.1+2-test-v1-2-3__pre_alpha_1__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_serialize[1.2.3+2-test-v1-2-3__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_serialize[1.2.0-2+2-test-v1-2-0__pre_2__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_deserialize[1.2.3-alpha-beta.1-v1-2-3__pre_alpha-beta_1]",
"pipper/test/versioning/test_serialize.py::test_deserialize[1.2.3-alpha.1+2-test-v1-2-3__pre_alpha_1__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_deserialize[1.2.3+2-test-v1-2-3__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_deserialize[1.2.0-2+2-test-v1-2-0__pre_2__build_2-test]",
"pipper/test/versioning/test_serialize.py::test_serialize_invalid",
"pipper/test/versioning/test_serialize.py::test_deserialize_invalid",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[-]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1.2-v1-2]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[0.4.-v0-4]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1.2.3-alpha-beta.1-v1-2-3__pre_alpha-beta_1]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1.2.3-alpha.1+2-test-v1-2-3__pre_alpha_1__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1.2.3+2-test-v1-2-3__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1.2-2+2-test-v1-2__pre_2__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix[1+2-test-v1__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[-]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1.2-v1-2]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[0.4.-v0-4]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1.2.3-alpha-beta.1-v1-2-3__pre_alpha-beta_1]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1.2.3-alpha.1+2-test-v1-2-3__pre_alpha_1__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1.2.3+2-test-v1-2-3__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1.2-2+2-test-v1-2__pre_2__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix[1+2-test-v1__build_2-test]",
"pipper/test/versioning/test_serialize_prefix.py::test_serialize_prefix_already",
"pipper/test/versioning/test_serialize_prefix.py::test_deserialize_prefix_already",
"pipper/test/versioning/test_versioning.py::test_make_s3_key",
"pipper/test/versioning/test_versioning.py::test_to_remote_version",
"pipper/test/versioning/test_versioning.py::test_remote_sorting"
]
| []
| []
| []
| null | 2,512 | [
"pipper/environment.py",
"pipper/resources/command_description.txt",
"pipper/settings.json",
"requirements.txt",
"pipper/versioning/definitions.py",
"conda.dockerfile",
"pipper/command.py",
"pipper/versioning/__init__.py",
"pipper/wrapper.py",
"pipper/bundler.py",
".coveragerc",
"pipper/info.py",
"setup.py",
"pipper/versioning/serde.py",
"pipper/__init__.py",
"docker-compose.yaml",
"pipper/s3.py",
"pipper/versioning.py",
"pipper/authorizer.py",
"pipper/parser.py",
"pipper/downloader.py",
"pipper/installer.py"
]
| [
"pipper/environment.py",
"pipper/resources/command_description.txt",
"pipper/settings.json",
"requirements.txt",
"pipper/versioning/definitions.py",
"conda.dockerfile",
"pipper/command.py",
"pipper/versioning/__init__.py",
"pipper/wrapper.py",
"pipper/bundler.py",
".coveragerc",
"pipper/info.py",
"setup.py",
"pipper/versioning/serde.py",
"pipper/__init__.py",
"docker-compose.yaml",
"pipper/s3.py",
"pipper/versioning.py",
"pipper/authorizer.py",
"pipper/parser.py",
"pipper/downloader.py",
"pipper/installer.py"
]
|
fniessink__next-action-37 | 4f4f97d3a24dba7b534e09cc514805f18a03d9c4 | 2018-05-13 17:32:23 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9bbe07a..270239d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
## [Unreleased]
+### Added
+
+- Next-action can now read multiple todo.txt files to select the next action from. For example: `next-action --file todo.txt --file big-project-todo.txt`. Closes #35.
+
### Changed
- Ignore tasks that have a start date in the future. Closes #34.
diff --git a/README.md b/README.md
index 3dfda58..4bc5ef1 100644
--- a/README.md
+++ b/README.md
@@ -52,7 +52,7 @@ $ next-action
The next action is determined using priority. Creation date is considered after priority, with older tasks getting precedence over newer tasks.
-Completed tasks (<del>`x This is a completed task`</del>) and tasks with a creation date in the future (`9999-01-01 Start preparing for five-digit years`) are not considered when determining the next action.
+Completed tasks (~~`x This is a completed task`~~) and tasks with a creation date in the future (`9999-01-01 Start preparing for five-digit years`) are not considered when determining the next action.
### Limit next actions
diff --git a/next_action/arguments.py b/next_action/arguments.py
index ec87cb1..adc90f4 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -48,7 +48,9 @@ def parse_arguments() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Show the next action in your todo.txt",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--version", action="version", version="%(prog)s {0}".format(next_action.__version__))
- parser.add_argument("-f", "--file", help="filename of the todo.txt file to read", type=str, default="todo.txt")
+ default_filenames = ["todo.txt"]
+ parser.add_argument("-f", "--file", help="todo.txt file to read; argument can be repeated", type=str,
+ action="append", dest="filenames", metavar="FILE", default=default_filenames)
group = parser.add_mutually_exclusive_group()
group.add_argument("-n", "--number", metavar="N", help="number of next actions to show", type=int, default=1)
group.add_argument("-a", "--all", help="show all next actions", action="store_true")
@@ -57,6 +59,14 @@ def parse_arguments() -> argparse.Namespace:
parser.add_argument("projects", metavar="+PROJECT", help="show the next action for the specified projects",
nargs="*", type=str, default=None, action=ContextProjectAction)
namespace = parser.parse_args()
+ # Work around the issue that the "append" action doesn't overwrite defaults.
+ # See https://bugs.python.org/issue16399.
+ if default_filenames != namespace.filenames:
+ for default_filename in default_filenames:
+ namespace.filenames.remove(default_filename)
+ # Remove duplicate filenames while maintaining order.
+ namespace.filenames = list(dict.fromkeys(namespace.filenames))
+ # If the user wants to see all next actions, set the number to something big.
if namespace.all:
namespace.number = sys.maxsize
return namespace
diff --git a/next_action/cli.py b/next_action/cli.py
index 93a7d57..e0c7f41 100644
--- a/next_action/cli.py
+++ b/next_action/cli.py
@@ -10,17 +10,18 @@ def next_action() -> None:
Basic recipe:
1) parse command-line arguments,
- 2) read todo.txt file,
+ 2) read todo.txt file(s),
3) determine the next action(s) and display them.
"""
arguments = parse_arguments()
- filename: str = arguments.file
- try:
- todotxt_file = open(filename, "r")
- except FileNotFoundError:
- print("Can't find {0}".format(filename))
- return
- with todotxt_file:
- tasks = [Task(line.strip()) for line in todotxt_file.readlines() if line.strip()]
+ tasks = []
+ for filename in arguments.filenames:
+ try:
+ todotxt_file = open(filename, "r")
+ except FileNotFoundError:
+ print("Can't find {0}".format(filename))
+ return
+ with todotxt_file:
+ tasks.extend([Task(line.strip()) for line in todotxt_file.readlines() if line.strip()])
actions = next_actions(tasks, set(arguments.contexts), set(arguments.projects))
print("\n".join(action.text for action in actions[:arguments.number]) if actions else "Nothing to do!")
| Allow for repeating --file
Show which file the next action was selected from somehow, e.g.:
```console
$ next-action --file work.txt home.txt
(A) Fix leaking roof [home.txt]
``` | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index 6fa5f60..ac314cd 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -17,17 +17,32 @@ class ArgumentParserTest(unittest.TestCase):
@patch.object(sys, "argv", ["next-action"])
def test_default_filename(self):
""" Test that the argument parser has a default filename. """
- self.assertEqual("todo.txt", parse_arguments().file)
+ self.assertEqual(["todo.txt"], parse_arguments().filenames)
@patch.object(sys, "argv", ["next-action", "-f", "my_todo.txt"])
def test_filename_argument(self):
""" Test that the argument parser accepts a filename. """
- self.assertEqual("my_todo.txt", parse_arguments().file)
+ self.assertEqual(["my_todo.txt"], parse_arguments().filenames)
@patch.object(sys, "argv", ["next-action", "--file", "my_other_todo.txt"])
def test_long_filename_argument(self):
""" Test that the argument parser accepts a filename. """
- self.assertEqual("my_other_todo.txt", parse_arguments().file)
+ self.assertEqual(["my_other_todo.txt"], parse_arguments().filenames)
+
+ @patch.object(sys, "argv", ["next-action", "-f", "todo.txt"])
+ def test_add_default_filename(self):
+ """ Test that adding the default filename doesn't get it included twice. """
+ self.assertEqual(["todo.txt"], parse_arguments().filenames)
+
+ @patch.object(sys, "argv", ["next-action", "-f", "todo.txt", "-f", "other.txt"])
+ def test_default_and_non_default(self):
+ """ Test that adding the default filename and another filename gets both included. """
+ self.assertEqual(["todo.txt", "other.txt"], parse_arguments().filenames)
+
+ @patch.object(sys, "argv", ["next-action", "-f", "other.txt", "-f", "other.txt"])
+ def test__add_filename_twice(self):
+ """ Test that adding the same filename twice includes it only once. """
+ self.assertEqual(["other.txt"], parse_arguments().filenames)
@patch.object(sys, "argv", ["next-action"])
def test_no_context(self):
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 1b80920..7c7d92a 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -71,7 +71,7 @@ positional arguments:
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
- -f FILE, --file FILE filename of the todo.txt file to read (default: todo.txt)
+ -f FILE, --file FILE todo.txt file to read; argument can be repeated (default: ['todo.txt'])
-n N, --number N number of next actions to show (default: 1)
-a, --all show all next actions (default: False)
"""),
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 4
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@4f4f97d3a24dba7b534e09cc514805f18a03d9c4#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test__add_filename_twice",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_add_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_and_non_default",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_and_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_project",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_option",
"tests/unittests/test_cli.py::CLITest::test_help"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_actions",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_contexts_and_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_contexts",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_project",
"tests/unittests/test_cli.py::CLITest::test_context",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_ignore_empty_lines",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_number",
"tests/unittests/test_cli.py::CLITest::test_one_task",
"tests/unittests/test_cli.py::CLITest::test_project",
"tests/unittests/test_cli.py::CLITest::test_version"
]
| []
| Apache License 2.0 | 2,513 | [
"next_action/cli.py",
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
| [
"next_action/cli.py",
"README.md",
"next_action/arguments.py",
"CHANGELOG.md"
]
|
|
fniessink__next-action-39 | e145fa742fb415e26ec78bff558a359e2022729e | 2018-05-13 20:52:13 | e145fa742fb415e26ec78bff558a359e2022729e | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5353797..b08b416 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
+## [0.1.0] - 2018-05-13
+
+### Added
+
+- Take due date into account when determining the next action. Tasks due earlier take precedence. Closes #33.
+
## [0.0.9] - 2018-05-13
### Added
diff --git a/README.md b/README.md
index 3a98d93..beca8e8 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@
[](https://www.codacy.com/app/frank_10/next-action?utm_source=github.com&utm_medium=referral&utm_content=fniessink/next-action&utm_campaign=Badge_Grade)
[](https://www.codacy.com/app/frank_10/next-action?utm_source=github.com&utm_medium=referral&utm_content=fniessink/next-action&utm_campaign=Badge_Coverage)
-Determine the next action to work on from a list of actions in a todo.txt file. *Next-action* is pre-alpha-stage at the moment, so its functionality is still rather limited.
+Determine the next action to work on from a list of actions in a todo.txt file. *Next-action* is alpha-stage at the moment, so its functionality is still rather limited.
Don't know what *Todo.txt* is? See <https://github.com/todotxt/todo.txt> for the *Todo.txt* specification.
@@ -50,7 +50,7 @@ $ next-action
(A) Call mom @phone
```
-The next action is determined using priority. Creation date is considered after priority, with older tasks getting precedence over newer tasks.
+The next action is determined using priority. Due date is considered after priority, with tasks due earlier getting precedence over tasks due later. Creation date is considered after due date, with older tasks getting precedence over newer tasks.
Completed tasks (~~`x This is a completed task`~~) and tasks with a creation date in the future (`9999-01-01 Start preparing for five-digit years`) are not considered when determining the next action.
@@ -97,7 +97,7 @@ $ next-action --all @store
Note again that completed tasks and task with a future creation date are never shown since they can't be a next action.
-*Next-action* being still pre-alpha-stage, this is it for the moment. Stay tuned for more options.
+*Next-action* being still alpha-stage, this is it for the moment. Stay tuned for more options.
## Develop
diff --git a/next_action/__init__.py b/next_action/__init__.py
index f916b9e..9aa288e 100644
--- a/next_action/__init__.py
+++ b/next_action/__init__.py
@@ -1,4 +1,4 @@
""" Main Next-action package. """
__title__ = "next-action"
-__version__ = "0.0.9"
+__version__ = "0.1.0"
diff --git a/next_action/pick_action.py b/next_action/pick_action.py
index a950112..444f65a 100644
--- a/next_action/pick_action.py
+++ b/next_action/pick_action.py
@@ -8,7 +8,7 @@ from .todotxt import Task
def sort_key(task: Task) -> Tuple[str, datetime.date]:
""" Return the sort key for a task. """
- return (task.priority() or "ZZZ", task.creation_date() or datetime.date.max)
+ return (task.priority() or "ZZZ", task.due_date() or datetime.date.max, task.creation_date() or datetime.date.max)
def next_actions(tasks: Sequence[Task], contexts: Set[str] = None, projects: Set[str] = None) -> Sequence[Task]:
@@ -19,5 +19,5 @@ def next_actions(tasks: Sequence[Task], contexts: Set[str] = None, projects: Set
tasks_in_context = filter(lambda task: contexts <= task.contexts() if contexts else True, actionable_tasks)
# Next, select the tasks that belong to at least one of the given projects, if any
tasks_in_project = filter(lambda task: projects & task.projects() if projects else True, tasks_in_context)
- # Finally, sort by priority and creation date
+ # Finally, sort by priority, due date and creation date
return sorted(tasks_in_project, key=sort_key)
diff --git a/next_action/todotxt/task.py b/next_action/todotxt/task.py
index 6ea1319..0ee3042 100644
--- a/next_action/todotxt/task.py
+++ b/next_action/todotxt/task.py
@@ -3,10 +3,14 @@
import datetime
import re
from typing import Optional, Set
+from typing.re import Match # pylint: disable=import-error
class Task(object):
""" A task from a line in a todo.txt file. """
+
+ iso_date_reg_exp = r"(\d{4})-(\d{1,2})-(\d{1,2})"
+
def __init__(self, todo_txt: str) -> None:
self.text = todo_txt
@@ -28,13 +32,13 @@ class Task(object):
def creation_date(self) -> Optional[datetime.date]:
""" Return the creation date of the task. """
- match = re.match(r"(?:\([A-Z]\) )?(\d{4})-(\d{1,2})-(\d{1,2})", self.text)
- if match:
- try:
- return datetime.date(*(int(group) for group in match.groups()))
- except ValueError:
- pass
- return None
+ match = re.match(r"(?:\([A-Z]\) )?{0}\b".format(self.iso_date_reg_exp), self.text)
+ return self.__create_date(match)
+
+ def due_date(self) -> Optional[datetime.date]:
+ """ Return the due date of the task. """
+ match = re.search(r"\bdue:{0}\b".format(self.iso_date_reg_exp), self.text)
+ return self.__create_date(match)
def is_completed(self) -> bool:
""" Return whether the task is completed or not. """
@@ -48,3 +52,13 @@ class Task(object):
def __prefixed_items(self, prefix: str) -> Set[str]:
""" Return the prefixed items in the task. """
return {match.group(1) for match in re.finditer(" {0}([^ ]+)".format(prefix), self.text)}
+
+ @staticmethod
+ def __create_date(match: Match) -> Optional[datetime.date]:
+ """ Create a date from the match, if possible. """
+ if match:
+ try:
+ return datetime.date(*(int(group) for group in match.groups()))
+ except ValueError:
+ pass
+ return None
diff --git a/setup.py b/setup.py
index bdbb6cd..67b4fd0 100644
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@ and more.""",
},
test_suite="tests",
classifiers=[
- "Development Status :: 2 - Pre-Alpha",
+ "Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: Apache Software License",
| Take due date into account when determining next action
Due dates are not part of the todo.txt spec, but are usually added using the key:value pattern, e.g.:
`Pay taxes due:2018-05-01` | fniessink/next-action | diff --git a/tests/unittests/test_pick_action.py b/tests/unittests/test_pick_action.py
index 163b4aa..afd7844 100644
--- a/tests/unittests/test_pick_action.py
+++ b/tests/unittests/test_pick_action.py
@@ -94,3 +94,17 @@ class PickActionTest(unittest.TestCase):
older_task = todotxt.Task("2017-01-01 Task 3")
self.assertEqual([priority, older_task, newer_task],
pick_action.next_actions([priority, newer_task, older_task]))
+
+ def test_due_dates(self):
+ """ Test that a task with an earlier due date takes precedence. """
+ no_due_date = todotxt.Task("Task 1")
+ earlier_task = todotxt.Task("Task 2 due:2018-02-02")
+ later_task = todotxt.Task("Task 3 due:2019-01-01")
+ self.assertEqual([earlier_task, later_task, no_due_date],
+ pick_action.next_actions([no_due_date, later_task, earlier_task]))
+
+ def test_due_and_creation_dates(self):
+ """ Test that a task with a due date takes precedence over creation date. """
+ task1 = todotxt.Task("2018-1-1 Task 1")
+ task2 = todotxt.Task("Task 2 due:2018-1-1")
+ self.assertEqual([task2, task1], pick_action.next_actions([task1, task2]))
diff --git a/tests/unittests/todotxt/test_task.py b/tests/unittests/todotxt/test_task.py
index a0b1bf9..a6fa2b8 100644
--- a/tests/unittests/todotxt/test_task.py
+++ b/tests/unittests/todotxt/test_task.py
@@ -95,6 +95,10 @@ class CreationDateTest(unittest.TestCase):
""" Test an invalid creation date. """
self.assertEqual(None, todotxt.Task("2018-14-02 Todo").creation_date())
+ def test_no_space_after(self):
+ """ Test a creation date without a word boundary. """
+ self.assertEqual(None, todotxt.Task("2018-10-10Todo").creation_date())
+
def test_single_digits(self):
""" Test a creation date with single digits for day and/or month. """
self.assertEqual(datetime.date(2018, 12, 3), todotxt.Task("(B) 2018-12-3 Todo").creation_date())
@@ -106,6 +110,33 @@ class CreationDateTest(unittest.TestCase):
self.assertTrue(todotxt.Task("9999-01-01 Prepare for five-digit years").is_future())
+class DueDateTest(unittest.TestCase):
+ """ Unit tests for the due date of tasks. """
+
+ def test_no_due_date(self):
+ """ Test that tasks have no due date by default. """
+ self.assertEqual(None, todotxt.Task("Todo").due_date())
+
+ def test_due_date(self):
+ """ Test a valid due date. """
+ self.assertEqual(datetime.date(2018, 1, 2), todotxt.Task("Todo due:2018-01-02").due_date())
+
+ def test_invalid_date(self):
+ """ Test an invalid due date. """
+ self.assertEqual(None, todotxt.Task("Todo due:2018-01-32").due_date())
+
+ def test_no_space_after(self):
+ """ Test a due date without a word boundary following it. """
+ self.assertEqual(None, todotxt.Task("Todo due:2018-01-023").due_date())
+
+ def test_single_digits(self):
+ """ Test a due date with single digits for day and/or month. """
+ self.assertEqual(datetime.date(2018, 12, 3), todotxt.Task("(B) due:2018-12-3 Todo").due_date())
+ self.assertEqual(datetime.date(2018, 1, 13), todotxt.Task("(B) due:2018-1-13 Todo").due_date())
+ self.assertEqual(datetime.date(2018, 1, 1), todotxt.Task("(B) due:2018-1-1 Todo").due_date())
+
+
+
class TaskCompletionTest(unittest.TestCase):
""" Unit tests for the completion status of tasks. """
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 6
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/fniessink/next-action.git@e145fa742fb415e26ec78bff558a359e2022729e#egg=next_action
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_pick_action.py::PickActionTest::test_due_and_creation_dates",
"tests/unittests/test_pick_action.py::PickActionTest::test_due_dates",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_no_space_after",
"tests/unittests/todotxt/test_task.py::DueDateTest::test_due_date",
"tests/unittests/todotxt/test_task.py::DueDateTest::test_invalid_date",
"tests/unittests/todotxt/test_task.py::DueDateTest::test_no_due_date",
"tests/unittests/todotxt/test_task.py::DueDateTest::test_no_space_after",
"tests/unittests/todotxt/test_task.py::DueDateTest::test_single_digits"
]
| []
| [
"tests/unittests/test_pick_action.py::PickActionTest::test_context",
"tests/unittests/test_pick_action.py::PickActionTest::test_contexts",
"tests/unittests/test_pick_action.py::PickActionTest::test_creation_dates",
"tests/unittests/test_pick_action.py::PickActionTest::test_higher_prio_goes_first",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_completed_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_future_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_these_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_multiple_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_no_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_one_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_priority_and_creation_date",
"tests/unittests/test_pick_action.py::PickActionTest::test_project",
"tests/unittests/test_pick_action.py::PickActionTest::test_project_and_context",
"tests/unittests/todotxt/test_task.py::TodoTest::test_task_repr",
"tests/unittests/todotxt/test_task.py::TodoTest::test_task_text",
"tests/unittests/todotxt/test_task.py::TodoContextTest::test_no_context",
"tests/unittests/todotxt/test_task.py::TodoContextTest::test_no_space_before_at_sign",
"tests/unittests/todotxt/test_task.py::TodoContextTest::test_one_context",
"tests/unittests/todotxt/test_task.py::TodoContextTest::test_two_contexts",
"tests/unittests/todotxt/test_task.py::TaskProjectTest::test_no_projects",
"tests/unittests/todotxt/test_task.py::TaskProjectTest::test_no_space_before_at_sign",
"tests/unittests/todotxt/test_task.py::TaskProjectTest::test_one_project",
"tests/unittests/todotxt/test_task.py::TaskProjectTest::test_two_projects",
"tests/unittests/todotxt/test_task.py::TaskPriorityTest::test_faulty_priorities",
"tests/unittests/todotxt/test_task.py::TaskPriorityTest::test_no_priority",
"tests/unittests/todotxt/test_task.py::TaskPriorityTest::test_priorities",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_creation_date",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_creation_date_after_priority",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_invalid_creation_date",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_is_future_task",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_no_creation_date",
"tests/unittests/todotxt/test_task.py::CreationDateTest::test_single_digits",
"tests/unittests/todotxt/test_task.py::TaskCompletionTest::test_completed",
"tests/unittests/todotxt/test_task.py::TaskCompletionTest::test_not_completed",
"tests/unittests/todotxt/test_task.py::TaskCompletionTest::test_space_after_x",
"tests/unittests/todotxt/test_task.py::TaskCompletionTest::test_x_must_be_lowercase"
]
| []
| Apache License 2.0 | 2,514 | [
"setup.py",
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"next_action/todotxt/task.py",
"README.md"
]
| [
"setup.py",
"next_action/pick_action.py",
"CHANGELOG.md",
"next_action/__init__.py",
"next_action/todotxt/task.py",
"README.md"
]
|
|
python-useful-helpers__exec-helpers-38 | 63166d1ac340be47d64488a5b84a9d6fa317e8fe | 2018-05-14 10:14:06 | 814d435b7eda2b00fa1559d5a94103f1e888ab52 | diff --git a/exec_helpers/_api.py b/exec_helpers/_api.py
index 3283474..55c32ad 100644
--- a/exec_helpers/_api.py
+++ b/exec_helpers/_api.py
@@ -33,7 +33,6 @@ from exec_helpers import constants
from exec_helpers import exceptions
from exec_helpers import exec_result # noqa # pylint: disable=unused-import
from exec_helpers import proc_enums
-from exec_helpers import _log_templates
_type_exit_codes = typing.Union[int, proc_enums.ExitCodes]
_type_expected = typing.Optional[typing.Iterable[_type_exit_codes]]
@@ -249,7 +248,7 @@ class ExecHelper(object):
verbose=verbose,
**kwargs
)
- message = _log_templates.CMD_RESULT.format(result=result)
+ message = "Command {result.cmd!r} exit code: {result.exit_code!s}".format(result=result)
self.logger.log(
level=logging.INFO if verbose else logging.DEBUG,
msg=message
@@ -292,7 +291,8 @@ class ExecHelper(object):
ret = self.execute(command, verbose, timeout, **kwargs)
if ret['exit_code'] not in expected:
message = (
- _log_templates.CMD_UNEXPECTED_EXIT_CODE.format(
+ "{append}Command {result.cmd!r} returned exit code "
+ "{result.exit_code!s} while expected {expected!s}".format(
append=error_info + '\n' if error_info else '',
result=ret,
expected=expected
@@ -339,7 +339,8 @@ class ExecHelper(object):
error_info=error_info, raise_on_err=raise_on_err, **kwargs)
if ret['stderr']:
message = (
- _log_templates.CMD_UNEXPECTED_STDERR.format(
+ "{append}Command {result.cmd!r} STDERR while not expected\n"
+ "\texit code: {result.exit_code!s}".format(
append=error_info + '\n' if error_info else '',
result=ret,
))
diff --git a/exec_helpers/_log_templates.py b/exec_helpers/_log_templates.py
index d3f8c3b..56cda0f 100644
--- a/exec_helpers/_log_templates.py
+++ b/exec_helpers/_log_templates.py
@@ -20,18 +20,10 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
-CMD_EXEC = "Executing command:\n{cmd!s}\n"
-CMD_RESULT = "Command exit code '{result.exit_code!s}':\n{result.cmd}\n"
-CMD_UNEXPECTED_EXIT_CODE = (
- "{append}Command '{result.cmd}' returned exit code '{result.exit_code!s}' "
- "while expected '{expected!s}'"
-)
-CMD_UNEXPECTED_STDERR = (
- "{append}Command '{result.cmd}' STDERR while not expected\n"
- "\texit code: '{result.exit_code!s}'"
-)
+CMD_EXEC = "Executing command:\n{cmd!r}\n"
+
CMD_WAIT_ERROR = (
- "Wait for '{result.cmd}' during {timeout!s}s: no return code!\n"
+ "Wait for {result.cmd!r} during {timeout!s}s: no return code!\n"
'\tSTDOUT:\n'
'{result.stdout_brief}\n'
'\tSTDERR"\n'
diff --git a/exec_helpers/_ssh_client_base.py b/exec_helpers/_ssh_client_base.py
index 8ac7d88..e30ca28 100644
--- a/exec_helpers/_ssh_client_base.py
+++ b/exec_helpers/_ssh_client_base.py
@@ -50,7 +50,6 @@ from exec_helpers import _log_templates
__all__ = ('SSHClientBase', )
-logger = logging.getLogger(__name__)
logging.getLogger('paramiko').setLevel(logging.WARNING)
logging.getLogger('iso8601').setLevel(logging.WARNING)
@@ -59,9 +58,6 @@ _type_ConnectSSH = typing.Union[
]
_type_RSAKeys = typing.Iterable[paramiko.RSAKey]
_type_exit_codes = typing.Union[int, proc_enums.ExitCodes]
-_type_multiple_results = typing.Dict[
- typing.Tuple[str, int], exec_result.ExecResult
-]
_type_execute_async = typing.Tuple[
paramiko.Channel,
paramiko.ChannelFile,
@@ -149,7 +145,7 @@ class _MemorizedSSH(type):
try:
ssh.execute('cd ~', timeout=5)
except BaseException: # Note: Do not change to lower level!
- logger.debug('Reconnect {}'.format(ssh))
+ ssh.logger.debug('Reconnect')
ssh.reconnect()
return ssh
if (
@@ -158,7 +154,7 @@ class _MemorizedSSH(type):
): # pragma: no cover
# If we have only cache reference and temporary getrefcount
# reference: close connection before deletion
- logger.debug('Closing {} as unused'.format(cls.__cache[key]))
+ cls.__cache[key].logger.debug('Closing as unused')
cls.__cache[key].close()
del cls.__cache[key]
# noinspection PyArgumentList
@@ -186,7 +182,7 @@ class _MemorizedSSH(type):
CPYTHON and
sys.getrefcount(ssh) == n_count
): # pragma: no cover
- logger.debug('Closing {} as unused'.format(ssh))
+ ssh.logger.debug('Closing as unused')
ssh.close()
mcs.__cache = {}
@@ -306,7 +302,9 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
.. note:: auth has priority over username/password/private_keys
"""
super(SSHClientBase, self).__init__(
- logger=logger.getChild(
+ logger=logging.getLogger(
+ self.__class__.__name__
+ ).getChild(
'{host}:{port}'.format(host=host, port=port)
),
)
@@ -376,7 +374,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
auth=self.auth
)
- def __str__(self):
+ def __str__(self): # pragma: no cover
"""Representation for debug purposes."""
return '{cls}(host={host}, port={port}) for user {user}'.format(
cls=self.__class__.__name__, host=self.hostname, port=self.port,
@@ -832,7 +830,7 @@ class SSHClientBase(six.with_metaclass(_MemorizedSSH, _api.ExecHelper)):
expected=None, # type: typing.Optional[typing.Iterable[int]]
raise_on_err=True, # type: bool
**kwargs
- ): # type: (...) -> _type_multiple_results
+ ): # type: (...) -> typing.Dict[typing.Tuple[str, int], exec_result.ExecResult]
"""Execute command on multiple remotes in async mode.
:param remotes: Connections to execute on
diff --git a/exec_helpers/ssh_auth.py b/exec_helpers/ssh_auth.py
index 7ab9f98..63ca33b 100644
--- a/exec_helpers/ssh_auth.py
+++ b/exec_helpers/ssh_auth.py
@@ -187,9 +187,7 @@ class SSHAuth(object):
logger.exception('No password has been set!')
raise
else:
- logger.critical(
- 'Unexpected PasswordRequiredException, '
- 'when password is set!')
+ logger.critical('Unexpected PasswordRequiredException, when password is set!')
raise
except (paramiko.AuthenticationException,
paramiko.BadHostKeyException):
The logger name `exec_helpers._ssh_client_base` is an incorrect prefix, as it leaks the name of a private module
Should be `exec_helpers.ssh_client` | python-useful-helpers/exec-helpers | diff --git a/test/test_ssh_client.py b/test/test_ssh_client.py
index 3a3c4a3..9119773 100644
--- a/test/test_ssh_client.py
+++ b/test/test_ssh_client.py
@@ -53,7 +53,7 @@ port = 22
username = 'user'
password = 'pass'
command = 'ls ~\nline 2\nline 3\nline с кирилицей'
-command_log = u"Executing command:\n{!s}\n".format(command.rstrip())
+command_log = u"Executing command:\n{!r}\n".format(command.rstrip())
stdout_list = [b' \n', b'2\n', b'3\n', b' \n']
stdout_str = b''.join(stdout_list).strip().decode('utf-8')
stderr_list = [b' \n', b'0\n', b'1\n', b' \n']
@@ -64,7 +64,7 @@ encoded_cmd = base64.b64encode(
print_stdin = 'read line; echo "$line"'
[email protected]('exec_helpers._ssh_client_base.logger', autospec=True)
[email protected]('logging.getLogger', autospec=True)
@mock.patch('paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
@mock.patch('paramiko.SSHClient', autospec=True)
class TestExecute(unittest.TestCase):
@@ -89,8 +89,7 @@ class TestExecute(unittest.TestCase):
@staticmethod
def gen_cmd_result_log_message(result):
- return (u"Command exit code '{code!s}':\n{cmd!s}\n"
- .format(cmd=result.cmd.rstrip(), code=result.exit_code))
+ return u"Command {result.cmd!r} exit code: {result.exit_code!s}".format(result=result)
def test_001_execute_async(self, client, policy, logger):
chan = mock.Mock()
@@ -116,7 +115,8 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(command))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ # raise ValueError(logger.mock_calls)
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -151,7 +151,7 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(command))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -235,7 +235,7 @@ class TestExecute(unittest.TestCase):
"sudo -S bash -c '"
"eval \"$(base64 -d <(echo \"{0}\"))\"'".format(encoded_cmd))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -271,7 +271,7 @@ class TestExecute(unittest.TestCase):
"sudo -S bash -c '"
"eval \"$(base64 -d <(echo \"{0}\"))\"'".format(encoded_cmd))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -303,7 +303,7 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(command))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -335,7 +335,7 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(command))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -380,7 +380,7 @@ class TestExecute(unittest.TestCase):
"sudo -S bash -c '"
"eval \"$(base64 -d <(echo \"{0}\"))\"'".format(encoded_cmd))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=command_log),
log.mock_calls
@@ -410,7 +410,7 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(command))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.INFO, msg=command_log),
log.mock_calls
@@ -420,7 +420,7 @@ class TestExecute(unittest.TestCase):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
- cmd_log = u"Executing command:\n{!s}\n".format(masked_cmd)
+ cmd_log = u"Executing command:\n{!r}\n".format(masked_cmd)
chan = mock.Mock()
open_session = mock.Mock(return_value=chan)
@@ -445,7 +445,7 @@ class TestExecute(unittest.TestCase):
mock.call.makefile_stderr('rb'),
mock.call.exec_command('{}\n'.format(cmd))
))
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
self.assertIn(
mock.call.log(level=logging.DEBUG, msg=cmd_log),
log.mock_calls
@@ -620,7 +620,7 @@ class TestExecute(unittest.TestCase):
open_session.assert_called_once()
stdin.assert_not_called()
- log = logger.getChild('{host}:{port}'.format(host=host, port=port))
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port))
log.warning.assert_called_once_with('STDIN Send failed: closed channel')
self.assertIn(chan, result)
@@ -777,7 +777,7 @@ class TestExecute(unittest.TestCase):
execute_async.assert_called_once_with(command, verbose=False)
chan.assert_has_calls((mock.call.status_event.is_set(), ))
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(
@@ -824,7 +824,7 @@ class TestExecute(unittest.TestCase):
chan.assert_has_calls((mock.call.status_event.is_set(), ))
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(
@@ -872,7 +872,7 @@ class TestExecute(unittest.TestCase):
execute_async.assert_called_once_with(
command, verbose=False, open_stdout=False)
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(
@@ -916,7 +916,7 @@ class TestExecute(unittest.TestCase):
execute_async.assert_called_once_with(
command, verbose=False, open_stderr=False)
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(
@@ -968,7 +968,7 @@ class TestExecute(unittest.TestCase):
open_stderr=False
)
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(level=logging.DEBUG, msg=message),
@@ -1003,7 +1003,7 @@ class TestExecute(unittest.TestCase):
execute_async.assert_called_once_with(command, verbose=False)
chan.assert_has_calls((mock.call.status_event.is_set(), ))
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
self.assertIn(
mock.call(level=logging.DEBUG, msg=message),
log.mock_calls
@@ -1069,7 +1069,7 @@ class TestExecute(unittest.TestCase):
cmd, log_mask_re=log_mask_re, verbose=False)
chan.assert_has_calls((mock.call.status_event.is_set(),))
message = self.gen_cmd_result_log_message(result)
- log = logger.getChild('{host}:{port}'.format(host=host, port=port)).log
+ log = logger(ssh.__class__.__name__).getChild('{host}:{port}'.format(host=host, port=port)).log
log.assert_has_calls(
[
mock.call(
@@ -1297,7 +1297,7 @@ class TestExecute(unittest.TestCase):
error_info=None, raise_on_err=raise_on_err)
[email protected]('exec_helpers._ssh_client_base.logger', autospec=True)
[email protected]('logging.getLogger', autospec=True)
@mock.patch('paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
@mock.patch('paramiko.SSHClient', autospec=True)
@mock.patch('paramiko.Transport', autospec=True)
@@ -1528,7 +1528,7 @@ class TestExecuteThrowHost(unittest.TestCase):
))
[email protected]('exec_helpers._ssh_client_base.logger', autospec=True)
[email protected]('logging.getLogger', autospec=True)
@mock.patch('paramiko.AutoAddPolicy', autospec=True, return_value='AutoAddPolicy')
@mock.patch('paramiko.SSHClient', autospec=True)
class TestSftp(unittest.TestCase):
diff --git a/test/test_ssh_client_init.py b/test/test_ssh_client_init.py
index 8117eff..a5f400d 100644
--- a/test/test_ssh_client_init.py
+++ b/test/test_ssh_client_init.py
@@ -381,9 +381,10 @@ class TestSSHClientInit(unittest.TestCase):
_ssh.attach_mock(mock.Mock(return_value=_sftp), 'open_sftp')
with mock.patch(
- 'exec_helpers._ssh_client_base.logger',
+ 'logging.getLogger',
autospec=True
- ) as ssh_logger:
+ ) as get_logger:
+ ssh_logger = get_logger(exec_helpers.SSHClient.__name__)
ssh = exec_helpers.SSHClient(
host=host,
@@ -408,13 +409,13 @@ class TestSSHClientInit(unittest.TestCase):
ssh.close()
- log = ssh_logger.getChild(
- '{host}:{port}'.format(host=host, port=port)
- )
- log.assert_has_calls((
- mock.call.exception('Could not close ssh connection'),
- mock.call.exception('Could not close sftp connection'),
- ))
+ log = ssh_logger.getChild(
+ '{host}:{port}'.format(host=host, port=port)
+ )
+ log.assert_has_calls((
+ mock.call.exception('Could not close ssh connection'),
+ mock.call.exception('Could not close sftp connection'),
+ ))
def test_014_init_reconnect(self, client, policy, logger):
"""Test reconnect
@@ -619,9 +620,10 @@ class TestSSHClientInit(unittest.TestCase):
client.return_value = _ssh
with mock.patch(
- 'exec_helpers._ssh_client_base.logger',
+ 'logging.getLogger',
autospec=True
- ) as ssh_logger:
+ ) as get_logger:
+ ssh_logger = get_logger(exec_helpers.SSHClient.__name__)
ssh = exec_helpers.SSHClient(
host=host, auth=exec_helpers.SSHAuth(password=password))
@@ -631,14 +633,14 @@ class TestSSHClientInit(unittest.TestCase):
# noinspection PyStatementEffect
ssh._sftp
# pylint: enable=pointless-statement
- log = ssh_logger.getChild(
- '{host}:{port}'.format(host=host, port=port)
- )
- log.assert_has_calls((
- mock.call.debug('SFTP is not connected, try to connect...'),
- mock.call.warning(
- 'SFTP enable failed! SSH only is accessible.'),
- ))
+ log = ssh_logger.getChild(
+ '{host}:{port}'.format(host=host, port=port)
+ )
+ log.assert_has_calls((
+ mock.call.debug('SFTP is not connected, try to connect...'),
+ mock.call.warning(
+ 'SFTP enable failed! SSH only is accessible.'),
+ ))
def test_022_init_sftp_repair(self, client, policy, logger):
_sftp = mock.Mock()
@@ -652,9 +654,10 @@ class TestSSHClientInit(unittest.TestCase):
client.return_value = _ssh
with mock.patch(
- 'exec_helpers._ssh_client_base.logger',
+ 'logging.getLogger',
autospec=True
- ) as ssh_logger:
+ ) as get_logger:
+ ssh_logger = get_logger(exec_helpers.SSHClient.__name__)
ssh = exec_helpers.SSHClient(
host=host, auth=exec_helpers.SSHAuth(password=password)
@@ -670,12 +673,12 @@ class TestSSHClientInit(unittest.TestCase):
sftp = ssh._sftp
self.assertEqual(sftp, open_sftp())
- log = ssh_logger.getChild(
- '{host}:{port}'.format(host=host, port=port)
- )
- log.assert_has_calls((
- mock.call.debug('SFTP is not connected, try to connect...'),
- ))
+ log = ssh_logger.getChild(
+ '{host}:{port}'.format(host=host, port=port)
+ )
+ log.assert_has_calls((
+ mock.call.debug('SFTP is not connected, try to connect...'),
+ ))
@mock.patch('exec_helpers.exec_result.ExecResult', autospec=True)
def test_023_init_memorize(
diff --git a/test/test_sshauth.py b/test/test_sshauth.py
index 0c4ce5f..60c1cfe 100644
--- a/test/test_sshauth.py
+++ b/test/test_sshauth.py
@@ -36,10 +36,7 @@ import exec_helpers
def gen_private_keys(amount=1):
- keys = []
- for _ in range(amount):
- keys.append(paramiko.RSAKey.generate(1024))
- return keys
+ return [paramiko.RSAKey.generate(1024) for _ in range(amount)]
def gen_public_key(private_key=None):
diff --git a/test/test_subprocess_runner.py b/test/test_subprocess_runner.py
index ef3449a..04555c7 100644
--- a/test/test_subprocess_runner.py
+++ b/test/test_subprocess_runner.py
@@ -32,7 +32,7 @@ import exec_helpers
from exec_helpers import subprocess_runner
command = 'ls ~\nline 2\nline 3\nline с кирилицей'
-command_log = u"Executing command:\n{!s}\n".format(command.rstrip())
+command_log = u"Executing command:\n{!r}\n".format(command.rstrip())
stdout_list = [b' \n', b'2\n', b'3\n', b' \n']
stderr_list = [b' \n', b'0\n', b'1\n', b' \n']
print_stdin = 'read line; echo "$line"'
@@ -105,8 +105,7 @@ class TestSubprocessRunner(unittest.TestCase):
@staticmethod
def gen_cmd_result_log_message(result):
- return ("Command exit code '{code!s}':\n{cmd!s}\n"
- .format(cmd=result.cmd.rstrip(), code=result.exit_code))
+ return u"Command {result.cmd!r} exit code: {result.exit_code!s}".format(result=result)
def test_001_call(
self,
@@ -369,7 +368,7 @@ class TestSubprocessRunner(unittest.TestCase):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
- cmd_log = u"Executing command:\n{!s}\n".format(masked_cmd)
+ cmd_log = u"Executing command:\n{!r}\n".format(masked_cmd)
popen_obj, exp_result = self.prepare_close(
popen,
@@ -424,7 +423,7 @@ class TestSubprocessRunner(unittest.TestCase):
cmd = "USE='secret=secret_pass' do task"
log_mask_re = r"secret\s*=\s*([A-Z-a-z0-9_\-]+)"
masked_cmd = "USE='secret=<*masked*>' do task"
- cmd_log = u"Executing command:\n{!s}\n".format(masked_cmd)
+ cmd_log = u"Executing command:\n{!r}\n".format(masked_cmd)
popen_obj, exp_result = self.prepare_close(
popen,
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 4
} | 1.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-html",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | advanced-descriptors==4.0.3
bcrypt==4.3.0
cffi==1.17.1
coverage==7.8.0
cryptography==44.0.2
exceptiongroup==1.2.2
-e git+https://github.com/python-useful-helpers/exec-helpers.git@63166d1ac340be47d64488a5b84a9d6fa317e8fe#egg=exec_helpers
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
paramiko==3.5.1
pluggy==1.5.0
pycparser==2.22
PyNaCl==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-html==4.1.1
pytest-metadata==3.1.1
PyYAML==6.0.2
six==1.17.0
tenacity==9.0.0
threaded==4.2.0
tomli==2.2.1
| name: exec-helpers
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- advanced-descriptors==4.0.3
- bcrypt==4.3.0
- cffi==1.17.1
- coverage==7.8.0
- cryptography==44.0.2
- exceptiongroup==1.2.2
- exec-helpers==1.2.1
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- paramiko==3.5.1
- pluggy==1.5.0
- pycparser==2.22
- pynacl==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-html==4.1.1
- pytest-metadata==3.1.1
- pyyaml==6.0.2
- six==1.17.0
- tenacity==9.0.0
- threaded==4.2.0
- tomli==2.2.1
prefix: /opt/conda/envs/exec-helpers
| [
"test/test_ssh_client.py::TestExecute::test_001_execute_async",
"test/test_ssh_client.py::TestExecute::test_002_execute_async_pty",
"test/test_ssh_client.py::TestExecute::test_004_execute_async_sudo",
"test/test_ssh_client.py::TestExecute::test_005_execute_async_with_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_006_execute_async_with_no_sudo_enforce",
"test/test_ssh_client.py::TestExecute::test_007_execute_async_with_sudo_none_enforce",
"test/test_ssh_client.py::TestExecute::test_008_execute_async_sudo_password",
"test/test_ssh_client.py::TestExecute::test_009_execute_async_verbose",
"test/test_ssh_client.py::TestExecute::test_010_execute_async_mask_command",
"test/test_ssh_client.py::TestExecute::test_014_check_stdin_closed",
"test/test_ssh_client.py::TestExecute::test_019_execute",
"test/test_ssh_client.py::TestExecute::test_020_execute_verbose",
"test/test_ssh_client.py::TestExecute::test_021_execute_no_stdout",
"test/test_ssh_client.py::TestExecute::test_022_execute_no_stderr",
"test/test_ssh_client.py::TestExecute::test_023_execute_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_024_execute_timeout",
"test/test_ssh_client.py::TestExecute::test_026_execute_mask_command",
"test/test_ssh_client_init.py::TestSSHClientInit::test_013_init_clear_failed",
"test/test_ssh_client_init.py::TestSSHClientInit::test_021_init_no_sftp",
"test/test_ssh_client_init.py::TestSSHClientInit::test_022_init_sftp_repair",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_001_call",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_002_call_verbose",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_005_execute_no_stdout",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_006_execute_no_stderr",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_007_execute_no_stdout_stderr",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_008_execute_mask_global",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_009_execute_mask_local"
]
| []
| [
"test/test_ssh_client.py::TestExecute::test_003_execute_async_no_stdout_stderr",
"test/test_ssh_client.py::TestExecute::test_011_check_stdin_str",
"test/test_ssh_client.py::TestExecute::test_012_check_stdin_bytes",
"test/test_ssh_client.py::TestExecute::test_013_check_stdin_bytearray",
"test/test_ssh_client.py::TestExecute::test_015_keepalive",
"test/test_ssh_client.py::TestExecute::test_016_no_keepalive",
"test/test_ssh_client.py::TestExecute::test_017_keepalive_enforced",
"test/test_ssh_client.py::TestExecute::test_018_no_keepalive_enforced",
"test/test_ssh_client.py::TestExecute::test_025_execute_timeout_fail",
"test/test_ssh_client.py::TestExecute::test_027_execute_together",
"test/test_ssh_client.py::TestExecute::test_028_execute_together_exceptions",
"test/test_ssh_client.py::TestExecute::test_029_check_call",
"test/test_ssh_client.py::TestExecute::test_030_check_call_expected",
"test/test_ssh_client.py::TestExecute::test_031_check_stderr",
"test/test_ssh_client.py::TestExecuteThrowHost::test_01_execute_through_host_no_creds",
"test/test_ssh_client.py::TestExecuteThrowHost::test_02_execute_through_host_auth",
"test/test_ssh_client.py::TestExecuteThrowHost::test_03_execute_through_host_get_pty",
"test/test_ssh_client.py::TestSftp::test_download",
"test/test_ssh_client.py::TestSftp::test_exists",
"test/test_ssh_client.py::TestSftp::test_isdir",
"test/test_ssh_client.py::TestSftp::test_isfile",
"test/test_ssh_client.py::TestSftp::test_mkdir",
"test/test_ssh_client.py::TestSftp::test_open",
"test/test_ssh_client.py::TestSftp::test_rm_rf",
"test/test_ssh_client.py::TestSftp::test_stat",
"test/test_ssh_client.py::TestSftp::test_upload_dir",
"test/test_ssh_client.py::TestSftp::test_upload_file",
"test/test_ssh_client_init.py::TestSSHClientInit::test_001_init_host",
"test/test_ssh_client_init.py::TestSSHClientInit::test_002_init_alternate_port",
"test/test_ssh_client_init.py::TestSSHClientInit::test_003_init_username",
"test/test_ssh_client_init.py::TestSSHClientInit::test_004_init_username_password",
"test/test_ssh_client_init.py::TestSSHClientInit::test_005_init_username_password_empty_keys",
"test/test_ssh_client_init.py::TestSSHClientInit::test_006_init_username_single_key",
"test/test_ssh_client_init.py::TestSSHClientInit::test_007_init_username_password_single_key",
"test/test_ssh_client_init.py::TestSSHClientInit::test_008_init_username_multiple_keys",
"test/test_ssh_client_init.py::TestSSHClientInit::test_009_init_username_password_multiple_keys",
"test/test_ssh_client_init.py::TestSSHClientInit::test_010_init_auth",
"test/test_ssh_client_init.py::TestSSHClientInit::test_011_init_auth_break",
"test/test_ssh_client_init.py::TestSSHClientInit::test_012_init_context",
"test/test_ssh_client_init.py::TestSSHClientInit::test_014_init_reconnect",
"test/test_ssh_client_init.py::TestSSHClientInit::test_015_init_password_required",
"test/test_ssh_client_init.py::TestSSHClientInit::test_016_init_password_broken",
"test/test_ssh_client_init.py::TestSSHClientInit::test_017_init_auth_impossible_password",
"test/test_ssh_client_init.py::TestSSHClientInit::test_018_init_auth_impossible_key",
"test/test_ssh_client_init.py::TestSSHClientInit::test_019_init_auth_pass_no_key",
"test/test_ssh_client_init.py::TestSSHClientInit::test_020_init_auth_brute_impossible",
"test/test_ssh_client_init.py::TestSSHClientInit::test_023_init_memorize",
"test/test_ssh_client_init.py::TestSSHClientInit::test_024_init_memorize_close_unused",
"test/test_ssh_client_init.py::TestSSHClientInit::test_025_init_memorize_reconnect",
"test/test_sshauth.py::TestSSHAuth::test_equality_copy",
"test/test_sshauth.py::TestSSHAuth::test_init_username_key",
"test/test_sshauth.py::TestSSHAuth::test_init_username_only",
"test/test_sshauth.py::TestSSHAuth::test_init_username_password",
"test/test_sshauth.py::TestSSHAuth::test_init_username_password_key",
"test/test_sshauth.py::TestSSHAuth::test_init_username_password_key_keys",
"test/test_sshauth.py::TestSSHAuth::test_init_username_password_keys",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_003_context_manager",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_004_check_stdin_str",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_004_execute_timeout_fail",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_005_check_stdin_bytes",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_006_check_stdin_bytearray",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_007_check_stdin_fail_broken_pipe",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_008_check_stdin_fail_closed_win",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_009_check_stdin_fail_write",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_010_check_stdin_fail_close_pipe",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_011_check_stdin_fail_close_pipe_win",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_012_check_stdin_fail_close",
"test/test_subprocess_runner.py::TestSubprocessRunner::test_013_execute_timeout_done",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_001_check_call",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_002_check_call_expected",
"test/test_subprocess_runner.py::TestSubprocessRunnerHelpers::test_003_check_stderr"
]
| []
| Apache License 2.0 | 2,515 | [
"exec_helpers/_log_templates.py",
"exec_helpers/_api.py",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/ssh_auth.py"
]
| [
"exec_helpers/_log_templates.py",
"exec_helpers/_api.py",
"exec_helpers/_ssh_client_base.py",
"exec_helpers/ssh_auth.py"
]
|
|
algoo__hapic-44 | a1e94438f12271721344ee459a7e31d4458a42ae | 2018-05-14 12:57:12 | a1e94438f12271721344ee459a7e31d4458a42ae | diff --git a/hapic/context.py b/hapic/context.py
index 5a9aa19..97aa0c4 100644
--- a/hapic/context.py
+++ b/hapic/context.py
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import json
import typing
from hapic.error import ErrorBuilderInterface
@@ -106,8 +107,121 @@ class ContextInterface(object):
"""
raise NotImplementedError()
+ def handle_exception(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ """
+        Enable management of this exception during execution of views. If this
+        exception is caught, an http response will be returned with this http
+        code.
+ :param exception_class: Exception class to catch
+ :param http_code: HTTP code to use in response if exception caught
+ """
+ raise NotImplementedError()
+
+ def handle_exceptions(
+ self,
+ exception_classes: typing.List[typing.Type[Exception]],
+ http_code: int,
+ ) -> None:
+ """
+        Enable management of these exceptions during execution of views. If
+        one of these exceptions is caught, an http response will be returned
+        with this http code.
+ :param exception_classes: Exception classes to catch
+ :param http_code: HTTP code to use in response if exception caught
+ """
+ raise NotImplementedError()
+
+
+class HandledException(object):
+ """
+ Representation of an handled exception with it's http code
+ """
+ def __init__(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int = 500,
+ ):
+ self.exception_class = exception_class
+ self.http_code = http_code
+
class BaseContext(ContextInterface):
def get_default_error_builder(self) -> ErrorBuilderInterface:
""" see hapic.context.ContextInterface#get_default_error_builder"""
return self.default_error_builder
+
+ def handle_exception(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ self._add_exception_class_to_catch(exception_class, http_code)
+
+ def handle_exceptions(
+ self,
+ exception_classes: typing.List[typing.Type[Exception]],
+ http_code: int,
+ ) -> None:
+ for exception_class in exception_classes:
+ self._add_exception_class_to_catch(exception_class, http_code)
+
+ def handle_exceptions_decorator_builder(
+ self,
+ func: typing.Callable[..., typing.Any],
+ ) -> typing.Callable[..., typing.Any]:
+ """
+ Return a decorator who catch exceptions raised during given function
+ execution and return a response built by the default error builder.
+
+ :param func: decorated function
+ :return: the decorator
+ """
+ def decorator(*args, **kwargs):
+ try:
+ return func(*args, **kwargs)
+ except Exception as exc:
+ # Reverse list to read first user given exception before
+ # the hapic default Exception catch
+ handled_exceptions = reversed(
+ self._get_handled_exception_class_and_http_codes(),
+ )
+ for handled_exception in handled_exceptions:
+ # TODO BS 2018-05-04: How to be attentive to hierarchy ?
+ if isinstance(exc, handled_exception.exception_class):
+ error_builder = self.get_default_error_builder()
+ error_body = error_builder.build_from_exception(exc)
+ return self.get_response(
+ json.dumps(error_body),
+ handled_exception.http_code,
+ )
+ raise exc
+
+ return decorator
+
+ def _get_handled_exception_class_and_http_codes(
+ self,
+ ) -> typing.List[HandledException]:
+ """
+        :return: A list of HandledException objects, each pairing an exception
+        class with an http code. This list will be used by the
+        `handle_exceptions_decorator_builder` decorator to catch exceptions.
+ """
+ raise NotImplementedError()
+
+ def _add_exception_class_to_catch(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ """
+        Add an exception class to catch and its matching http code. Used by the
+        `handle_exceptions_decorator_builder` decorator to catch exceptions.
+ :param exception_class: exception class to catch
+        :param http_code: http code to use if this exception is caught
+ :return:
+ """
+ raise NotImplementedError()
diff --git a/hapic/ext/bottle/context.py b/hapic/ext/bottle/context.py
index 39237ab..c5090b8 100644
--- a/hapic/ext/bottle/context.py
+++ b/hapic/ext/bottle/context.py
@@ -12,6 +12,7 @@ import bottle
from multidict import MultiDict
from hapic.context import BaseContext
+from hapic.context import HandledException
from hapic.context import RouteRepresentation
from hapic.decorator import DecoratedController
from hapic.decorator import DECORATION_ATTRIBUTE_NAME
@@ -33,6 +34,8 @@ class BottleContext(BaseContext):
app: bottle.Bottle,
default_error_builder: ErrorBuilderInterface=None,
):
+ self._handled_exceptions = [] # type: typing.List[HandledException] # nopep8
+ self._exceptions_handler_installed = False
self.app = app
self.default_error_builder = \
default_error_builder or DefaultErrorBuilder() # FDV
@@ -134,3 +137,30 @@ class BottleContext(BaseContext):
if isinstance(response, bottle.HTTPResponse):
return True
return False
+
+ def _add_exception_class_to_catch(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ if not self._exceptions_handler_installed:
+ self._install_exceptions_handler()
+
+ self._handled_exceptions.append(
+ HandledException(exception_class, http_code),
+ )
+
+ def _install_exceptions_handler(self) -> None:
+ """
+        Set up the bottle app to enable exception catching with the internal
+        hapic exception catcher.
+ """
+ self.app.install(self.handle_exceptions_decorator_builder)
+
+ def _get_handled_exception_class_and_http_codes(
+ self,
+ ) -> typing.List[HandledException]:
+ """
+ See hapic.context.BaseContext#_get_handled_exception_class_and_http_codes # nopep8
+ """
+ return self._handled_exceptions
diff --git a/hapic/ext/flask/context.py b/hapic/ext/flask/context.py
index 035fdfe..0908dc2 100644
--- a/hapic/ext/flask/context.py
+++ b/hapic/ext/flask/context.py
@@ -33,6 +33,7 @@ class FlaskContext(BaseContext):
app: Flask,
default_error_builder: ErrorBuilderInterface=None,
):
+ self._handled_exceptions = [] # type: typing.List[HandledException] # nopep8
self.app = app
self.default_error_builder = \
default_error_builder or DefaultErrorBuilder() # FDV
@@ -157,3 +158,10 @@ class FlaskContext(BaseContext):
)
def api_doc(path):
return send_from_directory(directory_path, path)
+
+ def _add_exception_class_to_catch(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ raise NotImplementedError('TODO')
diff --git a/hapic/ext/pyramid/context.py b/hapic/ext/pyramid/context.py
index e0dfcb7..d39b615 100644
--- a/hapic/ext/pyramid/context.py
+++ b/hapic/ext/pyramid/context.py
@@ -32,6 +32,7 @@ class PyramidContext(BaseContext):
configurator: 'Configurator',
default_error_builder: ErrorBuilderInterface = None,
):
+ self._handled_exceptions = [] # type: typing.List[HandledException] # nopep8
self.configurator = configurator
self.default_error_builder = \
default_error_builder or DefaultErrorBuilder() # FDV
@@ -181,3 +182,10 @@ class PyramidContext(BaseContext):
name=route_prefix,
path=directory_path,
)
+
+ def _add_exception_class_to_catch(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ raise NotImplementedError('TODO')
| Add a feature to manage exceptions globally
Currently: if I need to manage an exception for all my endpoints (for example, catch `Exception` in order to generate a proper `HTTP 500`), I must add the `hapic.handle_exception()` decorator on every route.
Expected: I'd like to be able to define generic exception handling which would be applied automatically to every route of my APIs.
The way to do so:
Add two global methods `context.handle_exception()` and `context.handle_exceptions()` which will catch exception(s) and return properly built error responses
The implementation may be something like this:
```
# Bottle context
class BottleContext(...):
def try_catch_decorator_builder():
# return a decorator function
[...]
def handle_exception(exception_class, http_error_code):
bottle.install(try_catch_decorator_builder)
[...]
# Pyramid context
class PyramidContext(...):
def handle_exception(exception_class, http_error_code):
pyramid.add_error_view(exception_class, http_error_code)
[...]
[...]
```
| algoo/hapic | diff --git a/tests/base.py b/tests/base.py
index 7c54b09..1286176 100644
--- a/tests/base.py
+++ b/tests/base.py
@@ -1,5 +1,8 @@
# -*- coding: utf-8 -*-
+import json
import typing
+
+
try: # Python 3.5+
from http import HTTPStatus
except ImportError:
@@ -10,6 +13,7 @@ from multidict import MultiDict
from hapic.ext.bottle import BottleContext
from hapic.processor import RequestParameters
from hapic.processor import ProcessValidationError
+from hapic.context import HandledException
class Base(object):
@@ -29,6 +33,8 @@ class MyContext(BottleContext):
fake_files_parameters=None,
) -> None:
super().__init__(app=app)
+ self._handled_exceptions = [] # type: typing.List[HandledException] # nopep8
+ self._exceptions_handler_installed = False
self.fake_path_parameters = fake_path_parameters or {}
self.fake_query_parameters = fake_query_parameters or MultiDict()
self.fake_body_parameters = fake_body_parameters or {}
@@ -46,23 +52,34 @@ class MyContext(BottleContext):
files_parameters=self.fake_files_parameters,
)
- def get_response(
- self,
- response: str,
- http_code: int,
- mimetype: str='application/json',
- ) -> typing.Any:
- return {
- 'original_response': response,
- 'http_code': http_code,
- }
-
def get_validation_error_response(
self,
error: ProcessValidationError,
http_code: HTTPStatus=HTTPStatus.BAD_REQUEST,
) -> typing.Any:
- return {
- 'original_error': error,
- 'http_code': http_code,
- }
+ return self.get_response(
+ response=json.dumps({
+ 'original_error': {
+ 'details': error.details,
+ 'message': error.message,
+ },
+ 'http_code': http_code,
+ }),
+ http_code=http_code,
+ )
+
+ def _add_exception_class_to_catch(
+ self,
+ exception_class: typing.Type[Exception],
+ http_code: int,
+ ) -> None:
+ if not self._exceptions_handler_installed:
+ self._install_exceptions_handler()
+ self._handled_exceptions.append(
+ HandledException(exception_class, http_code),
+ )
+
+ def _get_handled_exception_class_and_http_codes(
+ self,
+ ) -> typing.List[HandledException]:
+ return self._handled_exceptions
diff --git a/tests/ext/unit/test_bottle.py b/tests/ext/unit/test_bottle.py
index cab6057..979e1e9 100644
--- a/tests/ext/unit/test_bottle.py
+++ b/tests/ext/unit/test_bottle.py
@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
import bottle
+from webtest import TestApp
import hapic
+from hapic.ext.bottle import BottleContext
from tests.base import Base
@@ -74,3 +76,20 @@ class TestBottleExt(Base):
assert route.original_route_object.callback != MyControllers.controller_a # nopep8
assert route.original_route_object.callback != decoration.reference.wrapped # nopep8
assert route.original_route_object.callback != decoration.reference.wrapper # nopep8
+
+ def test_unit__general_exception_handling__ok__nominal_case(self):
+ hapic_ = hapic.Hapic()
+ app = bottle.Bottle()
+ context = BottleContext(app=app)
+ hapic_.set_context(context)
+
+ def my_view():
+ raise ZeroDivisionError('An exception message')
+
+ app.route('/my-view', method='GET', callback=my_view)
+ context.handle_exception(ZeroDivisionError, http_code=400)
+
+ test_app = TestApp(app)
+ response = test_app.get('/my-view', status='*')
+
+ assert 400 == response.status_code
diff --git a/tests/func/test_exception_handling.py b/tests/func/test_exception_handling.py
new file mode 100644
index 0000000..4458660
--- /dev/null
+++ b/tests/func/test_exception_handling.py
@@ -0,0 +1,27 @@
+# coding: utf-8
+import bottle
+from webtest import TestApp
+
+from hapic import Hapic
+from tests.base import Base
+from tests.base import MyContext
+
+
+class TestExceptionHandling(Base):
+ def test_func__catch_one_exception__ok__nominal_case(self):
+ hapic = Hapic()
+ # TODO BS 2018-05-04: Make this test non-bottle
+ app = bottle.Bottle()
+ context = MyContext(app=app)
+ hapic.set_context(context)
+
+ def my_view():
+ raise ZeroDivisionError('An exception message')
+
+ app.route('/my-view', method='GET', callback=my_view)
+ context.handle_exception(ZeroDivisionError, http_code=400)
+
+ test_app = TestApp(app)
+ response = test_app.get('/my-view', status='*')
+
+ assert 400 == response.status_code
diff --git a/tests/func/test_marshmallow_decoration.py b/tests/func/test_marshmallow_decoration.py
index 9223a14..331fe9a 100644
--- a/tests/func/test_marshmallow_decoration.py
+++ b/tests/func/test_marshmallow_decoration.py
@@ -1,4 +1,6 @@
# coding: utf-8
+import json
+
try: # Python 3.5+
from http import HTTPStatus
except ImportError:
@@ -52,11 +54,16 @@ class TestMarshmallowDecoration(Base):
return 'OK'
result = my_controller()
- assert 'http_code' in result
- assert HTTPStatus.BAD_REQUEST == result['http_code']
+ assert HTTPStatus.BAD_REQUEST == result.status_code
assert {
- 'file_abc': ['Missing data for required field.']
- } == result['original_error'].details
+ 'http_code': 400,
+ 'original_error': {
+ 'details': {
+ 'file_abc': ['Missing data for required field.']
+ },
+ 'message': 'Validation error of input data'
+ }
+ } == json.loads(result.body)
def test_unit__input_files__ok__file_is_empty_string(self):
hapic = Hapic()
@@ -77,6 +84,13 @@ class TestMarshmallowDecoration(Base):
return 'OK'
result = my_controller()
- assert 'http_code' in result
- assert HTTPStatus.BAD_REQUEST == result['http_code']
- assert {'file_abc': ['Missing data for required field']} == result['original_error'].details
+ assert HTTPStatus.BAD_REQUEST == result.status_code
+ assert {
+ 'http_code': 400,
+ 'original_error': {
+ 'details': {
+ 'file_abc': ['Missing data for required field']
+ },
+ 'message': 'Validation error of input data'
+ }
+ } == json.loads(result.body)
diff --git a/tests/unit/test_decorator.py b/tests/unit/test_decorator.py
index 43f6a7b..e088a8a 100644
--- a/tests/unit/test_decorator.py
+++ b/tests/unit/test_decorator.py
@@ -233,11 +233,8 @@ class TestOutputControllerWrapper(Base):
return foo
result = func(42)
- # see MyProcessor#process
- assert {
- 'http_code': HTTPStatus.OK,
- 'original_response': '43',
- } == result
+ assert HTTPStatus.OK == result.status_code
+ assert '43' == result.body
def test_unit__output_data_wrapping__fail__error_response(self):
context = MyContext(app=None)
@@ -250,14 +247,16 @@ class TestOutputControllerWrapper(Base):
return 'wrong result format'
result = func(42)
- # see MyProcessor#process
- assert isinstance(result, dict)
- assert 'http_code' in result
- assert result['http_code'] == HTTPStatus.INTERNAL_SERVER_ERROR
- assert 'original_error' in result
- assert result['original_error'].details == {
- 'name': ['Missing data for required field.']
- }
+ assert HTTPStatus.INTERNAL_SERVER_ERROR == result.status_code
+ assert {
+ 'original_error': {
+ 'details': {
+ 'name': ['Missing data for required field.']
+ },
+ 'message': 'Validation error of output data'
+ },
+ 'http_code': 500,
+ } == json.loads(result.body)
class TestExceptionHandlerControllerWrapper(Base):
@@ -275,14 +274,12 @@ class TestExceptionHandlerControllerWrapper(Base):
raise ZeroDivisionError('We are testing')
response = func(42)
- assert 'http_code' in response
- assert response['http_code'] == HTTPStatus.INTERNAL_SERVER_ERROR
- assert 'original_response' in response
- assert json.loads(response['original_response']) == {
- 'message': 'We are testing',
- 'details': {},
- 'code': None,
- }
+ assert HTTPStatus.INTERNAL_SERVER_ERROR == response.status_code
+ assert {
+ 'details': {},
+ 'message': 'We are testing',
+ 'code': None,
+ } == json.loads(response.body)
def test_unit__exception_handled__ok__exception_error_dict(self):
class MyException(Exception):
@@ -305,14 +302,12 @@ class TestExceptionHandlerControllerWrapper(Base):
raise exc
response = func(42)
- assert 'http_code' in response
- assert response['http_code'] == HTTPStatus.INTERNAL_SERVER_ERROR
- assert 'original_response' in response
- assert json.loads(response['original_response']) == {
+ assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
+ assert {
'message': 'We are testing',
'details': {'foo': 'bar'},
'code': None,
- }
+ } == json.loads(response.body)
def test_unit__exception_handler__error__error_content_malformed(self):
class MyException(Exception):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 4
} | 0.36 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
beautifulsoup4==4.12.3
bottle==0.13.2
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
dataclasses==0.8
Flask==2.0.3
-e git+https://github.com/algoo/hapic.git@a1e94438f12271721344ee459a7e31d4458a42ae#egg=hapic
hapic-apispec==0.35.0
hupper==1.10.3
idna==3.10
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
itsdangerous==2.0.1
Jinja2==3.0.3
MarkupSafe==2.0.1
marshmallow==2.21.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
multidict==5.2.0
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
PasteDeploy==2.1.1
plaster==1.0
plaster-pastedeploy==0.7
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pyramid==2.0.2
pytest==6.2.4
pytest-cov==4.0.0
PyYAML==6.0.1
requests==2.27.1
soupsieve==2.3.2.post1
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
translationstring==1.4
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
venusian==3.0.0
waitress==2.0.0
WebOb==1.8.9
WebTest==3.0.0
Werkzeug==2.0.3
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
zope.deprecation==4.4.0
zope.interface==5.5.2
| name: hapic
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- beautifulsoup4==4.12.3
- bottle==0.13.2
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- dataclasses==0.8
- flask==2.0.3
- hapic-apispec==0.35.0
- hupper==1.10.3
- idna==3.10
- itsdangerous==2.0.1
- jinja2==3.0.3
- markupsafe==2.0.1
- marshmallow==2.21.0
- multidict==5.2.0
- pastedeploy==2.1.1
- plaster==1.0
- plaster-pastedeploy==0.7
- pyramid==2.0.2
- pytest-cov==4.0.0
- pyyaml==6.0.1
- requests==2.27.1
- soupsieve==2.3.2.post1
- tomli==1.2.3
- translationstring==1.4
- urllib3==1.26.20
- venusian==3.0.0
- waitress==2.0.0
- webob==1.8.9
- webtest==3.0.0
- werkzeug==2.0.3
- zope-deprecation==4.4.0
- zope-interface==5.5.2
prefix: /opt/conda/envs/hapic
| [
"tests/ext/unit/test_bottle.py::TestBottleExt::test_unit__map_binding__ok__decorated_function",
"tests/ext/unit/test_bottle.py::TestBottleExt::test_unit__map_binding__ok__mapped_function",
"tests/ext/unit/test_bottle.py::TestBottleExt::test_unit__map_binding__ok__mapped_method",
"tests/ext/unit/test_bottle.py::TestBottleExt::test_unit__general_exception_handling__ok__nominal_case",
"tests/func/test_exception_handling.py::TestExceptionHandling::test_func__catch_one_exception__ok__nominal_case",
"tests/func/test_marshmallow_decoration.py::TestMarshmallowDecoration::test_unit__input_files__ok__file_is_present",
"tests/func/test_marshmallow_decoration.py::TestMarshmallowDecoration::test_unit__input_files__ok__file_is_not_present",
"tests/func/test_marshmallow_decoration.py::TestMarshmallowDecoration::test_unit__input_files__ok__file_is_empty_string",
"tests/unit/test_decorator.py::TestControllerWrapper::test_unit__base_controller_wrapper__ok__no_behaviour",
"tests/unit/test_decorator.py::TestControllerWrapper::test_unit__base_controller__ok__replaced_response",
"tests/unit/test_decorator.py::TestControllerWrapper::test_unit__controller_wrapper__ok__overload_input",
"tests/unit/test_decorator.py::TestInputControllerWrapper::test_unit__input_data_wrapping__ok__nominal_case",
"tests/unit/test_decorator.py::TestInputControllerWrapper::test_unit__multi_query_param_values__ok__use_as_list",
"tests/unit/test_decorator.py::TestInputControllerWrapper::test_unit__multi_query_param_values__ok__without_as_list",
"tests/unit/test_decorator.py::TestOutputControllerWrapper::test_unit__output_data_wrapping__ok__nominal_case",
"tests/unit/test_decorator.py::TestOutputControllerWrapper::test_unit__output_data_wrapping__fail__error_response",
"tests/unit/test_decorator.py::TestExceptionHandlerControllerWrapper::test_unit__exception_handled__ok__nominal_case",
"tests/unit/test_decorator.py::TestExceptionHandlerControllerWrapper::test_unit__exception_handled__ok__exception_error_dict",
"tests/unit/test_decorator.py::TestExceptionHandlerControllerWrapper::test_unit__exception_handler__error__error_content_malformed"
]
| []
| []
| []
| MIT License | 2,516 | [
"hapic/ext/bottle/context.py",
"hapic/ext/pyramid/context.py",
"hapic/ext/flask/context.py",
"hapic/context.py"
]
| [
"hapic/ext/bottle/context.py",
"hapic/ext/pyramid/context.py",
"hapic/ext/flask/context.py",
"hapic/context.py"
]
|
|
berkerpeksag__astor-104 | f87ae3b55f9ea403b4c04fb6cd9b0ab675aaa496 | 2018-05-14 20:10:36 | 991e6e9436c2512241e036464f99114438932d85 | berkerpeksag: Thanks for the PR, @Kodiologist! I will review this later today. Could you also add release notes for #85 and #100 to `docs/changelog.rst`?
Kodiologist: You're welcome. Done. | diff --git a/.travis.yml b/.travis.yml
index e6362f4..c0e59fa 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,18 +1,13 @@
sudo: false # run travis jobs in containers
language: python
python:
- - 2.6
- 2.7
- - 3.3
- 3.4
- 3.5
- 3.6
- pypy
- pypy3.5
- 3.7-dev
-matrix:
- allow_failures:
- - python: 2.6
cache: pip
install:
- pip install tox-travis
diff --git a/AUTHORS b/AUTHORS
index 9949ccc..f00747a 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -13,3 +13,4 @@ And with some modifications based on Armin's code:
* Ryan Gonzalez <[email protected]>
* Lenny Truong <[email protected]>
* Radomír Bosák <[email protected]>
+* Kodi Arfer <[email protected]>
diff --git a/astor/code_gen.py b/astor/code_gen.py
index 8e1e6fa..e37b320 100644
--- a/astor/code_gen.py
+++ b/astor/code_gen.py
@@ -18,6 +18,7 @@ this code came from here (in 2012):
"""
import ast
+import math
import sys
from .op_util import get_op_symbol, get_op_precedence, Precedence
@@ -638,22 +639,38 @@ class SourceGenerator(ExplicitNodeVisitor):
# constants
new=sys.version_info >= (3, 0)):
with self.delimit(node) as delimiters:
- s = repr(node.n)
-
- # Deal with infinities -- if detected, we can
- # generate them with 1e1000.
- signed = s.startswith('-')
- if s[signed].isalpha():
- im = s[-1] == 'j' and 'j' or ''
- assert s[signed:signed + 3] == 'inf', s
- s = '%s1e1000%s' % ('-' if signed else '', im)
+ x = node.n
+
+ def part(p, imaginary):
+ # Represent infinity as 1e1000 and NaN as 1e1000-1e1000.
+ s = 'j' if imaginary else ''
+ if math.isinf(p):
+ if p < 0:
+ return '-1e1000' + s
+ return '1e1000' + s
+ if math.isnan(p):
+ return '(1e1000%s-1e1000%s)' % (s, s)
+ return repr(p) + s
+
+ real = part(x.real if isinstance(x, complex) else x, imaginary=False)
+ if isinstance(x, complex):
+ imag = part(x.imag, imaginary=True)
+ if x.real == 0:
+ s = imag
+ elif x.imag == 0:
+ s = '(%s+0j)' % real
+ else:
+ # x has nonzero real and imaginary parts.
+ s = '(%s%s%s)' % (real, ['+', ''][imag.startswith('-')], imag)
+ else:
+ s = real
self.write(s)
# The Python 2.x compiler merges a unary minus
# with a number. This is a premature optimization
# that we deal with here...
if not new and delimiters.discard:
- if signed:
+ if not isinstance(node.n, complex) and node.n < 0:
pow_lhs = Precedence.Pow + 1
delimiters.discard = delimiters.pp != pow_lhs
else:
diff --git a/docs/changelog.rst b/docs/changelog.rst
index 706d322..9135d8b 100644
--- a/docs/changelog.rst
+++ b/docs/changelog.rst
@@ -22,6 +22,9 @@ New features
.. _`Issue 86`: https://github.com/berkerpeksag/astor/issues/86
+* Dropped support for Python 2.6 and Python 3.3. Even the latest version of pip
+ dropped its support for both of these versions.
+
Bug fixes
~~~~~~~~~
@@ -37,6 +40,12 @@ Bug fixes
.. _`Issue 89`: https://github.com/berkerpeksag/astor/issues/89
.. _`Issue 101`: https://github.com/berkerpeksag/astor/issues/101
+* Improved code generation to support ``ast.Num`` nodes containing infinities
+ or NaNs.
+ (Reported and fixed by Kodi Arfer in `Issue 85`_ and `Issue 100`_.)
+
+.. _`Issue 85`: https://github.com/berkerpeksag/astor/issues/85
+.. _`Issue 100`: https://github.com/berkerpeksag/astor/issues/100
0.6.2 - 2017-11-11
------------------
diff --git a/setup.cfg b/setup.cfg
index 8faaf1f..cf0bb95 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -33,6 +33,7 @@ classifiers =
zip_safe = True
include_package_data = True
packages = find:
+python_requires = >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
[options.packages.find]
exclude = tests
diff --git a/tox.ini b/tox.ini
index e364485..5f6ddf0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = py26, py27, py33, py34, py35, py36, pypy, pypy3.5
+envlist = py27, py34, py35, py36, pypy, pypy3.5
skipsdist = True
[testenv]
@@ -7,4 +7,4 @@ usedevelop = True
commands = nosetests -v --nocapture {posargs}
deps =
-rrequirements-tox.txt
- py2{6,7},pypy: unittest2
+ py27,pypy: unittest2
| Can't generate code containing ast.Num(NaN)
Somewhat related to #82, the `codegen` of astor 0.5 would produce `nan` if given a `Num` node that contained NaN. In astor 0.6, the following raises an `AssertionError` in code_gen.py:
```py
import ast, astor
nan = 1e1000 - 1e1000
print(astor.code_gen.to_source(
ast.Module([ast.Expr(ast.BinOp(
ast.Num(1.0),
ast.Add(),
ast.Num(nan)))])))
```
While Python has no NaN literal, you could represent `ast.Num(nan)` as an expression (such as `1e1000 - 1e1000`) if you want to generate real Python. See hylang/hy#1447.
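The representation strategy the eventual fix adopts can be sketched with a small stdlib-only helper (the function name `float_literal` is illustrative, not astor's API): infinity becomes `1e1000`, and NaN becomes the expression `(1e1000 - 1e1000)`, both of which `eval` back to the original value.

```python
import math

def float_literal(x):
    # Emit a Python expression that evaluates back to x, covering the
    # non-finite values that have no literal syntax in Python source:
    # inf -> 1e1000, -inf -> -1e1000, nan -> (1e1000 - 1e1000).
    if math.isinf(x):
        return '1e1000' if x > 0 else '-1e1000'
    if math.isnan(x):
        return '(1e1000 - 1e1000)'
    return repr(x)

assert eval(float_literal(float('inf'))) == float('inf')
assert math.isnan(eval(float_literal(float('nan'))))
```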
| berkerpeksag/astor | diff --git a/tests/test_code_gen.py b/tests/test_code_gen.py
index a71a939..b8c1c6a 100644
--- a/tests/test_code_gen.py
+++ b/tests/test_code_gen.py
@@ -8,6 +8,7 @@ Copyright (c) 2015 Patrick Maupin
"""
import ast
+import math
import sys
import textwrap
@@ -22,6 +23,8 @@ import astor
def canonical(srctxt):
return textwrap.dedent(srctxt).strip()
+def astornum(x):
+ return eval(astor.to_source(ast.Expression(body=ast.Num(n=x))))
class Comparisons(object):
@@ -245,6 +248,14 @@ class CodegenTestCase(unittest.TestCase, Comparisons):
"""
self.assertAstRoundtripsGtVer(source, (2, 7))
+ def test_complex(self):
+ source = """
+ (3) + (4j) + (1+2j) + (1+0j)
+ """
+ self.assertAstRoundtrips(source)
+
+ self.assertIsInstance(astornum(1+0j), complex)
+
def test_inf(self):
source = """
(1e1000) + (-1e1000) + (1e1000j) + (-1e1000j)
@@ -259,6 +270,19 @@ class CodegenTestCase(unittest.TestCase, Comparisons):
# Returns 'a = 1e1000'.
self.assertSrcDoesNotRoundtrip(source)
+ self.assertIsInstance(astornum((1e1000+1e1000)+0j), complex)
+
+ def test_nan(self):
+ self.assertTrue(math.isnan(astornum(float('nan'))))
+
+ v = astornum(complex(-1e1000, float('nan')))
+ self.assertEqual(v.real, -1e1000)
+ self.assertTrue(math.isnan(v.imag))
+
+ v = astornum(complex(float('nan'), -1e1000))
+ self.assertTrue(math.isnan(v.real))
+ self.assertEqual(v.imag, -1e1000)
+
def test_unary(self):
source = """
-(1) + ~(2) + +(3)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_issue_reference",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 6
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements-dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/berkerpeksag/astor.git@f87ae3b55f9ea403b4c04fb6cd9b0ab675aaa496#egg=astor
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: astor
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/astor
| [
"tests/test_code_gen.py::CodegenTestCase::test_inf",
"tests/test_code_gen.py::CodegenTestCase::test_nan"
]
| []
| [
"tests/test_code_gen.py::CodegenTestCase::test_annassign",
"tests/test_code_gen.py::CodegenTestCase::test_arguments",
"tests/test_code_gen.py::CodegenTestCase::test_async_comprehension",
"tests/test_code_gen.py::CodegenTestCase::test_async_def_with_for",
"tests/test_code_gen.py::CodegenTestCase::test_class_definition_with_starbases_and_kwargs",
"tests/test_code_gen.py::CodegenTestCase::test_compile_types",
"tests/test_code_gen.py::CodegenTestCase::test_complex",
"tests/test_code_gen.py::CodegenTestCase::test_comprehension",
"tests/test_code_gen.py::CodegenTestCase::test_del_statement",
"tests/test_code_gen.py::CodegenTestCase::test_dictionary_literals",
"tests/test_code_gen.py::CodegenTestCase::test_docstring_class",
"tests/test_code_gen.py::CodegenTestCase::test_docstring_function",
"tests/test_code_gen.py::CodegenTestCase::test_docstring_method",
"tests/test_code_gen.py::CodegenTestCase::test_docstring_module",
"tests/test_code_gen.py::CodegenTestCase::test_double_await",
"tests/test_code_gen.py::CodegenTestCase::test_elif",
"tests/test_code_gen.py::CodegenTestCase::test_fstring_trailing_newline",
"tests/test_code_gen.py::CodegenTestCase::test_fstrings",
"tests/test_code_gen.py::CodegenTestCase::test_imports",
"tests/test_code_gen.py::CodegenTestCase::test_matrix_multiplication",
"tests/test_code_gen.py::CodegenTestCase::test_multiple_call_unpackings",
"tests/test_code_gen.py::CodegenTestCase::test_non_string_leakage",
"tests/test_code_gen.py::CodegenTestCase::test_output_formatting",
"tests/test_code_gen.py::CodegenTestCase::test_pass_arguments_node",
"tests/test_code_gen.py::CodegenTestCase::test_pow",
"tests/test_code_gen.py::CodegenTestCase::test_right_hand_side_dictionary_unpacking",
"tests/test_code_gen.py::CodegenTestCase::test_slicing",
"tests/test_code_gen.py::CodegenTestCase::test_try_expect",
"tests/test_code_gen.py::CodegenTestCase::test_tuple_corner_cases",
"tests/test_code_gen.py::CodegenTestCase::test_unary",
"tests/test_code_gen.py::CodegenTestCase::test_unicode_literals",
"tests/test_code_gen.py::CodegenTestCase::test_with",
"tests/test_code_gen.py::CodegenTestCase::test_yield"
]
| []
| BSD 3-Clause "New" or "Revised" License | 2,519 | [
"docs/changelog.rst",
".travis.yml",
"setup.cfg",
"tox.ini",
"AUTHORS",
"astor/code_gen.py"
]
| [
"docs/changelog.rst",
".travis.yml",
"setup.cfg",
"tox.ini",
"AUTHORS",
"astor/code_gen.py"
]
|
certbot__certbot-5992 | 907ee797151f270bec3a2697743568362db497cd | 2018-05-14 21:20:12 | e48c653245bc08b7e517465aea32f678c5b9b64b | diff --git a/acme/acme/client.py b/acme/acme/client.py
index 7807f0ece..bdc07fb1c 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -12,7 +12,9 @@ from six.moves import http_client # pylint: disable=import-error
import josepy as jose
import OpenSSL
import re
+from requests_toolbelt.adapters.source import SourceAddressAdapter
import requests
+from requests.adapters import HTTPAdapter
import sys
from acme import crypto_util
@@ -857,9 +859,12 @@ class ClientNetwork(object): # pylint: disable=too-many-instance-attributes
:param bool verify_ssl: Whether to verify certificates on SSL connections.
:param str user_agent: String to send as User-Agent header.
:param float timeout: Timeout for requests.
+ :param source_address: Optional source address to bind to when making requests.
+ :type source_address: str or tuple(str, int)
"""
def __init__(self, key, account=None, alg=jose.RS256, verify_ssl=True,
- user_agent='acme-python', timeout=DEFAULT_NETWORK_TIMEOUT):
+ user_agent='acme-python', timeout=DEFAULT_NETWORK_TIMEOUT,
+ source_address=None):
# pylint: disable=too-many-arguments
self.key = key
self.account = account
@@ -869,6 +874,13 @@ class ClientNetwork(object): # pylint: disable=too-many-instance-attributes
self.user_agent = user_agent
self.session = requests.Session()
self._default_timeout = timeout
+ adapter = HTTPAdapter()
+
+ if source_address is not None:
+ adapter = SourceAddressAdapter(source_address)
+
+ self.session.mount("http://", adapter)
+ self.session.mount("https://", adapter)
def __del__(self):
# Try to close the session, but don't show exceptions to the
@@ -1018,7 +1030,7 @@ class ClientNetwork(object): # pylint: disable=too-many-instance-attributes
if response.headers.get("Content-Type") == DER_CONTENT_TYPE:
debug_content = base64.b64encode(response.content)
else:
- debug_content = response.content
+ debug_content = response.content.decode("utf-8")
logger.debug('Received response:\nHTTP %d\n%s\n\n%s',
response.status_code,
"\n".join(["{0}: {1}".format(k, v)
diff --git a/acme/setup.py b/acme/setup.py
index 72ab5919b..e91c36b3d 100644
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -19,6 +19,7 @@ install_requires = [
'pyrfc3339',
'pytz',
'requests[security]>=2.4.1', # security extras added in 2.4.1
+ 'requests-toolbelt>=0.3.0',
'setuptools',
'six>=1.9.0', # needed for python_2_unicode_compatible
]
diff --git a/certbot/main.py b/certbot/main.py
index a041b998f..0ae5b9d7a 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -324,7 +324,7 @@ def _find_lineage_for_domains_and_certname(config, domains, certname):
return "newcert", None
else:
raise errors.ConfigurationError("No certificate with name {0} found. "
- "Use -d to specify domains, or run certbot --certificates to see "
+ "Use -d to specify domains, or run certbot certificates to see "
"possible certificate names.".format(certname))
def _get_added_removed(after, before):
diff --git a/docs/using.rst b/docs/using.rst
index 272f5ac6e..40d8f8452 100644
--- a/docs/using.rst
+++ b/docs/using.rst
@@ -609,7 +609,7 @@ commands into your individual environment.
.. note:: ``certbot renew`` exit status will only be 1 if a renewal attempt failed.
This means ``certbot renew`` exit status will be 0 if no certificate needs to be updated.
If you write a custom script and expect to run a command only after a certificate was actually renewed
- you will need to use the ``--post-hook`` since the exit status will be 0 both on successful renewal
+ you will need to use the ``--deploy-hook`` since the exit status will be 0 both on successful renewal
and when renewal is not necessary.
.. _renewal-config-file:
| HTTP responses logged as hard-to-read bytes repr in Python 3

## My operating system is (include version):
Ubuntu 16.04 (x86-64)
## I installed Certbot with (certbot-auto, OS package manager, pip, etc):
certbot-auto (Python 2) and the PPA (0.22.2, Python 3) on different systems.
## Certbot's behavior differed from what I expected because:
Under Python 2, Certbot logs HTTP responses as easy-to-read text, e.g.:
2018-05-02 13:41:58,328:DEBUG:acme.client:Received response:
HTTP 200
Server: nginx
Content-Type: application/json
Content-Length: 724
X-Frame-Options: DENY
Strict-Transport-Security: max-age=604800
Expires: Wed, 02 May 2018 13:41:58 GMT
Cache-Control: max-age=0, no-cache, no-store
Pragma: no-cache
Date: Wed, 02 May 2018 13:41:58 GMT
Connection: keep-alive
{
"6ELa2lV28v0": "https://community.letsencrypt.org/t/adding-random-entries-to-the-directory/33417",
"keyChange": "https://acme-staging-v02.api.letsencrypt.org/acme/key-change",
"meta": {
"caaIdentities": [
"letsencrypt.org"
],
"termsOfService": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",
"website": "https://letsencrypt.org/docs/staging-environment/"
},
"newAccount": "https://acme-staging-v02.api.letsencrypt.org/acme/new-acct",
"newNonce": "https://acme-staging-v02.api.letsencrypt.org/acme/new-nonce",
"newOrder": "https://acme-staging-v02.api.letsencrypt.org/acme/new-order",
"revokeCert": "https://acme-staging-v02.api.letsencrypt.org/acme/revoke-cert"
}
Under Python 3, it logs the `repr()` of the bytes:
2018-05-02 13:44:24,936:DEBUG:acme.client:Received response:
HTTP 200
Server: nginx
Content-Type: application/json
Content-Length: 724
X-Frame-Options: DENY
Strict-Transport-Security: max-age=604800
Expires: Wed, 02 May 2018 13:44:24 GMT
Cache-Control: max-age=0, no-cache, no-store
Pragma: no-cache
Date: Wed, 02 May 2018 13:44:24 GMT
Connection: keep-alive
b'{\n "keyChange": "https://acme-staging-v02.api.letsencrypt.org/acme/key-change",\n "meta": {\n "caaIdentities": [\n "letsencrypt.org"\n ],\n "termsOfService": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",\n "website": "https://letsencrypt.org/docs/staging-environment/"\n },\n "newAccount": "https://acme-staging-v02.api.letsencrypt.org/acme/new-acct",\n "newNonce": "https://acme-staging-v02.api.letsencrypt.org/acme/new-nonce",\n "newOrder": "https://acme-staging-v02.api.letsencrypt.org/acme/new-order",\n "revokeCert": "https://acme-staging-v02.api.letsencrypt.org/acme/revoke-cert",\n "uog1GW6DtvQ": "https://community.letsencrypt.org/t/adding-random-entries-to-the-directory/33417"\n}'
It makes log files more difficult to read.
Ideally Certbot would try to decode the HTTP response and log it normally. (Or perhaps dump the raw bytes straight to the log file.)
This affects JSON responses and certificates, but a certificate's worth of base64 isn't human readable anyway.
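The one-line change in `acme/client.py` (`response.content.decode("utf-8")`) addresses exactly this: decoding the body before interpolation makes Python 3 log readable text instead of the bytes repr. A minimal demonstration of the difference:

```python
# Under Python 3, %-formatting a bytes object yields its repr
# (b'{\n "key": ...}'), while decoding first yields readable text.
content = b'{\n  "newNonce": "https://example.invalid/acme/new-nonce"\n}'

logged_before = "%s" % content                 # bytes repr: b'{\n  "newNonce"...
logged_after = "%s" % content.decode("utf-8")  # readable JSON text

assert logged_before.startswith("b'")
assert logged_after.startswith("{")
```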
index c17b83210..f3018ed81 100644
--- a/acme/acme/client_test.py
+++ b/acme/acme/client_test.py
@@ -1129,6 +1129,31 @@ class ClientNetworkWithMockedResponseTest(unittest.TestCase):
self.assertRaises(requests.exceptions.RequestException,
self.net.post, 'uri', obj=self.obj)
+class ClientNetworkSourceAddressBindingTest(unittest.TestCase):
+ """Tests that if ClientNetwork has a source IP set manually, the underlying library has
+ used the provided source address."""
+
+ def setUp(self):
+ self.source_address = "8.8.8.8"
+
+ def test_source_address_set(self):
+ from acme.client import ClientNetwork
+ net = ClientNetwork(key=None, alg=None, source_address=self.source_address)
+ for adapter in net.session.adapters.values():
+ self.assertTrue(self.source_address in adapter.source_address)
+
+ def test_behavior_assumption(self):
+ """This is a test that guardrails the HTTPAdapter behavior so that if the default for
+ a Session() changes, the assumptions here aren't violated silently."""
+ from acme.client import ClientNetwork
+ # Source address not specified, so the default adapter type should be bound -- this
+ # test should fail if the default adapter type is changed by requests
+ net = ClientNetwork(key=None, alg=None)
+ session = requests.Session()
+ for scheme in session.adapters.keys():
+ client_network_adapter = net.session.adapters.get(scheme)
+ default_adapter = session.adapters.get(scheme)
+ self.assertEqual(client_network_adapter.__class__, default_adapter.__class__)
if __name__ == '__main__':
unittest.main() # pragma: no cover
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 4
} | 0.24 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-xdist",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc musl-dev"
],
"python": "3.7",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | acme==2.7.4
astroid==1.3.5
backcall==0.2.0
bleach==6.0.0
cachetools==5.5.2
-e git+https://github.com/certbot/certbot.git@907ee797151f270bec3a2697743568362db497cd#egg=certbot
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
chardet==5.2.0
charset-normalizer==3.4.1
colorama==0.4.6
ConfigArgParse==1.7
configobj==5.0.9
coverage==7.2.7
cryptography==44.0.2
decorator==5.1.1
distlib==0.3.9
docutils==0.20.1
exceptiongroup==1.2.2
execnet==2.0.2
filelock==3.12.2
idna==3.10
importlib-metadata==6.7.0
importlib-resources==5.12.0
iniconfig==2.0.0
ipdb==0.13.13
ipython==7.34.0
jaraco.classes==3.2.3
jedi==0.19.2
jeepney==0.9.0
josepy==1.14.0
keyring==24.1.1
logilab-common==2.1.0
markdown-it-py==2.2.0
matplotlib-inline==0.1.6
mdurl==0.1.2
mock==5.2.0
more-itertools==9.1.0
mypy-extensions==1.0.0
packaging==24.0
parsedatetime==2.6
parso==0.8.4
pexpect==4.9.0
pickleshare==0.7.5
pkginfo==1.10.0
platformdirs==4.0.0
pluggy==1.2.0
prompt_toolkit==3.0.48
ptyprocess==0.7.0
pycparser==2.21
Pygments==2.17.2
pylint==1.4.2
pyOpenSSL==25.0.0
pyproject-api==1.5.3
pyRFC3339==2.0.1
pytest==7.4.4
pytest-cov==4.1.0
pytest-xdist==3.5.0
pytz==2025.2
readme-renderer==37.3
requests==2.31.0
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==13.8.1
SecretStorage==3.3.3
six==1.17.0
tomli==2.0.1
tox==4.8.0
traitlets==5.9.0
twine==4.0.2
typing_extensions==4.7.1
urllib3==2.0.7
virtualenv==20.26.6
wcwidth==0.2.13
webencodings==0.5.1
zipp==3.15.0
zope.component==6.0
zope.event==5.0
zope.hookable==6.0
zope.interface==6.4.post2
| name: certbot
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- acme==2.7.4
- astroid==1.3.5
- backcall==0.2.0
- bleach==6.0.0
- cachetools==5.5.2
- cffi==1.15.1
- chardet==5.2.0
- charset-normalizer==3.4.1
- colorama==0.4.6
- configargparse==1.7
- configobj==5.0.9
- coverage==7.2.7
- cryptography==44.0.2
- decorator==5.1.1
- distlib==0.3.9
- docutils==0.20.1
- exceptiongroup==1.2.2
- execnet==2.0.2
- filelock==3.12.2
- idna==3.10
- importlib-metadata==6.7.0
- importlib-resources==5.12.0
- iniconfig==2.0.0
- ipdb==0.13.13
- ipython==7.34.0
- jaraco-classes==3.2.3
- jedi==0.19.2
- jeepney==0.9.0
- josepy==1.14.0
- keyring==24.1.1
- logilab-common==2.1.0
- markdown-it-py==2.2.0
- matplotlib-inline==0.1.6
- mdurl==0.1.2
- mock==5.2.0
- more-itertools==9.1.0
- mypy-extensions==1.0.0
- packaging==24.0
- parsedatetime==2.6
- parso==0.8.4
- pexpect==4.9.0
- pickleshare==0.7.5
- pkginfo==1.10.0
- platformdirs==4.0.0
- pluggy==1.2.0
- prompt-toolkit==3.0.48
- ptyprocess==0.7.0
- pycparser==2.21
- pygments==2.17.2
- pylint==1.4.2
- pyopenssl==25.0.0
- pyproject-api==1.5.3
- pyrfc3339==2.0.1
- pytest==7.4.4
- pytest-cov==4.1.0
- pytest-xdist==3.5.0
- pytz==2025.2
- readme-renderer==37.3
- requests==2.31.0
- requests-toolbelt==1.0.0
- rfc3986==2.0.0
- rich==13.8.1
- secretstorage==3.3.3
- six==1.17.0
- tomli==2.0.1
- tox==4.8.0
- traitlets==5.9.0
- twine==4.0.2
- typing-extensions==4.7.1
- urllib3==2.0.7
- virtualenv==20.26.6
- wcwidth==0.2.13
- webencodings==0.5.1
- zipp==3.15.0
- zope-component==6.0
- zope-event==5.0
- zope-hookable==6.0
- zope-interface==6.4.post2
prefix: /opt/conda/envs/certbot
| [
"acme/acme/client_test.py::ClientNetworkSourceAddressBindingTest::test_source_address_set"
]
| [
"acme/acme/client_test.py::ClientTest::test_init_without_net"
]
| [
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_finalize_order_v1_fetch_chain_error",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_finalize_order_v1_success",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_finalize_order_v1_timeout",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_finalize_order_v2",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_forwarding",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_init_acme_version",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_init_downloads_directory",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_new_account_and_tos",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_new_order_v1",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_new_order_v2",
"acme/acme/client_test.py::BackwardsCompatibleClientV2Test::test_revoke",
"acme/acme/client_test.py::ClientTest::test_agree_to_tos",
"acme/acme/client_test.py::ClientTest::test_answer_challenge",
"acme/acme/client_test.py::ClientTest::test_answer_challenge_missing_next",
"acme/acme/client_test.py::ClientTest::test_check_cert",
"acme/acme/client_test.py::ClientTest::test_check_cert_missing_location",
"acme/acme/client_test.py::ClientTest::test_deactivate_account",
"acme/acme/client_test.py::ClientTest::test_fetch_chain_max",
"acme/acme/client_test.py::ClientTest::test_fetch_chain_no_up_link",
"acme/acme/client_test.py::ClientTest::test_fetch_chain_single",
"acme/acme/client_test.py::ClientTest::test_fetch_chain_too_many",
"acme/acme/client_test.py::ClientTest::test_init_downloads_directory",
"acme/acme/client_test.py::ClientTest::test_poll",
"acme/acme/client_test.py::ClientTest::test_poll_and_request_issuance",
"acme/acme/client_test.py::ClientTest::test_query_registration",
"acme/acme/client_test.py::ClientTest::test_refresh",
"acme/acme/client_test.py::ClientTest::test_register",
"acme/acme/client_test.py::ClientTest::test_request_challenges",
"acme/acme/client_test.py::ClientTest::test_request_challenges_custom_uri",
"acme/acme/client_test.py::ClientTest::test_request_challenges_deprecated_arg",
"acme/acme/client_test.py::ClientTest::test_request_challenges_unexpected_update",
"acme/acme/client_test.py::ClientTest::test_request_challenges_wildcard",
"acme/acme/client_test.py::ClientTest::test_request_domain_challenges",
"acme/acme/client_test.py::ClientTest::test_request_issuance",
"acme/acme/client_test.py::ClientTest::test_request_issuance_missing_location",
"acme/acme/client_test.py::ClientTest::test_request_issuance_missing_up",
"acme/acme/client_test.py::ClientTest::test_retry_after_date",
"acme/acme/client_test.py::ClientTest::test_retry_after_invalid",
"acme/acme/client_test.py::ClientTest::test_retry_after_missing",
"acme/acme/client_test.py::ClientTest::test_retry_after_overflow",
"acme/acme/client_test.py::ClientTest::test_retry_after_seconds",
"acme/acme/client_test.py::ClientTest::test_revocation_payload",
"acme/acme/client_test.py::ClientTest::test_revoke",
"acme/acme/client_test.py::ClientTest::test_revoke_bad_status_raises_error",
"acme/acme/client_test.py::ClientTest::test_update_registration",
"acme/acme/client_test.py::ClientV2Test::test_finalize_order_error",
"acme/acme/client_test.py::ClientV2Test::test_finalize_order_success",
"acme/acme/client_test.py::ClientV2Test::test_finalize_order_timeout",
"acme/acme/client_test.py::ClientV2Test::test_new_account",
"acme/acme/client_test.py::ClientV2Test::test_new_order",
"acme/acme/client_test.py::ClientV2Test::test_poll_and_finalize",
"acme/acme/client_test.py::ClientV2Test::test_poll_authorizations_failure",
"acme/acme/client_test.py::ClientV2Test::test_poll_authorizations_success",
"acme/acme/client_test.py::ClientV2Test::test_poll_authorizations_timeout",
"acme/acme/client_test.py::ClientV2Test::test_revoke",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_conflict",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_jobj",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_not_ok_jobj_error",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_not_ok_jobj_no_error",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_not_ok_no_jobj",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_ok_no_jobj_ct_required",
"acme/acme/client_test.py::ClientNetworkTest::test_check_response_ok_no_jobj_no_ct",
"acme/acme/client_test.py::ClientNetworkTest::test_del",
"acme/acme/client_test.py::ClientNetworkTest::test_del_error",
"acme/acme/client_test.py::ClientNetworkTest::test_init",
"acme/acme/client_test.py::ClientNetworkTest::test_requests_error_passthrough",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request_get_der",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request_post",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request_timeout",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request_user_agent",
"acme/acme/client_test.py::ClientNetworkTest::test_send_request_verify_ssl",
"acme/acme/client_test.py::ClientNetworkTest::test_urllib_error",
"acme/acme/client_test.py::ClientNetworkTest::test_wrap_in_jws",
"acme/acme/client_test.py::ClientNetworkTest::test_wrap_in_jws_v2",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_get",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_head",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_head_get_post_error_passthrough",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_failed_retry",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_no_content_type",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_not_retried",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_successful_retry",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_wrong_initial_nonce",
"acme/acme/client_test.py::ClientNetworkWithMockedResponseTest::test_post_wrong_post_response_nonce",
"acme/acme/client_test.py::ClientNetworkSourceAddressBindingTest::test_behavior_assumption"
]
| []
| Apache License 2.0 | 2,520 | [
"docs/using.rst",
"acme/setup.py",
"acme/acme/client.py",
"certbot/main.py"
]
| [
"docs/using.rst",
"acme/setup.py",
"acme/acme/client.py",
"certbot/main.py"
]
|
|
fniessink__next-action-44 | 311de3cc3d7213693e8fff094d7b0ef2750faba7 | 2018-05-14 21:22:51 | 311de3cc3d7213693e8fff094d7b0ef2750faba7 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 347f866..471983a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
### Added
- Allow for excluding contexts from which the next action is selected: `next-action -@office`. Closes #20.
+- Allow for excluding projects from which the next action is selected: `next-action -+DogHouse`. Closes #32.
## [0.1.0] - 2018-05-13
diff --git a/README.md b/README.md
index 4a84ea5..ce06c3f 100644
--- a/README.md
+++ b/README.md
@@ -29,6 +29,7 @@ Don't know what *Todo.txt* is? See <https://github.com/todotxt/todo.txt> for the
$ next-action --help
usage: next-action [-h] [--version] [-f FILE] [-n N | -a]
[@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]] [-@CONTEXT [-@CONTEXT ...]]
+ [-+PROJECT [-+PROJECT ...]]
Show the next action in your todo.txt
@@ -36,6 +37,7 @@ positional arguments:
@CONTEXT show the next action in the specified contexts (default: None)
+PROJECT show the next action for the specified projects (default: None)
-@CONTEXT exclude actions in the specified contexts (default: None)
+ -+PROJECT exclude actions for the specified projects (default: None)
optional arguments:
-h, --help show this help message and exit
@@ -43,6 +45,7 @@ optional arguments:
-f FILE, --file FILE todo.txt file to read; argument can be repeated (default: ['todo.txt'])
-n N, --number N number of next actions to show (default: 1)
-a, --all show all next actions (default: False)
+
```
Assuming your todo.txt file is in the current folder, running *Next-action* without arguments will show the next action you should do. Given this [todo.txt](todo.txt), calling mom would be the next action:
@@ -85,6 +88,13 @@ $ next-action +PaintHouse -@store
Borrow ladder from the neighbors +PaintHouse @home
```
+And of course, in a similar vein, projects can be excluded:
+
+```console
+$ next-action -+PaintHouse @store
+(G) Buy wood for new +DogHouse @store
+```
+
### Extend next actions
To show more than one next action, supply the number you think you can handle:
diff --git a/next_action/arguments.py b/next_action/arguments.py
index da9407a..8e8e96b 100644
--- a/next_action/arguments.py
+++ b/next_action/arguments.py
@@ -4,7 +4,7 @@ import argparse
import os
import shutil
import sys
-from typing import Any, Sequence, Union
+from typing import Any, List, Sequence, Union
import next_action
@@ -61,19 +61,10 @@ def parse_arguments() -> argparse.Namespace:
nargs="*", type=str, default=None, action=ContextProjectAction)
parser.add_argument("excluded_contexts", metavar="-@CONTEXT", help="exclude actions in the specified contexts",
nargs="*", type=str, default=None)
+ parser.add_argument("excluded_projects", metavar="-+PROJECT", help="exclude actions for the specified projects",
+ nargs="*", type=str, default=None)
namespace, remaining = parser.parse_known_args()
- # Get the excluded contexts from the remaining arguments
- excluded_contexts = []
- for argument in remaining:
- if is_valid_prefixed_arg("context", "-@", argument, parser):
- context = argument[len("-@"):]
- if context in namespace.contexts:
- parser.error("context {0} is both included and excluded".format(context))
- else:
- excluded_contexts.append(context)
- else:
- parser.error("unrecognized arguments: {0}".format(argument))
- namespace.excluded_contexts = excluded_contexts
+ parse_remaining_args(parser, remaining, namespace)
# Work around the issue that the "append" action doesn't overwrite defaults.
# See https://bugs.python.org/issue16399.
if default_filenames != namespace.filenames:
@@ -85,3 +76,26 @@ def parse_arguments() -> argparse.Namespace:
if namespace.all:
namespace.number = sys.maxsize
return namespace
+
+
+def parse_remaining_args(parser: argparse.ArgumentParser, remaining: List[str], namespace: argparse.Namespace) -> None:
+ """ Parse the remaining command line arguments. """
+ excluded_contexts = []
+ excluded_projects = []
+ for argument in remaining:
+ if is_valid_prefixed_arg("context", "-@", argument, parser):
+ context = argument[len("-@"):]
+ if context in namespace.contexts:
+ parser.error("context {0} is both included and excluded".format(context))
+ else:
+ excluded_contexts.append(context)
+ elif is_valid_prefixed_arg("project", "-+", argument, parser):
+ project = argument[len("-+"):]
+ if project in namespace.projects:
+ parser.error("project {0} is both included and excluded".format(project))
+ else:
+ excluded_projects.append(project)
+ else:
+ parser.error("unrecognized arguments: {0}".format(argument))
+ namespace.excluded_contexts = excluded_contexts
+ namespace.excluded_projects = excluded_projects
diff --git a/next_action/cli.py b/next_action/cli.py
index 58cd0ed..b099f6f 100644
--- a/next_action/cli.py
+++ b/next_action/cli.py
@@ -23,5 +23,7 @@ def next_action() -> None:
return
with todotxt_file:
tasks.extend([Task(line.strip()) for line in todotxt_file.readlines() if line.strip()])
- actions = next_actions(tasks, set(arguments.contexts), set(arguments.projects), set(arguments.excluded_contexts))
+ actions = next_actions(tasks,
+ set(arguments.contexts), set(arguments.projects),
+ set(arguments.excluded_contexts), set(arguments.excluded_projects))
print("\n".join(action.text for action in actions[:arguments.number]) if actions else "Nothing to do!")
diff --git a/next_action/pick_action.py b/next_action/pick_action.py
index 9207d54..2542cfb 100644
--- a/next_action/pick_action.py
+++ b/next_action/pick_action.py
@@ -12,13 +12,16 @@ def sort_key(task: Task) -> Tuple[str, datetime.date, datetime.date]:
def next_actions(tasks: Sequence[Task], contexts: Set[str] = None, projects: Set[str] = None,
- excluded_contexts: Set[str] = None) -> Sequence[Task]:
+ excluded_contexts: Set[str] = None, excluded_projects: Set[str] = None) -> Sequence[Task]:
""" Return the next action(s) from the collection of tasks. """
# First, get the potential next actions by filtering out completed tasks and tasks with a future creation date
actionable_tasks = [task for task in tasks if task.is_actionable()]
# Then, exclude tasks that have an excluded context
eligible_tasks = filter(lambda task: not excluded_contexts & task.contexts() if excluded_contexts else True,
actionable_tasks)
+ # And, tasks that have an excluded project
+ eligible_tasks = filter(lambda task: not excluded_projects & task.projects() if excluded_projects else True,
+ eligible_tasks)
# Then, select the tasks that belong to all given contexts, if any
tasks_in_context = filter(lambda task: contexts <= task.contexts() if contexts else True, eligible_tasks)
# Next, select the tasks that belong to at least one of the given projects, if any
| Allow for excluding a project | fniessink/next-action | diff --git a/tests/unittests/test_arguments.py b/tests/unittests/test_arguments.py
index e35d00b..fe7b608 100644
--- a/tests/unittests/test_arguments.py
+++ b/tests/unittests/test_arguments.py
@@ -13,6 +13,7 @@ class ArgumentParserTest(unittest.TestCase):
usage_message = """usage: next-action [-h] [--version] [-f FILE] [-n N | -a]
[@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]] [-@CONTEXT [-@CONTEXT ...]]
+ [-+PROJECT [-+PROJECT ...]]
"""
@patch.object(sys, "argv", ["next-action"])
@@ -112,6 +113,21 @@ class ArgumentParserTest(unittest.TestCase):
self.assertEqual([call(self.usage_message), call("next-action: error: project name cannot be empty\n")],
mock_stderr_write.call_args_list)
+ @patch.object(sys, "argv", ["next-action", "-+DogHouse"])
+ def test_exclude_project(self):
+ """ Test that projects can be excluded. """
+ self.assertEqual(["DogHouse"], parse_arguments().excluded_projects)
+
+ @patch.object(sys, "argv", ["next-action", "+DogHouse", "-+DogHouse"])
+ @patch.object(sys.stderr, "write")
+ def test_include_exclude_project(self, mock_stderr_write):
+ """ Test that projects cannot be included and excluded. """
+ os.environ['COLUMNS'] = "120" # Fake that the terminal is wide enough.
+ self.assertRaises(SystemExit, parse_arguments)
+ self.assertEqual([call(self.usage_message),
+ call("next-action: error: project DogHouse is both included and excluded\n")],
+ mock_stderr_write.call_args_list)
+
@patch.object(sys, "argv", ["next-action", "+DogHouse", "@home", "+PaintHouse", "@weekend"])
def test_contexts_and_projects(self):
""" Test that the argument parser returns the contexts and the projects, even when mixed. """
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index d971c8e..4412083 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -61,6 +61,7 @@ class CLITest(unittest.TestCase):
self.assertRaises(SystemExit, next_action)
self.assertEqual(call("""usage: next-action [-h] [--version] [-f FILE] [-n N | -a]
[@CONTEXT [@CONTEXT ...]] [+PROJECT [+PROJECT ...]] [-@CONTEXT [-@CONTEXT ...]]
+ [-+PROJECT [-+PROJECT ...]]
Show the next action in your todo.txt
@@ -68,6 +69,7 @@ positional arguments:
@CONTEXT show the next action in the specified contexts (default: None)
+PROJECT show the next action for the specified projects (default: None)
-@CONTEXT exclude actions in the specified contexts (default: None)
+ -+PROJECT exclude actions for the specified projects (default: None)
optional arguments:
-h, --help show this help message and exit
diff --git a/tests/unittests/test_pick_action.py b/tests/unittests/test_pick_action.py
index a13ce1e..ba985d2 100644
--- a/tests/unittests/test_pick_action.py
+++ b/tests/unittests/test_pick_action.py
@@ -86,6 +86,21 @@ class PickActionTest(unittest.TestCase):
task3 = todotxt.Task("(A) Todo 3 +ProjectY")
self.assertEqual([task2, task1], pick_action.next_actions([task1, task2, task3], projects={"ProjectX"}))
+ def test_excluded_project(self):
+ """ Test that projects can be excluded. """
+ task = todotxt.Task("(A) Todo +DogHouse")
+ self.assertEqual([], pick_action.next_actions([task], excluded_projects={"DogHouse"}))
+
+ def test_excluded_projects(self):
+ """ Test that projects can be excluded. """
+ task = todotxt.Task("(A) Todo +DogHouse +PaintHouse")
+ self.assertEqual([], pick_action.next_actions([task], excluded_projects={"DogHouse"}))
+
+ def test_not_excluded_project(self):
+ """ Test that a task is not excluded if it doesn't belong to the excluded project. """
+ task = todotxt.Task("(A) Todo +DogHouse")
+ self.assertEqual([task], pick_action.next_actions([task], excluded_projects={"PaintHouse"}))
+
def test_project_and_context(self):
""" Test that the next action can be limited to a specific project and context. """
task1 = todotxt.Task("Todo 1 +ProjectX @office")
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 5
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup==1.2.2
iniconfig==2.1.0
-e git+https://github.com/fniessink/next-action.git@311de3cc3d7213693e8fff094d7b0ef2750faba7#egg=next_action
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
tomli==2.2.1
| name: next-action
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- tomli==2.2.1
prefix: /opt/conda/envs/next-action
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_exclude_project",
"tests/unittests/test_pick_action.py::PickActionTest::test_excluded_project",
"tests/unittests/test_pick_action.py::PickActionTest::test_excluded_projects",
"tests/unittests/test_pick_action.py::PickActionTest::test_not_excluded_project"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_and_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_empty_project",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_faulty_option",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_include_exclude_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_include_exclude_project",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_invalid_extra_argument",
"tests/unittests/test_cli.py::CLITest::test_help"
]
| [
"tests/unittests/test_arguments.py::ArgumentParserTest::test__add_filename_twice",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_add_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_all_actions",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_contexts_and_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_and_non_default",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_filename",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_default_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_exclude_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_long_filename_argument",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_contexts",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_multiple_projects",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_no_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_number",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_context",
"tests/unittests/test_arguments.py::ArgumentParserTest::test_one_project",
"tests/unittests/test_cli.py::CLITest::test_context",
"tests/unittests/test_cli.py::CLITest::test_empty_task_file",
"tests/unittests/test_cli.py::CLITest::test_ignore_empty_lines",
"tests/unittests/test_cli.py::CLITest::test_missing_file",
"tests/unittests/test_cli.py::CLITest::test_number",
"tests/unittests/test_cli.py::CLITest::test_one_task",
"tests/unittests/test_cli.py::CLITest::test_project",
"tests/unittests/test_cli.py::CLITest::test_version",
"tests/unittests/test_pick_action.py::PickActionTest::test_context",
"tests/unittests/test_pick_action.py::PickActionTest::test_contexts",
"tests/unittests/test_pick_action.py::PickActionTest::test_creation_dates",
"tests/unittests/test_pick_action.py::PickActionTest::test_due_and_creation_dates",
"tests/unittests/test_pick_action.py::PickActionTest::test_due_dates",
"tests/unittests/test_pick_action.py::PickActionTest::test_excluded_context",
"tests/unittests/test_pick_action.py::PickActionTest::test_excluded_contexts",
"tests/unittests/test_pick_action.py::PickActionTest::test_higher_prio_goes_first",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_completed_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_future_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_ignore_these_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_multiple_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_no_tasks",
"tests/unittests/test_pick_action.py::PickActionTest::test_not_excluded_context",
"tests/unittests/test_pick_action.py::PickActionTest::test_one_task",
"tests/unittests/test_pick_action.py::PickActionTest::test_priority_and_creation_date",
"tests/unittests/test_pick_action.py::PickActionTest::test_project",
"tests/unittests/test_pick_action.py::PickActionTest::test_project_and_context"
]
| []
| Apache License 2.0 | 2,521 | [
"next_action/pick_action.py",
"CHANGELOG.md",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
| [
"next_action/pick_action.py",
"CHANGELOG.md",
"README.md",
"next_action/cli.py",
"next_action/arguments.py"
]
|
|
CORE-GATECH-GROUP__serpent-tools-141 | d96de5b0f4bc4c93370cf508e0ea89701054ff41 | 2018-05-14 22:28:11 | 7c7da6012f509a2e71c3076ab510718585f75b11 | diff --git a/docs/defaultSettings.rst b/docs/defaultSettings.rst
index aa16202..8893098 100644
--- a/docs/defaultSettings.rst
+++ b/docs/defaultSettings.rst
@@ -206,6 +206,19 @@ If true, store the infinite medium cross sections.
Type: bool
+.. _xs-reshapeScatter:
+
+---------------------
+``xs.reshapeScatter``
+---------------------
+
+If true, reshape the scattering matrices to square matrices. By default, these matrices are stored as vectors.
+::
+
+ Default: False
+ Type: bool
+
+
.. _xs-variableExtras:
---------------------
diff --git a/docs/welcome/changelog.rst b/docs/welcome/changelog.rst
index 8628472..62f0e1d 100644
--- a/docs/welcome/changelog.rst
+++ b/docs/welcome/changelog.rst
@@ -12,6 +12,9 @@ Next
* :pull:`131` Updated variable groups between ``2.1.29`` and ``2.1.30`` - include
poison cross section, kinetic parameters, six factor formula (2.1.30 exclusive),
and minor differences
+* :pull:`141` - Setting :ref:`xs-reshapeScatter` can be used to reshape scatter
+ matrices on :py:class:`~serpentTools.objects.containers.HomogUniv`
+ objects to square matrices
.. _vDeprecated:
diff --git a/serpentTools/objects/containers.py b/serpentTools/objects/containers.py
index f9ec978..9626270 100644
--- a/serpentTools/objects/containers.py
+++ b/serpentTools/objects/containers.py
@@ -1,19 +1,22 @@
-""" Custom-built containers for storing data from serpent outputs
+"""
+Custom-built containers for storing data from serpent outputs
Contents
--------
-:py:class:`~serpentTools.objects.containers.HomogUniv`
-:py:class:`~serpentTools.objects.containers.BranchContainer
-:py:class:`~serpentTools.objects.containers.DetectorBase`
-:py:class:`~serpentTools.objects.containers.Detector`
+* :py:class:`~serpentTools.objects.containers.HomogUniv`
+* :py:class:`~serpentTools.objects.containers.BranchContainer
+* :py:class:`~serpentTools.objects.containers.DetectorBase`
+* :py:class:`~serpentTools.objects.containers.Detector`
"""
from collections import OrderedDict
+from itertools import product
from matplotlib import pyplot
+from numpy import (array, arange, unique, log, divide, ones_like, hstack,
+ ndarray)
-from numpy import array, arange, unique, log, divide, ones_like, hstack
-
+from serpentTools.settings import rc
from serpentTools.plot import cartMeshPlot, plot, magicPlotDocDecorator
from serpentTools.objects import NamedObject, convertVariableName
from serpentTools.messages import warning, SerpentToolsException, debug
@@ -22,9 +25,16 @@ DET_COLS = ('value', 'energy', 'universe', 'cell', 'material', 'lattice',
'reaction', 'zmesh', 'ymesh', 'xmesh', 'tally', 'error', 'scores')
"""Name of the columns of the data"""
+SCATTER_MATS = set()
+SCATTER_ORDERS = 8
+
+for xsSpectrum, xsType in product({'INF', 'B1'},
+ {'S', 'SP'}):
+ SCATTER_MATS.update({'{}_{}{}'.format(xsSpectrum, xsType, xx)
+ for xx in range(SCATTER_ORDERS)})
__all__ = ('DET_COLS', 'HomogUniv', 'BranchContainer', 'Detector',
- 'DetectorBase')
+ 'DetectorBase', 'SCATTER_MATS', 'SCATTER_ORDERS')
def isNonNeg(value):
@@ -69,6 +79,11 @@ class HomogUniv(NamedObject):
Relative uncertainties for leakage-corrected group constants
metadata: dict
Other values that do not not conform to inf/b1 dictionaries
+ reshapedMats: bool
+ ``True`` if scattering matrices have been reshaped to square
+ matrices. Otherwise, these matrices are stored as vectors.
+ numGroups: None or int
+ Number of energy groups
Raises
------
@@ -98,6 +113,12 @@ class HomogUniv(NamedObject):
self.b1Unc = {}
self.infUnc = {}
self.metadata = {}
+ self.__reshaped = rc['xs.reshapeScatter']
+ self.numGroups = None
+
+ @property
+ def reshaped(self):
+ return self.__reshaped
def __str__(self):
extras = []
@@ -113,8 +134,14 @@ class HomogUniv(NamedObject):
extras or '')
def addData(self, variableName, variableValue, uncertainty=False):
- """
- sets the value of the variable and, optionally, the associate s.d.
+ r"""
+ Sets the value of the variable and, optionally, the associate s.d.
+
+ .. versionadded:: 0.5.0
+
+ Reshapes scattering matrices according to setting
+ `xs.reshapeScatter`. Matrices are of the form
+ :math:`S[i, j]=\Sigma_{s,i\rightarrow j}`
.. warning::
@@ -127,8 +154,7 @@ class HomogUniv(NamedObject):
variableValue:
Variable Value
uncertainty: bool
- Set to ``True`` in order to retrieve the
- uncertainty associated to the expected values
+ Set to ``True`` if this data is an uncertainty
Raises
------
@@ -136,12 +162,31 @@ class HomogUniv(NamedObject):
If the uncertainty flag is not boolean
"""
-
- # 1. Check the input type
- variableName = convertVariableName(variableName)
if not isinstance(uncertainty, bool):
raise TypeError('The variable uncertainty has type {}, '
'should be boolean.'.format(type(uncertainty)))
+ if not isinstance(variableValue, ndarray):
+ debug("Converting {} from {} to array".format(
+ variableName, type(variableValue)))
+ variableValue = array(variableValue)
+ ng = self.numGroups
+ if self.__reshaped and variableName in SCATTER_MATS:
+ if ng is None:
+ warning("Number of groups is unknown at this time. "
+ "Will not reshape variable {}"
+ .format(variableName))
+ else:
+ variableValue = variableValue.reshape(ng, ng)
+ incomingGroups = variableValue.shape[0]
+ if ng is None:
+ self.numGroups = incomingGroups
+ elif incomingGroups != ng and variableName not in SCATTER_MATS:
+ warning("Variable {} appears to have different group structure. "
+ "Current: {} vs. incoming: {}"
+ .format(variableName, ng, incomingGroups))
+
+ variableName = convertVariableName(variableName)
+
# 2. Pointer to the proper dictionary
setter = self._lookup(variableName, uncertainty)
# 3. Check if variable is already present. Then set the variable.
@@ -166,7 +211,7 @@ class HomogUniv(NamedObject):
x:
Variable Value
dx:
- Associated uncertainty
+ Associated uncertainty if ``uncertainty``
Raises
------
diff --git a/serpentTools/parsers/branching.py b/serpentTools/parsers/branching.py
index fa771cb..ffda4bb 100644
--- a/serpentTools/parsers/branching.py
+++ b/serpentTools/parsers/branching.py
@@ -134,6 +134,10 @@ class BranchingReader(XSReader):
possibleEndOfFile=step == numVariables - 1)
varName = splitList[0]
varValues = [float(xx) for xx in splitList[2:]]
+ if not varValues:
+ debug("No data present for variable {}. Skipping"
+ .format(varName))
+ continue
if self._checkAddVariable(varName):
if self.settings['areUncsPresent']:
vals, uncs = splitItems(varValues)
diff --git a/serpentTools/settings.py b/serpentTools/settings.py
index a7ca265..745eea6 100644
--- a/serpentTools/settings.py
+++ b/serpentTools/settings.py
@@ -124,6 +124,12 @@ defaultSettings = {
'description': 'If true, store the critical leakage cross sections.',
'type': bool
},
+ 'xs.reshapeScatter': {
+ 'default': False,
+ 'description': 'If true, reshape the scattering matrices to square matrices. '
+ 'By default, these matrices are stored as vectors.',
+ 'type': bool
+ },
'xs.variableGroups': {
'default': [],
'description': ('Name of variable groups from variables.yaml to be '
@@ -254,10 +260,6 @@ class UserSettingsLoader(dict):
self.__originals = {}
dict.__init__(self, self._defaultLoader.retrieveDefaults())
- def __setitem__(self, key, value):
- self._checkStoreOriginal(key)
- self.setValue(key, value)
-
def __enter__(self):
self.__inside= True
return self
@@ -268,10 +270,6 @@ class UserSettingsLoader(dict):
self[key] = originalValue
self.__originals= {}
- def _checkStoreOriginal(self, key):
- if self.__inside:
- self.__originals[key] = self[key]
-
def setValue(self, name, value):
"""Set the value of a specific setting.
@@ -290,6 +288,8 @@ class UserSettingsLoader(dict):
If the value is not of the correct type
"""
+ if self.__inside:
+ self.__originals[name] = self[name]
if name not in self:
raise KeyError('Setting {} does not exist'.format(name))
self._defaultLoader[name].validate(value)
@@ -299,6 +299,8 @@ class UserSettingsLoader(dict):
dict.__setitem__(self, name, value)
messages.debug('Updated setting {} to {}'.format(name, value))
+ __setitem__ = setValue
+
def getReaderSettings(self, settingsPreffix):
"""Get all module-wide and reader-specific settings.
| [Feature] Auto-reshape scattering matrices for homogenized universes
Implement a setting that will automatically reshape scattering matrices for homogenized universes. This should be done in such a way that both the BranchingReader and ResultsReader honor this request.
Possibly a setting like `xs.reshapeScatter`? | CORE-GATECH-GROUP/serpent-tools | diff --git a/serpentTools/tests/test_container.py b/serpentTools/tests/test_container.py
index fbed4dc..25a1188 100644
--- a/serpentTools/tests/test_container.py
+++ b/serpentTools/tests/test_container.py
@@ -1,43 +1,43 @@
"""Test the container object. """
import unittest
-from serpentTools.objects import containers
+
+from six import iteritems
+from numpy import array, arange
+from numpy.testing import assert_allclose
+
+from serpentTools.settings import rc
+from serpentTools.objects.containers import HomogUniv
from serpentTools.parsers import DepletionReader
+NUM_GROUPS = 5
-class HomogenizedUniverseTester(unittest.TestCase):
- """ Class to test the Homogenized Universe """
- @classmethod
- def setUpClass(cls):
- cls.univ = containers.HomogUniv('dummy', 0, 0, 0)
- cls.Exp = {}
- cls.Unc = {}
- # Data definition
- cls.Exp = {'B1_1': 1, 'B1_2': [1, 2], 'B1_3': [1, 2, 3],
- 'INF_1': 3, 'INF_2': [4, 5], 'INF_3': [6, 7, 8],
- 'MACRO_E': (.1, .2, .3, .1, .2, .3), 'Test_1': 'ciao'}
- cls.Unc = {'B1_1': 1e-1, 'B1_2': [1e-1, 2e-1], 'B1_3': [1e-1, 2e-1,
- 3e-1],
- 'INF_1': 3e-1, 'INF_2': [4e-1, 5e-1], 'INF_3': [6e-1, 7e-1,
- 8e-1],
- 'MACRO_E': (.1e-1, .2e-1, .3e-1, .1e-1, .2e-1, .3e-1),
- 'Test_1': 'addio'}
+class _HomogUnivTestHelper(unittest.TestCase):
+ """Class that runs the tests for the two sub-classes
+
+ Subclasses will differ in how the ``mat`` data
+ is arranged. For one case, the ``mat`` will be a
+ 2D matrix.
+ """
+ def setUp(self):
+ self.univ, vec, mat = self.getParams()
+ # Data definition
+ rawData = {'B1_1': vec, 'B1_AS_LIST': list(range(NUM_GROUPS)),
+ 'INF_1': vec, 'INF_S0': mat}
+ meta = {'MACRO_E': vec}
# Partial dictionaries
- cls.b1Exp = {'b11': 1, 'b12': [1, 2], 'b13': [1, 2, 3]}
- cls.b1Unc = {'b11': (1, 0.1), 'b12': ([1, 2], [.1, .2]), 'b13':
- ([1, 2, 3], [.1, .2, .3])}
- cls.infExp = {'inf1': 3, 'inf2': [4, 5], 'inf3': [6, 7, 8]}
- cls.infUnc = {'inf1': (3, .3), 'inf2': ([4, 5], [.4, .5]), 'inf3':
- ([6, 7, 8], [.6, .7, .8])}
- cls.meta = {'macroE': (.1e-1, .2e-1, .3e-1, .1e-1, .2e-1, .3e-1),
- 'test1': 'addio'}
+ self.b1Unc = self.b1Exp = {'b11': vec}
+ self.infUnc = self.infExp = {'inf1': vec, 'infS0': mat}
+ self.meta = {'macroE': vec}
+
# Use addData
- for kk in cls.Exp:
- cls.univ.addData(kk, cls.Exp[kk], False)
- for kk in cls.Unc:
- cls.univ.addData(kk, cls.Unc[kk], True)
+ for key, value in iteritems(rawData):
+ self.univ.addData(key, value, uncertainty=False)
+ self.univ.addData(key, value, uncertainty=True)
+ for key, value in iteritems(meta):
+ self.univ.addData(key, value)
def test_getB1Exp(self):
""" Get Expected vales from B1 dictionary"""
@@ -45,16 +45,16 @@ class HomogenizedUniverseTester(unittest.TestCase):
# Comparison
for kk in self.univ.b1Exp:
d[kk] = self.univ.get(kk, False)
- self.assertDictEqual(self.b1Exp, d)
-
+ compareDictOfArrays(self.b1Exp, d, 'b1 values')
+
def test_getB1Unc(self):
""" Get Expected vales and associated uncertainties from B1 dictionary
"""
d = {}
# Comparison
for kk in self.univ.b1Exp:
- d[kk] = self.univ.get(kk, True)
- self.assertDictEqual(self.b1Unc, d)
+ d[kk] = self.univ.get(kk, True)[1]
+ compareDictOfArrays(self.b1Unc, d, 'b1 uncertainties')
def test_getInfExp(self):
""" Get Expected vales from Inf dictionary"""
@@ -62,7 +62,7 @@ class HomogenizedUniverseTester(unittest.TestCase):
# Comparison
for kk in self.univ.infExp:
d[kk] = self.univ.get(kk, False)
- self.assertDictEqual(self.infExp, d)
+ compareDictOfArrays(self.infExp, d, 'infinite values')
def test_getInfUnc(self):
""" Get Expected vales and associated uncertainties from Inf dictionary
@@ -70,8 +70,8 @@ class HomogenizedUniverseTester(unittest.TestCase):
d = {}
# Comparison
for kk in self.univ.infUnc:
- d[kk] = self.univ.get(kk, True)
- self.assertDictEqual(self.infUnc, d)
+ d[kk] = self.univ.get(kk, True)[1]
+ compareDictOfArrays(self.infUnc, d, 'infinite uncertainties')
def test_getMeta(self):
""" Get metaData from corresponding dictionary"""
@@ -79,8 +79,64 @@ class HomogenizedUniverseTester(unittest.TestCase):
# Comparison
for kk in self.univ.metadata:
d[kk] = self.univ.get(kk, False)
- self.assertDictEqual(self.meta, d)
+ compareDictOfArrays(self.meta, d, 'metadata')
+
+ def test_getBothInf(self):
+ """
+ Verify that the value and the uncertainty are returned if the
+ flag is passed.
+ """
+ expected, uncertainties = {}, {}
+ for key in self.infExp.keys():
+ value, unc = self.univ.get(key, True)
+ expected[key] = value
+ uncertainties[key] = unc
+ compareDictOfArrays(self.infExp, expected, 'infinite values')
+ compareDictOfArrays(self.infUnc, uncertainties,
+ 'infinite uncertainties')
+
+
+class VectoredHomogUnivTester(_HomogUnivTestHelper):
+ """Class for testing HomogUniv that does not reshape scatter matrices"""
+
+ def getParams(self):
+ univ, vec, mat = getParams()
+ self.assertFalse(univ.reshaped)
+ return univ, vec, mat
+
+
+class ReshapedHomogUnivTester(_HomogUnivTestHelper):
+ """Class for testing HomogUniv that does reshape scatter matrices"""
+
+ def getParams(self):
+ from serpentTools.settings import rc
+ with rc:
+ rc.setValue('xs.reshapeScatter', True)
+ univ, vec, mat = getParams()
+ univ.numGroups = NUM_GROUPS
+ self.assertTrue(univ.reshaped)
+ return univ, vec, mat.reshape(NUM_GROUPS, NUM_GROUPS)
+
+
+def getParams():
+ """Return the universe, vector, and matrix for testing."""
+ univ = HomogUniv(300, 0, 0, 0)
+ vec = arange(NUM_GROUPS)
+ mat = arange(NUM_GROUPS ** 2)
+ return univ, vec, mat
+
+
+def compareDictOfArrays(expected, actualDict, dataType):
+ for key, value in iteritems(expected):
+ actual = actualDict[key]
+ assert_allclose(value, actual,
+ err_msg="Error in {} dictionary: key={}"
+ .format(dataType, key))
+del _HomogUnivTestHelper
if __name__ == '__main__':
- unittest.main()
+ from serpentTools import rc
+ with rc:
+ rc['verbosity'] = 'debug'
+ unittest.main()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 5
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "numpy>=1.11.1 matplotlib>=1.5.0 pyyaml>=3.08 scipy six",
"pip_packages": [
"pytest",
"coverage"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
coverage==6.2
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
importlib-metadata==4.8.3
iniconfig==1.1.1
kiwisolver @ file:///tmp/build/80754af9/kiwisolver_1612282412546/work
matplotlib @ file:///tmp/build/80754af9/matplotlib-suite_1613407855456/work
numpy @ file:///tmp/build/80754af9/numpy_and_numpy_base_1603483703303/work
olefile @ file:///Users/ktietz/demo/mc3/conda-bld/olefile_1629805411829/work
packaging==21.3
Pillow @ file:///tmp/build/80754af9/pillow_1625670622947/work
pluggy==1.0.0
py==1.11.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==7.0.1
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
PyYAML==5.4.1
scipy @ file:///tmp/build/80754af9/scipy_1597686635649/work
-e git+https://github.com/CORE-GATECH-GROUP/serpent-tools.git@d96de5b0f4bc4c93370cf508e0ea89701054ff41#egg=serpentTools
six @ file:///tmp/build/80754af9/six_1644875935023/work
tomli==1.2.3
tornado @ file:///tmp/build/80754af9/tornado_1606942266872/work
typing_extensions==4.1.1
zipp==3.6.0
| name: serpent-tools
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- blas=1.0=openblas
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- cycler=0.11.0=pyhd3eb1b0_0
- dbus=1.13.18=hb2f20db_0
- expat=2.6.4=h6a678d5_0
- fontconfig=2.14.1=h52c9d5c_1
- freetype=2.12.1=h4a9f257_0
- giflib=5.2.2=h5eee18b_0
- glib=2.69.1=h4ff587b_1
- gst-plugins-base=1.14.1=h6a678d5_1
- gstreamer=1.14.1=h5eee18b_1
- icu=58.2=he6710b0_3
- jpeg=9e=h5eee18b_3
- kiwisolver=1.3.1=py36h2531618_0
- lcms2=2.16=hb9589c4_0
- ld_impl_linux-64=2.40=h12ee557_0
- lerc=4.0.0=h6a678d5_0
- libdeflate=1.22=h5eee18b_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgfortran-ng=7.5.0=ha8ba4b0_17
- libgfortran4=7.5.0=ha8ba4b0_17
- libgomp=11.2.0=h1234567_1
- libopenblas=0.3.18=hf726d26_0
- libpng=1.6.39=h5eee18b_0
- libstdcxx-ng=11.2.0=h1234567_1
- libtiff=4.5.1=hffd6297_1
- libuuid=1.41.5=h5eee18b_0
- libwebp=1.2.4=h11a3e52_1
- libwebp-base=1.2.4=h5eee18b_1
- libxcb=1.15=h7f8727e_0
- libxml2=2.9.14=h74e7548_0
- lz4-c=1.9.4=h6a678d5_1
- matplotlib=3.3.4=py36h06a4308_0
- matplotlib-base=3.3.4=py36h62a2d02_0
- ncurses=6.4=h6a678d5_0
- numpy=1.19.2=py36h6163131_0
- numpy-base=1.19.2=py36h75fe3a5_0
- olefile=0.46=pyhd3eb1b0_0
- openssl=1.1.1w=h7f8727e_0
- pcre=8.45=h295c915_0
- pillow=8.3.1=py36h5aabda8_0
- pip=21.2.2=py36h06a4308_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pyqt=5.9.2=py36h05f1152_2
- python=3.6.13=h12debd9_1
- python-dateutil=2.8.2=pyhd3eb1b0_0
- pyyaml=5.4.1=py36h27cfd23_1
- qt=5.9.7=h5867ecd_1
- readline=8.2=h5eee18b_0
- scipy=1.5.2=py36habc2bb6_0
- setuptools=58.0.4=py36h06a4308_0
- sip=4.19.8=py36hf484d3e_0
- six=1.16.0=pyhd3eb1b0_1
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tornado=6.1=py36h27cfd23_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- yaml=0.2.5=h7b6447c_0
- zlib=1.2.13=h5eee18b_1
- zstd=1.5.6=hc292b87_0
- pip:
- attrs==22.2.0
- coverage==6.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pytest==7.0.1
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/serpent-tools
| [
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getB1Exp",
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getB1Unc",
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getBothInf",
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getInfExp",
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getInfUnc",
"serpentTools/tests/test_container.py::VectoredHomogUnivTester::test_getMeta",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getB1Exp",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getB1Unc",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getBothInf",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getInfExp",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getInfUnc",
"serpentTools/tests/test_container.py::ReshapedHomogUnivTester::test_getMeta"
]
| []
| []
| []
| MIT License | 2,522 | [
"docs/defaultSettings.rst",
"serpentTools/parsers/branching.py",
"serpentTools/settings.py",
"docs/welcome/changelog.rst",
"serpentTools/objects/containers.py"
]
| [
"docs/defaultSettings.rst",
"serpentTools/parsers/branching.py",
"serpentTools/settings.py",
"docs/welcome/changelog.rst",
"serpentTools/objects/containers.py"
]
|
|
bbangert__beaker-159 | c8f0a599f44d313b68a49c7a503ecbbec4286771 | 2018-05-15 13:56:00 | c8f0a599f44d313b68a49c7a503ecbbec4286771 | diff --git a/.travis.yml b/.travis.yml
index 7e04b10..9487d00 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -13,6 +13,7 @@ python:
- "3.4"
- "3.5"
- "3.6"
+ - "3.7-dev"
install:
- pip install -e .[testsuite]
script:
diff --git a/beaker/cookie.py b/beaker/cookie.py
index 8d8d0a4..729fbe3 100644
--- a/beaker/cookie.py
+++ b/beaker/cookie.py
@@ -11,6 +11,9 @@ cookie_pickles_properly = (
sys.version_info >= (3, 4, 3)
)
+# Add support for the SameSite attribute (obsolete when PY37 is unsupported).
+http_cookies.Morsel._reserved.setdefault('samesite', 'SameSite')
+
# Adapted from Django.http.cookies and always enabled the bad_cookies
# behaviour to cope with any invalid cookie key while keeping around
diff --git a/beaker/session.py b/beaker/session.py
index 442ba73..6b536d2 100644
--- a/beaker/session.py
+++ b/beaker/session.py
@@ -127,6 +127,8 @@ class Session(dict):
to keep backward compatibility with sessions generated before 1.8.0
set this to 48.
:param crypto_type: encryption module to use
+ :param samesite: SameSite value for the cookie -- should be either 'Lax',
+ 'Strict', or None.
"""
def __init__(self, request, id=None, invalidate_corrupt=False,
use_cookies=True, type=None, data_dir=None,
@@ -135,7 +137,7 @@ class Session(dict):
data_serializer='pickle', secret=None,
secure=False, namespace_class=None, httponly=False,
encrypt_key=None, validate_key=None, encrypt_nonce_bits=DEFAULT_NONCE_BITS,
- crypto_type='default',
+ crypto_type='default', samesite='Lax',
**namespace_args):
if not type:
if data_dir:
@@ -178,6 +180,7 @@ class Session(dict):
self.secret = secret
self.secure = secure
self.httponly = httponly
+ self.samesite = samesite
self.encrypt_key = encrypt_key
self.validate_key = validate_key
self.encrypt_nonce_size = get_nonce_size(encrypt_nonce_bits)
@@ -246,6 +249,8 @@ class Session(dict):
self.cookie[self.key]['domain'] = self._domain
if self.secure:
self.cookie[self.key]['secure'] = True
+ if self.samesite:
+ self.cookie[self.key]['samesite'] = self.samesite
self._set_cookie_http_only()
self.cookie[self.key]['path'] = self._path
@@ -556,13 +561,15 @@ class CookieSession(Session):
otherwise invalid data will cause an exception.
:type invalidate_corrupt: bool
:param crypto_type: The crypto module to use.
+ :param samesite: SameSite value for the cookie -- should be either 'Lax',
+ 'Strict', or None.
"""
def __init__(self, request, key='beaker.session.id', timeout=None,
save_accessed_time=True, cookie_expires=True, cookie_domain=None,
cookie_path='/', encrypt_key=None, validate_key=None, secure=False,
httponly=False, data_serializer='pickle',
encrypt_nonce_bits=DEFAULT_NONCE_BITS, invalidate_corrupt=False,
- crypto_type='default',
+ crypto_type='default', samesite='Lax',
**kwargs):
self.crypto_module = get_crypto_module(crypto_type)
@@ -582,6 +589,7 @@ class CookieSession(Session):
self.request['set_cookie'] = False
self.secure = secure
self.httponly = httponly
+ self.samesite = samesite
self._domain = cookie_domain
self._path = cookie_path
self.invalidate_corrupt = invalidate_corrupt
| Set SameSite option on session cookies
Documented here: https://blog.mozilla.org/security/2018/04/24/same-site-cookies-in-firefox-60/
Currently supported in Firefox and Chromium; it provides strong defense in depth against CSRF. | bbangert/beaker | diff --git a/tests/test_cookie_domain_only.py b/tests/test_cookie_domain_only.py
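The patch above makes this work on older Pythons by registering the attribute in `http.cookies.Morsel._reserved` before writing `SameSite` onto the session cookie. A minimal sketch of that mechanism using only the standard library (the cookie name and values are illustrative):

```python
from http import cookies

# Pythons before 3.8 don't know the SameSite attribute; registering it
# in Morsel._reserved lets the cookie machinery serialize it. On newer
# Pythons the key already exists, so setdefault is a no-op.
cookies.Morsel._reserved.setdefault('samesite', 'SameSite')

jar = cookies.SimpleCookie()
jar['beaker.session.id'] = 'abc123'
jar['beaker.session.id']['httponly'] = True
jar['beaker.session.id']['samesite'] = 'Lax'

header = jar.output(header='Set-Cookie:')
print(header)
# e.g. Set-Cookie: beaker.session.id=abc123; HttpOnly; SameSite=Lax
```

This mirrors what the test patch asserts: the emitted header contains `samesite=lax` (case-insensitively) alongside the existing `httponly` flag.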
index 4eef4dc..800da7f 100644
--- a/tests/test_cookie_domain_only.py
+++ b/tests/test_cookie_domain_only.py
@@ -61,6 +61,7 @@ def test_cookie_attributes_are_preserved():
assert 'path=/app' in cookie.lower()
assert 'secure' in cookie.lower()
assert 'httponly' in cookie.lower()
+ assert 'samesite=lax' in cookie.lower()
if __name__ == '__main__':
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 0,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 3
} | 1.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[testsuite]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | async-timeout==4.0.2
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
-e git+https://github.com/bbangert/beaker.git@c8f0a599f44d313b68a49c7a503ecbbec4286771#egg=Beaker
beautifulsoup4==4.12.3
certifi==2021.5.30
cffi==1.15.1
coverage==6.2
cryptography==40.0.2
greenlet==2.0.2
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
nose==1.3.7
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycparser==2.21
pycryptodome==3.21.0
pymongo==4.1.1
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
redis==4.3.6
soupsieve==2.3.2.post1
SQLAlchemy==1.4.54
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
waitress==2.0.0
WebOb==1.8.9
WebTest==3.0.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: beaker
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- async-timeout==4.0.2
- beautifulsoup4==4.12.3
- cffi==1.15.1
- coverage==6.2
- cryptography==40.0.2
- greenlet==2.0.2
- mock==5.2.0
- nose==1.3.7
- pycparser==2.21
- pycryptodome==3.21.0
- pymongo==4.1.1
- redis==4.3.6
- soupsieve==2.3.2.post1
- sqlalchemy==1.4.54
- waitress==2.0.0
- webob==1.8.9
- webtest==3.0.0
prefix: /opt/conda/envs/beaker
| [
"tests/test_cookie_domain_only.py::test_cookie_attributes_are_preserved"
]
| []
| [
"tests/test_cookie_domain_only.py::test_increment"
]
| []
| BSD License | 2,524 | [
"beaker/cookie.py",
".travis.yml",
"beaker/session.py"
]
| [
"beaker/cookie.py",
".travis.yml",
"beaker/session.py"
]
|
|
ofek__pypinfo-49 | 72ea0a3e5669757e3c625ea2f1f0e3463d11db86 | 2018-05-15 15:00:40 | 2a0628e63b50def718228a6b5b87a0e83b7cbf01 | hugovk: Passing CI build: https://travis-ci.org/hugovk/pypinfo/builds/379256544
ofek: Thanks so much! | diff --git a/pypinfo/core.py b/pypinfo/core.py
index f1ba663..5c633a4 100644
--- a/pypinfo/core.py
+++ b/pypinfo/core.py
@@ -12,13 +12,17 @@ FROM = """\
FROM
TABLE_DATE_RANGE(
[the-psf:pypi.downloads],
- DATE_ADD(CURRENT_TIMESTAMP(), {}, "day"),
- DATE_ADD(CURRENT_TIMESTAMP(), {}, "day")
+ {},
+ {}
)
"""
+DATE_ADD = 'DATE_ADD(CURRENT_TIMESTAMP(), {}, "day")'
+START_TIMESTAMP = 'TIMESTAMP("{} 00:00:00")'
+END_TIMESTAMP = 'TIMESTAMP("{} 23:59:59")'
START_DATE = '-31'
END_DATE = '-1'
DEFAULT_LIMIT = '10'
+YYYY_MM_DD = re.compile("^[0-9]{4}-[01][0-9]-[0-3][0-9]$")
def create_config():
@@ -42,6 +46,28 @@ def create_client(creds_file=None):
return Client.from_service_account_json(creds_file, project=project)
+def validate_date(date):
+ valid = False
+ try:
+ if int(date) < 0:
+ valid = True
+ except ValueError:
+ if YYYY_MM_DD.match(date):
+ valid = True
+
+ if not valid:
+ raise ValueError('Dates must be negative integers or YYYY-MM-DD in the past.')
+ return valid
+
+
+def format_date(date, timestamp_format):
+ try:
+ date = DATE_ADD.format(int(date))
+ except ValueError:
+ date = timestamp_format.format(date)
+ return date
+
+
def build_query(project, all_fields, start_date=None, end_date=None,
days=None, limit=None, where=None, order=None, pip=None):
project = normalize(project)
@@ -53,11 +79,18 @@ def build_query(project, all_fields, start_date=None, end_date=None,
if days:
start_date = str(int(end_date) - int(days))
- if int(start_date) > 0 or int(end_date) > 0:
- raise ValueError('Dates must be in the past (negative).')
+ validate_date(start_date)
+ validate_date(end_date)
+
+ try:
+ if int(start_date) >= int(end_date):
+ raise ValueError('End date must be greater than start date.')
+ except ValueError:
+ # Not integers, must be yyyy-mm-dd
+ pass
- if int(start_date) >= int(end_date):
- raise ValueError('End date must be greater than start date.')
+ start_date = format_date(start_date, START_TIMESTAMP)
+ end_date = format_date(end_date, END_TIMESTAMP)
fields = []
used_fields = set()
| Allow YYYY-MM-DD dates in --start-date and --end-date
It'd be handy to be able to use `YYYY-MM-DD` dates as the start and end date. For example:
```console
$ pypinfo --start-date 2018-01-01 --end-date 2018-01-31 pillow pyversion
```
Rather than having to work it out:
```console
$ pypinfo --start-date -43 --end-date -14 pillow pyversion
```
It wouldn't necessarily have to reuse `--start-date` and `--end-date`, but that's probably clearest and easiest (if not negative integer, it's a date).
What do you think?
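The merged fix follows exactly that rule: try the value as a negative integer first, and fall back to a `YYYY-MM-DD` match otherwise. A standalone sketch of the validation logic from the patch:

```python
import re

# Same pattern as the patch: four digits, then MM in 00-19, DD in 00-39.
YYYY_MM_DD = re.compile(r"^[0-9]{4}-[01][0-9]-[0-3][0-9]$")


def validate_date(date):
    """Accept a negative day offset ('-31') or a literal 'YYYY-MM-DD'."""
    try:
        # Negative integers are relative day offsets into the past.
        if int(date) < 0:
            return True
    except ValueError:
        # Not an integer at all: check for an absolute date.
        if YYYY_MM_DD.match(date):
            return True
    raise ValueError('Dates must be negative integers or YYYY-MM-DD in the past.')
```

Positive integers fall through the `if` without returning and hit the `raise`, matching the existing "dates must be in the past" behaviour.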
| ofek/pypinfo | diff --git a/tests/test_core.py b/tests/test_core.py
index 0ec1b9a..121b80b 100644
--- a/tests/test_core.py
+++ b/tests/test_core.py
@@ -1,3 +1,6 @@
+import copy
+import pytest
+
from pypinfo import core
ROWS = [
@@ -16,7 +19,7 @@ ROWS = [
def test_tabulate_default():
# Arrange
- rows = list(ROWS)
+ rows = copy.deepcopy(ROWS)
expected = """\
| python_version | percent | download_count |
| -------------- | ------- | -------------- |
@@ -40,7 +43,7 @@ def test_tabulate_default():
def test_tabulate_markdown():
# Arrange
- rows = list(ROWS)
+ rows = copy.deepcopy(ROWS)
expected = """\
| python_version | percent | download_count |
| -------------- | ------: | -------------: |
@@ -60,3 +63,50 @@ def test_tabulate_markdown():
# Assert
assert tabulated == expected
+
+
+def test_validate_date_negative_number():
+ # Act
+ valid = core.validate_date("-1")
+
+ # Assert
+ assert valid
+
+
+def test_validate_date_positive_number():
+ # Act / Assert
+ with pytest.raises(ValueError):
+ core.validate_date("1")
+
+
+def test_validate_date_yyyy_mm_dd():
+ # Act
+ valid = core.validate_date("2018-05-15")
+
+ # Assert
+ assert valid
+
+
+def test_validate_date_other_string():
+ # Act / Assert
+ with pytest.raises(ValueError):
+ core.validate_date("somthing invalid")
+
+
+def test_format_date_negative_number():
+ # Arrange
+ dummy_format = "dummy format {}"
+
+ # Act
+ date = core.format_date("-1", dummy_format)
+
+ # Assert
+ assert date == 'DATE_ADD(CURRENT_TIMESTAMP(), -1, "day")'
+
+
+def test_format_date_yyy_mm_dd():
+ # Act
+ date = core.format_date("2018-05-15", core.START_TIMESTAMP)
+
+ # Assert
+ assert date == 'TIMESTAMP("2018-05-15 00:00:00")'
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 14.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | appdirs==1.4.4
binary==1.0.1
cachetools==5.5.2
certifi==2025.1.31
charset-normalizer==3.4.1
click==8.1.8
exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
google-api-core==2.24.2
google-auth==2.38.0
google-cloud-bigquery==3.31.0
google-cloud-core==2.4.3
google-crc32c==1.7.1
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grpcio==1.71.0
grpcio-status==1.71.0
idna==3.10
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
proto-plus==1.26.1
protobuf==5.29.4
pyasn1==0.6.1
pyasn1_modules==0.4.2
-e git+https://github.com/ofek/pypinfo.git@72ea0a3e5669757e3c625ea2f1f0e3463d11db86#egg=pypinfo
pytest @ file:///croot/pytest_1738938843180/work
python-dateutil==2.9.0.post0
requests==2.32.3
rsa==4.9
six==1.17.0
tinydb==4.8.2
tinyrecord==0.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
urllib3==2.3.0
| name: pypinfo
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- appdirs==1.4.4
- binary==1.0.1
- cachetools==5.5.2
- certifi==2025.1.31
- charset-normalizer==3.4.1
- click==8.1.8
- google-api-core==2.24.2
- google-auth==2.38.0
- google-cloud-bigquery==3.31.0
- google-cloud-core==2.4.3
- google-crc32c==1.7.1
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grpcio==1.71.0
- grpcio-status==1.71.0
- idna==3.10
- proto-plus==1.26.1
- protobuf==5.29.4
- pyasn1==0.6.1
- pyasn1-modules==0.4.2
- python-dateutil==2.9.0.post0
- requests==2.32.3
- rsa==4.9
- six==1.17.0
- tinydb==4.8.2
- tinyrecord==0.2.0
- urllib3==2.3.0
prefix: /opt/conda/envs/pypinfo
| [
"tests/test_core.py::test_validate_date_negative_number",
"tests/test_core.py::test_validate_date_positive_number",
"tests/test_core.py::test_validate_date_yyyy_mm_dd",
"tests/test_core.py::test_validate_date_other_string",
"tests/test_core.py::test_format_date_negative_number",
"tests/test_core.py::test_format_date_yyy_mm_dd"
]
| []
| [
"tests/test_core.py::test_tabulate_default",
"tests/test_core.py::test_tabulate_markdown"
]
| []
| MIT License | 2,525 | [
"pypinfo/core.py"
]
| [
"pypinfo/core.py"
]
|
dask__dask-3499 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | 2018-05-15 15:02:17 | 48c4a589393ebc5b335cc5c7df291901401b0b15 | mrocklin: Thanks @TomAugspurger . I'm curious, how bad is the user experience if people use latest release for both dask.dataframe and pandas?
mrocklin: I'm trying to determine how hard we should push on a release.
TomAugspurger: `dask.DataFrame.apply` will break, so if you're using that, not great :)
Otherwise, things should be mostly fine.
Some things like `.rolling().apply` will (correctly) show a FutureWarning for the new `raw` keyword, but there's not a way to silence it yet.
I don't think there has to be a release the same day as pandas (today probably), but soon would be nice.
If you'd like, I can
- spin a maintenance branch of the last release
- backport these fixes there
- tag the release from there
IIRC, some of the larger refactorings were merged into master?
mrocklin: > IIRC, some of the larger refactorings were merged into master?
That's correct.
If it's cheap (less than a couple hours) to do a maintenance release then I suggest that we do that. Otherwise we'll just be incompatible for a few days and I'll focus on cleaning up config work in downstream projects with a goal to have a dask/dask release out by next week.
TomAugspurger: tomaugspurger
For testing this, I think I'm going to bump one of the workers to the RC,
and just remove that after everything passes.
On Tue, May 15, 2018 at 10:24 AM, Matthew Rocklin <[email protected]>
wrote:
> Agreed. What is your PyPI username?
>
> On Tue, May 15, 2018 at 11:22 AM, Tom Augspurger <[email protected]
> >
> wrote:
>
> > If it's cheap (less than a couple hours) to do a maintenance release then
> > I suggest that we do that.
> >
> > That's my expectation. I should be able to do everything aside from
> upload
> > to PyPI, where I don't have permissions (those should probably be the
> same
> > as the dask owners from #3223 <https://github.com/dask/dask/issues/3223
> >).
> >
> > —
> > You are receiving this because you commented.
> > Reply to this email directly, view it on GitHub
> > <https://github.com/dask/dask/pull/3499#issuecomment-389207303>, or mute
> > the thread
> > <https://github.com/notifications/unsubscribe-auth/AASszMBkOyPukdeHfP-
> TUrLOQ0__o2LEks5tyvLEgaJpZM4T_uWl>
>
> > .
> >
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/dask/dask/pull/3499#issuecomment-389207857>, or mute
> the thread
> <https://github.com/notifications/unsubscribe-auth/ABQHIlinjadP6wKHlkb253NSfi1XuSQnks5tyvMggaJpZM4T_uWl>
> .
>
mrocklin: Agreed
On Wed, May 16, 2018 at 2:53 PM, Tom Augspurger <[email protected]>
wrote:
> I think the two failures here are unrelated. Any objections to merging?
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/dask/dask/pull/3499#issuecomment-389627319>, or mute
> the thread
> <https://github.com/notifications/unsubscribe-auth/AASszNGdbpxWJS7tsYPMqiguvOLdMPBmks5tzHWwgaJpZM4T_uWl>
> .
>
| diff --git a/dask/dataframe/core.py b/dask/dataframe/core.py
index 930bdecaa..5c55ab9b4 100644
--- a/dask/dataframe/core.py
+++ b/dask/dataframe/core.py
@@ -38,7 +38,8 @@ from .hashing import hash_pandas_object
from .optimize import optimize
from .utils import (meta_nonempty, make_meta, insert_meta_param_description,
raise_on_meta_error, clear_known_categories,
- is_categorical_dtype, has_known_categories, PANDAS_VERSION)
+ is_categorical_dtype, has_known_categories, PANDAS_VERSION,
+ index_summary)
no_default = '__no_default__'
@@ -2785,7 +2786,8 @@ class DataFrame(_Frame):
bind_method(cls, name, meth)
@insert_meta_param_description(pad=12)
- def apply(self, func, axis=0, args=(), meta=no_default, **kwds):
+ def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None,
+ args=(), meta=no_default, **kwds):
""" Parallel version of pandas.DataFrame.apply
This mimics the pandas version except for the following:
@@ -2847,6 +2849,17 @@ class DataFrame(_Frame):
"""
axis = self._validate_axis(axis)
+ pandas_kwargs = {
+ 'axis': axis,
+ 'broadcast': broadcast,
+ 'raw': raw,
+ 'reduce': None,
+ }
+
+ if PANDAS_VERSION >= '0.23.0':
+ kwds.setdefault('result_type', None)
+
+ kwds.update(pandas_kwargs)
if axis == 0:
msg = ("dd.DataFrame.apply only supports axis=1\n"
@@ -2862,10 +2875,9 @@ class DataFrame(_Frame):
warnings.warn(msg)
meta = _emulate(M.apply, self._meta_nonempty, func,
- axis=axis, args=args, udf=True, **kwds)
+ args=args, udf=True, **kwds)
- return map_partitions(M.apply, self, func, axis,
- False, False, None, args, meta=meta, **kwds)
+ return map_partitions(M.apply, self, func, args=args, meta=meta, **kwds)
@derived_from(pd.DataFrame)
def applymap(self, func, meta='__no_default__'):
@@ -2914,7 +2926,7 @@ class DataFrame(_Frame):
if verbose:
index = computations['index']
counts = computations['count']
- lines.append(index.summary())
+ lines.append(index_summary(index))
lines.append('Data columns (total {} columns):'.format(len(self.columns)))
if PANDAS_VERSION >= '0.20.0':
@@ -2926,7 +2938,7 @@ class DataFrame(_Frame):
column_info = [column_template.format(pprint_thing(x[0]), x[1], x[2])
for x in zip(self.columns, counts, self.dtypes)]
else:
- column_info = [self.columns.summary(name='Columns')]
+ column_info = [index_summary(self.columns, name='Columns')]
lines.extend(column_info)
dtype_counts = ['%s(%d)' % k for k in sorted(self.dtypes.value_counts().iteritems(), key=str)]
diff --git a/dask/dataframe/rolling.py b/dask/dataframe/rolling.py
index 501ac6fa8..7b795aed2 100644
--- a/dask/dataframe/rolling.py
+++ b/dask/dataframe/rolling.py
@@ -8,7 +8,7 @@ from pandas.core.window import Rolling as pd_Rolling
from ..base import tokenize
from ..utils import M, funcname, derived_from
from .core import _emulate
-from .utils import make_meta
+from .utils import make_meta, PANDAS_VERSION
def overlap_chunk(func, prev_part, current_part, next_part, before, after,
@@ -292,8 +292,19 @@ class Rolling(object):
return self._call_method('quantile', quantile)
@derived_from(pd_Rolling)
- def apply(self, func, args=(), kwargs={}):
- return self._call_method('apply', func, args=args, kwargs=kwargs)
+ def apply(self, func, args=(), kwargs={}, **kwds):
+ # TODO: In a future version of pandas this will change to
+ # raw=False. Think about inspecting the function signature and setting
+ # to that?
+ if PANDAS_VERSION >= '0.23.0':
+ kwds.setdefault("raw", None)
+ else:
+ if kwargs:
+ msg = ("Invalid argument to 'apply'. Keyword arguments "
+ "should be given as a dict to the 'kwargs' arugment. ")
+ raise TypeError(msg)
+ return self._call_method('apply', func, args=args,
+ kwargs=kwargs, **kwds)
def __repr__(self):
diff --git a/dask/dataframe/utils.py b/dask/dataframe/utils.py
index 43a6e5ab0..55e5c7071 100644
--- a/dask/dataframe/utils.py
+++ b/dask/dataframe/utils.py
@@ -498,6 +498,22 @@ def check_meta(x, meta, funcname=None, numeric_equal=True):
errmsg))
+def index_summary(idx, name=None):
+ """Summarized representation of an Index.
+ """
+ n = len(idx)
+ if name is None:
+ name = idx.__class__.__name__
+ if n:
+ head = idx[0]
+ tail = idx[-1]
+ summary = ', {} to {}'.format(head, tail)
+ else:
+ summary = ''
+
+ return "{}: {} entries{}".format(name, n, summary)
+
+
###############################################################
# Testing
###############################################################
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
index 8fc0a8a56..248e0e29d 100644
--- a/docs/source/changelog.rst
+++ b/docs/source/changelog.rst
@@ -26,6 +26,15 @@ Core
-
+0.17.5 / 2018-05-16
+-------------------
+
+DataFrame
++++++++++
+
+- Compatibility with pandas 0.23.0 (:pr:`3499`) `Tom Augspurger`_
+
+
0.17.4 / 2018-05-03
-------------------
| pandas 0.23.0 compatibility
Forgot to check this earlier :/ I'll carve out some time to do CI maintenance soon.
### breaks
- `result_type` arg to http://pandas.pydata.org/pandas-docs/version/0.23/whatsnew.html#changes-to-make-output-of-dataframe-apply-consistent
### warnings from deprecations
- rolling / expanding raw: http://pandas.pydata.org/pandas-docs/version/0.23/whatsnew.html#rolling-expanding-apply-accepts-a-raw-keyword-to-pass-a-series-to-the-function
- str.cat align: http://pandas.pydata.org/pandas-docs/version/0.23/whatsnew.html#series-str-cat-has-gained-the-join-kwarg
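
The rolling/expanding change is the one the patch has to gate on version: pandas 0.23 added a `raw` keyword to `Rolling.apply` and warns when it is not passed, while older pandas raises `TypeError` if it is. A minimal sketch of that version-gating idea (hypothetical `apply_kwargs` helper; real code compares `PANDAS_VERSION` strings via `LooseVersion`):

```python
# Sketch: only default the `raw` kwarg when the installed pandas
# understands it, mirroring the PANDAS_VERSION >= '0.23.0' check
# in the patch. Versions are modeled as tuples for simple comparison.
def apply_kwargs(pandas_version, user_kwds):
    kwds = dict(user_kwds)  # don't mutate the caller's dict
    if pandas_version >= (0, 23, 0):
        # pandas >= 0.23 warns unless `raw` is given explicitly
        kwds.setdefault("raw", None)
    # older pandas would raise TypeError on an unknown `raw` kwarg,
    # so we leave kwds untouched there
    return kwds

assert apply_kwargs((0, 23, 0), {}) == {"raw": None}
assert apply_kwargs((0, 22, 0), {}) == {}
assert apply_kwargs((0, 23, 0), {"raw": True}) == {"raw": True}
```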
... | dask/dask | diff --git a/dask/dataframe/tests/test_categorical.py b/dask/dataframe/tests/test_categorical.py
index c6a4341ef..9db76e6b5 100644
--- a/dask/dataframe/tests/test_categorical.py
+++ b/dask/dataframe/tests/test_categorical.py
@@ -119,12 +119,14 @@ def test_is_categorical_dtype():
def test_categorize():
- meta = clear_known_categories(frames4[0])
+ # rename y to y_ to avoid pandas future warning about ambiguous
+ # levels
+ meta = clear_known_categories(frames4[0]).rename(columns={'y': 'y_'})
ddf = dd.DataFrame({('unknown', i): df for (i, df) in enumerate(frames3)},
- 'unknown', meta, [None] * 4)
+ 'unknown', meta, [None] * 4).rename(columns={'y': 'y_'})
ddf = ddf.assign(w=ddf.w.cat.set_categories(['x', 'y', 'z']))
assert ddf.w.cat.known
- assert not ddf.y.cat.known
+ assert not ddf.y_.cat.known
assert not ddf.index.cat.known
df = ddf.compute()
@@ -132,27 +134,27 @@ def test_categorize():
known_index = index is not False
# By default categorize object and unknown cat columns
ddf2 = ddf.categorize(index=index)
- assert ddf2.y.cat.known
+ assert ddf2.y_.cat.known
assert ddf2.v.cat.known
assert ddf2.index.cat.known == known_index
assert_eq(ddf2, df.astype({'v': 'category'}), check_categorical=False)
# Specifying split_every works
ddf2 = ddf.categorize(index=index, split_every=2)
- assert ddf2.y.cat.known
+ assert ddf2.y_.cat.known
assert ddf2.v.cat.known
assert ddf2.index.cat.known == known_index
assert_eq(ddf2, df.astype({'v': 'category'}), check_categorical=False)
# Specifying one column doesn't affect others
ddf2 = ddf.categorize('v', index=index)
- assert not ddf2.y.cat.known
+ assert not ddf2.y_.cat.known
assert ddf2.v.cat.known
assert ddf2.index.cat.known == known_index
assert_eq(ddf2, df.astype({'v': 'category'}), check_categorical=False)
- ddf2 = ddf.categorize('y', index=index)
- assert ddf2.y.cat.known
+ ddf2 = ddf.categorize('y_', index=index)
+ assert ddf2.y_.cat.known
assert ddf2.v.dtype == 'object'
assert ddf2.index.cat.known == known_index
assert_eq(ddf2, df)
@@ -188,7 +190,7 @@ def test_categorize_index():
assert ddf.categorize(index=False) is ddf
# Non-object dtype
- ddf = dd.from_pandas(df.set_index(df.A), npartitions=5)
+ ddf = dd.from_pandas(df.set_index(df.A.rename('idx')), npartitions=5)
df = ddf.compute()
ddf2 = ddf.categorize(index=True)
diff --git a/dask/dataframe/tests/test_dataframe.py b/dask/dataframe/tests/test_dataframe.py
index fe226aa47..75caaa623 100644
--- a/dask/dataframe/tests/test_dataframe.py
+++ b/dask/dataframe/tests/test_dataframe.py
@@ -1,4 +1,5 @@
import sys
+import textwrap
from distutils.version import LooseVersion
from itertools import product
from operator import add
@@ -1612,12 +1613,19 @@ def test_select_dtypes(include, exclude):
# count dtypes
tm.assert_series_equal(a.get_dtype_counts(), df.get_dtype_counts())
- tm.assert_series_equal(a.get_ftype_counts(), df.get_ftype_counts())
tm.assert_series_equal(result.get_dtype_counts(),
expected.get_dtype_counts())
- tm.assert_series_equal(result.get_ftype_counts(),
- expected.get_ftype_counts())
+
+ if PANDAS_VERSION >= '0.23.0':
+ ctx = pytest.warns(FutureWarning)
+ else:
+ ctx = pytest.warns(None)
+
+ with ctx:
+ tm.assert_series_equal(a.get_ftype_counts(), df.get_ftype_counts())
+ tm.assert_series_equal(result.get_ftype_counts(),
+ expected.get_ftype_counts())
def test_deterministic_apply_concat_apply_names():
@@ -2097,7 +2105,7 @@ def test_cov_corr_stable():
def test_cov_corr_mixed():
size = 1000
- d = {'dates' : pd.date_range('2015-01-01', periods=size, frequency='1T'),
+ d = {'dates' : pd.date_range('2015-01-01', periods=size, freq='1T'),
'unique_id' : np.arange(0, size),
'ints' : np.random.randint(0, size, size=size),
'floats' : np.random.randn(size),
@@ -2415,9 +2423,11 @@ dtypes: int64(1)""")
buf = StringIO()
g.info(buf, verbose=False)
- assert buf.getvalue() == unicode("""<class 'dask.dataframe.core.DataFrame'>
-Columns: 2 entries, (C, count) to (C, sum)
-dtypes: int64(2)""")
+ expected = unicode(textwrap.dedent("""\
+ <class 'dask.dataframe.core.DataFrame'>
+ Columns: 2 entries, ('C', 'count') to ('C', 'sum')
+ dtypes: int64(2)"""))
+ assert buf.getvalue() == expected
def test_categorize_info():
diff --git a/dask/dataframe/tests/test_indexing.py b/dask/dataframe/tests/test_indexing.py
index adcf4c503..ebc124ea6 100644
--- a/dask/dataframe/tests/test_indexing.py
+++ b/dask/dataframe/tests/test_indexing.py
@@ -7,7 +7,7 @@ import pytest
import dask.dataframe as dd
from dask.dataframe.indexing import _coerce_loc_index
-from dask.dataframe.utils import assert_eq, make_meta
+from dask.dataframe.utils import assert_eq, make_meta, PANDAS_VERSION
dsk = {('x', 0): pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]},
@@ -32,18 +32,30 @@ def test_loc():
assert_eq(d.loc[:8], full.loc[:8])
assert_eq(d.loc[3:], full.loc[3:])
assert_eq(d.loc[[5]], full.loc[[5]])
- assert_eq(d.loc[[3, 4, 1, 8]], full.loc[[3, 4, 1, 8]])
- assert_eq(d.loc[[3, 4, 1, 9]], full.loc[[3, 4, 1, 9]])
- assert_eq(d.loc[np.array([3, 4, 1, 9])], full.loc[np.array([3, 4, 1, 9])])
+
+ if PANDAS_VERSION >= '0.23.0':
+ expected_warning = FutureWarning
+ else:
+ expected_warning = None
+
+ with pytest.warns(expected_warning):
+ assert_eq(d.loc[[3, 4, 1, 8]], full.loc[[3, 4, 1, 8]])
+ with pytest.warns(expected_warning):
+ assert_eq(d.loc[[3, 4, 1, 9]], full.loc[[3, 4, 1, 9]])
+ with pytest.warns(expected_warning):
+ assert_eq(d.loc[np.array([3, 4, 1, 9])], full.loc[np.array([3, 4, 1, 9])])
assert_eq(d.a.loc[5], full.a.loc[5:5])
assert_eq(d.a.loc[3:8], full.a.loc[3:8])
assert_eq(d.a.loc[:8], full.a.loc[:8])
assert_eq(d.a.loc[3:], full.a.loc[3:])
assert_eq(d.a.loc[[5]], full.a.loc[[5]])
- assert_eq(d.a.loc[[3, 4, 1, 8]], full.a.loc[[3, 4, 1, 8]])
- assert_eq(d.a.loc[[3, 4, 1, 9]], full.a.loc[[3, 4, 1, 9]])
- assert_eq(d.a.loc[np.array([3, 4, 1, 9])], full.a.loc[np.array([3, 4, 1, 9])])
+ with pytest.warns(expected_warning):
+ assert_eq(d.a.loc[[3, 4, 1, 8]], full.a.loc[[3, 4, 1, 8]])
+ with pytest.warns(expected_warning):
+ assert_eq(d.a.loc[[3, 4, 1, 9]], full.a.loc[[3, 4, 1, 9]])
+ with pytest.warns(expected_warning):
+ assert_eq(d.a.loc[np.array([3, 4, 1, 9])], full.a.loc[np.array([3, 4, 1, 9])])
assert_eq(d.a.loc[[]], full.a.loc[[]])
assert_eq(d.a.loc[np.array([])], full.a.loc[np.array([])])
diff --git a/dask/dataframe/tests/test_rolling.py b/dask/dataframe/tests/test_rolling.py
index 5b34ee690..28e17abb6 100644
--- a/dask/dataframe/tests/test_rolling.py
+++ b/dask/dataframe/tests/test_rolling.py
@@ -3,7 +3,7 @@ import pytest
import numpy as np
import dask.dataframe as dd
-from dask.dataframe.utils import assert_eq
+from dask.dataframe.utils import assert_eq, PANDAS_VERSION
N = 40
df = pd.DataFrame({'a': np.random.randn(N).cumsum(),
@@ -122,18 +122,28 @@ def test_rolling_methods(method, args, window, center, check_less_precise):
# DataFrame
prolling = df.rolling(window, center=center)
drolling = ddf.rolling(window, center=center)
- assert_eq(getattr(prolling, method)(*args),
- getattr(drolling, method)(*args),
+ if method == 'apply' and PANDAS_VERSION >= '0.23.0':
+ kwargs = {'raw': False}
+ else:
+ kwargs = {}
+ assert_eq(getattr(prolling, method)(*args, **kwargs),
+ getattr(drolling, method)(*args, **kwargs),
check_less_precise=check_less_precise)
# Series
prolling = df.a.rolling(window, center=center)
drolling = ddf.a.rolling(window, center=center)
- assert_eq(getattr(prolling, method)(*args),
- getattr(drolling, method)(*args),
+ assert_eq(getattr(prolling, method)(*args, **kwargs),
+ getattr(drolling, method)(*args, **kwargs),
check_less_precise=check_less_precise)
[email protected](PANDAS_VERSION >= '0.23.0', reason="Raw is allowed.")
+def test_rolling_raw_pandas_lt_0230_raises():
+ with pytest.raises(TypeError):
+ df.rolling(2).apply(mad, raw=True)
+
+
def test_rolling_raises():
df = pd.DataFrame({'a': np.random.randn(25).cumsum(),
'b': np.random.randint(100, size=(25,))})
@@ -209,17 +219,21 @@ def test_time_rolling_constructor():
@pytest.mark.parametrize('window', ['1S', '2S', '3S', pd.offsets.Second(5)])
def test_time_rolling_methods(method, args, window, check_less_precise):
# DataFrame
+ if method == 'apply' and PANDAS_VERSION >= '0.23.0':
+ kwargs = {"raw": False}
+ else:
+ kwargs = {}
prolling = ts.rolling(window)
drolling = dts.rolling(window)
- assert_eq(getattr(prolling, method)(*args),
- getattr(drolling, method)(*args),
+ assert_eq(getattr(prolling, method)(*args, **kwargs),
+ getattr(drolling, method)(*args, **kwargs),
check_less_precise=check_less_precise)
# Series
prolling = ts.a.rolling(window)
drolling = dts.a.rolling(window)
- assert_eq(getattr(prolling, method)(*args),
- getattr(drolling, method)(*args),
+ assert_eq(getattr(prolling, method)(*args, **kwargs),
+ getattr(drolling, method)(*args, **kwargs),
check_less_precise=check_less_precise)
diff --git a/dask/dataframe/tests/test_ufunc.py b/dask/dataframe/tests/test_ufunc.py
index a83cc9fa4..50ce640dd 100644
--- a/dask/dataframe/tests/test_ufunc.py
+++ b/dask/dataframe/tests/test_ufunc.py
@@ -345,17 +345,17 @@ def test_2args_with_array(ufunc, pandas, darray):
assert isinstance(dafunc(dask, darray), dask_type)
assert isinstance(dafunc(darray, dask), dask_type)
- tm.assert_numpy_array_equal(dafunc(dask, darray).compute().as_matrix(),
- npfunc(pandas.as_matrix(), darray).compute())
+ tm.assert_numpy_array_equal(dafunc(dask, darray).compute().values,
+ npfunc(pandas.values, darray).compute())
# applying NumPy ufunc is lazy
assert isinstance(npfunc(dask, darray), dask_type)
assert isinstance(npfunc(darray, dask), dask_type)
- tm.assert_numpy_array_equal(npfunc(dask, darray).compute().as_matrix(),
- npfunc(pandas.as_matrix(), darray.compute()))
- tm.assert_numpy_array_equal(npfunc(darray, dask).compute().as_matrix(),
- npfunc(darray.compute(), pandas.as_matrix()))
+ tm.assert_numpy_array_equal(npfunc(dask, darray).compute().values,
+ npfunc(pandas.values, darray.compute()))
+ tm.assert_numpy_array_equal(npfunc(darray, dask).compute().values,
+ npfunc(darray.compute(), pandas.values))
@pytest.mark.parametrize('redfunc', ['sum', 'prod', 'min', 'max', 'mean'])
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 4
} | 1.21 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[complete]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8",
"pytest-xdist",
"moto"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
boto3==1.23.10
botocore==1.26.10
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==2.2.1
cryptography==40.0.2
-e git+https://github.com/dask/dask.git@48c4a589393ebc5b335cc5c7df291901401b0b15#egg=dask
dataclasses==0.8
distributed==1.21.8
execnet==1.9.0
flake8==5.0.4
HeapDict==1.0.1
idna==3.10
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
jmespath==0.10.0
locket==1.0.0
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
moto==4.0.13
msgpack==1.0.5
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
partd==1.2.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-xdist==3.0.2
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.27.1
responses==0.17.0
s3transfer==0.5.2
six==1.17.0
sortedcontainers==2.4.0
tblib==1.7.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
toolz==0.12.0
tornado==6.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
Werkzeug==2.0.3
xmltodict==0.14.2
zict==2.1.0
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: dask
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.23.10
- botocore==1.26.10
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.2.1
- cryptography==40.0.2
- dataclasses==0.8
- distributed==1.21.8
- execnet==1.9.0
- flake8==5.0.4
- heapdict==1.0.1
- idna==3.10
- importlib-metadata==4.2.0
- jinja2==3.0.3
- jmespath==0.10.0
- locket==1.0.0
- markupsafe==2.0.1
- mccabe==0.7.0
- moto==4.0.13
- msgpack==1.0.5
- numpy==1.19.5
- pandas==1.1.5
- partd==1.2.0
- psutil==7.0.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pytest-xdist==3.0.2
- python-dateutil==2.9.0.post0
- pytz==2025.2
- requests==2.27.1
- responses==0.17.0
- s3transfer==0.5.2
- six==1.17.0
- sortedcontainers==2.4.0
- tblib==1.7.0
- toolz==0.12.0
- tornado==6.1
- urllib3==1.26.20
- werkzeug==2.0.3
- xmltodict==0.14.2
- zict==2.1.0
prefix: /opt/conda/envs/dask
| [
"dask/dataframe/tests/test_dataframe.py::test_categorize_info",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-apply-args11-False]"
]
| [
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_unused_categories-kwargs8-series2]",
"dask/dataframe/tests/test_dataframe.py::test_Dataframe",
"dask/dataframe/tests/test_dataframe.py::test_attributes",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[npartitions1]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2-5]",
"dask/dataframe/tests/test_dataframe.py::test_clip[2.5-3.5]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_picklable",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_month",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include0-None]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[None-exclude1]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include2-exclude2]",
"dask/dataframe/tests/test_dataframe.py::test_select_dtypes[include3-None]",
"dask/dataframe/tests/test_dataframe.py::test_to_timestamp",
"dask/dataframe/tests/test_dataframe.py::test_apply",
"dask/dataframe/tests/test_dataframe.py::test_apply_infer_columns",
"dask/dataframe/tests/test_dataframe.py::test_info",
"dask/dataframe/tests/test_dataframe.py::test_groupby_multilevel_info",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx2-False]",
"dask/dataframe/tests/test_dataframe.py::test_shift",
"dask/dataframe/tests/test_dataframe.py::test_shift_with_freq",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[first]",
"dask/dataframe/tests/test_dataframe.py::test_first_and_last[last]",
"dask/dataframe/tests/test_dataframe.py::test_datetime_loc_open_slicing",
"dask/dataframe/tests/test_indexing.py::test_loc",
"dask/dataframe/tests/test_indexing.py::test_loc_with_text_dates",
"dask/dataframe/tests/test_indexing.py::test_loc2d",
"dask/dataframe/tests/test_indexing.py::test_getitem",
"dask/dataframe/tests/test_indexing.py::test_loc_on_numpy_datetimes",
"dask/dataframe/tests/test_indexing.py::test_loc_on_pandas_datetimes",
"dask/dataframe/tests/test_indexing.py::test_loc_datetime_no_freq",
"dask/dataframe/tests/test_indexing.py::test_loc_timestamp_str",
"dask/dataframe/tests/test_indexing.py::test_getitem_timestamp_str",
"dask/dataframe/tests/test_indexing.py::test_getitem_period_str",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[1S-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[2S-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[3S-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_methods[window3-apply-args11-False]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_window_too_large[window0]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_window_too_large[window1]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling[6s-6s]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling[2s-2s]",
"dask/dataframe/tests/test_rolling.py::test_time_rolling[6s-2s]"
]
| [
"dask/dataframe/tests/test_categorical.py::test_concat_unions_categoricals",
"dask/dataframe/tests/test_categorical.py::test_unknown_categoricals",
"dask/dataframe/tests/test_categorical.py::test_is_categorical_dtype",
"dask/dataframe/tests/test_categorical.py::test_categorize",
"dask/dataframe/tests/test_categorical.py::test_categorize_index",
"dask/dataframe/tests/test_categorical.py::test_categorical_set_index[disk]",
"dask/dataframe/tests/test_categorical.py::test_categorical_set_index[tasks]",
"dask/dataframe/tests/test_categorical.py::test_repartition_on_categoricals[1]",
"dask/dataframe/tests/test_categorical.py::test_repartition_on_categoricals[4]",
"dask/dataframe/tests/test_categorical.py::test_categorical_accessor_presence",
"dask/dataframe/tests/test_categorical.py::test_categorize_nan",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[categories-assert_array_index_eq-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[categories-assert_array_index_eq-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[categories-assert_array_index_eq-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[ordered-assert_eq-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[ordered-assert_eq-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[ordered-assert_eq-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[codes-assert_array_index_eq-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[codes-assert_array_index_eq-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_properties[codes-assert_array_index_eq-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[add_categories-kwargs0-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[add_categories-kwargs0-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[add_categories-kwargs0-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs1-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs1-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs1-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_unordered-kwargs2-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_unordered-kwargs2-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_unordered-kwargs2-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs3-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs3-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[as_ordered-kwargs3-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_categories-kwargs4-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_categories-kwargs4-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_categories-kwargs4-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[rename_categories-kwargs5-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[rename_categories-kwargs5-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[rename_categories-kwargs5-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[reorder_categories-kwargs6-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[reorder_categories-kwargs6-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[reorder_categories-kwargs6-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[set_categories-kwargs7-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[set_categories-kwargs7-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[set_categories-kwargs7-series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_unused_categories-kwargs8-series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_callable[remove_unused_categories-kwargs8-series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_categorical_empty",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_unknown_categories[series0]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_unknown_categories[series1]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_unknown_categories[series2]",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_categorical_string_ops",
"dask/dataframe/tests/test_categorical.py::TestCategoricalAccessor::test_categorical_non_string_raises",
"dask/dataframe/tests/test_dataframe.py::test_head_tail",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions",
"dask/dataframe/tests/test_dataframe.py::test_head_npartitions_warn",
"dask/dataframe/tests/test_dataframe.py::test_index_head",
"dask/dataframe/tests/test_dataframe.py::test_Series",
"dask/dataframe/tests/test_dataframe.py::test_Index",
"dask/dataframe/tests/test_dataframe.py::test_Scalar",
"dask/dataframe/tests/test_dataframe.py::test_column_names",
"dask/dataframe/tests/test_dataframe.py::test_index_names",
"dask/dataframe/tests/test_dataframe.py::test_timezone_freq[1]",
"dask/dataframe/tests/test_dataframe.py::test_rename_columns",
"dask/dataframe/tests/test_dataframe.py::test_rename_series",
"dask/dataframe/tests/test_dataframe.py::test_rename_series_method",
"dask/dataframe/tests/test_dataframe.py::test_describe",
"dask/dataframe/tests/test_dataframe.py::test_describe_empty",
"dask/dataframe/tests/test_dataframe.py::test_cumulative",
"dask/dataframe/tests/test_dataframe.py::test_dropna",
"dask/dataframe/tests/test_dataframe.py::test_squeeze",
"dask/dataframe/tests/test_dataframe.py::test_where_mask",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_multi_argument",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_column_info",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_method_names",
"dask/dataframe/tests/test_dataframe.py::test_map_partitions_keeps_kwargs_readable",
"dask/dataframe/tests/test_dataframe.py::test_metadata_inference_single_partition_aligned_args",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates",
"dask/dataframe/tests/test_dataframe.py::test_drop_duplicates_subset",
"dask/dataframe/tests/test_dataframe.py::test_get_partition",
"dask/dataframe/tests/test_dataframe.py::test_ndim",
"dask/dataframe/tests/test_dataframe.py::test_dtype",
"dask/dataframe/tests/test_dataframe.py::test_value_counts",
"dask/dataframe/tests/test_dataframe.py::test_unique",
"dask/dataframe/tests/test_dataframe.py::test_isin",
"dask/dataframe/tests/test_dataframe.py::test_len",
"dask/dataframe/tests/test_dataframe.py::test_size",
"dask/dataframe/tests/test_dataframe.py::test_nbytes",
"dask/dataframe/tests/test_dataframe.py::test_quantile",
"dask/dataframe/tests/test_dataframe.py::test_quantile_missing",
"dask/dataframe/tests/test_dataframe.py::test_empty_quantile",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_quantile",
"dask/dataframe/tests/test_dataframe.py::test_index",
"dask/dataframe/tests/test_dataframe.py::test_assign",
"dask/dataframe/tests/test_dataframe.py::test_map",
"dask/dataframe/tests/test_dataframe.py::test_concat",
"dask/dataframe/tests/test_dataframe.py::test_args",
"dask/dataframe/tests/test_dataframe.py::test_known_divisions",
"dask/dataframe/tests/test_dataframe.py::test_unknown_divisions",
"dask/dataframe/tests/test_dataframe.py::test_align[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align[left]",
"dask/dataframe/tests/test_dataframe.py::test_align[right]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[inner]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[outer]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[left]",
"dask/dataframe/tests/test_dataframe.py::test_align_axis[right]",
"dask/dataframe/tests/test_dataframe.py::test_combine",
"dask/dataframe/tests/test_dataframe.py::test_combine_first",
"dask/dataframe/tests/test_dataframe.py::test_random_partitions",
"dask/dataframe/tests/test_dataframe.py::test_series_round",
"dask/dataframe/tests/test_dataframe.py::test_repartition_divisions",
"dask/dataframe/tests/test_dataframe.py::test_repartition_on_pandas_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>0-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-int-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-float-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-1-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-2-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-4-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-1-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-2-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-4-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-True]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions[<lambda>1-M8[ns]-5-5-False]",
"dask/dataframe/tests/test_dataframe.py::test_repartition_npartitions_same_limits",
"dask/dataframe/tests/test_dataframe.py::test_repartition_object_index",
"dask/dataframe/tests/test_dataframe.py::test_repartition_freq_errors",
"dask/dataframe/tests/test_dataframe.py::test_embarrassingly_parallel_operations",
"dask/dataframe/tests/test_dataframe.py::test_fillna",
"dask/dataframe/tests/test_dataframe.py::test_fillna_multi_dataframe",
"dask/dataframe/tests/test_dataframe.py::test_ffill_bfill",
"dask/dataframe/tests/test_dataframe.py::test_fillna_series_types",
"dask/dataframe/tests/test_dataframe.py::test_sample",
"dask/dataframe/tests/test_dataframe.py::test_sample_without_replacement",
"dask/dataframe/tests/test_dataframe.py::test_datetime_accessor",
"dask/dataframe/tests/test_dataframe.py::test_str_accessor",
"dask/dataframe/tests/test_dataframe.py::test_empty_max",
"dask/dataframe/tests/test_dataframe.py::test_deterministic_apply_concat_apply_names",
"dask/dataframe/tests/test_dataframe.py::test_aca_meta_infer",
"dask/dataframe/tests/test_dataframe.py::test_aca_split_every",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method",
"dask/dataframe/tests/test_dataframe.py::test_reduction_method_split_every",
"dask/dataframe/tests/test_dataframe.py::test_pipe",
"dask/dataframe/tests/test_dataframe.py::test_gh_517",
"dask/dataframe/tests/test_dataframe.py::test_drop_axis_1",
"dask/dataframe/tests/test_dataframe.py::test_gh580",
"dask/dataframe/tests/test_dataframe.py::test_rename_dict",
"dask/dataframe/tests/test_dataframe.py::test_rename_function",
"dask/dataframe/tests/test_dataframe.py::test_rename_index",
"dask/dataframe/tests/test_dataframe.py::test_to_frame",
"dask/dataframe/tests/test_dataframe.py::test_applymap",
"dask/dataframe/tests/test_dataframe.py::test_abs",
"dask/dataframe/tests/test_dataframe.py::test_round",
"dask/dataframe/tests/test_dataframe.py::test_cov",
"dask/dataframe/tests/test_dataframe.py::test_corr",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_meta",
"dask/dataframe/tests/test_dataframe.py::test_cov_corr_mixed",
"dask/dataframe/tests/test_dataframe.py::test_autocorr",
"dask/dataframe/tests/test_dataframe.py::test_index_time_properties",
"dask/dataframe/tests/test_dataframe.py::test_nlargest_nsmallest",
"dask/dataframe/tests/test_dataframe.py::test_reset_index",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_compute_forward_kwargs",
"dask/dataframe/tests/test_dataframe.py::test_series_iteritems",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_iterrows",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_itertuples",
"dask/dataframe/tests/test_dataframe.py::test_astype",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals",
"dask/dataframe/tests/test_dataframe.py::test_astype_categoricals_known",
"dask/dataframe/tests/test_dataframe.py::test_groupby_callable",
"dask/dataframe/tests/test_dataframe.py::test_methods_tokenize_differently",
"dask/dataframe/tests/test_dataframe.py::test_gh_1301",
"dask/dataframe/tests/test_dataframe.py::test_timeseries_sorted",
"dask/dataframe/tests/test_dataframe.py::test_column_assignment",
"dask/dataframe/tests/test_dataframe.py::test_columns_assignment",
"dask/dataframe/tests/test_dataframe.py::test_attribute_assignment",
"dask/dataframe/tests/test_dataframe.py::test_setitem_triggering_realign",
"dask/dataframe/tests/test_dataframe.py::test_inplace_operators",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx0-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-True]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin[idx1-False]",
"dask/dataframe/tests/test_dataframe.py::test_idxmaxmin_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_getitem_meta",
"dask/dataframe/tests/test_dataframe.py::test_getitem_multilevel",
"dask/dataframe/tests/test_dataframe.py::test_getitem_string_subclass",
"dask/dataframe/tests/test_dataframe.py::test_diff",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[None-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[1-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[5-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-2-20]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-1]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-4]",
"dask/dataframe/tests/test_dataframe.py::test_hash_split_unique[20-5-20]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_drop_duplicates[2]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[None]",
"dask/dataframe/tests/test_dataframe.py::test_split_out_value_counts[2]",
"dask/dataframe/tests/test_dataframe.py::test_values",
"dask/dataframe/tests/test_dataframe.py::test_copy",
"dask/dataframe/tests/test_dataframe.py::test_del",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[True-False]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-True]",
"dask/dataframe/tests/test_dataframe.py::test_memory_usage[False-False]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sum]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[mean]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[std]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[var]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[count]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[min]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[max]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmin]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[idxmax]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[prod]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[all]",
"dask/dataframe/tests/test_dataframe.py::test_dataframe_reductions_arithmetic[sem]",
"dask/dataframe/tests/test_dataframe.py::test_to_datetime",
"dask/dataframe/tests/test_dataframe.py::test_to_timedelta",
"dask/dataframe/tests/test_dataframe.py::test_isna[values0]",
"dask/dataframe/tests/test_dataframe.py::test_isna[values1]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[0]",
"dask/dataframe/tests/test_dataframe.py::test_slice_on_filtered_boundary[9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_nonmonotonic",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-False-drop0]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1-None-False-True-drop1]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-False-False-drop2]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3-True-False-drop3]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-False-drop4]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-0.5-None-False-True-drop5]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[-1.5-None-False-True-drop6]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-False-False-drop7]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-3.5-True-False-drop8]",
"dask/dataframe/tests/test_dataframe.py::test_with_boundary[None-2.5-False-False-drop9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index0-0-9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index1--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index2-None-10]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index3-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index4--1-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index5-None-2]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index6--2-3]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index7-None-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index8-left8-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index9-None-right9]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index10-left10-None]",
"dask/dataframe/tests/test_dataframe.py::test_boundary_slice_same[index11-None-right11]",
"dask/dataframe/tests/test_dataframe.py::test_better_errors_object_reductions",
"dask/dataframe/tests/test_dataframe.py::test_sample_empty_partitions",
"dask/dataframe/tests/test_dataframe.py::test_coerce",
"dask/dataframe/tests/test_dataframe.py::test_bool",
"dask/dataframe/tests/test_dataframe.py::test_cumulative_multiple_columns",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[asarray]",
"dask/dataframe/tests/test_dataframe.py::test_map_partition_array[func1]",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_operations_errors",
"dask/dataframe/tests/test_dataframe.py::test_mixed_dask_array_multi_dimensional",
"dask/dataframe/tests/test_dataframe.py::test_meta_raises",
"dask/dataframe/tests/test_indexing.py::test_loc_non_informative_index",
"dask/dataframe/tests/test_indexing.py::test_loc_with_series",
"dask/dataframe/tests/test_indexing.py::test_loc_with_series_different_partition",
"dask/dataframe/tests/test_indexing.py::test_loc2d_with_known_divisions",
"dask/dataframe/tests/test_indexing.py::test_loc2d_with_unknown_divisions",
"dask/dataframe/tests/test_indexing.py::test_loc2d_duplicated_columns",
"dask/dataframe/tests/test_indexing.py::test_getitem_slice",
"dask/dataframe/tests/test_indexing.py::test_coerce_loc_index",
"dask/dataframe/tests/test_indexing.py::test_loc_period_str",
"dask/dataframe/tests/test_rolling.py::test_map_overlap[1]",
"dask/dataframe/tests/test_rolling.py::test_map_overlap[4]",
"dask/dataframe/tests/test_rolling.py::test_map_partitions_names",
"dask/dataframe/tests/test_rolling.py::test_map_partitions_errors",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-1-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-2-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-4-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[True-5-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-1-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-2-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-4-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-count-args0-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-sum-args1-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-mean-args2-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-median-args3-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-min-args4-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-max-args5-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-std-args6-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-var-args7-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-skew-args8-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-kurt-args9-True]",
"dask/dataframe/tests/test_rolling.py::test_rolling_methods[False-5-quantile-args10-False]",
"dask/dataframe/tests/test_rolling.py::test_rolling_raises",
"dask/dataframe/tests/test_rolling.py::test_rolling_names",
"dask/dataframe/tests/test_rolling.py::test_rolling_axis",
"dask/dataframe/tests/test_rolling.py::test_rolling_partition_size",
"dask/dataframe/tests/test_rolling.py::test_rolling_repr",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_repr",
"dask/dataframe/tests/test_rolling.py::test_time_rolling_constructor",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[conj-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log2-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log10-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[log1p-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[expm1-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sqrt-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[square-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sin-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cos-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsin-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccos-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sinh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cosh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[tanh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arcsinh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arccosh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[arctanh-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[deg2rad-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rad2deg-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isfinite-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isinf-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[isnan-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[signbit-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[degrees-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[radians-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[rint-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[fabs-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[sign-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[absolute-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[floor-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[ceil-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[trunc-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[logical_not-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[cbrt-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[exp2-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[negative-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[reciprocal-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input3]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input4]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc[spacing-pandas_input5]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[isreal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[iscomplex]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[real]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[imag]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[angle]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[i0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[sinc]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_array_wrap[nan_to_num]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-greater]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-less]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>0-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-greater]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-less]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-equal]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_2args[<lambda>1-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_clip[pandas0-5-50]",
"dask/dataframe/tests/test_ufunc.py::test_clip[pandas1-5.5-40.5]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[conj]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[exp]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log2]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log10]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[log1p]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[expm1]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sqrt]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[square]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sin]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cos]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[tan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arcsin]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arccos]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arctan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sinh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cosh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[tanh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arcsinh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arccosh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[arctanh]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[deg2rad]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[rad2deg]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isfinite]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isinf]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[isnan]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[signbit]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[degrees]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[radians]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[rint]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[fabs]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[sign]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[absolute]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[floor]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[ceil]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[trunc]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[logical_not]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[cbrt]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[exp2]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[negative]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[reciprocal]",
"dask/dataframe/tests/test_ufunc.py::test_frame_ufunc_out[spacing]",
"dask/dataframe/tests/test_ufunc.py::test_frame_2ufunc_out",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logaddexp2-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[arctan2-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[hypot-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[copysign-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[nextafter-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[ldexp-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmod-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor0-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[maximum-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[minimum-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmax-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[fmin-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[greater_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[less_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[not_equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[equal-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_or1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_and1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-2-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-2-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-arg21-arg10]",
"dask/dataframe/tests/test_ufunc.py::test_mixed_types[logical_xor1-arg21-arg11]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-greater]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-less]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas0-darray0-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logaddexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logaddexp2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-arctan2]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-hypot]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-copysign]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-nextafter]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-ldexp]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmod]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_and0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_or0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_xor0]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-maximum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-minimum]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmax]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-fmin]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-greater]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-greater_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-less]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-less_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-not_equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-equal]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_or1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_and1]",
"dask/dataframe/tests/test_ufunc.py::test_2args_with_array[pandas1-darray1-logical_xor1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-conj-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log10-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-log1p-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-expm1-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sqrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-square-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-tanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arcsinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arccosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-arctanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-deg2rad-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rad2deg-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isfinite-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isinf-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-isnan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-signbit-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-degrees-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-radians-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-rint-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-fabs-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-sign-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-absolute-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-floor-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-ceil-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-trunc-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-logical_not-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-cbrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-exp2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-negative-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-reciprocal-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas0-spacing-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-conj-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log10-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-log1p-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-expm1-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sqrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-square-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsin-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccos-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-tanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arcsinh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arccosh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-arctanh-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-deg2rad-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rad2deg-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isfinite-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isinf-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-isnan-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-signbit-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-degrees-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-radians-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-rint-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-fabs-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-sign-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-absolute-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-floor-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-ceil-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-trunc-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-logical_not-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-cbrt-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-exp2-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-negative-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-reciprocal-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-sum]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-prod]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-min]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-max]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_with_reduction[pandas1-spacing-mean]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[15-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[15-pandas1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.40-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.40-pandas1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[scalar2-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[scalar2-pandas1]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.41-pandas0]",
"dask/dataframe/tests/test_ufunc.py::test_ufunc_numpy_scalar_comparison[16.41-pandas1]"
]
| [
"dask/dataframe/tests/test_dataframe.py::test_apply_warns"
]
| BSD 3-Clause "New" or "Revised" License | 2,526 | [
"dask/dataframe/core.py",
"docs/source/changelog.rst",
"dask/dataframe/utils.py",
"dask/dataframe/rolling.py"
]
| [
"dask/dataframe/core.py",
"docs/source/changelog.rst",
"dask/dataframe/utils.py",
"dask/dataframe/rolling.py"
]
|
streamlink__streamlink-1655 | e2a55461decc6856912325e8103cefb359027811 | 2018-05-15 16:20:20 | 060d38d3f0acc2c4f3b463ea988361622a9b6544 | codecov[bot]: # [Codecov](https://codecov.io/gh/streamlink/streamlink/pull/1655?src=pr&el=h1) Report
> Merging [#1655](https://codecov.io/gh/streamlink/streamlink/pull/1655?src=pr&el=desc) into [master](https://codecov.io/gh/streamlink/streamlink/commit/e2a55461decc6856912325e8103cefb359027811?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #1655 +/- ##
==========================================
+ Coverage 33.16% 33.17% +<.01%
==========================================
Files 229 229
Lines 12898 12898
==========================================
+ Hits 4278 4279 +1
+ Misses 8620 8619 -1
```
gravyboat: @beardypig This is a good addition, what do you think about adding an example of `socks4a` or `socks5h` just in case there are any language barriers that could result in confusion:
```
.. code-block:: console
$ streamlink --http-proxy "socks5h://10.10.1.10:3128/" --https-proxy "socks5h://10.10.1.10:1242"
```
beardypig: Could add another example :) Where do you suggest? After the note? Or do we remove the note and just expand the socks explanation?
<sub>Sent with <a href="http://githawk.com">GitHawk</a></sub>
gravyboat: @beardypig I think keeping the note is important for clarity, so I'd say just after the note.
beardypig: I moved the note to above the examples, and added an example for `socks5h/socks4a`.
<img width="730" alt="socks-docs-preview" src="https://user-images.githubusercontent.com/16033421/40117387-1e1c3496-5917-11e8-91bb-bfca7c0845c4.png">
bastimeyer: Is there a requests doc site other than
https://github.com/requests/requests/blob/master/docs/user/advanced.rst#socks
which could be added here as an additional source of information?
Their external docs site is served via regular http and doesn't contain any information about the socks proxy usage :unamused: | diff --git a/docs/cli.rst b/docs/cli.rst
index 90a91512..e96b1c8b 100644
--- a/docs/cli.rst
+++ b/docs/cli.rst
@@ -337,14 +337,19 @@ change the proxy server that Streamlink will use for HTTP and HTTPS requests res
As HTTP and HTTPS requests can be handled by separate proxies, you may need to specify both
options if the plugin you use makes HTTP and HTTPS requests.
-Both HTTP and SOCKS5 proxies are supported, authentication is supported for both types.
+Both HTTP and SOCKS proxies are supported, authentication is supported for both types.
+
+.. note::
+ When using a SOCKS proxy the ``socks4`` and ``socks5`` schemes mean that DNS lookups are done
+ locally, rather than on the proxy server. To have the proxy server perform the DNS lookups, the
+ ``socks4a`` and ``socks5h`` schemes should be used instead.
For example:
.. code-block:: console
$ streamlink --http-proxy "http://user:[email protected]:3128/" --https-proxy "socks5://10.10.1.10:1242"
-
+ $ streamlink --http-proxy "socks4a://10.10.1.10:1235" --https-proxy "socks5h://10.10.1.10:1234"
Command-line usage
------------------
diff --git a/src/streamlink/plugins/tf1.py b/src/streamlink/plugins/tf1.py
index 189f124c..88b5e585 100644
--- a/src/streamlink/plugins/tf1.py
+++ b/src/streamlink/plugins/tf1.py
@@ -9,13 +9,20 @@ from streamlink.stream import HLSStream
class TF1(Plugin):
- url_re = re.compile(r"https?://(?:www\.)?(?:tf1\.fr/(tf1|tmc|tfx|tf1-series-films)/direct|(lci).fr/direct)/?")
+ url_re = re.compile(r"https?://(?:www\.)?(?:tf1\.fr/([\w-]+)/direct|(lci).fr/direct)/?")
embed_url = "http://www.wat.tv/embedframe/live{0}"
embed_re = re.compile(r"urlLive.*?:.*?\"(http.*?)\"", re.MULTILINE)
api_url = "http://www.wat.tv/get/{0}/591997"
swf_url = "http://www.wat.tv/images/v70/PlayerLite.swf"
- hds_channel_remap = {"tf1": "androidliveconnect", "lci": "androidlivelci", "tfx" : "nt1live", "tf1-series-films" : "hd1live" }
- hls_channel_remap = {"lci": "LCI", "tf1": "V4", "tfx" : "nt1", "tf1-series-films" : "hd1" }
+ hds_channel_remap = {"tf1": "androidliveconnect",
+ "lci": "androidlivelci",
+ "tfx": "nt1live",
+ "hd1": "hd1live", # renamed to tfx
+ "tf1-series-films": "hd1live"}
+ hls_channel_remap = {"lci": "LCI",
+ "tf1": "V4",
+ "tfx": "nt1",
+ "tf1-series-films": "hd1"}
@classmethod
def can_handle_url(cls, url):
@@ -23,6 +30,7 @@ class TF1(Plugin):
def _get_hds_streams(self, channel):
channel = self.hds_channel_remap.get(channel, "{0}live".format(channel))
+ self.logger.debug("Using HDS channel name: {0}".format(channel))
manifest_url = http.get(self.api_url.format(channel),
params={"getURL": 1},
headers={"User-Agent": useragents.FIREFOX}).text
| Need an option for DNS through proxy
### Checklist
- [ ] This is a bug report.
- [x] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
Streamlink can't resolve hostnames through the proxy server. If you receive an incorrect IP for the host via DNS spoofing, the --http-proxy and --https-proxy options may not work correctly.
### Expected / Actual behavior
Expected: Streamlink connects to the YouTube server through the proxy and works fine.
Actual: Read timed out.
### Reproduction steps / Explicit stream URLs to test
1. Run a socks5 proxy server on port 1080
2. streamlink --hls-live-restart --http-proxy "socks5://127.0.0.1:1080" --https-proxy "socks5://127.0.0.1:1080" https://www.youtube.com/watch?v=fO8x9MZ8m9g
### Logs
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.6.5
[cli][debug] Streamlink: 0.12.1
[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.47.0)
[cli][info] Found matching plugin youtube for URL https://www.youtube.com/watch?v=fO8x9MZ8m9g
error: Unable to open URL: https://youtube.com/get_video_info (SOCKSHTTPSConnectionPool(host='youtube.com', port=443): Read timed out. (read timeout=20.0))
### Comments, screenshots, etc.
This is the log of my socks5 proxy when I run the step 2:
[2018-05-15 21:26:52] connect to 8.7.198.45:443
And this is the log when I visit https://youtube.com on firefox(Use the same socks5 proxy and it works fine)
[2018-05-15 21:28:51] connect to youtube.com:443
And this is the ping log to youtube.com on my pc
Pinging youtube.com [8.7.198.45] with 32 bytes of data:
Request timed out.
Request timed out.
Streamlink resolves the hostname 'youtube.com' directly and receives the incorrect IP '8.7.198.45' via DNS spoofing. Firefox works fine because it sends the DNS request through the proxy. | streamlink/streamlink | diff --git a/tests/test_plugin_tf1.py b/tests/test_plugin_tf1.py
index 77afd8d8..f8e48790 100644
--- a/tests/test_plugin_tf1.py
+++ b/tests/test_plugin_tf1.py
@@ -12,11 +12,11 @@ class TestPluginTF1(unittest.TestCase):
self.assertTrue(TF1.can_handle_url("http://lci.fr/direct"))
self.assertTrue(TF1.can_handle_url("http://www.lci.fr/direct"))
self.assertTrue(TF1.can_handle_url("http://tf1.fr/tmc/direct"))
+ self.assertTrue(TF1.can_handle_url("http://tf1.fr/lci/direct"))
+ def test_can_handle_url_negative(self):
# shouldn't match
self.assertFalse(TF1.can_handle_url("http://tf1.fr/direct"))
-# self.assertFalse(TF1.can_handle_url("http://tf1.fr/nt1/direct")) NOTE : TF1 redirect old channel names to new ones (for now).
-# self.assertFalse(TF1.can_handle_url("http://tf1.fr/hd1/direct"))
self.assertFalse(TF1.can_handle_url("http://www.tf1.fr/direct"))
self.assertFalse(TF1.can_handle_url("http://www.tvcatchup.com/"))
self.assertFalse(TF1.can_handle_url("http://www.youtube.com/"))
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 2
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"codecov",
"coverage",
"mock",
"requests-mock",
"pynsist",
"unittest2"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"dev-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
charset-normalizer==3.4.1
codecov==2.1.13
coverage==7.8.0
distlib==0.3.9
exceptiongroup==1.2.2
idna==3.10
iniconfig==2.1.0
iso-639==0.4.5
iso3166==2.1.1
Jinja2==3.1.6
linecache2==1.0.0
MarkupSafe==3.0.2
mock==5.2.0
packaging==24.2
pluggy==1.5.0
pycryptodome==3.22.0
pynsist==2.8
PySocks==1.7.1
pytest==8.3.5
pytest-cov==6.0.0
requests==2.32.3
requests-mock==1.12.1
requests_download==0.1.2
six==1.17.0
-e git+https://github.com/streamlink/streamlink.git@e2a55461decc6856912325e8103cefb359027811#egg=streamlink
tomli==2.2.1
traceback2==1.4.0
unittest2==1.1.0
urllib3==2.3.0
websocket-client==1.8.0
yarg==0.1.10
| name: streamlink
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- argparse==1.4.0
- certifi==2025.1.31
- charset-normalizer==3.4.1
- codecov==2.1.13
- coverage==7.8.0
- distlib==0.3.9
- exceptiongroup==1.2.2
- idna==3.10
- iniconfig==2.1.0
- iso-639==0.4.5
- iso3166==2.1.1
- jinja2==3.1.6
- linecache2==1.0.0
- markupsafe==3.0.2
- mock==5.2.0
- packaging==24.2
- pluggy==1.5.0
- pycryptodome==3.22.0
- pynsist==2.8
- pysocks==1.7.1
- pytest==8.3.5
- pytest-cov==6.0.0
- requests==2.32.3
- requests-download==0.1.2
- requests-mock==1.12.1
- six==1.17.0
- tomli==2.2.1
- traceback2==1.4.0
- unittest2==1.1.0
- urllib3==2.3.0
- websocket-client==1.8.0
- yarg==0.1.10
prefix: /opt/conda/envs/streamlink
| [
"tests/test_plugin_tf1.py::TestPluginTF1::test_can_handle_url"
]
| []
| [
"tests/test_plugin_tf1.py::TestPluginTF1::test_can_handle_url_negative"
]
| []
| BSD 2-Clause "Simplified" License | 2,527 | [
"src/streamlink/plugins/tf1.py",
"docs/cli.rst"
]
| [
"src/streamlink/plugins/tf1.py",
"docs/cli.rst"
]
|
mapbox__COGDumper-5 | eb6cbcfbdbc94ee8fd75450908b375fac93e3989 | 2018-05-15 19:54:57 | eb6cbcfbdbc94ee8fd75450908b375fac93e3989 | diff --git a/cogdumper/cog_tiles.py b/cogdumper/cog_tiles.py
index 0acc0c7..318bb1b 100644
--- a/cogdumper/cog_tiles.py
+++ b/cogdumper/cog_tiles.py
@@ -1,5 +1,7 @@
"""Function for extracting tiff tiles."""
+import os
+
from abc import abstractmethod
from math import ceil
import struct
@@ -41,16 +43,18 @@ class COGTiff:
reader:
A reader that implements the cogdumper.cog_tiles.AbstractReader methods
"""
- self._init = False
self._endian = '<'
self._version = 42
self.read = reader
self._big_tiff = False
+ self.header = ''
self._offset = 0
self._image_ifds = []
self._mask_ifds = []
- def ifds(self):
+ self.read_header()
+
+ def _ifds(self):
"""Reads TIFF image file directories from a COG recursively.
Parameters
-----------
@@ -68,10 +72,24 @@ class COGTiff:
next_offset = 0
pos = 0
tags = []
+
+ fallback_size = 4096 if self._big_tiff else 1024
+ if self._offset > len(self.header):
+ byte_starts = len(self.header)
+ byte_ends = byte_starts + self._offset + fallback_size
+ self.header += self.read(byte_starts, byte_ends)
+
if self._big_tiff:
- bytes = self.read(self._offset, 8)
+ bytes = self.header[self._offset: self._offset + 8]
num_tags = struct.unpack(f'{self._endian}Q', bytes)[0]
- bytes = self.read(self._offset + 8, (num_tags * 20) + 8)
+
+ byte_starts = self._offset + 8
+ byte_ends = (num_tags * 20) + 8 + byte_starts
+ if byte_ends > len(self.header):
+ s = len(self.header)
+ self.header += self.read(s, byte_ends)
+
+ bytes = self.header[byte_starts: byte_ends]
for t in range(0, num_tags):
code = struct.unpack(
@@ -100,7 +118,14 @@ class COGTiff:
f'{self._endian}Q',
bytes[pos + 12: pos + 20]
)[0]
- data = self.read(data_offset, tag_len)
+
+ byte_starts = data_offset
+ byte_ends = byte_starts + tag_len
+ if byte_ends > len(self.header):
+ s = len(self.header)
+ self.header += self.read(s, byte_ends)
+
+ data = self.header[byte_starts: byte_ends]
tags.append(
{
@@ -116,12 +141,20 @@ class COGTiff:
self._offset = self._offset + 8 + pos
next_offset = struct.unpack(
f'{self._endian}Q',
- self.read(self._offset, 8)
+ self.header[self._offset: self._offset + 8]
)[0]
else:
- bytes = self.read(self._offset, 2)
+ bytes = self.header[self._offset: self._offset + 2]
num_tags = struct.unpack(f'{self._endian}H', bytes)[0]
- bytes = self.read(self._offset + 2, (num_tags * 12) + 2)
+
+ byte_starts = self._offset + 2
+ byte_ends = (num_tags * 12) + 2 + byte_starts
+ if byte_ends > len(self.header):
+ s = len(self.header)
+ self.header += self.read(s, byte_ends)
+
+ bytes = self.header[byte_starts: byte_ends]
+
for t in range(0, num_tags):
code = struct.unpack(
f'{self._endian}H',
@@ -149,7 +182,13 @@ class COGTiff:
f'{self._endian}L',
bytes[pos + 8: pos + 12]
)[0]
- data = self.read(data_offset, tag_len)
+
+ byte_starts = data_offset
+ byte_ends = byte_starts + tag_len
+ if byte_ends > len(self.header):
+ s = len(self.header)
+ self.header += self.read(s, byte_ends)
+ data = self.header[byte_starts: byte_ends]
tags.append(
{
@@ -165,7 +204,7 @@ class COGTiff:
self._offset = self._offset + 2 + pos
next_offset = struct.unpack(
f'{self._endian}L',
- self.read(self._offset, 4)
+ self.header[self._offset: self._offset + 4]
)[0]
self._offset = next_offset
@@ -176,22 +215,25 @@ class COGTiff:
}
def read_header(self):
+ """Read and parse COG header."""
+ buff_size = int(os.environ.get('COG_INGESTED_BYTES_AT_OPEN', '16384'))
+ self.header = self.read(0, buff_size)
+
# read first 4 bytes to determine tiff or bigtiff and byte order
- bytes = self.read(0, 4)
- if bytes[:2] == b'MM':
+ if self.header[:2] == b'MM':
self._endian = '>'
- self._version = struct.unpack(f'{self._endian}H', bytes[2:4])[0]
+ self._version = struct.unpack(f'{self._endian}H', self.header[2:4])[0]
if self._version == 42:
# TIFF
self._big_tiff = False
# read offset to first IFD
- self._offset = struct.unpack(f'{self._endian}L', self.read(4, 4))[0]
+ self._offset = struct.unpack(f'{self._endian}L', self.header[4:8])[0]
elif self._version == 43:
# BIGTIFF
self._big_tiff = True
- bytes = self.read(4, 12)
+ bytes = self.header[4:16]
bytesize = struct.unpack(f'{self._endian}H', bytes[0:2])[0]
w = struct.unpack(f'{self._endian}H', bytes[2:4])[0]
self._offset = struct.unpack(f'{self._endian}Q', bytes[4:])[0]
@@ -203,7 +245,7 @@ class COGTiff:
self._init = True
# for JPEG we need to read all IFDs, they are at the front of the file
- for ifd in self.ifds():
+ for ifd in self._ifds():
mime_type = 'image/jpeg'
# tile offsets are an extension but if they aren't in the file then
# you can't get a tile back!
@@ -293,9 +335,7 @@ class COGTiff:
self._mask_ifds = []
def get_tile(self, x, y, z):
- if self._init is False:
- self.read_header()
-
+ """Read tile data."""
if z < len(self._image_ifds):
image_ifd = self._image_ifds[z]
idx = (y * image_ifd['ny_tiles']) + x
@@ -326,6 +366,4 @@ class COGTiff:
@property
def version(self):
- if self._init is False:
- self.read_header()
return self._version
diff --git a/cogdumper/filedumper.py b/cogdumper/filedumper.py
index f1454dd..27596a6 100644
--- a/cogdumper/filedumper.py
+++ b/cogdumper/filedumper.py
@@ -1,7 +1,10 @@
"""A utility to dump tiles directly from a local tiff file."""
+import logging
from cogdumper.cog_tiles import AbstractReader
+logger = logging.getLogger(__name__)
+
class Reader(AbstractReader):
"""Wraps the remote COG."""
@@ -10,5 +13,8 @@ class Reader(AbstractReader):
self._handle = handle
def read(self, offset, length):
+ start = offset
+ stop = offset + length - 1
+ logger.info(f'Reading bytes: {start} to {stop}')
self._handle.seek(offset)
return self._handle.read(length)
diff --git a/cogdumper/httpdumper.py b/cogdumper/httpdumper.py
index d76f225..8ea2a1d 100644
--- a/cogdumper/httpdumper.py
+++ b/cogdumper/httpdumper.py
@@ -1,11 +1,15 @@
"""A utility to dump tiles directly from a tiff file on a http server."""
+import logging
+
import requests
from requests.auth import HTTPBasicAuth
from cogdumper.errors import TIFFError
from cogdumper.cog_tiles import AbstractReader
+logger = logging.getLogger(__name__)
+
class Reader(AbstractReader):
"""Wraps the remote COG."""
@@ -37,6 +41,7 @@ class Reader(AbstractReader):
def read(self, offset, length):
start = offset
stop = offset + length - 1
+ logger.info(f'Reading bytes: {start} to {stop}')
headers = {'Range': f'bytes={start}-{stop}'}
r = self.session.get(self.url, auth=self.auth, headers=headers)
if r.status_code != requests.codes.partial_content:
diff --git a/cogdumper/s3dumper.py b/cogdumper/s3dumper.py
index ce60f6e..9b66652 100644
--- a/cogdumper/s3dumper.py
+++ b/cogdumper/s3dumper.py
@@ -1,11 +1,14 @@
"""A utility to dump tiles directly from a tiff file in an S3 bucket."""
import os
+import logging
import boto3
from cogdumper.cog_tiles import AbstractReader
+logger = logging.getLogger(__name__)
+
region = os.environ.get('AWS_REGION', 'us-east-1')
s3 = boto3.resource('s3', region_name=region)
@@ -14,12 +17,15 @@ class Reader(AbstractReader):
"""Wraps the remote COG."""
def __init__(self, bucket_name, key):
+ """Init reader object."""
self.bucket = bucket_name
self.key = key
+ self.source = s3.Object(self.bucket, self.key)
def read(self, offset, length):
+ """Read method."""
start = offset
stop = offset + length - 1
- r = s3.meta.client.get_object(Bucket=self.bucket, Key=self.key,
- Range=f'bytes={start}-{stop}')
+ logger.info(f'Reading bytes: {start} to {stop}')
+ r = self.source.get(Range=f'bytes={start}-{stop}')
return r['Body'].read()
diff --git a/cogdumper/scripts/cli.py b/cogdumper/scripts/cli.py
index 5fdccb3..bd366af 100644
--- a/cogdumper/scripts/cli.py
+++ b/cogdumper/scripts/cli.py
@@ -1,5 +1,5 @@
"""cli."""
-
+import logging
import mimetypes
import click
@@ -25,8 +25,13 @@ def cogdumper():
help='local output directory')
@click.option('--xyz', type=click.INT, default=[0, 0, 0], nargs=3,
help='xyz tile coordinates where z is the overview level')
-def s3(bucket, key, output, xyz):
[email protected]('--verbose', '-v', is_flag=True, help='Show logs')
[email protected]_option(version=cogdumper_version, message='%(version)s')
+def s3(bucket, key, output, xyz, verbose):
"""Read AWS S3 hosted dataset."""
+ if verbose:
+ logging.basicConfig(level=logging.INFO)
+
reader = S3Reader(bucket, key)
cog = COGTiff(reader.read)
mime_type, tile = cog.get_tile(*xyz)
@@ -50,9 +55,13 @@ def s3(bucket, key, output, xyz):
help='local output directory')
@click.option('--xyz', type=click.INT, default=[0, 0, 0], nargs=3,
help='xyz tile coordinates where z is the overview level')
[email protected]('--verbose', '-v', is_flag=True, help='Show logs')
@click.version_option(version=cogdumper_version, message='%(version)s')
-def http(server, path, resource, output, xyz=None):
+def http(server, path, resource, output, xyz, verbose):
"""Read web hosted dataset."""
+ if verbose:
+ logging.basicConfig(level=logging.INFO)
+
reader = HTTPReader(server, path, resource)
cog = COGTiff(reader.read)
mime_type, tile = cog.get_tile(*xyz)
@@ -74,9 +83,13 @@ def http(server, path, resource, output, xyz=None):
help='local output directory')
@click.option('--xyz', type=click.INT, default=[0, 0, 0], nargs=3,
help='xyz tile coordinate where z is the overview level')
[email protected]('--verbose', '-v', is_flag=True, help='Show logs')
@click.version_option(version=cogdumper_version, message='%(version)s')
-def file(file, output, xyz=None):
+def file(file, output, xyz, verbose):
"""Read local dataset."""
+ if verbose:
+ logging.basicConfig(level=logging.INFO)
+
with open(file, 'rb') as src:
reader = FileReader(src)
cog = COGTiff(reader.read)
| Implement chunk or stream read to reduce the number of `read` calls
Right now, when reading the header, we have to loop through each IFD to get its metadata.
https://github.com/mapbox/COGDumper/blob/dfc6b9b56879c7116f522518ed37617d570acba1/cogdumper/cog_tiles.py#L221-L222
While this permits reading only the parts we need to determine the offsets for the data and mask sections, it is not efficient, resulting in tens of small `read` calls.
```
cogdumper s3 --bucket mapbox --key playground/vincent/y.tif --xyz 0 0 0
Read Header
read 0 3
read 4 7
read 8 9
read 10 251
read 516 771
read 260 515
read 826 967
read 250 253
Read IFD
read 1378 1379
read 1380 1549
read 1808 2063
read 1552 1807
read 1548 1551
Read IFD
read 2064 2065
read 2066 2259
read 2332 2395
read 2268 2331
read 2450 2591
read 2258 2261
Read IFD
read 2592 2593
read 2594 2787
read 2812 2827
read 2796 2811
read 2882 3023
read 2786 2789
Read IFD
read 3024 3025
read 3026 3219
read 3282 3423
read 3218 3221
Read IFD
read 3424 3425
read 3426 3619
read 3682 3823
read 3618 3621
Read IFD
read 3824 3825
read 3826 4019
read 4082 4223
read 4018 4021
Read IFD
read 4224 4225
read 4226 4419
read 4482 4623
read 4418 4421
Read IFD
read 4624 4625
read 4626 4795
read 4862 4925
read 4798 4861
read 4794 4797
Read IFD
read 4926 4927
read 4928 5097
read 5116 5131
read 5100 5115
read 5096 5099
Read IFD
read 5132 5133
read 5134 5303
read 5302 5305
Read IFD
read 5306 5307
read 5308 5477
read 5476 5479
Read IFD
read 5480 5481
read 5482 5651
read 5650 5653
Read IFD
read 5654 5655
read 5656 5825
read 5824 5827
Read IFD
read 883331 908790
read 2341520 2341571
```
I don't remember exactly, but it seems that GDAL reads the first 16 KB of the file and then determines everything it needs from that. I think applying the same logic here could be nice.
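A minimal sketch of that idea, assuming a seekable file-like source (this `BufferedReader` is a hypothetical helper for illustration, not COGDumper's actual reader API): ingest the first chunk of the file at open, serve header/IFD reads from that in-memory buffer, and fall back to a ranged read only when a request goes past it.

```python
import io


class BufferedReader:
    """Ingest the first `ingested_bytes` of a file at open; serve reads
    from memory when possible, otherwise seek and read from the source."""

    def __init__(self, handle, ingested_bytes=16384):
        self._handle = handle
        self._handle.seek(0)
        self._buffer = self._handle.read(ingested_bytes)

    def read(self, offset, length):
        if offset + length <= len(self._buffer):
            # Header data is already in memory: no extra I/O call.
            return self._buffer[offset:offset + length]
        # Request extends past the ingested chunk: fall back to a ranged read.
        self._handle.seek(offset)
        return self._handle.read(length)


# Usage against an in-memory stand-in for the TIFF file.
data = bytes(range(256)) * 200  # 51200 bytes of fake file content
reader = BufferedReader(io.BytesIO(data), ingested_bytes=16384)
assert reader.read(0, 4) == data[0:4]              # served from the buffer
assert reader.read(20000, 8) == data[20000:20008]  # ranged fallback
```

With this in place, the dozens of tiny header reads shown in the log above would collapse into a single up-front read, with ranged requests reserved for tile data.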
cc @normanb @sgillies | mapbox/COGDumper | diff --git a/tests/test_filedumper.py b/tests/test_filedumper.py
index 2ba8e1c..0bb2b8c 100644
--- a/tests/test_filedumper.py
+++ b/tests/test_filedumper.py
@@ -66,7 +66,6 @@ def test_tiff_ifds(tiff):
reader = FileReader(tiff)
cog = COGTiff(reader.read)
# read private variable directly for testing
- cog.read_header()
assert len(cog._image_ifds) > 0
assert 8 == len(cog._image_ifds[0]['tags'])
assert 0 == cog._image_ifds[4]['next_offset']
@@ -76,7 +75,6 @@ def test_be_tiff_ifds(be_tiff):
reader = FileReader(be_tiff)
cog = COGTiff(reader.read)
# read private variable directly for testing
- cog.read_header()
assert len(cog._image_ifds) > 0
assert 8 == len(cog._image_ifds[0]['tags'])
assert 0 == cog._image_ifds[4]['next_offset']
@@ -86,7 +84,6 @@ def test_bigtiff_ifds(bigtiff):
reader = FileReader(bigtiff)
cog = COGTiff(reader.read)
# read private variable directly for testing
- cog.read_header()
assert len(cog._image_ifds) > 0
assert 7 == len(cog._image_ifds[0]['tags'])
assert 0 == cog._image_ifds[4]['next_offset']
@@ -102,6 +99,19 @@ def test_tiff_tile(tiff):
assert 73 == len(cog._image_ifds[0]['jpeg_tables'])
assert mime_type == 'image/jpeg'
+
+def test_tiff_tile_env(tiff, monkeypatch):
+ monkeypatch.setenv("COG_INGESTED_BYTES_AT_OPEN", "1024")
+ reader = FileReader(tiff)
+ cog = COGTiff(reader.read)
+ mime_type, tile = cog.get_tile(0, 0, 0)
+ assert 1 == len(cog._image_ifds[0]['offsets'])
+ assert 1 == len(cog._image_ifds[0]['byte_counts'])
+ assert 'jpeg_tables' in cog._image_ifds[0]
+ assert 73 == len(cog._image_ifds[0]['jpeg_tables'])
+ assert mime_type == 'image/jpeg'
+
+
def test_bad_tiff_tile(tiff):
reader = FileReader(tiff)
cog = COGTiff(reader.read)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 2,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 5
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest_v2",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest -xvs"
} | atomicwrites==1.4.1
attrs==22.2.0
boto3==1.6.2
botocore==1.9.23
certifi==2021.5.30
chardet==3.0.4
click==6.7
codecov==2.1.13
-e git+https://github.com/mapbox/COGDumper.git@eb6cbcfbdbc94ee8fd75450908b375fac93e3989#egg=cogdumper
coverage==6.2
docutils==0.18.1
idna==2.6
jmespath==0.10.0
more-itertools==8.14.0
pluggy==0.6.0
py==1.11.0
pytest==3.6.4
pytest-cov==2.9.0
python-dateutil==2.6.1
requests==2.18.4
s3transfer==0.1.13
six==1.17.0
urllib3==1.22
| name: COGDumper
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- atomicwrites==1.4.1
- attrs==22.2.0
- boto3==1.6.2
- botocore==1.9.23
- chardet==3.0.4
- click==6.7
- codecov==2.1.13
- coverage==6.2
- docutils==0.18.1
- idna==2.6
- jmespath==0.10.0
- more-itertools==8.14.0
- pluggy==0.6.0
- py==1.11.0
- pytest==3.6.4
- pytest-cov==2.9.0
- python-dateutil==2.6.1
- requests==2.18.4
- s3transfer==0.1.13
- six==1.17.0
- urllib3==1.22
prefix: /opt/conda/envs/COGDumper
| [
"tests/test_filedumper.py::test_tiff_ifds",
"tests/test_filedumper.py::test_be_tiff_ifds",
"tests/test_filedumper.py::test_bigtiff_ifds",
"tests/test_filedumper.py::test_tiff_tile",
"tests/test_filedumper.py::test_tiff_tile_env",
"tests/test_filedumper.py::test_bad_tiff_tile",
"tests/test_filedumper.py::test_bigtiff_tile"
]
| []
| [
"tests/test_filedumper.py::test_tiff_version",
"tests/test_filedumper.py::test_bigtiff_version",
"tests/test_filedumper.py::test_be_tiff_version"
]
| []
| MIT License | 2,529 | [
"cogdumper/httpdumper.py",
"cogdumper/cog_tiles.py",
"cogdumper/scripts/cli.py",
"cogdumper/filedumper.py",
"cogdumper/s3dumper.py"
]
| [
"cogdumper/httpdumper.py",
"cogdumper/cog_tiles.py",
"cogdumper/scripts/cli.py",
"cogdumper/filedumper.py",
"cogdumper/s3dumper.py"
]
|