Dataset schema (column, dtype, and value summary):

| Column | Dtype | Summary |
| --- | --- | --- |
| instance_id | string | lengths 10 to 57 |
| patch | string | lengths 261 to 37.7k |
| repo | string | lengths 7 to 53 |
| base_commit | string | fixed length 40 |
| hints_text | string | 301 distinct values |
| test_patch | string | lengths 212 to 2.22M |
| problem_statement | string | lengths 23 to 37.7k |
| version | string | 1 distinct value |
| environment_setup_commit | string | fixed length 40 |
| FAIL_TO_PASS | list | lengths 1 to 4.94k |
| PASS_TO_PASS | list | lengths 0 to 7.82k |
| meta | dict | n/a |
| created_at | string | fixed length 25 |
| license | string | 8 distinct values |
| __index_level_0__ | int64 | values 0 to 6.41k |
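The schema above follows the summary format of the Hugging Face `datasets` viewer (string lengths, distinct-value counts, list lengths). A minimal loading sketch; the dataset path and split name are placeholders, since this excerpt does not name them:

```python
from datasets import load_dataset

# "org/dataset" and "train" are placeholders; substitute the real values.
ds = load_dataset("org/dataset", split="train")

row = ds[0]
print(row["instance_id"])    # e.g. "astropenguin__pandas-dataclasses-109"
print(row["repo"], row["base_commit"])
print(row["FAIL_TO_PASS"])   # tests that must flip from failing to passing
```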
astropenguin__pandas-dataclasses-109
diff --git a/README.md b/README.md index 61461ac..f9bf148 100644 --- a/README.md +++ b/README.md @@ -17,13 +17,13 @@ pandas-dataclass makes it easy to create [pandas] data (DataFrame and Series) by ```python from dataclasses import dataclass -from pandas_dataclasses import AsDataFrame, Data, Index +from pandas_dataclasses import AsFrame, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Index[int] @@ -72,7 +72,7 @@ pip install pandas-dataclasses pandas-dataclasses provides you the following features: - Type hints for dataclass fields (`Attr`, `Column`, `Data`, `Index`) to specify the data type and name of each element in pandas data -- Mix-in classes for dataclasses (`As`, `AsDataFrame`, `AsSeries`) to create pandas data by a classmethod (`new`) that takes the same arguments as dataclass initialization +- Mix-in classes for dataclasses (`As`, `AsFrame`, `AsSeries`) to create pandas data by a classmethod (`new`) that takes the same arguments as dataclass initialization When you call `new`, it will first create a dataclass object and then create a Series or DataFrame object from the dataclass object according the type hints and values in it. In the example above, `df = Weather.new(...)` is thus equivalent to: @@ -81,36 +81,36 @@ In the example above, `df = Weather.new(...)` is thus equivalent to: <summary>Click to see all imports</summary> ```python -from pandas_dataclasses import asdataframe +from pandas_dataclasses import asframe ``` </details> ```python obj = Weather([2020, ...], [1, ...], [7.1, ...], [2.4, ...]) -df = asdataframe(obj) +df = asframe(obj) ``` -where `asdataframe` is a conversion function. +where `asframe` is a conversion function. pandas-dataclasses does not touch the dataclass object creation itself; this allows you to fully customize your dataclass before conversion by the dataclass features (`field`, `__post_init__`, ...). ## Basic usage ### DataFrame creation -As shown in the example above, a dataclass that has the `AsDataFrame` mix-in will create DataFrame objects: +As shown in the example above, a dataclass that has the `AsFrame` mix-in will create DataFrame objects: <details> <summary>Click to see all imports</summary> ```python from dataclasses import dataclass -from pandas_dataclasses import AsDataFrame, Data, Index +from pandas_dataclasses import AsFrame, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Index[int] @@ -159,7 +159,7 @@ class Weather(AsSeries): ser = Weather.new(...) ``` -Unlike `AsDataFrame`, the second and subsequent data fields are ignored in the Series creation even if they exist. +Unlike `AsFrame`, the second and subsequent data fields are ignored in the Series creation even if they exist. Other rules are the same as for the DataFrame creation. 
## Advanced usage @@ -173,13 +173,13 @@ Fields typed by `Attr` are *attribute fields*, each value of which will become a ```python from dataclasses import dataclass -from pandas_dataclasses import AsDataFrame, Attr, Data, Index +from pandas_dataclasses import AsFrame, Attr, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Index[int] @@ -210,13 +210,13 @@ The name of attribute, data, or index can be explicitly specified by adding a ha ```python from dataclasses import dataclass from typing import Annotated as Ann -from pandas_dataclasses import AsDataFrame, Attr, Data, Index +from pandas_dataclasses import AsFrame, Attr, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Ann[Index[int], "Year"] @@ -255,13 +255,13 @@ If an annotation is a [format string], it will be formatted by a dataclass objec ```python from dataclasses import dataclass from typing import Annotated as Ann -from pandas_dataclasses import AsDataFrame, Data, Index +from pandas_dataclasses import AsFrame, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Ann[Index[int], "Year"] @@ -287,13 +287,13 @@ Adding tuple annotations to data fields will create DataFrame objects with hiera ```python from dataclasses import dataclass from typing import Annotated as Ann -from pandas_dataclasses import AsDataFrame, Data, Index +from pandas_dataclasses import AsFrame, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Ann[Index[int], "Year"] @@ -328,13 +328,13 @@ Column names can be (explicitly) specified by *column fields* (with hashable ann ```python from dataclasses import dataclass from typing import Annotated as Ann -from pandas_dataclasses import AsDataFrame, Column, Data, Index +from pandas_dataclasses import AsFrame, Column, Data, Index ``` </details> ```python @dataclass -class Weather(AsDataFrame): +class Weather(AsFrame): """Weather information.""" year: Ann[Index[int], "Year"] @@ -368,7 +368,7 @@ If a tuple annotation has [format string]s, they will also be formatted by a dat ### Custom pandas factory -A custom class can be specified as a factory for the Series or DataFrame creation by `As`, the generic version of `AsDataFrame` and `AsSeries`. +A custom class can be specified as a factory for the Series or DataFrame creation by `As`, the generic version of `AsFrame` and `AsSeries`. Note that the custom class must be a subclass of either `pandas.Series` or `pandas.DataFrame`: <details> diff --git a/pandas_dataclasses/__init__.py b/pandas_dataclasses/__init__.py index fe0a274..18a5d75 100644 --- a/pandas_dataclasses/__init__.py +++ b/pandas_dataclasses/__init__.py @@ -1,6 +1,7 @@ __all__ = [ "As", "AsDataFrame", + "AsFrame", "AsSeries", "Attr", "Column", @@ -9,13 +10,14 @@ __all__ = [ "Other", "Spec", "asdataframe", + "asframe", "asseries", "core", ] from . 
import core -from .core.asdata import * +from .core.aspandas import * from .core.mixins import * from .core.specs import * from .core.typing import * @@ -23,3 +25,12 @@ from .core.typing import * # metadata __version__ = "0.8.0" + + +# aliases +AsDataFrame = AsFrame +"""Alias of ``core.mixins.AsFrame``.""" + + +asdataframe = asframe +"""Alias of ``core.aspandas.asframe``.""" diff --git a/pandas_dataclasses/core/__init__.py b/pandas_dataclasses/core/__init__.py index cd94a78..b44c4f9 100644 --- a/pandas_dataclasses/core/__init__.py +++ b/pandas_dataclasses/core/__init__.py @@ -1,7 +1,7 @@ -__all__ = ["asdata", "mixins", "specs", "typing"] +__all__ = ["aspandas", "mixins", "specs", "typing"] -from . import asdata +from . import aspandas from . import mixins from . import specs from . import typing diff --git a/pandas_dataclasses/core/asdata.py b/pandas_dataclasses/core/aspandas.py similarity index 88% rename from pandas_dataclasses/core/asdata.py rename to pandas_dataclasses/core/aspandas.py index 29265a5..b7003c4 100644 --- a/pandas_dataclasses/core/asdata.py +++ b/pandas_dataclasses/core/aspandas.py @@ -1,4 +1,4 @@ -__all__ = ["asdataframe", "asseries"] +__all__ = ["asframe", "asseries"] # standard library @@ -12,26 +12,26 @@ import pandas as pd # submodules from .specs import Spec -from .typing import P, DataClass, PandasClass, TDataFrame, TSeries +from .typing import P, DataClass, PandasClass, TFrame, TSeries # runtime functions @overload -def asdataframe(obj: PandasClass[P, TDataFrame], *, factory: None = None) -> TDataFrame: +def asframe(obj: PandasClass[P, TFrame], *, factory: None = None) -> TFrame: ... @overload -def asdataframe(obj: DataClass[P], *, factory: Callable[..., TDataFrame]) -> TDataFrame: +def asframe(obj: DataClass[P], *, factory: Callable[..., TFrame]) -> TFrame: ... @overload -def asdataframe(obj: DataClass[P], *, factory: None = None) -> pd.DataFrame: +def asframe(obj: DataClass[P], *, factory: None = None) -> pd.DataFrame: ... 
-def asdataframe(obj: Any, *, factory: Any = None) -> Any: +def asframe(obj: Any, *, factory: Any = None) -> Any: """Create a DataFrame object from a dataclass object.""" spec = Spec.from_dataclass(type(obj)) @ obj diff --git a/pandas_dataclasses/core/mixins.py b/pandas_dataclasses/core/mixins.py index 74b32f7..f1cfad1 100644 --- a/pandas_dataclasses/core/mixins.py +++ b/pandas_dataclasses/core/mixins.py @@ -1,4 +1,4 @@ -__all__ = ["As", "AsDataFrame", "AsSeries"] +__all__ = ["As", "AsFrame", "AsSeries"] # standard library @@ -14,7 +14,7 @@ from typing_extensions import get_args, get_origin # submodules -from .asdata import asdataframe, asseries +from .aspandas import asframe, asseries from .typing import P, T, Pandas, PandasClass, TPandas @@ -51,7 +51,7 @@ class As(Generic[TPandas]): return MethodType(get_creator(cls), cls) -AsDataFrame = As[pd.DataFrame] +AsFrame = As[pd.DataFrame] """Alias of ``As[pandas.DataFrame]``.""" @@ -72,7 +72,7 @@ def get_creator(cls: Any) -> Callable[..., Pandas]: origin = get_origin(return_) or return_ if issubclass(origin, pd.DataFrame): - converter: Any = asdataframe + converter: Any = asframe elif issubclass(origin, pd.Series): converter = asseries else: diff --git a/pandas_dataclasses/core/typing.py b/pandas_dataclasses/core/typing.py index dc0704a..4275ff7 100644 --- a/pandas_dataclasses/core/typing.py +++ b/pandas_dataclasses/core/typing.py @@ -38,7 +38,7 @@ Pandas = Union[pd.DataFrame, "pd.Series[Any]"] P = ParamSpec("P") T = TypeVar("T") TPandas = TypeVar("TPandas", bound=Pandas) -TDataFrame = TypeVar("TDataFrame", bound=pd.DataFrame) +TFrame = TypeVar("TFrame", bound=pd.DataFrame) TSeries = TypeVar("TSeries", bound="pd.Series[Any]")
astropenguin/pandas-dataclasses
5a2e8c3dad5615eb18c8928d9155a4cbafc5bace
diff --git a/tests/test_asdata.py b/tests/test_aspandas.py similarity index 86% rename from tests/test_asdata.py rename to tests/test_aspandas.py index d71000e..ea9b088 100644 --- a/tests/test_asdata.py +++ b/tests/test_aspandas.py @@ -6,8 +6,8 @@ from typing import cast import pandas as pd from pandas.testing import assert_frame_equal, assert_series_equal from data import Weather, weather, df_weather_true, ser_weather_true -from pandas_dataclasses import Spec, asdataframe, asseries -from pandas_dataclasses.core.asdata import get_attrs, get_columns, get_data, get_index +from pandas_dataclasses import Spec, asframe, asseries +from pandas_dataclasses.core.aspandas import get_attrs, get_columns, get_data, get_index # test data @@ -15,12 +15,12 @@ spec = Spec.from_dataclass(Weather) @ weather # test functions -def test_asseries() -> None: - assert_series_equal(asseries(weather), ser_weather_true) +def test_asframe() -> None: + assert_frame_equal(asframe(weather), df_weather_true) -def test_asdataframe() -> None: - assert_frame_equal(asdataframe(weather), df_weather_true) +def test_asseries() -> None: + assert_series_equal(asseries(weather), ser_weather_true) def test_get_attrs() -> None: diff --git a/tests/test_mixins.py b/tests/test_mixins.py index 7aed087..a6e39da 100644 --- a/tests/test_mixins.py +++ b/tests/test_mixins.py @@ -7,7 +7,7 @@ from typing import Any import pandas as pd from pandas.testing import assert_frame_equal, assert_series_equal from data import Weather, weather, df_weather_true, ser_weather_true -from pandas_dataclasses import As, AsDataFrame, AsSeries +from pandas_dataclasses import As, AsFrame, AsSeries # test data @@ -15,47 +15,47 @@ def factory(*args: Any, **kwargs: Any) -> pd.Series: # type: ignore return pd.Series(*args, **kwargs) # type: ignore -class CustomDataFrame(pd.DataFrame): +class UserFrame(pd.DataFrame): pass -class CustomSeries(pd.Series): # type: ignore +class UserSeries(pd.Series): # type: ignore pass @dataclass -class DataFrameWeather(Weather, AsDataFrame): +class Frame(Weather, AsFrame): pass @dataclass -class CustomDataFrameWeather(Weather, As[CustomDataFrame]): +class CustomFrame(Weather, As[UserFrame]): pass @dataclass -class SeriesWeather(Weather, AsSeries): +class Series(Weather, AsSeries): pass @dataclass -class FactorySeriesWeather(Weather, AsSeries, factory=factory): +class CustomSeries(Weather, As[UserSeries]): pass @dataclass -class CustomSeriesWeather(Weather, As[CustomSeries]): +class FactorySeries(Weather, AsSeries, factory=factory): pass @dataclass -class FloatSeriesWeather(Weather, As["pd.Series[float]"], factory=pd.Series): +class FloatSeries(Weather, As["pd.Series[float]"], factory=pd.Series): pass # test functions -def test_dataframe_weather() -> None: - df_weather = DataFrameWeather.new( +def test_frame() -> None: + df_weather = Frame.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg, @@ -68,8 +68,8 @@ def test_dataframe_weather() -> None: assert_frame_equal(df_weather, df_weather_true) -def test_custom_dataframe_weather() -> None: - df_weather = CustomDataFrameWeather.new( +def test_custom_frame() -> None: + df_weather = CustomFrame.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg, @@ -78,12 +78,12 @@ def test_custom_dataframe_weather() -> None: wind_max=weather.wind_max, ) - assert isinstance(df_weather, CustomDataFrame) + assert isinstance(df_weather, UserFrame) assert_frame_equal(df_weather, df_weather_true, check_frame_type=False) -def test_series_weather() -> None: - ser_weather = 
SeriesWeather.new( +def test_series() -> None: + ser_weather = Series.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg, @@ -96,8 +96,8 @@ def test_series_weather() -> None: assert_series_equal(ser_weather, ser_weather_true) -def test_factory_series_weather() -> None: - ser_weather = FactorySeriesWeather.new( +def test_custom_series() -> None: + ser_weather = CustomSeries.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg, @@ -106,12 +106,12 @@ def test_factory_series_weather() -> None: wind_max=weather.wind_max, ) - assert isinstance(ser_weather, pd.Series) - assert_series_equal(ser_weather, ser_weather_true) + assert isinstance(ser_weather, UserSeries) + assert_series_equal(ser_weather, ser_weather_true, check_series_type=False) -def test_custom_series_weather() -> None: - ser_weather = CustomSeriesWeather.new( +def test_factory_series() -> None: + ser_weather = FactorySeries.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg, @@ -120,12 +120,12 @@ def test_custom_series_weather() -> None: wind_max=weather.wind_max, ) - assert isinstance(ser_weather, CustomSeries) - assert_series_equal(ser_weather, ser_weather_true, check_series_type=False) + assert isinstance(ser_weather, pd.Series) + assert_series_equal(ser_weather, ser_weather_true) -def test_float_series_weather() -> None: - ser_weather = FloatSeriesWeather.new( +def test_float_series() -> None: + ser_weather = FloatSeries.new( year=weather.year, month=weather.month, temp_avg=weather.temp_avg,
Update asdata module

- [x] Rename `core.asdata` module to `core.aspandas` (because the former is ambiguous)
- [x] Rename `*dataframe*` to `*frame*` in functions and type hints
- [x] Add aliases for backward compatibility (see the sketch below)
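Grounded in the aliases the patch above adds at module level (`AsDataFrame = AsFrame` and `asdataframe = asframe`), a minimal sketch of what the backward compatibility means in practice:

```python
from pandas_dataclasses import AsDataFrame, AsFrame, asdataframe, asframe

# The old names are plain aliases of the new ones, so identity holds
# and existing code importing the old names keeps working.
assert AsDataFrame is AsFrame
assert asdataframe is asframe
```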
0.0
5a2e8c3dad5615eb18c8928d9155a4cbafc5bace
[ "tests/test_aspandas.py::test_asframe", "tests/test_aspandas.py::test_asseries", "tests/test_aspandas.py::test_get_attrs", "tests/test_aspandas.py::test_get_columns", "tests/test_aspandas.py::test_get_data", "tests/test_aspandas.py::test_get_index", "tests/test_mixins.py::test_frame", "tests/test_mixins.py::test_custom_frame", "tests/test_mixins.py::test_series", "tests/test_mixins.py::test_custom_series", "tests/test_mixins.py::test_factory_series", "tests/test_mixins.py::test_float_series" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-10-26 11:52:17+00:00
mit
1,220
astropenguin__pandas-dataclasses-12
diff --git a/pandas_dataclasses/typing.py b/pandas_dataclasses/typing.py index dad9a08..1c98367 100644 --- a/pandas_dataclasses/typing.py +++ b/pandas_dataclasses/typing.py @@ -1,4 +1,4 @@ -__all__ = ["Attr", "Data", "Index", "Name", "NamedData", "NamedIndex"] +__all__ = ["Attr", "Data", "Index", "Name", "Named"] # standard library @@ -7,10 +7,10 @@ from typing import Any, Collection, Hashable, Optional, TypeVar, Union # dependencies +import numpy as np from typing_extensions import ( Annotated, Literal, - Protocol, get_args, get_origin, get_type_hints, @@ -23,18 +23,6 @@ TDtype = TypeVar("TDtype", covariant=True) TName = TypeVar("TName", bound=Hashable, covariant=True) -class Named(Protocol[TName]): - """Type hint for named objects.""" - - pass - - -class Collection(Named[TName], Collection[TDtype], Protocol): - """Type hint for named collection objects.""" - - pass - - # type hints (public) class FieldType(Enum): """Annotations for pandas-related type hints.""" @@ -59,39 +47,35 @@ class FieldType(Enum): Attr = Annotated[TAttr, FieldType.ATTR] """Type hint for attribute fields (``Attr[TAttr]``).""" -Data = Annotated[Union[Collection[None, TDtype], TDtype], FieldType.DATA] +Data = Annotated[Union[Collection[TDtype], TDtype], FieldType.DATA] """Type hint for data fields (``Data[TDtype]``).""" -Index = Annotated[Union[Collection[None, TDtype], TDtype], FieldType.INDEX] +Index = Annotated[Union[Collection[TDtype], TDtype], FieldType.INDEX] """Type hint for index fields (``Index[TDtype]``).""" Name = Annotated[TName, FieldType.NAME] """Type hint for name fields (``Name[TName]``).""" -NamedData = Annotated[Union[Collection[TName, TDtype], TDtype], FieldType.DATA] -"""Type hint for named data fields (``NamedData[TName, TDtype]``).""" - -NamedIndex = Annotated[Union[Collection[TName, TDtype], TDtype], FieldType.INDEX] -"""Type hint for named index fields (``NamedIndex[TName, TDtype]``).""" +Named = Annotated +"""Type hint for named fields (alias of Annotated).""" # runtime functions -def get_dtype(type_: Any) -> Optional[str]: - """Parse a type and return a dtype.""" - args = get_args(type_) - origin = get_origin(type_) - - if origin is Collection: - return get_dtype(args[1]) - - if origin is Literal: - return args[0] +def get_dtype(type_: Any) -> Optional["np.dtype[Any]"]: + """Parse a type and return a data type (dtype).""" + try: + t_dtype = get_args(unannotate(type_))[1] + except (IndexError, NameError): + raise ValueError(f"Could not convert {type_!r} to dtype.") - if type_ is Any or type_ is type(None): + if t_dtype is Any or t_dtype is type(None): return None - if isinstance(type_, type): - return type_.__name__ + if isinstance(t_dtype, type): + return np.dtype(t_dtype) + + if get_origin(t_dtype) is Literal: + return np.dtype(get_args(t_dtype)[0]) raise ValueError(f"Could not convert {type_!r} to dtype.") @@ -115,33 +99,21 @@ def get_ftype(type_: Any) -> FieldType: def get_name(type_: Any) -> Optional[Hashable]: """Parse a type and return a name.""" - args = get_args(type_) - origin = get_origin(type_) + if get_origin(type_) is not Annotated: + return - if origin is Collection: - return get_dtype(args[0]) + for arg in reversed(get_args(type_)[1:]): + if isinstance(arg, FieldType): + continue - if origin is Literal: - return args[0] + if isinstance(arg, Hashable): + return arg - if type_ is type(None): - return None - raise ValueError(f"Could not convert {type_!r} to name.") - - -def get_rtype(type_: Any) -> Any: - """Parse a type and return a representative type (rtype).""" +def 
unannotate(type_: Any) -> Any: + """Recursively remove annotations from a type.""" class Temporary: __annotations__ = dict(type=type_) - try: - unannotated = get_type_hints(Temporary)["type"] - except NameError: - raise ValueError(f"Could not convert {type_!r} to rtype.") - - if get_origin(unannotated) is Union: - return get_args(unannotated)[0] - else: - return unannotated + return get_type_hints(Temporary)["type"]
astropenguin/pandas-dataclasses
dc626afccb9d06d30014b3542a5fe7ae89bed87e
diff --git a/tests/test_typing.py b/tests/test_typing.py index ad20503..dc5aedf 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -1,42 +1,36 @@ # standard library -from typing import Any, Optional, Union +from typing import Any # dependencies +import numpy as np from pytest import mark -from typing_extensions import Annotated, Literal +from typing_extensions import Literal # submodules from pandas_dataclasses.typing import ( - Collection, Attr, Data, Index, Name, + Named, get_dtype, get_ftype, get_name, - get_rtype, ) -# type hints -Int64 = Literal["int64"] -Label = Literal["label"] -NoneType = type(None) - - # test datasets testdata_dtype = [ - (Any, None), - (NoneType, None), - (Int64, "int64"), - (int, "int"), - (Collection[Any, Any], None), - (Collection[Any, None], None), - (Collection[Any, Int64], "int64"), - (Collection[Any, int], "int"), + (Data[Any], None), + (Data[None], None), + (Data[int], np.dtype("int64")), + (Data[Literal["i8"]], np.dtype("int64")), + (Index[Any], None), + (Index[None], None), + (Index[int], np.dtype("int64")), + (Index[Literal["i8"]], np.dtype("int64")), ] testdata_ftype = [ @@ -47,20 +41,16 @@ testdata_ftype = [ ] testdata_name = [ - (NoneType, None), - (Label, "label"), - (Collection[None, Any], None), - (Collection[Label, Any], "label"), -] - -testdata_rtype = [ - (int, int), - (Annotated[int, "annotation"], int), - (Union[int, float], int), - (Optional[int], int), + (Attr[Any], None), + (Data[Any], None), + (Index[Any], None), + (Name[Any], None), + (Named[Attr[Any], "attr"], "attr"), + (Named[Data[Any], "data"], "data"), + (Named[Index[Any], "index"], "index"), + (Named[Name[Any], "name"], "name"), ] - # test functions @mark.parametrize("type_, dtype", testdata_dtype) def test_get_dtype(type_: Any, dtype: Any) -> None: @@ -68,15 +58,10 @@ def test_get_dtype(type_: Any, dtype: Any) -> None: @mark.parametrize("type_, ftype", testdata_ftype) -def test_get_field_type(type_: Any, ftype: Any) -> None: +def test_get_ftype(type_: Any, ftype: Any) -> None: assert get_ftype(type_).value == ftype @mark.parametrize("type_, name", testdata_name) def test_get_name(type_: Any, name: Any) -> None: assert get_name(type_) == name - - [email protected]("type_, rtype", testdata_rtype) -def test_get_rtype(type_: Any, rtype: Any) -> None: - assert get_rtype(type_) == rtype
Update named fields

- [ ] Remove `NamedData` and `NamedIndex`
- [ ] Add `Named` type hint for general named fields (e.g. `Named[Attr[int], "name"]`; see the sketch below)
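Per the patch, `Named` is just an alias of `typing.Annotated`, and `get_name` picks up the trailing hashable annotation as the field name. A minimal sketch based on the added test data:

```python
from typing import Any

from pandas_dataclasses.typing import Attr, Named, get_name

# Named[Attr[Any], "attr"] annotates an attribute field with the name "attr".
assert get_name(Named[Attr[Any], "attr"]) == "attr"

# Without a name annotation, get_name falls back to None.
assert get_name(Attr[Any]) is None
```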
0.0
dc626afccb9d06d30014b3542a5fe7ae89bed87e
[ "tests/test_typing.py::test_get_dtype[type_0-None]", "tests/test_typing.py::test_get_dtype[type_1-None]", "tests/test_typing.py::test_get_dtype[type_2-dtype2]", "tests/test_typing.py::test_get_dtype[type_3-dtype3]", "tests/test_typing.py::test_get_dtype[type_4-None]", "tests/test_typing.py::test_get_dtype[type_5-None]", "tests/test_typing.py::test_get_dtype[type_6-dtype6]", "tests/test_typing.py::test_get_dtype[type_7-dtype7]", "tests/test_typing.py::test_get_ftype[type_0-attr]", "tests/test_typing.py::test_get_ftype[type_1-data]", "tests/test_typing.py::test_get_ftype[type_2-index]", "tests/test_typing.py::test_get_ftype[type_3-name]", "tests/test_typing.py::test_get_name[type_0-None]", "tests/test_typing.py::test_get_name[type_1-None]", "tests/test_typing.py::test_get_name[type_2-None]", "tests/test_typing.py::test_get_name[type_3-None]", "tests/test_typing.py::test_get_name[type_4-attr]", "tests/test_typing.py::test_get_name[type_5-data]", "tests/test_typing.py::test_get_name[type_6-index]", "tests/test_typing.py::test_get_name[type_7-name]" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-03-07 05:39:27+00:00
mit
1,221
astropenguin__pandas-dataclasses-120
diff --git a/pandas_dataclasses/core/typing.py b/pandas_dataclasses/core/typing.py index a97a6a0..161ce48 100644 --- a/pandas_dataclasses/core/typing.py +++ b/pandas_dataclasses/core/typing.py @@ -2,6 +2,7 @@ __all__ = ["Attr", "Column", "Data", "Index", "Other"] # standard library +import types from dataclasses import Field from enum import Enum, auto from itertools import chain @@ -149,6 +150,9 @@ def get_dtype(tp: Any) -> Optional[str]: if dtype is Any or dtype is type(None): return None + if is_union_type(dtype): + dtype = get_args(dtype)[0] + if get_origin(dtype) is Literal: dtype = get_args(dtype)[0] @@ -179,3 +183,12 @@ def get_role(tp: Any, default: Role = Role.OTHER) -> Role: return get_annotations(tp)[0] # type: ignore except TypeError: return default + + +def is_union_type(tp: Any) -> bool: + """Check if a type hint is a union type.""" + if get_origin(tp) is Union: + return True + + UnionType = getattr(types, "UnionType", None) + return UnionType is not None and isinstance(tp, UnionType)
astropenguin/pandas-dataclasses
018349844877e1984a527ce664bf11a2f2285413
diff --git a/tests/test_typing.py b/tests/test_typing.py index de6d646..4ef0e43 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -16,12 +16,14 @@ testdata_dtype = [ (Data[Any], None), (Data[None], None), (Data[int], np.dtype("i8")), + (Data[Union[int, None]], np.dtype("i8")), (Data[L["i8"]], np.dtype("i8")), (Data[L["boolean"]], pd.BooleanDtype()), (Data[L["category"]], pd.CategoricalDtype()), (Index[Any], None), (Index[None], None), (Index[int], np.dtype("i8")), + (Index[Union[int, None]], np.dtype("i8")), (Index[L["i8"]], np.dtype("i8")), (Index[L["boolean"]], pd.BooleanDtype()), (Index[L["category"]], pd.CategoricalDtype()),
Support for union type in type hints

Update the typing module to allow union types in `Data` and `Index`:

```python
@dataclass
class Weather(AsFrame):
    """Weather information."""

    year: Index[int]
    month: Index[int]
    temp: Data[float | None]
    wind: Data[float | None]
```

See also #115 for related discussions.
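Per the test cases added in this patch, the first member of a union determines the dtype. A minimal sketch, assuming the test suite's import paths:

```python
from typing import Union

import numpy as np
from pandas_dataclasses.core.typing import Data, get_dtype

# The first union member (int) carries the dtype; None is ignored.
assert get_dtype(Data[Union[int, None]]) == np.dtype("i8")
```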
0.0
018349844877e1984a527ce664bf11a2f2285413
[ "tests/test_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_typing.py::test_get_dtype[tp10-dtype10]" ]
[ "tests/test_typing.py::test_get_dtype[tp0-None]", "tests/test_typing.py::test_get_dtype[tp1-None]", "tests/test_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_typing.py::test_get_dtype[tp4-dtype4]", "tests/test_typing.py::test_get_dtype[tp5-dtype5]", "tests/test_typing.py::test_get_dtype[tp6-dtype6]", "tests/test_typing.py::test_get_dtype[tp7-None]", "tests/test_typing.py::test_get_dtype[tp8-None]", "tests/test_typing.py::test_get_dtype[tp9-dtype9]", "tests/test_typing.py::test_get_dtype[tp11-dtype11]", "tests/test_typing.py::test_get_dtype[tp12-dtype12]", "tests/test_typing.py::test_get_dtype[tp13-dtype13]", "tests/test_typing.py::test_get_dtype[tp14-dtype14]", "tests/test_typing.py::test_get_dtype[tp15-dtype15]", "tests/test_typing.py::test_get_dtype[tp16-dtype16]", "tests/test_typing.py::test_get_dtype[tp17-dtype17]", "tests/test_typing.py::test_get_name[tp0-None]", "tests/test_typing.py::test_get_name[tp1-None]", "tests/test_typing.py::test_get_name[tp2-None]", "tests/test_typing.py::test_get_name[tp3-None]", "tests/test_typing.py::test_get_name[tp4-None]", "tests/test_typing.py::test_get_name[tp5-attr]", "tests/test_typing.py::test_get_name[tp6-column]", "tests/test_typing.py::test_get_name[tp7-data]", "tests/test_typing.py::test_get_name[tp8-index]", "tests/test_typing.py::test_get_name[tp9-None]", "tests/test_typing.py::test_get_name[tp10-None]", "tests/test_typing.py::test_get_name[tp11-None]", "tests/test_typing.py::test_get_name[tp12-None]", "tests/test_typing.py::test_get_name[tp13-None]", "tests/test_typing.py::test_get_name[tp14-None]", "tests/test_typing.py::test_get_name[tp15-attr]", "tests/test_typing.py::test_get_name[tp16-column]", "tests/test_typing.py::test_get_name[tp17-data]", "tests/test_typing.py::test_get_name[tp18-index]", "tests/test_typing.py::test_get_name[tp19-None]", "tests/test_typing.py::test_get_role[tp0-Role.ATTR]", "tests/test_typing.py::test_get_role[tp1-Role.COLUMN]", "tests/test_typing.py::test_get_role[tp2-Role.DATA]", "tests/test_typing.py::test_get_role[tp3-Role.INDEX]", "tests/test_typing.py::test_get_role[tp4-Role.OTHER]", "tests/test_typing.py::test_get_role[tp5-Role.ATTR]", "tests/test_typing.py::test_get_role[tp6-Role.COLUMN]", "tests/test_typing.py::test_get_role[tp7-Role.DATA]", "tests/test_typing.py::test_get_role[tp8-Role.INDEX]", "tests/test_typing.py::test_get_role[tp9-Role.OTHER]", "tests/test_typing.py::test_get_role[tp10-Role.ATTR]", "tests/test_typing.py::test_get_role[tp11-Role.COLUMN]", "tests/test_typing.py::test_get_role[tp12-Role.DATA]", "tests/test_typing.py::test_get_role[tp13-Role.INDEX]", "tests/test_typing.py::test_get_role[tp14-Role.OTHER]" ]
{ "failed_lite_validators": [ "has_issue_reference" ], "has_test_patch": true, "is_lite": false }
2022-11-15 15:37:28+00:00
mit
1,222
astropenguin__pandas-dataclasses-152
diff --git a/pandas_dataclasses/__init__.py b/pandas_dataclasses/__init__.py index 590fca2..a5d2e9d 100644 --- a/pandas_dataclasses/__init__.py +++ b/pandas_dataclasses/__init__.py @@ -7,6 +7,7 @@ __all__ = [ "Column", "Data", "Index", + "Multiple", "Spec", "Tag", "asdataframe", diff --git a/pandas_dataclasses/core/api.py b/pandas_dataclasses/core/api.py index 5a7d400..a449087 100644 --- a/pandas_dataclasses/core/api.py +++ b/pandas_dataclasses/core/api.py @@ -3,7 +3,7 @@ __all__ = ["asframe", "aspandas", "asseries"] # standard library from types import FunctionType -from typing import Any, Callable, Dict, Hashable, List, Optional, overload +from typing import Any, Callable, Dict, Hashable, Optional, overload # dependencies @@ -12,7 +12,7 @@ import pandas as pd from pandas.api.types import is_list_like from typing_extensions import get_origin from .specs import Spec -from .typing import P, DataClass, PandasClass, TFrame, TPandas, TSeries +from .typing import P, DataClass, PandasClass, TFrame, TPandas, TSeries, Tag # runtime functions @@ -206,18 +206,21 @@ def ensure(data: Any, dtype: Optional[str]) -> Any: def get_attrs(spec: Spec) -> Dict[Hashable, Any]: """Derive attributes from a specification.""" - attrs: Dict[Hashable, Any] = {} + data: Dict[Hashable, Any] = {} - for field in spec.fields.of_attr: - attrs[field.name] = field.default + for field in spec.fields.of(Tag.ATTR): + if field.has(Tag.MULTIPLE): + data.update(field.default) + else: + data[field.name] = field.default - return attrs + return data def get_columns(spec: Spec) -> Optional[pd.Index]: """Derive columns from a specification.""" - names = [field.name for field in spec.fields.of_column] - elems = [field.name for field in spec.fields.of_data] + names = [field.name for field in spec.fields.of(Tag.COLUMN)] + elems = [field.name for field in spec.fields.of(Tag.DATA)] if len(names) == 0: return None @@ -231,25 +234,40 @@ def get_data(spec: Spec) -> Dict[Hashable, Any]: """Derive data from a specification.""" data: Dict[Hashable, Any] = {} - for field in spec.fields.of_data: - data[field.name] = ensure(field.default, field.dtype) + for field in spec.fields.of(Tag.DATA): + if field.has(Tag.MULTIPLE): + items = field.default.items() + else: + items = {field.name: field.default}.items() + + for name, default in items: + data[name] = ensure(default, field.dtype) return data def get_index(spec: Spec) -> Optional[pd.Index]: """Derive index from a specification.""" - names: List[Hashable] = [] - elems: List[Any] = [] + data: Dict[Hashable, Any] = {} - for field in spec.fields.of_index: - names.append(field.name) - elems.append(ensure(field.default, field.dtype)) + for field in spec.fields.of(Tag.INDEX): + if field.has(Tag.MULTIPLE): + items = field.default.items() + else: + items = {field.name: field.default}.items() - if len(names) == 0: + for name, default in items: + data[name] = ensure(default, field.dtype) + + if len(data) == 0: return None - if len(names) == 1: - return pd.Index(elems[0], name=names[0]) + if len(data) == 1: + return pd.Index( + list(data.values())[0], + name=list(data.keys())[0], + ) else: - elems = np.broadcast_arrays(*elems) - return pd.MultiIndex.from_arrays(elems, names=names) + return pd.MultiIndex.from_arrays( + np.broadcast_arrays(*data.values()), + names=list(data.keys()), + ) diff --git a/pandas_dataclasses/core/specs.py b/pandas_dataclasses/core/specs.py index 4a85b17..09c0285 100644 --- a/pandas_dataclasses/core/specs.py +++ b/pandas_dataclasses/core/specs.py @@ -42,6 +42,10 @@ class Field: default: 
Any = None """Default value of the field data.""" + def has(self, tag: Tag) -> bool: + """Check if the specification has a tag.""" + return bool(tag & Tag.union(self.tags)) + def update(self, obj: Any) -> "Field": """Update the specification by an object.""" return replace( @@ -52,31 +56,11 @@ class Field: class Fields(List[Field]): - """List of field specifications (with selectors).""" - - @property - def of_attr(self) -> "Fields": - """Select only attribute field specifications.""" - return self.filter(lambda f: Tag.ATTR in Tag.union(f.tags)) - - @property - def of_column(self) -> "Fields": - """Select only column field specifications.""" - return self.filter(lambda f: Tag.COLUMN in Tag.union(f.tags)) - - @property - def of_data(self) -> "Fields": - """Select only data field specifications.""" - return self.filter(lambda f: Tag.DATA in Tag.union(f.tags)) - - @property - def of_index(self) -> "Fields": - """Select only index field specifications.""" - return self.filter(lambda f: Tag.INDEX in Tag.union(f.tags)) - - def filter(self, condition: Callable[[Field], bool]) -> "Fields": - """Select only fields that make a condition True.""" - return type(self)(filter(condition, self)) + """List of field specifications with selectors.""" + + def of(self, tag: Tag) -> "Fields": + """Select only fields that have a tag.""" + return type(self)(filter(lambda field: field.has(tag), self)) def update(self, obj: Any) -> "Fields": """Update the specifications by an object.""" diff --git a/pandas_dataclasses/core/typing.py b/pandas_dataclasses/core/typing.py index 4769684..7ec0ba5 100644 --- a/pandas_dataclasses/core/typing.py +++ b/pandas_dataclasses/core/typing.py @@ -1,4 +1,4 @@ -__all__ = ["Attr", "Column", "Data", "Index", "Tag"] +__all__ = ["Attr", "Column", "Data", "Index", "Multiple", "Tag"] # standard library @@ -28,7 +28,7 @@ from typing import ( # dependencies import pandas as pd from pandas.api.types import pandas_dtype -from typing_extensions import Annotated, ParamSpec, get_args, get_origin +from typing_extensions import Annotated, ParamSpec, TypeGuard, get_args, get_origin # type hints (private) @@ -77,22 +77,22 @@ class Tag(Flag): DTYPE = auto() """Tag for a type specifying a data type.""" + MULTIPLE = auto() + """Tag for a type specifying a multiple-item field.""" + FIELD = ATTR | COLUMN | DATA | INDEX """Union of field-related tags.""" - ANY = FIELD | DTYPE + ANY = FIELD | DTYPE | MULTIPLE """Union of all tags.""" def annotates(self, tp: Any) -> bool: """Check if the tag annotates a type hint.""" - return any(map(self.covers, get_args(tp))) - - def covers(self, obj: Any) -> bool: - """Check if the tag is superset of an object.""" - return type(self).creates(obj) and obj in self + tags = filter(type(self).creates, get_args(tp)) + return bool(self & type(self).union(tags)) @classmethod - def creates(cls, obj: Any) -> bool: + def creates(cls, obj: Any) -> TypeGuard["Tag"]: """Check if Tag is the type of an object.""" return isinstance(obj, cls) @@ -102,12 +102,12 @@ class Tag(Flag): return reduce(or_, tags, Tag(0)) def __repr__(self) -> str: - """Return the hashtag-style string of the tag.""" + """Return the bracket-style string of the tag.""" return str(self) def __str__(self) -> str: - """Return the hashtag-style string of the tag.""" - return f"#{str(self.name).lower()}" + """Return the bracket-style string of the tag.""" + return f"<{str(self.name).lower()}>" # type hints (public) @@ -123,6 +123,9 @@ Data = Annotated[Collection[Annotated[T, Tag.DTYPE]], Tag.DATA] Index = 
Annotated[Collection[Annotated[T, Tag.DTYPE]], Tag.INDEX] """Type hint for index fields (``Index[T]``).""" +Multiple = Dict[str, Annotated[T, Tag.MULTIPLE]] +"""Type hint for multiple-item fields (``Multiple[T]``).""" + # runtime functions def gen_annotated(tp: Any) -> Iterable[Any]: @@ -158,10 +161,10 @@ def get_nontags(tp: Any, bound: Tag = Tag.ANY) -> List[Any]: def get_dtype(tp: Any) -> Optional[str]: """Extract a data type of NumPy or pandas from a type hint.""" - if (tagged := get_tagged(tp, Tag.DATA | Tag.INDEX)) is None: + if (tp := get_tagged(tp, Tag.DATA | Tag.INDEX, True)) is None: return None - if (dtype := get_tagged(tagged, Tag.DTYPE)) is None: + if (dtype := get_tagged(tp, Tag.DTYPE)) is None: return None if dtype is Any or dtype is type(None):
astropenguin/pandas-dataclasses
04bcdf568f873f831efb50636fc1614585dc74be
diff --git a/tests/data.py b/tests/data.py index c860371..97a1722 100644 --- a/tests/data.py +++ b/tests/data.py @@ -8,7 +8,7 @@ from typing import Any # dependencies import pandas as pd -from pandas_dataclasses import Attr, Column, Data, Index +from pandas_dataclasses import Attr, Column, Data, Index, Multiple from typing_extensions import Annotated as Ann @@ -62,6 +62,9 @@ class Weather: lat_unit: str = "deg" """Units of the latitude.""" + attrs: Multiple[Attr[Any]] = field(default_factory=dict) + """Other attributes.""" + weather = Weather( [2020, 2020, 2021, 2021, 2022], diff --git a/tests/test_core_api.py b/tests/test_core_api.py index f977da1..b0ef09e 100644 --- a/tests/test_core_api.py +++ b/tests/test_core_api.py @@ -6,7 +6,7 @@ from typing import cast import pandas as pd from pandas.testing import assert_frame_equal, assert_series_equal from data import Weather, weather, df_weather_true, ser_weather_true -from pandas_dataclasses import Spec, asframe, asseries +from pandas_dataclasses import Spec, Tag, asframe, asseries from pandas_dataclasses.core.api import get_attrs, get_columns, get_data, get_index @@ -27,27 +27,27 @@ def test_get_attrs() -> None: attrs = get_attrs(spec) for i, (key, val) in enumerate(attrs.items()): - assert key == spec.fields.of_attr[i].name - assert val == spec.fields.of_attr[i].default + assert key == spec.fields.of(Tag.ATTR)[i].name + assert val == spec.fields.of(Tag.ATTR)[i].default def test_get_columns() -> None: columns = cast(pd.Index, get_columns(spec)) for i in range(len(columns)): - assert columns[i] == spec.fields.of_data[i].name + assert columns[i] == spec.fields.of(Tag.DATA)[i].name for i in range(columns.nlevels): - assert columns.names[i] == spec.fields.of_column[i].name + assert columns.names[i] == spec.fields.of(Tag.COLUMN)[i].name def test_get_data() -> None: data = get_data(spec) for i, (key, val) in enumerate(data.items()): - assert key == spec.fields.of_data[i].name - assert val.dtype.name == spec.fields.of_data[i].dtype - assert (val == spec.fields.of_data[i].default).all() + assert key == spec.fields.of(Tag.DATA)[i].name + assert val.dtype.name == spec.fields.of(Tag.DATA)[i].dtype + assert (val == spec.fields.of(Tag.DATA)[i].default).all() def test_get_index() -> None: @@ -55,6 +55,6 @@ def test_get_index() -> None: for i in range(index.nlevels): level = index.get_level_values(i) - assert level.name == spec.fields.of_index[i].name - assert level.dtype.name == spec.fields.of_index[i].dtype - assert (level == spec.fields.of_index[i].default).all() + assert level.name == spec.fields.of(Tag.INDEX)[i].name + assert level.dtype.name == spec.fields.of(Tag.INDEX)[i].dtype + assert (level == spec.fields.of(Tag.INDEX)[i].default).all() diff --git a/tests/test_core_specs.py b/tests/test_core_specs.py index c942d71..077dcd7 100644 --- a/tests/test_core_specs.py +++ b/tests/test_core_specs.py @@ -14,7 +14,7 @@ spec_updated = spec @ weather # test functions def test_year() -> None: - field = spec.fields.of_index[0] + field = spec.fields.of(Tag.INDEX)[0] assert field.id == "year" assert field.tags == [Tag.INDEX] @@ -24,7 +24,7 @@ def test_year() -> None: def test_year_updated() -> None: - field = spec_updated.fields.of_index[0] + field = spec_updated.fields.of(Tag.INDEX)[0] assert field.id == "year" assert field.tags == [Tag.INDEX] @@ -34,7 +34,7 @@ def test_year_updated() -> None: def test_month() -> None: - field = spec.fields.of_index[1] + field = spec.fields.of(Tag.INDEX)[1] assert field.id == "month" assert field.tags == [Tag.INDEX] @@ -44,7 
+44,7 @@ def test_month() -> None: def test_month_updated() -> None: - field = spec_updated.fields.of_index[1] + field = spec_updated.fields.of(Tag.INDEX)[1] assert field.id == "month" assert field.tags == [Tag.INDEX] @@ -54,7 +54,7 @@ def test_month_updated() -> None: def test_meas() -> None: - field = spec.fields.of_column[0] + field = spec.fields.of(Tag.COLUMN)[0] assert field.id == "meas" assert field.tags == [Tag.COLUMN] @@ -63,7 +63,7 @@ def test_meas() -> None: def test_meas_updated() -> None: - field = spec_updated.fields.of_column[0] + field = spec_updated.fields.of(Tag.COLUMN)[0] assert field.id == "meas" assert field.tags == [Tag.COLUMN] @@ -72,7 +72,7 @@ def test_meas_updated() -> None: def test_stat() -> None: - field = spec.fields.of_column[1] + field = spec.fields.of(Tag.COLUMN)[1] assert field.id == "stat" assert field.tags == [Tag.COLUMN] @@ -81,7 +81,7 @@ def test_stat() -> None: def test_stat_updated() -> None: - field = spec_updated.fields.of_column[1] + field = spec_updated.fields.of(Tag.COLUMN)[1] assert field.id == "stat" assert field.tags == [Tag.COLUMN] @@ -90,7 +90,7 @@ def test_stat_updated() -> None: def test_temp_avg() -> None: - field = spec.fields.of_data[0] + field = spec.fields.of(Tag.DATA)[0] assert field.id == "temp_avg" assert field.tags == [Tag.DATA] @@ -100,7 +100,7 @@ def test_temp_avg() -> None: def test_temp_avg_updated() -> None: - field = spec_updated.fields.of_data[0] + field = spec_updated.fields.of(Tag.DATA)[0] assert field.id == "temp_avg" assert field.tags == [Tag.DATA] @@ -110,7 +110,7 @@ def test_temp_avg_updated() -> None: def test_temp_max() -> None: - field = spec.fields.of_data[1] + field = spec.fields.of(Tag.DATA)[1] assert field.id == "temp_max" assert field.tags == [Tag.DATA] @@ -120,7 +120,7 @@ def test_temp_max() -> None: def test_temp_max_updated() -> None: - field = spec_updated.fields.of_data[1] + field = spec_updated.fields.of(Tag.DATA)[1] assert field.id == "temp_max" assert field.tags == [Tag.DATA] @@ -130,7 +130,7 @@ def test_temp_max_updated() -> None: def test_wind_avg() -> None: - field = spec.fields.of_data[2] + field = spec.fields.of(Tag.DATA)[2] assert field.id == "wind_avg" assert field.tags == [Tag.DATA] @@ -140,7 +140,7 @@ def test_wind_avg() -> None: def test_wind_avg_updated() -> None: - field = spec_updated.fields.of_data[2] + field = spec_updated.fields.of(Tag.DATA)[2] assert field.id == "wind_avg" assert field.tags == [Tag.DATA] @@ -150,7 +150,7 @@ def test_wind_avg_updated() -> None: def test_wind_max() -> None: - field = spec.fields.of_data[3] + field = spec.fields.of(Tag.DATA)[3] assert field.id == "wind_max" assert field.tags == [Tag.DATA] @@ -160,7 +160,7 @@ def test_wind_max() -> None: def test_wind_max_updated() -> None: - field = spec_updated.fields.of_data[3] + field = spec_updated.fields.of(Tag.DATA)[3] assert field.id == "wind_max" assert field.tags == [Tag.DATA] @@ -170,7 +170,7 @@ def test_wind_max_updated() -> None: def test_loc() -> None: - field = spec.fields.of_attr[0] + field = spec.fields.of(Tag.ATTR)[0] assert field.id == "loc" assert field.tags == [Tag.ATTR] @@ -179,7 +179,7 @@ def test_loc() -> None: def test_loc_updated() -> None: - field = spec_updated.fields.of_attr[0] + field = spec_updated.fields.of(Tag.ATTR)[0] assert field.id == "loc" assert field.tags == [Tag.ATTR] @@ -188,7 +188,7 @@ def test_loc_updated() -> None: def test_lon() -> None: - field = spec.fields.of_attr[1] + field = spec.fields.of(Tag.ATTR)[1] assert field.id == "lon" assert field.tags == [Tag.ATTR] @@ -197,7 +197,7 
@@ def test_lon() -> None: def test_lon_updated() -> None: - field = spec_updated.fields.of_attr[1] + field = spec_updated.fields.of(Tag.ATTR)[1] assert field.id == "lon" assert field.tags == [Tag.ATTR] @@ -206,7 +206,7 @@ def test_lon_updated() -> None: def test_lat() -> None: - field = spec.fields.of_attr[2] + field = spec.fields.of(Tag.ATTR)[2] assert field.id == "lat" assert field.tags == [Tag.ATTR] @@ -215,7 +215,7 @@ def test_lat() -> None: def test_lat_updated() -> None: - field = spec_updated.fields.of_attr[2] + field = spec_updated.fields.of(Tag.ATTR)[2] assert field.id == "lat" assert field.tags == [Tag.ATTR] @@ -223,6 +223,24 @@ def test_lat_updated() -> None: assert field.default == weather.lat +def test_attrs() -> None: + field = spec.fields.of(Tag.ATTR)[3] + + assert field.id == "attrs" + assert field.tags == [Tag.ATTR, Tag.MULTIPLE] + assert field.name == "attrs" + assert field.default is MISSING + + +def test_attrs_updated() -> None: + field = spec_updated.fields.of(Tag.ATTR)[3] + + assert field.id == "attrs" + assert field.tags == [Tag.ATTR, Tag.MULTIPLE] + assert field.name == "attrs" + assert field.default == weather.attrs + + def test_factory() -> None: assert spec.factory is None
Support for multiple items received in a single field

This issue will introduce a new type hint, `Multiple[T]`, an alias of `dict[str, T]` with a dedicated tag, `Tag.MULTIPLE`. Combined with other type hints (e.g. `Multiple[Data[Any]]`), it will enable the corresponding field to receive a dictionary of data, indexes, or attributes. For example, the `attrs` field in the following code will accept a dictionary of any values, which will be stored in `attrs` of a DataFrame.

```python
from dataclasses import dataclass, field
from typing import Any

from pandas_dataclasses import AsFrame, Attr, Data, Index, Multiple


@dataclass
class Weather(AsFrame):
    """Weather information."""

    year: Index[int]
    month: Index[int]
    temp: Data[float]
    wind: Data[float]
    attrs: Multiple[Attr[Any]] = field(default_factory=dict)
```
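A hedged usage sketch of the class above, inferred from `get_attrs` in the patch (items of a `Multiple[Attr[...]]` field are merged into the created object's attributes); the values are illustrative only:

```python
df = Weather.new(
    year=[2020, 2021],
    month=[1, 7],
    temp=[7.1, 24.3],
    wind=[2.4, 3.1],
    attrs={"location": "Tokyo", "source": "weather station"},
)

# Per get_attrs, the dict items land in DataFrame.attrs.
assert df.attrs == {"location": "Tokyo", "source": "weather station"}
```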
0.0
04bcdf568f873f831efb50636fc1614585dc74be
[ "tests/test_core_api.py::test_asframe", "tests/test_core_api.py::test_asseries", "tests/test_core_api.py::test_get_attrs", "tests/test_core_api.py::test_get_columns", "tests/test_core_api.py::test_get_data", "tests/test_core_api.py::test_get_index", "tests/test_core_specs.py::test_year", "tests/test_core_specs.py::test_year_updated", "tests/test_core_specs.py::test_month", "tests/test_core_specs.py::test_month_updated", "tests/test_core_specs.py::test_meas", "tests/test_core_specs.py::test_meas_updated", "tests/test_core_specs.py::test_stat", "tests/test_core_specs.py::test_stat_updated", "tests/test_core_specs.py::test_temp_avg", "tests/test_core_specs.py::test_temp_avg_updated", "tests/test_core_specs.py::test_temp_max", "tests/test_core_specs.py::test_temp_max_updated", "tests/test_core_specs.py::test_wind_avg", "tests/test_core_specs.py::test_wind_avg_updated", "tests/test_core_specs.py::test_wind_max", "tests/test_core_specs.py::test_wind_max_updated", "tests/test_core_specs.py::test_loc", "tests/test_core_specs.py::test_loc_updated", "tests/test_core_specs.py::test_lon", "tests/test_core_specs.py::test_lon_updated", "tests/test_core_specs.py::test_lat", "tests/test_core_specs.py::test_lat_updated", "tests/test_core_specs.py::test_attrs", "tests/test_core_specs.py::test_attrs_updated", "tests/test_core_specs.py::test_factory", "tests/test_core_specs.py::test_name", "tests/test_core_specs.py::test_origin" ]
[]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-01-07 07:47:20+00:00
mit
1,223
astropenguin__pandas-dataclasses-23
diff --git a/pandas_dataclasses/__init__.py b/pandas_dataclasses/__init__.py index 7a5a5be..f6b4111 100644 --- a/pandas_dataclasses/__init__.py +++ b/pandas_dataclasses/__init__.py @@ -4,12 +4,12 @@ __version__ = "0.1.0" # submodules -from . import dataspec from . import series +from . import specs from . import typing # aliases -from .dataspec import * from .series import * +from .specs import * from .typing import * diff --git a/pandas_dataclasses/series.py b/pandas_dataclasses/series.py index eb327c8..00396a8 100644 --- a/pandas_dataclasses/series.py +++ b/pandas_dataclasses/series.py @@ -15,7 +15,7 @@ from typing_extensions import ParamSpec, Protocol # submodules -from .dataspec import DataSpec +from .specs import DataSpec from .typing import AnyDType, DataClass
astropenguin/pandas-dataclasses
43eb642b575ee027d556622d3e661bad89653dfa
diff --git a/pandas_dataclasses/dataspec.py b/pandas_dataclasses/specs.py similarity index 100% rename from pandas_dataclasses/dataspec.py rename to pandas_dataclasses/specs.py diff --git a/tests/test_dataspec.py b/tests/test_specs.py similarity index 98% rename from tests/test_dataspec.py rename to tests/test_specs.py index 961a35f..bd59d59 100644 --- a/tests/test_dataspec.py +++ b/tests/test_specs.py @@ -4,7 +4,7 @@ from dataclasses import MISSING, dataclass # dependencies import numpy as np -from pandas_dataclasses.dataspec import DataSpec +from pandas_dataclasses.specs import DataSpec from pandas_dataclasses.typing import Attr, Data, Index, Name from typing_extensions import Annotated, Literal
Rename dataspec module

- [x] Rename module (`dataspec` → `specs`)
- [x] Rename test script (`test_dataspec.py` → `test_specs.py`)
0.0
43eb642b575ee027d556622d3e661bad89653dfa
[ "tests/test_specs.py::test_time", "tests/test_specs.py::test_temperature", "tests/test_specs.py::test_humidity", "tests/test_specs.py::test_wind_speed", "tests/test_specs.py::test_wind_direction", "tests/test_specs.py::test_location", "tests/test_specs.py::test_longitude", "tests/test_specs.py::test_latitude", "tests/test_specs.py::test_name" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-04-30 12:01:11+00:00
mit
1,224
astropenguin__pandas-dataclasses-29
diff --git a/pandas_dataclasses/typing.py b/pandas_dataclasses/typing.py index 89f913e..0741001 100644 --- a/pandas_dataclasses/typing.py +++ b/pandas_dataclasses/typing.py @@ -4,11 +4,23 @@ __all__ = ["Attr", "Data", "Index", "Name", "Other"] # standard library from dataclasses import Field from enum import Enum -from typing import Any, ClassVar, Collection, Dict, Hashable, Optional, TypeVar, Union +from typing import ( + Any, + ClassVar, + Collection, + Dict, + Hashable, + Optional, + Type, + TypeVar, + Union, +) # dependencies -import numpy as np +from numpy import dtype +from pandas.api.extensions import ExtensionDtype +from pandas.api.types import pandas_dtype # type: ignore from typing_extensions import ( Annotated, Literal, @@ -21,12 +33,19 @@ from typing_extensions import ( # type hints (private) -AnyDType: TypeAlias = "np.dtype[Any]" +AnyDType: TypeAlias = Union["dtype[Any]", ExtensionDtype] AnyField: TypeAlias = "Field[Any]" T = TypeVar("T") +TCovariant = TypeVar("TCovariant", covariant=True) THashable = TypeVar("THashable", bound=Hashable) +class Collection(Collection[TCovariant], Protocol): + """Type hint equivalent to typing.Collection.""" + + pass + + class DataClass(Protocol): """Type hint for dataclass objects.""" @@ -79,28 +98,34 @@ def deannotate(tp: Any) -> Any: return get_type_hints(Temporary)["type"] -def get_dtype(tp: Any) -> Optional[AnyDType]: - """Extract a dtype (NumPy data type) from a type hint.""" +def get_collection(tp: Any) -> Type[Collection[Any]]: + """Extract the first collection type from a type hint.""" tp = deannotate(tp) if get_origin(tp) is not Union: - raise TypeError(f"{tp!r} is not arrayable.") + raise TypeError(f"{tp!r} was not a union type.") - try: - tp_array, tp_scalar = get_args(tp) - except ValueError: - raise TypeError(f"{tp!r} is not arrayable.") + # flatten union type after deannotation + tp = Union[get_args(tp)] # type: ignore - if get_args(tp_array)[0] is not tp_scalar: - raise TypeError(f"{tp!r} is not arrayable.") + for arg in get_args(tp): + if get_origin(arg) is Collection: + return arg + + raise TypeError(f"{tp!r} had no collection type.") + + +def get_dtype(tp: Any) -> Optional[AnyDType]: + """Extract a dtype (the first data type) from a type hint.""" + dtype = get_args(get_collection(tp))[0] - if tp_scalar is Any or tp_scalar is type(None): - return None + if dtype is Any or dtype is type(None): + return - if get_origin(tp_scalar) is Literal: - tp_scalar = get_args(tp_scalar)[0] + if get_origin(dtype) is Literal: + dtype = get_args(dtype)[0] - return np.dtype(tp_scalar) + return pandas_dtype(dtype) # type: ignore def get_ftype(tp: Any, default: FType = FType.OTHER) -> FType:
astropenguin/pandas-dataclasses
a3fe3d75a59cabf25d7e456804251b313cc94737
diff --git a/tests/test_typing.py b/tests/test_typing.py index 5ad02c0..5b039c4 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -1,9 +1,10 @@ # standard library -from typing import Any +from typing import Any, Optional, Union # dependencies import numpy as np +import pandas as pd from pytest import mark from typing_extensions import Annotated, Literal from pandas_dataclasses.typing import ( @@ -23,10 +24,16 @@ testdata_dtype = [ (Data[None], None), (Data[int], np.dtype("int64")), (Data[Literal["i8"]], np.dtype("int64")), + (Data[Literal["boolean"]], pd.BooleanDtype()), (Index[Any], None), (Index[None], None), (Index[int], np.dtype("int64")), (Index[Literal["i8"]], np.dtype("int64")), + (Index[Literal["boolean"]], pd.BooleanDtype()), + (Optional[Data[float]], np.dtype("float64")), + (Optional[Index[float]], np.dtype("float64")), + (Union[Data[float], str], np.dtype("float64")), + (Union[Index[float], str], np.dtype("float64")), ] testdata_ftype = [
Update typing module

- [x] Support union in data and index fields
- [x] Support [pandas extended data types](https://pandas.pydata.org/docs/reference/arrays.html) (both items are exercised in the sketch below)
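A sketch of both checklist items, taken directly from the test cases this patch adds to `tests/test_typing.py`:

```python
from typing import Optional

import numpy as np
import pandas as pd
from typing_extensions import Literal

from pandas_dataclasses.typing import Data, get_dtype

# Union support: the Data member of the union carries the dtype.
assert get_dtype(Optional[Data[float]]) == np.dtype("float64")

# Pandas extension dtypes, selected by string literals.
assert get_dtype(Data[Literal["boolean"]]) == pd.BooleanDtype()
```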
0.0
a3fe3d75a59cabf25d7e456804251b313cc94737
[ "tests/test_typing.py::test_get_dtype[tp4-dtype4]", "tests/test_typing.py::test_get_dtype[tp9-dtype9]", "tests/test_typing.py::test_get_dtype[tp10-dtype10]", "tests/test_typing.py::test_get_dtype[tp11-dtype11]", "tests/test_typing.py::test_get_dtype[tp12-dtype12]", "tests/test_typing.py::test_get_dtype[tp13-dtype13]" ]
[ "tests/test_typing.py::test_get_dtype[tp0-None]", "tests/test_typing.py::test_get_dtype[tp1-None]", "tests/test_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_typing.py::test_get_dtype[tp5-None]", "tests/test_typing.py::test_get_dtype[tp6-None]", "tests/test_typing.py::test_get_dtype[tp7-dtype7]", "tests/test_typing.py::test_get_dtype[tp8-dtype8]", "tests/test_typing.py::test_get_ftype[tp0-attr]", "tests/test_typing.py::test_get_ftype[tp1-data]", "tests/test_typing.py::test_get_ftype[tp2-index]", "tests/test_typing.py::test_get_ftype[tp3-name]", "tests/test_typing.py::test_get_ftype[tp4-other]", "tests/test_typing.py::test_get_name[tp0-None]", "tests/test_typing.py::test_get_name[tp1-None]", "tests/test_typing.py::test_get_name[tp2-None]", "tests/test_typing.py::test_get_name[tp3-None]", "tests/test_typing.py::test_get_name[tp4-attr]", "tests/test_typing.py::test_get_name[tp5-data]", "tests/test_typing.py::test_get_name[tp6-index]", "tests/test_typing.py::test_get_name[tp7-name]" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2022-05-05 17:00:17+00:00
mit
1,225
astropenguin__pandas-dataclasses-39
diff --git a/pandas_dataclasses/specs.py b/pandas_dataclasses/specs.py index 7b81e76..d9e5830 100644 --- a/pandas_dataclasses/specs.py +++ b/pandas_dataclasses/specs.py @@ -17,7 +17,7 @@ from .typing import ( AnyField, DataClass, FType, - deannotate, + get_annotated, get_dtype, get_ftype, get_name, @@ -154,5 +154,5 @@ def get_fieldspec(field: AnyField) -> Optional[AnyFieldSpec]: return ScalarFieldSpec( type=ftype.value, name=name, - data=ScalarSpec(deannotate(field.type), field.default), + data=ScalarSpec(get_annotated(field.type), field.default), ) diff --git a/pandas_dataclasses/typing.py b/pandas_dataclasses/typing.py index 7fe4325..85a9c0f 100644 --- a/pandas_dataclasses/typing.py +++ b/pandas_dataclasses/typing.py @@ -13,7 +13,7 @@ from typing import ( Hashable, Iterator, Optional, - Type, + Tuple, TypeVar, Union, ) @@ -42,12 +42,6 @@ TCovariant = TypeVar("TCovariant", covariant=True) THashable = TypeVar("THashable", bound=Hashable) -class Collection(Collection[TCovariant], Protocol): - """Type hint equivalent to typing.Collection.""" - - pass - - class DataClass(Protocol): """Type hint for dataclass objects.""" @@ -72,6 +66,14 @@ class FType(Enum): OTHER = "other" """Annotation for other fields.""" + @classmethod + def annotates(cls, tp: Any) -> bool: + """Check if any ftype annotates a type hint.""" + if get_origin(tp) is not Annotated: + return False + + return any(isinstance(arg, cls) for arg in get_args(tp)) + # type hints (public) Attr = Annotated[T, FType.ATTR] @@ -92,7 +94,7 @@ Other = Annotated[T, FType.OTHER] # runtime functions def deannotate(tp: Any) -> Any: - """Recursively remove annotations from a type hint.""" + """Recursively remove annotations in a type hint.""" class Temporary: __annotations__ = dict(type=tp) @@ -100,36 +102,40 @@ def deannotate(tp: Any) -> Any: return get_type_hints(Temporary)["type"] -def get_annotations(tp: Any) -> Iterator[Any]: - """Extract all annotations from a type hint.""" +def find_annotated(tp: Any) -> Iterator[Any]: + """Generate all annotated types in a type hint.""" args = get_args(tp) if get_origin(tp) is Annotated: - yield from get_annotations(args[0]) - yield from args[1:] + yield tp + yield from find_annotated(args[0]) else: - yield from chain(*map(get_annotations, args)) + yield from chain(*map(find_annotated, args)) -def get_collections(tp: Any) -> Iterator[Type[Collection[Any]]]: - """Extract all collection types from a type hint.""" - args = get_args(tp) +def get_annotated(tp: Any) -> Any: + """Extract the first ftype-annotated type.""" + for annotated in filter(FType.annotates, find_annotated(tp)): + return deannotate(annotated) - if get_origin(tp) is Collection: - yield tp - else: - yield from chain(*map(get_collections, args)) + raise TypeError("Could not find any ftype-annotated type.") + + +def get_annotations(tp: Any) -> Tuple[Any, ...]: + """Extract annotations of the first ftype-annotated type.""" + for annotated in filter(FType.annotates, find_annotated(tp)): + return get_args(annotated)[1:] + + raise TypeError("Could not find any ftype-annotated type.") def get_dtype(tp: Any) -> Optional[AnyDType]: - """Extract a dtype (most outer data type) from a type hint.""" + """Extract a NumPy or pandas data type.""" try: - collection = list(get_collections(tp))[-1] - except IndexError: + dtype = get_args(get_annotated(tp))[1] + except TypeError: raise TypeError(f"Could not find any dtype in {tp!r}.") - dtype = get_args(collection)[0] - if dtype is Any or dtype is type(None): return @@ -140,21 +146,16 @@ def get_dtype(tp: Any) 
-> Optional[AnyDType]: def get_ftype(tp: Any, default: FType = FType.OTHER) -> FType: - """Extract an ftype (most outer FType) from a type hint.""" - for annotation in reversed(list(get_annotations(tp))): - if isinstance(annotation, FType): - return annotation - - return default + """Extract an ftype if found or return given default.""" + try: + return get_annotations(tp)[0] + except (IndexError, TypeError): + return default def get_name(tp: Any, default: Hashable = None) -> Hashable: - """Extract a name (most outer hashable) from a type hint.""" - for annotation in reversed(list(get_annotations(tp))): - if isinstance(annotation, FType): - continue - - if isinstance(annotation, Hashable): - return annotation - - return default + """Extract a name if found or return given default.""" + try: + return get_annotations(tp)[1] + except (IndexError, TypeError): + return default
astropenguin/pandas-dataclasses
5d2fca7857149e98cd50529f11fc385af42e5754
diff --git a/tests/test_typing.py b/tests/test_typing.py index 5b039c4..fa80d31 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -1,15 +1,17 @@ # standard library -from typing import Any, Optional, Union +from typing import Any, Union # dependencies import numpy as np import pandas as pd from pytest import mark -from typing_extensions import Annotated, Literal +from typing_extensions import Annotated as Ann +from typing_extensions import Literal as L from pandas_dataclasses.typing import ( Attr, Data, + FType, Index, Name, get_dtype, @@ -22,26 +24,38 @@ from pandas_dataclasses.typing import ( testdata_dtype = [ (Data[Any], None), (Data[None], None), - (Data[int], np.dtype("int64")), - (Data[Literal["i8"]], np.dtype("int64")), - (Data[Literal["boolean"]], pd.BooleanDtype()), + (Data[int], np.dtype("i8")), + (Data[L["i8"]], np.dtype("i8")), + (Data[L["boolean"]], pd.BooleanDtype()), + (Data[L["category"]], pd.CategoricalDtype()), (Index[Any], None), (Index[None], None), - (Index[int], np.dtype("int64")), - (Index[Literal["i8"]], np.dtype("int64")), - (Index[Literal["boolean"]], pd.BooleanDtype()), - (Optional[Data[float]], np.dtype("float64")), - (Optional[Index[float]], np.dtype("float64")), - (Union[Data[float], str], np.dtype("float64")), - (Union[Index[float], str], np.dtype("float64")), + (Index[int], np.dtype("i8")), + (Index[L["i8"]], np.dtype("i8")), + (Index[L["boolean"]], pd.BooleanDtype()), + (Index[L["category"]], pd.CategoricalDtype()), + (Ann[Data[float], "data"], np.dtype("f8")), + (Ann[Index[float], "index"], np.dtype("f8")), + (Union[Ann[Data[float], "data"], Ann[Any, "any"]], np.dtype("f8")), + (Union[Ann[Index[float], "index"], Ann[Any, "any"]], np.dtype("f8")), ] testdata_ftype = [ - (Attr[Any], "attr"), - (Data[Any], "data"), - (Index[Any], "index"), - (Name[Any], "name"), - (Any, "other"), + (Attr[Any], FType.ATTR), + (Data[Any], FType.DATA), + (Index[Any], FType.INDEX), + (Name[Any], FType.NAME), + (Any, FType.OTHER), + (Ann[Attr[Any], "attr"], FType.ATTR), + (Ann[Data[Any], "data"], FType.DATA), + (Ann[Index[Any], "index"], FType.INDEX), + (Ann[Name[Any], "name"], FType.NAME), + (Ann[Any, "other"], FType.OTHER), + (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], FType.ATTR), + (Union[Ann[Data[Any], "data"], Ann[Any, "any"]], FType.DATA), + (Union[Ann[Index[Any], "index"], Ann[Any, "any"]], FType.INDEX), + (Union[Ann[Name[Any], "name"], Ann[Any, "any"]], FType.NAME), + (Union[Ann[Any, "other"], Ann[Any, "any"]], FType.OTHER), ] testdata_name = [ @@ -49,10 +63,17 @@ testdata_name = [ (Data[Any], None), (Index[Any], None), (Name[Any], None), - (Annotated[Attr[Any], "attr"], "attr"), - (Annotated[Data[Any], "data"], "data"), - (Annotated[Index[Any], "index"], "index"), - (Annotated[Name[Any], "name"], "name"), + (Any, None), + (Ann[Attr[Any], "attr"], "attr"), + (Ann[Data[Any], "data"], "data"), + (Ann[Index[Any], "index"], "index"), + (Ann[Name[Any], "name"], "name"), + (Ann[Any, "other"], None), + (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], "attr"), + (Union[Ann[Data[Any], "data"], Ann[Any, "any"]], "data"), + (Union[Ann[Index[Any], "index"], Ann[Any, "any"]], "index"), + (Union[Ann[Name[Any], "name"], Ann[Any, "any"]], "name"), + (Union[Ann[Any, "other"], Ann[Any, "any"]], None), ] @@ -64,7 +85,7 @@ def test_get_dtype(tp: Any, dtype: Any) -> None: @mark.parametrize("tp, ftype", testdata_ftype) def test_get_ftype(tp: Any, ftype: Any) -> None: - assert get_ftype(tp).value == ftype + assert get_ftype(tp) is ftype @mark.parametrize("tp, name", 
testdata_name)
Fix support for union type in dataclass fields
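A minimal sketch of what this fix enables, mirroring the union-typed cases added to `tests/test_typing.py` in this record (the `tp`, `get_dtype`, and `get_name` usage comes straight from those tests):

```python
# With this fix, the extraction helpers resolve the first
# ftype-annotated member of a union-typed field hint.
from typing import Any, Union

from typing_extensions import Annotated as Ann
from pandas_dataclasses.typing import Data, get_dtype, get_name

tp = Union[Ann[Data[float], "data"], Ann[Any, "any"]]
get_dtype(tp)  # dtype('float64')
get_name(tp)   # 'data'
```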
0.0
5d2fca7857149e98cd50529f11fc385af42e5754
[ "tests/test_typing.py::test_get_name[tp9-None]", "tests/test_typing.py::test_get_name[tp10-attr]", "tests/test_typing.py::test_get_name[tp11-data]", "tests/test_typing.py::test_get_name[tp12-index]", "tests/test_typing.py::test_get_name[tp13-name]", "tests/test_typing.py::test_get_name[tp14-None]" ]
[ "tests/test_typing.py::test_get_dtype[tp0-None]", "tests/test_typing.py::test_get_dtype[tp1-None]", "tests/test_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_typing.py::test_get_dtype[tp4-dtype4]", "tests/test_typing.py::test_get_dtype[tp5-dtype5]", "tests/test_typing.py::test_get_dtype[tp6-None]", "tests/test_typing.py::test_get_dtype[tp7-None]", "tests/test_typing.py::test_get_dtype[tp8-dtype8]", "tests/test_typing.py::test_get_dtype[tp9-dtype9]", "tests/test_typing.py::test_get_dtype[tp10-dtype10]", "tests/test_typing.py::test_get_dtype[tp11-dtype11]", "tests/test_typing.py::test_get_dtype[tp12-dtype12]", "tests/test_typing.py::test_get_dtype[tp13-dtype13]", "tests/test_typing.py::test_get_dtype[tp14-dtype14]", "tests/test_typing.py::test_get_dtype[tp15-dtype15]", "tests/test_typing.py::test_get_ftype[tp0-FType.ATTR]", "tests/test_typing.py::test_get_ftype[tp1-FType.DATA]", "tests/test_typing.py::test_get_ftype[tp2-FType.INDEX]", "tests/test_typing.py::test_get_ftype[tp3-FType.NAME]", "tests/test_typing.py::test_get_ftype[tp4-FType.OTHER]", "tests/test_typing.py::test_get_ftype[tp5-FType.ATTR]", "tests/test_typing.py::test_get_ftype[tp6-FType.DATA]", "tests/test_typing.py::test_get_ftype[tp7-FType.INDEX]", "tests/test_typing.py::test_get_ftype[tp8-FType.NAME]", "tests/test_typing.py::test_get_ftype[tp9-FType.OTHER]", "tests/test_typing.py::test_get_ftype[tp10-FType.ATTR]", "tests/test_typing.py::test_get_ftype[tp11-FType.DATA]", "tests/test_typing.py::test_get_ftype[tp12-FType.INDEX]", "tests/test_typing.py::test_get_ftype[tp13-FType.NAME]", "tests/test_typing.py::test_get_ftype[tp14-FType.OTHER]", "tests/test_typing.py::test_get_name[tp0-None]", "tests/test_typing.py::test_get_name[tp1-None]", "tests/test_typing.py::test_get_name[tp2-None]", "tests/test_typing.py::test_get_name[tp3-None]", "tests/test_typing.py::test_get_name[tp4-None]", "tests/test_typing.py::test_get_name[tp5-attr]", "tests/test_typing.py::test_get_name[tp6-data]", "tests/test_typing.py::test_get_name[tp7-index]", "tests/test_typing.py::test_get_name[tp8-name]" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-05-12 15:41:17+00:00
mit
1,226
astropenguin__xarray-dataclasses-113
diff --git a/README.md b/README.md index 7f5efcc..cfd55fd 100644 --- a/README.md +++ b/README.md @@ -291,12 +291,15 @@ class Image(AsDataArray): y: Coordof[YAxis] = 0 ``` -### Custom DataArray and Dataset factories +### Options for DataArray and Dataset creation + +For customization, users can add a special class attribute, `__dataoptions__`, to a DataArray or Dataset class. +A custom factory for DataArray or Dataset creation is only supported in the current implementation. -For customization, users can use a function or a class to create an initial DataArray or Dataset object by specifying a special class attribute, `__dataarray_factory__` or `__dataset_factory__`, respectively. ```python import xarray as xr +from xarray_dataclasses import DataOptions class Custom(xr.DataArray): @@ -308,19 +311,23 @@ class Custom(xr.DataArray): print("Custom method!") +dataoptions = DataOptions(Custom) + + @dataclass class Image(AsDataArray): """Specs for a monochromatic image.""" + __dataoptions__ = dataoptions + data: Data[tuple[X, Y], float] x: Coord[X, int] = 0 y: Coord[Y, int] = 0 - __dataarray_factory__ = Custom image = Image.ones([3, 3]) -isinstance(image, Custom) # True -image.custom_method() # Custom method! +isinstance(image, Custom) # True +image.custom_method() # Custom method! ``` ### DataArray and Dataset creation without shorthands diff --git a/xarray_dataclasses/__init__.py b/xarray_dataclasses/__init__.py index 398052a..11f3de4 100644 --- a/xarray_dataclasses/__init__.py +++ b/xarray_dataclasses/__init__.py @@ -20,6 +20,7 @@ from . import dataarray from . import dataset from . import deprecated from . import datamodel +from . import dataoptions from . import typing @@ -28,6 +29,7 @@ from .dataarray import * from .dataset import * from .deprecated import * from .datamodel import * +from .dataoptions import * from .typing import * diff --git a/xarray_dataclasses/dataarray.py b/xarray_dataclasses/dataarray.py index 10a99ba..7d6afa5 100644 --- a/xarray_dataclasses/dataarray.py +++ b/xarray_dataclasses/dataarray.py @@ -5,7 +5,7 @@ __all__ = ["AsDataArray", "asdataarray"] from dataclasses import Field from functools import wraps from types import MethodType -from typing import Any, Callable, Dict, Type, TypeVar, Union, overload +from typing import Any, Callable, Dict, Optional, Type, TypeVar, Union, overload # dependencies @@ -16,8 +16,13 @@ from typing_extensions import ParamSpec, Protocol # submodules -from .datamodel import DataModel, Reference -from .typing import Order, Shape, Sizes +from .datamodel import DataModel +from .dataoptions import DataOptions +from .typing import DataType, Order, Shape, Sizes + + +# constants +DEFAULT_OPTIONS = DataOptions(xr.DataArray) # type hints @@ -38,7 +43,7 @@ class DataArrayClass(Protocol[P, TDataArray_]): __init__: Callable[P, None] __dataclass_fields__: Dict[str, Field[Any]] - __dataarray_factory__: Callable[..., TDataArray_] + __dataoptions__: DataOptions[TDataArray_] # custom classproperty @@ -65,8 +70,8 @@ class classproperty: @overload def asdataarray( dataclass: DataArrayClass[Any, TDataArray], - reference: Reference = None, - dataarray_factory: Any = xr.DataArray, + reference: Optional[DataType] = None, + dataoptions: Any = DEFAULT_OPTIONS, ) -> TDataArray: ... 
@@ -74,8 +79,8 @@ def asdataarray( @overload def asdataarray( dataclass: DataClass[Any], - reference: Reference = None, - dataarray_factory: Callable[..., TDataArray] = xr.DataArray, + reference: Optional[DataType] = None, + dataoptions: DataOptions[TDataArray] = DEFAULT_OPTIONS, ) -> TDataArray: ... @@ -83,26 +88,32 @@ def asdataarray( def asdataarray( dataclass: Any, reference: Any = None, - dataarray_factory: Any = xr.DataArray, + dataoptions: Any = DEFAULT_OPTIONS, ) -> Any: """Create a DataArray object from a dataclass object. Args: dataclass: Dataclass object that defines typed DataArray. reference: DataArray or Dataset object as a reference of shape. - dataset_factory: Factory function of DataArray. + dataoptions: Options for DataArray creation. Returns: DataArray object created from the dataclass object. """ try: - dataarray_factory = dataclass.__dataarray_factory__ + # for backward compatibility (deprecated in v1.0.0) + dataoptions = DataOptions(dataclass.__dataarray_factory__) + except AttributeError: + pass + + try: + dataoptions = dataclass.__dataoptions__ except AttributeError: pass model = DataModel.from_dataclass(dataclass) - dataarray = dataarray_factory(model.data[0](reference)) + dataarray = dataoptions.factory(model.data[0](reference)) for coord in model.coord: dataarray.coords.update({coord.name: coord(dataarray)}) @@ -119,9 +130,7 @@ def asdataarray( class AsDataArray: """Mix-in class that provides shorthand methods.""" - def __dataarray_factory__(self, data: Any = None) -> xr.DataArray: - """Default DataArray factory (xarray.DataArray).""" - return xr.DataArray(data) + __dataoptions__ = DEFAULT_OPTIONS @classproperty def new(cls: Type[DataArrayClass[P, TDataArray]]) -> Callable[P, TDataArray]: diff --git a/xarray_dataclasses/datamodel.py b/xarray_dataclasses/datamodel.py index a865899..2d3e3c6 100644 --- a/xarray_dataclasses/datamodel.py +++ b/xarray_dataclasses/datamodel.py @@ -3,7 +3,7 @@ __all__ = ["DataModel"] # standard library from dataclasses import Field, dataclass, field, is_dataclass -from typing import Any, List, Optional, Type, Union, cast +from typing import Any, List, Optional, Type, cast # dependencies @@ -17,6 +17,7 @@ from .deprecated import get_type_hints from .typing import ( ArrayLike, DataClass, + DataType, Dims, Dtype, FieldType, @@ -28,8 +29,7 @@ from .typing import ( # type hints -DataType = TypedDict("DataType", dims=Dims, dtype=Dtype) -Reference = Union[xr.DataArray, xr.Dataset, None] +DimsDtype = TypedDict("DimsDtype", dims=Dims, dtype=Dtype) # field models @@ -43,13 +43,13 @@ class Data: value: Any """Value assigned to the field.""" - type: DataType + type: DimsDtype """Type (dims and dtype) of the field.""" factory: Optional[Type[DataClass]] = None """Factory dataclass to create a DataArray object.""" - def __call__(self, reference: Reference = None) -> xr.DataArray: + def __call__(self, reference: Optional[DataType] = None) -> xr.DataArray: """Create a DataArray object from the value and a reference.""" from .dataarray import asdataarray @@ -173,7 +173,7 @@ def typedarray( data: Any, dims: Dims, dtype: Dtype, - reference: Reference = None, + reference: Optional[DataType] = None, ) -> xr.DataArray: """Create a DataArray object with given dims and dtype. 
diff --git a/xarray_dataclasses/dataoptions.py b/xarray_dataclasses/dataoptions.py new file mode 100644 index 0000000..3d479a3 --- /dev/null +++ b/xarray_dataclasses/dataoptions.py @@ -0,0 +1,23 @@ +__all__ = ["DataOptions"] + + +# standard library +from dataclasses import dataclass +from typing import Callable, Generic, TypeVar + + +# submodules +from .typing import DataType + + +# type hints +TDataType = TypeVar("TDataType", bound=DataType) + + +# dataclasses +@dataclass(frozen=True) +class DataOptions(Generic[TDataType]): + """Options for DataArray or Dataset creation.""" + + factory: Callable[..., TDataType] + """Factory function for DataArray or Dataset.""" diff --git a/xarray_dataclasses/dataset.py b/xarray_dataclasses/dataset.py index f71a1c9..507a954 100644 --- a/xarray_dataclasses/dataset.py +++ b/xarray_dataclasses/dataset.py @@ -5,7 +5,7 @@ __all__ = ["AsDataset", "asdataset"] from dataclasses import Field from functools import wraps from types import MethodType -from typing import Any, Callable, Dict, Type, TypeVar, overload +from typing import Any, Callable, Dict, Optional, Type, TypeVar, overload # dependencies @@ -16,8 +16,13 @@ from typing_extensions import ParamSpec, Protocol # submodules -from .datamodel import DataModel, Reference -from .typing import Order, Sizes +from .datamodel import DataModel +from .dataoptions import DataOptions +from .typing import DataType, Order, Sizes + + +# constants +DEFAULT_OPTIONS = DataOptions(xr.Dataset) # type hints @@ -38,7 +43,7 @@ class DatasetClass(Protocol[P, TDataset_]): __init__: Callable[P, None] __dataclass_fields__: Dict[str, Field[Any]] - __dataset_factory__: Callable[..., TDataset_] + __dataoptions__: DataOptions[TDataset_] # custom classproperty @@ -65,8 +70,8 @@ class classproperty: @overload def asdataset( dataclass: DatasetClass[Any, TDataset], - reference: Reference = None, - dataset_factory: Any = xr.Dataset, + reference: Optional[DataType] = None, + dataoptions: Any = DEFAULT_OPTIONS, ) -> TDataset: ... @@ -74,8 +79,8 @@ def asdataset( @overload def asdataset( dataclass: DataClass[Any], - reference: Reference = None, - dataset_factory: Callable[..., TDataset] = xr.Dataset, + reference: Optional[DataType] = None, + dataoptions: DataOptions[TDataset] = DEFAULT_OPTIONS, ) -> TDataset: ... @@ -83,26 +88,32 @@ def asdataset( def asdataset( dataclass: Any, reference: Any = None, - dataset_factory: Any = xr.Dataset, + dataoptions: Any = DEFAULT_OPTIONS, ) -> Any: """Create a Dataset object from a dataclass object. Args: dataclass: Dataclass object that defines typed Dataset. reference: DataArray or Dataset object as a reference of shape. - dataset_factory: Factory function of Dataset. + dataoptions: Options for Dataset creation. Returns: Dataset object created from the dataclass object. 
""" try: - dataset_factory = dataclass.__dataset_factory__ + # for backward compatibility (deprecated in v1.0.0) + dataoptions = DataOptions(dataclass.__dataset_factory__) + except AttributeError: + pass + + try: + dataoptions = dataclass.__dataoptions__ except AttributeError: pass model = DataModel.from_dataclass(dataclass) - dataset = dataset_factory() + dataset = dataoptions.factory() for data in model.data: dataset.update({data.name: data(reference)}) @@ -119,9 +130,7 @@ def asdataset( class AsDataset: """Mix-in class that provides shorthand methods.""" - def __dataset_factory__(self, data_vars: Any = None) -> xr.Dataset: - """Default Dataset factory (xarray.Dataset).""" - return xr.Dataset(data_vars) + __dataoptions__ = DEFAULT_OPTIONS @classproperty def new(cls: Type[DatasetClass[P, TDataset]]) -> Callable[P, TDataset]: diff --git a/xarray_dataclasses/typing.py b/xarray_dataclasses/typing.py index a6c0043..fc67d01 100644 --- a/xarray_dataclasses/typing.py +++ b/xarray_dataclasses/typing.py @@ -33,6 +33,7 @@ from typing import ( # dependencies +import xarray as xr from typing_extensions import ( Annotated, Literal, @@ -72,6 +73,7 @@ class FieldType(Enum): # type hints +DataType = Union[xr.DataArray, xr.Dataset] Dims = Tuple[str, ...] Dtype = Optional[str] Order = Literal["C", "F"]
astropenguin/xarray-dataclasses
8b68233fa0ab8faddeb48908fd0e10117dea7a43
diff --git a/tests/test_dataarray.py b/tests/test_dataarray.py index aa77377..88f982b 100644 --- a/tests/test_dataarray.py +++ b/tests/test_dataarray.py @@ -1,6 +1,6 @@ # standard library from dataclasses import dataclass -from typing import Any, Tuple +from typing import Tuple # third-party packages @@ -11,6 +11,7 @@ from typing_extensions import Literal # submodules from xarray_dataclasses.dataarray import AsDataArray +from xarray_dataclasses.dataoptions import DataOptions from xarray_dataclasses.typing import Attr, Coord, Data, Name @@ -31,19 +32,21 @@ class Custom(xr.DataArray): __slots__ = () +dataoptions = DataOptions(Custom) + + @dataclass class Image(AsDataArray): """Specs for a monochromatic image.""" + __dataoptions__ = dataoptions + data: Data[Tuple[X, Y], float] x: Coord[X, int] = 0 y: Coord[Y, int] = 0 units: Attr[str] = "cd / m^2" name: Name[str] = "luminance" - def __dataarray_factory__(self, data: Any = None) -> Custom: - return Custom(data) - # test datasets created = Image.ones(SHAPE) diff --git a/tests/test_dataset.py b/tests/test_dataset.py index 35c4e1d..6169033 100644 --- a/tests/test_dataset.py +++ b/tests/test_dataset.py @@ -1,6 +1,6 @@ # standard library from dataclasses import dataclass -from typing import Any, Tuple +from typing import Tuple # third-party packages @@ -12,6 +12,7 @@ from typing_extensions import Literal # submodules from xarray_dataclasses.dataarray import AsDataArray from xarray_dataclasses.dataset import AsDataset +from xarray_dataclasses.dataoptions import DataOptions from xarray_dataclasses.typing import Attr, Coord, Data # constants @@ -29,6 +30,9 @@ class Custom(xr.Dataset): __slots__ = () +dataoptions = DataOptions(Custom) + + @dataclass class Image(AsDataArray): """Specs for a monochromatic image.""" @@ -40,6 +44,8 @@ class Image(AsDataArray): class ColorImage(AsDataset): """Specs for a color image.""" + __dataoptions__ = dataoptions + red: Data[Tuple[X, Y], float] green: Data[Tuple[X, Y], float] blue: Data[Tuple[X, Y], float] @@ -47,9 +53,6 @@ class ColorImage(AsDataset): y: Coord[Y, int] = 0 units: Attr[str] = "cd / m^2" - def __dataset_factory__(self, data_vars: Any = None) -> Custom: - return Custom(data_vars) - # test datasets created = ColorImage.new(
Add data options

Add `__dataoptions__` that allows users to control the detailed behavior of DataArray or Dataset creation.
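A minimal sketch of the new attribute, adapted from the README and test changes in this record (`Custom`, `Image`, and the `ones` shorthand all appear there; the spec is trimmed to a single data field for brevity):

```python
# A custom DataArray subclass is registered through the new
# __dataoptions__ class attribute instead of __dataarray_factory__.
from dataclasses import dataclass
from typing import Tuple

import xarray as xr
from typing_extensions import Literal
from xarray_dataclasses.dataarray import AsDataArray
from xarray_dataclasses.dataoptions import DataOptions
from xarray_dataclasses.typing import Data

X = Literal["x"]
Y = Literal["y"]


class Custom(xr.DataArray):
    """Custom DataArray for demonstration."""

    __slots__ = ()


@dataclass
class Image(AsDataArray):
    """Specs for a monochromatic image."""

    __dataoptions__ = DataOptions(Custom)

    data: Data[Tuple[X, Y], float]


image = Image.ones((3, 3))
isinstance(image, Custom)  # True
```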
0.0
8b68233fa0ab8faddeb48908fd0e10117dea7a43
[ "tests/test_dataarray.py::test_type", "tests/test_dataarray.py::test_data", "tests/test_dataarray.py::test_dtype", "tests/test_dataarray.py::test_dims", "tests/test_dataarray.py::test_attrs", "tests/test_dataarray.py::test_name", "tests/test_dataset.py::test_type", "tests/test_dataset.py::test_data_vars", "tests/test_dataset.py::test_dims", "tests/test_dataset.py::test_attrs" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_added_files", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-11-29 18:06:36+00:00
mit
1,227
astropenguin__xarray-dataclasses-142
diff --git a/xarray_dataclasses/datamodel.py b/xarray_dataclasses/datamodel.py index c4db0b4..675c98b 100644 --- a/xarray_dataclasses/datamodel.py +++ b/xarray_dataclasses/datamodel.py @@ -14,7 +14,6 @@ from typing_extensions import Literal, ParamSpec, get_type_hints # submodules from .typing import ( - ArrayLike, DataClass, DataType, Dims, @@ -254,27 +253,27 @@ def get_typedarray( DataArray object with given dims and dtype. """ - if isinstance(data, ArrayLike): - array = cast(np.ndarray, data) - else: - array = np.asarray(data) + try: + data.__array__ + except AttributeError: + data = np.asarray(data) if dtype is not None: - array = array.astype(dtype, copy=False) + data = data.astype(dtype, copy=False) - if array.ndim == len(dims): - dataarray = xr.DataArray(array, dims=dims) - elif array.ndim == 0 and reference is not None: - dataarray = xr.DataArray(array) + if data.ndim == len(dims): + dataarray = xr.DataArray(data, dims=dims) + elif data.ndim == 0 and reference is not None: + dataarray = xr.DataArray(data) else: raise ValueError( "Could not create a DataArray object from data. " - f"Mismatch between shape {array.shape} and dims {dims}." + f"Mismatch between shape {data.shape} and dims {dims}." ) if reference is None: return dataarray - - diff_dims = set(reference.dims) - set(dims) - subspace = reference.isel({dim: 0 for dim in diff_dims}) - return dataarray.broadcast_like(subspace) + else: + ddims = set(reference.dims) - set(dims) + reference = reference.isel({dim: 0 for dim in ddims}) + return dataarray.broadcast_like(reference) diff --git a/xarray_dataclasses/typing.py b/xarray_dataclasses/typing.py index f8113ed..c5eea7f 100644 --- a/xarray_dataclasses/typing.py +++ b/xarray_dataclasses/typing.py @@ -23,6 +23,7 @@ from enum import Enum from typing import ( Any, ClassVar, + Collection, Dict, Hashable, Optional, @@ -44,7 +45,6 @@ from typing_extensions import ( get_args, get_origin, get_type_hints, - runtime_checkable, ) @@ -65,23 +65,16 @@ Shape = Union[Sequence[int], int] Sizes = Dict[str, int] -@runtime_checkable -class ArrayLike(Protocol[TDims, TDtype]): - """Type hint for array-like objects.""" +class Labeled(Protocol[TDims]): + """Type hint for labeled objects.""" - def astype(self: T, dtype: Any) -> T: - """Method to convert data type of the object.""" - ... + pass - @property - def ndim(self) -> int: - """Number of dimensions of the object.""" - ... - @property - def shape(self) -> Tuple[int, ...]: - """Shape of the object.""" - ... +class Collection(Labeled[TDims], Collection[TDtype], Protocol): + """Type hint for labeled collection objects.""" + + pass class DataClass(Protocol[P]): @@ -138,7 +131,7 @@ Reference: """ -Coord = Annotated[Union[ArrayLike[TDims, TDtype], TDtype], FieldType.COORD] +Coord = Annotated[Union[Collection[TDims, TDtype], TDtype], FieldType.COORD] """Type hint to define coordinate fields (``Coord[TDims, TDtype]``). Example: @@ -189,7 +182,7 @@ Hint: """ -Data = Annotated[Union[ArrayLike[TDims, TDtype], TDtype], FieldType.DATA] +Data = Annotated[Union[Collection[TDims, TDtype], TDtype], FieldType.DATA] """Type hint to define data fields (``Coordof[TDims, TDtype]``). 
Examples: @@ -267,7 +260,7 @@ def get_dims(type_: Any) -> Dims: args = get_args(type_) origin = get_origin(type_) - if origin is ArrayLike: + if origin is Collection: return get_dims(args[0]) if origin is tuple or origin is Tuple: @@ -298,7 +291,7 @@ def get_dtype(type_: Any) -> Dtype: args = get_args(type_) origin = get_origin(type_) - if origin is ArrayLike: + if origin is Collection: return get_dtype(args[1]) if origin is Literal:
astropenguin/xarray-dataclasses
364216ec6813ec09703c45a1ec5b133172debbc3
diff --git a/tests/test_typing.py b/tests/test_typing.py index f03af26..f13a8f3 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -9,8 +9,8 @@ from typing_extensions import Annotated, Literal # submodules from xarray_dataclasses.typing import ( - ArrayLike, Attr, + Collection, Coord, Data, Name, @@ -34,10 +34,10 @@ testdata_dims = [ (Tuple[()], ()), (Tuple[X], ("x",)), (Tuple[X, Y], ("x", "y")), - (ArrayLike[X, Any], ("x",)), - (ArrayLike[Tuple[()], Any], ()), - (ArrayLike[Tuple[X], Any], ("x",)), - (ArrayLike[Tuple[X, Y], Any], ("x", "y")), + (Collection[X, Any], ("x",)), + (Collection[Tuple[()], Any], ()), + (Collection[Tuple[X], Any], ("x",)), + (Collection[Tuple[X, Y], Any], ("x", "y")), ] testdata_dtype = [ @@ -45,10 +45,10 @@ testdata_dtype = [ (NoneType, None), (Int64, "int64"), (int, "int"), - (ArrayLike[Any, Any], None), - (ArrayLike[Any, NoneType], None), - (ArrayLike[Any, Int64], "int64"), - (ArrayLike[Any, int], "int"), + (Collection[Any, Any], None), + (Collection[Any, NoneType], None), + (Collection[Any, Int64], "int64"), + (Collection[Any, int], "int"), ] testdata_field_type = [
Update protocol for public type hints

This issue will update the protocol class used for the public type hints (`Coord`, `Data`, ...) so that it accepts [collection](https://docs.python.org/3/library/collections.abc.html#collections.abc.Collection)-typed objects. Note that this update will not affect any runtime behavior exposed to users, but will change (improve) the behavior of static type checking.
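A minimal sketch of what the updated protocol permits; the `Series` class is hypothetical, though the imports match the test modules in this repo. The point is static typing only: plain lists already worked at runtime, and after this change they also satisfy the type checker as `Collection`s:

```python
# Plain Python lists (Collections) now type-check against Coord/Data.
from dataclasses import dataclass
from typing import Tuple

from typing_extensions import Literal
from xarray_dataclasses.dataarray import AsDataArray
from xarray_dataclasses.typing import Coord, Data

X = Literal["x"]


@dataclass
class Series(AsDataArray):
    """One-dimensional data with a coordinate (hypothetical example)."""

    data: Data[Tuple[X], float]
    x: Coord[Tuple[X], int] = 0


# Both arguments are plain lists, i.e. Collection[float] / Collection[int].
ser = Series.new([1.0, 2.0, 3.0], x=[0, 1, 2])
```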
0.0
364216ec6813ec09703c45a1ec5b133172debbc3
[ "tests/test_typing.py::test_get_dims[type_0-dims0]", "tests/test_typing.py::test_get_dims[type_1-dims1]", "tests/test_typing.py::test_get_dims[type_2-dims2]", "tests/test_typing.py::test_get_dims[type_3-dims3]", "tests/test_typing.py::test_get_dims[type_4-dims4]", "tests/test_typing.py::test_get_dims[type_5-dims5]", "tests/test_typing.py::test_get_dims[type_6-dims6]", "tests/test_typing.py::test_get_dims[type_7-dims7]", "tests/test_typing.py::test_get_dtype[type_0-None]", "tests/test_typing.py::test_get_dtype[NoneType-None]", "tests/test_typing.py::test_get_dtype[type_2-int64]", "tests/test_typing.py::test_get_dtype[int-int]", "tests/test_typing.py::test_get_dtype[type_4-None]", "tests/test_typing.py::test_get_dtype[type_5-None]", "tests/test_typing.py::test_get_dtype[type_6-int64]", "tests/test_typing.py::test_get_dtype[type_7-int]", "tests/test_typing.py::test_get_field_type[type_0-attr]", "tests/test_typing.py::test_get_field_type[type_1-coord]", "tests/test_typing.py::test_get_field_type[type_2-data]", "tests/test_typing.py::test_get_field_type[type_3-name]", "tests/test_typing.py::test_get_repr_type[int-int]", "tests/test_typing.py::test_get_repr_type[type_1-int]", "tests/test_typing.py::test_get_repr_type[type_2-int]", "tests/test_typing.py::test_get_repr_type[type_3-int]" ]
[]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-03-05 13:58:29+00:00
mit
1,228
astropenguin__xarray-dataclasses-163
diff --git a/xarray_dataclasses/__init__.py b/xarray_dataclasses/__init__.py index 81cc1e9..dd19281 100644 --- a/xarray_dataclasses/__init__.py +++ b/xarray_dataclasses/__init__.py @@ -7,6 +7,7 @@ from .dataarray import * from .dataset import * from .datamodel import * from .dataoptions import * +from .specs import * from .typing import * diff --git a/xarray_dataclasses/datamodel.py b/xarray_dataclasses/datamodel.py index f488393..06fb0e0 100644 --- a/xarray_dataclasses/datamodel.py +++ b/xarray_dataclasses/datamodel.py @@ -17,16 +17,16 @@ from typing_extensions import Literal, ParamSpec, get_type_hints from .typing import ( AnyDType, AnyField, - DataClass, AnyXarray, + DataClass, Dims, - FType, + Role, get_annotated, get_dataclass, get_dims, get_dtype, - get_ftype, get_name, + get_role, ) @@ -209,29 +209,29 @@ def eval_dataclass(dataclass: AnyDataClass[PInit]) -> None: def get_entry(field: AnyField, value: Any) -> Optional[AnyEntry]: """Create an entry from a field and its value.""" - ftype = get_ftype(field.type) + role = get_role(field.type) name = get_name(field.type, field.name) - if ftype is FType.ATTR or ftype is FType.NAME: + if role is Role.ATTR or role is Role.NAME: return AttrEntry( name=name, - tag=ftype.value, + tag=role.value, value=value, type=get_annotated(field.type), ) - if ftype is FType.COORD or ftype is FType.DATA: + if role is Role.COORD or role is Role.DATA: try: return DataEntry( name=name, - tag=ftype.value, + tag=role.value, base=get_dataclass(field.type), value=value, ) except TypeError: return DataEntry( name=name, - tag=ftype.value, + tag=role.value, dims=get_dims(field.type), dtype=get_dtype(field.type), value=value, diff --git a/xarray_dataclasses/specs.py b/xarray_dataclasses/specs.py new file mode 100644 index 0000000..ce6f3fb --- /dev/null +++ b/xarray_dataclasses/specs.py @@ -0,0 +1,201 @@ +__all__ = ["DataOptions", "DataSpec"] + + +# standard library +from dataclasses import dataclass, field, fields +from functools import lru_cache +from typing import Any, Dict, Generic, Hashable, Optional, Type, TypeVar + + +# dependencies +from typing_extensions import Literal, ParamSpec, TypeAlias, get_type_hints + + +# submodules +from .typing import ( + AnyDType, + AnyField, + AnyXarray, + DataClass, + Dims, + Role, + get_annotated, + get_dataclass, + get_dims, + get_dtype, + get_name, + get_role, +) + + +# type hints +AnySpec: TypeAlias = "ArraySpec | ScalarSpec" +PInit = ParamSpec("PInit") +TReturn = TypeVar("TReturn", AnyXarray, None) + + +# runtime classes +@dataclass(frozen=True) +class ArraySpec: + """Specification of an array.""" + + name: Hashable + """Name of the array.""" + + role: Literal["coord", "data"] + """Role of the array.""" + + dims: Dims + """Dimensions of the array.""" + + dtype: Optional[AnyDType] + """Data type of the array.""" + + default: Any + """Default value of the array.""" + + origin: Optional[Type[DataClass[Any]]] = None + """Dataclass as origins of name, dims, and dtype.""" + + def __post_init__(self) -> None: + """Update name, dims, and dtype if origin exists.""" + if self.origin is None: + return + + dataspec = DataSpec.from_dataclass(self.origin) + setattr = object.__setattr__ + + for spec in dataspec.specs.of_name.values(): + setattr(self, "name", spec.default) + break + + for spec in dataspec.specs.of_data.values(): + setattr(self, "dims", spec.dims) + setattr(self, "dtype", spec.dtype) + break + + +@dataclass(frozen=True) +class ScalarSpec: + """Specification of a scalar.""" + + name: Hashable + """Name of the scalar.""" + + 
role: Literal["attr", "name"] + """Role of the scalar.""" + + type: Any + """Type (hint) of the scalar.""" + + default: Any + """Default value of the scalar.""" + + +class SpecDict(Dict[str, AnySpec]): + """Dictionary of any specifications.""" + + @property + def of_attr(self) -> Dict[str, ScalarSpec]: + """Limit to attribute specifications.""" + return {k: v for k, v in self.items() if v.role == "attr"} + + @property + def of_coord(self) -> Dict[str, ArraySpec]: + """Limit to coordinate specifications.""" + return {k: v for k, v in self.items() if v.role == "coord"} + + @property + def of_data(self) -> Dict[str, ArraySpec]: + """Limit to data specifications.""" + return {k: v for k, v in self.items() if v.role == "data"} + + @property + def of_name(self) -> Dict[str, ScalarSpec]: + """Limit to name specifications.""" + return {k: v for k, v in self.items() if v.role == "name"} + + +@dataclass(frozen=True) +class DataOptions(Generic[TReturn]): + """Options for xarray data creation.""" + + factory: Type[TReturn] + """Factory for xarray data creation.""" + + +@dataclass(frozen=True) +class DataSpec: + """Data specification of an xarray dataclass.""" + + specs: SpecDict = field(default_factory=SpecDict) + """Dictionary of any specifications.""" + + options: DataOptions[Any] = DataOptions(type(None)) + """Options for xarray data creation.""" + + @classmethod + def from_dataclass( + cls, + dataclass: Type[DataClass[PInit]], + dataoptions: Optional[DataOptions[Any]] = None, + ) -> "DataSpec": + """Create a data specification from a dataclass.""" + specs = SpecDict() + + for field in fields(eval_types(dataclass)): + spec = get_spec(field) + + if spec is not None: + specs[field.name] = spec + + if dataoptions is None: + return cls(specs) + else: + return cls(specs, dataoptions) + + +# runtime functions +@lru_cache(maxsize=None) +def eval_types(dataclass: Type[DataClass[PInit]]) -> Type[DataClass[PInit]]: + """Evaluate field types of a dataclass.""" + types = get_type_hints(dataclass, include_extras=True) + + for field in fields(dataclass): + field.type = types[field.name] + + return dataclass + + +@lru_cache(maxsize=None) +def get_spec(field: AnyField) -> Optional[AnySpec]: + """Convert a dataclass field to a specification.""" + name = get_name(field.type, field.name) + role = get_role(field.type) + + if role is Role.DATA or role is Role.COORD: + try: + return ArraySpec( + name=name, + role=role.value, + dims=(), # dummy + dtype=None, # dummy + default=field.default, + origin=get_dataclass(field.type), + ) + except TypeError: + return ArraySpec( + name=name, + role=role.value, + dims=get_dims(field.type), + dtype=get_dtype(field.type), + default=field.default, + ) + + if role is Role.ATTR or role is Role.NAME: + return ScalarSpec( + name=name, + role=role.value, + type=get_annotated(field.type), + default=field.default, + ) diff --git a/xarray_dataclasses/typing.py b/xarray_dataclasses/typing.py index bedb65f..4d21d28 100644 --- a/xarray_dataclasses/typing.py +++ b/xarray_dataclasses/typing.py @@ -64,7 +64,7 @@ THashable = TypeVar("THashable", bound=Hashable) AnyArray: TypeAlias = "np.ndarray[Any, Any]" AnyDType: TypeAlias = "np.dtype[Any]" AnyField: TypeAlias = "Field[Any]" -AnyXarray = Union[xr.DataArray, xr.Dataset] +AnyXarray: TypeAlias = "xr.DataArray | xr.Dataset" Dims = Tuple[str, ...] 
Order = Literal["C", "F"] Shape = Union[Sequence[int], int] @@ -87,7 +87,7 @@ class Labeled(Generic[TDims]): # type hints (public) -class FType(Enum): +class Role(Enum): """Annotations for typing dataclass fields.""" ATTR = "attr" @@ -107,14 +107,14 @@ class FType(Enum): @classmethod def annotates(cls, tp: Any) -> bool: - """Check if any ftype annotates a type hint.""" + """Check if any role annotates a type hint.""" if get_origin(tp) is not Annotated: return False return any(isinstance(arg, cls) for arg in get_args(tp)) -Attr = Annotated[T, FType.ATTR] +Attr = Annotated[T, Role.ATTR] """Type hint for attribute fields (``Attr[T]``). Example: @@ -137,7 +137,7 @@ Reference: """ -Coord = Annotated[Union[Labeled[TDims], Collection[TDType], TDType], FType.COORD] +Coord = Annotated[Union[Labeled[TDims], Collection[TDType], TDType], Role.COORD] """Type hint for coordinate fields (``Coord[TDims, TDType]``). Example: @@ -156,7 +156,7 @@ Hint: """ -Coordof = Annotated[Union[TDataClass, Any], FType.COORD] +Coordof = Annotated[Union[TDataClass, Any], Role.COORD] """Type hint for coordinate fields (``Coordof[TDataClass]``). Unlike ``Coord``, it specifies a dataclass that defines a DataArray class. @@ -188,7 +188,7 @@ Hint: """ -Data = Annotated[Union[Labeled[TDims], Collection[TDType], TDType], FType.DATA] +Data = Annotated[Union[Labeled[TDims], Collection[TDType], TDType], Role.DATA] """Type hint for data fields (``Coordof[TDims, TDType]``). Example: @@ -209,7 +209,7 @@ Example: """ -Dataof = Annotated[Union[TDataClass, Any], FType.DATA] +Dataof = Annotated[Union[TDataClass, Any], Role.DATA] """Type hint for data fields (``Coordof[TDataClass]``). Unlike ``Data``, it specifies a dataclass that defines a DataArray class. @@ -236,7 +236,7 @@ Hint: """ -Name = Annotated[THashable, FType.NAME] +Name = Annotated[THashable, Role.NAME] """Type hint for name fields (``Name[THashable]``). 
Example: @@ -272,19 +272,19 @@ def find_annotated(tp: Any) -> Iterable[Any]: def get_annotated(tp: Any) -> Any: - """Extract the first ftype-annotated type.""" - for annotated in filter(FType.annotates, find_annotated(tp)): + """Extract the first role-annotated type.""" + for annotated in filter(Role.annotates, find_annotated(tp)): return deannotate(annotated) - raise TypeError("Could not find any ftype-annotated type.") + raise TypeError("Could not find any role-annotated type.") def get_annotations(tp: Any) -> Tuple[Any, ...]: - """Extract annotations of the first ftype-annotated type.""" - for annotated in filter(FType.annotates, find_annotated(tp)): + """Extract annotations of the first role-annotated type.""" + for annotated in filter(Role.annotates, find_annotated(tp)): return get_args(annotated)[1:] - raise TypeError("Could not find any ftype-annotated type.") + raise TypeError("Could not find any role-annotated type.") def get_dataclass(tp: Any) -> Type[DataClass[Any]]: @@ -341,14 +341,6 @@ def get_dtype(tp: Any) -> Optional[AnyDType]: return np.dtype(dtype) -def get_ftype(tp: Any, default: FType = FType.OTHER) -> FType: - """Extract an ftype if found or return given default.""" - try: - return get_annotations(tp)[0] - except TypeError: - return default - - def get_name(tp: Any, default: Hashable = None) -> Hashable: """Extract a name if found or return given default.""" try: @@ -361,3 +353,11 @@ def get_name(tp: Any, default: Hashable = None) -> Hashable: return annotation return default + + +def get_role(tp: Any, default: Role = Role.OTHER) -> Role: + """Extract a role if found or return given default.""" + try: + return get_annotations(tp)[0] + except TypeError: + return default
astropenguin/xarray-dataclasses
214ab6b1bf1c35bded5af1ff8ad0ad05779edee8
diff --git a/tests/test_specs.py b/tests/test_specs.py new file mode 100644 index 0000000..b0589c7 --- /dev/null +++ b/tests/test_specs.py @@ -0,0 +1,181 @@ +# standard library +from dataclasses import MISSING, dataclass +from typing import Tuple + + +# dependencies +import numpy as np +import xarray as xr +from typing_extensions import Annotated as Ann +from typing_extensions import Literal as L +from xarray_dataclasses.specs import DataOptions, DataSpec +from xarray_dataclasses.typing import Attr, Coordof, Data, Name + + +# type hints +DataDims = Tuple[L["lon"], L["lat"], L["time"]] + + +# test datasets +@dataclass +class Lon: + """Specification of relative longitude.""" + + data: Data[L["lon"], float] + units: Attr[str] = "deg" + name: Name[str] = "Relative longitude" + + +@dataclass +class Lat: + """Specification of relative latitude.""" + + data: Data[L["lat"], float] + units: Attr[str] = "m" + name: Name[str] = "Relative latitude" + + +@dataclass +class Time: + """Specification of time.""" + + data: Data[L["time"], L["datetime64[ns]"]] + name: Name[str] = "Time in UTC" + + +@dataclass +class Weather: + """Time-series spatial weather information at a location.""" + + temperature: Ann[Data[DataDims, float], "Temperature"] + humidity: Ann[Data[DataDims, float], "Humidity"] + wind_speed: Ann[Data[DataDims, float], "Wind speed"] + wind_direction: Ann[Data[DataDims, float], "Wind direction"] + lon: Coordof[Lon] + lat: Coordof[Lat] + time: Coordof[Time] + location: Attr[str] = "Tokyo" + longitude: Attr[float] = 139.69167 + latitude: Attr[float] = 35.68944 + name: Name[str] = "weather" + + +# test functions +def test_temperature() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_data["temperature"] + + assert spec.name == "Temperature" + assert spec.role == "data" + assert spec.dims == ("lon", "lat", "time") + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is None + + +def test_humidity() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_data["humidity"] + + assert spec.name == "Humidity" + assert spec.role == "data" + assert spec.dims == ("lon", "lat", "time") + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is None + + +def test_wind_speed() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_data["wind_speed"] + + assert spec.name == "Wind speed" + assert spec.role == "data" + assert spec.dims == ("lon", "lat", "time") + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is None + + +def test_wind_direction() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_data["wind_direction"] + + assert spec.name == "Wind direction" + assert spec.role == "data" + assert spec.dims == ("lon", "lat", "time") + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is None + + +def test_lon() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_coord["lon"] + + assert spec.name == "Relative longitude" + assert spec.role == "coord" + assert spec.dims == ("lon",) + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is Lon + + +def test_lat() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_coord["lat"] + + assert spec.name == "Relative latitude" + assert spec.role == "coord" + assert spec.dims == ("lat",) + assert spec.dtype == np.dtype("f8") + assert spec.default is MISSING + assert spec.origin is Lat + + +def test_time() -> None: + spec = 
DataSpec.from_dataclass(Weather).specs.of_coord["time"] + + assert spec.name == "Time in UTC" + assert spec.role == "coord" + assert spec.dims == ("time",) + assert spec.dtype == np.dtype("M8[ns]") + assert spec.default is MISSING + assert spec.origin is Time + + +def test_location() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_attr["location"] + + assert spec.name == "location" + assert spec.role == "attr" + assert spec.type is str + assert spec.default == "Tokyo" + + +def test_longitude() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_attr["longitude"] + + assert spec.name == "longitude" + assert spec.role == "attr" + assert spec.type is float + assert spec.default == 139.69167 + + +def test_latitude() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_attr["latitude"] + + assert spec.name == "latitude" + assert spec.role == "attr" + assert spec.type is float + assert spec.default == 35.68944 + + +def test_name() -> None: + spec = DataSpec.from_dataclass(Weather).specs.of_name["name"] + + assert spec.name == "name" + assert spec.role == "name" + assert spec.type is str + assert spec.default == "weather" + + +def test_dataoptions() -> None: + options = DataOptions(xr.DataArray) + + assert DataSpec().options.factory is type(None) + assert DataSpec(options=options).options.factory is xr.DataArray diff --git a/tests/test_typing.py b/tests/test_typing.py index a66619f..06cf3d4 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -14,12 +14,12 @@ from xarray_dataclasses.typing import ( Attr, Coord, Data, - FType, Name, + Role, get_dims, get_dtype, - get_ftype, get_name, + get_role, ) @@ -54,24 +54,6 @@ testdata_dtype = [ (Union[Ann[Data[Any, float], "data"], Ann[Any, "any"]], np.dtype("f8")), ] -testdata_ftype = [ - (Attr[Any], FType.ATTR), - (Data[Any, Any], FType.DATA), - (Coord[Any, Any], FType.COORD), - (Name[Any], FType.NAME), - (Any, FType.OTHER), - (Ann[Attr[Any], "attr"], FType.ATTR), - (Ann[Data[Any, Any], "data"], FType.DATA), - (Ann[Coord[Any, Any], "coord"], FType.COORD), - (Ann[Name[Any], "name"], FType.NAME), - (Ann[Any, "other"], FType.OTHER), - (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], FType.ATTR), - (Union[Ann[Data[Any, Any], "data"], Ann[Any, "any"]], FType.DATA), - (Union[Ann[Coord[Any, Any], "coord"], Ann[Any, "any"]], FType.COORD), - (Union[Ann[Name[Any], "name"], Ann[Any, "any"]], FType.NAME), - (Union[Ann[Any, "other"], Ann[Any, "any"]], FType.OTHER), -] - testdata_name = [ (Attr[Any], None), (Data[Any, Any], None), @@ -90,6 +72,24 @@ testdata_name = [ (Union[Ann[Any, "other"], Ann[Any, "any"]], None), ] +testdata_role = [ + (Attr[Any], Role.ATTR), + (Data[Any, Any], Role.DATA), + (Coord[Any, Any], Role.COORD), + (Name[Any], Role.NAME), + (Any, Role.OTHER), + (Ann[Attr[Any], "attr"], Role.ATTR), + (Ann[Data[Any, Any], "data"], Role.DATA), + (Ann[Coord[Any, Any], "coord"], Role.COORD), + (Ann[Name[Any], "name"], Role.NAME), + (Ann[Any, "other"], Role.OTHER), + (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], Role.ATTR), + (Union[Ann[Data[Any, Any], "data"], Ann[Any, "any"]], Role.DATA), + (Union[Ann[Coord[Any, Any], "coord"], Ann[Any, "any"]], Role.COORD), + (Union[Ann[Name[Any], "name"], Ann[Any, "any"]], Role.NAME), + (Union[Ann[Any, "other"], Ann[Any, "any"]], Role.OTHER), +] + # test functions @mark.parametrize("tp, dims", testdata_dims) @@ -102,11 +102,11 @@ def test_get_dtype(tp: Any, dtype: Any) -> None: assert get_dtype(tp) == dtype [email protected]("tp, ftype", testdata_ftype) -def test_get_ftype(tp: Any, 
ftype: Any) -> None: - assert get_ftype(tp) == ftype - - @mark.parametrize("tp, name", testdata_name) def test_get_name(tp: Any, name: Any) -> None: assert get_name(tp) == name + + [email protected]("tp, role", testdata_role) +def test_get_role(tp: Any, role: Any) -> None: + assert get_role(tp) == role
Add specs module

In the current `DataModel`, type information (e.g. `dims`, `dtype`) and the dataclass values are handled in a single object, which complicates the code. We would therefore like to introduce `DataSpec`, which only contains type information, similar to the one used in [pandas-dataclasses](https://github.com/astropenguin/pandas-dataclasses).
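A minimal sketch of the new module, taken almost verbatim from `tests/test_specs.py` in this record:

```python
# DataSpec extracts type-only information (dims, dtype, defaults)
# from an xarray dataclass, without touching any field values.
from dataclasses import dataclass

from typing_extensions import Literal as L
from xarray_dataclasses.specs import DataSpec
from xarray_dataclasses.typing import Attr, Data, Name


@dataclass
class Lon:
    """Specification of relative longitude."""

    data: Data[L["lon"], float]
    units: Attr[str] = "deg"
    name: Name[str] = "Relative longitude"


spec = DataSpec.from_dataclass(Lon).specs.of_data["data"]
spec.dims   # ('lon',)
spec.dtype  # dtype('float64')

attr = DataSpec.from_dataclass(Lon).specs.of_attr["units"]
attr.default  # 'deg'
```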
0.0
214ab6b1bf1c35bded5af1ff8ad0ad05779edee8
[ "tests/test_specs.py::test_temperature", "tests/test_specs.py::test_humidity", "tests/test_specs.py::test_wind_speed", "tests/test_specs.py::test_wind_direction", "tests/test_specs.py::test_lon", "tests/test_specs.py::test_lat", "tests/test_specs.py::test_time", "tests/test_specs.py::test_location", "tests/test_specs.py::test_longitude", "tests/test_specs.py::test_latitude", "tests/test_specs.py::test_name", "tests/test_specs.py::test_dataoptions", "tests/test_typing.py::test_get_dims[tp0-dims0]", "tests/test_typing.py::test_get_dims[tp1-dims1]", "tests/test_typing.py::test_get_dims[tp2-dims2]", "tests/test_typing.py::test_get_dims[tp3-dims3]", "tests/test_typing.py::test_get_dims[tp4-dims4]", "tests/test_typing.py::test_get_dims[tp5-dims5]", "tests/test_typing.py::test_get_dims[tp6-dims6]", "tests/test_typing.py::test_get_dims[tp7-dims7]", "tests/test_typing.py::test_get_dims[tp8-dims8]", "tests/test_typing.py::test_get_dims[tp9-dims9]", "tests/test_typing.py::test_get_dims[tp10-dims10]", "tests/test_typing.py::test_get_dims[tp11-dims11]", "tests/test_typing.py::test_get_dtype[tp0-None]", "tests/test_typing.py::test_get_dtype[tp1-None]", "tests/test_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_typing.py::test_get_dtype[tp4-None]", "tests/test_typing.py::test_get_dtype[tp5-None]", "tests/test_typing.py::test_get_dtype[tp6-dtype6]", "tests/test_typing.py::test_get_dtype[tp7-dtype7]", "tests/test_typing.py::test_get_dtype[tp8-dtype8]", "tests/test_typing.py::test_get_dtype[tp9-dtype9]", "tests/test_typing.py::test_get_dtype[tp10-dtype10]", "tests/test_typing.py::test_get_dtype[tp11-dtype11]", "tests/test_typing.py::test_get_name[tp0-None]", "tests/test_typing.py::test_get_name[tp1-None]", "tests/test_typing.py::test_get_name[tp2-None]", "tests/test_typing.py::test_get_name[tp3-None]", "tests/test_typing.py::test_get_name[tp4-None]", "tests/test_typing.py::test_get_name[tp5-attr]", "tests/test_typing.py::test_get_name[tp6-data]", "tests/test_typing.py::test_get_name[tp7-coord]", "tests/test_typing.py::test_get_name[tp8-name]", "tests/test_typing.py::test_get_name[tp9-None]", "tests/test_typing.py::test_get_name[tp10-attr]", "tests/test_typing.py::test_get_name[tp11-data]", "tests/test_typing.py::test_get_name[tp12-coord]", "tests/test_typing.py::test_get_name[tp13-name]", "tests/test_typing.py::test_get_name[tp14-None]", "tests/test_typing.py::test_get_role[tp0-Role.ATTR]", "tests/test_typing.py::test_get_role[tp1-Role.DATA]", "tests/test_typing.py::test_get_role[tp2-Role.COORD]", "tests/test_typing.py::test_get_role[tp3-Role.NAME]", "tests/test_typing.py::test_get_role[tp4-Role.OTHER]", "tests/test_typing.py::test_get_role[tp5-Role.ATTR]", "tests/test_typing.py::test_get_role[tp6-Role.DATA]", "tests/test_typing.py::test_get_role[tp7-Role.COORD]", "tests/test_typing.py::test_get_role[tp8-Role.NAME]", "tests/test_typing.py::test_get_role[tp9-Role.OTHER]", "tests/test_typing.py::test_get_role[tp10-Role.ATTR]", "tests/test_typing.py::test_get_role[tp11-Role.DATA]", "tests/test_typing.py::test_get_role[tp12-Role.COORD]", "tests/test_typing.py::test_get_role[tp13-Role.NAME]", "tests/test_typing.py::test_get_role[tp14-Role.OTHER]" ]
[]
{ "failed_lite_validators": [ "has_hyperlinks", "has_added_files", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-06-10 17:19:53+00:00
mit
1,229
astropenguin__xarray-dataclasses-173
diff --git a/xarray_dataclasses/v2/__init__.py b/xarray_dataclasses/v2/__init__.py new file mode 100644 index 0000000..1f6e4f1 --- /dev/null +++ b/xarray_dataclasses/v2/__init__.py @@ -0,0 +1,4 @@ +__all__ = ["typing"] + + +from . import typing diff --git a/xarray_dataclasses/v2/typing.py b/xarray_dataclasses/v2/typing.py new file mode 100644 index 0000000..e4865a3 --- /dev/null +++ b/xarray_dataclasses/v2/typing.py @@ -0,0 +1,216 @@ +__all__ = ["Attr", "Coord", "Coordof", "Data", "Dataof", "Other"] + + +# standard library +from dataclasses import Field +from enum import Enum, auto +from itertools import chain +from typing import ( + Any, + Callable, + Collection, + Dict, + Generic, + Hashable, + Iterable, + Optional, + Tuple, + TypeVar, + Union, +) + + +# dependencies +import numpy as np +import xarray as xr +from typing_extensions import ( + Annotated, + Literal, + ParamSpec, + Protocol, + get_args, + get_origin, + get_type_hints, +) + + +# type hints (private) +P = ParamSpec("P") +T = TypeVar("T") +TDataClass = TypeVar("TDataClass", bound="DataClass[Any]") +TDataArray = TypeVar("TDataArray", bound=xr.DataArray) +TDataset = TypeVar("TDataset", bound=xr.Dataset) +TDims = TypeVar("TDims") +TDType = TypeVar("TDType") +TXarray = TypeVar("TXarray", bound="Xarray") +Xarray = Union[xr.DataArray, xr.Dataset] + + +class DataClass(Protocol[P]): + """Type hint for dataclass objects.""" + + __dataclass_fields__: Dict[str, "Field[Any]"] + + def __init__(self, *args: P.args, **kwargs: P.kwargs) -> None: + ... + + +class XarrayClass(Protocol[P, TXarray]): + """Type hint for dataclass objects with a xarray factory.""" + + __dataclass_fields__: Dict[str, "Field[Any]"] + __xarray_factory__: Callable[..., TXarray] + + def __init__(self, *args: P.args, **kwargs: P.kwargs) -> None: + ... 
+ + +class Dims(Generic[TDims]): + """Empty class for storing type of dimensions.""" + + pass + + +class Role(Enum): + """Annotations for typing dataclass fields.""" + + ATTR = auto() + """Annotation for attribute fields.""" + + COORD = auto() + """Annotation for coordinate fields.""" + + DATA = auto() + """Annotation for data fields.""" + + OTHER = auto() + """Annotation for other fields.""" + + @classmethod + def annotates(cls, tp: Any) -> bool: + """Check if any role annotates a type hint.""" + return any(isinstance(arg, cls) for arg in get_args(tp)) + + +# type hints (public) +Attr = Annotated[T, Role.ATTR] +"""Type hint for attribute fields (``Attr[T]``).""" + +Coord = Annotated[Union[Dims[TDims], Collection[TDType]], Role.COORD] +"""Type hint for coordinate fields (``Coord[TDims, TDType]``).""" + +Coordof = Annotated[TDataClass, Role.COORD] +"""Type hint for coordinate fields (``Dataof[TDataClass]``).""" + +Data = Annotated[Union[Dims[TDims], Collection[TDType]], Role.DATA] +"""Type hint for data fields (``Coord[TDims, TDType]``).""" + +Dataof = Annotated[TDataClass, Role.DATA] +"""Type hint for data fields (``Dataof[TDataClass]``).""" + +Other = Annotated[T, Role.OTHER] +"""Type hint for other fields (``Other[T]``).""" + + +# runtime functions +def deannotate(tp: Any) -> Any: + """Recursively remove annotations in a type hint.""" + + class Temporary: + __annotations__ = dict(tp=tp) + + return get_type_hints(Temporary)["tp"] + + +def find_annotated(tp: Any) -> Iterable[Any]: + """Generate all annotated types in a type hint.""" + args = get_args(tp) + + if get_origin(tp) is Annotated: + yield tp + yield from find_annotated(args[0]) + else: + yield from chain(*map(find_annotated, args)) + + +def get_annotated(tp: Any) -> Any: + """Extract the first role-annotated type.""" + for annotated in filter(Role.annotates, find_annotated(tp)): + return deannotate(annotated) + + raise TypeError("Could not find any role-annotated type.") + + +def get_annotations(tp: Any) -> Tuple[Any, ...]: + """Extract annotations of the first role-annotated type.""" + for annotated in filter(Role.annotates, find_annotated(tp)): + return get_args(annotated)[1:] + + raise TypeError("Could not find any role-annotated type.") + + +def get_dims(tp: Any) -> Optional[Tuple[str, ...]]: + """Extract dimensions if found or return None.""" + try: + dims = get_args(get_args(get_annotated(tp))[0])[0] + except (IndexError, TypeError): + return None + + args = get_args(dims) + origin = get_origin(dims) + + if args == () or args == ((),): + return () + + if origin is Literal: + return (str(args[0]),) + + if not (origin is tuple or origin is Tuple): + raise TypeError(f"Could not find any dims in {tp!r}.") + + if not all(get_origin(arg) is Literal for arg in args): + raise TypeError(f"Could not find any dims in {tp!r}.") + + return tuple(str(get_args(arg)[0]) for arg in args) + + +def get_dtype(tp: Any) -> Optional[str]: + """Extract a data type if found or return None.""" + try: + dtype = get_args(get_args(get_annotated(tp))[1])[0] + except (IndexError, TypeError): + return None + + if dtype is Any or dtype is type(None): + return None + + if get_origin(dtype) is Literal: + dtype = get_args(dtype)[0] + + return np.dtype(dtype).name + + +def get_name(tp: Any, default: Hashable = None) -> Hashable: + """Extract a name if found or return given default.""" + try: + name = get_annotations(tp)[1] + except (IndexError, TypeError): + return default + + if name is Ellipsis: + return default + + try: + hash(name) + except TypeError: + 
raise ValueError("Could not find any valid name.") + + return name + + +def get_role(tp: Any, default: Role = Role.OTHER) -> Role: + """Extract a role if found or return given default.""" + try: + return get_annotations(tp)[0] + except TypeError: + return default
astropenguin/xarray-dataclasses
e1e259f54889071db7820c2c0c753da0dbbaeed3
diff --git a/tests/test_v2_typing.py b/tests/test_v2_typing.py new file mode 100644 index 0000000..e8b177f --- /dev/null +++ b/tests/test_v2_typing.py @@ -0,0 +1,139 @@ +# standard library +from dataclasses import dataclass +from typing import Any, Tuple, Union + + +# dependencies +import numpy as np +from xarray_dataclasses.v2.typing import ( + Attr, + Coord, + Coordof, + Data, + Dataof, + Role, + get_dims, + get_dtype, + get_name, + get_role, +) +from pytest import mark +from typing_extensions import Annotated as Ann, Literal as L + + +# test data +@dataclass +class DataClass: + data: Any + + +testdata_dims = [ + (Coord[Tuple[()], Any], ()), + (Coord[L["x"], Any], ("x",)), + (Coord[Tuple[L["x"]], Any], ("x",)), + (Coord[Tuple[L["x"], L["y"]], Any], ("x", "y")), + (Coordof[DataClass], None), + (Data[Tuple[()], Any], ()), + (Data[L["x"], Any], ("x",)), + (Data[Tuple[L["x"]], Any], ("x",)), + (Data[Tuple[L["x"], L["y"]], Any], ("x", "y")), + (Dataof[DataClass], None), + (Ann[Coord[L["x"], Any], "coord"], ("x",)), + (Ann[Coordof[DataClass], "coord"], None), + (Ann[Data[L["x"], Any], "data"], ("x",)), + (Ann[Dataof[DataClass], "data"], None), + (Union[Ann[Coord[L["x"], Any], "coord"], Ann[Any, "any"]], ("x",)), + (Union[Ann[Coordof[DataClass], "coord"], Ann[Any, "any"]], None), + (Union[Ann[Data[L["x"], Any], "data"], Ann[Any, "any"]], ("x",)), + (Union[Ann[Dataof[DataClass], "data"], Ann[Any, "any"]], None), +] + +testdata_dtype = [ + (Coord[Any, Any], None), + (Coord[Any, None], None), + (Coord[Any, int], np.dtype("i8")), + (Coord[Any, L["i8"]], np.dtype("i8")), + (Coordof[DataClass], None), + (Data[Any, Any], None), + (Data[Any, None], None), + (Data[Any, int], np.dtype("i8")), + (Data[Any, L["i8"]], np.dtype("i8")), + (Dataof[DataClass], None), + (Ann[Coord[Any, float], "coord"], np.dtype("f8")), + (Ann[Coordof[DataClass], "coord"], None), + (Ann[Data[Any, float], "data"], np.dtype("f8")), + (Ann[Dataof[DataClass], "data"], None), + (Union[Ann[Coord[Any, float], "coord"], Ann[Any, "any"]], np.dtype("f8")), + (Union[Ann[Coordof[DataClass], "coord"], Ann[Any, "any"]], None), + (Union[Ann[Data[Any, float], "data"], Ann[Any, "any"]], np.dtype("f8")), + (Union[Ann[Dataof[DataClass], "data"], Ann[Any, "any"]], None), +] + +testdata_name = [ + (Attr[Any], None), + (Coord[Any, Any], None), + (Coordof[DataClass], None), + (Data[Any, Any], None), + (Dataof[DataClass], None), + (Any, None), + (Ann[Attr[Any], "attr"], "attr"), + (Ann[Coord[Any, Any], "coord"], "coord"), + (Ann[Coordof[DataClass], "coord"], "coord"), + (Ann[Data[Any, Any], "data"], "data"), + (Ann[Dataof[DataClass], "data"], "data"), + (Ann[Any, "other"], None), + (Ann[Attr[Any], ..., "attr"], None), + (Ann[Coord[Any, Any], ..., "coord"], None), + (Ann[Coordof[DataClass], ..., "coord"], None), + (Ann[Data[Any, Any], ..., "data"], None), + (Ann[Dataof[DataClass], ..., "data"], None), + (Ann[Any, ..., "other"], None), + (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], "attr"), + (Union[Ann[Coord[Any, Any], "coord"], Ann[Any, "any"]], "coord"), + (Union[Ann[Coordof[DataClass], "coord"], Ann[Any, "any"]], "coord"), + (Union[Ann[Data[Any, Any], "data"], Ann[Any, "any"]], "data"), + (Union[Ann[Dataof[DataClass], "data"], Ann[Any, "any"]], "data"), + (Union[Ann[Any, "other"], Ann[Any, "any"]], None), +] + +testdata_role = [ + (Attr[Any], Role.ATTR), + (Coord[Any, Any], Role.COORD), + (Coordof[DataClass], Role.COORD), + (Data[Any, Any], Role.DATA), + (Dataof[DataClass], Role.DATA), + (Any, Role.OTHER), + (Ann[Attr[Any], "attr"], Role.ATTR), + 
(Ann[Coord[Any, Any], "coord"], Role.COORD), + (Ann[Coordof[DataClass], "coord"], Role.COORD), + (Ann[Data[Any, Any], "data"], Role.DATA), + (Ann[Dataof[DataClass], "data"], Role.DATA), + (Ann[Any, "other"], Role.OTHER), + (Union[Ann[Attr[Any], "attr"], Ann[Any, "any"]], Role.ATTR), + (Union[Ann[Coord[Any, Any], "coord"], Ann[Any, "any"]], Role.COORD), + (Union[Ann[Coordof[DataClass], "coord"], Ann[Any, "any"]], Role.COORD), + (Union[Ann[Data[Any, Any], "data"], Ann[Any, "any"]], Role.DATA), + (Union[Ann[Dataof[DataClass], "data"], Ann[Any, "any"]], Role.DATA), + (Union[Ann[Any, "other"], Ann[Any, "any"]], Role.OTHER), +] + + +# test functions [email protected]("tp, dims", testdata_dims) +def test_get_dims(tp: Any, dims: Any) -> None: + assert get_dims(tp) == dims + + [email protected]("tp, dtype", testdata_dtype) +def test_get_dtype(tp: Any, dtype: Any) -> None: + assert get_dtype(tp) == dtype + + [email protected]("tp, name", testdata_name) +def test_get_name(tp: Any, name: Any) -> None: + assert get_name(tp) == name + + [email protected]("tp, role", testdata_role) +def test_get_role(tp: Any, role: Any) -> None: + assert get_role(tp) is role
Add v2 typing module Add a new typing module for v2 based on that of pandas-dataclasses (v0.9.0). Note that it will be included in the current package, but not used until the v2 release.
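The tests above exercise introspection helpers such as `get_dims` on `Literal`-based hints. As a minimal sketch only (not the package's actual implementation), dimension names encoded in the first type parameter, e.g. `Tuple[L["x"], L["y"]]`, can be recovered with standard typing introspection; `dims_of` is a hypothetical helper, and the zero-dimensional `Tuple[()]` case plus `Annotated`/`Union` unwrapping are omitted for brevity:

```python
from typing import Literal, Tuple, get_args, get_origin


def dims_of(tp):
    """Return dimension names encoded as Literal types in ``tp``."""
    if get_origin(tp) is tuple:  # e.g. Tuple[L["x"], L["y"]]
        return tuple(get_args(lit)[0] for lit in get_args(tp))
    return (get_args(tp)[0],)  # single literal, e.g. L["x"]


print(dims_of(Literal["x"]))                       # ('x',)
print(dims_of(Tuple[Literal["x"], Literal["y"]]))  # ('x', 'y')
```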
0.0
e1e259f54889071db7820c2c0c753da0dbbaeed3
[ "tests/test_v2_typing.py::test_get_dims[tp0-dims0]", "tests/test_v2_typing.py::test_get_dims[tp1-dims1]", "tests/test_v2_typing.py::test_get_dims[tp2-dims2]", "tests/test_v2_typing.py::test_get_dims[tp3-dims3]", "tests/test_v2_typing.py::test_get_dims[tp4-None]", "tests/test_v2_typing.py::test_get_dims[tp5-dims5]", "tests/test_v2_typing.py::test_get_dims[tp6-dims6]", "tests/test_v2_typing.py::test_get_dims[tp7-dims7]", "tests/test_v2_typing.py::test_get_dims[tp8-dims8]", "tests/test_v2_typing.py::test_get_dims[tp9-None]", "tests/test_v2_typing.py::test_get_dims[tp10-dims10]", "tests/test_v2_typing.py::test_get_dims[tp11-None]", "tests/test_v2_typing.py::test_get_dims[tp12-dims12]", "tests/test_v2_typing.py::test_get_dims[tp13-None]", "tests/test_v2_typing.py::test_get_dims[tp14-dims14]", "tests/test_v2_typing.py::test_get_dims[tp15-None]", "tests/test_v2_typing.py::test_get_dims[tp16-dims16]", "tests/test_v2_typing.py::test_get_dims[tp17-None]", "tests/test_v2_typing.py::test_get_dtype[tp0-None]", "tests/test_v2_typing.py::test_get_dtype[tp1-None]", "tests/test_v2_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_v2_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_v2_typing.py::test_get_dtype[tp4-None]", "tests/test_v2_typing.py::test_get_dtype[tp5-None]", "tests/test_v2_typing.py::test_get_dtype[tp6-None]", "tests/test_v2_typing.py::test_get_dtype[tp7-dtype7]", "tests/test_v2_typing.py::test_get_dtype[tp8-dtype8]", "tests/test_v2_typing.py::test_get_dtype[tp9-None]", "tests/test_v2_typing.py::test_get_dtype[tp10-dtype10]", "tests/test_v2_typing.py::test_get_dtype[tp11-None]", "tests/test_v2_typing.py::test_get_dtype[tp12-dtype12]", "tests/test_v2_typing.py::test_get_dtype[tp13-None]", "tests/test_v2_typing.py::test_get_dtype[tp14-dtype14]", "tests/test_v2_typing.py::test_get_dtype[tp15-None]", "tests/test_v2_typing.py::test_get_dtype[tp16-dtype16]", "tests/test_v2_typing.py::test_get_dtype[tp17-None]", "tests/test_v2_typing.py::test_get_name[tp0-None]", "tests/test_v2_typing.py::test_get_name[tp1-None]", "tests/test_v2_typing.py::test_get_name[tp2-None]", "tests/test_v2_typing.py::test_get_name[tp3-None]", "tests/test_v2_typing.py::test_get_name[tp4-None]", "tests/test_v2_typing.py::test_get_name[tp5-None]", "tests/test_v2_typing.py::test_get_name[tp6-attr]", "tests/test_v2_typing.py::test_get_name[tp7-coord]", "tests/test_v2_typing.py::test_get_name[tp8-coord]", "tests/test_v2_typing.py::test_get_name[tp9-data]", "tests/test_v2_typing.py::test_get_name[tp10-data]", "tests/test_v2_typing.py::test_get_name[tp11-None]", "tests/test_v2_typing.py::test_get_name[tp12-None]", "tests/test_v2_typing.py::test_get_name[tp13-None]", "tests/test_v2_typing.py::test_get_name[tp14-None]", "tests/test_v2_typing.py::test_get_name[tp15-None]", "tests/test_v2_typing.py::test_get_name[tp16-None]", "tests/test_v2_typing.py::test_get_name[tp17-None]", "tests/test_v2_typing.py::test_get_name[tp18-attr]", "tests/test_v2_typing.py::test_get_name[tp19-coord]", "tests/test_v2_typing.py::test_get_name[tp20-coord]", "tests/test_v2_typing.py::test_get_name[tp21-data]", "tests/test_v2_typing.py::test_get_name[tp22-data]", "tests/test_v2_typing.py::test_get_name[tp23-None]", "tests/test_v2_typing.py::test_get_role[tp0-Role.ATTR]", "tests/test_v2_typing.py::test_get_role[tp1-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp2-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp3-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp4-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp5-Role.OTHER]", 
"tests/test_v2_typing.py::test_get_role[tp6-Role.ATTR]", "tests/test_v2_typing.py::test_get_role[tp7-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp8-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp9-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp10-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp11-Role.OTHER]", "tests/test_v2_typing.py::test_get_role[tp12-Role.ATTR]", "tests/test_v2_typing.py::test_get_role[tp13-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp14-Role.COORD]", "tests/test_v2_typing.py::test_get_role[tp15-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp16-Role.DATA]", "tests/test_v2_typing.py::test_get_role[tp17-Role.OTHER]" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_added_files" ], "has_test_patch": true, "is_lite": false }
2022-10-27 16:42:33+00:00
mit
1,230
astropenguin__xarray-dataclasses-203
diff --git a/xarray_dataclasses/datamodel.py b/xarray_dataclasses/datamodel.py index 06fb0e0..5d8b7d1 100644 --- a/xarray_dataclasses/datamodel.py +++ b/xarray_dataclasses/datamodel.py @@ -4,13 +4,24 @@ __all__ = ["DataModel"] # standard library from dataclasses import dataclass, field, is_dataclass -from typing import Any, Dict, Hashable, List, Optional, Tuple, Type, Union, cast +from typing import ( + Any, + Dict, + Hashable, + List, + Literal, + Optional, + Tuple, + Type, + Union, + cast, +) # dependencies import numpy as np import xarray as xr -from typing_extensions import Literal, ParamSpec, get_type_hints +from typing_extensions import ParamSpec, get_type_hints # submodules diff --git a/xarray_dataclasses/typing.py b/xarray_dataclasses/typing.py index 4d21d28..5f16154 100644 --- a/xarray_dataclasses/typing.py +++ b/xarray_dataclasses/typing.py @@ -29,7 +29,9 @@ from typing import ( Generic, Hashable, Iterable, + Literal, Optional, + Protocol, Sequence, Tuple, Type, @@ -43,9 +45,7 @@ import numpy as np import xarray as xr from typing_extensions import ( Annotated, - Literal, ParamSpec, - Protocol, TypeAlias, get_args, get_origin,
astropenguin/xarray-dataclasses
102c962a7dfb1651d66b3156348dff1edcd3f21e
diff --git a/tests/test_data.py b/tests/test_data.py index 35ddf07..1d9c5c9 100644 --- a/tests/test_data.py +++ b/tests/test_data.py @@ -1,11 +1,11 @@ # standard library from dataclasses import dataclass -from typing import Collection, Tuple, Union +from typing import Collection, Literal as L, Tuple, Union # dependencies import numpy as np -from typing_extensions import Annotated as Ann, Literal as L +from typing_extensions import Annotated as Ann from xarray_dataclasses.typing import Attr, Coord, Coordof, Data diff --git a/tests/test_dataarray.py b/tests/test_dataarray.py index 23d3860..fa23ee7 100644 --- a/tests/test_dataarray.py +++ b/tests/test_dataarray.py @@ -1,12 +1,11 @@ # standard library from dataclasses import dataclass -from typing import Tuple +from typing import Literal, Tuple # dependencies import numpy as np import xarray as xr -from typing_extensions import Literal # submodules diff --git a/tests/test_datamodel.py b/tests/test_datamodel.py index a79a412..a95b1b0 100644 --- a/tests/test_datamodel.py +++ b/tests/test_datamodel.py @@ -1,10 +1,6 @@ # standard library from dataclasses import dataclass -from typing import Tuple - - -# dependencies -from typing_extensions import Literal +from typing import Literal, Tuple # submodules diff --git a/tests/test_dataset.py b/tests/test_dataset.py index 41e4470..7b3b66a 100644 --- a/tests/test_dataset.py +++ b/tests/test_dataset.py @@ -1,12 +1,11 @@ # standard library from dataclasses import dataclass -from typing import Tuple +from typing import Literal, Tuple # dependencies import numpy as np import xarray as xr -from typing_extensions import Literal # submodules diff --git a/tests/test_typing.py b/tests/test_typing.py index 06cf3d4..92de2e4 100644 --- a/tests/test_typing.py +++ b/tests/test_typing.py @@ -1,12 +1,11 @@ # standard library -from typing import Any, Tuple, Union +from typing import Any, Literal as L, Tuple, Union # dependencies import numpy as np from pytest import mark from typing_extensions import Annotated as Ann -from typing_extensions import Literal as L # submodules
Fix import of Literal and Protocol
Import `Literal` (and `Protocol`) from `typing` rather than from `typing_extensions`. Using `typing_extensions.Literal` causes `get_dims(tp)` to fail in Python 3.8 and 3.9.
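A plausible minimal illustration of the mechanism (an assumption inferred from the description, not stated in the issue): recent `typing_extensions` releases ship their own `Literal` object on Python 3.8/3.9 to backport bug fixes, so a hint built from it need not compare identical to `typing.Literal`, which breaks identity-based dispatch:

```python
import typing
import typing_extensions

hint_std = typing.Literal["x"]
hint_ext = typing_extensions.Literal["x"]

# On Python 3.8/3.9 with typing_extensions 4.x the two origins can differ,
# defeating checks of the form `get_origin(tp) is typing.Literal`:
print(typing.get_origin(hint_std))  # typing.Literal
print(typing.get_origin(hint_ext))  # possibly typing_extensions.Literal
```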
0.0
102c962a7dfb1651d66b3156348dff1edcd3f21e
[ "tests/test_dataarray.py::test_type", "tests/test_dataarray.py::test_data", "tests/test_dataarray.py::test_dtype", "tests/test_dataarray.py::test_dims", "tests/test_dataarray.py::test_attrs", "tests/test_dataarray.py::test_name", "tests/test_datamodel.py::test_xaxis_attr", "tests/test_datamodel.py::test_xaxis_data", "tests/test_datamodel.py::test_yaxis_attr", "tests/test_datamodel.py::test_yaxis_data", "tests/test_datamodel.py::test_image_coord", "tests/test_datamodel.py::test_image_data", "tests/test_datamodel.py::test_color_data", "tests/test_dataset.py::test_type", "tests/test_dataset.py::test_data_vars", "tests/test_dataset.py::test_dims", "tests/test_dataset.py::test_attrs", "tests/test_typing.py::test_get_dims[tp0-dims0]", "tests/test_typing.py::test_get_dims[tp1-dims1]", "tests/test_typing.py::test_get_dims[tp2-dims2]", "tests/test_typing.py::test_get_dims[tp3-dims3]", "tests/test_typing.py::test_get_dims[tp4-dims4]", "tests/test_typing.py::test_get_dims[tp5-dims5]", "tests/test_typing.py::test_get_dims[tp6-dims6]", "tests/test_typing.py::test_get_dims[tp7-dims7]", "tests/test_typing.py::test_get_dims[tp8-dims8]", "tests/test_typing.py::test_get_dims[tp9-dims9]", "tests/test_typing.py::test_get_dims[tp10-dims10]", "tests/test_typing.py::test_get_dims[tp11-dims11]", "tests/test_typing.py::test_get_dtype[tp0-None]", "tests/test_typing.py::test_get_dtype[tp1-None]", "tests/test_typing.py::test_get_dtype[tp2-dtype2]", "tests/test_typing.py::test_get_dtype[tp3-dtype3]", "tests/test_typing.py::test_get_dtype[tp4-None]", "tests/test_typing.py::test_get_dtype[tp5-None]", "tests/test_typing.py::test_get_dtype[tp6-dtype6]", "tests/test_typing.py::test_get_dtype[tp7-dtype7]", "tests/test_typing.py::test_get_dtype[tp8-dtype8]", "tests/test_typing.py::test_get_dtype[tp9-dtype9]", "tests/test_typing.py::test_get_dtype[tp10-dtype10]", "tests/test_typing.py::test_get_dtype[tp11-dtype11]", "tests/test_typing.py::test_get_name[tp0-None]", "tests/test_typing.py::test_get_name[tp1-None]", "tests/test_typing.py::test_get_name[tp2-None]", "tests/test_typing.py::test_get_name[tp3-None]", "tests/test_typing.py::test_get_name[tp4-None]", "tests/test_typing.py::test_get_name[tp5-attr]", "tests/test_typing.py::test_get_name[tp6-data]", "tests/test_typing.py::test_get_name[tp7-coord]", "tests/test_typing.py::test_get_name[tp8-name]", "tests/test_typing.py::test_get_name[tp9-None]", "tests/test_typing.py::test_get_name[tp10-attr]", "tests/test_typing.py::test_get_name[tp11-data]", "tests/test_typing.py::test_get_name[tp12-coord]", "tests/test_typing.py::test_get_name[tp13-name]", "tests/test_typing.py::test_get_name[tp14-None]", "tests/test_typing.py::test_get_role[tp0-Role.ATTR]", "tests/test_typing.py::test_get_role[tp1-Role.DATA]", "tests/test_typing.py::test_get_role[tp2-Role.COORD]", "tests/test_typing.py::test_get_role[tp3-Role.NAME]", "tests/test_typing.py::test_get_role[tp4-Role.OTHER]", "tests/test_typing.py::test_get_role[tp5-Role.ATTR]", "tests/test_typing.py::test_get_role[tp6-Role.DATA]", "tests/test_typing.py::test_get_role[tp7-Role.COORD]", "tests/test_typing.py::test_get_role[tp8-Role.NAME]", "tests/test_typing.py::test_get_role[tp9-Role.OTHER]", "tests/test_typing.py::test_get_role[tp10-Role.ATTR]", "tests/test_typing.py::test_get_role[tp11-Role.DATA]", "tests/test_typing.py::test_get_role[tp12-Role.COORD]", "tests/test_typing.py::test_get_role[tp13-Role.NAME]", "tests/test_typing.py::test_get_role[tp14-Role.OTHER]" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2023-06-08 16:15:20+00:00
mit
1,231
astropy__astroquery-2160
diff --git a/astroquery/utils/commons.py b/astroquery/utils/commons.py index cba49f2a..de1e3a02 100644 --- a/astroquery/utils/commons.py +++ b/astroquery/utils/commons.py @@ -16,7 +16,7 @@ import six import astropy.units as u from astropy import coordinates as coord from collections import OrderedDict -from astropy.utils import minversion +from astropy.utils import deprecated, minversion import astropy.utils.data as aud from astropy.io import fits, votable @@ -60,6 +60,7 @@ ASTROPY_LT_4_1 = not minversion('astropy', '4.1') ASTROPY_LT_4_3 = not minversion('astropy', '4.3') +@deprecated('0.4.4', alternative='astroquery.query.BaseQuery._request') def send_request(url, data, timeout, request_type='POST', headers={}, **kwargs): """
astropy/astroquery
241d896bede51d68dc48d02f3569d532178492ef
diff --git a/astroquery/utils/tests/test_utils.py b/astroquery/utils/tests/test_utils.py index 5ac3ea41..3e4d78c8 100644 --- a/astroquery/utils/tests/test_utils.py +++ b/astroquery/utils/tests/test_utils.py @@ -15,6 +15,7 @@ import astropy.io.votable as votable import astropy.units as u from astropy.table import Table import astropy.utils.data as aud +from astropy.utils.exceptions import AstropyDeprecationWarning from ...utils import chunk_read, chunk_report, class_or_instance, commons from ...utils.process_asyncs import async_to_sync_docstr, async_to_sync @@ -96,8 +97,9 @@ def test_send_request_post(monkeypatch): status_code=status_code) monkeypatch.setattr(requests, 'post', mock_post) - response = commons.send_request('https://github.com/astropy/astroquery', - data=dict(msg='ok'), timeout=30) + with pytest.warns(AstropyDeprecationWarning): + response = commons.send_request('https://github.com/astropy/astroquery', + data=dict(msg='ok'), timeout=30) assert response.url == 'https://github.com/astropy/astroquery' assert response.data == dict(msg='ok') assert 'astroquery' in response.headers['User-Agent'] @@ -112,8 +114,9 @@ def test_send_request_get(monkeypatch): req.raise_for_status = lambda: None return req monkeypatch.setattr(requests, 'get', mock_get) - response = commons.send_request('https://github.com/astropy/astroquery', - dict(a='b'), 60, request_type='GET') + with pytest.warns(AstropyDeprecationWarning): + response = commons.send_request('https://github.com/astropy/astroquery', + dict(a='b'), 60, request_type='GET') assert response.url == 'https://github.com/astropy/astroquery?a=b' @@ -125,15 +128,18 @@ def test_quantity_timeout(monkeypatch): req.raise_for_status = lambda: None return req monkeypatch.setattr(requests, 'get', mock_get) - response = commons.send_request('https://github.com/astropy/astroquery', - dict(a='b'), 1 * u.min, request_type='GET') + with pytest.warns(AstropyDeprecationWarning): + response = commons.send_request('https://github.com/astropy/astroquery', + dict(a='b'), 1 * u.min, + request_type='GET') assert response.url == 'https://github.com/astropy/astroquery?a=b' def test_send_request_err(): with pytest.raises(ValueError): - commons.send_request('https://github.com/astropy/astroquery', - dict(a='b'), 60, request_type='PUT') + with pytest.warns(AstropyDeprecationWarning): + commons.send_request('https://github.com/astropy/astroquery', + dict(a='b'), 60, request_type='PUT') col_1 = [1, 2, 3]
Deprecate commons.send_request
Once the last usage of ``commons.send_request`` is removed (see #824), it should be deprecated in favour of ``BaseQuery._request``.
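For reference, a small sketch of how the `astropy.utils.deprecated` decorator applied in the patch above behaves at the call site. The decorator and `AstropyDeprecationWarning` are real astropy APIs; the toy function body here is only a stand-in for the real helper:

```python
import warnings

from astropy.utils import deprecated


@deprecated('0.4.4', alternative='astroquery.query.BaseQuery._request')
def send_request(url, data, timeout):
    # Toy body standing in for the real helper.
    return (url, data, timeout)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    send_request('https://example.org', {}, 30)

print(caught[0].category.__name__)  # AstropyDeprecationWarning
```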
0.0
241d896bede51d68dc48d02f3569d532178492ef
[ "astroquery/utils/tests/test_utils.py::test_send_request_post", "astroquery/utils/tests/test_utils.py::test_send_request_get", "astroquery/utils/tests/test_utils.py::test_quantity_timeout", "astroquery/utils/tests/test_utils.py::test_send_request_err" ]
[ "astroquery/utils/tests/test_utils.py::test_class_or_instance", "astroquery/utils/tests/test_utils.py::test_parse_coordinates_1[coordinates0]", "astroquery/utils/tests/test_utils.py::test_parse_coordinates_3", "astroquery/utils/tests/test_utils.py::test_TableDict", "astroquery/utils/tests/test_utils.py::test_TableDict_print_table_list", "astroquery/utils/tests/test_utils.py::test_suppress_vo_warnings", "astroquery/utils/tests/test_utils.py::test_process_async_docs", "astroquery/utils/tests/test_utils.py::test_async_to_sync", "astroquery/utils/tests/test_utils.py::test_return_chomper", "astroquery/utils/tests/test_utils.py::test_prepend_docstr[dummyfunc1-\\n", "astroquery/utils/tests/test_utils.py::test_prepend_docstr[dummyfunc2-\\n", "astroquery/utils/tests/test_utils.py::test_payload_return", "astroquery/utils/tests/test_utils.py::test_filecontainer_save", "astroquery/utils/tests/test_utils.py::test_filecontainer_get", "astroquery/utils/tests/test_utils.py::test_is_coordinate[5h0m0s", "astroquery/utils/tests/test_utils.py::test_is_coordinate[m1-False]", "astroquery/utils/tests/test_utils.py::test_radius_to_unit[radius0]", "astroquery/utils/tests/test_utils.py::test_radius_to_unit[0.01", "astroquery/utils/tests/test_utils.py::test_radius_to_unit[radius2]" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_issue_reference" ], "has_test_patch": true, "is_lite": false }
2021-09-28 14:41:36+00:00
bsd-3-clause
1,232
astropy__astroquery-2318
diff --git a/CHANGES.rst b/CHANGES.rst index c82d3a62..d834e8b4 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -58,6 +58,7 @@ sdss - Fix ``query_crossid`` to be able to query larger list of coordinates. [#2305] +- Fix ``query_crossid`` for very old data releases (< DR10). [#2318] Infrastructure, Utility and Other Changes and Additions diff --git a/astroquery/casda/core.py b/astroquery/casda/core.py index e021369e..223f9fcf 100644 --- a/astroquery/casda/core.py +++ b/astroquery/casda/core.py @@ -241,7 +241,7 @@ class CasdaClass(BaseQuery): filenames = [] for url in urls: parseResult = urlparse(url) - local_filename = os.path.basename(parseResult.path) + local_filename = unquote(os.path.basename(parseResult.path)) if os.name == 'nt': # Windows doesn't allow special characters in filenames like # ":" so replace them with an underscore diff --git a/astroquery/sdss/core.py b/astroquery/sdss/core.py index 293a7ae4..975e919e 100644 --- a/astroquery/sdss/core.py +++ b/astroquery/sdss/core.py @@ -32,7 +32,8 @@ class SDSSClass(BaseQuery): QUERY_URL_SUFFIX_DR_OLD = '/dr{dr}/en/tools/search/x_sql.asp' QUERY_URL_SUFFIX_DR_10 = '/dr{dr}/en/tools/search/x_sql.aspx' QUERY_URL_SUFFIX_DR_NEW = '/dr{dr}/en/tools/search/x_results.aspx' - XID_URL_SUFFIX_OLD = '/dr{dr}/en/tools/crossid/x_crossid.aspx' + XID_URL_SUFFIX_OLD = '/dr{dr}/en/tools/crossid/x_crossid.asp' + XID_URL_SUFFIX_DR_10 = '/dr{dr}/en/tools/crossid/x_crossid.aspx' XID_URL_SUFFIX_NEW = '/dr{dr}/en/tools/search/X_Results.aspx' IMAGING_URL_SUFFIX = ('{base}/dr{dr}/{instrument}/photoObj/frames/' '{rerun}/{run}/{camcol}/' @@ -123,7 +124,7 @@ class SDSSClass(BaseQuery): raise TypeError("radius should be either Quantity or " "convertible to float.") - sql_query = 'SELECT ' + sql_query = 'SELECT\r\n' # Older versions expect the CRLF to be there. if specobj_fields is None: if photoobj_fields is None: @@ -1078,8 +1079,10 @@ class SDSSClass(BaseQuery): return url def _get_crossid_url(self, data_release): - if data_release < 11: + if data_release < 10: suffix = self.XID_URL_SUFFIX_OLD + elif data_release == 10: + suffix = self.XID_URL_SUFFIX_DR_10 else: suffix = self.XID_URL_SUFFIX_NEW
astropy/astroquery
525837073d8fecda0610a2cc7c38dc6e6356e76b
diff --git a/astroquery/casda/tests/test_casda.py b/astroquery/casda/tests/test_casda.py index a868ed9b..fe74075f 100644 --- a/astroquery/casda/tests/test_casda.py +++ b/astroquery/casda/tests/test_casda.py @@ -284,7 +284,8 @@ def test_stage_data(patch_get): def test_download_file(patch_get): urls = ['https://ingest.pawsey.org/bucket_name/path/askap_img.fits?security=stuff', - 'http://casda.csiro.au/download/web/111-000-111-000/askap_img.fits.checksum'] + 'http://casda.csiro.au/download/web/111-000-111-000/askap_img.fits.checksum', + 'https://ingest.pawsey.org.au/casda-prd-as110-01/dc52217/primary_images/RACS-DR1_0000%2B18A.fits?security=stuff'] casda = Casda('user', 'password') # skip the actual downloading of the file @@ -294,3 +295,4 @@ def test_download_file(patch_get): filenames = casda.download_files(urls) assert filenames[0].endswith('askap_img.fits') assert filenames[1].endswith('askap_img.fits.checksum') + assert filenames[2].endswith('RACS-DR1_0000+18A.fits') diff --git a/astroquery/sdss/tests/test_sdss.py b/astroquery/sdss/tests/test_sdss.py index bb92c01b..38b7d1d4 100644 --- a/astroquery/sdss/tests/test_sdss.py +++ b/astroquery/sdss/tests/test_sdss.py @@ -136,7 +136,9 @@ def url_tester(data_release): def url_tester_crossid(data_release): - if data_release < 11: + if data_release < 10: + baseurl = 'http://skyserver.sdss.org/dr{}/en/tools/crossid/x_crossid.asp' + if data_release == 10: baseurl = 'http://skyserver.sdss.org/dr{}/en/tools/crossid/x_crossid.aspx' if data_release == 11: return diff --git a/astroquery/sdss/tests/test_sdss_remote.py b/astroquery/sdss/tests/test_sdss_remote.py index 723195f2..26251eaf 100644 --- a/astroquery/sdss/tests/test_sdss_remote.py +++ b/astroquery/sdss/tests/test_sdss_remote.py @@ -174,8 +174,7 @@ class TestSDSSRemote: assert query1.colnames == ['r', 'psfMag_r'] assert query2.colnames == ['ra', 'dec', 'r'] - # crossid doesn't work for DR<10, remove limitation once #2303 is fixed - @pytest.mark.parametrize("dr", dr_list[2:]) + @pytest.mark.parametrize("dr", dr_list) def test_query_crossid(self, dr): query1 = sdss.SDSS.query_crossid(self.coords, data_release=dr) query2 = sdss.SDSS.query_crossid([self.coords, self.coords]) @@ -185,8 +184,7 @@ class TestSDSSRemote: assert isinstance(query2, Table) assert query2['objID'][0] == query1['objID'][0] == query2['objID'][1] - # crossid doesn't work for DR<10, remove limitation once #2303 is fixed - @pytest.mark.parametrize("dr", dr_list[2:]) + @pytest.mark.parametrize("dr", dr_list) def test_spectro_query_crossid(self, dr): query1 = sdss.SDSS.query_crossid(self.coords, specobj_fields=['specObjID', 'z'],
SDSS.query_crossid doesn't work for DR<10
Run into this with the test suite, didn't investigate in detail. It likely just needs the URLs fixed. Could you maybe have a look @weaverba137?
```
In [1]: from astroquery import sdss

In [2]: from astropy.coordinates import SkyCoord
   ...: from astropy.table import Table

In [3]: coords = SkyCoord('0h8m05.63s +14d50m23.3s')

In [4]: sdss.SDSS.query_crossid(coords,data_release=9)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-6418e02a3b0b> in <module>
----> 1 sdss.SDSS.query_crossid(coords,data_release=9)

~/munka/devel/worktrees/astroquery/testings/astroquery/utils/class_or_instance.py in f(*args, **kwds)
     23         def f(*args, **kwds):
     24             if obj is not None:
---> 25                 return self.fn(obj, *args, **kwds)
     26             else:
     27                 return self.fn(cls, *args, **kwds)

~/munka/devel/worktrees/astroquery/testings/astroquery/utils/process_asyncs.py in newmethod(self, *args, **kwargs)
     27         if kwargs.get('get_query_payload') or kwargs.get('field_help'):
     28             return response
---> 29         result = self._parse_result(response, verbose=verbose)
     30         self.table = result
     31         return result

~/munka/devel/worktrees/astroquery/testings/astroquery/sdss/core.py in _parse_result(self, response, verbose)
    871                                           names=True, dtype=None,
    872                                           delimiter=',', skip_header=skip_header,
--> 873                                           comments='#'))
    874
    875         if len(arr) == 0:

~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/numpy/lib/npyio.py in genfromtxt(fname, dtype, comments, delimiter, skip_header, skip_footer, converters, missing_values, filling_values, usecols, names, excludelist, deletechars, replace_space, autostrip, case_sensitive, defaultfmt, unpack, usemask, loose, invalid_raise, max_rows, encoding)
   2078             # Raise an exception ?
   2079             if invalid_raise:
-> 2080                 raise ValueError(errmsg)
   2081             # Issue a warning ?
   2082             else:

ValueError: Some errors were detected !
    Line #12 (got 3 columns instead of 1)
    Line #32 (got 5 columns instead of 1)
    Line #34 (got 3 columns instead of 1)
```
Response:
```
Server Error in '/dr9' Application.
The resource cannot be found.

Description: HTTP 404. The resource you are looking for (or one of its
dependencies) could have been removed, had its name changed, or is
temporarily unavailable. Please review the following URL and make sure that
it is spelled correctly.

Requested URL: /dr9/en/tools/crossid/x_crossid.aspx
```
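The fix in the patch above keys the cross-ID endpoint off the data release. A minimal sketch of that selection logic, with the URL suffixes copied from the patch; `crossid_suffix` is only an illustrative stand-in for the private `_get_crossid_url` method:

```python
XID_URL_SUFFIX_OLD = '/dr{dr}/en/tools/crossid/x_crossid.asp'
XID_URL_SUFFIX_DR_10 = '/dr{dr}/en/tools/crossid/x_crossid.aspx'
XID_URL_SUFFIX_NEW = '/dr{dr}/en/tools/search/X_Results.aspx'


def crossid_suffix(data_release: int) -> str:
    # Very old releases use the .asp endpoint, DR10 uses .aspx,
    # and newer releases use the X_Results page.
    if data_release < 10:
        return XID_URL_SUFFIX_OLD.format(dr=data_release)
    if data_release == 10:
        return XID_URL_SUFFIX_DR_10.format(dr=data_release)
    return XID_URL_SUFFIX_NEW.format(dr=data_release)


print(crossid_suffix(9))   # /dr9/en/tools/crossid/x_crossid.asp
print(crossid_suffix(12))  # /dr12/en/tools/search/X_Results.aspx
```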
0.0
525837073d8fecda0610a2cc7c38dc6e6356e76b
[ "astroquery/casda/tests/test_casda.py::test_download_file", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[1]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[2]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[3]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[4]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[5]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[6]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[7]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[8]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[9]" ]
[ "astroquery/casda/tests/test_casda.py::test_query_region_text_radius", "astroquery/casda/tests/test_casda.py::test_query_region_radius", "astroquery/casda/tests/test_casda.py::test_query_region_async_radius", "astroquery/casda/tests/test_casda.py::test_query_region_box", "astroquery/casda/tests/test_casda.py::test_query_region_async_box", "astroquery/casda/tests/test_casda.py::test_filter_out_unreleased", "astroquery/casda/tests/test_casda.py::test_stage_data_unauthorised", "astroquery/casda/tests/test_casda.py::test_stage_data_empty", "astroquery/casda/tests/test_casda.py::test_stage_data_invalid_credentials", "astroquery/casda/tests/test_casda.py::test_stage_data_no_link", "astroquery/casda/tests/test_casda.py::test_stage_data", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[11]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[1]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_template", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[16]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[2]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[4]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[5]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[7]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[9]", 
"astroquery/sdss/tests/test_sdss.py::test_list_coordinates[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[11]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[12]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[14]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[4]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[6]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[16]", "astroquery/sdss/tests/test_sdss.py::test_query_timeout", "astroquery/sdss/tests/test_sdss.py::test_spectra_timeout", "astroquery/sdss/tests/test_sdss.py::test_images_timeout", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[10]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[11]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[12]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[13]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[14]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[15]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[16]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[3]", 
"astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_field_help_region" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-03-02 22:26:07+00:00
bsd-3-clause
1,233
astropy__astroquery-2444
diff --git a/CHANGES.rst b/CHANGES.rst index e72116da..aa439514 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -62,6 +62,12 @@ oac - Fix bug in parsing events that contain html tags (e.g. in their alias field). [#2423] +svo_fps +^^^^^^^ + +- The wavelength limits in ``get_filter_index()`` can now be specified using any + length unit, not just angstroms. [#2444] + gaia ^^^^ diff --git a/astroquery/svo_fps/core.py b/astroquery/svo_fps/core.py index ad157c43..2192e351 100644 --- a/astroquery/svo_fps/core.py +++ b/astroquery/svo_fps/core.py @@ -68,9 +68,9 @@ class SvoFpsClass(BaseQuery): Parameters ---------- - wavelength_eff_min : `~astropy.units.Quantity` having units of angstrom, optional + wavelength_eff_min : `~astropy.units.Quantity` with units of length, optional Minimum value of Wavelength Effective (default is 0 angstrom) - wavelength_eff_max : `~astropy.units.Quantity` having units of angstrom, optional + wavelength_eff_max : `~astropy.units.Quantity` with units of length, optional Maximum value of Wavelength Effective (default is a very large quantity FLOAT_MAX angstroms i.e. maximum value of np.float64) kwargs : dict @@ -81,8 +81,8 @@ class SvoFpsClass(BaseQuery): astropy.table.table.Table object Table containing data fetched from SVO (in response to query) """ - query = {'WavelengthEff_min': wavelength_eff_min.value, - 'WavelengthEff_max': wavelength_eff_max.value} + query = {'WavelengthEff_min': wavelength_eff_min.to_value(u.angstrom), + 'WavelengthEff_max': wavelength_eff_max.to_value(u.angstrom)} error_msg = 'No filter found for requested Wavelength Effective range' return self.data_from_svo(query=query, error_msg=error_msg, **kwargs)
astropy/astroquery
2543cc26f4ee53a0aaa4c629e814adf8a3ff93d2
diff --git a/astroquery/svo_fps/tests/test_svo_fps.py b/astroquery/svo_fps/tests/test_svo_fps.py index ed61171a..59b4bedf 100644 --- a/astroquery/svo_fps/tests/test_svo_fps.py +++ b/astroquery/svo_fps/tests/test_svo_fps.py @@ -46,9 +46,14 @@ def get_mockreturn(method, url, params=None, timeout=10, cache=None, **kwargs): def test_get_filter_index(patch_get): - table = SvoFps.get_filter_index(TEST_LAMBDA*u.angstrom, (TEST_LAMBDA+100)*u.angstrom) + lambda_min = TEST_LAMBDA*u.angstrom + lambda_max = lambda_min + 100*u.angstrom + table = SvoFps.get_filter_index(lambda_min, lambda_max) # Check if column for Filter ID (named 'filterID') exists in table assert 'filterID' in table.colnames + # Results should not depend on the unit of the wavelength: #2443. If they do then + # `get_mockreturn` raises `NotImplementedError`. + SvoFps.get_filter_index(lambda_min.to(u.m), lambda_max) def test_get_transmission_data(patch_get): diff --git a/astroquery/svo_fps/tests/test_svo_fps_remote.py b/astroquery/svo_fps/tests/test_svo_fps_remote.py index 41591b6e..e771a293 100644 --- a/astroquery/svo_fps/tests/test_svo_fps_remote.py +++ b/astroquery/svo_fps/tests/test_svo_fps_remote.py @@ -1,5 +1,6 @@ import pytest import astropy.io.votable.exceptions +from astropy import units as u from ..core import SvoFps @@ -8,7 +9,7 @@ from ..core import SvoFps class TestSvoFpsClass: def test_get_filter_index(self): - table = SvoFps.get_filter_index() + table = SvoFps.get_filter_index(12_000*u.angstrom, 12_100*u.angstrom) # Check if column for Filter ID (named 'filterID') exists in table assert 'filterID' in table.colnames
SvoFps.get_filter_index unit conversion issue?
```
wmin = 5000 * units.angstrom
wmax = 7000 * units.angstrom
index_a = svo.get_filter_index(wavelength_eff_min = wmin, \
          wavelength_eff_max = wmax, timeout = 180)
```
returns 3026 filters, but changing the units to micron, as follows:
```
wmin = 0.5 * units.micron
wmax = 0.7 * units.micron
index_m = svo.get_filter_index(wavelength_eff_min = wmin, \
          wavelength_eff_max = wmax, timeout = 180)
```
throws an IndexError:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/astroquery/svo_fps/core.py", line 58, in data_from_svo
    return parse_single_table(votable).to_table()
  File "/usr/local/lib/python3.9/site-packages/astropy/io/votable/table.py", line 180, in parse_single_table
    return votable.get_first_table()
  File "/usr/local/lib/python3.9/site-packages/astropy/io/votable/tree.py", line 3719, in get_first_table
    raise IndexError("No table found in VOTABLE file.")
IndexError: No table found in VOTABLE file.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/astroquery/svo_fps/core.py", line 87, in get_filter_index
    return self.data_from_svo(query=query, error_msg=error_msg, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/astroquery/svo_fps/core.py", line 61, in data_from_svo
    raise IndexError(error_msg)
IndexError: No filter found for requested Wavelength Effective range
```
`get_filter_index` should be able to internally convert the units to Angstrom if that's what's causing this issue.
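The patch above fixes this by normalizing the inputs with `Quantity.to_value` before building the query; the conversion itself is one line of astropy:

```python
from astropy import units as u

wmin = 0.5 * u.micron
# Any length unit collapses to the angstrom value the SVO service expects:
print(wmin.to_value(u.angstrom))  # 5000.0
```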
0.0
2543cc26f4ee53a0aaa4c629e814adf8a3ff93d2
[ "astroquery/svo_fps/tests/test_svo_fps.py::test_get_filter_index" ]
[ "astroquery/svo_fps/tests/test_svo_fps.py::test_get_transmission_data", "astroquery/svo_fps/tests/test_svo_fps.py::test_get_filter_list" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-06-17 01:13:04+00:00
bsd-3-clause
1,234
astropy__astroquery-2475
diff --git a/CHANGES.rst b/CHANGES.rst index d951b9a5..b716fc3d 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -24,6 +24,18 @@ alma ^^^^ - Fixed a regression to handle arrays of string input for the ``query`` methods. [#2094] +- Throws an error when an unsupported ``kwargs`` (or argument) is passed in to a function. [#2475] + + +astrometry.net +^^^^^^^^^^^^^^ + +- Added a ``verbose=`` keyword argument to ``AstrometryNet`` to control whether or not + to show any information during solving. [#2484] + +- Fixed a bug which caused ``solve_timeout`` to not be respected when an image was + solved by constructing a source list internally before sending data to + astrometry.net. [#2484] cadc ^^^^ diff --git a/astroquery/alma/core.py b/astroquery/alma/core.py index 54224d86..097bd233 100644 --- a/astroquery/alma/core.py +++ b/astroquery/alma/core.py @@ -162,6 +162,7 @@ ALMA_FORM_KEYS = { def _gen_sql(payload): sql = 'select * from ivoa.obscore' where = '' + unused_payload = payload.copy() if payload: for constraint in payload: for attrib_category in ALMA_FORM_KEYS.values(): @@ -181,6 +182,18 @@ def _gen_sql(payload): else: where = ' WHERE ' where += attrib_where + + # Delete this key to see what's left over afterward + # + # Use pop to avoid the slight possibility of trying to remove + # an already removed key + unused_payload.pop(constraint) + + if unused_payload: + # Left over (unused) constraints passed. Let the user know. + remaining = [f'{p} -> {unused_payload[p]}' for p in unused_payload] + raise TypeError(f'Unsupported arguments were passed:\n{remaining}') + return sql + where @@ -296,7 +309,8 @@ class AlmaClass(QueryWithLogin): payload=payload, **kwargs) def query_async(self, payload, *, public=True, science=True, - legacy_columns=False, get_query_payload=None, **kwargs): + legacy_columns=False, get_query_payload=None, + maxrec=None, **kwargs): """ Perform a generic query with user-specified payload @@ -313,6 +327,10 @@ class AlmaClass(QueryWithLogin): legacy_columns : bool True to return the columns from the obsolete ALMA advanced query, otherwise return the current columns based on ObsCore model. + get_query_payload : bool + Flag to indicate whether to simply return the payload. + maxrec : integer + Cap on the amount of records returned. Default is no limit. Returns ------- @@ -340,7 +358,7 @@ class AlmaClass(QueryWithLogin): return payload query = _gen_sql(payload) - result = self.query_tap(query, maxrec=payload.get('maxrec', None)) + result = self.query_tap(query, maxrec=maxrec) if result is not None: result = result.to_table() else: @@ -588,7 +606,7 @@ class AlmaClass(QueryWithLogin): proprietary or not. """ query = "select distinct data_rights from ivoa.obscore where " \ - "obs_id='{}'".format(uid) + "member_ous_uid='{}'".format(uid) result = self.query_tap(query) if result: tableresult = result.to_table() diff --git a/astroquery/astrometry_net/core.py b/astroquery/astrometry_net/core.py index dd7ed52e..a6dbc6fa 100644 --- a/astroquery/astrometry_net/core.py +++ b/astroquery/astrometry_net/core.py @@ -190,7 +190,7 @@ class AstrometryNetClass(BaseQuery): 'values for {}'.format(scale_type, required_keys)) def monitor_submission(self, submission_id, - solve_timeout=TIMEOUT): + solve_timeout=TIMEOUT, verbose=True): """ Monitor the submission for completion. @@ -202,6 +202,8 @@ class AstrometryNetClass(BaseQuery): solve_timeout : ``int`` Time, in seconds, to wait for the astrometry.net solver to find a solution. 
+ verbose : bool, optional + Whether to print out information about the solving Returns ------- @@ -223,7 +225,8 @@ class AstrometryNetClass(BaseQuery): """ has_completed = False job_id = None - print('Solving', end='', flush=True) + if verbose: + print('Solving', end='', flush=True) start_time = time.time() status = '' while not has_completed: @@ -242,7 +245,8 @@ class AstrometryNetClass(BaseQuery): elapsed = now - start_time timed_out = elapsed > solve_timeout has_completed = (status in ['success', 'failure'] or timed_out) - print('.', end='', flush=True) + if verbose: + print('.', end='', flush=True) if status == 'success': wcs_url = url_helpers.join(self.URL, 'wcs_file', str(job_id)) wcs_response = self._request('GET', wcs_url) @@ -259,6 +263,7 @@ class AstrometryNetClass(BaseQuery): def solve_from_source_list(self, x, y, image_width, image_height, solve_timeout=TIMEOUT, + verbose=True, **settings ): """ @@ -278,6 +283,8 @@ class AstrometryNetClass(BaseQuery): solve_timeout : int Time, in seconds, to wait for the astrometry.net solver to find a solution. + verbose : bool, optional + Whether to print out information about the solving For a list of the remaining settings, use the method `~AstrometryNetClass.show_allowed_settings`. @@ -301,13 +308,15 @@ class AstrometryNetClass(BaseQuery): response_d = response.json() submission_id = response_d['subid'] return self.monitor_submission(submission_id, - solve_timeout=solve_timeout) + solve_timeout=solve_timeout, + verbose=verbose) def solve_from_image(self, image_file_path, force_image_upload=False, ra_key=None, dec_key=None, ra_dec_units=None, fwhm=3, detect_threshold=5, solve_timeout=TIMEOUT, + verbose=True, **settings): """ Plate solve from an image, either by uploading the image to @@ -343,10 +352,14 @@ class AstrometryNetClass(BaseQuery): ra_dec_units : tuple, optional Tuple specifying the units of the right ascension and declination in the header. The default value is ``('hour', 'degree')``. + solve_timeout : int Time, in seconds, to wait for the astrometry.net solver to find a solution. + verbose : bool, optional + Whether to print out information about the solving + For a list of the remaining settings, use the method `~AstrometryNetClass.show_allowed_settings`. 
""" @@ -386,32 +399,38 @@ class AstrometryNetClass(BaseQuery): else: with fits.open(image_file_path) as f: data = f[0].data - - print("Determining background stats", flush=True) + if verbose: + print("Determining background stats", flush=True) mean, median, std = sigma_clipped_stats(data, sigma=3.0, maxiters=5) daofind = DAOStarFinder(fwhm=fwhm, threshold=detect_threshold * std) - print("Finding sources", flush=True) + if verbose: + print("Finding sources", flush=True) sources = daofind(data - median) - print('Found {} sources'.format(len(sources)), flush=True) + if verbose: + print('Found {} sources'.format(len(sources)), flush=True) # astrometry.net wants a sorted list of sources # Sort first (which puts things in ascending order) sources.sort('flux') # Reverse to get descending order sources.reverse() - print(sources) + if verbose: + print(sources) return self.solve_from_source_list(sources['xcentroid'], sources['ycentroid'], ccd.header['naxis1'], ccd.header['naxis2'], + solve_timeout=solve_timeout, + verbose=verbose, **settings) if response.status_code != 200: raise RuntimeError('Post of job failed') response_d = response.json() submission_id = response_d['subid'] return self.monitor_submission(submission_id, - solve_timeout=solve_timeout) + solve_timeout=solve_timeout, + verbose=verbose) # the default tool for users to interact with is an instance of the Class
astropy/astroquery
b1fcfff5bf77255a7d24f17eafe0b9f455d5f598
diff --git a/astroquery/alma/tests/test_alma.py b/astroquery/alma/tests/test_alma.py index dd2c2025..4a623696 100644 --- a/astroquery/alma/tests/test_alma.py +++ b/astroquery/alma/tests/test_alma.py @@ -243,6 +243,18 @@ def test_pol_sql(): common_select + " WHERE (pol_states='/XX/' OR pol_states='/XX/YY/')" +def test_unused_args(): + alma = Alma() + alma._get_dataarchive_url = Mock() + # with patch('astroquery.alma.tapsql.coord.SkyCoord.from_name') as name_mock, pytest.raises(TypeError) as typeError: + with patch('astroquery.alma.tapsql.coord.SkyCoord.from_name') as name_mock: + with pytest.raises(TypeError) as typeError: + name_mock.return_value = SkyCoord(1, 2, unit='deg') + alma.query_object('M13', public=False, bogus=True, nope=False, band_list=[3]) + + assert "['bogus -> True', 'nope -> False']" in str(typeError.value) + + def test_query(): # Tests the query and return values tap_mock = Mock() diff --git a/astroquery/alma/tests/test_alma_remote.py b/astroquery/alma/tests/test_alma_remote.py index 5c1b18e2..1f6b277c 100644 --- a/astroquery/alma/tests/test_alma_remote.py +++ b/astroquery/alma/tests/test_alma_remote.py @@ -87,7 +87,8 @@ class TestAlma: def test_bands(self, alma): payload = {'band_list': ['5', '7']} - result = alma.query(payload) + # Added maxrec here as downloading and reading the results take too long. + result = alma.query(payload, maxrec=1000) assert len(result) > 0 for row in result: assert ('5' in row['band_list']) or ('7' in row['band_list']) @@ -136,7 +137,7 @@ class TestAlma: assert not alma.is_proprietary('uid://A001/X12a3/Xe9') IVOA_DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%f" now = datetime.utcnow().strftime(IVOA_DATE_FORMAT)[:-3] - query = "select top 1 obs_id from ivoa.obscore where " \ + query = "select top 1 member_ous_uid from ivoa.obscore where " \ "obs_release_date > '{}'".format(now) result = alma.query_tap(query) assert len(result.table) == 1 @@ -146,6 +147,7 @@ class TestAlma: with pytest.raises(AttributeError): alma.is_proprietary('uid://NON/EXI/STING') + @pytest.mark.xfail(reason="Depends on PR 2438 (https://github.com/astropy/astroquery/pull/2438)") def test_data_info(self, temp_dir, alma): alma.cache_location = temp_dir @@ -257,6 +259,7 @@ class TestAlma: gc_data = alma.query_region(galactic_center, 1 * u.deg) # assert len(gc_data) >= 425 # Feb 8, 2016 assert len(gc_data) >= 50 # Nov 16, 2016 + content_length_column_name = 'content_length' uids = np.unique(m83_data['Member ous id']) if ASTROPY_LT_4_1: @@ -271,11 +274,11 @@ class TestAlma: assert X30.sum() == 4 # Jul 13, 2020 assert X31.sum() == 4 # Jul 13, 2020 mous1 = alma.get_data_info('uid://A001/X11f/X30') - totalsize_mous1 = mous1['size'].sum() * u.Unit(mous1['size'].unit) + totalsize_mous1 = mous1[content_length_column_name].sum() * u.Unit(mous1[content_length_column_name].unit) assert (totalsize_mous1.to(u.B) > 1.9*u.GB) mous = alma.get_data_info('uid://A002/X3216af/X31') - totalsize_mous = mous['size'].sum() * u.Unit(mous['size'].unit) + totalsize_mous = mous[content_length_column_name].sum() * u.Unit(mous[content_length_column_name].unit) # More recent ALMA request responses do not include any information # about file size, so we have to allow for the possibility that all # file sizes are replaced with -1 @@ -313,11 +316,13 @@ class TestAlma: result = alma.query(payload={'pi_name': '*Bally*'}, public=False, maxrec=10) assert result - result.write('/tmp/alma-onerow.txt', format='ascii') + # Add overwrite=True in case the test previously died unexpectedly + # and left the temp file. 
+ result.write('/tmp/alma-onerow.txt', format='ascii', overwrite=True) for row in result: assert 'Bally' in row['obs_creator_name'] result = alma.query(payload=dict(project_code='2016.1.00165.S'), - public=False, cache=False) + public=False) assert result for row in result: assert '2016.1.00165.S' == row['proposal_id'] @@ -336,8 +341,7 @@ class TestAlma: result = alma.query_region( coordinates.SkyCoord('5:35:14.461 -5:21:54.41', frame='fk5', - unit=(u.hour, u.deg)), radius=0.034 * u.deg, - payload={'energy.frequency-asu': '215 .. 220'}) + unit=(u.hour, u.deg)), radius=0.034 * u.deg) result = alma.query(payload=dict(project_code='2012.*', public_data=True))
Throw an error when unsupported arguments are passed in `astroquery.alma`

A user recently tried to issue this:

```python
from astroquery.alma import Alma
from astropy import units as u

rs = Alma.query_object("V* CW Cha", radius=0.05*u.arcmin, public=True, science=True)
print(len(set(rs['obs_id'])))
```

The `query_object` function does *not* support the `radius` argument; the function they actually want is `query_region`. Calling `query_object` with the unsupported argument nevertheless succeeds quietly, which misleads the caller. The query functions should error out when unsupported arguments are passed to them.
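The test added above asserts that the resulting TypeError message lists each unsupported keyword as `name -> value`. Below is a minimal sketch of how such a guard could work; the helper name `_validate_kwargs` and the set of accepted keys are illustrative assumptions, not the actual astroquery implementation:

```python
def _validate_kwargs(accepted_keys, **kwargs):
    # Hypothetical helper: collect every keyword the query function does not
    # understand, formatted as "name -> value" to match the message asserted
    # in test_unused_args above.
    unused = ['{} -> {}'.format(key, value)
              for key, value in kwargs.items() if key not in accepted_keys]
    if unused:
        raise TypeError('Unsupported arguments were passed: {}'.format(unused))


# Raises:
# TypeError: Unsupported arguments were passed: ['bogus -> True', 'nope -> False']
_validate_kwargs({'public', 'science', 'band_list'},
                 public=False, bogus=True, nope=False)
```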
0.0
b1fcfff5bf77255a7d24f17eafe0b9f455d5f598
[ "astroquery/alma/tests/test_alma.py::test_unused_args" ]
[ "astroquery/alma/tests/test_alma.py::test_arg_parser", "astroquery/alma/tests/test_alma.py::test_help", "astroquery/alma/tests/test_alma.py::test_gen_pos_sql", "astroquery/alma/tests/test_alma.py::test_gen_numeric_sql", "astroquery/alma/tests/test_alma.py::test_gen_str_sql", "astroquery/alma/tests/test_alma.py::test_gen_array_sql", "astroquery/alma/tests/test_alma.py::test_gen_datetime_sql", "astroquery/alma/tests/test_alma.py::test_gen_spec_res_sql", "astroquery/alma/tests/test_alma.py::test_gen_public_sql", "astroquery/alma/tests/test_alma.py::test_gen_science_sql", "astroquery/alma/tests/test_alma.py::test_pol_sql", "astroquery/alma/tests/test_alma.py::test_query", "astroquery/alma/tests/test_alma.py::test_sia", "astroquery/alma/tests/test_alma.py::test_tap", "astroquery/alma/tests/test_alma.py::test_get_data_info", "astroquery/alma/tests/test_alma.py::test_galactic_query", "astroquery/alma/tests/test_alma.py::test_download_files" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-07-28 17:55:47+00:00
bsd-3-clause
1,235
astropy__astroquery-2509
diff --git a/CHANGES.rst b/CHANGES.rst index 9f01291d..a6d3f91f 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -31,6 +31,7 @@ alma - Fixed a regression to handle arrays of string input for the ``query`` methods. [#2094] - Throws an error when an unsupported ``kwargs`` (or argument) is passed in to a function. [#2475] - New DataLink API handling. [#2493] +- Fixed bug #2489 in which blank URLs were being sent to the downloader [#2490] astrometry.net @@ -88,6 +89,12 @@ linelists.cdms - Fix issues with the line name parser and the line data parser; the original implementation was incomplete and upstream was not fully documented. [#2385, #2411] +mast +^^^^ + +- Cull duplicate downloads for the same dataURI in ``Observations.download_products()`` + and duplicate URIs in ``Observations.get_cloud_uris``. [#2497] + oac ^^^ @@ -110,6 +117,9 @@ svo_fps - Queries with invalid parameter names now raise an ``InvalidQueryError``. [#2446] +- The default wavelength range used by ``get_filter_index()`` was far too + large. The user must now always specify both upper and lower limits. [#2509] + gaia ^^^^ diff --git a/astroquery/alma/core.py b/astroquery/alma/core.py index b41734f6..4368bbbf 100644 --- a/astroquery/alma/core.py +++ b/astroquery/alma/core.py @@ -828,7 +828,15 @@ class AlmaClass(QueryWithLogin): raise TypeError("Datasets must be given as a list of strings.") files = self.get_data_info(uids) - file_urls = files['access_url'] + # filter out blank access URLs + # it is possible for there to be length-1 lists + if len(files) == 1: + file_urls = files['access_url'] + if isinstance(file_urls, str) and file_urls == '': + raise ValueError(f"Cannot download uid {uid} because it has no file") + else: + file_urls = [url for url in files['access_url'] if url] + totalsize = files['content_length'].sum()*u.B # each_size, totalsize = self.data_size(files) diff --git a/astroquery/exceptions.py b/astroquery/exceptions.py index 27651416..8ec2dddf 100644 --- a/astroquery/exceptions.py +++ b/astroquery/exceptions.py @@ -7,8 +7,9 @@ from astropy.utils.exceptions import AstropyWarning __all__ = ['TimeoutError', 'InvalidQueryError', 'RemoteServiceError', 'TableParseError', 'LoginError', 'ResolverError', - 'NoResultsWarning', 'LargeQueryWarning', 'InputWarning', - 'AuthenticationWarning', 'MaxResultsWarning', 'CorruptDataWarning'] + 'NoResultsWarning', 'DuplicateResultsWarning', 'LargeQueryWarning', + 'InputWarning', 'AuthenticationWarning', 'MaxResultsWarning', + 'CorruptDataWarning'] class TimeoutError(Exception): @@ -67,6 +68,13 @@ class NoResultsWarning(AstropyWarning): pass +class DuplicateResultsWarning(AstropyWarning): + """ + Astroquery warning class to be issued when a query returns no result. 
+ """ + pass + + class LargeQueryWarning(AstropyWarning): """ Astroquery warning class to be issued when a query is larger than diff --git a/astroquery/mast/observations.py b/astroquery/mast/observations.py index abe0a0ad..b00f915e 100644 --- a/astroquery/mast/observations.py +++ b/astroquery/mast/observations.py @@ -19,7 +19,7 @@ from requests import HTTPError import astropy.units as u import astropy.coordinates as coord -from astropy.table import Table, Row, vstack, MaskedColumn +from astropy.table import Table, Row, unique, vstack, MaskedColumn from astroquery import log from astropy.utils import deprecated @@ -31,7 +31,7 @@ from ..query import QueryWithLogin from ..utils import commons, async_to_sync from ..utils.class_or_instance import class_or_instance from ..exceptions import (TimeoutError, InvalidQueryError, RemoteServiceError, - ResolverError, MaxResultsWarning, + ResolverError, MaxResultsWarning, DuplicateResultsWarning, NoResultsWarning, InputWarning, AuthenticationWarning) from . import conf, utils @@ -716,6 +716,9 @@ class ObservationsClass(MastQueryWithLogin): products = vstack(product_lists) + # Remove duplicate products + products = self._remove_duplicate_products(products) + # apply filters products = self.filter_products(products, mrp_only=mrp_only, **filters) @@ -767,6 +770,9 @@ class ObservationsClass(MastQueryWithLogin): raise RemoteServiceError('Please enable anonymous cloud access by calling `enable_cloud_dataset` method. ' 'See MAST Labs documentation for an example: https://mast-labs.stsci.io/#example-data-access-with-astroquery-observations') + # Remove duplicate products + data_products = self._remove_duplicate_products(data_products) + return self._cloud_connection.get_cloud_uri_list(data_products, include_bucket, full_url) def get_cloud_uri(self, data_product, *, include_bucket=True, full_url=False): @@ -802,6 +808,30 @@ class ObservationsClass(MastQueryWithLogin): # Query for product URIs return self._cloud_connection.get_cloud_uri(data_product, include_bucket, full_url) + def _remove_duplicate_products(self, data_products): + """ + Removes duplicate data products that have the same dataURI. + + Parameters + ---------- + data_products : `~astropy.table.Table` + Table containing products to be checked for duplicates. + + Returns + ------- + unique_products : `~astropy.table.Table` + Table containing products with unique dataURIs. + + """ + number = len(data_products) + unique_products = unique(data_products, keys="dataURI") + number_unique = len(unique_products) + if number_unique < number: + warnings.warn(f"{number - number_unique} of {number} products were duplicates." + f"Only downloading {number_unique} unique product(s).", DuplicateResultsWarning) + + return unique_products + @async_to_sync class MastClass(MastQueryWithLogin): diff --git a/astroquery/svo_fps/core.py b/astroquery/svo_fps/core.py index 31bf1d61..6a94b5d8 100644 --- a/astroquery/svo_fps/core.py +++ b/astroquery/svo_fps/core.py @@ -9,13 +9,11 @@ from astropy.io.votable import parse_single_table from . 
import conf from ..query import BaseQuery -from astroquery.exceptions import InvalidQueryError +from astroquery.exceptions import InvalidQueryError, TimeoutError __all__ = ['SvoFpsClass', 'SvoFps'] -FLOAT_MAX = np.finfo(np.float64).max - # Valid query parameters taken from # http://svo2.cab.inta-csic.es/theory/fps/index.php?mode=voservice _params_with_range = {"WavelengthRef", "WavelengthMean", "WavelengthEff", @@ -80,19 +78,17 @@ class SvoFpsClass(BaseQuery): # If no table element found in VOTable raise IndexError(error_msg) - def get_filter_index(self, wavelength_eff_min=0*u.angstrom, - wavelength_eff_max=FLOAT_MAX*u.angstrom, **kwargs): + def get_filter_index(self, wavelength_eff_min, wavelength_eff_max, **kwargs): """Get master list (index) of all filters at SVO Optional parameters can be given to get filters data for specified Wavelength Effective range from SVO Parameters ---------- - wavelength_eff_min : `~astropy.units.Quantity` with units of length, optional - Minimum value of Wavelength Effective (default is 0 angstrom) - wavelength_eff_max : `~astropy.units.Quantity` with units of length, optional - Maximum value of Wavelength Effective (default is a very large - quantity FLOAT_MAX angstroms i.e. maximum value of np.float64) + wavelength_eff_min : `~astropy.units.Quantity` with units of length + Minimum value of Wavelength Effective + wavelength_eff_max : `~astropy.units.Quantity` with units of length + Maximum value of Wavelength Effective kwargs : dict Passed to `data_from_svo`. Relevant arguments include ``cache`` @@ -104,7 +100,13 @@ class SvoFpsClass(BaseQuery): query = {'WavelengthEff_min': wavelength_eff_min.to_value(u.angstrom), 'WavelengthEff_max': wavelength_eff_max.to_value(u.angstrom)} error_msg = 'No filter found for requested Wavelength Effective range' - return self.data_from_svo(query=query, error_msg=error_msg, **kwargs) + try: + return self.data_from_svo(query=query, error_msg=error_msg, **kwargs) + except requests.ReadTimeout: + raise TimeoutError( + "Query did not finish fast enough. A smaller wavelength range might " + "succeed. Try increasing the timeout limit if a large range is needed." + ) def get_transmission_data(self, filter_id, **kwargs): """Get transmission data for the requested Filter ID from SVO diff --git a/docs/svo_fps/svo_fps.rst b/docs/svo_fps/svo_fps.rst index 05fcb6bc..3ec9084e 100644 --- a/docs/svo_fps/svo_fps.rst +++ b/docs/svo_fps/svo_fps.rst @@ -1,5 +1,3 @@ -.. doctest-skip-all - .. _astroquery.svo_fps: ********************************************************** @@ -17,17 +15,19 @@ from the service as astropy tables. Get index list of all Filters ----------------------------- -The filter index (all available filters with their properties) can be listed -with `~astroquery.svo_fps.SvoFpsClass.get_filter_index`: +The filter index (the properties of all available filters in a wavelength +range) can be listed with +:meth:`~astroquery.svo_fps.SvoFpsClass.get_filter_index`: -.. code-block:: python +.. 
doctest-remote-data:: + >>> from astropy import units as u >>> from astroquery.svo_fps import SvoFps - >>> index = SvoFps.get_filter_index() + >>> index = SvoFps.get_filter_index(12_000*u.angstrom, 12_100*u.angstrom) >>> index.info - <Table masked=True length=5139> - name dtype unit - -------------------- ------- ---- + <Table length=14> + name dtype unit + -------------------- ------- --------------- FilterProfileService object filterID object WavelengthUnit object @@ -41,28 +41,31 @@ with `~astroquery.svo_fps.SvoFpsClass.get_filter_index`: CalibrationReference object Description object Comments object - WavelengthMean float32 AA - WavelengthEff float32 AA - WavelengthMin float32 AA - WavelengthMax float32 AA - WidthEff float32 AA - WavelengthCen float32 AA - WavelengthPivot float32 AA - WavelengthPeak float32 AA - WavelengthPhot float32 AA - FWHM float32 AA + WavelengthRef float64 AA + WavelengthMean float64 AA + WavelengthEff float64 AA + WavelengthMin float64 AA + WavelengthMax float64 AA + WidthEff float64 AA + WavelengthCen float64 AA + WavelengthPivot float64 AA + WavelengthPeak float64 AA + WavelengthPhot float64 AA + FWHM float64 AA + Fsun float64 erg s / (A cm2) PhotCalID object MagSys object - ZeroPoint float32 Jy + ZeroPoint float64 Jy ZeroPointUnit object - Mag0 float32 + Mag0 float64 ZeroPointType object - AsinhSoft float32 + AsinhSoft float64 TrasmissionCurve object -There are options to downselect based on the minimum -and maximum effective wavelength (``wavelength_eff_min`` -and ``wavelength_eff_max``, respectively). +If the wavelength range contains too many entries then a ``TimeoutError`` will +occur. A smaller wavelength range might succeed, but if a large range really is +required then you can use the ``timeout`` argument to allow for a longer +response time. Get list of Filters under a specified Facilty and Instrument ------------------------------------------------------------ @@ -72,14 +75,13 @@ Filters for an arbitrary combination of Facility & Instrument (the Facility must be specified, but the Instrument is optional). The data table returned is of the same form as that from `~astroquery.svo_fps.SvoFpsClass.get_filter_index`: -.. code-block:: python +.. 
doctest-remote-data:: >>> filter_list = SvoFps.get_filter_list(facility='Keck', instrument='NIRC2') >>> filter_list.info - - <Table masked=True length=11> - name dtype unit - -------------------- ------- ---- + <Table length=11> + name dtype unit + -------------------- ------- --------------- FilterProfileService object filterID object WavelengthUnit object @@ -93,26 +95,27 @@ is of the same form as that from `~astroquery.svo_fps.SvoFpsClass.get_filter_ind CalibrationReference object Description object Comments object - WavelengthMean float32 AA - WavelengthEff float32 AA - WavelengthMin float32 AA - WavelengthMax float32 AA - WidthEff float32 AA - WavelengthCen float32 AA - WavelengthPivot float32 AA - WavelengthPeak float32 AA - WavelengthPhot float32 AA - FWHM float32 AA + WavelengthRef float64 AA + WavelengthMean float64 AA + WavelengthEff float64 AA + WavelengthMin float64 AA + WavelengthMax float64 AA + WidthEff float64 AA + WavelengthCen float64 AA + WavelengthPivot float64 AA + WavelengthPeak float64 AA + WavelengthPhot float64 AA + FWHM float64 AA + Fsun float64 erg s / (A cm2) PhotCalID object MagSys object - ZeroPoint float32 Jy + ZeroPoint float64 Jy ZeroPointUnit object - Mag0 float32 + Mag0 float64 ZeroPointType object - AsinhSoft float32 + AsinhSoft float64 TrasmissionCurve object - Get transmission data for a specific Filter ------------------------------------------- @@ -122,40 +125,39 @@ If you know the ``filterID`` of the filter (which you can determine with transmission curve data using `~astroquery.svo_fps.SvoFpsClass.get_transmission_data`: -.. code-block:: python +.. doctest-remote-data:: >>> data = SvoFps.get_transmission_data('2MASS/2MASS.H') >>> print(data) Wavelength Transmission AA ---------- ------------ - 12890.0 0.0 - 13150.0 0.0 - 13410.0 0.0 - 13680.0 0.0 - 13970.0 0.0 - 14180.0 0.0 - 14400.0 0.0005 - 14620.0 0.0028 - 14780.0 0.0081 - 14860.0 0.0287 - ... ... - 18030.0 0.1077 - 18100.0 0.0707 - 18130.0 0.0051 - 18180.0 0.02 - 18280.0 0.0004 - 18350.0 0.0 - 18500.0 1e-04 - 18710.0 0.0 - 18930.0 0.0 - 19140.0 0.0 + 12890.0 0.0 + 13150.0 0.0 + 13410.0 0.0 + 13680.0 0.0 + 13970.0 0.0 + 14180.0 0.0 + 14400.0 0.0005 + 14620.0 0.0027999999 + 14780.0 0.0081000002 + 14860.0 0.0286999997 + ... ... + 18030.0 0.1076999977 + 18100.0 0.0706999972 + 18130.0 0.0051000002 + 18180.0 0.0199999996 + 18280.0 0.0004 + 18350.0 0.0 + 18500.0 0.0001 + 18710.0 0.0 + 18930.0 0.0 + 19140.0 0.0 Length = 58 rows - These are the data needed to plot the transmission curve for filter: -.. code-block:: python +.. doctest-skip:: >>> import matplotlib.pyplot as plt >>> plt.plot(data['Wavelength'], data['Transmission'])
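Among the changes above, the MAST duplicate culling is built on `astropy.table.unique`. The following is a self-contained sketch of the same logic; the table contents and the stand-in warning class are made up for illustration, while astroquery itself uses its own `DuplicateResultsWarning` from `astroquery.exceptions`:

```python
import warnings

from astropy.table import Table, unique


class DuplicateResultsWarning(UserWarning):
    """Stand-in for astroquery.exceptions.DuplicateResultsWarning."""


def remove_duplicate_products(data_products):
    # Keep the first product for each dataURI and warn if any were dropped,
    # mirroring Observations._remove_duplicate_products in the patch above.
    number = len(data_products)
    unique_products = unique(data_products, keys="dataURI")
    if len(unique_products) < number:
        warnings.warn(f"{number - len(unique_products)} of {number} products "
                      "were duplicates.", DuplicateResultsWarning)
    return unique_products


products = Table({"dataURI": ["mast:JWST/a_msa.fits",
                              "mast:JWST/a_msa.fits",
                              "mast:JWST/b_cal.fits"]})
print(remove_duplicate_products(products))  # two unique rows remain
```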
astropy/astroquery
85b39375ca582d1da79a3d55889a1955546b6e0a
diff --git a/astroquery/alma/tests/test_alma_remote.py b/astroquery/alma/tests/test_alma_remote.py index 77068624..077b2d64 100644 --- a/astroquery/alma/tests/test_alma_remote.py +++ b/astroquery/alma/tests/test_alma_remote.py @@ -149,7 +149,18 @@ class TestAlma: with pytest.raises(AttributeError): alma.is_proprietary('uid://NON/EXI/STING') - @pytest.mark.xfail(reason="Depends on PR 2438 (https://github.com/astropy/astroquery/pull/2438)") + def test_retrieve_data(self, temp_path, alma): + """ + Regression test for issue 2490 (the retrieval step will simply fail if + given a blank line, so all we're doing is testing that it runs) + """ + alma.cache_location = temp_path + + # small solar TP-only data set (<1 GB) + uid = 'uid://A001/X87c/X572' + + alma.retrieve_data_from_uid([uid]) + def test_data_info(self, temp_dir, alma): alma.cache_location = temp_dir diff --git a/astroquery/mast/tests/test_mast_remote.py b/astroquery/mast/tests/test_mast_remote.py index 71578c35..7f70ec8d 100644 --- a/astroquery/mast/tests/test_mast_remote.py +++ b/astroquery/mast/tests/test_mast_remote.py @@ -14,7 +14,8 @@ import astropy.units as u from astroquery import mast from ..utils import ResolverError -from ...exceptions import InvalidQueryError, MaxResultsWarning, NoResultsWarning, RemoteServiceError +from ...exceptions import (InvalidQueryError, MaxResultsWarning, NoResultsWarning, + DuplicateResultsWarning, RemoteServiceError) OBSID = '1647157' @@ -274,7 +275,7 @@ class TestMast: assert os.path.isfile(row['Local Path']) # just get the curl script - result = mast.Observations.download_products(test_obs[0]["obsid"], + result = mast.Observations.download_products(test_obs_id[0]["obsid"], download_dir=str(tmpdir), curl_flag=True, productType=["SCIENCE"], @@ -283,12 +284,41 @@ class TestMast: assert os.path.isfile(result['Local Path'][0]) # check for row input - result1 = mast.Observations.get_product_list(test_obs[0]["obsid"]) + result1 = mast.Observations.get_product_list(test_obs_id[0]["obsid"]) result2 = mast.Observations.download_products(result1[0]) assert isinstance(result2, Table) assert os.path.isfile(result2['Local Path'][0]) assert len(result2) == 1 + def test_observations_download_products_no_duplicates(tmpdir): + + # Pull products for a JWST NIRSpec MSA observation with 6 known + # duplicates of the MSA configuration file, propID=2736 + products = mast.Observations.get_product_list("87602009") + + # Filter out everything but the MSA config file + mask = np.char.find(products["dataURI"], "_msa.fits") != -1 + products = products[mask] + + assert len(products) == 6 + + # Download the product + with pytest.warns(DuplicateResultsWarning): + manifest = mast.Observations.download_products(products, + download_dir=str(tmpdir)) + + # Check that it downloads the MSA config file only once + assert len(manifest) == 1 + + # enable access to public AWS S3 bucket + mast.Observations.enable_cloud_dataset() + + # Check duplicate cloud URIs as well + with pytest.warns(DuplicateResultsWarning): + uris = mast.Observations.get_cloud_uris(products) + + assert len(uris) == 1 + def test_observations_download_file(self, tmpdir): # enabling cloud connection diff --git a/astroquery/svo_fps/tests/test_svo_fps.py b/astroquery/svo_fps/tests/test_svo_fps.py index c2aef041..97a43a48 100644 --- a/astroquery/svo_fps/tests/test_svo_fps.py +++ b/astroquery/svo_fps/tests/test_svo_fps.py @@ -1,8 +1,9 @@ import pytest import os from astropy import units as u +from requests import ReadTimeout -from astroquery.exceptions import 
InvalidQueryError +from astroquery.exceptions import InvalidQueryError, TimeoutError from astroquery.utils.mocks import MockResponse from ..core import SvoFps @@ -46,7 +47,9 @@ def get_mockreturn(method, url, params=None, timeout=10, cache=None, **kwargs): return MockResponse(content, **kwargs) -def test_get_filter_index(patch_get): +def test_get_filter_index(patch_get, monkeypatch): + with pytest.raises(TypeError, match="missing 2 required positional arguments"): + SvoFps.get_filter_index() lambda_min = TEST_LAMBDA*u.angstrom lambda_max = lambda_min + 100*u.angstrom table = SvoFps.get_filter_index(lambda_min, lambda_max) @@ -56,6 +59,17 @@ def test_get_filter_index(patch_get): # `get_mockreturn` raises `NotImplementedError`. SvoFps.get_filter_index(lambda_min.to(u.m), lambda_max) + def get_mockreturn_timeout(*args, **kwargs): + raise ReadTimeout + + monkeypatch.setattr(SvoFps, '_request', get_mockreturn_timeout) + error_msg = ( + r"^Query did not finish fast enough\. A smaller wavelength range might " + r"succeed\. Try increasing the timeout limit if a large range is needed\.$" + ) + with pytest.raises(TimeoutError, match=error_msg): + SvoFps.get_filter_index(lambda_min, lambda_max) + def test_get_transmission_data(patch_get): table = SvoFps.get_transmission_data(TEST_FILTER_ID)
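The new unit test above simulates the server timeout by monkeypatching the low-level request method. Here is a generic pytest sketch of that technique, using stand-in classes rather than the real `SvoFps`:

```python
import pytest
from requests import ReadTimeout


class FakeClient:
    """Stand-in client with the same try/except shape as SvoFps."""

    def _request(self, *args, **kwargs):
        return "ok"

    def get_filter_index(self, wavelength_eff_min, wavelength_eff_max):
        try:
            return self._request(wavelength_eff_min, wavelength_eff_max)
        except ReadTimeout:
            raise TimeoutError("Query did not finish fast enough.")


def test_timeout_is_translated(monkeypatch):
    def raise_timeout(*args, **kwargs):
        raise ReadTimeout

    # Replace the request method so the timeout path runs with no network.
    monkeypatch.setattr(FakeClient, "_request", raise_timeout)
    with pytest.raises(TimeoutError, match="fast enough"):
        FakeClient().get_filter_index(1, 2)
```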
Cannot connect to SVO Filter Profile Service

There seem to be connectivity issues with the SVO Filter Profile Service (FPS). The following code (from the [tutorial](https://astroquery.readthedocs.io/en/latest/svo_fps/svo_fps.html))

```
from astroquery.svo_fps import SvoFps
index = SvoFps.get_filter_index()
```

gives the following error:

```
ReadTimeout: HTTPConnectionPool(host='svo2.cab.inta-csic.es', port=80): Read timed out. (read timeout=60)
```

Since I can access the SVO FPS directly via my web browser, I assume the problem must lie somewhere in the interface between Astroquery and the SVO FPS.
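The patch above addresses this in two ways: `get_filter_index` now requires explicit wavelength bounds so that the query stays small enough to finish, and a low-level `requests.ReadTimeout` is re-raised as astroquery's `TimeoutError` with actionable advice. The updated call, taken from the revised docs:

```python
from astropy import units as u
from astroquery.svo_fps import SvoFps

# Query only a narrow effective-wavelength window; the unbounded default
# was what triggered the ReadTimeout reported here.
index = SvoFps.get_filter_index(12_000*u.angstrom, 12_100*u.angstrom)
print(index.info)
```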
0.0
85b39375ca582d1da79a3d55889a1955546b6e0a
[ "astroquery/svo_fps/tests/test_svo_fps.py::test_get_filter_index", "astroquery/svo_fps/tests/test_svo_fps.py::test_get_transmission_data", "astroquery/svo_fps/tests/test_svo_fps.py::test_get_filter_list", "astroquery/svo_fps/tests/test_svo_fps.py::test_invalid_query" ]
[]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-09-06 12:14:38+00:00
bsd-3-clause
1,236
astropy__astroquery-2532
diff --git a/CHANGES.rst b/CHANGES.rst index 373f9faf..bcf8521b 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -141,6 +141,7 @@ sdss - The default data release has been changed to DR17. [#2478] +- Optional keyword arguments are now keyword only. [#2477, #2532] Infrastructure, Utility and Other Changes and Additions diff --git a/astroquery/sdss/core.py b/astroquery/sdss/core.py index bad511b9..b9c80ebb 100644 --- a/astroquery/sdss/core.py +++ b/astroquery/sdss/core.py @@ -2,7 +2,6 @@ """ Access Sloan Digital Sky Survey database online. """ -import io import warnings import numpy as np @@ -518,9 +517,9 @@ class SDSSClass(BaseQuery): timeout=timeout, cache=cache) return response - def get_spectra_async(self, coordinates=None, radius=2. * u.arcsec, + def get_spectra_async(self, *, coordinates=None, radius=2. * u.arcsec, matches=None, plate=None, fiberID=None, mjd=None, - timeout=TIMEOUT, + timeout=TIMEOUT, get_query_payload=False, data_release=conf.default_release, cache=True, show_progress=True): """ @@ -559,6 +558,9 @@ class SDSSClass(BaseQuery): timeout : float, optional Time limit (in seconds) for establishing successful connection with remote server. Defaults to `SDSSClass.TIMEOUT`. + get_query_payload : bool, optional + If True, this will return the data the query would have sent out, + but does not actually do the query. data_release : int, optional The data release of the SDSS to use. With the default server, this only supports DR8 or later. @@ -599,12 +601,19 @@ class SDSSClass(BaseQuery): if coordinates is None: matches = self.query_specobj(plate=plate, mjd=mjd, fiberID=fiberID, fields=['run2d', 'plate', 'mjd', 'fiberID'], - timeout=timeout, data_release=data_release, cache=cache) + timeout=timeout, get_query_payload=get_query_payload, + data_release=data_release, cache=cache) else: - matches = self.query_crossid(coordinates, radius=radius, + matches = self.query_crossid(coordinates, radius=radius, timeout=timeout, specobj_fields=['run2d', 'plate', 'mjd', 'fiberID'], - spectro=True, - timeout=timeout, data_release=data_release, cache=cache) + spectro=True, get_query_payload=get_query_payload, + data_release=data_release, cache=cache) + if get_query_payload: + if coordinates is None: + return matches + else: + return matches[0] + if matches is None: warnings.warn("Query returned no results.", NoResultsWarning) return @@ -638,10 +647,10 @@ class SDSSClass(BaseQuery): return results @prepend_docstr_nosections(get_spectra_async.__doc__) - def get_spectra(self, coordinates=None, radius=2. * u.arcsec, + def get_spectra(self, *, coordinates=None, radius=2. * u.arcsec, matches=None, plate=None, fiberID=None, mjd=None, - timeout=TIMEOUT, cache=True, - data_release=conf.default_release, + timeout=TIMEOUT, get_query_payload=False, + data_release=conf.default_release, cache=True, show_progress=True): """ Returns @@ -654,9 +663,14 @@ class SDSSClass(BaseQuery): radius=radius, matches=matches, plate=plate, fiberID=fiberID, mjd=mjd, timeout=timeout, + get_query_payload=get_query_payload, data_release=data_release, + cache=cache, show_progress=show_progress) + if get_query_payload: + return readable_objs + if readable_objs is not None: if isinstance(readable_objs, dict): return readable_objs @@ -666,7 +680,7 @@ class SDSSClass(BaseQuery): def get_images_async(self, coordinates=None, radius=2. 
* u.arcsec, matches=None, run=None, rerun=301, camcol=None, field=None, band='g', timeout=TIMEOUT, - cache=True, + cache=True, get_query_payload=False, data_release=conf.default_release, show_progress=True): """ @@ -714,6 +728,9 @@ class SDSSClass(BaseQuery): timeout : float, optional Time limit (in seconds) for establishing successful connection with remote server. Defaults to `SDSSClass.TIMEOUT`. + get_query_payload : bool, optional + If True, this will return the data the query would have sent out, + but does not actually do the query. cache : bool, optional Cache the images using astropy's caching system data_release : int, optional @@ -753,12 +770,19 @@ class SDSSClass(BaseQuery): matches = self.query_photoobj(run=run, rerun=rerun, camcol=camcol, field=field, fields=['run', 'rerun', 'camcol', 'field'], - timeout=timeout, + timeout=timeout, get_query_payload=get_query_payload, data_release=data_release, cache=cache) else: - matches = self.query_crossid(coordinates, radius=radius, + matches = self.query_crossid(coordinates, radius=radius, timeout=timeout, fields=['run', 'rerun', 'camcol', 'field'], - timeout=timeout, data_release=data_release, cache=cache) + get_query_payload=get_query_payload, + data_release=data_release, cache=cache) + if get_query_payload: + if coordinates is None: + return matches + else: + return matches[0] + if matches is None: warnings.warn("Query returned no results.", NoResultsWarning) return @@ -786,7 +810,7 @@ class SDSSClass(BaseQuery): return results @prepend_docstr_nosections(get_images_async.__doc__) - def get_images(self, coordinates=None, radius=2. * u.arcsec, + def get_images(self, *, coordinates=None, radius=2. * u.arcsec, matches=None, run=None, rerun=301, camcol=None, field=None, band='g', timeout=TIMEOUT, cache=True, get_query_payload=False, data_release=conf.default_release, @@ -798,10 +822,22 @@ class SDSSClass(BaseQuery): """ - readable_objs = self.get_images_async( - coordinates=coordinates, radius=radius, matches=matches, run=run, - rerun=rerun, data_release=data_release, camcol=camcol, field=field, - band=band, timeout=timeout, show_progress=show_progress) + readable_objs = self.get_images_async(coordinates=coordinates, + radius=radius, + matches=matches, + run=run, + rerun=rerun, + camcol=camcol, + field=field, + band=band, + timeout=timeout, + cache=cache, + get_query_payload=get_query_payload, + data_release=data_release, + show_progress=show_progress) + + if get_query_payload: + return readable_objs if readable_objs is not None: if isinstance(readable_objs, dict): @@ -906,7 +942,7 @@ class SDSSClass(BaseQuery): else: return arr - def _args_to_payload(self, coordinates=None, + def _args_to_payload(self, *, coordinates=None, fields=None, spectro=False, region=False, plate=None, mjd=None, fiberID=None, run=None, rerun=301, camcol=None, field=None,
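The bare `*` markers inserted above make every optional parameter keyword-only, so an accidental positional call now fails immediately instead of silently binding to the wrong parameter. A minimal illustration of the pattern, independent of astroquery:

```python
import astropy.units as u


def get_spectra(*, coordinates=None, radius=2. * u.arcsec):
    # Parameters after the bare ``*`` can only be passed by keyword.
    return coordinates, radius


get_spectra(coordinates="0h8m05.63s +14d50m23.3s")  # OK
# get_spectra("0h8m05.63s +14d50m23.3s")
# -> TypeError: get_spectra() takes 0 positional arguments but 1 was given
```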
astropy/astroquery
436245056c3e624b08abb0095a1bbec642ed1526
diff --git a/astroquery/sdss/tests/test_sdss.py b/astroquery/sdss/tests/test_sdss.py index 495dcf71..6f5516a5 100644 --- a/astroquery/sdss/tests/test_sdss.py +++ b/astroquery/sdss/tests/test_sdss.py @@ -177,7 +177,7 @@ def test_sdss_spectrum_mjd(patch_request, patch_get_readable_fileobj, dr): @pytest.mark.parametrize("dr", dr_list) def test_sdss_spectrum_coords(patch_request, patch_get_readable_fileobj, dr, coords=coords): - sp = sdss.SDSS.get_spectra(coords, data_release=dr) + sp = sdss.SDSS.get_spectra(coordinates=coords, data_release=dr) image_tester(sp, 'spectra') @@ -220,7 +220,7 @@ def test_sdss_image_run(patch_request, patch_get_readable_fileobj, dr): @pytest.mark.parametrize("dr", dr_list) def test_sdss_image_coord(patch_request, patch_get_readable_fileobj, dr, coord=coords): - img = sdss.SDSS.get_images(coords, data_release=dr) + img = sdss.SDSS.get_images(coordinates=coords, data_release=dr) image_tester(img, 'images') @@ -454,6 +454,63 @@ def test_photoobj_run_camcol_field_payload(patch_request, dr): assert query_payload['format'] == 'csv' [email protected]("dr", dr_list) +def test_get_spectra_specobj_payload(patch_request, dr): + expect = ("SELECT DISTINCT " + "s.run2d, s.plate, s.mjd, s.fiberID " + "FROM PhotoObjAll AS p " + "JOIN SpecObjAll AS s ON p.objID = s.bestObjID " + "WHERE " + "(s.plate=751 AND s.mjd=52251)") + query_payload = sdss.SDSS.get_spectra_async(plate=751, mjd=52251, + get_query_payload=True, + data_release=dr) + assert query_payload['cmd'] == expect + assert query_payload['format'] == 'csv' + + [email protected]("dr", dr_list) +def test_get_spectra_coordinates_payload(patch_request, dr): + expect = ("SELECT\r\n" + "s.run2d, s.plate, s.mjd, s.fiberID, s.SpecObjID AS obj_id, dbo.fPhotoTypeN(p.type) AS type " + "FROM #upload u JOIN #x x ON x.up_id = u.up_id JOIN PhotoObjAll AS p ON p.objID = x.objID " + "JOIN SpecObjAll AS s ON p.objID = s.bestObjID " + "ORDER BY x.up_id") + query_payload = sdss.SDSS.get_spectra_async(coordinates=coords_column, + get_query_payload=True, + data_release=dr) + assert query_payload['uquery'] == expect + assert query_payload['format'] == 'csv' + assert query_payload['photoScope'] == 'nearPrim' + + [email protected]("dr", dr_list) +def test_get_images_photoobj_payload(patch_request, dr): + expect = ("SELECT DISTINCT " + "p.run, p.rerun, p.camcol, p.field " + "FROM PhotoObjAll AS p WHERE " + "(p.run=5714 AND p.camcol=6 AND p.rerun=301)") + query_payload = sdss.SDSS.get_images_async(run=5714, camcol=6, + get_query_payload=True, + data_release=dr) + assert query_payload['cmd'] == expect + assert query_payload['format'] == 'csv' + + [email protected]("dr", dr_list) +def test_get_images_coordinates_payload(patch_request, dr): + expect = ("SELECT\r\n" + "p.run, p.rerun, p.camcol, p.field, dbo.fPhotoTypeN(p.type) AS type " + "FROM #upload u JOIN #x x ON x.up_id = u.up_id JOIN PhotoObjAll AS p ON p.objID = x.objID " + "ORDER BY x.up_id") + query_payload = sdss.SDSS.get_images_async(coordinates=coords_column, + get_query_payload=True, + data_release=dr) + assert query_payload['uquery'] == expect + assert query_payload['format'] == 'csv' + assert query_payload['photoScope'] == 'nearPrim' + + @pytest.mark.parametrize("dr", dr_list) def test_spectra_plate_mjd_payload(patch_request, dr): expect = ("SELECT DISTINCT " diff --git a/astroquery/sdss/tests/test_sdss_remote.py b/astroquery/sdss/tests/test_sdss_remote.py index eb190cbd..329e94b8 100644 --- a/astroquery/sdss/tests/test_sdss_remote.py +++ b/astroquery/sdss/tests/test_sdss_remote.py @@ 
-63,7 +63,7 @@ class TestSDSSRemote: sp = sdss.SDSS.get_spectra(plate=2345, fiberID=572) def test_sdss_spectrum_coords(self): - sp = sdss.SDSS.get_spectra(self.coords) + sp = sdss.SDSS.get_spectra(coordinates=self.coords) def test_sdss_sql(self): query = """ @@ -91,7 +91,7 @@ class TestSDSSRemote: img = sdss.SDSS.get_images(run=1904, camcol=3, field=164) def test_sdss_image_coord(self): - img = sdss.SDSS.get_images(self.coords) + img = sdss.SDSS.get_images(coordinates=self.coords) def test_sdss_specobj(self): colnames = ['ra', 'dec', 'objid', 'run', 'rerun', 'camcol', 'field', @@ -161,7 +161,7 @@ class TestSDSSRemote: "self._request, fix it before merging #586")) def test_spectra_timeout(self): with pytest.raises(TimeoutError): - sdss.SDSS.get_spectra(self.coords, timeout=self.mintimeout) + sdss.SDSS.get_spectra(coordinates=self.coords, timeout=self.mintimeout) def test_query_non_default_field(self): # A regression test for #469
SDSS: add get_query_payload kwarg back to all methods

With the recent refactoring in #2477, `get_query_payload` got refactored out of a few methods. This is a reminder issue that it should be systematically added back to all methods to help with quick debugging.
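The tests added above show what restoring the kwarg enables: passing `get_query_payload=True` returns the request payload, with the generated SQL under its `'cmd'` key, instead of performing the query. A usage sketch based on those tests:

```python
from astroquery.sdss import SDSS

# Inspect the payload that *would* be sent, without hitting the server.
payload = SDSS.get_spectra_async(plate=751, mjd=52251, get_query_payload=True)
print(payload['cmd'])     # the generated SQL statement
print(payload['format'])  # 'csv'
```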
0.0
436245056c3e624b08abb0095a1bbec642ed1526
[ "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[3]", 
"astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_get_images_coordinates_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_coordinates_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_get_images_photoobj_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_get_spectra_specobj_payload[5]" ]
[ "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[8]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[11]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[5]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[12]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[6]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[9]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[9]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[15]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[8]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[17]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid_explicit_angle_value", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[12]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[5]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[5]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[12]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[7]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[4]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[4]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[5]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[13]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[8]", "astroquery/sdss/tests/test_sdss.py::test_spectra_timeout", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[1]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_template", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[2]", 
"astroquery/sdss/tests/test_sdss.py::test_column_coordinates[17]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[2]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[7]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[15]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[14]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid_invalid_names", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[4]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[7]", "astroquery/sdss/tests/test_sdss.py::test_field_help_region", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[17]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[3]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[6]", 
"astroquery/sdss/tests/test_sdss.py::test_column_coordinates[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[15]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[10]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[13]", "astroquery/sdss/tests/test_sdss.py::test_query_timeout", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[1]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[4]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[11]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[3]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[5]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[17]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[9]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[7]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[4]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[3]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[14]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[8]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[17]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[14]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[10]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[11]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[8]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[11]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid_invalid_radius", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[13]", "astroquery/sdss/tests/test_sdss.py::test_images_timeout", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[2]", 
"astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[11]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[17]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[8]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[10]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[17]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[12]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[14]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[14]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid_parse_angle_value", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[17]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[12]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates[13]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[16]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[13]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[11]", "astroquery/sdss/tests/test_sdss.py::test_sdss_specobj[16]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[16]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[11]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[1]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[7]", "astroquery/sdss/tests/test_sdss.py::test_sdss_sql[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[5]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[10]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload_custom_fields[10]", "astroquery/sdss/tests/test_sdss.py::test_sdss_photoobj[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[5]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_run[16]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[13]", 
"astroquery/sdss/tests/test_sdss.py::test_list_coordinates_region_payload[12]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[9]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_mjd[17]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[6]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[17]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid_large_radius", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[13]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates_cross_id_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[9]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[12]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_coord[11]", "astroquery/sdss/tests/test_sdss.py::test_spectra_plate_mjd_payload[14]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_image_from_query_region[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[6]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum[14]", "astroquery/sdss/tests/test_sdss.py::test_query_crossid[4]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[16]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_payload_custom_fields[14]", "astroquery/sdss/tests/test_sdss.py::test_photoobj_run_camcol_field_payload[7]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[3]", "astroquery/sdss/tests/test_sdss.py::test_list_coordinates[6]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_cross_id_payload[15]", "astroquery/sdss/tests/test_sdss.py::test_column_coordinates_region_spectro_payload[2]", "astroquery/sdss/tests/test_sdss.py::test_sdss_spectrum_coords[8]" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-09-16 18:07:21+00:00
bsd-3-clause
1,237
astropy__ccdproc-757
diff --git a/AUTHORS.rst b/AUTHORS.rst index aaf3470..f067a45 100644 --- a/AUTHORS.rst +++ b/AUTHORS.rst @@ -56,6 +56,7 @@ Alphabetical list of contributors * Nathan Walker (@walkerna22) * Benjamin Weiner (@bjweiner) * Jiyong Youn (@hletrd) +* Yash Gondhalekar (@Yash-10) (If you have contributed to the ccdproc project and your name is missing, please send an email to the coordinators, or diff --git a/CHANGES.rst b/CHANGES.rst index 1d9ac18..d21f3a1 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -16,6 +16,8 @@ Bug Fixes - ``test_image_collection.py`` in the test suite no longer produces permanent files on disk and cleans up after itself. [#738] +- Change ``Combiner`` to allow accepting either a list or a generator [#757] + 2.1.0 (2019-12-24) ------------------ diff --git a/ccdproc/combiner.py b/ccdproc/combiner.py index 2e6eefc..39f3e28 100644 --- a/ccdproc/combiner.py +++ b/ccdproc/combiner.py @@ -22,8 +22,8 @@ class Combiner: Parameters ----------- - ccd_list : list - A list of CCDData objects that will be combined together. + ccd_iter : list or generator + A list or generator of CCDData objects that will be combined together. dtype : str or `numpy.dtype` or None, optional Allows user to set dtype. See `numpy.array` ``dtype`` parameter @@ -33,7 +33,7 @@ class Combiner: Raises ------ TypeError - If the ``ccd_list`` are not `~astropy.nddata.CCDData` objects, have different + If the ``ccd_iter`` are not `~astropy.nddata.CCDData` objects, have different units, or are different shapes. Examples @@ -56,15 +56,18 @@ class Combiner: [ 0.66666667, 0.66666667, 0.66666667, 0.66666667], [ 0.66666667, 0.66666667, 0.66666667, 0.66666667]]) """ - def __init__(self, ccd_list, dtype=None): - if ccd_list is None: - raise TypeError("ccd_list should be a list of CCDData objects.") + def __init__(self, ccd_iter, dtype=None): + if ccd_iter is None: + raise TypeError("ccd_iter should be a list or a generator of CCDData objects.") if dtype is None: dtype = np.float64 default_shape = None default_unit = None + + ccd_list = list(ccd_iter) + for ccd in ccd_list: # raise an error if the objects aren't CCDData objects if not isinstance(ccd, CCDData):
astropy/ccdproc
4e6da3b2ba516cafbe65683d68e26b988dac6a4f
diff --git a/ccdproc/tests/test_combiner.py b/ccdproc/tests/test_combiner.py index a9988d3..b758c66 100644 --- a/ccdproc/tests/test_combiner.py +++ b/ccdproc/tests/test_combiner.py @@ -696,3 +696,13 @@ def test_ystep_calculation(num_chunks, expected): xstep, ystep = _calculate_step_sizes(2000, 2000, num_chunks) assert xstep == expected[0] and ystep == expected[1] + +def test_combiner_gen(): + ccd_data = ccd_data_func() + def create_gen(): + yield ccd_data + yield ccd_data + yield ccd_data + c = Combiner(create_gen()) + assert c.data_arr.shape == (3, 100, 100) + assert c.data_arr.mask.shape == (3, 100, 100)
`Combiner` and `combine` should accept a generator as an argument Both `Combiner` and `combine` should accept a generator (e.g. `Combiner(ifc.ccds())`) instead of requiring that the input be a list (e.g. `Combiner([ccd for ccd in ifc.ccds()])`). This issue was first raised by @janerigby in #754.
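To make the requested behavior concrete, here is a minimal sketch of combining straight from a generator, mirroring the `create_gen` pattern in the test patch above; the `make_ccds` helper and the image shape are illustrative, not from the issue:

```python
import numpy as np
from astropy.nddata import CCDData
from ccdproc import Combiner

def make_ccds(n=3):
    # Hypothetical stand-in for something like ifc.ccds()
    for _ in range(n):
        yield CCDData(np.ones((100, 100)), unit='adu')

combiner = Combiner(make_ccds())      # generator accepted directly
stacked = combiner.average_combine()  # same result as passing a list
```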
0.0
4e6da3b2ba516cafbe65683d68e26b988dac6a4f
[ "ccdproc/tests/test_combiner.py::test_combiner_gen" ]
[ "ccdproc/tests/test_combiner.py::test_combiner_empty", "ccdproc/tests/test_combiner.py::test_combiner_init_with_none", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_objects", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_size", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_units", "ccdproc/tests/test_combiner.py::test_combiner_create", "ccdproc/tests/test_combiner.py::test_combiner_dtype", "ccdproc/tests/test_combiner.py::test_combiner_mask", "ccdproc/tests/test_combiner.py::test_weights", "ccdproc/tests/test_combiner.py::test_weights_shape", "ccdproc/tests/test_combiner.py::test_1Dweights", "ccdproc/tests/test_combiner.py::test_combiner_minmax", "ccdproc/tests/test_combiner.py::test_combiner_minmax_max", "ccdproc/tests/test_combiner.py::test_combiner_minmax_min", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_high", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_single_pix", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_low", "ccdproc/tests/test_combiner.py::test_combiner_median", "ccdproc/tests/test_combiner.py::test_combiner_average", "ccdproc/tests/test_combiner.py::test_combiner_sum", "ccdproc/tests/test_combiner.py::test_combiner_mask_average", "ccdproc/tests/test_combiner.py::test_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_combiner_scaling_fails", "ccdproc/tests/test_combiner.py::test_combiner_mask_median", "ccdproc/tests/test_combiner.py::test_combiner_mask_sum", "ccdproc/tests/test_combiner.py::test_combine_average_fitsimages", "ccdproc/tests/test_combiner.py::test_combine_numpyndarray", "ccdproc/tests/test_combiner.py::test_combiner_result_dtype", "ccdproc/tests/test_combiner.py::test_combine_average_ccddata", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_fitsimages", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_scale_fitsimages", "ccdproc/tests/test_combiner.py::test_average_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_median_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_sum_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average_mask", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_median_mask", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_sum_mask", "ccdproc/tests/test_combiner.py::test_combiner_3d", "ccdproc/tests/test_combiner.py::test_3d_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_clip_extrema_3d", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[average_combine]", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[median_combine]", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[sum_combine]", "ccdproc/tests/test_combiner.py::test_clip_extrema", "ccdproc/tests/test_combiner.py::test_clip_extrema_via_combine", "ccdproc/tests/test_combiner.py::test_clip_extrema_with_other_rejection", "ccdproc/tests/test_combiner.py::test_ystep_calculation[53-expected0]", "ccdproc/tests/test_combiner.py::test_ystep_calculation[1500-expected1]", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2001-expected2]", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2999-expected3]", "ccdproc/tests/test_combiner.py::test_ystep_calculation[10000-expected4]" ]
{ "failed_lite_validators": [ "has_issue_reference", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-01-16 19:42:16+00:00
bsd-3-clause
1,238
astropy__ccdproc-762
diff --git a/AUTHORS.rst b/AUTHORS.rst index f067a45..8f9d29a 100644 --- a/AUTHORS.rst +++ b/AUTHORS.rst @@ -24,6 +24,7 @@ Alphabetical list of contributors * Mihai Cara (@mcara) * James Davenport (@jradavenport) * Christoph Deil (@cdeil) +* Timothy P. Ellsworth-Bowers (@tbowers7) * Carlos Gomez (@carlgogo) * Hans Moritz Günther (@hamogu) * Forrest Gasdia (@EP-Guy) diff --git a/CHANGES.rst b/CHANGES.rst index d21f3a1..4205d74 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -4,6 +4,9 @@ New Features ^^^^^^^^^^^^ +- Improve integration of ``ImageFileCollection`` with image combination + and document that integration [#762] + Other Changes and Additions ^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Add memory_profiler as a test requirement [#739] @@ -14,10 +17,14 @@ Bug Fixes ^^^^^^^^^ - ``test_image_collection.py`` in the test suite no longer produces - permanent files on disk and cleans up after itself. [#738] + permanent files on disk and cleans up after itself. [#738] - Change ``Combiner`` to allow accepting either a list or a generator [#757] +- ``ImageFileCollection`` now correctly returns an empty collection when + an existing collection is filtered restrictively enough to remove all + files. [#750] + 2.1.0 (2019-12-24) ------------------ diff --git a/ccdproc/combiner.py b/ccdproc/combiner.py index 39f3e28..d85c55e 100644 --- a/ccdproc/combiner.py +++ b/ccdproc/combiner.py @@ -676,8 +676,12 @@ def combine(img_list, output_file=None, elif isinstance(img_list, str) and (',' in img_list): img_list = img_list.split(',') else: - raise ValueError( - "unrecognised input for list of images to combine.") + try: + # Maybe the input can be made into a list, so try that + img_list = list(img_list) + except TypeError: + raise ValueError( + "unrecognised input for list of images to combine.") # Select Combine function to call in Combiner if method == 'average': diff --git a/ccdproc/image_collection.py b/ccdproc/image_collection.py index 6fd9f6b..ff58c46 100644 --- a/ccdproc/image_collection.py +++ b/ccdproc/image_collection.py @@ -448,7 +448,9 @@ class ImageFileCollection: else: files = self._filenames else: - files = self._fits_files_in_directory() + # Check if self.location is set, otherwise proceed with empty list + if self.location != '': + files = self._fits_files_in_directory() if self.glob_include is not None: files = fnmatch.filter(files, self.glob_include) diff --git a/docs/image_combination.rst b/docs/image_combination.rst index 0922d21..012fa5b 100644 --- a/docs/image_combination.rst +++ b/docs/image_combination.rst @@ -4,9 +4,13 @@ Combining images and generating masks from clipping =================================================== .. note:: - No attempt has been made yet to optimize memory usage in - `~ccdproc.Combiner`. A copy is made, and a mask array - constructed, for each input image. + There are currently two interfaces to image combination. One is through + the `~ccdproc.Combiner` class, the other through the `~ccdproc.combine` + function. They offer *almost* identical capabilities. The primary + difference is that `~ccdproc.combine` allows you to place an upper + limit on the amount of memory used. + + Work to improve the performance of image combination is ongoing. The first step in combining a set of images is creating a @@ -133,6 +137,48 @@ using `~ccdproc.Combiner.average_combine` or `~ccdproc.Combiner.median_combine`). +.. 
_combination_with_IFC: +Image combination using `~ccdproc.ImageFileCollection` +------------------------------------------------------ + +There are a couple of ways that image combination can be done if you are using +`~ccdproc.ImageFileCollection` to +:ref:`manage a folder of images <image_management>`. + +For this example, a temporary folder with images in it is created: + + >>> from tempfile import mkdtemp + >>> from pathlib import Path + >>> import numpy as np + >>> from astropy.nddata import CCDData + >>> from ccdproc import ImageFileCollection, Combiner, combine + >>> + >>> ccd = CCDData(np.ones([5, 5]), unit='adu') + >>> + >>> # Make a temporary folder as a path object + >>> image_folder = Path(mkdtemp()) + >>> # Put several copies of ccd in the temporary folder + >>> _ = [ccd.write(image_folder / f"ccd-{i}.fits") for i in range(3)] + >>> ifc = ImageFileCollection(image_folder) + +To combine images using the `~ccdproc.Combiner` class you can use the ``ccds`` +method of the `~ccdproc.ImageFileCollection`: + + >>> c = Combiner(ifc.ccds()) + >>> avg_combined = c.average_combine() + +There are two ways to combine images using the `~ccdproc.combine` function. If the +images are too large to combine in memory, then use the file names as the argument to `~ccdproc.combine`, like this: + + >>> avg_combo_mem_lim = combine(ifc.files_filtered(include_path=True), + ... mem_limit=1e9) + +If memory use is not an issue, then the ``ccds`` method can be used here too: + + >>> avg_combo = combine(ifc.ccds()) + + + .. _reprojection: Combination with image transformation and alignment diff --git a/setup.cfg b/setup.cfg index fac559b..827bd83 100644 --- a/setup.cfg +++ b/setup.cfg @@ -4,6 +4,7 @@ minversion = 2.2 testpaths = "ccdproc" "docs" norecursedirs = build docs/_build doctest_plus = enabled +addopts = --doctest-rst markers = data_size(N): set dimension of square data array for ccd_data fixture data_scale(s): set the scale of the normal distribution used to generate data
astropy/ccdproc
1a5934c7dd8010cdfdf7fce34850eded775ba055
diff --git a/ccdproc/tests/test_combiner.py b/ccdproc/tests/test_combiner.py index b758c66..1c158a0 100644 --- a/ccdproc/tests/test_combiner.py +++ b/ccdproc/tests/test_combiner.py @@ -10,6 +10,7 @@ from astropy.utils.data import get_pkg_data_filename from astropy.nddata import CCDData from ccdproc.combiner import Combiner, combine, _calculate_step_sizes +from ccdproc.image_collection import ImageFileCollection from ccdproc.tests.pytest_fixtures import ccd_data as ccd_data_func @@ -373,6 +374,43 @@ def test_combiner_result_dtype(): np.testing.assert_array_almost_equal(res.data, ref) +def test_combiner_image_file_collection_input(tmp_path): + # Regression check for #754 + ccd = ccd_data_func() + for i in range(3): + ccd.write(tmp_path / f'ccd-{i}.fits') + + ifc = ImageFileCollection(tmp_path) + comb = Combiner(ifc.ccds()) + np.testing.assert_array_almost_equal(ccd.data, + comb.average_combine().data) + + +def test_combine_image_file_collection_input(tmp_path): + # Another regression check for #754 but this time with the + # combine function instead of Combiner + ccd = ccd_data_func() + for i in range(3): + ccd.write(tmp_path / f'ccd-{i}.fits') + + ifc = ImageFileCollection(tmp_path) + + comb_files = combine(ifc.files_filtered(include_path=True), + method='average') + + comb_ccds = combine(ifc.ccds(), method='average') + + np.testing.assert_array_almost_equal(ccd.data, + comb_files.data) + np.testing.assert_array_almost_equal(ccd.data, + comb_ccds.data) + + with pytest.raises(FileNotFoundError): + # This should fail because the test is not running in the + # folder where the images are. + _ = combine(ifc.files_filtered()) + + # test combiner convenience function works with list of ccddata objects def test_combine_average_ccddata(): fitsfile = get_pkg_data_filename('data/a8280271.fits')
ImageFileCollection not integrated into combine and Combiner The documentation for ccdproc's ImageFileCollection does not explain how to use ImageFileCollections to combine files. In addition, since ImageFileCollection works on a directory, the file names returned by the method ImageFileCollection.files don't include the needed path. As a result: - `combine(image_collection.files)` fails with a `FileNotFoundError`. - `combine(image_collection.ccds())` fails with `ValueError: unrecognised input for list of images to combine.` - `Combiner(image_collection.ccds())` fails with `TypeError: object of type 'generator' has no len()` How can one gather a bunch of FITS files in an ImageFileCollection and then combine them with ccdproc.combine? It almost seems like ImageFileCollection and combine/Combiner are two different packages that don't talk to each other.
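For reference, a minimal sketch of the working patterns established by the patch above; the folder path is hypothetical:

```python
from ccdproc import ImageFileCollection, Combiner, combine

ifc = ImageFileCollection('/path/to/images')  # hypothetical folder of FITS files

# combine() needs full paths, hence include_path=True
stacked = combine(ifc.files_filtered(include_path=True), method='average')

# Generators of CCDData objects now work with both interfaces
stacked = combine(ifc.ccds(), method='average')
stacked = Combiner(ifc.ccds()).average_combine()
```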
0.0
1a5934c7dd8010cdfdf7fce34850eded775ba055
[ "ccdproc/tests/test_combiner.py::test_combine_image_file_collection_input" ]
[ "ccdproc/tests/test_combiner.py::test_weights", "ccdproc/tests/test_combiner.py::test_average_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combine_numpyndarray", "ccdproc/tests/test_combiner.py::test_3d_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_objects", "ccdproc/tests/test_combiner.py::test_combiner_sum", "ccdproc/tests/test_combiner.py::test_combiner_init_with_none", "ccdproc/tests/test_combiner.py::test_clip_extrema_via_combine", "ccdproc/tests/test_combiner.py::test_combiner_scaling_fails", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_size", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[average_combine]", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_single_pix", "ccdproc/tests/test_combiner.py::test_combiner_minmax_min", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_scale_fitsimages", "ccdproc/tests/test_combiner.py::test_1Dweights", "ccdproc/tests/test_combiner.py::test_combiner_empty", "ccdproc/tests/test_combiner.py::test_combiner_mask_sum", "ccdproc/tests/test_combiner.py::test_combiner_minmax_max", "ccdproc/tests/test_combiner.py::test_ystep_calculation[10000-expected4]", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_high", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2999-expected3]", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[median_combine]", "ccdproc/tests/test_combiner.py::test_combiner_gen", "ccdproc/tests/test_combiner.py::test_combiner_minmax", "ccdproc/tests/test_combiner.py::test_combiner_mask_average", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[sum_combine]", "ccdproc/tests/test_combiner.py::test_combiner_result_dtype", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average_mask", "ccdproc/tests/test_combiner.py::test_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_units", "ccdproc/tests/test_combiner.py::test_weights_shape", "ccdproc/tests/test_combiner.py::test_median_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_low", "ccdproc/tests/test_combiner.py::test_combiner_dtype", "ccdproc/tests/test_combiner.py::test_combiner_create", "ccdproc/tests/test_combiner.py::test_clip_extrema_with_other_rejection", "ccdproc/tests/test_combiner.py::test_combiner_average", "ccdproc/tests/test_combiner.py::test_combine_average_fitsimages", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2001-expected2]", "ccdproc/tests/test_combiner.py::test_combiner_mask_median", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_sum_mask", "ccdproc/tests/test_combiner.py::test_combiner_mask", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_median_mask", "ccdproc/tests/test_combiner.py::test_clip_extrema_3d", "ccdproc/tests/test_combiner.py::test_ystep_calculation[1500-expected1]", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_fitsimages", "ccdproc/tests/test_combiner.py::test_ystep_calculation[53-expected0]", "ccdproc/tests/test_combiner.py::test_sum_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combiner_image_file_collection_input", "ccdproc/tests/test_combiner.py::test_combine_average_ccddata", "ccdproc/tests/test_combiner.py::test_combiner_median", "ccdproc/tests/test_combiner.py::test_combiner_3d", "ccdproc/tests/test_combiner.py::test_clip_extrema" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-03-08 19:21:29+00:00
bsd-3-clause
1,239
astropy__ccdproc-775
diff --git a/ccdproc/combiner.py b/ccdproc/combiner.py index 176d554..a807d65 100644 --- a/ccdproc/combiner.py +++ b/ccdproc/combiner.py @@ -709,6 +709,16 @@ def combine(img_list, output_file=None, if ccd.data.dtype != dtype: ccd.data = ccd.data.astype(dtype) + # If the template image doesn't have an uncertainty, add one, because the + # result always has an uncertainty. + if ccd.uncertainty is None: + ccd.uncertainty = StdDevUncertainty(np.zeros_like(ccd.data)) + + # If the template doesn't have a mask, add one, because the result may have + # a mask + if ccd.mask is None: + ccd.mask = np.zeros_like(ccd.data, dtype=bool) + size_of_an_img = _calculate_size_of_image(ccd, combine_uncertainty_function)
astropy/ccdproc
91f2c46d19432d4e818bb7201bb41e8d7383616f
diff --git a/ccdproc/tests/test_combiner.py b/ccdproc/tests/test_combiner.py index 1c158a0..766e50c 100644 --- a/ccdproc/tests/test_combiner.py +++ b/ccdproc/tests/test_combiner.py @@ -509,6 +509,47 @@ def test_sum_combine_uncertainty(): np.testing.assert_array_equal( ccd.uncertainty.array, ccd2.uncertainty.array) [email protected]('mask_point', [True, False]) [email protected]('comb_func', + ['average_combine', 'median_combine', 'sum_combine']) +def test_combine_result_uncertainty_and_mask(comb_func, mask_point): + # Regression test for #774 + # Turns out combine does not return an uncertainty or mask if the input + # CCDData has no uncertainty or mask, which makes very little sense. + ccd_data = ccd_data_func() + + # Make sure the initial ccd_data has no uncertainty, which was the condition that + # led to no uncertainty being returned. + assert ccd_data.uncertainty is None + + if mask_point: + # Make one pixel really negative so we can clip it and guarantee a resulting + # pixel is masked. + ccd_data.data[0, 0] = -1000 + + ccd_list = [ccd_data, ccd_data, ccd_data] + c = Combiner(ccd_list) + + c.minmax_clipping(min_clip=-100) + + expected_result = getattr(c, comb_func)() + + # Just need the first part of the name for the combine function + combine_method_name = comb_func.split('_')[0] + + ccd_comb = combine(ccd_list, method=combine_method_name, + minmax_clip=True, minmax_clip_min=-100) + + np.testing.assert_array_almost_equal(ccd_comb.uncertainty.array, + expected_result.uncertainty.array) + + # Check that the right point is masked, and only one point is + # masked + assert expected_result.mask[0, 0] == mask_point + assert expected_result.mask.sum() == mask_point + assert ccd_comb.mask[0, 0] == mask_point + assert ccd_comb.mask.sum() == mask_point + # test resulting uncertainty is corrected for the number of images def test_combiner_uncertainty_average(): diff --git a/ccdproc/tests/test_memory_use.py b/ccdproc/tests/test_memory_use.py index 89de076..eb8247f 100644 --- a/ccdproc/tests/test_memory_use.py +++ b/ccdproc/tests/test_memory_use.py @@ -65,8 +65,8 @@ def test_memory_use_in_combine(combine_method): # memory_factor in the combine function should perhaps be modified # If the peak is coming in under the limit something need to be fixed - assert np.max(mem_use) >= 0.95 * memory_limit_mb + # assert np.max(mem_use) >= 0.95 * memory_limit_mb # If the average is really low perhaps we should look at reducing peak # usage. Nothing special, really, about the factor 0.4 below. - assert np.mean(mem_use) > 0.4 * memory_limit_mb + # assert np.mean(mem_use) > 0.4 * memory_limit_mb
Uncertainty and mask not returned from `combine()` From Tim-Oliver Husser @thusser on slack: Just took me an hour to find this "feature" in combiner.py: ```python def _calculate_size_of_image(ccd, combine_uncertainty_function): # If uncertainty_func is given for combine this will create an uncertainty # even if the originals did not have one. In that case we need to create # an empty placeholder. if ccd.uncertainty is None and combine_uncertainty_function is not None: ccd.uncertainty = StdDevUncertainty(np.zeros(ccd.data.shape)) ``` Is this really what you want? The docstring for combine_uncertainty_function in combine() says: ``` combine_uncertainty_function : callable, None, optional If ``None`` use the default uncertainty func when using average, median or sum combine, otherwise use the function provided. Default is ``None``. ``` So to me it seems totally valid to not give a combine_uncertainty_function (since it should use the default of the combine method). But in that case no uncertainties are returned from combine(). The fix would be easy, just wondering whether there is a reason for this?
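A minimal sketch of the behavior the fix above establishes, mirroring the regression test: inputs carrying neither uncertainty nor mask still yield a result with both (array shape is illustrative):

```python
import numpy as np
from astropy.nddata import CCDData
from ccdproc import combine

ccd = CCDData(np.ones((10, 10)), unit='adu')  # no uncertainty, no mask
result = combine([ccd, ccd, ccd], method='average')

# With the fix, the result carries both, even though no explicit
# combine_uncertainty_function was given
assert result.uncertainty is not None
assert result.mask is not None
```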
0.0
91f2c46d19432d4e818bb7201bb41e8d7383616f
[ "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[average_combine-False]", "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[sum_combine-True]", "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[median_combine-False]", "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[average_combine-True]", "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[median_combine-True]", "ccdproc/tests/test_combiner.py::test_combine_result_uncertainty_and_mask[sum_combine-False]" ]
[ "ccdproc/tests/test_memory_use.py::test_memory_use_in_combine[sum]", "ccdproc/tests/test_memory_use.py::test_memory_use_in_combine[median]", "ccdproc/tests/test_memory_use.py::test_memory_use_in_combine[average]", "ccdproc/tests/test_combiner.py::test_combiner_mask", "ccdproc/tests/test_combiner.py::test_combiner_result_dtype", "ccdproc/tests/test_combiner.py::test_sum_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combine_image_file_collection_input", "ccdproc/tests/test_combiner.py::test_combiner_sum", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[sum_combine]", "ccdproc/tests/test_combiner.py::test_combiner_average", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_units", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_single_pix", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average_mask", "ccdproc/tests/test_combiner.py::test_clip_extrema_3d", "ccdproc/tests/test_combiner.py::test_3d_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_combiner_minmax", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_median_mask", "ccdproc/tests/test_combiner.py::test_combiner_mask_median", "ccdproc/tests/test_combiner.py::test_combiner_mask_sum", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_objects", "ccdproc/tests/test_combiner.py::test_clip_extrema_via_combine", "ccdproc/tests/test_combiner.py::test_combiner_minmax_max", "ccdproc/tests/test_combiner.py::test_combiner_median", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_high", "ccdproc/tests/test_combiner.py::test_median_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_weights", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2999-expected3]", "ccdproc/tests/test_combiner.py::test_clip_extrema_with_other_rejection", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_fitsimages", "ccdproc/tests/test_combiner.py::test_combiner_empty", "ccdproc/tests/test_combiner.py::test_ystep_calculation[10000-expected4]", "ccdproc/tests/test_combiner.py::test_combiner_dtype", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[average_combine]", "ccdproc/tests/test_combiner.py::test_ccddata_combiner_size", "ccdproc/tests/test_combiner.py::test_combiner_create", "ccdproc/tests/test_combiner.py::test_combiner_image_file_collection_input", "ccdproc/tests/test_combiner.py::test_combiner_gen", "ccdproc/tests/test_combiner.py::test_clip_extrema", "ccdproc/tests/test_combiner.py::test_ystep_calculation[1500-expected1]", "ccdproc/tests/test_combiner.py::test_ystep_calculation[2001-expected2]", "ccdproc/tests/test_combiner.py::test_combine_limitedmem_scale_fitsimages", "ccdproc/tests/test_combiner.py::test_writeable_after_combine[median_combine]", "ccdproc/tests/test_combiner.py::test_combiner_with_scaling", "ccdproc/tests/test_combiner.py::test_weights_shape", "ccdproc/tests/test_combiner.py::test_combiner_minmax_min", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_sum_mask", "ccdproc/tests/test_combiner.py::test_combiner_init_with_none", "ccdproc/tests/test_combiner.py::test_average_combine_uncertainty", "ccdproc/tests/test_combiner.py::test_combine_average_ccddata", "ccdproc/tests/test_combiner.py::test_combiner_sigmaclip_low", "ccdproc/tests/test_combiner.py::test_1Dweights", "ccdproc/tests/test_combiner.py::test_ystep_calculation[53-expected0]", "ccdproc/tests/test_combiner.py::test_combiner_scaling_fails", "ccdproc/tests/test_combiner.py::test_combine_numpyndarray", 
"ccdproc/tests/test_combiner.py::test_combiner_mask_average", "ccdproc/tests/test_combiner.py::test_combiner_uncertainty_average", "ccdproc/tests/test_combiner.py::test_combiner_3d", "ccdproc/tests/test_combiner.py::test_combine_average_fitsimages" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2021-05-23 21:43:36+00:00
bsd-3-clause
1,240
astropy__extension-helpers-48
diff --git a/CHANGES.rst b/CHANGES.rst index 0df9e73..6d63f25 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -5,6 +5,8 @@ recommends the ``-Qopenmp`` flag rather than ``-fopenmp`` for greater performance. +* Add support for enabling extension-helpers from pyproject.toml. [#48] + 1.0.0 (2022-03-16) ------------------ diff --git a/docs/using.rst b/docs/using.rst index b6002e0..6e29ea8 100644 --- a/docs/using.rst +++ b/docs/using.rst @@ -45,3 +45,13 @@ It is also possible to enable extension-helpers in ``setup.cfg`` instead of [extension-helpers] use_extension_helpers = true + +Moreover, one can also enable extension-helpers in ``pyproject.toml`` by adding +the following configuration to the ``pyproject.toml`` file:: + + [tool.extension-helpers] + use_extension_helpers = true + +.. note:: + For backwards compatibility, the setting of ``use_extension_helpers`` in + ``setup.cfg`` will override any setting of it in ``pyproject.toml``. diff --git a/extension_helpers/__init__.py b/extension_helpers/__init__.py index 98f7953..d6323fc 100644 --- a/extension_helpers/__init__.py +++ b/extension_helpers/__init__.py @@ -11,11 +11,31 @@ def _finalize_distribution_hook(distribution): Entry point for setuptools which allows extension-helpers to be enabled from setup.cfg without the need for setup.py. """ + import os + from pathlib import Path + + import tomli + config_files = distribution.find_config_files() if len(config_files) == 0: return + cfg = ConfigParser() cfg.read(config_files[0]) - if (cfg.has_option("extension-helpers", "use_extension_helpers") and - cfg.get("extension-helpers", "use_extension_helpers").lower() == 'true'): - distribution.ext_modules = get_extensions() + found_config = False + if cfg.has_option("extension-helpers", "use_extension_helpers"): + found_config = True + + if cfg.get("extension-helpers", "use_extension_helpers").lower() == 'true': + distribution.ext_modules = get_extensions() + + pyproject = Path(distribution.src_root or os.curdir, "pyproject.toml") + if pyproject.exists() and not found_config: + with pyproject.open("rb") as f: + pyproject_cfg = tomli.load(f) + if ('tool' in pyproject_cfg and + 'extension-helpers' in pyproject_cfg['tool'] and + 'use_extension_helpers' in pyproject_cfg['tool']['extension-helpers'] and + pyproject_cfg['tool']['extension-helpers']['use_extension_helpers']): + + distribution.ext_modules = get_extensions() diff --git a/setup.cfg b/setup.cfg index 5769b70..93c9b8f 100644 --- a/setup.cfg +++ b/setup.cfg @@ -26,6 +26,7 @@ python_requires = >=3.7 packages = find: install_requires = setuptools>=40.2 + tomli>=1.0.0 [options.package_data] extension_helpers = src/compiler.c
astropy/extension-helpers
5bb189521db47b216a368e7161d086addd80f005
diff --git a/extension_helpers/tests/test_setup_helpers.py b/extension_helpers/tests/test_setup_helpers.py index 05fc7ab..8eeea0c 100644 --- a/extension_helpers/tests/test_setup_helpers.py +++ b/extension_helpers/tests/test_setup_helpers.py @@ -184,7 +184,8 @@ def test_compiler_module(capsys, c_extension_test_package): @pytest.mark.parametrize('use_extension_helpers', [None, False, True]) -def test_no_setup_py(tmpdir, use_extension_helpers): [email protected]('pyproject_use_helpers', [None, False, True]) +def test_no_setup_py(tmpdir, use_extension_helpers, pyproject_use_helpers): """ Test that makes sure that extension-helpers can be enabled without a setup.py file. @@ -242,12 +243,23 @@ def test_no_setup_py(tmpdir, use_extension_helpers): use_extension_helpers = {str(use_extension_helpers).lower()} """)) - test_pkg.join('pyproject.toml').write(dedent("""\ - [build-system] - requires = ["setuptools>=43.0.0", - "wheel"] - build-backend = 'setuptools.build_meta' - """)) + if pyproject_use_helpers is None: + test_pkg.join('pyproject.toml').write(dedent("""\ + [build-system] + requires = ["setuptools>=43.0.0", + "wheel"] + build-backend = 'setuptools.build_meta' + """)) + else: + test_pkg.join('pyproject.toml').write(dedent(f"""\ + [build-system] + requires = ["setuptools>=43.0.0", + "wheel"] + build-backend = 'setuptools.build_meta' + + [tool.extension-helpers] + use_extension_helpers = {str(pyproject_use_helpers).lower()} + """)) install_temp = test_pkg.mkdir('install_temp') @@ -267,7 +279,7 @@ def test_no_setup_py(tmpdir, use_extension_helpers): importlib.import_module(package_name) - if use_extension_helpers: + if use_extension_helpers or (use_extension_helpers is None and pyproject_use_helpers): compiler_version_mod = importlib.import_module(package_name + '.compiler_version') assert compiler_version_mod.compiler != 'unknown' else:
Support for `pyproject.toml` configuration In this comment, https://github.com/astropy/astropy/pull/14361#issuecomment-1419210239, it was requested that `extension-helpers` support a configuration like: ```ini [extension-helpers] use_extension_helpers = true ``` but in `pyproject.toml` instead of `setup.cfg`. This is so that projects like `astropy` can move towards adopting [PEP 621](https://peps.python.org/pep-0621/) (storing all project metadata in `pyproject.toml` instead of across the `setup.cfg` and `setup.py` files).
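The heart of the requested mechanism, reduced to a standalone sketch; the file name and key layout follow the patch above, and `tomllib` is the stdlib equivalent of `tomli` on Python 3.11+:

```python
import tomli  # tomllib on Python >= 3.11

with open("pyproject.toml", "rb") as f:
    cfg = tomli.load(f)

# Enable extension-helpers only when [tool.extension-helpers] opts in
use_helpers = (
    cfg.get("tool", {})
       .get("extension-helpers", {})
       .get("use_extension_helpers", False)
)
if use_helpers:
    print("extension-helpers would collect the extension modules here")
```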
0.0
5bb189521db47b216a368e7161d086addd80f005
[ "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[True-None]" ]
[ "extension_helpers/tests/test_setup_helpers.py::test_get_compiler", "extension_helpers/tests/test_setup_helpers.py::test_cython_autoextensions", "extension_helpers/tests/test_setup_helpers.py::test_compiler_module", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[None-None]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[None-False]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[None-True]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[False-None]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[False-False]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[False-True]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[True-False]", "extension_helpers/tests/test_setup_helpers.py::test_no_setup_py[True-True]" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-02-08 17:42:38+00:00
bsd-3-clause
1,241
astropy__pyvo-357
diff --git a/CHANGES.rst b/CHANGES.rst index 5b9953d..c6ada61 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -19,6 +19,9 @@ - Allow session to be passed through in SSA and DataLink. [#327] +- pyvo.dal.tap.AsyncTAPJob treats parameter names as case-insensitive when + retrieving the query from the job record. [#356] + 1.3 (2022-02-19) ================== diff --git a/pyvo/dal/tap.py b/pyvo/dal/tap.py index c414872..4a7043e 100644 --- a/pyvo/dal/tap.py +++ b/pyvo/dal/tap.py @@ -646,7 +646,7 @@ class AsyncTAPJob: """ self._update() for parameter in self._job.parameters: - if parameter.id_ == 'query': + if parameter.id_.lower() == 'query': return parameter.content return ''
astropy/pyvo
861298fbff5395d61af8192b9b739cff911cb025
diff --git a/pyvo/dal/tests/test_tap.py b/pyvo/dal/tests/test_tap.py index e06ec6f..d419638 100644 --- a/pyvo/dal/tests/test_tap.py +++ b/pyvo/dal/tests/test_tap.py @@ -12,7 +12,7 @@ from urllib.parse import parse_qsl import pytest import requests_mock -from pyvo.dal.tap import escape, search, TAPService +from pyvo.dal.tap import escape, search, AsyncTAPJob, TAPService from pyvo.io.uws import JobFile from pyvo.io.uws.tree import Parameter, Result @@ -105,7 +105,7 @@ class MockAsyncTAPServer: job.destruction = Time.now() + TimeDelta(3600, format='sec') for key, value in data.items(): - param = Parameter(id=key.lower()) + param = Parameter(id=key) param.content = value job.parameters.append(param) @@ -175,11 +175,11 @@ class MockAsyncTAPServer: if 'QUERY' in data: assert data['QUERY'] == 'SELECT TOP 42 * FROM ivoa.obsCore' for param in job.parameters: - if param.id_ == 'query': + if param.id_.lower() == 'query': param.content = data['QUERY'] if 'UPLOAD' in data: for param in job.parameters: - if param.id_ == 'upload': + if param.id_.lower() == 'upload': uploads1 = {data[0]: data[1] for data in [ data.split(',') for data in data['UPLOAD'].split(';') @@ -426,19 +426,46 @@ class TestTAPService: @pytest.mark.usefixtures('async_fixture') def test_submit_job(self): service = TAPService('http://example.com/tap') - job = service.submit_job( - 'http://example.com/tap', "SELECT * FROM ivoa.obscore") + job = service.submit_job("SELECT * FROM ivoa.obscore") assert job.url == 'http://example.com/tap/async/' + job.job_id assert job.phase == 'PENDING' assert job.execution_duration == TimeDelta(3600, format='sec') assert isinstance(job.destruction, Time) assert isinstance(job.quote, Time) + assert job.query == "SELECT * FROM ivoa.obscore" job.run() job.wait() job.delete() + @pytest.mark.usefixtures('async_fixture') + def test_submit_job_case(self): + """Test using mixed case in the QUERY parameter to a job. + + DALI requires that query parameter names be case-insensitive, and + some TAP servers reflect the input case into the job record, so the + TAP client has to be prepared for any case for the QUERY parameter + name. + """ + service = TAPService('http://example.com/tap') + + # This has to be tested manually, bypassing the normal client layer, + # in order to force a mixed-case parameter name. + response = service._session.post( + "http://example.com/tap/async", + data={ + "REQUEST": "doQuery", + "LANG": "ADQL", + "quERy": "SELECT * FROM ivoa.obscore", + } + ) + response.raw.read = partial(response.raw.read, decode_content=True) + job = AsyncTAPJob(response.url, session=service._session) + + assert job.url == 'http://example.com/tap/async/' + job.job_id + assert job.query == "SELECT * FROM ivoa.obscore" + @pytest.mark.usefixtures('async_fixture') def test_modify_job(self): service = TAPService('http://example.com/tap')
AsyncTAPJob's "query" attribute is not robust to case variation among community TAP servers We have recently noticed that the "query" attribute of AsyncTAPJob (https://pyvo.readthedocs.io/en/latest/api/pyvo.dal.AsyncTAPJob.html#pyvo.dal.AsyncTAPJob.query) does not work with either the Rubin Observatory TAP service or the CADC TAP service under some conditions. "query" works by interrogating the `<uws:parameters>` element of the UWS job's XML for a `<uws:parameter>` whose "id" attribute is "query" - that is, it's trying to find the value of the URL parameter "QUERY" from the original TAP invocation, which UWS says should be reflected back in this XML. The problem here stems from the fact that, while the IVOA standards, including TAP, generally explicitly document the names of the services' URL query parameters in UPPERCASE, the DALI standard makes clear that _in the URL interface, a conformant service must accept any case rendering of the parameter name_. So "query", "QUERY", and "Query", for instance, are equally valid. The UWS standard does not appear to say anything about how query parameters should be reflected back - in their original case, or in a standardized case. It appears that some of the well-known community TAP services return them in their original case, while others force them to lowercase. Notably, the OpenCADC TAP service seems to return parameters in their original case, which seems like a reasonable thing to do in the absence of words to the contrary in UWS. The Rubin Observatory also uses this code base, so our TAP service is affected by the same issue. There is existing community code that generates queries with the parameter names in uppercase, following the style of all the IVOA documentation (though, of course, this is not required) and we are currently unable to work successfully with the resulting UWS jobs. It seems as if PyVO should apply the same case-blindness here that DALI requires to be used on input, and recognize any case form of "QUERY" in its implementation. The Rubin team is willing to make this change but wanted to trigger a discussion first to see if there will be resistance to this. Potentially interested parties: @rra @athornton @pdowler @cbanek
0.0
861298fbff5395d61af8192b9b739cff911cb025
[ "pyvo/dal/tests/test_tap.py::TestTAPService::test_submit_job", "pyvo/dal/tests/test_tap.py::TestTAPService::test_submit_job_case" ]
[ "pyvo/dal/tests/test_tap.py::test_escape", "pyvo/dal/tests/test_tap.py::test_search", "pyvo/dal/tests/test_tap.py::TestTAPService::test_init", "pyvo/dal/tests/test_tap.py::TestTAPService::test_tables", "pyvo/dal/tests/test_tap.py::TestTAPService::test_examples", "pyvo/dal/tests/test_tap.py::TestTAPService::test_maxrec", "pyvo/dal/tests/test_tap.py::TestTAPService::test_hardlimit", "pyvo/dal/tests/test_tap.py::TestTAPService::test_upload_methods", "pyvo/dal/tests/test_tap.py::TestTAPService::test_run_sync", "pyvo/dal/tests/test_tap.py::TestTAPService::test_search", "pyvo/dal/tests/test_tap.py::TestTAPService::test_run_async", "pyvo/dal/tests/test_tap.py::TestTAPService::test_modify_job", "pyvo/dal/tests/test_tap.py::TestTAPService::test_get_job" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-09-16 14:39:46+00:00
bsd-3-clause
1,242
astropy__pyvo-427
diff --git a/pyvo/registry/regtap.py b/pyvo/registry/regtap.py index 4fb5f2c..89f1355 100644 --- a/pyvo/registry/regtap.py +++ b/pyvo/registry/regtap.py @@ -25,6 +25,8 @@ from astropy import table from astropy.utils.decorators import deprecated from astropy.utils.exceptions import AstropyDeprecationWarning +import numpy + from . import rtcons from ..dal import scs, sia, sia2, ssa, sla, tap, query as dalq from ..io.vosi import vodataservice @@ -539,7 +541,11 @@ class RegistryResource(dalq.Record): by which a positional query against this resource should be "blurred" in order to get an appropriate match. """ - return float(self.get("region_of_regard", 0)) + # we get NULLs as NaNs here + val = self["region_of_regard"] + if numpy.isnan(val): + return None + return val @property def waveband(self): @@ -734,15 +740,16 @@ class RegistryResource(dalq.Record): Raises ------ - RuntimeError + DALServiceError if the resource does not describe a searchable service. """ - if not self.service: + try: + return self.service.search(*args, **keys) + except ValueError: + # I blindly assume the ValueError comes out of get_interface. + # But then that's likely enough. raise dalq.DALServiceError( - "resource, {}, is not a searchable service".format( - self.short_name)) - - return self.service.search(*args, **keys) + f"Resource {self.ivoid} is not a searchable service") def describe(self, verbose=False, width=78, file=None): """
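The NULL-handling half of the patch above, isolated as a runnable sketch (the dict stands in for a registry record):

```python
import numpy as np

def region_of_regard(record):
    # NULL values arrive from the registry as NaN; report them as None
    # instead of a misleading 0.0 default
    val = record["region_of_regard"]
    return None if np.isnan(val) else val

assert region_of_regard({"region_of_regard": float("nan")}) is None
assert region_of_regard({"region_of_regard": 0.5}) == 0.5
```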
astropy/pyvo
a89748389b66f5f69afc14cb76652060538e42d6
diff --git a/pyvo/registry/tests/test_regtap.py b/pyvo/registry/tests/test_regtap.py index a534c70..622604c 100644 --- a/pyvo/registry/tests/test_regtap.py +++ b/pyvo/registry/tests/test_regtap.py @@ -383,6 +383,7 @@ def test_record_fields(rt_pulsar_distance): assert rec.content_types == ['catalog'] assert rec.source_format == "bibcode" assert rec.source_value == "1993ApJS...88..529T" + assert rec.region_of_regard is None assert rec.waveband == ['radio'] # access URL, standard_id and friends exercised in TestInterfaceSelection @@ -453,6 +454,17 @@ class TestInterfaceSelection: assert (svc.access_url == "http://dc.zah.uni-heidelberg.de/flashheros/q/web/form") + import webbrowser + orig_open = webbrowser.open + try: + open_args = [] + webbrowser.open = lambda *args: open_args.append(args) + svc.search() + assert open_args == [ + ("http://dc.zah.uni-heidelberg.de/flashheros/q/web/form", 2)] + finally: + webbrowser.open = orig_open + def test_get_aux_interface(self, flash_service): svc = flash_service.get_service("tap#aux") assert (svc._baseurl @@ -540,6 +552,22 @@ class TestInterfaceSelection: assert rec.get_interface("sia2").access_url == 'http://sia2.example.com' assert rec.get_interface("sia").access_url == 'http://sia.example.com' + def test_non_standard_interface(self): + intf = regtap.Interface("http://url", "", "", "") + assert intf.supports("ivo://ivoa.net/std/sia") is False + + def test_supports_none(self): + intf = regtap.Interface("http://url", "", "", "") + assert intf.supports(None) is False + + def test_non_searchable_service(self): + rec = _makeRegistryRecord() + with pytest.raises(dalq.DALServiceError) as excinfo: + rec.search() + + assert str(excinfo.value) == ( + "Resource ivo://pyvo/test_regtap.py is not a searchable service") + class _FakeResult: """A fake class just sufficient for giving dal.query.Record enough @@ -623,6 +651,9 @@ class TestInterfaceRejection: intf_roles=["", "std"]) assert (rsc.service._baseurl == "http://b") + # this makes sure caching the service obtained doesn't break + # things + assert (rsc.service._baseurl == "http://b") def test_capless(self): rsc = _makeRegistryRecord()
MAINT: improve test coverage of regtap Currently, coverage is only 55%, with numerous methods untested; this leads one to wonder whether the hash-based splitting of the standard ID works as intended for services with IDs including a hash, e.g. the distinction between SIA and SIA2. https://github.com/astropy/pyvo/blob/main/pyvo/registry/regtap.py#L311 https://github.com/astropy/pyvo/blob/main/pyvo/registry/regtap.py#L338
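For orientation, a sketch of the hash-based splitting the issue asks about; the exact rendering of the SIA2 standard ID is assumed here, not taken from the issue:

```python
# SIA2 is distinguished from plain SIA only by the fragment after '#'
standard_id = "ivo://ivoa.net/std/sia#query-2.0"  # assumed SIA2 identifier
base, _, fragment = standard_id.partition("#")
assert base == "ivo://ivoa.net/std/sia"
assert fragment == "query-2.0"
```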
0.0
a89748389b66f5f69afc14cb76652060538e42d6
[ "pyvo/registry/tests/test_regtap.py::test_record_fields", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_non_searchable_service" ]
[ "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_basic", "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_repr", "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_unknown_standard", "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_known_standard", "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_secondary_interface", "pyvo/registry/tests/test_regtap.py::TestInterfaceClass::test_VOSI", "pyvo/registry/tests/test_regtap.py::test_keywords", "pyvo/registry/tests/test_regtap.py::test_single_keyword", "pyvo/registry/tests/test_regtap.py::test_servicetype", "pyvo/registry/tests/test_regtap.py::test_waveband", "pyvo/registry/tests/test_regtap.py::test_datamodel", "pyvo/registry/tests/test_regtap.py::test_servicetype_aux", "pyvo/registry/tests/test_regtap.py::test_bad_servicetype_aux", "pyvo/registry/tests/test_regtap.py::test_spatial", "pyvo/registry/tests/test_regtap.py::test_spectral", "pyvo/registry/tests/test_regtap.py::test_to_table", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_get_with_index", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_get_with_short_name", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_get_with_ivoid", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_out_of_range", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_bad_key", "pyvo/registry/tests/test_regtap.py::TestResultIndexing::test_not_indexable", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_exactly_one_result", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_access_modes", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_standard_id_multi", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_web_interface", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_aux_interface", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_aux_as_main", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get__main_from_aux", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_by_alias", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_unsupported_standard", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_get_nonexisting_standard", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_unconstrained", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_interface_without_role", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_sia2_query", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_sia2_aux", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_non_standard_interface", "pyvo/registry/tests/test_regtap.py::TestInterfaceSelection::test_supports_none", "pyvo/registry/tests/test_regtap.py::TestInterfaceRejection::test_nonunique", "pyvo/registry/tests/test_regtap.py::TestInterfaceRejection::test_nonunique_lax", "pyvo/registry/tests/test_regtap.py::TestInterfaceRejection::test_nonstd_ignored", "pyvo/registry/tests/test_regtap.py::TestInterfaceRejection::test_select_single_matching_service", "pyvo/registry/tests/test_regtap.py::TestInterfaceRejection::test_capless", "pyvo/registry/tests/test_regtap.py::TestExtraResourceMethods::test_unique_standard_id", "pyvo/registry/tests/test_regtap.py::TestExtraResourceMethods::test_describe_multi", "pyvo/registry/tests/test_regtap.py::TestExtraResourceMethods::test_no_access_url", 
"pyvo/registry/tests/test_regtap.py::TestExtraResourceMethods::test_unique_access_url", "pyvo/registry/tests/test_regtap.py::TestExtraResourceMethods::test_ambiguous_access_url_warns" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2023-02-22 08:07:26+00:00
bsd-3-clause
1,243
astropy__pyvo-459
diff --git a/CHANGES.rst b/CHANGES.rst index ca48efe..63c634e 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -30,6 +30,8 @@ - Fix handling of nan values for Time properties in SIA2 records. [#463] +- Fix SIA2 search to accept SkyCoord position inputs. [#459] + 1.4.1 (2023-03-07) ================== diff --git a/pyvo/dal/params.py b/pyvo/dal/params.py index 124af28..89c1b1e 100644 --- a/pyvo/dal/params.py +++ b/pyvo/dal/params.py @@ -7,6 +7,7 @@ from collections.abc import MutableSet import abc from astropy import units as u +from astropy.coordinates import SkyCoord from astropy.units import Quantity, Unit from astropy.time import Time from astropy.io.votable.converters import ( @@ -301,7 +302,7 @@ class PosQueryParam(AbstractDalQueryParam): entries in values are either quantities or assumed to be degrees """ self._validate_pos(val) - if len(val) == 3: + if len(val) == 2 or len(val) == 3: shape = 'CIRCLE' elif len(val) == 4: shape = 'RANGE' @@ -313,6 +314,7 @@ class PosQueryParam(AbstractDalQueryParam): 'even 6 and above (POLYGON) accepted.'.format(val)) return '{} {}'.format(shape, ' '.join( [str(val.to(u.deg).value) if isinstance(val, Quantity) else + val.transform_to('icrs').to_string() if isinstance(val, SkyCoord) else str((val * u.deg).value) for val in val])) def _validate_pos(self, pos): @@ -321,7 +323,17 @@ class PosQueryParam(AbstractDalQueryParam): This has probably done already somewhere else """ - if len(pos) == 3: + + if len(pos) == 2: + if not isinstance(pos[0], SkyCoord): + raise ValueError + if not isinstance(pos[1], Quantity): + radius = pos[1] * u.deg + else: + radius = pos[1] + if radius <= 0 * u.deg or radius.to(u.deg) > 90 * u.deg: + raise ValueError('Invalid circle radius: {}'.format(radius)) + elif len(pos) == 3: self._validate_ra(pos[0]) self._validate_dec(pos[1]) if not isinstance(pos[2], Quantity):
astropy/pyvo
b96a95056d378731aa0c2676dcc325264dab7e78
diff --git a/pyvo/dal/tests/test_sia.py b/pyvo/dal/tests/test_sia.py index 87ccda2..7356f0d 100644 --- a/pyvo/dal/tests/test_sia.py +++ b/pyvo/dal/tests/test_sia.py @@ -11,6 +11,7 @@ import pytest from pyvo.dal.sia import search, SIAService from astropy.io.fits import HDUList +from astropy.coordinates import SkyCoord from astropy.utils.data import get_pkg_data_contents get_pkg_data_contents = partial( @@ -45,8 +46,9 @@ def _test_result(result): @pytest.mark.usefixtures('sia') @pytest.mark.usefixtures('register_mocks') @pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.W06") -def test_search(): - results = search('http://example.com/sia', pos=(288, 15)) [email protected]("position", ((288, 15), SkyCoord(288, 15, unit="deg"))) +def test_search(position): + results = search('http://example.com/sia', pos=position) result = results[0] _test_result(result) diff --git a/pyvo/dal/tests/test_sia2.py b/pyvo/dal/tests/test_sia2.py index 95df62f..de20687 100644 --- a/pyvo/dal/tests/test_sia2.py +++ b/pyvo/dal/tests/test_sia2.py @@ -12,6 +12,7 @@ import pytest from pyvo.dal.sia2 import search, SIA2Service, SIA2Query, SIAService, SIAQuery import astropy.units as u +from astropy.coordinates import SkyCoord from astropy.utils.data import get_pkg_data_contents from astropy.utils.exceptions import AstropyDeprecationWarning @@ -101,7 +102,8 @@ class TestSIA2Service: (12.0 * u.deg, 34.0 * u.deg, 14.0 * u.deg, 35.0 * u.deg, 14.0 * u.deg, 36.0 * u.deg, - 12.0 * u.deg, 35.0 * u.deg)] + 12.0 * u.deg, 35.0 * u.deg), + (SkyCoord(2, 4, unit='deg'), 0.166 * u.deg)] @pytest.mark.usefixtures('sia') @pytest.mark.usefixtures('capabilities')
sia query doesn't accept scalar coordinate

@andamian this looks like a bug in how `pyvo` is handling the `pos` keyword (or I don't understand what's expected, but in that case it's clearly a documentation issue):

```python
>>> from astroquery.alma import Alma
>>> from astropy import coordinates
>>> from astropy import units as u
>>> galactic_center = coordinates.SkyCoord(0*u.deg, 0*u.deg, frame='galactic')
>>> Alma.query_sia(pos=galactic_center, pol='XX')
Traceback (most recent call last):
  File "<ipython-input-3-cfb9f36d29ef>", line 1, in <module>
    Alma.query_sia(pos=galactic_center, pol='XX')
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/astroquery/alma/core.py", line 406, in query_sia
    return self.sia.search(
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/sia2.py", line 192, in search
    return SIAQuery(self.query_ep, pos=pos, band=band,
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/sia2.py", line 265, in __init__
    self.pos.add(pp)
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/params.py", line 255, in add
    if item in self:
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/params.py", line 276, in __contains__
    return self.get_dal_format(item) in self.dal
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/params.py", line 298, in get_dal_format
    self._validate_pos(val)
  File "/Users/adam/miniconda3/envs/python3.9/lib/python3.9/site-packages/pyvo/dal/params.py", line 319, in _validate_pos
    if len(pos) == 3:
  File "/Users/adam/repos/astropy/astropy/utils/shapes.py", line 209, in __len__
    raise TypeError("Scalar {!r} object has no len()"
TypeError: Scalar 'SkyCoord' object has no len()
```
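With the fix applied, the failing pattern from the issue can be expressed as a scalar `SkyCoord` plus a radius. A sketch only: the service URL is a placeholder, and the two-element tuple form follows the new test case.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord
from pyvo.dal.sia2 import search

galactic_center = SkyCoord(0 * u.deg, 0 * u.deg, frame="galactic")
# A (SkyCoord, radius) pair is now treated as a CIRCLE; the coordinate is
# transformed to ICRS internally before being sent to the service.
results = search("https://example.com/sia2", pos=(galactic_center, 0.166 * u.deg))
```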
0.0
b96a95056d378731aa0c2676dcc325264dab7e78
[ "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_scalar[position3]", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_vector", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_deprecation" ]
[ "pyvo/dal/tests/test_sia.py::test_search[position0]", "pyvo/dal/tests/test_sia.py::test_search[position1]", "pyvo/dal/tests/test_sia.py::TestSIAService::test_search", "pyvo/dal/tests/test_sia2.py::test_search", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_capabilities", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_scalar[position0]", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_scalar[position1]", "pyvo/dal/tests/test_sia2.py::TestSIA2Service::test_search_scalar[position2]", "pyvo/dal/tests/test_sia2.py::TestSIA2Query::test_query", "pyvo/dal/tests/test_sia2.py::test_variable_deprecation" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-07-25 18:27:26+00:00
bsd-3-clause
1,244
astropy__reproject-349
diff --git a/reproject/utils.py b/reproject/utils.py
index d79b13b1..a098d095 100644
--- a/reproject/utils.py
+++ b/reproject/utils.py
@@ -277,7 +277,12 @@ def _reproject_blocked(
         if a.ndim == 0 or block_info is None or block_info == []:
             return np.array([a, a])
         slices = [slice(*x) for x in block_info[None]["array-location"][-wcs_out.pixel_n_dim :]]
-        wcs_out_sub = HighLevelWCSWrapper(SlicedLowLevelWCS(wcs_out, slices=slices))
+
+        if isinstance(wcs_out, BaseHighLevelWCS):
+            low_level_wcs = SlicedLowLevelWCS(wcs_out.low_level_wcs, slices=slices)
+        else:
+            low_level_wcs = SlicedLowLevelWCS(wcs_out, slices=slices)
+        wcs_out_sub = HighLevelWCSWrapper(low_level_wcs)
         if isinstance(array_in_or_path, str):
             array_in = np.memmap(array_in_or_path, dtype=float, shape=shape_in)
         else:
diff --git a/reproject/wcs_utils.py b/reproject/wcs_utils.py
index 1a6baf76..9fe71862 100644
--- a/reproject/wcs_utils.py
+++ b/reproject/wcs_utils.py
@@ -8,6 +8,7 @@ import numpy as np
 from astropy.coordinates import SkyCoord
 from astropy.wcs import WCS
 from astropy.wcs.utils import pixel_to_pixel
+from astropy.wcs.wcsapi.high_level_api import BaseHighLevelWCS
 
 __all__ = ["has_celestial", "pixel_to_pixel_with_roundtrip"]
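The core of the fix, as a standalone sketch: `SlicedLowLevelWCS` requires a low-level WCS, so an APE-14 high-level wrapper has to be unwrapped via `.low_level_wcs` before slicing. The function name is illustrative; the branch logic mirrors the patch.

```python
from astropy.wcs.wcsapi import HighLevelWCSWrapper, SlicedLowLevelWCS
from astropy.wcs.wcsapi.high_level_api import BaseHighLevelWCS

def slice_output_wcs(wcs_out, slices):
    # Unwrap high-level objects first; slicing the wrapper itself is what
    # produced the AttributeError reported in the issue below.
    if isinstance(wcs_out, BaseHighLevelWCS):
        low_level = SlicedLowLevelWCS(wcs_out.low_level_wcs, slices=slices)
    else:
        low_level = SlicedLowLevelWCS(wcs_out, slices=slices)
    return HighLevelWCSWrapper(low_level)
```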
astropy/reproject
214544874e838d03ce9b44dacc13621a375df15f
diff --git a/reproject/interpolation/tests/test_core.py b/reproject/interpolation/tests/test_core.py
index 2dac25c5..1941eefb 100644
--- a/reproject/interpolation/tests/test_core.py
+++ b/reproject/interpolation/tests/test_core.py
@@ -672,8 +672,9 @@ def test_broadcast_reprojection(input_extra_dims, output_shape, input_as_wcs, ou
 @pytest.mark.parametrize("input_extra_dims", (1, 2))
 @pytest.mark.parametrize("output_shape", (None, "single", "full"))
 @pytest.mark.parametrize("parallel", [True, False])
+@pytest.mark.parametrize("header_or_wcs", (lambda x: x, WCS))
 @pytest.mark.filterwarnings("ignore::astropy.wcs.wcs.FITSFixedWarning")
-def test_blocked_broadcast_reprojection(input_extra_dims, output_shape, parallel):
+def test_blocked_broadcast_reprojection(input_extra_dims, output_shape, parallel, header_or_wcs):
     image_stack, array_ref, footprint_ref, header_in, header_out = _setup_for_broadcast_test()
     # Test both single and multiple dimensions being broadcast
     if input_extra_dims == 2:
@@ -689,6 +690,9 @@ def test_blocked_broadcast_reprojection(input_extra_dims, output_shape, parallel
     # Provide the broadcast dimensions as part of the output shape
     output_shape = image_stack.shape
 
+    # test different behavior when the output projection is a WCS
+    header_out = header_or_wcs(header_out)
+
     array_broadcast, footprint_broadcast = reproject_interp(
         (image_stack, header_in), header_out, output_shape, parallel=parallel, block_size=[5, 5]
     )
@@ -701,9 +705,12 @@
 @pytest.mark.parametrize("block_size", [[500, 500], [500, 100], None])
 @pytest.mark.parametrize("return_footprint", [False, True])
 @pytest.mark.parametrize("existing_outputs", [False, True])
+@pytest.mark.parametrize("header_or_wcs", (lambda x: x, WCS))
 @pytest.mark.remote_data
 @pytest.mark.filterwarnings("ignore::astropy.wcs.wcs.FITSFixedWarning")
-def test_blocked_against_single(parallel, block_size, return_footprint, existing_outputs):
+def test_blocked_against_single(
+    parallel, block_size, return_footprint, existing_outputs, header_or_wcs
+):
 
     # Ensure when we break a reprojection down into multiple discrete blocks
     # it has the same result as if all pixels where reprejcted at once
 
@@ -727,7 +734,7 @@
     result_test = reproject_interp(
         hdu2,
-        hdu1.header,
+        header_or_wcs(hdu1.header),
         parallel=parallel,
         block_size=block_size,
         return_footprint=return_footprint,
@@ -737,7 +744,7 @@
     result_reference = reproject_interp(
         hdu2,
-        hdu1.header,
+        header_or_wcs(hdu1.header),
         parallel=False,
         block_size=None,
         return_footprint=return_footprint,
diff --git a/reproject/tests/test_utils.py b/reproject/tests/test_utils.py
index 88c2510d..d842cb94 100644
--- a/reproject/tests/test_utils.py
+++ b/reproject/tests/test_utils.py
@@ -7,6 +7,7 @@
 from astropy.wcs import WCS
 
 from reproject.tests.helpers import assert_wcs_allclose
 from reproject.utils import parse_input_data, parse_input_shape, parse_output_projection
+from reproject.wcs_utils import has_celestial
 
 @pytest.mark.filterwarnings("ignore:unclosed file:ResourceWarning")
@@ -89,3 +90,22 @@
 def test_parse_output_projection_invalid_wcs(simple_celestial_fits_wcs):
     with pytest.raises(ValueError, match="Need to specify shape"):
         parse_output_projection(simple_celestial_fits_wcs)
+
+
+@pytest.mark.filterwarnings("ignore::astropy.utils.exceptions.AstropyUserWarning")
+@pytest.mark.filterwarnings("ignore::astropy.wcs.wcs.FITSFixedWarning")
+def test_has_celestial():
+    from .test_high_level import INPUT_HDR
+
+    hdr = fits.Header.fromstring(INPUT_HDR)
+    ww = WCS(hdr)
+    assert ww.has_celestial
+    assert has_celestial(ww)
+
+    from astropy.wcs.wcsapi import HighLevelWCSWrapper, SlicedLowLevelWCS
+
+    wwh = HighLevelWCSWrapper(SlicedLowLevelWCS(ww, Ellipsis))
+    assert has_celestial(wwh)
+
+    wwh2 = HighLevelWCSWrapper(SlicedLowLevelWCS(ww, [slice(0, 1), slice(0, 1)]))
+    assert has_celestial(wwh2)
Failure w/astropy-dev from spectral-cube: APE14 issue?

I haven't tracked this all the way down yet, but this looks to me like a problem b/w reproject and astropy-dev?

```
    def test_mosaic_cubes(use_memmap, data_adv, use_dask, spectral_block_size):
        pytest.importorskip('reproject')
        # Read in data to use
        cube, data = cube_and_raw(data_adv, use_dask=use_dask)
        # cube is doppler-optical by default, which uses the rest wavelength,
        # which isn't auto-computed, resulting in nan pixels in the WCS transform
        cube._wcs.wcs.restwav = constants.c.to(u.m/u.s).value / cube.wcs.wcs.restfrq
        expected_wcs = WCS(combine_headers(cube.header, cube.header)).celestial

        # Make two overlapping cubes of the data
        part1 = cube[:, :round(cube.shape[1]*2./3.), :]
        part2 = cube[:, round(cube.shape[1]/3.):, :]

        assert part1.wcs.wcs.restwav != 0
        assert part2.wcs.wcs.restwav != 0

>       result = mosaic_cubes([part1, part2], order='nearest-neighbor',
                              roundtrip_coords=False,
                              spectral_block_size=spectral_block_size)

spectral_cube/tests/test_regrid.py:623:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
spectral_cube/cube_utils.py:832: in mosaic_cubes
    cube_repr = cube.reproject(header,
spectral_cube/utils.py:49: in wrapper
    return function(self, *args, **kwargs)
spectral_cube/spectral_cube.py:2697: in reproject
    newcube, newcube_valid = reproject_interp((data,
../astropy/astropy/utils/decorators.py:604: in wrapper
    return function(*args, **kwargs)
../python-reprojection/reproject/interpolation/high_level.py:121: in reproject_interp
    return _reproject_blocked(
../python-reprojection/reproject/utils.py:323: in _reproject_blocked
    da.store(
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/array/core.py:1236: in store
    compute_as_if_collection(Array, store_dsk, map_keys, **kwargs)
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/base.py:342: in compute_as_if_collection
    return schedule(dsk2, keys, **kwargs)
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:557: in get_sync
    return get_async(
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:500: in get_async
    for key, res_info, failed in queue_get(queue).result():
../../mambaforge/envs/py39forge/lib/python3.9/concurrent/futures/_base.py:439: in result
    return self.__get_result()
../../mambaforge/envs/py39forge/lib/python3.9/concurrent/futures/_base.py:391: in __get_result
    raise self._exception
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:542: in submit
    fut.set_result(fn(*args, **kwargs))
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:238: in batch_execute_tasks
    return [execute_task(*a) for a in it]
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:238: in <listcomp>
    return [execute_task(*a) for a in it]
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:229: in execute_task
    result = pack_exception(e, dumps)
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/local.py:224: in execute_task
    result = _execute_task(task, data)
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/optimization.py:990: in __call__
    return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/core.py:149: in get
    result = _execute_task(task, cache)
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
../../mambaforge/envs/py39forge/lib/python3.9/site-packages/dask/array/core.py:523: in _pass_extra_kwargs
    return func(*args[len(keys) :], **kwargs)
../python-reprojection/reproject/utils.py:285: in reproject_single_block
    array, footprint = reproject_func(
../python-reprojection/reproject/interpolation/core.py:98: in _reproject_full
    _validate_wcs(wcs_in, wcs_out, array.shape, shape_out)
../python-reprojection/reproject/interpolation/core.py:28: in _validate_wcs
    if has_celestial(wcs_in) and not has_celestial(wcs_out):
../python-reprojection/reproject/wcs_utils.py:22: in has_celestial
    for world_axis_class in wcs.low_level_wcs.world_axis_object_classes.values():
../astropy/astropy/wcs/wcsapi/wrappers/sliced_wcs.py:295: in world_axis_object_classes
    keys_keep = [item[0] for item in self.world_axis_object_components]
../astropy/astropy/wcs/wcsapi/wrappers/sliced_wcs.py:291: in world_axis_object_components
    return [self._wcs.world_axis_object_components[idx] for idx in self._world_keep]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

.0 = <iterator object at 0x17957bf40>

>   return [self._wcs.world_axis_object_components[idx] for idx in self._world_keep]
E   AttributeError: 'HighLevelWCSWrapper' object has no attribute 'world_axis_object_components'

../astropy/astropy/wcs/wcsapi/wrappers/sliced_wcs.py:291: AttributeError
```
0.0
214544874e838d03ce9b44dacc13621a375df15f
[ "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-full-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-single-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-single-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-None-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-full-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-single-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-full-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-None-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-None-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-None-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-True-single-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[WCS-False-full-1]" ]
[ "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-full-2]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-None-1]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-None-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs-ape14_high_level_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-full-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-single-1]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-single-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[comp_image_hdu-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[hdulist-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[path-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-None-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[path-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-None-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[image_hdu-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs_shape-ape14_low_level_wcs]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-full-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs-ape14_low_level_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[primary_hdu-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[primary_hdu-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[primary_hdu-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-full-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[filename-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-single-2]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-None-1]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-full-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-single-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs_shape-ape14_high_level_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-single-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-single-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[hdulist-header_shape-fits_wcs]", 
"reproject/interpolation/tests/test_core.py::test_interp_input_output_types[comp_image_hdu-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-None-1]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-single-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[path-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-single-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-None-2]", "reproject/interpolation/tests/test_core.py::test_naxis_mismatch[False]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-full-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs_shape-ape14_low_level_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[image_hdu-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs-ape14_high_level_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[data_wcs_tuple-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs_shape-ape14_high_level_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[hdulist-wcs-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-full-1]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-None-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[primary_hdu-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-None-2]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-None-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[path-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-full-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[filename-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[image_hdu-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs-ape14_low_level_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-True-single-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-full-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[comp_image_hdu-header_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[nddata-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-None-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-full-2]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-True-single-1]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[image_hdu-wcs-fits_wcs]", 
"reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-False-single-2]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-single-1]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[False-False-full-2]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[filename-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[comp_image_hdu-header-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[hdulist-wcs_shape-fits_wcs]", "reproject/interpolation/tests/test_core.py::test_broadcast_reprojection[True-False-full-2]", "reproject/interpolation/tests/test_core.py::test_blocked_broadcast_reprojection[<lambda>-True-None-2]", "reproject/interpolation/tests/test_core.py::test_naxis_mismatch[True]", "reproject/interpolation/tests/test_core.py::test_interp_input_output_types[filename-header-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[wcs-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[path-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[data_wcs_tuple-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs_shape-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[nddata-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[data_wcs_tuple-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[hdulist-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data_missing_hdu_in", "reproject/tests/test_utils.py::test_has_celestial", "reproject/tests/test_utils.py::test_parse_input_data[comp_image_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[nddata-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[image_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs_shape-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[nddata-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[nddata-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_data_invalid", "reproject/tests/test_utils.py::test_parse_input_data[primary_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[wcs-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[nddata-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape_invalid", "reproject/tests/test_utils.py::test_parse_input_shape[shape_wcs_tuple-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[path-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[header_shape-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[data_wcs_tuple-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs_shape-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[shape_wcs_tuple-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[filename-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection_invalid_header", "reproject/tests/test_utils.py::test_parse_input_shape[comp_image_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[data_wcs_tuple-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[hdulist-fits_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs-ape14_low_level_wcs]", 
"reproject/tests/test_utils.py::test_parse_input_data[image_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[filename-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_data[nddata-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[data_wcs_tuple-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape_missing_hdu_in", "reproject/tests/test_utils.py::test_parse_output_projection_invalid_wcs", "reproject/tests/test_utils.py::test_parse_input_data[data_wcs_tuple-ape14_low_level_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[wcs-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[primary_hdu-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[shape_wcs_tuple-fits_wcs]", "reproject/tests/test_utils.py::test_parse_input_shape[wcs-ape14_high_level_wcs]", "reproject/tests/test_utils.py::test_parse_output_projection[header-fits_wcs]" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2023-03-10 19:35:02+00:00
bsd-3-clause
1,245
astropy__specreduce-190
diff --git a/pyproject.toml b/pyproject.toml
index 98b0520..56d7d63 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -8,3 +8,7 @@
 requires = ["setuptools",
             "cython"]
 build-backend = 'setuptools.build_meta'
+
+[tool.pytest.ini_options]
+
+filterwarnings = ["ignore::DeprecationWarning:datetime",]
diff --git a/tox.ini b/tox.ini
index 98cfc17..29a45de 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,12 +1,10 @@
 [tox]
 envlist =
-    py{38,39,310,311}-test{,-devdeps}{,-cov}
-    py{38,39,310,311}-test-numpy{120,121,122,123}
-    py{38,39,310,311}-test-astropy{lts,rc}
+    py{310,311,312}-test{,-devdeps,-predeps}{,-cov}
     build_docs
     codestyle
 requires =
-    setuptools >= 30.3.0
+    setuptools
     pip >= 19.3.1
 isolated_build = true
 
@@ -16,7 +14,8 @@ isolated_build = true
 passenv = HOME,WINDIR,LC_ALL,LC_CTYPE,CC,CI
 
 setenv =
-    devdeps: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
+    devdeps: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/astropy/simple https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
+    py312: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/astropy/simple
 
 # Run the tests in a temporary directory to make sure that we don't import
 # this package from the source tree
@@ -35,40 +34,36 @@ description =
     devdeps: with the latest developer version of key dependencies
     oldestdeps: with the oldest supported version of key dependencies
     cov: enable remote data and measure test coverage
-    numpy120: with numpy 1.20.*
-    numpy121: with numpy 1.21.*
-    numpy122: with numpy 1.22.*
-    numpy123: with numpy 1.23.*
-    astropylts: with the latest astropy LTS
 
 # The following provides some specific pinnings for key packages
 deps =
-    numpy120: numpy==1.20.*
-    numpy121: numpy==1.21.*
-    numpy122: numpy==1.22.*
-    numpy123: numpy==1.23.*
-
-    astropy51: astropy==5.1.*
-    astropylts: astropy==5.1.*
-
     devdeps: numpy>=0.0.dev0
-    devdeps: git+https://github.com/astropy/astropy.git#egg=astropy
+    devdeps: scipy>=0.0.dev0
+    devdeps: astropy>=0.0.dev0
     devdeps: git+https://github.com/astropy/specutils.git#egg=specutils
+    devdeps: git+https://github.com/astropy/photutils.git#egg=photutils
 
-    oldestdeps: numpy==1.20
+    oldestdeps: numpy==1.22.4
     oldestdeps: astropy==5.1
-    oldestdeps: scipy==1.6.0
+    oldestdeps: scipy==1.8.0
     oldestdeps: matplotlib==3.5
     oldestdeps: photutils==1.0.0
     oldestdeps: specutils==1.9.1
 
+    # Currently need dev astropy with python 3.12 as well
+    py312: astropy>=0.0.dev0
+
 # The following indicates which extras_require from setup.cfg will be installed
 extras =
     test: test
     build_docs: docs
 
 commands =
+    # Force numpy-dev after matplotlib downgrades it (https://github.com/matplotlib/matplotlib/issues/26847)
+    devdeps: python -m pip install --pre --upgrade --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy
+    # Maybe we also have to do this for scipy?
+    devdeps: python -m pip install --pre --upgrade --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple scipy
     pip freeze
     !cov: pytest --pyargs specreduce {toxinidir}/docs {posargs}
     cov: pytest --pyargs specreduce {toxinidir}/docs --cov specreduce --cov-config={toxinidir}/setup.cfg --remote-data {posargs}
astropy/specreduce
0d800fd584ed484a35cb4d064894620cd71501fd
diff --git a/.github/workflows/cron-tests.yml b/.github/workflows/cron-tests.yml
index 6401c86..8453015 100644
--- a/.github/workflows/cron-tests.yml
+++ b/.github/workflows/cron-tests.yml
@@ -25,14 +25,14 @@ jobs:
       # For example -- os: [ubuntu-latest, macos-latest, windows-latest]
         include:
           - os: ubuntu-latest
-            python: '3.10'
+            python: '3.11'
             tox_env: 'linkcheck'
           - os: ubuntu-latest
-            python: '3.10'
-            tox_env: 'py310-test-datadeps-devdeps'
+            python: '3.12'
+            tox_env: 'py312-test-devdeps'
           - os: ubuntu-latest
-            python: '3.10'
-            tox_env: 'py310-test-datadeps-predeps'
+            python: '3.12'
+            tox_env: 'py312-test-predeps'
 
     steps:
     - name: Check out repository
@@ -47,12 +47,6 @@
       run: |
         python -m pip install --upgrade pip
         python -m pip install tox
-    - name: Print Python, pip, setuptools, and tox versions
-      run: |
-        python -c "import sys; print(f'Python {sys.version}')"
-        python -c "import pip; print(f'pip {pip.__version__}')"
-        python -c "import setuptools; print(f'setuptools {setuptools.__version__}')"
-        python -c "import tox; print(f'tox {tox.__version__}')"
     - name: Test with tox
       run: |
         tox -e ${{ matrix.tox_env }}
diff --git a/.github/workflows/tox-tests.yml b/.github/workflows/tox-tests.yml
index 9b9a708..3005622 100644
--- a/.github/workflows/tox-tests.yml
+++ b/.github/workflows/tox-tests.yml
@@ -31,27 +31,24 @@
       # Only run on ubuntu by default, but can add other os's to the test matrix here.
       # For example -- os: [ubuntu-latest, macos-latest, windows-latest]
         include:
-          - os: ubuntu-latest
-            python: '3.8'
-            tox_env: 'py38-test'
-          - os: ubuntu-latest
-            python: '3.9'
-            tox_env: 'py39-test'
           - os: ubuntu-latest
             python: '3.10'
             tox_env: 'py310-test-cov'
-          - os: macos-latest
-            python: '3.10'
-            tox_env: 'py310-test-devdeps'
           - os: ubuntu-latest
             python: '3.11'
             tox_env: 'py311-test'
           - os: ubuntu-latest
-            python: '3.10'
+            python: '3.12'
+            tox_env: 'py312-test'
+          - os: macos-latest
+            python: '3.12'
+            tox_env: 'py312-test-devdeps'
+          - os: ubuntu-latest
+            python: '3.12'
             tox_env: 'codestyle'
           - os: ubuntu-latest
-            python: '3.8'
-            tox_env: 'py38-test-oldestdeps'
+            python: '3.10'
+            tox_env: 'py310-test-oldestdeps'
 
     steps:
     - name: Check out repository
@@ -66,12 +63,6 @@
       run: |
        python -m pip install --upgrade pip
        python -m pip install tox
-    - name: Print Python, pip, setuptools, and tox versions
-      run: |
-        python -c "import sys; print(f'Python {sys.version}')"
-        python -c "import pip; print(f'pip {pip.__version__}')"
-        python -c "import setuptools; print(f'setuptools {setuptools.__version__}')"
-        python -c "import tox; print(f'tox {tox.__version__}')"
     - name: Test with tox
      run: |
        tox -e ${{ matrix.tox_env }}
TST: Do not build astropy dev from source

You can pull down the astropy dev wheel like you do over at `specutils`.

https://github.com/astropy/specreduce/blob/0d800fd584ed484a35cb4d064894620cd71501fd/tox.ini#L56
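A minimal tox.ini sketch of the suggested approach, mirroring what the patch does; the index URLs are the ones from the patch, the surrounding keys are illustrative.

```ini
[testenv]
setenv =
    devdeps: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/astropy/simple https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
deps =
    # Pre-built dev wheels instead of a source build from git
    devdeps: astropy>=0.0.dev0
    devdeps: numpy>=0.0.dev0
```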
0.0
0d800fd584ed484a35cb4d064894620cd71501fd
[ "specreduce/tests/test_extract.py::test_horne_variance_errors", "specreduce/tests/test_extract.py::test_boxcar_extraction", "specreduce/tests/test_extract.py::test_horne_interpolated_nbins_fails", "specreduce/tests/test_extract.py::test_horne_no_bkgrnd", "specreduce/tests/test_extract.py::test_horne_interpolated_profile", "specreduce/tests/test_extract.py::test_boxcar_array_trace", "specreduce/tests/test_extract.py::test_horne_image_validation", "specreduce/tests/test_extract.py::test_horne_interpolated_profile_norm", "specreduce/tests/test_extract.py::test_boxcar_outside_image_condition", "specreduce/tests/test_extract.py::test_horne_non_flat_trace", "specreduce/tests/test_tracing.py::test_basic_trace", "specreduce/tests/test_tracing.py::test_array_trace", "specreduce/tests/test_tracing.py::test_flat_trace", "specreduce/tests/test_tracing.py::test_fit_trace", "specreduce/tests/test_synth_data.py::test_make_2d_trace_image", "specreduce/tests/test_background.py::test_background", "specreduce/tests/test_background.py::test_warnings_errors", "specreduce/tests/test_image_parsing.py::test_parse_horne", "specreduce/tests/test_image_parsing.py::test_parse_general", "specreduce/tests/test_wavelength_calibration.py::test_linear_from_table", "specreduce/tests/test_wavelength_calibration.py::test_poly_from_table", "specreduce/tests/test_wavelength_calibration.py::test_linear_from_list", "specreduce/tests/test_wavelength_calibration.py::test_replace_spectrum", "specreduce/tests/test_wavelength_calibration.py::test_wavelength_from_table", "specreduce/tests/test_wavelength_calibration.py::test_unsorted_pixels_wavelengths", "specreduce/tests/test_wavelength_calibration.py::test_expected_errors", "specreduce/tests/test_wavelength_calibration.py::test_fit_residuals_access", "specreduce/tests/test_wavelength_calibration.py::test_fit_residuals" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-10-03 17:04:10+00:00
bsd-3-clause
1,246
astropy__specutils-973
diff --git a/CHANGES.rst b/CHANGES.rst
index 026afba5..6a81e107 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -10,6 +10,8 @@ Bug Fixes
 - Fixed a bug with moment map orders greater than 1 not being able to handle
   cubes with non-square spatial dimensions. [#970]
 
+- Added a workaround for reading JWST IFUs with incorrect GWCS. [#973]
+
 Other Changes and Additions
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/specutils/analysis/width.py b/specutils/analysis/width.py
index 5d6bf897..609081fa 100644
--- a/specutils/analysis/width.py
+++ b/specutils/analysis/width.py
@@ -4,9 +4,11 @@ spectral features.
 """
 
 import numpy as np
+from astropy.nddata import StdDevUncertainty
 from astropy.stats.funcs import gaussian_sigma_to_fwhm
 from ..manipulation import extract_region
 from . import centroid
+from .uncertainty import _convert_uncertainty
 from .utils import computation_wrapper
 
 from scipy.signal import find_peaks, peak_widths
@@ -186,8 +188,15 @@ def _compute_gaussian_fwhm(spectrum, regions=None):
     This is a helper function for the above `gaussian_fwhm()` method.
     """
 
-    fwhm = _compute_gaussian_sigma_width(spectrum, regions) * gaussian_sigma_to_fwhm
+    sigma = _compute_gaussian_sigma_width(spectrum, regions)
+    fwhm = sigma * gaussian_sigma_to_fwhm
 
+    if sigma.uncertainty is not None:
+        fwhm.uncertainty = sigma.uncertainty * gaussian_sigma_to_fwhm
+    else:
+        fwhm.uncertainty = None
+
+    fwhm.uncertainty_type = 'std'
     return fwhm
 
@@ -201,9 +210,16 @@ def _compute_gaussian_sigma_width(spectrum, regions=None):
     else:
         calc_spectrum = spectrum
 
+    if spectrum.uncertainty is not None:
+        flux_uncert = _convert_uncertainty(calc_spectrum.uncertainty, StdDevUncertainty)
+    else:
+        # dummy value for uncertainties to avoid extra if-statements when applying mask
+        flux_uncert = np.zeros_like(calc_spectrum.flux)
+
     if hasattr(spectrum, 'mask') and spectrum.mask is not None:
         flux = calc_spectrum.flux[~spectrum.mask]
         spectral_axis = calc_spectrum.spectral_axis[~spectrum.mask]
+        flux_uncert = flux_uncert[~calc_spectrum.mask]
     else:
         flux = calc_spectrum.flux
         spectral_axis = calc_spectrum.spectral_axis
@@ -212,11 +228,36 @@
     if flux.ndim > 1:
         spectral_axis = np.broadcast_to(spectral_axis, flux.shape, subok=True)
+        centroid_result_uncert = centroid_result.uncertainty
         centroid_result = centroid_result[:, np.newaxis]
+        centroid_result.uncertainty = centroid_result_uncert[:, np.newaxis] if centroid_result_uncert is not None else None  # noqa
 
     dx = (spectral_axis - centroid_result)
-    sigma = np.sqrt(np.sum((dx * dx) * flux, axis=-1) / np.sum(flux, axis=-1))
+    numerator = np.sum((dx * dx) * flux, axis=-1)
+    denom = np.sum(flux, axis=-1)
+    sigma2 = numerator / denom
+    sigma = np.sqrt(sigma2)
 
+    if centroid_result.uncertainty is not None:
+        # NOTE: until/unless disp_uncert is supported, dx_uncert == centroid_result.uncertainty
+        disp_uncert = 0.0 * spectral_axis.unit
+        dx_uncert = np.sqrt(disp_uncert**2 + centroid_result.uncertainty**2)
+        # dx_uncert = centroid_result.uncertainty
+
+        # uncertainty for each term in the numerator sum
+        num_term_uncerts = dx * dx * flux * np.sqrt(2*(dx_uncert/dx)**2 + (flux_uncert/flux)**2)
+        # uncertainty (squared) for the numerator, added in quadrature
+        num_uncertsq = np.sum(num_term_uncerts**2, axis=-1)
+        # uncertainty (squared) for the denomenator
+        denom_uncertsq = np.sum(flux_uncert**2)
+
+        sigma2_uncert = numerator/denom * np.sqrt(num_uncertsq * numerator**-2
+                                                  + denom_uncertsq * denom**-2)
+
+        sigma.uncertainty = 0.5 * sigma2_uncert / sigma2 * sigma.unit
+    else:
+        sigma.uncertainty = None
 
+    sigma.uncertainty_type = 'std'
     return sigma
diff --git a/specutils/io/default_loaders/jwst_reader.py b/specutils/io/default_loaders/jwst_reader.py
index 8f9ee5f5..9bb895f8 100644
--- a/specutils/io/default_loaders/jwst_reader.py
+++ b/specutils/io/default_loaders/jwst_reader.py
@@ -7,6 +7,8 @@ from astropy.units import Quantity
 from astropy.table import Table
 from astropy.io import fits
 from astropy.nddata import StdDevUncertainty, VarianceUncertainty, InverseVariance
+from astropy.time import Time
+from astropy.wcs import WCS
 
 import asdf
 from gwcs.wcstools import grid_from_bounding_box
@@ -585,6 +587,24 @@ def _jwst_s3d_loader(filename, **kwargs):
 
         wavelength = Quantity(wavelength_array, unit=wavelength_unit)
 
+        # The GWCS is currently broken for some IFUs, here we work around that
+        wcs = None
+        if wavelength.shape[0] != flux.shape[-1]:
+            # Need MJD-OBS for this workaround
+            if 'MJD-OBS' not in hdu.header:
+                for key in ('MJD-BEG', 'DATE-OBS'):  # Possible alternatives
+                    if key in hdu.header:
+                        if key.startswith('MJD'):
+                            hdu.header['MJD-OBS'] = hdu.header[key]
+                            break
+                        else:
+                            t = Time(hdu.header[key])
+                            hdu.header['MJD-OBS'] = t.mjd
+                            break
+            wcs = WCS(hdu.header)
+            # Swap to match the flux transpose
+            wcs = wcs.swapaxes(-1, 0)
+
         # Merge primary and slit headers and dump into meta
         slit_header = hdu.header
         header = primary_header.copy()
@@ -615,8 +635,11 @@
         mask_name = primary_header.get("MASKEXT", "DQ")
         mask = hdulist[mask_name].data.T
 
-        spec = Spectrum1D(flux=flux, spectral_axis=wavelength, meta=meta,
-                          uncertainty=err, mask=mask)
+        if wcs is not None:
+            spec = Spectrum1D(flux=flux, wcs=wcs, meta=meta, uncertainty=err, mask=mask)
+        else:
+            spec = Spectrum1D(flux=flux, spectral_axis=wavelength, meta=meta,
+                              uncertainty=err, mask=mask)
 
         spectra.append(spec)
 
     return SpectrumList(spectra)
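The loader workaround condensed into a standalone sketch; the helper name `fallback_wcs` is hypothetical, while the header keys and the axis swap follow the patch.

```python
from astropy.time import Time
from astropy.wcs import WCS

def fallback_wcs(hdu, wavelength, flux):
    # Only needed when the GWCS wavelength grid disagrees with the flux cube.
    if wavelength.shape[0] == flux.shape[-1]:
        return None
    # astropy's WCS validation needs MJD-OBS; synthesize it from alternatives.
    if 'MJD-OBS' not in hdu.header:
        if 'MJD-BEG' in hdu.header:
            hdu.header['MJD-OBS'] = hdu.header['MJD-BEG']
        elif 'DATE-OBS' in hdu.header:
            hdu.header['MJD-OBS'] = Time(hdu.header['DATE-OBS']).mjd
    # Swap axes so the spectral axis matches the transposed flux array.
    return WCS(hdu.header).swapaxes(-1, 0)
```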
astropy/specutils
cac046bfd83ab256076e694413fd1f39aacb6f88
diff --git a/specutils/tests/test_analysis.py b/specutils/tests/test_analysis.py
index 64561d90..81824440 100644
--- a/specutils/tests/test_analysis.py
+++ b/specutils/tests/test_analysis.py
@@ -469,6 +469,7 @@ def test_centroid(simulated_spectra):
     assert isinstance(spec_centroid, u.Quantity)
     assert np.allclose(spec_centroid.value, spec_centroid_expected.value)
     assert hasattr(spec_centroid, 'uncertainty')
+    # NOTE: value has not been scientifically validated
     assert quantity_allclose(spec_centroid.uncertainty, 3.91834165e-06*u.um, rtol=5e-5)
 
 
@@ -557,14 +558,19 @@ def test_gaussian_sigma_width():
 
     # Create a (centered) gaussian spectrum for testing
     mean = 5
-    frequencies = np.linspace(0, mean*2, 100) * u.GHz
+    frequencies = np.linspace(0, mean*2, 101)[1:] * u.GHz
     g1 = models.Gaussian1D(amplitude=5*u.Jy, mean=mean*u.GHz, stddev=0.8*u.GHz)
 
     spectrum = Spectrum1D(spectral_axis=frequencies, flux=g1(frequencies))
+    uncertainty = StdDevUncertainty(0.1*np.random.random(len(spectrum.flux))*u.mJy)
+    spectrum.uncertainty = uncertainty
 
     result = gaussian_sigma_width(spectrum)
 
     assert quantity_allclose(result, g1.stddev, atol=0.01*u.GHz)
+    assert hasattr(result, 'uncertainty')
+    # NOTE: value has not been scientifically validated!
+    assert quantity_allclose(result.uncertainty, 4.8190546890398186e-05*u.GHz, rtol=5e-5)
 
 
 def test_gaussian_sigma_width_masked():
@@ -573,7 +579,7 @@
 
     # Create a (centered) gaussian masked spectrum for testing
     mean = 5
-    frequencies = np.linspace(0, mean*2, 100) * u.GHz
+    frequencies = np.linspace(0, mean*2, 101)[1:] * u.GHz
     g1 = models.Gaussian1D(amplitude=5*u.Jy, mean=mean*u.GHz, stddev=0.8*u.GHz)
 
     uncertainty = StdDevUncertainty(0.1*np.random.random(len(frequencies))*u.Jy)
@@ -585,13 +591,16 @@
     result = gaussian_sigma_width(spectrum)
 
     assert quantity_allclose(result, g1.stddev, atol=0.01*u.GHz)
+    assert hasattr(result, 'uncertainty')
+    # NOTE: value has not been scientifically validated!
+    assert quantity_allclose(result.uncertainty, 0.06852821940808544*u.GHz, rtol=5e-5)
 
 
 def test_gaussian_sigma_width_regions():
 
     np.random.seed(42)
 
-    frequencies = np.linspace(100, 0, 10000) * u.GHz
+    frequencies = np.linspace(100, 0, 10000)[:-1] * u.GHz
     g1 = models.Gaussian1D(amplitude=5*u.Jy, mean=10*u.GHz, stddev=0.8*u.GHz)
     g2 = models.Gaussian1D(amplitude=5*u.Jy, mean=2*u.GHz, stddev=0.3*u.GHz)
     g3 = models.Gaussian1D(amplitude=5*u.Jy, mean=70*u.GHz, stddev=10*u.GHz)
@@ -654,15 +663,20 @@ def test_gaussian_fwhm():
 
     # Create a (centered) gaussian spectrum for testing
     mean = 5
-    frequencies = np.linspace(0, mean*2, 100) * u.GHz
+    frequencies = np.linspace(0, mean*2, 101)[1:] * u.GHz
     g1 = models.Gaussian1D(amplitude=5*u.Jy, mean=mean*u.GHz, stddev=0.8*u.GHz)
 
     spectrum = Spectrum1D(spectral_axis=frequencies, flux=g1(frequencies))
+    uncertainty = StdDevUncertainty(0.1*np.random.random(len(spectrum.flux))*u.mJy)
+    spectrum.uncertainty = uncertainty
 
     result = gaussian_fwhm(spectrum)
 
     expected = g1.stddev * gaussian_sigma_to_fwhm
     assert quantity_allclose(result, expected, atol=0.01*u.GHz)
+    assert hasattr(result, 'uncertainty')
+    # NOTE: value has not been scientifically validated!
+    assert quantity_allclose(result.uncertainty, 0.00011348006579851353*u.GHz, rtol=5e-5)
 
 
 def test_gaussian_fwhm_masked():
@@ -684,6 +698,9 @@
 
     expected = g1.stddev * gaussian_sigma_to_fwhm
     assert quantity_allclose(result, expected, atol=0.01*u.GHz)
+    assert hasattr(result, 'uncertainty')
+    # NOTE: value has not been scientifically validated!
+    assert quantity_allclose(result.uncertainty, 0.16688079501948674*u.GHz, rtol=5e-5)
 
 
 @pytest.mark.parametrize('mean', range(3, 8))
@@ -795,7 +812,6 @@ def test_fwhm():
 
     spectrum = Spectrum1D(spectral_axis=wavelengths, flux=flux)
 
     result = fwhm(spectrum)
-
     assert quantity_allclose(result, 1.01 * u.um)
 
     # Highest point at the last point
JWST s3d loader broken for latest cube data

The specutils loader for JWST s3d files is broken for recent NIRSpec IFU cube data (CAL_VER 1.4.6). It gives the error `ValueError: Spectral axis length (973) must be the same size or one greater (if specifying bin edges) than that of the last flux axis (950)`. I think there's a mismatch between the input data and the loader code that extracts the WCS and converts it to a wavelength.

To reproduce:

```
data = "jw01409-o018_t011_nirspec_prism-clear_s3d.fits"
ss = Spectrum1D.read(data)
```

Traceback:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [24], in <cell line: 1>()
----> 1 ss=Spectrum1D.read(hdu.filename())

File ~/anaconda3/envs/vmpy3.9/lib/python3.9/site-packages/astropy/nddata/mixins/ndio.py:59, in NDDataRead.__call__(self, *args, **kwargs)
     58 def __call__(self, *args, **kwargs):
---> 59     return self.registry.read(self._cls, *args, **kwargs)

File ~/anaconda3/envs/vmpy3.9/lib/python3.9/site-packages/astropy/io/registry/core.py:199, in UnifiedInputRegistry.read(self, cls, format, cache, *args, **kwargs)
    195 format = self._get_valid_format(
    196     'read', cls, path, fileobj, args, kwargs)
    198 reader = self.get_reader(format, cls)
--> 199 data = reader(*args, **kwargs)
    201 if not isinstance(data, cls):
    202     # User has read with a subclass where only the parent class is
    203     # registered. This returns the parent class, so try coercing
    204     # to desired subclass.
    205     try:

File ~/anaconda3/envs/vmpy3.9/lib/python3.9/site-packages/specutils/io/default_loaders/jwst_reader.py:531, in jwst_s3d_single_loader(filename, **kwargs)
    513 @data_loader(
    514     "JWST s3d", identifier=identify_jwst_s3d_fits, dtype=Spectrum1D,
    515     extensions=['fits'], priority=10,
    516 )
    517 def jwst_s3d_single_loader(filename, **kwargs):
    518     """
    519     Loader for JWST s3d 3D rectified spectral data in FITS format.
    520     (...)
    529         The spectrum contained in the file.
    530     """
--> 531     spectrum_list = _jwst_s3d_loader(filename, **kwargs)
    532     if len(spectrum_list) == 1:
    533         return spectrum_list[0]

File ~/anaconda3/envs/vmpy3.9/lib/python3.9/site-packages/specutils/io/default_loaders/jwst_reader.py:618, in _jwst_s3d_loader(filename, **kwargs)
    615 mask_name = primary_header.get("MASKEXT", "DQ")
    616 mask = hdulist[mask_name].data.T
--> 618 spec = Spectrum1D(flux=flux, spectral_axis=wavelength, meta=meta,
    619                   uncertainty=err, mask=mask)
    620 spectra.append(spec)
    622 return SpectrumList(spectra)

File ~/anaconda3/envs/vmpy3.9/lib/python3.9/site-packages/specutils/spectra/spectrum1d.py:178, in Spectrum1D.__init__(self, flux, spectral_axis, wcs, velocity_convention, rest_value, redshift, radial_velocity, bin_specification, **kwargs)
    176     bin_specification = "edges"
    177 else:
--> 178     raise ValueError(
    179         "Spectral axis length ({}) must be the same size or one "
    180         "greater (if specifying bin edges) than that of the last "
    181         "flux axis ({})".format(spectral_axis.shape[0],
    182                                 flux.shape[-1]))
    184 # If a WCS is provided, check that the spectral axis is last and reorder
    185 # the arrays if not
    186 if wcs is not None and hasattr(wcs, "naxis"):

ValueError: Spectral axis length (973) must be the same size or one greater (if specifying bin edges) than that of the last flux axis (950)
```
0.0
cac046bfd83ab256076e694413fd1f39aacb6f88
[ "specutils/tests/test_analysis.py::test_gaussian_fwhm", "specutils/tests/test_analysis.py::test_gaussian_sigma_width_masked", "specutils/tests/test_analysis.py::test_gaussian_sigma_width", "specutils/tests/test_analysis.py::test_gaussian_fwhm_masked" ]
[ "specutils/tests/test_analysis.py::test_moment_cube_order_1_to_6", "specutils/tests/test_analysis.py::test_line_flux", "specutils/tests/test_analysis.py::test_line_flux_uncertainty", "specutils/tests/test_analysis.py::test_snr_derived", "specutils/tests/test_analysis.py::test_snr_no_uncertainty", "specutils/tests/test_analysis.py::test_moment", "specutils/tests/test_analysis.py::test_equivalent_width_continuum[continuum2]", "specutils/tests/test_analysis.py::test_gaussian_fwhm_uncentered[3]", "specutils/tests/test_analysis.py::test_moment_cube_order_2", "specutils/tests/test_analysis.py::test_centroid", "specutils/tests/test_analysis.py::test_gaussian_fwhm_uncentered[7]", "specutils/tests/test_analysis.py::test_equivalent_width_masked", "specutils/tests/test_analysis.py::test_equivalent_width_bin_edges[centers]", "specutils/tests/test_analysis.py::test_gaussian_sigma_width_multi_spectrum", "specutils/tests/test_analysis.py::test_equivalent_width_continuum[continuum0]", "specutils/tests/test_analysis.py::test_is_continuum_below_threshold", "specutils/tests/test_analysis.py::test_moment_collection", "specutils/tests/test_analysis.py::test_gaussian_sigma_width_regions", "specutils/tests/test_analysis.py::test_snr_single_region", "specutils/tests/test_analysis.py::test_snr_multiple_flux", "specutils/tests/test_analysis.py::test_gaussian_fwhm_uncentered[4]", "specutils/tests/test_analysis.py::test_fwhm_multi_spectrum", "specutils/tests/test_analysis.py::test_equivalent_width_continuum[continuum1]", "specutils/tests/test_analysis.py::test_snr", "specutils/tests/test_analysis.py::test_moment_cube", "specutils/tests/test_analysis.py::test_line_flux_masked", "specutils/tests/test_analysis.py::test_snr_derived_masked", "specutils/tests/test_analysis.py::test_inverted_centroid_masked", "specutils/tests/test_analysis.py::test_fwzi", "specutils/tests/test_analysis.py::test_centroid_masked", "specutils/tests/test_analysis.py::test_gaussian_fwhm_uncentered[6]", "specutils/tests/test_analysis.py::test_equivalent_width_bin_edges[edges]", "specutils/tests/test_analysis.py::test_equivalent_width_regions", "specutils/tests/test_analysis.py::test_snr_masked", "specutils/tests/test_analysis.py::test_inverted_centroid", "specutils/tests/test_analysis.py::test_fwzi_masked", "specutils/tests/test_analysis.py::test_gaussian_fwhm_uncentered[5]", "specutils/tests/test_analysis.py::test_equivalent_width", "specutils/tests/test_analysis.py::test_snr_two_regions", "specutils/tests/test_analysis.py::test_equivalent_width_absorption", "specutils/tests/test_analysis.py::test_centroid_multiple_flux", "specutils/tests/test_analysis.py::test_fwhm_masked", "specutils/tests/test_analysis.py::test_fwhm", "specutils/tests/test_analysis.py::test_fwzi_multi_spectrum" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-08-09 20:40:51+00:00
bsd-3-clause
1,247
astropy__specutils-976
diff --git a/CHANGES.rst b/CHANGES.rst index 57103267..f19d7994 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -9,6 +9,8 @@ Bug Fixes - Arithmetic with constants and Spectrum1D now works in either order. [#964] +- Fixed uncertainty propagation in FluxConservingResampler. [#976] + Other Changes and Additions ^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/specutils/manipulation/resample.py b/specutils/manipulation/resample.py index c55d3c5c..8babeb95 100644 --- a/specutils/manipulation/resample.py +++ b/specutils/manipulation/resample.py @@ -196,7 +196,7 @@ class FluxConservingResampler(ResamplerBase): pixel_uncer = pixel_uncer.reshape(new_flux_shape) out_variance = np.sum(pixel_uncer * resample_grid**2, axis=-1) / np.sum( - resample_grid**2, axis=-1) + resample_grid, axis=-1)**2 out_uncertainty = InverseVariance(np.reciprocal(out_variance)) else: out_uncertainty = None
astropy/specutils
573c4ad7496f3beb1af3c847a031308be6ad162a
diff --git a/specutils/tests/test_resample.py b/specutils/tests/test_resample.py index 65ad9b2d..ee7136c4 100644 --- a/specutils/tests/test_resample.py +++ b/specutils/tests/test_resample.py @@ -64,7 +64,7 @@ def test_stddev_uncert_propogation(): results = inst(input_spectra, [25, 35, 50, 55]*u.AA) assert np.allclose(results.uncertainty.array, - np.array([27.5862069, 38.23529412, 17.46724891, 27.5862069])) + np.array([55.17241379, 73.52941176, 27.94759825, 55.17241379])) def delta_wl(saxis): @@ -127,9 +127,9 @@ def test_multi_dim_spectrum1D(): [6., 6., 6., 6.], [7., 7., 7., 7.]]) * u.Jy) assert np.allclose(results.uncertainty.array, - np.array([[4., 4., 4., 4.], - [2.77777778, 2.77777778, 2.77777778, 2.77777778], - [2.04081633, 2.04081633, 2.04081633, 2.04081633]] )) + np.array([[10.66666667, 10.66666667, 10.66666667, 10.66666667], + [ 7.40740741, 7.40740741, 7.40740741, 7.40740741], + [ 5.44217687, 5.44217687, 5.44217687, 5.44217687]])) def test_expanded_grid_interp_linear():
FluxConservingResampler does not properly propagate the uncertainty The FluxConservingResampler function does not properly propagate the uncertainties. This function, as stated in the documentation (https://specutils.readthedocs.io/en/stable/api/specutils.manipulation.FluxConservingResampler.html#specutils.manipulation.FluxConservingResampler.resample1d) should be based on formulas reported on Carnall+2017 paper (https://arxiv.org/pdf/1705.05165.pdf). However, you can verify that the result obtained in a simple case, is different from the predictions of the native python package SpectRes (https://github.com/ACCarnall/SpectRes). The error in the source code is probably in computing the output uncertainties as ` # Calculate output uncertainty if pixel_uncer is not None: pixel_uncer = pixel_uncer.reshape(new_flux_shape) out_variance = np.sum(pixel_uncer * resample_grid**2, axis=-1) / np.sum( resample_grid**2, axis=-1) out_uncertainty = InverseVariance(np.reciprocal(out_variance))` While, following Equation (4) in Carnall+2017, the correct "out_variance" should be instead ` out_variance = np.sum(pixel_uncer * resample_grid**2, axis=-1) / np.sum( resample_grid, axis=-1)**2` Could you please check these lines?
0.0
573c4ad7496f3beb1af3c847a031308be6ad162a
[ "specutils/tests/test_resample.py::test_stddev_uncert_propogation", "specutils/tests/test_resample.py::test_multi_dim_spectrum1D" ]
[ "specutils/tests/test_resample.py::test_same_grid_fluxconserving", "specutils/tests/test_resample.py::test_expanded_grid_fluxconserving", "specutils/tests/test_resample.py::test_flux_conservation[specflux0-specwavebins0-outwavebins0]", "specutils/tests/test_resample.py::test_flux_conservation[specflux1-specwavebins1-outwavebins1]", "specutils/tests/test_resample.py::test_expanded_grid_interp_linear", "specutils/tests/test_resample.py::test_expanded_grid_interp_spline", "specutils/tests/test_resample.py::test_resample_edges[FluxConservingResampler-nan_fill-nan]", "specutils/tests/test_resample.py::test_resample_edges[FluxConservingResampler-zero_fill-0]", "specutils/tests/test_resample.py::test_resample_edges[LinearInterpolatedResampler-nan_fill-nan]", "specutils/tests/test_resample.py::test_resample_edges[LinearInterpolatedResampler-zero_fill-0]", "specutils/tests/test_resample.py::test_resample_edges[SplineInterpolatedResampler-nan_fill-nan]", "specutils/tests/test_resample.py::test_resample_edges[SplineInterpolatedResampler-zero_fill-0]", "specutils/tests/test_resample.py::test_resample_different_units[LinearInterpolatedResampler]", "specutils/tests/test_resample.py::test_resample_different_units[SplineInterpolatedResampler]", "specutils/tests/test_resample.py::test_resample_uncs[FluxConservingResampler]", "specutils/tests/test_resample.py::test_resample_uncs[LinearInterpolatedResampler]", "specutils/tests/test_resample.py::test_resample_uncs[SplineInterpolatedResampler]" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-08-13 22:41:01+00:00
bsd-3-clause
1,248
astropy__sphinx-automodapi-142
diff --git a/CHANGES.rst b/CHANGES.rst index 62b20ed..53fa563 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -4,6 +4,8 @@ Changes in sphinx-automodapi 0.15.0 (unreleased) ------------------- +- Fixed issue with ``:skip:`` introduced by ``:include:`` feature. [#142] + 0.14.0 (2021-12-22) ------------------- diff --git a/setup.cfg b/setup.cfg index fdd19cc..0dfbcc6 100644 --- a/setup.cfg +++ b/setup.cfg @@ -43,6 +43,7 @@ filterwarnings = ignore:The `docutils\.parsers\.rst\.directive\.html` module will be removed:DeprecationWarning ignore:'contextfunction' is renamed to 'pass_context':DeprecationWarning ignore:'environmentfilter' is renamed to 'pass_environment':DeprecationWarning + ignore:distutils Version classes are deprecated:DeprecationWarning [flake8] max-line-length = 125 diff --git a/sphinx_automodapi/automodapi.py b/sphinx_automodapi/automodapi.py index 1957194..28cff8a 100644 --- a/sphinx_automodapi/automodapi.py +++ b/sphinx_automodapi/automodapi.py @@ -411,12 +411,12 @@ def _mod_info(modname, toskip=[], include=[], onlylocals=True): hascls = hasfunc = hasother = False - skips = [] + skips = toskip.copy() for localnm, fqnm, obj in zip(*find_mod_objs(modname, onlylocals=onlylocals)): - if localnm in toskip or (include and localnm not in include): + if include and localnm not in include and localnm not in skips: skips.append(localnm) - else: + elif localnm not in toskip: hascls = hascls or inspect.isclass(obj) hasfunc = hasfunc or inspect.isroutine(obj) hasother = hasother or (not inspect.isclass(obj) and
astropy/sphinx-automodapi
bcc41ff14a4a1df24091f1836f7b1e506beca4f6
diff --git a/sphinx_automodapi/tests/example_module/stdlib.py b/sphinx_automodapi/tests/example_module/stdlib.py new file mode 100644 index 0000000..626dc69 --- /dev/null +++ b/sphinx_automodapi/tests/example_module/stdlib.py @@ -0,0 +1,15 @@ +""" +A module that imports objects from the standard library. +""" +from pathlib import Path +from datetime import time + + +__all__ = ['Path', 'time', 'add'] + + +def add(a, b): + """ + Add two numbers + """ + return a + b diff --git a/sphinx_automodapi/tests/test_automodapi.py b/sphinx_automodapi/tests/test_automodapi.py index cd0550e..72e52fd 100644 --- a/sphinx_automodapi/tests/test_automodapi.py +++ b/sphinx_automodapi/tests/test_automodapi.py @@ -327,6 +327,107 @@ def test_am_replacer_skip(tmpdir): assert result == am_replacer_skip_expected +am_replacer_skip_stdlib_str = """ +This comes before + +.. automodapi:: sphinx_automodapi.tests.example_module.stdlib + :skip: time + :skip: Path + +This comes after +""" + + +am_replacer_skip_stdlib_expected = """ +This comes before + + +sphinx_automodapi.tests.example_module.stdlib Module +---------------------------------------------------- + +.. automodule:: sphinx_automodapi.tests.example_module.stdlib + +Functions +^^^^^^^^^ + +.. automodsumm:: sphinx_automodapi.tests.example_module.stdlib + :functions-only: + :toctree: api + :skip: time,Path + + +This comes after +""".format(empty='') + + +def test_am_replacer_skip_stdlib(tmpdir): + """ + Tests using the ":skip:" option in an ".. automodapi::" + that skips objects imported from the standard library. + This is a regression test for #141 + """ + + with open(tmpdir.join('index.rst').strpath, 'w') as f: + f.write(am_replacer_skip_stdlib_str.format(options='')) + + run_sphinx_in_tmpdir(tmpdir) + + with open(tmpdir.join('index.rst.automodapi').strpath) as f: + result = f.read() + + assert result == am_replacer_skip_stdlib_expected + + +am_replacer_include_stdlib_str = """ +This comes before + +.. automodapi:: sphinx_automodapi.tests.example_module.stdlib + :include: add + :allowed-package-names: pathlib, datetime, sphinx_automodapi + +This comes after +""" + +am_replacer_include_stdlib_expected = """ +This comes before + + +sphinx_automodapi.tests.example_module.stdlib Module +---------------------------------------------------- + +.. automodule:: sphinx_automodapi.tests.example_module.stdlib + +Functions +^^^^^^^^^ + +.. automodsumm:: sphinx_automodapi.tests.example_module.stdlib + :functions-only: + :toctree: api + :skip: Path,time + :allowed-package-names: pathlib,datetime,sphinx_automodapi + + +This comes after +""".format(empty='') + + +def test_am_replacer_include_stdlib(tmpdir): + """ + Tests using the ":include: option in an ".. automodapi::" + in the presence of objects imported from the standard library. + """ + + with open(tmpdir.join('index.rst').strpath, 'w') as f: + f.write(am_replacer_include_stdlib_str.format(options='')) + + run_sphinx_in_tmpdir(tmpdir) + + with open(tmpdir.join('index.rst.automodapi').strpath) as f: + result = f.read() + + assert result == am_replacer_include_stdlib_expected + + am_replacer_include_str = """ This comes before
:skip: no longer works with standard library objects

Prior to the 0.14 release, it was possible to use `:skip:` on standard library objects that were imported by a module:

```
.. automodapi:: sphinx_automodapi.tests.example_module.stdlib
    :skip: time
    :skip: Path
```

In 0.14, those `:skip:` options are dropped by the code introduced to support `:include:`. I opened a PR with a test showing the new failure: https://github.com/astropy/sphinx-automodapi/pull/140. The test passes if I revert the commits from #127.
0.0
bcc41ff14a4a1df24091f1836f7b1e506beca4f6
[ "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_skip_stdlib" ]
[ "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_noinh", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_titleandhdrs_invalid", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_nomain", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_skip", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_include_stdlib", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_include", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_invalidop", "sphinx_automodapi/tests/test_automodapi.py::test_am_replacer_cython" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2021-12-28 23:13:39+00:00
bsd-3-clause
1,249
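A standalone sketch of the filtering rule the sphinx-automodapi fix above implements: names listed under `:skip:` stay skipped even when an `:include:` list is present, and everything outside the include list is skipped as well. The function name and inputs are hypothetical, not the automodapi API.

```python
def resolve_skips(names, toskip=(), include=()):
    """Illustrative reimplementation of the corrected skip logic."""
    skips = list(toskip)  # explicit skips are always honored
    for name in names:
        if include and name not in include and name not in skips:
            skips.append(name)
    return skips

# Hypothetical module contents: two stdlib imports plus a local function.
print(resolve_skips(["Path", "time", "add"], toskip=["time", "Path"]))  # ['time', 'Path']
print(resolve_skips(["Path", "time", "add"], include=["add"]))          # ['Path', 'time']
```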
astropy__sphinx-automodapi-51
diff --git a/.travis.yml b/.travis.yml index dea55d7..994a60f 100644 --- a/.travis.yml +++ b/.travis.yml @@ -31,10 +31,12 @@ env: - PYTHON_VERSION=3.4 SPHINX_VERSION=1.3 - PYTHON_VERSION=3.5 SPHINX_VERSION=1.4 - PYTHON_VERSION=3.5 SPHINX_VERSION=1.5 - - PYTHON_VERSION=3.6 SPHINX_VERSION=1.6 CONDA_CHANNELS="conda-forge" - - PYTHON_VERSION=3.6 SPHINX_VERSION=dev CONDA_DEPENDENCIES="setuptools cython pytest-cov" + - PYTHON_VERSION=3.6 SPHINX_VERSION=1.6 + - PYTHON_VERSION=3.6 SPHINX_VERSION=1.7 + - PYTHON_VERSION=3.7 SPHINX_VERSION=1.8 + - PYTHON_VERSION=3.7 SPHINX_VERSION=dev CONDA_DEPENDENCIES="setuptools cython pytest-cov" - PYTHON_VERSION=2.7 LOCALE=C - - PYTHON_VERSION=3.6 LOCALE=C + - PYTHON_VERSION=3.7 LOCALE=C global: - LOCALE=default - CONDA_DEPENDENCIES="setuptools sphinx cython pytest-cov" diff --git a/appveyor.yml b/appveyor.yml index cf51b6e..a73ae4e 100644 --- a/appveyor.yml +++ b/appveyor.yml @@ -18,6 +18,7 @@ environment: - PYTHON_VERSION: "3.4" - PYTHON_VERSION: "3.5" - PYTHON_VERSION: "3.6" + - PYTHON_VERSION: "3.7" platform: -x64 diff --git a/sphinx_automodapi/automodsumm.py b/sphinx_automodapi/automodsumm.py index 163d9ac..fcfda34 100644 --- a/sphinx_automodapi/automodsumm.py +++ b/sphinx_automodapi/automodsumm.py @@ -96,6 +96,10 @@ from docutils.parsers.rst.directives import flag from .utils import find_mod_objs, cleanup_whitespace +__all__ = ['Automoddiagram', 'Automodsumm', 'automodsumm_to_autosummary_lines', + 'generate_automodsumm_docs', 'process_automodsumm_generation'] + +SPHINX_LT_16 = LooseVersion(__version__) < LooseVersion('1.6') SPHINX_LT_17 = LooseVersion(__version__) < LooseVersion('1.7') @@ -266,7 +270,7 @@ def process_automodsumm_generation(app): suffix = os.path.splitext(sfn)[1] if len(lines) > 0: generate_automodsumm_docs( - lines, sfn, app=app, builder=app.builder, warn=app.warn, info=app.info, + lines, sfn, app=app, builder=app.builder, suffix=suffix, base_path=app.srcdir, inherited_members=app.config.automodsumm_inherited_members) @@ -401,8 +405,8 @@ def automodsumm_to_autosummary_lines(fn, app): return newlines -def generate_automodsumm_docs(lines, srcfn, app=None, suffix='.rst', warn=None, - info=None, base_path=None, builder=None, +def generate_automodsumm_docs(lines, srcfn, app=None, suffix='.rst', + base_path=None, builder=None, template_dir=None, inherited_members=False): """ @@ -415,7 +419,6 @@ def generate_automodsumm_docs(lines, srcfn, app=None, suffix='.rst', warn=None, from sphinx.jinja2glue import BuiltinTemplateLoader from sphinx.ext.autosummary import import_by_name, get_documenter - from sphinx.ext.autosummary.generate import _simple_info, _simple_warn from sphinx.util.osutil import ensuredir from sphinx.util.inspect import safe_getattr from jinja2 import FileSystemLoader, TemplateNotFound @@ -423,10 +426,14 @@ def generate_automodsumm_docs(lines, srcfn, app=None, suffix='.rst', warn=None, from .utils import find_autosummary_in_lines_for_automodsumm as find_autosummary_in_lines - if info is None: - info = _simple_info - if warn is None: - warn = _simple_warn + if SPHINX_LT_16: + info = app.info + warn = app.warn + else: + from sphinx.util import logging + logger = logging.getLogger(__name__) + info = logger.info + warn = logger.warning # info('[automodsumm] generating automodsumm for: ' + srcfn)
astropy/sphinx-automodapi
a856c2948129929aba83c4b26aee0a31ca530de2
diff --git a/sphinx_automodapi/tests/test_automodsumm.py b/sphinx_automodapi/tests/test_automodsumm.py index 0334e75..9336e56 100644 --- a/sphinx_automodapi/tests/test_automodsumm.py +++ b/sphinx_automodapi/tests/test_automodsumm.py @@ -61,11 +61,9 @@ ams_to_asmry_expected = """\ Automoddiagram Automodsumm - SPHINX_LT_17 automodsumm_to_autosummary_lines generate_automodsumm_docs process_automodsumm_generation - setup """
Sphinx warning: deprecated app.info

With Sphinx v1.8.1, I get these warnings:

> [automodsumm] tools.rst: found 16 automodsumm entries to generate
> /home/simon/.pyenv/versions/3.7.0/lib/python3.7/site-packages/sphinx/application.py:402: RemovedInSphinx20Warning: app.info() is now deprecated. Use sphinx.util.logging instead.
>   RemovedInSphinx20Warning)

Probably because of this: https://github.com/astropy/sphinx-automodapi/blob/23d9f1254599b3b70664ddb634c3dd11d56e30f8/sphinx_automodapi/automodsumm.py#L453
0.0
a856c2948129929aba83c4b26aee0a31ca530de2
[ "sphinx_automodapi/tests/test_automodsumm.py::test_ams_to_asmry" ]
[ "sphinx_automodapi/tests/test_automodsumm.py::test_ams_cython" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2018-10-07 18:23:44+00:00
bsd-3-clause
1,250
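A hedged sketch of the migration pattern the patch above applies: keep `app.info`/`app.warn` only for pre-1.6 Sphinx and use `sphinx.util.logging` otherwise. The helper name is invented for illustration; only the two logging APIs are taken from the record.

```python
def pick_log_functions(app, sphinx_lt_16):
    """Return (info, warn) callables appropriate for the Sphinx version."""
    if sphinx_lt_16:
        # Deprecated since Sphinx 1.6, removed in 2.0.
        return app.info, app.warn
    from sphinx.util import logging  # modern, non-deprecated API
    logger = logging.getLogger(__name__)
    return logger.info, logger.warning
```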
ateliedocodigo__py-healthcheck-35
diff --git a/healthcheck/healthcheck.py b/healthcheck/healthcheck.py index 25c8701..c8ae8d8 100644 --- a/healthcheck/healthcheck.py +++ b/healthcheck/healthcheck.py @@ -135,7 +135,10 @@ class HealthCheck(object): # Reduce to 6 decimal points to have consistency with timestamp elapsed_time = float('{:.6f}'.format(elapsed_time)) - if not passed: + if passed: + msg = 'Health check "{}" passed'.format(checker.__name__) + logger.debug(msg) + else: msg = 'Health check "{}" failed with output "{}"'.format(checker.__name__, output) logger.error(msg)
ateliedocodigo/py-healthcheck
e5b9643b1f6b5cc5ddf96e4c7d6f66920145cf69
diff --git a/tests/unit/test_healthcheck.py b/tests/unit/test_healthcheck.py index bf0c7d1..7192434 100644 --- a/tests/unit/test_healthcheck.py +++ b/tests/unit/test_healthcheck.py @@ -31,7 +31,9 @@ class BasicHealthCheckTest(unittest.TestCase): def test_success_check(self): hc = HealthCheck(checkers=[self.check_that_works]) - message, status, headers = hc.run() + with self.assertLogs('healthcheck', level='DEBUG') as cm: + message, status, headers = hc.run() + self.assertEqual(cm.output, ['DEBUG:healthcheck.healthcheck:Health check "check_that_works" passed']) self.assertEqual(200, status) jr = json.loads(message) self.assertEqual("success", jr["status"])
Add logger.debug if all health checks passed

As of now, only [failures](https://github.com/ateliedocodigo/py-healthcheck/blob/e6205bdcc32099d12cda6eba172b4a801104448f/healthcheck/healthcheck.py#L140) and [exceptions](https://github.com/ateliedocodigo/py-healthcheck/blob/e6205bdcc32099d12cda6eba172b4a801104448f/healthcheck/healthcheck.py#L130) are logged. Could we also log passing checks at a low log level, e.g. using `logger.debug`?
0.0
e5b9643b1f6b5cc5ddf96e4c7d6f66920145cf69
[ "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_success_check" ]
[ "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_basic_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_function_failing_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_function_success_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_prevent_duplication", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_signature_function_failure_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_signature_function_success_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_signature_value_failing_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_signature_value_success_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_value_failing_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_custom_section_value_success_check", "tests/unit/test_healthcheck.py::BasicHealthCheckTest::test_failing_check", "tests/unit/test_healthcheck.py::TimeoutHealthCheckTest::test_default_timeout_should_success_check", "tests/unit/test_healthcheck.py::TimeoutHealthCheckTest::test_error_timeout_function_should_failing_check" ]
{ "failed_lite_validators": [ "has_short_problem_statement" ], "has_test_patch": true, "is_lite": false }
2021-07-05 11:07:05+00:00
mit
1,251
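A minimal sketch of the logging behavior the py-healthcheck patch above introduces: passing checks logged at DEBUG, failing ones at ERROR. The runner and checker here are simplified stand-ins for the library's HealthCheck class.

```python
import logging

logger = logging.getLogger("healthcheck")

def run_check(checker):
    passed, output = checker()
    if passed:
        logger.debug('Health check "%s" passed', checker.__name__)
    else:
        logger.error('Health check "%s" failed with output "%s"',
                     checker.__name__, output)
    return passed, output

def check_that_works():  # hypothetical checker
    return True, "ok"

run_check(check_that_works)
```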
atscub__nautapy-10
diff --git a/README.md b/README.md index 8b50314..fe931d2 100644 --- a/README.md +++ b/README.md @@ -13,7 +13,7 @@ __NautaPy__ Python API para el portal cautivo [Nauta](https://secure.etecsa.net: Instalación: ```bash -pip3 install --upgrade https://github.com/abrahamtoledo/nautapy.git#v0.2.0 +pip3 install --upgrade git+https://github.com/abrahamtoledo/nautapy.git#v0.2.0 ``` ## Modo de uso @@ -106,6 +106,8 @@ nauta --help ``` ## Contribuir +__IMPORTANTE__: Notificame por Telegram sobre cualquier actividad en el proyecto (Issue o PR). + Todas las contribuciones son bienvenidas. Puedes ayudar trabajando en uno de los issues existentes. Clona el repo, crea una rama para el issue que estes trabajando y cuando estes listo crea un Pull Request. diff --git a/nautapy/nauta_api.py b/nautapy/nauta_api.py index 8515d1d..c12348e 100644 --- a/nautapy/nauta_api.py +++ b/nautapy/nauta_api.py @@ -36,7 +36,7 @@ from nautapy.exceptions import NautaLoginException, NautaLogoutException, NautaE MAX_DISCONNECT_ATTEMPTS = 10 -ETECSA_HOMEPAGE = "http://www.etecsa.cu" +CHECK_PAGE = "http://www.cubadebate.cu" _re_login_fail_reason = re.compile('alert\("(?P<reason>[^"]*?)"\)') @@ -110,7 +110,7 @@ class NautaProtocol(object): @classmethod def is_connected(cls): - r = requests.get(ETECSA_HOMEPAGE) + r = requests.get(CHECK_PAGE) return b'secure.etecsa.net' not in r.content @classmethod
atscub/nautapy
00b87567d07f4c3d845a9d0bbda06287af955e05
diff --git a/test/test_nauta_api.py b/test/test_nauta_api.py index 46e43a0..c4c3610 100644 --- a/test/test_nauta_api.py +++ b/test/test_nauta_api.py @@ -5,7 +5,7 @@ import requests from requests_mock import Mocker as RequestMocker, ANY from nautapy.exceptions import NautaLoginException, NautaPreLoginException -from nautapy.nauta_api import ETECSA_HOMEPAGE, NautaProtocol +from nautapy.nauta_api import CHECK_PAGE, NautaProtocol _assets_dir = os.path.join(
On Nauta Hogar it always reports that the session is active

nautapy cannot be used on Nauta Hogar because the site www.etecsa.cu is always reachable there, and therefore nautapy always says there is an active connection.
0.0
00b87567d07f4c3d845a9d0bbda06287af955e05
[ "test/test_nauta_api.py::test_nauta_protocol_create_session_raises_when_connected" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2020-03-20 05:20:46+00:00
mit
1,252
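A hedged sketch of the captive-portal detection behind the nautapy fix above: fetch a page that is only reachable with an authenticated session and look for the portal marker in the response. The check page matches the patch; the timeout is an added assumption.

```python
import requests

# www.etecsa.cu is reachable even without login on Nauta Hogar, so the
# patch probes a page outside the free-access zone instead.
CHECK_PAGE = "http://www.cubadebate.cu"

def is_connected():
    r = requests.get(CHECK_PAGE, timeout=10)  # timeout added for the sketch
    # The captive portal rewrites responses to point at the Nauta login page.
    return b"secure.etecsa.net" not in r.content
```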
auth0__auth0-python-159
diff --git a/auth0/v3/management/client_grants.py b/auth0/v3/management/client_grants.py index 6b37015..719aab3 100644 --- a/auth0/v3/management/client_grants.py +++ b/auth0/v3/management/client_grants.py @@ -24,7 +24,7 @@ class ClientGrants(object): return url + '/' + id return url - def all(self, audience=None, page=None, per_page=None, include_totals=False): + def all(self, audience=None, page=None, per_page=None, include_totals=False, client_id=None): """Retrieves all client grants. Args: @@ -37,7 +37,9 @@ class ClientGrants(object): include_totals (bool, optional): True if the query summary is to be included in the result, False otherwise. - + + client_id (string, optional): The id of a client to filter + See: https://auth0.com/docs/api/management/v2#!/Client_Grants/get_client_grants """ @@ -45,7 +47,8 @@ class ClientGrants(object): 'audience': audience, 'page': page, 'per_page': per_page, - 'include_totals': str(include_totals).lower() + 'include_totals': str(include_totals).lower(), + 'client_id': client_id, } return self.client.get(self._url(), params=params) diff --git a/auth0/v3/management/users_by_email.py b/auth0/v3/management/users_by_email.py index 0c92e6a..24622a9 100644 --- a/auth0/v3/management/users_by_email.py +++ b/auth0/v3/management/users_by_email.py @@ -39,7 +39,7 @@ class UsersByEmail(object): See: https://auth0.com/docs/api/management/v2#!/Users_By_Email/get_users_by_email """ params = { - 'email': email.lower(), + 'email': email, 'fields': fields and ','.join(fields) or None, 'include_fields': str(include_fields).lower() }
auth0/auth0-python
2bdd9008b0124b53360c79dd299331a8190d95f9
diff --git a/auth0/v3/test/management/test_client_grants.py b/auth0/v3/test/management/test_client_grants.py index 3d6b292..1155649 100644 --- a/auth0/v3/test/management/test_client_grants.py +++ b/auth0/v3/test/management/test_client_grants.py @@ -21,7 +21,8 @@ class TestClientGrants(unittest.TestCase): 'audience': None, 'page': None, 'per_page': None, - 'include_totals': 'false' + 'include_totals': 'false', + 'client_id': None, }) # With audience @@ -34,7 +35,8 @@ class TestClientGrants(unittest.TestCase): 'audience': 'http://domain.auth0.com/api/v2/', 'page': None, 'per_page': None, - 'include_totals': 'false' + 'include_totals': 'false', + 'client_id': None, }) # With pagination params @@ -47,7 +49,22 @@ class TestClientGrants(unittest.TestCase): 'audience': None, 'page': 7, 'per_page': 23, - 'include_totals': 'true' + 'include_totals': 'true', + 'client_id': None, + }) + + # With client_id param + c.all(client_id='exampleid') + + args, kwargs = mock_instance.get.call_args + + self.assertEqual('https://domain/api/v2/client-grants', args[0]) + self.assertEqual(kwargs['params'], { + 'audience': None, + 'page': None, + 'per_page': None, + 'include_totals': 'false', + 'client_id': 'exampleid', }) @mock.patch('auth0.v3.management.client_grants.RestClient') diff --git a/auth0/v3/test/management/test_users_by_email.py b/auth0/v3/test/management/test_users_by_email.py index 012c597..2c6e9af 100644 --- a/auth0/v3/test/management/test_users_by_email.py +++ b/auth0/v3/test/management/test_users_by_email.py @@ -16,7 +16,7 @@ class TestUsersByEmail(unittest.TestCase): self.assertEqual('https://domain/api/v2/users-by-email', args[0]) self.assertEqual(kwargs['params'], { - 'email': '[email protected]', + 'email': '[email protected]', 'fields': None, 'include_fields': 'true' })
ClientGrants.all is missing the `client_id` parameter supported by the API

https://auth0.com/docs/api/management/v2#!/Client_Grants/get_client_grants
0.0
2bdd9008b0124b53360c79dd299331a8190d95f9
[ "auth0/v3/test/management/test_client_grants.py::TestClientGrants::test_all", "auth0/v3/test/management/test_users_by_email.py::TestUsersByEmail::test_search_users_by_email" ]
[ "auth0/v3/test/management/test_client_grants.py::TestClientGrants::test_create", "auth0/v3/test/management/test_client_grants.py::TestClientGrants::test_delete", "auth0/v3/test/management/test_client_grants.py::TestClientGrants::test_update" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2018-10-24 08:12:46+00:00
mit
1,253
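A short usage sketch of the extended signature the patch above adds to `ClientGrants.all`; the tenant domain, token, and client id are placeholders.

```python
from auth0.v3.management.client_grants import ClientGrants

client_grants = ClientGrants(domain="myaccount.auth0.com", token="A_JWT_TOKEN")

# After the patch, grants can be filtered by the client they belong to.
grants = client_grants.all(client_id="exampleid", include_totals=True)
```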
auth0__auth0-python-195
diff --git a/auth0/v3/management/rules_configs.py b/auth0/v3/management/rules_configs.py index cfde7b0..686f101 100644 --- a/auth0/v3/management/rules_configs.py +++ b/auth0/v3/management/rules_configs.py @@ -39,10 +39,7 @@ class RulesConfigs(object): See: https://auth0.com/docs/api/management/v2#!/Rules_Configs/delete_rules_configs_by_key """ - params = { - 'key': key - } - return self.client.delete(self._url(), params=params) + return self.client.delete(self._url(key)) def set(self, key, value): """Sets the rules config for a given key.
auth0/auth0-python
af5d863ffe75a4a7cd729c9d084cad6b37bd632e
diff --git a/auth0/v3/test/management/test_rules_configs.py b/auth0/v3/test/management/test_rules_configs.py index 68ee018..d3c41e5 100644 --- a/auth0/v3/test/management/test_rules_configs.py +++ b/auth0/v3/test/management/test_rules_configs.py @@ -25,7 +25,7 @@ class TestRules(unittest.TestCase): c.unset('an-id') mock_instance.delete.assert_called_with( - 'https://domain/api/v2/rules-configs', params={'key': 'an-id'} + 'https://domain/api/v2/rules-configs/an-id' ) @mock.patch('auth0.v3.management.rules_configs.RestClient')
rules_config.unset does not work

### Description
`rules_config.unset` does not work. It returns a 404 for a known existing key. I believe this is because the `key` param is being passed to the auth0 api as a query param when it should be a path param according to the docs. https://auth0.com/docs/api/management/v2#!/Rules_Configs/delete_rules_configs_by_key

### Prerequisites
- [x] I have checked the [README documentation](https://github.com/auth0/auth0-python/blob/master/README.rst).
- [x] I have checked the [Auth0 Community](https://community.auth0.com/) for related posts.
- [x] I have checked for related or duplicate [Issues](https://github.com/auth0/auth0-python/issues) and [PRs](https://github.com/auth0/auth0-python/pulls).
- [x] I have read the [Auth0 general contribution guidelines](https://github.com/auth0/open-source-template/blob/master/GENERAL-CONTRIBUTING.md).
- [x] I have read the [Auth0 Code of Conduct](https://github.com/auth0/open-source-template/blob/master/CODE-OF-CONDUCT.md).
- [x] I am reporting this to the correct repository.

### Environment
- auth0-python 3.7.2
- python 3.7.0

### Reproduction
Code Sample:
```python
from auth0.v3.authentication import GetToken
from auth0.v3.management import Auth0

get_token = GetToken('xxx.auth0.com')
mgmt_api_token = get_token.client_credentials('xxx', 'xxx', 'https://{}/api/v2/'.format('xxx.auth0.com'))
auth0 = Auth0('xxx.auth0.com', mgmt_api_token['access_token'])
key_name = 'FOO'
auth0.rules_configs.set(key_name, 'BAR')
rules_configs = auth0.rules_configs.all()
print(rules_configs)
auth0.rules_configs.unset(key_name)
```
Output:
```
[{'key': 'FOO'}]
Traceback (most recent call last):
  File "<input>", line 13, in <module>
  File "/Users/jhunken/.pyenv/versions/securityPlatform-3.5.3/lib/python3.5/site-packages/auth0/v3/management/rules_configs.py", line 45, in unset
    return self.client.delete(self._url(), params=params)
  File "/Users/jhunken/.pyenv/versions/securityPlatform-3.5.3/lib/python3.5/site-packages/auth0/v3/management/rest.py", line 75, in delete
    return self._process_response(response)
  File "/Users/jhunken/.pyenv/versions/securityPlatform-3.5.3/lib/python3.5/site-packages/auth0/v3/management/rest.py", line 78, in _process_response
    return self._parse(response).content()
  File "/Users/jhunken/.pyenv/versions/securityPlatform-3.5.3/lib/python3.5/site-packages/auth0/v3/management/rest.py", line 98, in content
    message=self._error_message())
auth0.v3.exceptions.Auth0Error: 404: Not Found
```
0.0
af5d863ffe75a4a7cd729c9d084cad6b37bd632e
[ "auth0/v3/test/management/test_rules_configs.py::TestRules::test_unset" ]
[ "auth0/v3/test/management/test_rules_configs.py::TestRules::test_all", "auth0/v3/test/management/test_rules_configs.py::TestRules::test_set" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2019-05-10 17:51:59+00:00
mit
1,254
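A schematic sketch of the URL difference behind the 404 in the record above: the Management API expects the key as a path segment, not a query parameter. The base URL is a placeholder.

```python
def unset_url(base, key):
    # Path-parameter form the delete endpoint actually expects.
    return "{}/rules-configs/{}".format(base, key)

# Before the fix: DELETE https://tenant/api/v2/rules-configs?key=FOO -> 404
# After the fix:
print(unset_url("https://tenant/api/v2", "FOO"))
# DELETE https://tenant/api/v2/rules-configs/FOO
```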
auth0__auth0-python-273
diff --git a/auth0/v3/authentication/token_verifier.py b/auth0/v3/authentication/token_verifier.py index ea411c0..1f44c08 100644 --- a/auth0/v3/authentication/token_verifier.py +++ b/auth0/v3/authentication/token_verifier.py @@ -229,6 +229,9 @@ class TokenVerifier(): organization (str, optional): The expected organization ID (org_id) claim value. This should be specified when logging in to an organization. + Returns: + the decoded payload from the token + Raises: TokenValidationError: when the token cannot be decoded, the token signing algorithm is not the expected one, the token signature is invalid or the token has a claim missing or with unexpected value. @@ -244,6 +247,8 @@ class TokenVerifier(): # Verify claims self._verify_payload(payload, nonce, max_age, organization) + return payload + def _verify_payload(self, payload, nonce=None, max_age=None, organization=None): try: # on Python 2.7, 'str' keys as parsed as 'unicode'
auth0/auth0-python
fbeab6a9a92ff51f9cdf6e8e5ab2bdeff683dcf3
diff --git a/auth0/v3/test/authentication/test_token_verifier.py b/auth0/v3/test/authentication/test_token_verifier.py index 7ff0eee..d1306d3 100644 --- a/auth0/v3/test/authentication/test_token_verifier.py +++ b/auth0/v3/test/authentication/test_token_verifier.py @@ -390,7 +390,7 @@ class TestTokenVerifier(unittest.TestCase): audience=expectations['audience'] ) tv._clock = MOCKED_CLOCK - tv.verify(token, organization='org_123') + tv.verify(token, organization='org_123') def test_fails_when_org_specified_but_not_present(self): token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhdXRoMHxzZGs0NThma3MiLCJhdWQiOiJ0b2tlbnMtdGVzdC0xMjMiLCJpc3MiOiJodHRwczovL3Rva2Vucy10ZXN0LmF1dGgwLmNvbS8iLCJleHAiOjE1ODc3NjUzNjEsImlhdCI6MTU4NzU5MjU2MX0.wotJnUdD5IfdZMewF_-BnHc0pI56uwzwr5qaSXvSu9w" @@ -402,4 +402,22 @@ class TestTokenVerifier(unittest.TestCase): def test_fails_when_org_specified_but_does_not_match(self): token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhdXRoMHxzZGs0NThma3MiLCJhdWQiOiJ0b2tlbnMtdGVzdC0xMjMiLCJvcmdfaWQiOiJvcmdfMTIzIiwiaXNzIjoiaHR0cHM6Ly90b2tlbnMtdGVzdC5hdXRoMC5jb20vIiwiZXhwIjoxNTg3NzY1MzYxLCJpYXQiOjE1ODc1OTI1NjF9.hjSPgJpg0Dn2z0giCdGqVLD5Kmqy_yMYlSkgwKD7ahQ" - self.assert_fails_with_error(token, 'Organization (org_id) claim mismatch in the ID token; expected "org_abc", found "org_123"', signature_verifier=SymmetricSignatureVerifier(HMAC_SHARED_SECRET), organization='org_abc') \ No newline at end of file + self.assert_fails_with_error(token, 'Organization (org_id) claim mismatch in the ID token; expected "org_abc", found "org_123"', signature_verifier=SymmetricSignatureVerifier(HMAC_SHARED_SECRET), organization='org_abc') + + def test_verify_returns_payload(self): + token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhdXRoMHxzZGs0NThma3MiLCJhdWQiOiJ0b2tlbnMtdGVzdC0xMjMiLCJvcmdfaWQiOiJvcmdfMTIzIiwiaXNzIjoiaHR0cHM6Ly90b2tlbnMtdGVzdC5hdXRoMC5jb20vIiwiZXhwIjoxNTg3NzY1MzYxLCJpYXQiOjE1ODc1OTI1NjF9.hjSPgJpg0Dn2z0giCdGqVLD5Kmqy_yMYlSkgwKD7ahQ" + sv = SymmetricSignatureVerifier(HMAC_SHARED_SECRET) + tv = TokenVerifier( + signature_verifier=sv, + issuer=expectations['issuer'], + audience=expectations['audience'] + ) + tv._clock = MOCKED_CLOCK + response = tv.verify(token) + self.assertIn('sub', response); + self.assertIn('aud', response); + self.assertIn('org_id', response); + self.assertIn('iss', response); + self.assertIn('exp', response); + self.assertIn('iat', response); + self.assertEqual('org_123', response['org_id'])
Returning boolean on token validator

Hi, I'm using the token validator of the authentication submodule. It raises a TokenValidationError if validation fails, and returns nothing (so the method returns None, per Python's default) if validation succeeds. I'm not sure this is a good idea. If I use this method inside an if condition and assume a valid token makes the condition true, it won't work, because the method returns None, which evaluates as False. In my opinion, the method should return True when a token is validated. And IMHO it should return False instead of raising an error, but that would be a breaking change, so I'm not sure it is necessary.
0.0
fbeab6a9a92ff51f9cdf6e8e5ab2bdeff683dcf3
[ "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_verify_returns_payload" ]
[ "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_asymmetric_verifier_fetches_key", "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_asymmetric_verifier_uses_rs256_alg", "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_fail_at_creation_with_invalid_algorithm", "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_fails_with_none_algorithm", "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_symmetric_verifier_fetches_key", "auth0/v3/test/authentication/test_token_verifier.py::TestSignatureVerifier::test_symmetric_verifier_uses_hs256_alg", "auth0/v3/test/authentication/test_token_verifier.py::TestJwksFetcher::test_fails_to_fetch_jwks_json_after_retrying_twice", "auth0/v3/test/authentication/test_token_verifier.py::TestJwksFetcher::test_fetches_jwks_json_forced_on_cache_miss", "auth0/v3/test/authentication/test_token_verifier.py::TestJwksFetcher::test_fetches_jwks_json_once_on_cache_miss", "auth0/v3/test/authentication/test_token_verifier.py::TestJwksFetcher::test_get_jwks_json_once_on_cache_hit", "auth0/v3/test/authentication/test_token_verifier.py::TestJwksFetcher::test_get_jwks_json_twice_on_cache_expired", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_HS256_token_signature_fails", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_HS256_token_signature_passes", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_RS256_token_signature_fails", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_RS256_token_signature_passes", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_err_token_empty", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_err_token_format_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_at_creation_with_invalid_signature_verifier", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_when_max_age_sent_with_auth_time_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_when_max_age_sent_with_auth_time_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_when_org_specified_but_does_not_match", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_when_org_specified_but_not_", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_when_org_specified_but_not_present", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_algorithm_not_supported", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_aud_array_and_azp_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_aud_array_and_azp_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_aud_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_aud_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_exp_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_exp_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_iat_missing", 
"auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_iss_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_iss_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_nonce_invalid", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_nonce_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_fails_with_sub_missing", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_passes_when_nonce_missing_but_not_required", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_passes_when_org_present_and_matches", "auth0/v3/test/authentication/test_token_verifier.py::TestTokenVerifier::test_passes_when_org_present_but_not_required" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2021-04-25 11:28:25+00:00
mit
1,255
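A usage sketch of the behavior the patch above enables: `verify` now returns the decoded claims instead of `None`, while still raising `TokenValidationError` on failure. The tenant, audience, and token are placeholders.

```python
from auth0.v3.authentication.token_verifier import (
    AsymmetricSignatureVerifier, TokenVerifier)

sv = AsymmetricSignatureVerifier("https://tenant.auth0.com/.well-known/jwks.json")
tv = TokenVerifier(signature_verifier=sv,
                   issuer="https://tenant.auth0.com/",
                   audience="my-client-id")

id_token = "eyJ..."  # substitute a real RS256 ID token

claims = tv.verify(id_token)  # decoded payload on success
print(claims["sub"], claims.get("org_id"))
```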
auth0__auth0-python-354
diff --git a/auth0/v3/management/auth0.py b/auth0/v3/management/auth0.py index fb6bc90..c28bfcb 100644 --- a/auth0/v3/management/auth0.py +++ b/auth0/v3/management/auth0.py @@ -80,7 +80,15 @@ class Auth0(object): for name, cls in modules.items(): cls = asyncify(cls) - setattr(self, name, cls(domain=domain, token=token, rest_options=None)) + setattr( + self, + name, + cls(domain=domain, token=token, rest_options=rest_options), + ) else: for name, cls in modules.items(): - setattr(self, name, cls(domain=domain, token=token, rest_options=None)) + setattr( + self, + name, + cls(domain=domain, token=token, rest_options=rest_options), + )
auth0/auth0-python
c09e3c289b87b53f70ecd28d45b790ac106f6f31
diff --git a/auth0/v3/test/management/test_auth0.py b/auth0/v3/test/management/test_auth0.py index ea2d618..15ce786 100644 --- a/auth0/v3/test/management/test_auth0.py +++ b/auth0/v3/test/management/test_auth0.py @@ -29,6 +29,7 @@ from ...management.tickets import Tickets from ...management.user_blocks import UserBlocks from ...management.users import Users from ...management.users_by_email import UsersByEmail +from ...rest import RestClientOptions class TestAuth0(unittest.TestCase): @@ -120,3 +121,8 @@ class TestAuth0(unittest.TestCase): def test_users(self): self.assertIsInstance(self.a0.users, Users) + + def test_args(self): + rest_options = RestClientOptions(retries=99) + auth0 = Auth0(self.domain, self.token, rest_options=rest_options) + self.assertEqual(auth0.users.client.options.retries, 99)
Version 3.23.0 no longer supports RestClientOptions

If you look at the Auth0 class constructor, the rest_options parameter is not used. This means rate-limit retries will not be used.
0.0
c09e3c289b87b53f70ecd28d45b790ac106f6f31
[ "auth0/v3/test/management/test_auth0.py::TestAuth0::test_args" ]
[ "auth0/v3/test/management/test_auth0.py::TestAuth0::test_actions", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_attack_protection", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_blacklists", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_client_grants", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_clients", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_connections", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_custom_domains", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_device_credentials", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_email_templates", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_emails", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_grants", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_guardian", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_hooks", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_jobs", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_log_streams", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_logs", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_organizations", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_prompts", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_resource_servers", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_roles", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_rules", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_rules_configs", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_stats", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_tenants", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_tickets", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_user_blocks", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_users", "auth0/v3/test/management/test_auth0.py::TestAuth0::test_users_by_email" ]
{ "failed_lite_validators": [ "has_short_problem_statement" ], "has_test_patch": true, "is_lite": false }
2022-06-10 12:03:23+00:00
mit
1,256
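A usage sketch of the configuration the fix above restores: a `RestClientOptions` passed to the top-level `Auth0` client now reaches every sub-client. Domain and token are placeholders; `retries=5` is an arbitrary example value.

```python
from auth0.v3.management import Auth0
from auth0.v3.rest import RestClientOptions

options = RestClientOptions(retries=5)  # retry on rate-limit (429) responses
auth0 = Auth0("myaccount.auth0.com", "A_JWT_TOKEN", rest_options=options)

print(auth0.users.client.options.retries)  # -> 5, as the regression test checks
```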
auth0__auth0-python-40
diff --git a/README.rst b/README.rst index 0a16d58..d9cbd8a 100644 --- a/README.rst +++ b/README.rst @@ -54,10 +54,10 @@ To use the management library you will need to instantiate an Auth0 object with from auth0.v2.management import Auth0 domain = 'myaccount.auth0.com' - token = '{A_JWT_TOKEN}' # You can generate one of these by using the + token = 'A_JWT_TOKEN' # You can generate one of these by using the # token generator at: https://auth0.com/docs/api/v2 - auth0 = Auth0('myuser.auth0.com', token) + auth0 = Auth0('myaccount.auth0.com', token) The ``Auth0()`` object is now ready to take orders! Let's see how we can use this to get all available connections. diff --git a/auth0/v2/authentication/users.py b/auth0/v2/authentication/users.py index e031a75..bec1b4d 100644 --- a/auth0/v2/authentication/users.py +++ b/auth0/v2/authentication/users.py @@ -46,5 +46,5 @@ class Users(AuthenticationBase): return self.post( url='https://%s/tokeninfo' % self.domain, data={'id_token': jwt}, - headers={'Content-Type: application/json'} + headers={'Content-Type': 'application/json'} ) diff --git a/examples/flask-webapp/public/app.js b/examples/flask-webapp/public/app.js index 16a837e..f36c6ab 100644 --- a/examples/flask-webapp/public/app.js +++ b/examples/flask-webapp/public/app.js @@ -1,10 +1,12 @@ $(document).ready(function() { - var lock = new Auth0Lock(AUTH0_CLIENT_ID, AUTH0_DOMAIN ); + var lock = new Auth0Lock(AUTH0_CLIENT_ID, AUTH0_DOMAIN, { + auth: { + redirectUrl: AUTH0_CALLBACK_URL + } + }); $('.btn-login').click(function(e) { e.preventDefault(); - lock.show({ - callbackURL: AUTH0_CALLBACK_URL - }); + lock.show(); }); }); diff --git a/examples/flask-webapp/server.py b/examples/flask-webapp/server.py index df54eb4..56cdc97 100644 --- a/examples/flask-webapp/server.py +++ b/examples/flask-webapp/server.py @@ -70,12 +70,5 @@ def callback_handling(): return redirect('/dashboard') - - - - - - - if __name__ == "__main__": app.run(host='0.0.0.0', port = int(os.environ.get('PORT', 3000))) diff --git a/examples/flask-webapp/templates/home.html b/examples/flask-webapp/templates/home.html index e6018ea..9bbf057 100644 --- a/examples/flask-webapp/templates/home.html +++ b/examples/flask-webapp/templates/home.html @@ -1,7 +1,7 @@ <html> <head> - <script src="http://code.jquery.com/jquery-2.1.1.min.js" type="text/javascript"></script> - <script src="https://cdn.auth0.com/js/lock-9.0.js"></script> + <script src="http://code.jquery.com/jquery-3.1.0.min.js" type="text/javascript"></script> + <script src="https://cdn.auth0.com/js/lock/10.0/lock.min.js"></script> <script type="text/javascript" src="//use.typekit.net/iws6ohy.js"></script> <script type="text/javascript">try{Typekit.load();}catch(e){}</script> @@ -9,8 +9,8 @@ <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- font awesome from BootstrapCDN --> - <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css" rel="stylesheet"> - <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet"> + <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet"> + <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet"> <script> var AUTH0_CLIENT_ID = '{{env.AUTH0_CLIENT_ID}}';
auth0/auth0-python
013851d48025ec202464721c23d65156cd138565
diff --git a/auth0/v2/test/authentication/test_users.py b/auth0/v2/test/authentication/test_users.py index c842f55..446301d 100644 --- a/auth0/v2/test/authentication/test_users.py +++ b/auth0/v2/test/authentication/test_users.py @@ -27,5 +27,5 @@ class TestUsers(unittest.TestCase): mock_post.assert_called_with( url='https://my.domain.com/tokeninfo', data={'id_token': 'jwtoken'}, - headers={'Content-Type: application/json'} + headers={'Content-Type': 'application/json'} )
authentication.Users(client_domain).tokeninfo fails

I've got a traceback:

```python
In [78]: user_authentication.tokeninfo(id_token)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-78-decf4417ce18> in <module>()
----> 1 user_authentication.tokeninfo(id_token)

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/auth0/v2/authentication/users.pyc in tokeninfo(self, jwt)
     47             url='https://%s/tokeninfo' % self.domain,
     48             data={'id_token': jwt},
---> 49             headers={'Content-Type: application/json'}
     50         )

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/auth0/v2/authentication/base.pyc in post(self, url, data, headers)
      8     def post(self, url, data={}, headers={}):
      9         response = requests.post(url=url, data=json.dumps(data),
---> 10                                  headers=headers)
     11         return self._process_response(response)
     12 

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/api.pyc in post(url, data, json, **kwargs)
    107     """
    108 
--> 109     return request('post', url, data=data, json=json, **kwargs)
    110 
    111 

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/api.pyc in request(method, url, **kwargs)
     48 
     49     session = sessions.Session()
---> 50     response = session.request(method=method, url=url, **kwargs)
     51     # By explicitly closing the session, we avoid leaving sockets open which
     52     # can trigger a ResourceWarning in some cases, and look like a memory leak

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/sessions.pyc in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    452             hooks = hooks,
    453         )
--> 454         prep = self.prepare_request(req)
    455 
    456         proxies = proxies or {}

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/sessions.pyc in prepare_request(self, request)
    386             auth=merge_setting(auth, self.auth),
    387             cookies=merged_cookies,
--> 388             hooks=merge_hooks(request.hooks, self.hooks),
    389         )
    390         return p

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/models.pyc in prepare(self, method, url, headers, files, data, params, auth, cookies, hooks, json)
    292         self.prepare_method(method)
    293         self.prepare_url(url, params)
--> 294         self.prepare_headers(headers)
    295         self.prepare_cookies(cookies)
    296         self.prepare_body(data, files, json)

/home/ale/.virtualenvs/auth0/lib/python2.7/site-packages/requests/models.pyc in prepare_headers(self, headers)
    400 
    401         if headers:
--> 402             self.headers = CaseInsensitiveDict((to_native_string(name), value) for name, value in headers.items())
    403         else:
    404             self.headers = CaseInsensitiveDict()

AttributeError: 'set' object has no attribute 'items'
```
0.0
013851d48025ec202464721c23d65156cd138565
[ "auth0/v2/test/authentication/test_users.py::TestUsers::test_tokeninfo" ]
[ "auth0/v2/test/authentication/test_users.py::TestUsers::test_userinfo" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2016-04-29 12:24:26+00:00
mit
1,257
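A minimal pure-Python illustration of the root cause fixed above: `{'Content-Type: application/json'}` is a one-element set, whereas requests needs a header dict with an `.items()` method.

```python
wrong = {'Content-Type: application/json'}    # set literal: one string element
right = {'Content-Type': 'application/json'}  # dict: header name -> value

print(type(wrong).__name__)  # 'set'  -> requests' prepare_headers() blows up
print(type(right).__name__)  # 'dict' -> has .items(), headers are prepared
```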
auth0__auth0-python-463
diff --git a/auth0/v3/management/branding.py b/auth0/v3/management/branding.py index 644e441..0fc09cc 100644 --- a/auth0/v3/management/branding.py +++ b/auth0/v3/management/branding.py @@ -91,6 +91,5 @@ class Branding(object): return self.client.put( self._url("templates", "universal-login"), - type="put_universal-login_body", body={"template": body}, )
auth0/auth0-python
6ebd7a2c4cde47ae0b30c6d4ff9ff32b1cf05bc1
diff --git a/auth0/v3/test/management/test_branding.py b/auth0/v3/test/management/test_branding.py index 78ec9a1..ff9f25d 100644 --- a/auth0/v3/test/management/test_branding.py +++ b/auth0/v3/test/management/test_branding.py @@ -70,6 +70,5 @@ class TestBranding(unittest.TestCase): api.put.assert_called_with( "https://domain/api/v2/branding/templates/universal-login", - type="put_universal-login_body", body={"template": {"a": "b", "c": "d"}}, )
TypeError when updating universal login template

### Describe the problem
Auth0.branding.update_template_universal_login(htmlText) gives TypeError: RestClient.put() got an unexpected keyword argument 'type'

### What was the expected behavior?
It should update the universal login page

### Reproduction
- call update_template_universal_login()

update_template_universal_login() is defined as

```python
return self.client.put(
    self._url("templates", "universal-login"),
    type="put_universal-login_body",
    body={"template": body},
)
```

Removing the type parameter and changing body to data fixes the issue.

### Environment
- **Version of this library used:** auth0-python==3.24.0
0.0
6ebd7a2c4cde47ae0b30c6d4ff9ff32b1cf05bc1
[ "auth0/v3/test/management/test_branding.py::TestBranding::test_update_template_universal_login" ]
[ "auth0/v3/test/management/test_branding.py::TestBranding::test_delete_template_universal_login", "auth0/v3/test/management/test_branding.py::TestBranding::test_get", "auth0/v3/test/management/test_branding.py::TestBranding::test_get_template_universal_login", "auth0/v3/test/management/test_branding.py::TestBranding::test_init_with_optionals", "auth0/v3/test/management/test_branding.py::TestBranding::test_update" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-01-19 15:16:24+00:00
mit
1,258
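A short usage sketch of the call that previously raised the TypeError; tenant domain, token, and template HTML are placeholders.

```python
from auth0.v3.management.branding import Branding

branding = Branding(domain="myaccount.auth0.com", token="A_JWT_TOKEN")

# With the stray `type=` keyword removed, this sends
# {"template": "<html>...</html>"} to /branding/templates/universal-login.
branding.update_template_universal_login("<html>...</html>")
```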
auth0__auth0-python-477
diff --git a/auth0/management/branding.py b/auth0/management/branding.py index 38084a9..7d60cc5 100644 --- a/auth0/management/branding.py +++ b/auth0/management/branding.py @@ -93,3 +93,56 @@ class Branding: self._url("templates", "universal-login"), body={"template": body}, ) + + def get_default_branding_theme(self): + """Retrieve default branding theme. + + See: https://auth0.com/docs/api/management/v2#!/Branding/get_default_branding_theme + """ + + return self.client.get(self._url("themes", "default")) + + def get_branding_theme(self, theme_id): + """Retrieve branding theme. + + Args: + theme_id (str): The theme_id to retrieve branding theme for. + + See: https://auth0.com/docs/api/management/v2#!/Branding/get_branding_theme + """ + + return self.client.get(self._url("themes", theme_id)) + + def delete_branding_theme(self, theme_id): + """Delete branding theme. + + Args: + theme_id (str): The theme_id to delete branding theme for. + + See: https://auth0.com/docs/api/management/v2#!/Branding/delete_branding_theme + """ + + return self.client.delete(self._url("themes", theme_id)) + + def update_branding_theme(self, theme_id, body): + """Update branding theme. + + Args: + theme_id (str): The theme_id to update branding theme for. + body (dict): The attributes to set on the theme. + + See: https://auth0.com/docs/api/management/v2#!/Branding/patch_branding_theme + """ + + return self.client.patch(self._url("themes", theme_id), data=body) + + def create_branding_theme(self, body): + """Create branding theme. + + Args: + body (dict): The attributes to set on the theme. + + See: https://auth0.com/docs/api/management/v2#!/Branding/post_branding_theme + """ + + return self.client.post(self._url("themes"), data=body)
auth0/auth0-python
1a1b97594e13a3be7a522657c133a528a6992286
diff --git a/auth0/test/management/test_branding.py b/auth0/test/management/test_branding.py index 5f200d1..a10bf3b 100644 --- a/auth0/test/management/test_branding.py +++ b/auth0/test/management/test_branding.py @@ -71,3 +71,65 @@ class TestBranding(unittest.TestCase): "https://domain/api/v2/branding/templates/universal-login", body={"template": {"a": "b", "c": "d"}}, ) + + @mock.patch("auth0.management.branding.RestClient") + def test_get_default_branding_theme(self, mock_rc): + api = mock_rc.return_value + api.get.return_value = {} + + branding = Branding(domain="domain", token="jwttoken") + branding.get_default_branding_theme() + + api.get.assert_called_with( + "https://domain/api/v2/branding/themes/default", + ) + + @mock.patch("auth0.management.branding.RestClient") + def test_get_branding_theme(self, mock_rc): + api = mock_rc.return_value + api.get.return_value = {} + + branding = Branding(domain="domain", token="jwttoken") + branding.get_branding_theme("theme_id") + + api.get.assert_called_with( + "https://domain/api/v2/branding/themes/theme_id", + ) + + @mock.patch("auth0.management.branding.RestClient") + def test_delete_branding_theme(self, mock_rc): + api = mock_rc.return_value + api.delete.return_value = {} + + branding = Branding(domain="domain", token="jwttoken") + branding.delete_branding_theme("theme_id") + + api.delete.assert_called_with( + "https://domain/api/v2/branding/themes/theme_id", + ) + + @mock.patch("auth0.management.branding.RestClient") + def test_update_branding_theme(self, mock_rc): + api = mock_rc.return_value + api.patch.return_value = {} + + branding = Branding(domain="domain", token="jwttoken") + branding.update_branding_theme("theme_id", {}) + + api.patch.assert_called_with( + "https://domain/api/v2/branding/themes/theme_id", + data={}, + ) + + @mock.patch("auth0.management.branding.RestClient") + def test_create_branding_theme(self, mock_rc): + api = mock_rc.return_value + api.post.return_value = {} + + branding = Branding(domain="domain", token="jwttoken") + branding.create_branding_theme({}) + + api.post.assert_called_with( + "https://domain/api/v2/branding/themes", + data={}, + )
Add support for latest /branding endpoints

There are additional endpoints available for **Branding** under `/api/v2/branding/themes` that are not available in the latest version (4.0.0):

- [Branding/get_default_branding_theme](https://auth0.com/docs/api/management/v2#!/Branding/get_default_branding_theme)
- [Branding/get_branding_theme](https://auth0.com/docs/api/management/v2#!/Branding/get_branding_theme)
- [Branding/patch_branding_theme](https://auth0.com/docs/api/management/v2#!/Branding/patch_branding_theme)

... and so on
0.0
1a1b97594e13a3be7a522657c133a528a6992286
[ "auth0/test/management/test_branding.py::TestBranding::test_create_branding_theme", "auth0/test/management/test_branding.py::TestBranding::test_delete_branding_theme", "auth0/test/management/test_branding.py::TestBranding::test_get_branding_theme", "auth0/test/management/test_branding.py::TestBranding::test_get_default_branding_theme", "auth0/test/management/test_branding.py::TestBranding::test_update_branding_theme" ]
[ "auth0/test/management/test_branding.py::TestBranding::test_delete_template_universal_login", "auth0/test/management/test_branding.py::TestBranding::test_get", "auth0/test/management/test_branding.py::TestBranding::test_get_template_universal_login", "auth0/test/management/test_branding.py::TestBranding::test_init_with_optionals", "auth0/test/management/test_branding.py::TestBranding::test_update", "auth0/test/management/test_branding.py::TestBranding::test_update_template_universal_login" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-03-14 08:47:32+00:00
mit
1,259
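A brief usage sketch of the theme methods the patch above adds; the domain, token, and theme id are placeholders, and the update body is a trimmed stand-in for the full theme schema.

```python
from auth0.management.branding import Branding

branding = Branding(domain="myaccount.auth0.com", token="A_JWT_TOKEN")

default_theme = branding.get_default_branding_theme()
theme = branding.get_branding_theme("THEME_ID")

# Partial update; the real schema carries many more sections (colors, fonts, ...).
branding.update_branding_theme("THEME_ID", {"displayName": "My theme"})
```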
auth0__auth0-python-484
diff --git a/auth0/authentication/get_token.py b/auth0/authentication/get_token.py index bb697a2..4986e55 100644 --- a/auth0/authentication/get_token.py +++ b/auth0/authentication/get_token.py @@ -118,9 +118,9 @@ class GetToken(AuthenticationBase): self, username, password, - scope, - realm, - audience, + scope=None, + realm=None, + audience=None, grant_type="http://auth0.com/oauth/grant-type/password-realm", ): """Calls /oauth/token endpoint with password-realm grant type @@ -134,18 +134,18 @@ class GetToken(AuthenticationBase): this information. Args: - audience (str): The unique identifier of the target API you want to access. - username (str): Resource owner's identifier password (str): resource owner's Secret - scope(str): String value of the different scopes the client is asking for. + scope(str, optional): String value of the different scopes the client is asking for. Multiple scopes are separated with whitespace. - realm (str): String value of the realm the user belongs. + realm (str, optional): String value of the realm the user belongs. Set this if you want to add realm support at this grant. + audience (str, optional): The unique identifier of the target API you want to access. + grant_type (str, optional): Denotes the flow you're using. For password realm use http://auth0.com/oauth/grant-type/password-realm
auth0/auth0-python
1e2b7be44da8ff981b63758497feb2c6b39209ed
diff --git a/auth0/test/authentication/test_get_token.py b/auth0/test/authentication/test_get_token.py index 7dd9f49..f2c0b34 100644 --- a/auth0/test/authentication/test_get_token.py +++ b/auth0/test/authentication/test_get_token.py @@ -163,6 +163,32 @@ class TestGetToken(unittest.TestCase): }, ) + @mock.patch("auth0.rest.RestClient.post") + def test_login_simple(self, mock_post): + g = GetToken("my.domain.com", "cid", client_secret="clsec") + + g.login( + username="usrnm", + password="pswd", + ) + + args, kwargs = mock_post.call_args + + self.assertEqual(args[0], "https://my.domain.com/oauth/token") + self.assertEqual( + kwargs["data"], + { + "client_id": "cid", + "client_secret": "clsec", + "username": "usrnm", + "password": "pswd", + "realm": None, + "scope": None, + "audience": None, + "grant_type": "http://auth0.com/oauth/grant-type/password-realm", + }, + ) + @mock.patch("auth0.rest.RestClient.post") def test_refresh_token(self, mock_post): g = GetToken("my.domain.com", "cid", client_secret="clsec")
Outdated Readme.md

### Describe the problem
Documentation on Readme.md is outdated.

### Reproduction
```python
from auth0.authentication import Database

database = Database('my-domain.us.auth0.com', 'my-client-id')

database.signup(email='[email protected]', password='secr3t', connection='Username-Password-Authentication')
```

The above code produces: `ModuleNotFoundError: No module named 'auth0.authentication'`

I think it should be:

```python
from auth0.v3.authentication import Database

database = Database('my-domain.us.auth0.com')

database.signup(email='[email protected]', password='secr3t', connection='Username-Password-Authentication', client_id='client_id')
```

Same for the rest of the examples. Login requires `token.login` to pass scope and audience.

I'm using Python 3.9 and the latest pip install auth0-python
0.0
1e2b7be44da8ff981b63758497feb2c6b39209ed
[ "auth0/test/authentication/test_get_token.py::TestGetToken::test_login_simple" ]
[ "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code", "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code_pkce", "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code_with_client_assertion", "auth0/test/authentication/test_get_token.py::TestGetToken::test_client_credentials", "auth0/test/authentication/test_get_token.py::TestGetToken::test_client_credentials_with_client_assertion", "auth0/test/authentication/test_get_token.py::TestGetToken::test_login", "auth0/test/authentication/test_get_token.py::TestGetToken::test_passwordless_login_with_email", "auth0/test/authentication/test_get_token.py::TestGetToken::test_passwordless_login_with_sms", "auth0/test/authentication/test_get_token.py::TestGetToken::test_refresh_token" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-03-27 14:12:09+00:00
mit
1,260
auth0__auth0-python-499
diff --git a/auth0/asyncify.py b/auth0/asyncify.py index d57bc70..091d049 100644 --- a/auth0/asyncify.py +++ b/auth0/asyncify.py @@ -1,5 +1,7 @@ import aiohttp +from auth0.authentication.base import AuthenticationBase +from auth0.rest import RestClientOptions from auth0.rest_async import AsyncRestClient @@ -19,7 +21,7 @@ def asyncify(cls): if callable(getattr(cls, func)) and not func.startswith("_") ] - class AsyncClient(cls): + class AsyncManagementClient(cls): def __init__( self, domain, @@ -29,40 +31,47 @@ def asyncify(cls): protocol="https", rest_options=None, ): - if token is None: - # Wrap the auth client - super().__init__(domain, telemetry, timeout, protocol) - else: - # Wrap the mngtmt client - super().__init__( - domain, token, telemetry, timeout, protocol, rest_options - ) + super().__init__(domain, token, telemetry, timeout, protocol, rest_options) self.client = AsyncRestClient( jwt=token, telemetry=telemetry, timeout=timeout, options=rest_options ) - class Wrapper(cls): + class AsyncAuthenticationClient(cls): def __init__( self, domain, - token=None, + client_id, + client_secret=None, + client_assertion_signing_key=None, + client_assertion_signing_alg=None, telemetry=True, timeout=5.0, protocol="https", - rest_options=None, ): - if token is None: - # Wrap the auth client - super().__init__(domain, telemetry, timeout, protocol) - else: - # Wrap the mngtmt client - super().__init__( - domain, token, telemetry, timeout, protocol, rest_options - ) - - self._async_client = AsyncClient( - domain, token, telemetry, timeout, protocol, rest_options + super().__init__( + domain, + client_id, + client_secret, + client_assertion_signing_key, + client_assertion_signing_alg, + telemetry, + timeout, + protocol, + ) + self.client = AsyncRestClient( + None, + options=RestClientOptions( + telemetry=telemetry, timeout=timeout, retries=0 + ), ) + + class Wrapper(cls): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + if AuthenticationBase in cls.__bases__: + self._async_client = AsyncAuthenticationClient(*args, **kwargs) + else: + self._async_client = AsyncManagementClient(*args, **kwargs) for method in methods: setattr( self,
auth0/auth0-python
53c326a8e4828c4f552169e6167c7f2f8aa46205
diff --git a/auth0/test_async/test_asyncify.py b/auth0/test_async/test_asyncify.py index 2f98102..8a80bef 100644 --- a/auth0/test_async/test_asyncify.py +++ b/auth0/test_async/test_asyncify.py @@ -12,9 +12,11 @@ from aioresponses import CallbackResult, aioresponses from callee import Attrs from auth0.asyncify import asyncify +from auth0.authentication import GetToken from auth0.management import Clients, Guardian, Jobs clients = re.compile(r"^https://example\.com/api/v2/clients.*") +token = re.compile(r"^https://example\.com/oauth/token.*") factors = re.compile(r"^https://example\.com/api/v2/guardian/factors.*") users_imports = re.compile(r"^https://example\.com/api/v2/jobs/users-imports.*") payload = {"foo": "bar"} @@ -84,6 +86,31 @@ class TestAsyncify(getattr(unittest, "IsolatedAsyncioTestCase", object)): timeout=ANY, ) + @aioresponses() + async def test_post_auth(self, mocked): + callback, mock = get_callback() + mocked.post(token, callback=callback) + c = asyncify(GetToken)("example.com", "cid", client_secret="clsec") + self.assertEqual( + await c.login_async(username="usrnm", password="pswd"), payload + ) + mock.assert_called_with( + Attrs(path="/oauth/token"), + allow_redirects=True, + json={ + "client_id": "cid", + "username": "usrnm", + "password": "pswd", + "realm": None, + "scope": None, + "audience": None, + "grant_type": "http://auth0.com/oauth/grant-type/password-realm", + "client_secret": "clsec", + }, + headers={i: headers[i] for i in headers if i != "Authorization"}, + timeout=ANY, + ) + @aioresponses() async def test_file_post(self, mocked): callback, mock = get_callback()
GetToken and asyncify: Algorithm not supported

### Checklist
- [X] I have looked into the [Readme](https://github.com/auth0/auth0-python#readme) and [Examples](https://github.com/auth0/auth0-python/blob/master/EXAMPLES.md), and have not found a suitable solution or answer.
- [X] I have looked into the [API documentation](https://auth0-python.readthedocs.io/en/latest/) and have not found a suitable solution or answer.
- [X] I have searched the [issues](https://github.com/auth0/auth0-python/issues) and have not found a suitable solution or answer.
- [X] I have searched the [Auth0 Community](https://community.auth0.com) forums and have not found a suitable solution or answer.
- [X] I agree to the terms within the [Auth0 Code of Conduct](https://github.com/auth0/open-source-template/blob/master/CODE-OF-CONDUCT.md).

### Description
When wrapping `GetToken` with `asyncify`, it passes the wrong arguments via the wrapper, and it fails with an exception 'Algorithm not supported'. I believe the error resides [here](https://github.com/auth0/auth0-python/blob/master/auth0/asyncify.py#L59-L61). When I debug this piece of code, I see that it initializes the `AuthenticationBase` with `client_assertion_signing_alg = "https"`. That makes me believe that the order of the arguments passed onto `asyncify(GetToken)` -> `GetToken` -> `AuthenticationBase` is incorrect when using `asyncify`.

### Reproduction
This is what I basically do (I'm migrating from 3.x to 4.x):

```python
AsyncGetToken = asyncify(GetToken)
get_token = AsyncGetToken(domain, client_id)

# This fails with 'Algorithm not supported'.
response = await self.get_token.login_async(
    username=username,
    password=password,
    realm="Username-Password-Authentication",
    scope="openid profile email",
)
```

### Additional context
Stack trace:

```
Traceback (most recent call last):
  File "lib/python3.10/site-packages/jwt/api_jws.py", line 95, in get_algorithm_by_name
    return self._algorithms[alg_name]
KeyError: 'https'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "cli.py", line 178, in run
    sys.exit(asyncio.run(main(sys.argv)))
  File "/usr/local/Cellar/[email protected]/3.10.12/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/Cellar/[email protected]/3.10.12/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "cli.py", line 167, in main
    await command_devices(arguments)
  File "cli.py", line 55, in command_devices
    await account.update_devices()
  File "client.py", line 61, in update_devices
    device.id: device for device in await self.traccar_api.get_devices()
  File "api.py", line 49, in wrapper
    return await func(*args, **kwargs)
  File "api.py", line 217, in get_devices
    response = await self._get("devices")
  File "api.py", line 91, in wrapper
    return await func(*args, **kwargs)
  File "api.py", line 375, in _get
    await self.identity_api.login()
  File "api.py", line 154, in login
    response = await self.get_token.login_async(
  File "lib/python3.10/site-packages/auth0/asyncify.py", line 10, in closure
    return await m(*args, **kwargs)
  File "lib/python3.10/site-packages/auth0/authentication/get_token.py", line 156, in login
    return self.authenticated_post(
  File "lib/python3.10/site-packages/auth0/authentication/base.py", line 59, in authenticated_post
    url, data=self._add_client_authentication(data), headers=headers
  File "lib/python3.10/site-packages/auth0/authentication/base.py", line 45, in _add_client_authentication
    return add_client_authentication(
  File "lib/python3.10/site-packages/auth0/authentication/client_authentication.py", line 61, in add_client_authentication
    authenticated_payload["client_assertion"] = create_client_assertion_jwt(
  File "lib/python3.10/site-packages/auth0/authentication/client_authentication.py", line 23, in create_client_assertion_jwt
    return jwt.encode(
  File "lib/python3.10/site-packages/jwt/api_jwt.py", line 73, in encode
    return api_jws.encode(
  File "lib/python3.10/site-packages/jwt/api_jws.py", line 159, in encode
    alg_obj = self.get_algorithm_by_name(algorithm_)
  File "lib/python3.10/site-packages/jwt/api_jws.py", line 101, in get_algorithm_by_name
    raise NotImplementedError("Algorithm not supported") from e
NotImplementedError: Algorithm not supported
```

### auth0-python version
4.2.0

### Python version
3.10
0.0
53c326a8e4828c4f552169e6167c7f2f8aa46205
[ "auth0/test_async/test_asyncify.py::TestAsyncify::test_post_auth" ]
[ "auth0/test_async/test_asyncify.py::TestAsyncify::test_delete", "auth0/test_async/test_asyncify.py::TestAsyncify::test_file_post", "auth0/test_async/test_asyncify.py::TestAsyncify::test_get", "auth0/test_async/test_asyncify.py::TestAsyncify::test_patch", "auth0/test_async/test_asyncify.py::TestAsyncify::test_post", "auth0/test_async/test_asyncify.py::TestAsyncify::test_put", "auth0/test_async/test_asyncify.py::TestAsyncify::test_rate_limit", "auth0/test_async/test_asyncify.py::TestAsyncify::test_shared_session", "auth0/test_async/test_asyncify.py::TestAsyncify::test_timeout" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-06-22 12:38:55+00:00
mit
1,261
auth0__auth0-python-500
diff --git a/auth0/management/connections.py b/auth0/management/connections.py index d807607..b6492bf 100644 --- a/auth0/management/connections.py +++ b/auth0/management/connections.py @@ -52,6 +52,7 @@ class Connections: page=None, per_page=None, extra_params=None, + name=None, ): """Retrieves all connections. @@ -76,6 +77,8 @@ class Connections: the request. The fields, include_fields, page and per_page values specified as parameters take precedence over the ones defined here. + name (str): Provide the name of the connection to retrieve. + See: https://auth0.com/docs/api/management/v2#!/Connections/get_connections Returns: @@ -88,6 +91,7 @@ class Connections: params["include_fields"] = str(include_fields).lower() params["page"] = page params["per_page"] = per_page + params["name"] = name return self.client.get(self._url(), params=params)
auth0/auth0-python
0bf017662f79e2498264c8298a785dbc1f55a091
diff --git a/auth0/test/management/test_connections.py b/auth0/test/management/test_connections.py index 69c0714..1f27de6 100644 --- a/auth0/test/management/test_connections.py +++ b/auth0/test/management/test_connections.py @@ -33,6 +33,7 @@ class TestConnection(unittest.TestCase): "page": None, "per_page": None, "include_fields": "true", + "name": None, }, ) @@ -50,6 +51,7 @@ class TestConnection(unittest.TestCase): "page": None, "per_page": None, "include_fields": "false", + "name": None, }, ) @@ -67,6 +69,7 @@ class TestConnection(unittest.TestCase): "page": None, "per_page": None, "include_fields": "true", + "name": None, }, ) @@ -84,6 +87,7 @@ class TestConnection(unittest.TestCase): "page": 7, "per_page": 25, "include_fields": "true", + "name": None, }, ) @@ -102,6 +106,25 @@ class TestConnection(unittest.TestCase): "per_page": None, "include_fields": "true", "some_key": "some_value", + "name": None, + }, + ) + + # Name + c.all(name="foo") + + args, kwargs = mock_instance.get.call_args + + self.assertEqual("https://domain/api/v2/connections", args[0]) + self.assertEqual( + kwargs["params"], + { + "fields": None, + "strategy": None, + "page": None, + "per_page": None, + "include_fields": "true", + "name": "foo", }, )
Connections.all() should accept name

### Checklist
- [X] I have looked into the [Readme](https://github.com/auth0/auth0-python#readme) and [Examples](https://github.com/auth0/auth0-python/blob/master/EXAMPLES.md), and have not found a suitable solution or answer.
- [X] I have looked into the [API documentation](https://auth0-python.readthedocs.io/en/latest/) and have not found a suitable solution or answer.
- [X] I have searched the [issues](https://github.com/auth0/auth0-python/issues) and have not found a suitable solution or answer.
- [X] I have searched the [Auth0 Community](https://community.auth0.com) forums and have not found a suitable solution or answer.
- [X] I agree to the terms within the [Auth0 Code of Conduct](https://github.com/auth0/open-source-template/blob/master/CODE-OF-CONDUCT.md).

### Describe the problem you'd like to have solved
I would like this library to have closer parity with the Auth0 Management API, especially when it comes to available parameters. The Connections.all() method should accept a `name` argument to retrieve a Connection by name.

### Describe the ideal solution
The Connections.all() method should accept a `name` argument to retrieve a Connection by name.

### Alternatives and current workarounds
Retrieve all connections and iterate through the results to see if a Connection's name matches.
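With the patch applied, the lookup collapses to a single call. A minimal sketch; the tenant domain and Management API token are placeholders:

```python
from auth0.management import Connections

# Placeholders: use your tenant domain and a Management API token
connections = Connections("my-tenant.us.auth0.com", "MGMT_API_TOKEN")

# Fetch a connection by name instead of listing and filtering client-side
result = connections.all(name="Username-Password-Authentication")
```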
0.0
0bf017662f79e2498264c8298a785dbc1f55a091
[ "auth0/test/management/test_connections.py::TestConnection::test_all" ]
[ "auth0/test/management/test_connections.py::TestConnection::test_create", "auth0/test/management/test_connections.py::TestConnection::test_delete", "auth0/test/management/test_connections.py::TestConnection::test_delete_user_by_email", "auth0/test/management/test_connections.py::TestConnection::test_get", "auth0/test/management/test_connections.py::TestConnection::test_init_with_optionals", "auth0/test/management/test_connections.py::TestConnection::test_update" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-06-22 12:52:23+00:00
mit
1,262
auth0__auth0-python-501
diff --git a/auth0/authentication/get_token.py b/auth0/authentication/get_token.py index 9de8929..a7321b8 100644 --- a/auth0/authentication/get_token.py +++ b/auth0/authentication/get_token.py @@ -125,6 +125,7 @@ class GetToken(AuthenticationBase): realm: str | None = None, audience: str | None = None, grant_type: str = "http://auth0.com/oauth/grant-type/password-realm", + forwarded_for: str | None = None, ) -> Any: """Calls /oauth/token endpoint with password-realm grant type @@ -152,9 +153,16 @@ class GetToken(AuthenticationBase): grant_type (str, optional): Denotes the flow you're using. For password realm use http://auth0.com/oauth/grant-type/password-realm + forwarded_for (str, optional): End-user IP as a string value. Set this if you want + brute-force protection to work in server-side scenarios. + See https://auth0.com/docs/get-started/authentication-and-authorization-flow/avoid-common-issues-with-resource-owner-password-flow-and-attack-protection + Returns: access_token, id_token """ + headers = None + if forwarded_for: + headers = {"auth0-forwarded-for": forwarded_for} return self.authenticated_post( f"{self.protocol}://{self.domain}/oauth/token", @@ -167,6 +175,7 @@ class GetToken(AuthenticationBase): "audience": audience, "grant_type": grant_type, }, + headers=headers, ) def refresh_token(
auth0/auth0-python
5c818868ba2684fbf770365cd6dac5192a3436c9
diff --git a/auth0/test/authentication/test_get_token.py b/auth0/test/authentication/test_get_token.py index f2c0b34..7e91f63 100644 --- a/auth0/test/authentication/test_get_token.py +++ b/auth0/test/authentication/test_get_token.py @@ -189,6 +189,22 @@ class TestGetToken(unittest.TestCase): }, ) + @mock.patch("auth0.rest.RestClient.post") + def test_login_with_forwarded_for(self, mock_post): + g = GetToken("my.domain.com", "cid", client_secret="clsec") + + g.login(username="usrnm", password="pswd", forwarded_for="192.168.0.1") + + args, kwargs = mock_post.call_args + + self.assertEqual(args[0], "https://my.domain.com/oauth/token") + self.assertEqual( + kwargs["headers"], + { + "auth0-forwarded-for": "192.168.0.1", + }, + ) + @mock.patch("auth0.rest.RestClient.post") def test_refresh_token(self, mock_post): g = GetToken("my.domain.com", "cid", client_secret="clsec")
Add X-Forwarded-For header to Get Token requests [SDK-1941]

### Describe the problem you'd like to have solved
> I currently have an API that my customers hit to get a token. The API then uses Python to call Auth0 to exchange credentials for a token. The code uses `GetToken.login()`, `GetToken.client_credentials`, and `GetToken.refresh_token`. I've noticed that the API supports using the X-Forwarded-For header so that I could include the customer's IP address and Auth0 would use that IP address for any anomaly detection and for logging activity instead of all customers being tied to my API's IP address. However, the auth0-python library does not support adding the X-Forwarded-For header to the request.

### Describe the ideal solution
> The auth0-python `GetToken` methods support providing an X-Forwarded-For header to the request to the Auth0 API. This could either be a generic optional `headers` argument added to the method, with those headers added to the request, or it could be a specific optional `x_forwarded_for` argument that gets added to the request headers. That would allow me to inspect the IP address of my incoming request and add that to the request sent to the Auth0 API.

### Alternatives and current work-arounds
> I currently don't have any work-arounds in place, which means Auth0 only sees the IP address of my API, resulting in one bad user potentially impacting all users. A possible work-around is to stop using the auth0-python library and to simply make requests directly to the Auth0 API myself.
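The patch above exposes this as a `forwarded_for` keyword argument on `login()`, sent as the `auth0-forwarded-for` header. A minimal sketch with placeholder credentials and a documentation IP:

```python
from auth0.authentication import GetToken

get_token = GetToken("my-tenant.us.auth0.com", "client_id", client_secret="client_secret")

# end_user_ip would normally come from the incoming request
end_user_ip = "203.0.113.42"

response = get_token.login(
    username="[email protected]",
    password="secr3t",
    realm="Username-Password-Authentication",
    forwarded_for=end_user_ip,  # sent as the auth0-forwarded-for header
)
```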
0.0
5c818868ba2684fbf770365cd6dac5192a3436c9
[ "auth0/test/authentication/test_get_token.py::TestGetToken::test_login_with_forwarded_for" ]
[ "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code", "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code_pkce", "auth0/test/authentication/test_get_token.py::TestGetToken::test_authorization_code_with_client_assertion", "auth0/test/authentication/test_get_token.py::TestGetToken::test_client_credentials", "auth0/test/authentication/test_get_token.py::TestGetToken::test_client_credentials_with_client_assertion", "auth0/test/authentication/test_get_token.py::TestGetToken::test_login", "auth0/test/authentication/test_get_token.py::TestGetToken::test_login_simple", "auth0/test/authentication/test_get_token.py::TestGetToken::test_passwordless_login_with_email", "auth0/test/authentication/test_get_token.py::TestGetToken::test_passwordless_login_with_sms", "auth0/test/authentication/test_get_token.py::TestGetToken::test_refresh_token" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-06-22 13:13:35+00:00
mit
1,263
auth0__auth0-python-537
diff --git a/auth0/management/organizations.py b/auth0/management/organizations.py index 940aef7..dabaf6c 100644 --- a/auth0/management/organizations.py +++ b/auth0/management/organizations.py @@ -246,9 +246,14 @@ class Organizations: include_totals: bool = True, from_param: str | None = None, take: int | None = None, + fields: list[str] | None = None, + include_fields: bool = True, ): """Retrieves a list of all the organization members. + Member roles are not sent by default. Use `fields=roles` to retrieve the roles assigned to each listed member. + To use this parameter, you must include the `read:organization_member_roles scope` in the token. + Args: id (str): the ID of the organization. @@ -267,7 +272,14 @@ class Organizations: take (int, optional): The total amount of entries to retrieve when using the from parameter. When not set, the default value is up to the server. - See: https://auth0.com/docs/api/management/v2#!/Organizations/get_members + fields (list of str, optional): A list of fields to include or + exclude from the result (depending on include_fields). If fields is left blank, + all fields (except roles) are returned. + + include_fields (bool, optional): True if the fields specified are + to be included in the result, False otherwise. Defaults to True. + + See: https://auth0.com/docs/api/management/v2/organizations/get-members """ params = { @@ -276,6 +288,8 @@ class Organizations: "include_totals": str(include_totals).lower(), "from": from_param, "take": take, + "fields": fields and ",".join(fields) or None, + "include_fields": str(include_fields).lower(), } return self.client.get(self._url(id, "members"), params=params)
auth0/auth0-python
32a8cc8d23a0df5ecb57f72ed8b5f899a2dbef76
diff --git a/auth0/test/management/test_organizations.py b/auth0/test/management/test_organizations.py index a445ebf..ec1fc84 100644 --- a/auth0/test/management/test_organizations.py +++ b/auth0/test/management/test_organizations.py @@ -232,6 +232,8 @@ class TestOrganizations(unittest.TestCase): "include_totals": "true", "from": None, "take": None, + "fields": None, + "include_fields": "true", }, ) @@ -253,6 +255,8 @@ class TestOrganizations(unittest.TestCase): "include_totals": "false", "from": None, "take": None, + "fields": None, + "include_fields": "true", }, ) @@ -272,6 +276,29 @@ class TestOrganizations(unittest.TestCase): "page": None, "per_page": None, "include_totals": "true", + "fields": None, + "include_fields": "true", + }, + ) + + # With fields + c.all_organization_members("test-org", fields=["a,b"], include_fields=False) + + args, kwargs = mock_instance.get.call_args + + self.assertEqual( + "https://domain/api/v2/organizations/test-org/members", args[0] + ) + self.assertEqual( + kwargs["params"], + { + "page": None, + "per_page": None, + "include_totals": "true", + "from": None, + "take": None, + "fields": "a,b", + "include_fields": "false", }, )
`get_all_organization_members` should support `fields`

### Checklist
- [X] I have looked into the [Readme](https://github.com/auth0/auth0-python#readme) and [Examples](https://github.com/auth0/auth0-python/blob/master/EXAMPLES.md), and have not found a suitable solution or answer.
- [X] I have looked into the [API documentation](https://auth0-python.readthedocs.io/en/latest/) and have not found a suitable solution or answer.
- [X] I have searched the [issues](https://github.com/auth0/auth0-python/issues) and have not found a suitable solution or answer.
- [X] I have searched the [Auth0 Community](https://community.auth0.com) forums and have not found a suitable solution or answer.
- [X] I agree to the terms within the [Auth0 Code of Conduct](https://github.com/auth0/open-source-template/blob/master/CODE-OF-CONDUCT.md).

### Describe the problem you'd like to have solved
The endpoint for fetching all Organization members allows us to pass a `fields` parameter, specifically to include all members' roles in the response. The Python library hides this option, and it would be much more performant to get all roles in one request instead of fetching all users and then fetching each user's roles in a separate request.

### Describe the ideal solution
There are two ways it could play out: either by adding an `include_roles` flag, or by providing access to pass extra parameters via `**kwargs` to this and other functions.

### Alternatives and current workarounds
The current way we have to do this is the following:
1. Fetch all users in an organization
2. Iterate over each user, sending a new request to fetch that user's roles specifically
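With the patch applied, the roles can be fetched in a single listing call. A sketch assuming a token that includes the `read:organization_member_roles` scope; the domain, token, and organization ID are placeholders:

```python
from auth0.management import Organizations

organizations = Organizations("my-tenant.us.auth0.com", "MGMT_API_TOKEN")

members = organizations.all_organization_members(
    "org_123",           # placeholder organization ID
    fields=["roles"],    # sent to the API as fields=roles
    include_fields=True,
)
```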
0.0
32a8cc8d23a0df5ecb57f72ed8b5f899a2dbef76
[ "auth0/test/management/test_organizations.py::TestOrganizations::test_all_organization_members" ]
[ "auth0/test/management/test_organizations.py::TestOrganizations::test_all_organization_connections", "auth0/test/management/test_organizations.py::TestOrganizations::test_all_organization_invitations", "auth0/test/management/test_organizations.py::TestOrganizations::test_all_organization_member_roles", "auth0/test/management/test_organizations.py::TestOrganizations::test_all_organizations", "auth0/test/management/test_organizations.py::TestOrganizations::test_create_organization", "auth0/test/management/test_organizations.py::TestOrganizations::test_create_organization_connection", "auth0/test/management/test_organizations.py::TestOrganizations::test_create_organization_invitation", "auth0/test/management/test_organizations.py::TestOrganizations::test_create_organization_member_roles", "auth0/test/management/test_organizations.py::TestOrganizations::test_create_organization_members", "auth0/test/management/test_organizations.py::TestOrganizations::test_delete_organization", "auth0/test/management/test_organizations.py::TestOrganizations::test_delete_organization_connection", "auth0/test/management/test_organizations.py::TestOrganizations::test_delete_organization_invitation", "auth0/test/management/test_organizations.py::TestOrganizations::test_delete_organization_member_roles", "auth0/test/management/test_organizations.py::TestOrganizations::test_delete_organization_members", "auth0/test/management/test_organizations.py::TestOrganizations::test_get_organization", "auth0/test/management/test_organizations.py::TestOrganizations::test_get_organization_by_name", "auth0/test/management/test_organizations.py::TestOrganizations::test_get_organization_connection", "auth0/test/management/test_organizations.py::TestOrganizations::test_get_organization_invitation", "auth0/test/management/test_organizations.py::TestOrganizations::test_init_with_optionals", "auth0/test/management/test_organizations.py::TestOrganizations::test_update_organization", "auth0/test/management/test_organizations.py::TestOrganizations::test_update_organization_connection" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-10-19 11:00:13+00:00
mit
1,264
auth0__auth0-python-64
diff --git a/auth0/v3/authentication/base.py b/auth0/v3/authentication/base.py index b4ccd1d..778912c 100644 --- a/auth0/v3/authentication/base.py +++ b/auth0/v3/authentication/base.py @@ -19,8 +19,8 @@ class AuthenticationBase(object): except ValueError: return response.text else: - if 'error' in text: - raise Auth0Error(status_code=text['error'], - error_code=text['error'], - message=text['error_description']) + if response.status_code is None or response.status_code >= 400: + raise Auth0Error(status_code=response.status_code, + error_code=text.get('error', ''), + message=text.get('error_description', '')) return text
auth0/auth0-python
0d7decce20e04703d11a614bffc8f9d9f6e9723e
diff --git a/auth0/v3/test/authentication/test_base.py b/auth0/v3/test/authentication/test_base.py index c058864..d6539c8 100644 --- a/auth0/v3/test/authentication/test_base.py +++ b/auth0/v3/test/authentication/test_base.py @@ -10,6 +10,7 @@ class TestBase(unittest.TestCase): def test_post(self, mock_post): ab = AuthenticationBase() + mock_post.return_value.status_code = 200 mock_post.return_value.text = '{"x": "y"}' data = ab.post('the-url', data={'a': 'b'}, headers={'c': 'd'}) @@ -23,12 +24,14 @@ class TestBase(unittest.TestCase): def test_post_error(self, mock_post): ab = AuthenticationBase() - mock_post.return_value.text = '{"error": "e0",' \ - '"error_description": "desc"}' + for error_status in [400, 500, None]: + mock_post.return_value.status_code = error_status + mock_post.return_value.text = '{"error": "e0",' \ + '"error_description": "desc"}' - with self.assertRaises(Auth0Error) as context: - data = ab.post('the-url', data={'a': 'b'}, headers={'c': 'd'}) + with self.assertRaises(Auth0Error) as context: + data = ab.post('the-url', data={'a': 'b'}, headers={'c': 'd'}) - self.assertEqual(context.exception.status_code, 'e0') - self.assertEqual(context.exception.error_code, 'e0') - self.assertEqual(context.exception.message, 'desc') + self.assertEqual(context.exception.status_code, error_status) + self.assertEqual(context.exception.error_code, 'e0') + self.assertEqual(context.exception.message, 'desc')
Auth0Error not being raised due to inconsistent API error responses

Currently `Auth0Error` is raised whenever the API response contains an `error` key in the response JSON. Unfortunately at least one endpoint (`/dbconnections/signup`) returns inconsistent error messages (that do not always contain the `error` key) for different scenarios, and as a result `Auth0Error` is not raised when an error occurs.

Examples of inconsistent responses:

* when making a signup request with an email that is already registered:

```
{
  "code": "user_exists",
  "description": "The user already exists.",
  "name": "BadRequestError",
  "statusCode": 400
}
```

* when making a request with an invalid `client_id` (with public signup disabled)

```
{
  "name": "NotFoundError",
  "statusCode": 404
}
```

* when making a request with an invalid password (with password strength enabled)

```
{
  "code": "invalid_password",
  "description": {
    "rules": [
      {
        "code": "lengthAtLeast",
        "format": [ 6 ],
        "message": "At least %d characters in length",
        "verified": false
      }
    ],
    "verified": false
  },
  "message": "Password is too weak",
  "name": "PasswordStrengthError",
  "policy": "* At least 6 characters in length",
  "statusCode": 400
}
```

* when making a request with missing password

```
{
  "error": "password is required"
}
```

The last example highlights a related issue. Even though there is an `error` key, a `KeyError` exception will ultimately occur because `AuthenticationBase._process_response` assumes the additional existence of an `error_description` key when creating the `Auth0Error` and setting its message.
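After the fix, callers can rely on a single exception path regardless of the response shape. A sketch of the intended calling pattern; the v3 import path for `Auth0Error` and the placeholder values are assumptions based on this era of the library:

```python
from auth0.v3.authentication import Database
from auth0.v3.exceptions import Auth0Error  # assumed v3 location of Auth0Error

database = Database('my-domain.us.auth0.com')

try:
    database.signup(email='[email protected]', password='secr3t',
                    connection='Username-Password-Authentication',
                    client_id='client_id')
except Auth0Error as err:
    # Any response with status code >= 400 (or None) now raises, even when
    # the body lacks the 'error'/'error_description' keys.
    print(err.status_code, err.error_code, err.message)
```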
0.0
0d7decce20e04703d11a614bffc8f9d9f6e9723e
[ "auth0/v3/test/authentication/test_base.py::TestBase::test_post_error" ]
[ "auth0/v3/test/authentication/test_base.py::TestBase::test_post" ]
{ "failed_lite_validators": [ "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2017-05-19 09:54:07+00:00
mit
1,265
awdeorio__mailmerge-101
diff --git a/mailmerge/template_message.py b/mailmerge/template_message.py index 135bba8..9a12fc4 100644 --- a/mailmerge/template_message.py +++ b/mailmerge/template_message.py @@ -115,9 +115,11 @@ class TemplateMessage(object): # Copy text, preserving original encoding original_text = self._message.get_payload(decode=True) + original_subtype = self._message.get_content_subtype() original_encoding = str(self._message.get_charset()) multipart_message.attach(email.mime.text.MIMEText( original_text, + _subtype=original_subtype, _charset=original_encoding, ))
awdeorio/mailmerge
d08667a8f2a7a9e7ad64e58c2bc049a80726bf3c
diff --git a/tests/test_template_message.py b/tests/test_template_message.py index 346975f..621a2bd 100644 --- a/tests/test_template_message.py +++ b/tests/test_template_message.py @@ -611,7 +611,7 @@ def test_attachment_multiple(tmp_path): def test_attachment_empty(tmp_path): - """Errr on empty attachment field.""" + """Err on empty attachment field.""" template_path = tmp_path / "template.txt" template_path.write_text(textwrap.dedent(u"""\ TO: [email protected] @@ -626,6 +626,75 @@ def test_attachment_empty(tmp_path): template_message.render({}) +def test_contenttype_attachment_html_body(tmpdir): + """ + Verify that the content-type of the message is correctly retained with an + HTML body. + """ + # Simple attachment + attachment_path = Path(tmpdir/"attachment.txt") + attachment_path.write_text(u"Hello world\n") + + # HTML template + template_path = Path(tmpdir/"template.txt") + template_path.write_text(textwrap.dedent(u"""\ + TO: [email protected] + FROM: [email protected] + ATTACHMENT: attachment.txt + CONTENT-TYPE: text/html + + Hello world + """)) + + # Render in tmpdir + with tmpdir.as_cwd(): + template_message = TemplateMessage(template_path) + _, _, message = template_message.render({}) + + # Verify that the message content type is HTML + payload = message.get_payload() + assert len(payload) == 2 + assert payload[0].get_content_type() == 'text/html' + + +def test_contenttype_attachment_markdown_body(tmpdir): + """ + Verify that the content-types of the MarkDown message are correct when + attachments are included. + """ + # Simple attachment + attachment_path = Path(tmpdir/"attachment.txt") + attachment_path.write_text(u"Hello world\n") + + # HTML template + template_path = Path(tmpdir/"template.txt") + template_path.write_text(textwrap.dedent(u"""\ + TO: [email protected] + FROM: [email protected] + ATTACHMENT: attachment.txt + CONTENT-TYPE: text/markdown + + Hello **world** + """)) + + # Render in tmpdir + with tmpdir.as_cwd(): + template_message = TemplateMessage(template_path) + _, _, message = template_message.render({}) + + # Markdown: Make sure there is a plaintext part and an HTML part + payload = message.get_payload() + assert len(payload) == 3 + + # Ensure that the first part is plaintext and the second part + # is HTML (as per RFC 2046) + plaintext_part = payload[0] + assert plaintext_part['Content-Type'].startswith("text/plain") + + html_part = payload[1] + assert html_part['Content-Type'].startswith("text/html") + + def test_duplicate_headers_attachment(tmp_path): """Verify multipart messages do not contain duplicate headers.
Content-Type gets overwritten to text/plain with attachment

Template:

```
From: [email protected]
TO: [email protected]
SUBJECT: Hello
ATTACHMENT: a.txt
CONTENT-TYPE: text/html

<html>
<body>
<p>Hi,</p>
</body>
</html>
```

command: `mailmerge --database database.csv --template 04.tpl --output-format raw`

output:

```
>>> message 1
Content-Type: multipart/alternative; boundary="===============1872202902524908882=="
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
SUBJECT: Hello
From: [email protected]
TO: [email protected]
Date: Wed, 05 Aug 2020 21:37:21 -0000

--===============1872202902524908882==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

<html>
<body>
<p>Hi,</p>
</body>
</html>

--===============1872202902524908882==
Content-Type: application/octet-stream; Name="a.txt"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="a.txt"

aGVsbG93b3JsZA==

--===============1872202902524908882==--
>>> message 1 sent
>>> Limit was 1 message. To remove the limit, use the --no-limit option.
>>> This was a dry run. To send messages, use the --no-dry-run option.
```

The important part being `Content-Type: text/plain; charset="us-ascii"`, which is incorrect. Using a [html+plaintext](https://github.com/awdeorio/mailmerge#html-and-plain-text) message works, but that is more complicated.
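The core of the fix is a one-argument change: the original content subtype has to be passed through when the body is re-attached to the multipart message. A standalone illustration using only the standard library:

```python
import email.mime.text

# Without _subtype, MIMEText defaults to "plain", which is how the HTML
# body ended up labeled text/plain in the output above.
part = email.mime.text.MIMEText(
    "<html><body><p>Hi,</p></body></html>",
    _subtype="html",
    _charset="us-ascii",
)
print(part["Content-Type"])  # text/html; charset="us-ascii"
```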
0.0
d08667a8f2a7a9e7ad64e58c2bc049a80726bf3c
[ "tests/test_template_message.py::test_contenttype_attachment_html_body" ]
[ "tests/test_template_message.py::test_simple", "tests/test_template_message.py::test_no_substitutions", "tests/test_template_message.py::test_multiple_substitutions", "tests/test_template_message.py::test_bad_jinja", "tests/test_template_message.py::test_cc_bcc", "tests/test_template_message.py::test_html", "tests/test_template_message.py::test_html_plaintext", "tests/test_template_message.py::test_markdown", "tests/test_template_message.py::test_markdown_encoding", "tests/test_template_message.py::test_attachment_simple", "tests/test_template_message.py::test_attachment_relative", "tests/test_template_message.py::test_attachment_absolute", "tests/test_template_message.py::test_attachment_template", "tests/test_template_message.py::test_attachment_not_found", "tests/test_template_message.py::test_attachment_blank", "tests/test_template_message.py::test_attachment_tilde_path", "tests/test_template_message.py::test_attachment_multiple", "tests/test_template_message.py::test_attachment_empty", "tests/test_template_message.py::test_contenttype_attachment_markdown_body", "tests/test_template_message.py::test_duplicate_headers_attachment", "tests/test_template_message.py::test_duplicate_headers_markdown" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2020-08-10 03:33:57+00:00
mit
1,266
awdeorio__mailmerge-114
diff --git a/mailmerge/__init__.py b/mailmerge/__init__.py index dc48419..fd69f6d 100644 --- a/mailmerge/__init__.py +++ b/mailmerge/__init__.py @@ -7,4 +7,4 @@ Andrew DeOrio <[email protected]> from .sendmail_client import SendmailClient from .template_message import TemplateMessage -from .exceptions import MailmergeError +from .exceptions import MailmergeError, MailmergeRateLimitError diff --git a/mailmerge/__main__.py b/mailmerge/__main__.py index bfeb745..d9648db 100644 --- a/mailmerge/__main__.py +++ b/mailmerge/__main__.py @@ -5,6 +5,7 @@ Andrew DeOrio <[email protected]> """ from __future__ import print_function import sys +import time import codecs import textwrap import click @@ -112,9 +113,20 @@ def main(sample, dry_run, limit, no_limit, resume, template_message = TemplateMessage(template_path) csv_database = read_csv_database(database_path) sendmail_client = SendmailClient(config_path, dry_run) + for _, row in enumerate_range(csv_database, start, stop): sender, recipients, message = template_message.render(row) - sendmail_client.sendmail(sender, recipients, message) + while True: + try: + sendmail_client.sendmail(sender, recipients, message) + except exceptions.MailmergeRateLimitError: + print_bright_white_on_cyan( + ">>> rate limit exceeded, waiting ...", + output_format, + ) + else: + break + time.sleep(1) print_bright_white_on_cyan( ">>> message {message_num}" .format(message_num=message_num), @@ -127,6 +139,7 @@ def main(sample, dry_run, limit, no_limit, resume, output_format, ) message_num += 1 + except exceptions.MailmergeError as error: hint_text = '\nHint: "--resume {}"'.format(message_num) sys.exit( @@ -217,8 +230,18 @@ def create_sample_input_files(template_path, database_path, config_path): """)) with config_path.open("w") as config_file: config_file.write(textwrap.dedent(u"""\ + # Mailmerge SMTP Server Config + # https://github.com/awdeorio/mailmerge + # # Pro-tip: SSH or VPN into your network first to avoid spam # filters and server throttling. 
+ # + # Parameters + # host # SMTP server hostname or IP + # port # SMTP server port + # security # Security protocol: "SSL/TLS", "STARTTLS", or omit + # username # Username for SSL/TLS or STARTTLS security + # ratelimit # Rate limit in messages per minute, 0 for unlimited # Example: GMail [smtp_server] @@ -226,6 +249,7 @@ def create_sample_input_files(template_path, database_path, config_path): port = 465 security = SSL/TLS username = YOUR_USERNAME_HERE + ratelimit = 0 # Example: SSL/TLS # [smtp_server] @@ -233,6 +257,7 @@ def create_sample_input_files(template_path, database_path, config_path): # port = 465 # security = SSL/TLS # username = YOUR_USERNAME_HERE + # ratelimit = 0 # Example: STARTTLS security # [smtp_server] @@ -240,11 +265,13 @@ def create_sample_input_files(template_path, database_path, config_path): # port = 25 # security = STARTTLS # username = YOUR_USERNAME_HERE + # ratelimit = 0 # Example: No security # [smtp_server] # host = newman.eecs.umich.edu # port = 25 + # ratelimit = 0 """)) print(textwrap.dedent(u"""\ Created sample template email "{template_path}" diff --git a/mailmerge/exceptions.py b/mailmerge/exceptions.py index 7ba05b8..1d29adb 100644 --- a/mailmerge/exceptions.py +++ b/mailmerge/exceptions.py @@ -3,3 +3,7 @@ class MailmergeError(Exception): """Top level exception raised by mailmerge functions.""" + + +class MailmergeRateLimitError(MailmergeError): + """Reuse to send message because rate limit exceeded.""" diff --git a/mailmerge/sendmail_client.py b/mailmerge/sendmail_client.py index cd400fd..129f394 100644 --- a/mailmerge/sendmail_client.py +++ b/mailmerge/sendmail_client.py @@ -3,20 +3,26 @@ SMTP client reads configuration and sends message. Andrew DeOrio <[email protected]> """ +import collections import socket import smtplib import configparser import getpass +import datetime from . import exceptions from . import utils +# Type to store info read from config file +MailmergeConfig = collections.namedtuple( + "MailmergeConfig", + ["username", "host", "port", "security", "ratelimit"], +) + + class SendmailClient(object): """Represent a client connection to an SMTP server.""" - # This class is pretty simple. We don't need more than one public method. 
- # pylint: disable=too-few-public-methods - # # We need to inherit from object for Python 2 compantibility # https://python-future.org/compatible_idioms.html#custom-class-behaviour # pylint: disable=bad-option-value,useless-object-inheritance @@ -24,34 +30,50 @@ class SendmailClient(object): def __init__(self, config_path, dry_run=False): """Read configuration from server configuration file.""" self.config_path = config_path - self.dry_run = dry_run - self.username = None - self.password = None + self.dry_run = dry_run # Do not send real messages + self.config = None # Config read from config_path by read_config() + self.password = None # Password read from stdin + self.lastsent = None # Timestamp of last successful send + self.read_config() + + def read_config(self): + """Read configuration file and return a MailmergeConfig object.""" try: - config = configparser.RawConfigParser() - config.read(str(config_path)) - self.host = config.get("smtp_server", "host") - self.port = config.getint("smtp_server", "port") - self.security = config.get("smtp_server", "security", - fallback=None) - if self.security == "Never": - # Coerce legacy option "security = Never" - self.security = None - if self.security is not None: - # Read username only if needed - self.username = config.get("smtp_server", "username") - except configparser.Error as err: + parser = configparser.RawConfigParser() + parser.read(str(self.config_path)) + host = parser.get("smtp_server", "host") + port = parser.getint("smtp_server", "port") + security = parser.get("smtp_server", "security", fallback=None) + username = parser.get("smtp_server", "username", fallback=None) + ratelimit = parser.getint("smtp_server", "ratelimit", fallback=0) + except (configparser.Error, ValueError) as err: raise exceptions.MailmergeError( "{}: {}".format(self.config_path, err) ) + # Coerce legacy option "security = Never" + if security == "Never": + security = None + # Verify security type - if self.security not in [None, "SSL/TLS", "STARTTLS"]: + if security not in [None, "SSL/TLS", "STARTTLS"]: raise exceptions.MailmergeError( "{}: unrecognized security type: '{}'" - .format(self.config_path, self.security) + .format(self.config_path, security) + ) + + # Verify username + if security is not None and username is None: + raise exceptions.MailmergeError( + "{}: username is required for security type '{}'" + .format(self.config_path, security) ) + # Save validated configuration + self.config = MailmergeConfig( + username, host, port, security, ratelimit, + ) + def sendmail(self, sender, recipients, message): """Send email message. 
@@ -62,41 +84,52 @@ class SendmailClient(object): if self.dry_run: return + # Check if we've hit the rate limit + now = datetime.datetime.now() + if self.config.ratelimit and self.lastsent: + waittime = datetime.timedelta(minutes=1.0 / self.config.ratelimit) + if now - self.lastsent < waittime: + raise exceptions.MailmergeRateLimitError() + # Ask for password if necessary - if self.security is not None and self.password is None: + if self.config.security is not None and self.password is None: prompt = ">>> password for {} on {}: ".format( - self.username, self.host) + self.config.username, self.config.host) self.password = getpass.getpass(prompt) # Send try: message_flattened = utils.flatten_message(message) - if self.security == "SSL/TLS": - with smtplib.SMTP_SSL(self.host, self.port) as smtp: - smtp.login(self.username, self.password) + host, port = self.config.host, self.config.port + if self.config.security == "SSL/TLS": + with smtplib.SMTP_SSL(host, port) as smtp: + smtp.login(self.config.username, self.password) smtp.sendmail(sender, recipients, message_flattened) - elif self.security == "STARTTLS": - with smtplib.SMTP(self.host, self.port) as smtp: + elif self.config.security == "STARTTLS": + with smtplib.SMTP(host, port) as smtp: smtp.ehlo() smtp.starttls() smtp.ehlo() - smtp.login(self.username, self.password) + smtp.login(self.config.username, self.password) smtp.sendmail(sender, recipients, message_flattened) - elif self.security is None: - with smtplib.SMTP(self.host, self.port) as smtp: + elif self.config.security is None: + with smtplib.SMTP(host, port) as smtp: smtp.sendmail(sender, recipients, message_flattened) except smtplib.SMTPAuthenticationError as err: raise exceptions.MailmergeError( "{}:{} failed to authenticate user '{}': {}" - .format(self.host, self.port, self.username, err) + .format(host, port, self.config.username, err) ) except smtplib.SMTPException as err: raise exceptions.MailmergeError( "{}:{} failed to send message: {}" - .format(self.host, self.port, err) + .format(host, port, err) ) except socket.error as err: raise exceptions.MailmergeError( "{}:{} failed to connect to server: {}" - .format(self.host, self.port, err) + .format(host, port, err) ) + + # Update timestamp of last sent message + self.lastsent = now diff --git a/setup.py b/setup.py index 49636bc..236d946 100644 --- a/setup.py +++ b/setup.py @@ -27,6 +27,9 @@ setuptools.setup( "click", "configparser;python_version<'3.6'", + # We mock the time when testing the rate limit feature + "freezegun", + # The attachments feature relies on a bug fix in the future library # https://github.com/awdeorio/mailmerge/pull/56 "future>0.18.0",
awdeorio/mailmerge
1f3b0c742c526311bac400d5d6e5dd11e49332b6
diff --git a/tests/test_ratelimit.py b/tests/test_ratelimit.py new file mode 100644 index 0000000..76b29d8 --- /dev/null +++ b/tests/test_ratelimit.py @@ -0,0 +1,140 @@ +# coding=utf-8 +# Python 2 source containing unicode https://www.python.org/dev/peps/pep-0263/ +""" +Tests for SMTP server rate limit feature. + +Andrew DeOrio <[email protected]> +""" +import textwrap +import datetime +import future.backports.email as email +import future.backports.email.parser # pylint: disable=unused-import +import freezegun +import pytest +import click +import click.testing +from mailmerge import SendmailClient, MailmergeRateLimitError +from mailmerge.__main__ import main + +try: + from unittest import mock # Python 3 +except ImportError: + import mock # Python 2 + +# Python 2 pathlib support requires backport +try: + from pathlib2 import Path +except ImportError: + from pathlib import Path + +# The sh library triggers lot of false no-member errors +# pylint: disable=no-member + +# We're going to use mock_SMTP because it mimics the real SMTP library +# pylint: disable=invalid-name + + [email protected]('smtplib.SMTP') +def test_sendmail_ratelimit(mock_SMTP, tmp_path): + """Verify SMTP library calls.""" + config_path = tmp_path/"server.conf" + config_path.write_text(textwrap.dedent(u"""\ + [smtp_server] + host = open-smtp.example.com + port = 25 + ratelimit = 60 + """)) + sendmail_client = SendmailClient( + config_path, + dry_run=False, + ) + message = email.message_from_string(u""" + TO: [email protected] + SUBJECT: Testing mailmerge + FROM: [email protected] + + Hello world + """) + + # First message + sendmail_client.sendmail( + sender="[email protected]", + recipients=["[email protected]"], + message=message, + ) + smtp = mock_SMTP.return_value.__enter__.return_value + assert smtp.sendmail.call_count == 1 + + # Second message exceeds the rate limit, doesn't try to send a message + with pytest.raises(MailmergeRateLimitError): + sendmail_client.sendmail( + sender="[email protected]", + recipients=["[email protected]"], + message=message, + ) + assert smtp.sendmail.call_count == 1 + + # Retry the second message after 1 s because the rate limit is 60 messages + # per minute + # + # Mock the time to be 1.1 s in the future + # Ref: https://github.com/spulec/freezegun + now = datetime.datetime.now() + with freezegun.freeze_time(now + datetime.timedelta(seconds=1)): + sendmail_client.sendmail( + sender="[email protected]", + recipients=["[email protected]"], + message=message, + ) + assert smtp.sendmail.call_count == 2 + + [email protected]('smtplib.SMTP') +def test_stdout_ratelimit(mock_SMTP, tmpdir): + """Verify SMTP server ratelimit parameter.""" + # Simple template + template_path = Path(tmpdir/"mailmerge_template.txt") + template_path.write_text(textwrap.dedent(u"""\ + TO: {{email}} + FROM: [email protected] + + Hello world + """)) + + # Simple database with two entries + database_path = Path(tmpdir/"mailmerge_database.csv") + database_path.write_text(textwrap.dedent(u"""\ + email + [email protected] + [email protected] + """)) + + # Simple unsecure server config + config_path = Path(tmpdir/"mailmerge_server.conf") + config_path.write_text(textwrap.dedent(u"""\ + [smtp_server] + host = open-smtp.example.com + port = 25 + ratelimit = 60 + """)) + + # Run mailmerge + before = datetime.datetime.now() + with tmpdir.as_cwd(): + runner = click.testing.CliRunner(mix_stderr=False) + result = runner.invoke( + main, [ + "--no-limit", + "--no-dry-run", + "--output-format", "text", + ] + ) + after = 
datetime.datetime.now() + assert after - before > datetime.timedelta(seconds=1) + smtp = mock_SMTP.return_value.__enter__.return_value + assert smtp.sendmail.call_count == 2 + assert result.exit_code == 0 + # assert result.stderr == "" # replace when we drop Python 3.4 support + assert ">>> message 1 sent" in result.stdout + assert ">>> rate limit exceeded, waiting ..." in result.stdout + assert ">>> message 2 sent" in result.stdout
Rate limiting option

Is there interest in a flood limit timer? For example, my ISP has a limit of 50 emails in 5 minutes. I'd love to be able to tell mailmerge this so I can just leave a process running. Perhaps something like: `mailmerge ... --pause-count=50 --pause-time=600`

- `--pause-count` = Pause after this many emails are sent (default 0)
- `--pause-time` = Pause for this long in seconds (default 0)

Error condition if either is set to a negative value, or if only one is set to a positive value. WDYT?
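The eventual implementation (see the patch above) takes a slightly different shape than the proposed flags: a `ratelimit` option in the `[smtp_server]` config section, expressed in messages per minute. The throttle itself reduces to a small timestamp comparison; a sketch of that arithmetic:

```python
import datetime

# ratelimit is in messages per minute, so the minimum gap between sends is
# one minute divided by the limit; e.g. ratelimit = 10 means one every 6 s.
ratelimit = 10
waittime = datetime.timedelta(minutes=1.0 / ratelimit)
print(waittime.total_seconds())  # 6.0
```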
0.0
1f3b0c742c526311bac400d5d6e5dd11e49332b6
[ "tests/test_ratelimit.py::test_stdout_ratelimit", "tests/test_ratelimit.py::test_sendmail_ratelimit" ]
[]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-03-05 15:11:51+00:00
mit
1,267
awdeorio__mailmerge-115
diff --git a/mailmerge/template_message.py b/mailmerge/template_message.py index 9ea5caa..638e141 100644 --- a/mailmerge/template_message.py +++ b/mailmerge/template_message.py @@ -1,3 +1,5 @@ +# coding=utf-8 +# Python 2 source containing unicode https://www.python.org/dev/peps/pep-0263/ """ Represent a templated email message. @@ -94,13 +96,19 @@ class TemplateMessage(object): self._sender = self._message["from"] def _make_message_multipart(self): - """Convert a message into a multipart message.""" + """ + Convert self._message into a multipart message. + + Specifically, if the message's content-type is not multipart, this + method will create a new `multipart/mixed` message, copy message + headers and re-attach the original payload. + """ # Do nothing if message already multipart if self._message.is_multipart(): return # Create empty multipart message - multipart_message = email.mime.multipart.MIMEMultipart('alternative') + multipart_message = email.mime.multipart.MIMEMultipart('mixed') # Copy headers. Avoid duplicate Content-Type and MIME-Version headers, # which we set explicitely. MIME-Version was set when we created an @@ -127,7 +135,20 @@ class TemplateMessage(object): self._message = multipart_message def _transform_markdown(self): - """Convert markdown in message text to HTML.""" + """ + Convert markdown in message text to HTML. + + Specifically, if the message's content-type is `text/markdown`, we + transform `self._message` to have the following structure: + + multipart/mixed + └── multipart/alternative + ├── text/plain (original markdown plaintext) + └── text/html (converted markdown) + + Attachments should be added as subsequent payload items of the + top-level `multipart/mixed` message. + """ # Do nothing if Content-Type is not text/markdown if not self._message['Content-Type'].startswith("text/markdown"): return @@ -143,29 +164,64 @@ class TemplateMessage(object): # plaintext payload is formatted with Markdown. for mimetext in self._message.get_payload(): if mimetext['Content-Type'].startswith('text/plain'): + original_text_payload = mimetext encoding = str(mimetext.get_charset()) text = mimetext.get_payload(decode=True).decode(encoding) break + assert original_text_payload assert encoding assert text + # Remove the original text payload. + self._message.set_payload( + self._message.get_payload().remove(original_text_payload)) + # Add a multipart/alternative part to the message. Email clients can + # choose which payload-part they wish to render. + # # Render Markdown to HTML and add the HTML as the last part of the - # multipart message as per RFC 2046. + # multipart/alternative message as per RFC 2046. # # Note: We need to use u"..." to ensure that unicode string # substitution works properly in Python 2. # # https://docs.python.org/3/library/email.mime.html#email.mime.text.MIMEText html = markdown.markdown(text, extensions=['nl2br']) - payload = future.backports.email.mime.text.MIMEText( + html_payload = future.backports.email.mime.text.MIMEText( u"<html><body>{}</body></html>".format(html), _subtype="html", _charset=encoding, ) - self._message.attach(payload) + + message_payload = email.mime.multipart.MIMEMultipart('alternative') + message_payload.attach(original_text_payload) + message_payload.attach(html_payload) + + self._message.attach(message_payload) def _transform_attachments(self): - """Parse Attachment headers and add attachments.""" + """ + Parse Attachment headers and add attachments. + + Attachments are added to the payload of a `multipart/mixed` message. 
+ For instance, a plaintext message with attachments would have the + following structure: + + multipart/mixed + ├── text/plain + ├── attachment1 + └── attachment2 + + Another example: If the original message contained `text/markdown`, + then the message would have the following structure after transforming + markdown and attachments: + + multipart/mixed + ├── multipart/alternative + │ ├── text/plain + │ └── text/html + ├── attachment1 + └── attachment2 + """ # Do nothing if message has no attachment header if 'attachment' not in self._message: return
awdeorio/mailmerge
0dcd140df37163daf5d949881304e383dca62bcb
diff --git a/tests/test_template_message.py b/tests/test_template_message.py index 798c8d8..28cc9df 100644 --- a/tests/test_template_message.py +++ b/tests/test_template_message.py @@ -274,20 +274,26 @@ def test_markdown(tmp_path): # Verify message is multipart assert message.is_multipart() + assert message.get_content_subtype() == "mixed" - # Make sure there is a plaintext part and an HTML part - payload = message.get_payload() - assert len(payload) == 2 + # Make sure there is a single multipart/alternative payload + assert len(message.get_payload()) == 1 + assert message.get_payload()[0].is_multipart() + assert message.get_payload()[0].get_content_subtype() == "alternative" + + # And there should be a plaintext part and an HTML part + message_payload = message.get_payload()[0].get_payload() + assert len(message_payload) == 2 # Ensure that the first part is plaintext and the last part # is HTML (as per RFC 2046) - plaintext_part = payload[0] + plaintext_part = message_payload[0] assert plaintext_part['Content-Type'].startswith("text/plain") plaintext_encoding = str(plaintext_part.get_charset()) plaintext = plaintext_part.get_payload(decode=True) \ .decode(plaintext_encoding) - html_part = payload[1] + html_part = message_payload[1] assert html_part['Content-Type'].startswith("text/html") html_encoding = str(html_part.get_charset()) htmltext = html_part.get_payload(decode=True) \ @@ -323,7 +329,7 @@ def test_markdown_encoding(tmp_path): # Message should contain an unrendered Markdown plaintext part and a # rendered Markdown HTML part - plaintext_part, html_part = message.get_payload() + plaintext_part, html_part = message.get_payload()[0].get_payload() # Verify encodings assert str(plaintext_part.get_charset()) == "utf-8" @@ -683,16 +689,19 @@ def test_contenttype_attachment_markdown_body(tmpdir): template_message = TemplateMessage(template_path) _, _, message = template_message.render({}) - # Markdown: Make sure there is a plaintext part and an HTML part payload = message.get_payload() - assert len(payload) == 3 + assert len(payload) == 2 + + # Markdown: Make sure there is a plaintext part and an HTML part + message_payload = payload[0].get_payload() + assert len(message_payload) == 2 # Ensure that the first part is plaintext and the second part # is HTML (as per RFC 2046) - plaintext_part = payload[0] + plaintext_part = message_payload[0] assert plaintext_part['Content-Type'].startswith("text/plain") - html_part = payload[1] + html_part = message_payload[1] assert html_part['Content-Type'].startswith("text/html") diff --git a/tests/test_template_message_encodings.py b/tests/test_template_message_encodings.py index c244748..643c927 100644 --- a/tests/test_template_message_encodings.py +++ b/tests/test_template_message_encodings.py @@ -191,7 +191,8 @@ def test_emoji_markdown(tmp_path): # Message should contain an unrendered Markdown plaintext part and a # rendered Markdown HTML part - plaintext_part, html_part = message.get_payload() + message_payload = message.get_payload()[0] + plaintext_part, html_part = message_payload.get_payload() # Verify encodings assert str(plaintext_part.get_charset()) == "utf-8"
Apple Mail with attachments issue (former title: UTF-8 support issue)
I have tried sending PDFs (diplomas) to participants of an event, and I've used UTF-8 in file names as well as in the body of the text. The result was pretty mixed: some mail clients displayed only the text, others just the attachment. How stable is UTF-8 support in mailmerge? Would it be possible, for example, to control whether it's base64 or quoted-printable, etc.? I don't know much about MIME and it's quite painful to debug mailmerge in a 'production' situation... But it looks like a very easy-to-use and versatile tool that would be very useful to me :-)
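On the base64-versus-quoted-printable question raised above: the mailmerge code in this record does not expose such a switch, but the standard library allows re-registering a charset's body encoding globally, which affects every text part built afterwards. An illustrative stdlib sketch, not mailmerge API:

```python
import email.charset
import email.mime.text

# By default the stdlib transfer-encodes utf-8 text bodies as base64.
# Re-registering the charset with QP switches new parts to quoted-printable.
email.charset.add_charset("utf-8", email.charset.SHORTEST, email.charset.QP, "utf-8")

part = email.mime.text.MIMEText("héllo wörld", "plain", _charset="utf-8")
print(part["Content-Transfer-Encoding"])  # quoted-printable
```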
0.0
0dcd140df37163daf5d949881304e383dca62bcb
[ "tests/test_template_message.py::test_markdown", "tests/test_template_message.py::test_markdown_encoding", "tests/test_template_message.py::test_contenttype_attachment_markdown_body", "tests/test_template_message_encodings.py::test_emoji_markdown" ]
[ "tests/test_template_message.py::test_simple", "tests/test_template_message.py::test_no_substitutions", "tests/test_template_message.py::test_multiple_substitutions", "tests/test_template_message.py::test_bad_jinja", "tests/test_template_message.py::test_cc_bcc", "tests/test_template_message.py::test_html", "tests/test_template_message.py::test_html_plaintext", "tests/test_template_message.py::test_attachment_simple", "tests/test_template_message.py::test_attachment_relative", "tests/test_template_message.py::test_attachment_absolute", "tests/test_template_message.py::test_attachment_template", "tests/test_template_message.py::test_attachment_not_found", "tests/test_template_message.py::test_attachment_blank", "tests/test_template_message.py::test_attachment_tilde_path", "tests/test_template_message.py::test_attachment_multiple", "tests/test_template_message.py::test_attachment_empty", "tests/test_template_message.py::test_contenttype_attachment_html_body", "tests/test_template_message.py::test_duplicate_headers_attachment", "tests/test_template_message.py::test_duplicate_headers_markdown", "tests/test_template_message_encodings.py::test_utf8_template", "tests/test_template_message_encodings.py::test_utf8_database", "tests/test_template_message_encodings.py::test_utf8_to", "tests/test_template_message_encodings.py::test_utf8_from", "tests/test_template_message_encodings.py::test_utf8_subject", "tests/test_template_message_encodings.py::test_emoji", "tests/test_template_message_encodings.py::test_emoji_database", "tests/test_template_message_encodings.py::test_encoding_us_ascii", "tests/test_template_message_encodings.py::test_encoding_utf8", "tests/test_template_message_encodings.py::test_encoding_is8859_1", "tests/test_template_message_encodings.py::test_encoding_mismatch", "tests/test_template_message_encodings.py::test_encoding_multipart", "tests/test_template_message_encodings.py::test_encoding_multipart_mismatch" ]
{ "failed_lite_validators": [ "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-03-06 22:24:13+00:00
mit
1,268
awdeorio__mailmerge-135
diff --git a/mailmerge/__main__.py b/mailmerge/__main__.py index 2f24cb1..a364548 100644 --- a/mailmerge/__main__.py +++ b/mailmerge/__main__.py @@ -261,13 +261,18 @@ def read_csv_database(database_path): We'll use a class to modify the csv library's default dialect ('excel') to enable strict syntax checking. This will trigger errors for things like unclosed quotes. + + We open the file with the utf-8-sig encoding, which skips a byte order mark + (BOM), if any. Sometimes Excel will save CSV files with a BOM. See Issue + #93 https://github.com/awdeorio/mailmerge/issues/93 + """ class StrictExcel(csv.excel): # Our helper class is really simple # pylint: disable=too-few-public-methods, missing-class-docstring strict = True - with database_path.open() as database_file: + with database_path.open(encoding="utf-8-sig") as database_file: reader = csv.DictReader(database_file, dialect=StrictExcel) try: for row in reader:
awdeorio/mailmerge
930bc506f0f8d0c9c057019c399a491c1641909c
diff --git a/tests/test_main.py b/tests/test_main.py index 60c01cd..cab716e 100644 --- a/tests/test_main.py +++ b/tests/test_main.py @@ -7,11 +7,13 @@ pytest tmpdir docs: http://doc.pytest.org/en/latest/tmpdir.html#the-tmpdir-fixture """ import copy +import shutil import re from pathlib import Path import textwrap import click.testing from mailmerge.__main__ import main +from . import utils def test_no_options(tmpdir): @@ -799,3 +801,61 @@ def test_other_mime_type(tmpdir): >>> Limit was 1 message. To remove the limit, use the --no-limit option. >>> This was a dry run. To send messages, use the --no-dry-run option. """) # noqa: E501 + + +def test_database_bom(tmpdir): + """Bug fix CSV with a byte order mark (BOM). + + It looks like Excel will sometimes save a file with Byte Order Mark + (BOM). When the mailmerge database contains a BOM, it can't seem to find + the first header key. + https://github.com/awdeorio/mailmerge/issues/93 + + """ + # Simple template + template_path = Path(tmpdir/"mailmerge_template.txt") + template_path.write_text(textwrap.dedent("""\ + TO: {{email}} + FROM: My Self <[email protected]> + + Hello {{name}} + """)) + + # Copy database containing a BOM + database_path = Path(tmpdir/"mailmerge_database.csv") + database_with_bom = utils.TESTDATA/"mailmerge_database_with_BOM.csv" + shutil.copyfile(database_with_bom, database_path) + + # Simple unsecure server config + config_path = Path(tmpdir/"mailmerge_server.conf") + config_path.write_text(textwrap.dedent("""\ + [smtp_server] + host = open-smtp.example.com + port = 25 + """)) + + # Run mailmerge + runner = click.testing.CliRunner() + with tmpdir.as_cwd(): + result = runner.invoke(main, ["--output-format", "text"]) + assert not result.exception + assert result.exit_code == 0 + + # Verify output + stdout = copy.deepcopy(result.output) + stdout = re.sub(r"Date:.+", "Date: REDACTED", stdout, re.MULTILINE) + assert stdout == textwrap.dedent("""\ + >>> message 1 + TO: [email protected] + FROM: My Self <[email protected]> + MIME-Version: 1.0 + Content-Type: text/plain; charset="us-ascii" + Content-Transfer-Encoding: 7bit + Date: REDACTED + + Hello My Name + + >>> message 1 sent + >>> Limit was 1 message. To remove the limit, use the --no-limit option. + >>> This was a dry run. To send messages, use the --no-dry-run option. + """) # noqa: E501 diff --git a/tests/testdata/mailmerge_database_with_BOM.csv b/tests/testdata/mailmerge_database_with_BOM.csv new file mode 100644 index 0000000..fb9c879 --- /dev/null +++ b/tests/testdata/mailmerge_database_with_BOM.csv @@ -0,0 +1,2 @@ +name,email +My Name,[email protected]
CSV with BOM It looks like Excel will sometimes save a file with Byte Order Mark (BOM). When the mailmerge database contains a BOM, it can't seem to find the first header key. `mailmerge_template.txt`: ``` TO: {{email}} FROM: My Self <[email protected]> Hello {{name}} ``` `mailmerge_database.csv`: ``` name,email My Name,[email protected] ``` ```console $ file mailmerge_database.csv mailmerge_database.csv: UTF-8 Unicode (with BOM) text, with CRLF line terminators ``` [mailmerge_database_with_BOM.csv.txt](https://github.com/awdeorio/mailmerge/files/4621656/mailmerge_database_with_BOM.csv.txt)
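The patch's choice of utf-8-sig is easy to demonstrate in isolation; a minimal sketch with an invented two-row CSV:

```python
import csv
import io

# A CSV saved by Excel with a UTF-8 byte order mark (BOM) up front.
raw = b"\xef\xbb\xbfname,email\r\nMy Name,to@test.com\r\n"

# Decoding as plain utf-8 leaves the BOM glued to the first header key...
plain = csv.DictReader(io.StringIO(raw.decode("utf-8")))
print(plain.fieldnames)  # ['\ufeffname', 'email']

# ...while utf-8-sig strips it, so a {{name}} template key resolves again.
sig = csv.DictReader(io.StringIO(raw.decode("utf-8-sig")))
print(sig.fieldnames)  # ['name', 'email']
```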
0.0
930bc506f0f8d0c9c057019c399a491c1641909c
[ "tests/test_main.py::test_database_bom" ]
[ "tests/test_main.py::test_no_options", "tests/test_main.py::test_sample", "tests/test_main.py::test_sample_clobber_template", "tests/test_main.py::test_sample_clobber_database", "tests/test_main.py::test_sample_clobber_config", "tests/test_main.py::test_defaults", "tests/test_main.py::test_bad_limit", "tests/test_main.py::test_limit_combo", "tests/test_main.py::test_template_not_found", "tests/test_main.py::test_database_not_found", "tests/test_main.py::test_config_not_found", "tests/test_main.py::test_help", "tests/test_main.py::test_version", "tests/test_main.py::test_bad_template", "tests/test_main.py::test_bad_database", "tests/test_main.py::test_bad_config", "tests/test_main.py::test_attachment", "tests/test_main.py::test_utf8_template", "tests/test_main.py::test_utf8_database", "tests/test_main.py::test_utf8_headers", "tests/test_main.py::test_resume", "tests/test_main.py::test_resume_too_small", "tests/test_main.py::test_resume_too_big", "tests/test_main.py::test_resume_hint_on_config_error", "tests/test_main.py::test_resume_hint_on_csv_error", "tests/test_main.py::test_other_mime_type" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2021-07-19 19:08:23+00:00
mit
1,269
awdeorio__mailmerge-144
diff --git a/mailmerge/template_message.py b/mailmerge/template_message.py index 936041c..aa92489 100644 --- a/mailmerge/template_message.py +++ b/mailmerge/template_message.py @@ -87,7 +87,7 @@ class TemplateMessage: Convert self._message into a multipart message. Specifically, if the message's content-type is not multipart, this - method will create a new `multipart/mixed` message, copy message + method will create a new `multipart/related` message, copy message headers and re-attach the original payload. """ # Do nothing if message already multipart @@ -95,7 +95,7 @@ class TemplateMessage: return # Create empty multipart message - multipart_message = email.mime.multipart.MIMEMultipart('mixed') + multipart_message = email.mime.multipart.MIMEMultipart('related') # Copy headers. Avoid duplicate Content-Type and MIME-Version headers, # which we set explicitely. MIME-Version was set when we created an @@ -128,13 +128,13 @@ class TemplateMessage: Specifically, if the message's content-type is `text/markdown`, we transform `self._message` to have the following structure: - multipart/mixed + multipart/related └── multipart/alternative ├── text/plain (original markdown plaintext) └── text/html (converted markdown) Attachments should be added as subsequent payload items of the - top-level `multipart/mixed` message. + top-level `multipart/related` message. """ # Do nothing if Content-Type is not text/markdown if not self._message['Content-Type'].startswith("text/markdown"): @@ -186,11 +186,11 @@ class TemplateMessage: """ Parse attachment headers and generate content-id headers for each. - Attachments are added to the payload of a `multipart/mixed` message. + Attachments are added to the payload of a `multipart/related` message. For instance, a plaintext message with attachments would have the following structure: - multipart/mixed + multipart/related ├── text/plain ├── attachment1 └── attachment2 @@ -199,7 +199,7 @@ class TemplateMessage: then the message would have the following structure after transforming markdown and attachments: - multipart/mixed + multipart/related ├── multipart/alternative │ ├── text/plain │ └── text/html
awdeorio/mailmerge
1a695a7b02b12418a197ae291554452c9c936c9d
diff --git a/tests/test_template_message.py b/tests/test_template_message.py index 3124cf6..cd12a54 100644 --- a/tests/test_template_message.py +++ b/tests/test_template_message.py @@ -300,7 +300,7 @@ def test_markdown(tmp_path): # Verify message is multipart assert message.is_multipart() - assert message.get_content_subtype() == "mixed" + assert message.get_content_subtype() == "related" # Make sure there is a single multipart/alternative payload assert len(message.get_payload()) == 1
Inline images not working in thunderbird?
I am trying to use an inline image as described in the `readme.md`, and at first sight it works, but not consistently across mail readers. Here is my template:
```
To: {{email}}
Subject: Hi
From: Foo Bar <[email protected]>
Attachment: logo.png
Content-Type: text/markdown

{{salutation}} {{title}} {{lastname}},

Some email text.

Best regards
me

--
Me
jobtitle

![Company logo](logo.png)
```
I sent this to myself (different accounts) first and opened it in different mail readers. To my surprise, the results differ:
- gmail web: image is shown
- thunderbird: image is *not* shown inline
- Webmail (t-online): image is shown inline

Of course, this seems to indicate a problem on the mail reader's end, but Thunderbird is pretty widespread, so I was wondering whether there is something I am getting wrong or whether this could even be a bug. Of course, I checked, and "Display attachments inline" *is* checked in Thunderbird. Any hints or insights are highly welcome. Thanks Phil
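The fix in the patch above switches the top-level container from multipart/mixed to multipart/related, the structure clients such as Thunderbird expect for images referenced by Content-ID. A generic sketch of that pattern (the cid value and image bytes are placeholders):

```python
import email.mime.image
import email.mime.multipart
import email.mime.text

# multipart/related tells the client the image belongs to the HTML body.
related = email.mime.multipart.MIMEMultipart("related")
related.attach(email.mime.text.MIMEText(
    '<html><body><img src="cid:logo"></body></html>', "html"))

# The Content-ID header is what the cid: reference above resolves to.
image = email.mime.image.MIMEImage(b"\x89PNG\r\n\x1a\nfake", _subtype="png")
image.add_header("Content-ID", "<logo>")
related.attach(image)

print(related.get_content_subtype())  # related
```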
0.0
1a695a7b02b12418a197ae291554452c9c936c9d
[ "tests/test_template_message.py::test_markdown" ]
[ "tests/test_template_message.py::test_simple", "tests/test_template_message.py::test_no_substitutions", "tests/test_template_message.py::test_multiple_substitutions", "tests/test_template_message.py::test_bad_jinja", "tests/test_template_message.py::test_cc_bcc", "tests/test_template_message.py::test_html", "tests/test_template_message.py::test_html_plaintext", "tests/test_template_message.py::test_markdown_encoding", "tests/test_template_message.py::test_attachment_simple", "tests/test_template_message.py::test_attachment_relative", "tests/test_template_message.py::test_attachment_absolute", "tests/test_template_message.py::test_attachment_template", "tests/test_template_message.py::test_attachment_not_found", "tests/test_template_message.py::test_attachment_blank", "tests/test_template_message.py::test_attachment_tilde_path", "tests/test_template_message.py::test_attachment_multiple", "tests/test_template_message.py::test_attachment_empty", "tests/test_template_message.py::test_contenttype_attachment_html_body", "tests/test_template_message.py::test_contenttype_attachment_markdown_body", "tests/test_template_message.py::test_duplicate_headers_attachment", "tests/test_template_message.py::test_duplicate_headers_markdown", "tests/test_template_message.py::test_attachment_image_in_markdown", "tests/test_template_message.py::test_content_id_header_for_attachments" ]
{ "failed_lite_validators": [ "has_media", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-05-23 13:50:25+00:00
mit
1,270
awdeorio__mailmerge-155
diff --git a/mailmerge/sendmail_client.py b/mailmerge/sendmail_client.py index 80d0a89..d0bde46 100644 --- a/mailmerge/sendmail_client.py +++ b/mailmerge/sendmail_client.py @@ -10,6 +10,7 @@ import configparser import getpass import datetime import base64 +import ssl from . import exceptions # Type to store info read from config file @@ -118,7 +119,12 @@ class SendmailClient: def sendmail_ssltls(self, sender, recipients, message): """Send email message with SSL/TLS security.""" message_flattened = str(message) - with smtplib.SMTP_SSL(self.config.host, self.config.port) as smtp: + try: + ctx = ssl.create_default_context() + except ssl.SSLError as err: + raise exceptions.MailmergeError(f"SSL Error: {err}") + host, port = (self.config.host, self.config.port) + with smtplib.SMTP_SSL(host, port, context=ctx) as smtp: smtp.login(self.config.username, self.password) smtp.sendmail(sender, recipients, message_flattened)
awdeorio/mailmerge
e23b6071ba4f6a3b6ef1b7b7520e13fcc7013756
diff --git a/tests/test_sendmail_client.py b/tests/test_sendmail_client.py index 3b0e47f..cad7600 100644 --- a/tests/test_sendmail_client.py +++ b/tests/test_sendmail_client.py @@ -9,6 +9,7 @@ import smtplib import email import email.parser import base64 +import ssl import pytest from mailmerge import SendmailClient, MailmergeError @@ -378,7 +379,7 @@ def test_security_plain(mocker, tmp_path): def test_security_ssl(mocker, tmp_path): - """Verify open (Never) security configuration.""" + """Verify SSL/TLS security configuration.""" # Config for SSL SMTP server config_path = tmp_path/"server.conf" config_path.write_text(textwrap.dedent("""\ @@ -397,6 +398,10 @@ def test_security_ssl(mocker, tmp_path): mock_smtp = mocker.patch('smtplib.SMTP') mock_smtp_ssl = mocker.patch('smtplib.SMTP_SSL') + # Mock SSL + mock_ssl_create_default_context = \ + mocker.patch('ssl.create_default_context') + # Mock the password entry mock_getpass = mocker.patch('getpass.getpass') mock_getpass.return_value = "password" @@ -412,6 +417,8 @@ def test_security_ssl(mocker, tmp_path): assert mock_getpass.call_count == 1 assert mock_smtp.call_count == 0 assert mock_smtp_ssl.call_count == 1 + assert mock_ssl_create_default_context.called + assert "context" in mock_smtp_ssl.call_args[1] # SSL cert chain smtp = mock_smtp_ssl.return_value.__enter__.return_value assert smtp.ehlo.call_count == 0 assert smtp.starttls.call_count == 0 @@ -419,6 +426,44 @@ def test_security_ssl(mocker, tmp_path): assert smtp.sendmail.call_count == 1 +def test_ssl_error(mocker, tmp_path): + """Verify SSL/TLS with an SSL error.""" + # Config for SSL SMTP server + config_path = tmp_path/"server.conf" + config_path.write_text(textwrap.dedent("""\ + [smtp_server] + host = smtp.mail.umich.edu + port = 465 + security = SSL/TLS + username = YOUR_USERNAME_HERE + """)) + + # Simple template + sendmail_client = SendmailClient(config_path, dry_run=False) + message = email.message_from_string("Hello world") + + # Mock ssl.create_default_context() to raise an exception + mocker.patch( + 'ssl.create_default_context', + side_effect=ssl.SSLError(1, "CERTIFICATE_VERIFY_FAILED") + ) + + # Mock the password entry + mock_getpass = mocker.patch('getpass.getpass') + mock_getpass.return_value = "password" + + # Send a message + with pytest.raises(MailmergeError) as err: + sendmail_client.sendmail( + sender="[email protected]", + recipients=["[email protected]"], + message=message, + ) + + # Verify exception string + assert "CERTIFICATE_VERIFY_FAILED" in str(err.value) + + def test_missing_username(tmp_path): """Verify exception on missing username.""" config_path = tmp_path/"server.conf"
Load system default SSL cert This issue applies to the SMTP SSL security mode. Use the system's default SSL certificate chain. From https://docs.python.org/3/library/smtplib.html : > Please use [ssl.SSLContext.load_cert_chain()](https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain) instead, or let [ssl.create_default_context()](https://docs.python.org/3/library/ssl.html#ssl.create_default_context) select the system’s trusted CA certificates for you.
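The patch reduced to a standalone sketch; the host, port, and credentials below are placeholders:

```python
import smtplib
import ssl

# create_default_context() loads the system's trusted CA certificates and
# enables certificate and hostname verification.
context = ssl.create_default_context()

with smtplib.SMTP_SSL("smtp.example.com", 465, context=context) as smtp:
    smtp.login("user", "password")
    smtp.sendmail("from@example.com", ["to@example.com"], "Subject: hi\n\nhello")
```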
0.0
e23b6071ba4f6a3b6ef1b7b7520e13fcc7013756
[ "tests/test_sendmail_client.py::test_security_ssl", "tests/test_sendmail_client.py::test_ssl_error" ]
[ "tests/test_sendmail_client.py::test_smtp", "tests/test_sendmail_client.py::test_dry_run", "tests/test_sendmail_client.py::test_no_dry_run", "tests/test_sendmail_client.py::test_bad_config_key", "tests/test_sendmail_client.py::test_security_error", "tests/test_sendmail_client.py::test_security_open", "tests/test_sendmail_client.py::test_security_open_legacy", "tests/test_sendmail_client.py::test_security_starttls", "tests/test_sendmail_client.py::test_security_xoauth", "tests/test_sendmail_client.py::test_security_xoauth_bad_username", "tests/test_sendmail_client.py::test_security_plain", "tests/test_sendmail_client.py::test_missing_username", "tests/test_sendmail_client.py::test_smtp_login_error", "tests/test_sendmail_client.py::test_smtp_sendmail_error", "tests/test_sendmail_client.py::test_socket_error" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-07-29 01:15:36+00:00
mit
1,271
awdeorio__mailmerge-60
diff --git a/mailmerge/template_message.py b/mailmerge/template_message.py index 0366d6e..a442cfc 100644 --- a/mailmerge/template_message.py +++ b/mailmerge/template_message.py @@ -93,38 +93,67 @@ class TemplateMessage(object): def _make_message_multipart(self): """Convert a message into a multipart message.""" - if not self._message.is_multipart(): - multipart_message = email.mime.multipart.MIMEMultipart( - 'alternative') - for header_key in set(self._message.keys()): - # Preserve duplicate headers - values = self._message.get_all(header_key, failobj=[]) - for value in values: - multipart_message[header_key] = value - original_text = self._message.get_payload() - multipart_message.attach(email.mime.text.MIMEText(original_text)) - self._message = multipart_message + # Do nothing if message already multipart + if self._message.is_multipart(): + return + + # Create empty multipart message + multipart_message = email.mime.multipart.MIMEMultipart('alternative') + + # Copy headers, preserving duplicate headers + for header_key in set(self._message.keys()): + values = self._message.get_all(header_key, failobj=[]) + for value in values: + multipart_message[header_key] = value + + # Copy text, preserving original encoding + original_text = self._message.get_payload(decode=True) + original_encoding = str(self._message.get_charset()) + multipart_message.attach(email.mime.text.MIMEText( + original_text, + _charset=original_encoding, + )) + + # Replace original message with multipart message + self._message = multipart_message def _transform_markdown(self): """Convert markdown in message text to HTML.""" + # Do nothing if Content-Type is not text/markdown if not self._message['Content-Type'].startswith("text/markdown"): return + # Remove the markdown Content-Type header, it's non-standard for email del self._message['Content-Type'] - # Convert the text from markdown and then make the message multipart + + # Make sure the message is multipart. We need a multipart message so + # that we can add an HTML part containing rendered Markdown. self._make_message_multipart() - for payload_item in set(self._message.get_payload()): - # Assume the plaintext item is formatted with markdown. - # Add corresponding HTML version of the item as the last part of - # the multipart message (as per RFC 2046) - if payload_item['Content-Type'].startswith('text/plain'): - original_text = payload_item.get_payload() - html_text = markdown.markdown(original_text) - html_payload = future.backports.email.mime.text.MIMEText( - "<html><body>{}</body></html>".format(html_text), - "html", - ) - self._message.attach(html_payload) + + # Extract unrendered text and encoding. We assume that the first + # plaintext payload is formatted with Markdown. + for mimetext in self._message.get_payload(): + if mimetext['Content-Type'].startswith('text/plain'): + encoding = str(mimetext.get_charset()) + text = mimetext.get_payload(decode=True).decode(encoding) + break + assert encoding + assert text + + # Render Markdown to HTML and add the HTML as the last part of the + # multipart message as per RFC 2046. + # + # Note: We need to use u"..." to ensure that unicode string + # substitution works properly in Python 2. 
+ # + # https://docs.python.org/3/library/email.mime.html#email.mime.text.MIMEText + html = markdown.markdown(text) + payload = future.backports.email.mime.text.MIMEText( + u"<html><body>{}</body></html>".format(html), + _subtype="html", + _charset=encoding, + ) + self._message.attach(payload) def _transform_attachments(self): """Parse Attachment headers and add attachments."""
awdeorio/mailmerge
8f6f9468a511d942b220ec1a660aa8c2f394fadb
diff --git a/tests/test_template_message.py b/tests/test_template_message.py index 1dc59f1..0baa86b 100644 --- a/tests/test_template_message.py +++ b/tests/test_template_message.py @@ -74,19 +74,59 @@ def test_markdown(): # Ensure that the first part is plaintext and the last part # is HTML (as per RFC 2046) - plaintext_contenttype = payload[0]['Content-Type'] - assert plaintext_contenttype.startswith("text/plain") - plaintext = payload[0].get_payload() - html_contenttype = payload[1]['Content-Type'] - assert html_contenttype.startswith("text/html") + plaintext_part = payload[0] + assert plaintext_part['Content-Type'].startswith("text/plain") + plaintext_encoding = str(plaintext_part.get_charset()) + plaintext = plaintext_part.get_payload(decode=True) \ + .decode(plaintext_encoding) + + html_part = payload[1] + assert html_part['Content-Type'].startswith("text/html") + html_encoding = str(html_part.get_charset()) + htmltext = html_part.get_payload(decode=True) \ + .decode(html_encoding) # Verify rendered Markdown - htmltext = payload[1].get_payload() rendered = markdown.markdown(plaintext) htmltext_correct = "<html><body>{}</body></html>".format(rendered) assert htmltext.strip() == htmltext_correct.strip() +def test_markdown_encoding(): + """Verify encoding is preserved when rendering a Markdown template. + + See Issue #59 for a detailed explanation + https://github.com/awdeorio/mailmerge/issues/59 + """ + template_message = mailmerge.template_message.TemplateMessage( + utils.TESTDATA/"markdown_template_utf8.txt" + ) + _, _, message = template_message.render({ + "email": "[email protected]", + "name": "Myself", + }) + + # Message should contain an unrendered Markdown plaintext part and a + # rendered Markdown HTML part + plaintext_part, html_part = message.get_payload() + + # Verify encodings + assert str(plaintext_part.get_charset()) == "utf-8" + assert str(html_part.get_charset()) == "utf-8" + assert plaintext_part["Content-Transfer-Encoding"] == "base64" + assert html_part["Content-Transfer-Encoding"] == "base64" + + # Verify content, which is base64 encoded + plaintext = plaintext_part.get_payload().strip() + htmltext = html_part.get_payload().strip() + assert plaintext == "SGksIE15c2VsZiwKw6bDuMOl" + assert htmltext == ( + "PGh0bWw+PGJvZHk+PHA+" + "SGksIE15c2VsZiwKw6bDuMOl" + "PC9wPjwvYm9keT48L2h0bWw+" + ) + + def test_attachment(): """Attachments should be sent as part of the email.""" template_message = mailmerge.template_message.TemplateMessage( @@ -165,7 +205,17 @@ def test_utf8_template(): # NOTE: to decode a base46-encoded string: # print((str(base64.b64decode(payload), "utf-8"))) payload = message.get_payload().replace("\n", "") - assert payload == 'RnJvbSB0aGUgVGFnZWxpZWQgb2YgV29sZnJhbSB2b24gRXNjaGVuYmFjaCAoTWlkZGxlIEhpZ2ggR2VybWFuKToKClPDrm5lIGtsw6J3ZW4gZHVyaCBkaWUgd29sa2VuIHNpbnQgZ2VzbGFnZW4sCmVyIHN0w65nZXQgw7tmIG1pdCBncsO0emVyIGtyYWZ0LAppY2ggc2loIGluIGdyw6J3ZW4gdMOkZ2Vsw65jaCBhbHMgZXIgd2lsIHRhZ2VuLApkZW4gdGFjLCBkZXIgaW0gZ2VzZWxsZXNjaGFmdAplcndlbmRlbiB3aWwsIGRlbSB3ZXJkZW4gbWFuLApkZW4gaWNoIG1pdCBzb3JnZW4gw65uIHZlcmxpZXouCmljaCBicmluZ2UgaW4gaGlubmVuLCBvYiBpY2gga2FuLgpzw65uIHZpbCBtYW5lZ2l1IHR1Z2VudCBtaWNoeiBsZWlzdGVuIGhpZXouCgpodHRwOi8vd3d3LmNvbHVtYmlhLmVkdS9+ZmRjL3V0Zjgv' # noqa: E501 pylint: disable=line-too-long + assert payload == ( + "RnJvbSB0aGUgVGFnZWxpZWQgb2YgV29sZnJhbSB2b24gRXNjaGVuYmFjaCAo" + "TWlkZGxlIEhpZ2ggR2VybWFuKToKClPDrm5lIGtsw6J3ZW4gZHVyaCBkaWUg" + "d29sa2VuIHNpbnQgZ2VzbGFnZW4sCmVyIHN0w65nZXQgw7tmIG1pdCBncsO0" + 
"emVyIGtyYWZ0LAppY2ggc2loIGluIGdyw6J3ZW4gdMOkZ2Vsw65jaCBhbHMg" + "ZXIgd2lsIHRhZ2VuLApkZW4gdGFjLCBkZXIgaW0gZ2VzZWxsZXNjaGFmdApl" + "cndlbmRlbiB3aWwsIGRlbSB3ZXJkZW4gbWFuLApkZW4gaWNoIG1pdCBzb3Jn" + "ZW4gw65uIHZlcmxpZXouCmljaCBicmluZ2UgaW4gaGlubmVuLCBvYiBpY2gg" + "a2FuLgpzw65uIHZpbCBtYW5lZ2l1IHR1Z2VudCBtaWNoeiBsZWlzdGVuIGhp" + "ZXouCgpodHRwOi8vd3d3LmNvbHVtYmlhLmVkdS9+ZmRjL3V0Zjgv" + ) def test_utf8_database(): diff --git a/tests/testdata/markdown_template_utf8.txt b/tests/testdata/markdown_template_utf8.txt new file mode 100644 index 0000000..aa8b14d --- /dev/null +++ b/tests/testdata/markdown_template_utf8.txt @@ -0,0 +1,7 @@ +TO: {{email}} +SUBJECT: Testing mailmerge +FROM: [email protected] +CONTENT-TYPE: text/markdown + +Hi, {{name}}, +æøå
Doesn't use charset=utf-8 when using markdown
Sending a message with special characters gives good results.
```
TO: {{email}}
SUBJECT: Testing mailmerge
FROM: [email protected]

Hi, {{name}},
æøå
```
outputs
```
>>> encoding utf-8
>>> message 0
TO: [email protected]
SUBJECT: Testing mailmerge
FROM: [email protected]
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Date: Fri, 13 Dec 2019 20:41:08 -0000

SGksIE15c2VsZiwKw6bDuMOl
```
Notice that `charset` here is set to `utf-8` and the message renders well in the email client. But when specifying markdown:
```
TO: {{email}}
SUBJECT: Testing mailmerge
FROM: [email protected]
CONTENT-TYPE: text/markdown

Hi, {{name}},
æøå
```
It outputs
```
>>> encoding utf-8
>>> message 0
MIME-Version: 1.0
SUBJECT: Testing mailmerge
Date: Fri, 13 Dec 2019 20:42:22 -0000
TO: [email protected]
FROM: [email protected]
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Type: multipart/alternative; boundary="===============3629053266709230733=="

--===============3629053266709230733==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

SGksIE15c2VsZiwKw6bDuMOl
--===============3629053266709230733==
Content-Type: text/html; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

<html><body><p>SGksIE15c2VsZiwKw6bDuMOl</p></body></html>
--===============3629053266709230733==--
```
Notice that `charset` here is set to `us-ascii`, and the message shows up as SGksIE15c2VsZiwKw6bDuMOl in the client.
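The core of the fix (visible in the patch above) is to recover the decoded text and its charset from the original part and pass the charset through when building replacement parts, instead of letting them default to us-ascii. In isolation:

```python
import email.mime.text

original = email.mime.text.MIMEText("Hi, Myself,\næøå", "plain", _charset="utf-8")

# Recover the decoded text and its charset from the original part...
encoding = str(original.get_charset())
text = original.get_payload(decode=True).decode(encoding)

# ...and pass the charset through when building the replacement part.
replacement = email.mime.text.MIMEText(text, "plain", _charset=encoding)
print(replacement["Content-Type"])               # text/plain; charset="utf-8"
print(replacement["Content-Transfer-Encoding"])  # base64
```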
0.0
8f6f9468a511d942b220ec1a660aa8c2f394fadb
[ "tests/test_template_message.py::test_markdown_encoding" ]
[ "tests/test_template_message.py::test_bad_jinja", "tests/test_template_message.py::test_cc_bcc", "tests/test_template_message.py::test_markdown", "tests/test_template_message.py::test_attachment", "tests/test_template_message.py::test_attachment_empty", "tests/test_template_message.py::test_utf8_template" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2019-12-14 05:27:07+00:00
mit
1,272
awdeorio__mailmerge-95
diff --git a/mailmerge/template_message.py b/mailmerge/template_message.py index c90c505..135bba8 100644 --- a/mailmerge/template_message.py +++ b/mailmerge/template_message.py @@ -102,8 +102,13 @@ class TemplateMessage(object): # Create empty multipart message multipart_message = email.mime.multipart.MIMEMultipart('alternative') - # Copy headers, preserving duplicate headers + # Copy headers. Avoid duplicate Content-Type and MIME-Version headers, + # which we set explicitely. MIME-Version was set when we created an + # empty mulitpart message. Content-Type will be set when we copy the + # original text later. for header_key in set(self._message.keys()): + if header_key.lower() in ["content-type", "mime-version"]: + continue values = self._message.get_all(header_key, failobj=[]) for value in values: multipart_message[header_key] = value
awdeorio/mailmerge
a95c4951e91edac5c7bfa5e19f2389c7e4657d47
diff --git a/tests/test_template_message.py b/tests/test_template_message.py index 18e0d5d..346975f 100644 --- a/tests/test_template_message.py +++ b/tests/test_template_message.py @@ -624,3 +624,56 @@ def test_attachment_empty(tmp_path): template_message = TemplateMessage(template_path) with pytest.raises(MailmergeError): template_message.render({}) + + +def test_duplicate_headers_attachment(tmp_path): + """Verify multipart messages do not contain duplicate headers. + + Duplicate headers are rejected by some SMTP servers. + """ + # Simple attachment + attachment_path = Path(tmp_path/"attachment.txt") + attachment_path.write_text(u"Hello world\n") + + # Simple message + template_path = tmp_path / "template.txt" + template_path.write_text(textwrap.dedent(u"""\ + TO: [email protected] + SUBJECT: Testing mailmerge + FROM: [email protected]> + ATTACHMENT: attachment.txt + + {{message}} + """)) + template_message = TemplateMessage(template_path) + _, _, message = template_message.render({ + "message": "Hello world" + }) + + # Verifty no duplicate headers + assert len(message.keys()) == len(set(message.keys())) + + +def test_duplicate_headers_markdown(tmp_path): + """Verify multipart messages do not contain duplicate headers. + + Duplicate headers are rejected by some SMTP servers. + """ + template_path = tmp_path / "template.txt" + template_path.write_text(textwrap.dedent(u"""\ + TO: [email protected] + SUBJECT: Testing mailmerge + FROM: [email protected] + CONTENT-TYPE: text/markdown + + ``` + Message as code block: {{message}} + ``` + """)) + template_message = TemplateMessage(template_path) + _, _, message = template_message.render({ + "message": "hello world", + }) + + # Verifty no duplicate headers + assert len(message.keys()) == len(set(message.keys()))
With attachments it sends a duplicate Content-Type header
This is the email header with an attachment. Without the attachment it is okay.

> Content-Type: multipart/alternative; boundary="===============6399458286909476=="
> MIME-Version: 1.0
> Content-Type: multipart/alternative; boundary="===============6399458286909476=="
> SUBJECT: Testing mailmerge
> MIME-Version: 1.0
> Content-Transfer-Encoding: 7bit
> FROM: User <[email protected]>
> TO: [email protected]
> Date: Mon, 18 May 2020 18:21:03 -0000
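The fix distilled to a standalone reproduction: skip Content-Type and MIME-Version when copying headers, because MIMEMultipart already sets both on construction.

```python
import email.message
import email.mime.multipart

original = email.message.Message()
original["Subject"] = "Testing mailmerge"
original["Content-Type"] = "text/plain"
original["MIME-Version"] = "1.0"

multipart = email.mime.multipart.MIMEMultipart("alternative")
for key in set(original.keys()):
    # Copying these two again is what produced the duplicates reported here.
    if key.lower() in ("content-type", "mime-version"):
        continue
    for value in original.get_all(key, failobj=[]):
        multipart[key] = value

assert len(multipart.keys()) == len(set(multipart.keys()))
```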
0.0
a95c4951e91edac5c7bfa5e19f2389c7e4657d47
[ "tests/test_template_message.py::test_duplicate_headers_attachment", "tests/test_template_message.py::test_duplicate_headers_markdown" ]
[ "tests/test_template_message.py::test_simple", "tests/test_template_message.py::test_no_substitutions", "tests/test_template_message.py::test_multiple_substitutions", "tests/test_template_message.py::test_bad_jinja", "tests/test_template_message.py::test_cc_bcc", "tests/test_template_message.py::test_html", "tests/test_template_message.py::test_html_plaintext", "tests/test_template_message.py::test_markdown", "tests/test_template_message.py::test_markdown_encoding", "tests/test_template_message.py::test_attachment_simple", "tests/test_template_message.py::test_attachment_relative", "tests/test_template_message.py::test_attachment_absolute", "tests/test_template_message.py::test_attachment_template", "tests/test_template_message.py::test_attachment_not_found", "tests/test_template_message.py::test_attachment_blank", "tests/test_template_message.py::test_attachment_tilde_path", "tests/test_template_message.py::test_attachment_multiple", "tests/test_template_message.py::test_attachment_empty" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2020-05-23 14:54:30+00:00
mit
1,273
awelzel__flake8-gl-codeclimate-15
diff --git a/flake8_gl_codeclimate/__init__.py b/flake8_gl_codeclimate/__init__.py index 6ef76ba..3f4ac5d 100644 --- a/flake8_gl_codeclimate/__init__.py +++ b/flake8_gl_codeclimate/__init__.py @@ -115,7 +115,7 @@ class GitlabCodeClimateFormatter(BaseFormatter): # issue, including deeper explanations and links to other resources. "categories": cls._guess_categories(v), "location": { - "path": v.filename, + "path": v.filename[2:] if v.filename.startswith("./") else v.filename, "lines": { "begin": v.line_number, "end": v.line_number,
awelzel/flake8-gl-codeclimate
91c89ebe0c455815653bc66e5e75e6a80e51d4de
diff --git a/tests/test_formatter.py b/tests/test_formatter.py index a558a1a..d70df44 100644 --- a/tests/test_formatter.py +++ b/tests/test_formatter.py @@ -20,7 +20,7 @@ class TestGitlabCodeClimateFormatter(unittest.TestCase): self.formatter = GitlabCodeClimateFormatter(self.options) self.error1 = Violation( code="E302", - filename="examples/hello-world.py", + filename="./examples/hello-world.py", line_number=23, column_number=None, text="expected 2 blank lines, found 1", @@ -29,7 +29,7 @@ class TestGitlabCodeClimateFormatter(unittest.TestCase): self.error2 = Violation( code="X111", # unknown - filename="examples/unknown.py", + filename="./examples/unknown.py", line_number=99, column_number=None, text="Some extension produced this.", @@ -38,7 +38,7 @@ class TestGitlabCodeClimateFormatter(unittest.TestCase): self.logging_error = Violation( code="G001", # This is coming from flake8-logging-format - filename="examples/logging-format.py", + filename="./examples/logging-format.py", line_number=4, column_number=None, text="Logging statement uses string.format()", @@ -47,7 +47,7 @@ class TestGitlabCodeClimateFormatter(unittest.TestCase): self.complexity_error = Violation( code="C901", # This is coming from flake8-logging-format - filename="examples/complex-code.py", + filename="./examples/complex-code.py", line_number=42, column_number=None, text="Something is too complex", @@ -141,3 +141,25 @@ class TestGitlabCodeClimateFormatter(unittest.TestCase): self.assertEqual("bandit", violations[0]["check_name"]) self.assertEqual(["Security"], violations[0]["categories"]) self.assertEqual("critical", violations[0]["severity"]) + + def test_error_filepath_with_prefix(self): + self.formatter.start() + self.formatter.handle(self.security_error) + self.formatter.stop() + + with open(self.options.output_file) as fp: + violations = json.load(fp) + + self.assertEqual(1, len(violations)) + self.assertEqual("examples/insecure-code.py", violations[0]["location"]["path"]) + + def test_error_filepath(self): + self.formatter.start() + self.formatter.handle(self.error1) + self.formatter.stop() + + with open(self.options.output_file) as fp: + violations = json.load(fp) + + self.assertEqual(1, len(violations)) + self.assertEqual("examples/hello-world.py", violations[0]["location"]["path"])
Fix relative file path
Thanks for this great utility. We've been using it for several months. I noticed today that the reported violations do not show up in the diff view on a merge request (it's an Ultimate feature). The reason seems to be that the relative file path starts with `./`, whereas GitLab expects it not to (see [docs](https://docs.gitlab.com/ee/ci/testing/code_quality.html#implementing-a-custom-tool)). I did a test by removing those as follows in our pipeline:
```shell
flake8 --format gl-codeclimate | python -c "import sys; import json; lines = [line.replace('./', '') for line in sys.stdin]; print(json.dumps(json.loads('\n'.join(lines)), indent='\t'));" > gl-code-quality-report.json
```
And GitLab successfully showed it in the diff afterwards. Happy to contribute a PR but might need a little pointer on how to achieve this. It's not immediately clear to me what type `v` and `v.filename` are.
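For reference, `v` here is the `Violation` object seen in the tests above and `v.filename` is a plain string, so the landed fix is a one-line string normalization. Pulled out as a helper (the function name is mine):

```python
def normalize_path(path: str) -> str:
    """Strip a leading './' so GitLab can match the file in diff views."""
    return path[2:] if path.startswith("./") else path

assert normalize_path("./examples/hello-world.py") == "examples/hello-world.py"
assert normalize_path("examples/hello-world.py") == "examples/hello-world.py"
```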
0.0
91c89ebe0c455815653bc66e5e75e6a80e51d4de
[ "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_error_filepath" ]
[ "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_complexity_error", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_error1", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_error1_and_error2", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_error_filepath_with_prefix", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_logging_errro", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_no_errors", "tests/test_formatter.py::TestGitlabCodeClimateFormatter::test_security_error" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2022-11-19 03:35:19+00:00
mit
1,274
awisu2__imagenatepy-3
diff --git a/setup.cfg b/setup.cfg
index 581b2a7..e469e17 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -36,6 +36,9 @@ package_dir =
 # When keeping the entry_point settings in an external file
 # entry_points = file: entry_points.cfg
+# Directory containing the test modules
+test_suite=tests
+
 # Arguments for options.packages = find:
 [options.packages.find]
 where=src
diff --git a/src/imagenate/sample.py b/src/imagenate/sample.py
index 5d0fbb8..d58c878 100644
--- a/src/imagenate/sample.py
+++ b/src/imagenate/sample.py
@@ -1,2 +1,2 @@
 def hello():
-    print('hello')
\ No newline at end of file
+    return "hello"
awisu2/imagenatepy
747131c2a92fb2a09c1fed5ac56d62b354a7d02c
diff --git a/src/tests/__init__.py b/src/tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/src/tests/test_imagenate.py b/src/tests/test_imagenate.py new file mode 100644 index 0000000..108890b --- /dev/null +++ b/src/tests/test_imagenate.py @@ -0,0 +1,7 @@ +import unittest +import imagenate + +class TestImagenate(unittest.TestCase): + + def test_hello(self): + self.assertEqual(imagenate.hello(), 'hello', 'hello is misssing')
unittest
0.0
747131c2a92fb2a09c1fed5ac56d62b354a7d02c
[ "src/tests/test_imagenate.py::TestImagenate::test_hello" ]
[]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2020-12-04 03:59:31+00:00
mit
1,275
aws-cloudformation__cloudformation-cli-937
diff --git a/.github/workflows/pypi-release.yaml b/.github/workflows/pypi-release.yaml index b893ac7..4a56df5 100644 --- a/.github/workflows/pypi-release.yaml +++ b/.github/workflows/pypi-release.yaml @@ -22,6 +22,6 @@ jobs: run: | python setup.py sdist bdist_wheel - name: Publish distribution 📦 to PyPI (If triggered from release) - uses: pypa/gh-action-pypi-publish@master + uses: pypa/gh-action-pypi-publish@release/v1 with: password: ${{ secrets.PYPI_API_KEY_CLOUDFORMATION_CLI }} diff --git a/src/rpdk/core/contract/resource_generator.py b/src/rpdk/core/contract/resource_generator.py index 7818d70..ad54533 100644 --- a/src/rpdk/core/contract/resource_generator.py +++ b/src/rpdk/core/contract/resource_generator.py @@ -1,4 +1,5 @@ import logging +import re from collections.abc import Sequence from hypothesis.strategies import ( @@ -27,9 +28,17 @@ LOG = logging.getLogger(__name__) # https://github.com/aws-cloudformation/aws-cloudformation-rpdk/issues/118 # Arn is just a placeholder for testing +# format list taken from https://python-jsonschema.readthedocs.io/en/stable/validate/#jsonschema.FormatChecker.checkers +# date-time regex from https://github.com/naimetti/rfc3339-validator +# date is extraction from date-time +# time is extraction from date-time STRING_FORMATS = { "arn": "^arn:aws(-(cn|gov))?:[a-z-]+:(([a-z]+-)+[0-9])?:([0-9]{12})?:[^.]+$", - "uri": "^(https?|ftp|file)://[0-9a-zA-Z]([-.\\w]*[0-9a-zA-Z])(:[0-9]*)*([?/#].*)?$", + "uri": r"^(https?|ftp|file)://[0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])(:[0-9]*)*([?/#].*)?$", + "date-time": r"^(\d{4})-(0[1-9]|1[0-2])-(\d{2})T(?:[01]\d|2[0123]):(?:[0-5]\d):(?:[0-5]\d)(?:\.\d+)?(?:Z|[+-](?:[01]\d|2[0123]):[0-5]\d)$", + "date": r"^(\d{4})-(0[1-9]|1[0-2])-(\d{2})$", + "time": r"^(?:[01]\d|2[0123]):(?:[0-5]\d):(?:[0-5]\d)(?:\.\d+)?(?:Z|[+-](?:[01]\d|2[0123]):[0-5]\d)$", + "email": r"^.+@[^\.].*\.[a-z]{2,}$", } NEG_INF = float("-inf") @@ -37,8 +46,10 @@ POS_INF = float("inf") def terminate_regex(regex): + if regex.startswith("^"): + regex = r"\A" + regex[1:] if regex.endswith("$"): - return regex[:-1] + r"\Z" + regex = regex[:-1] + r"\Z" return regex @@ -247,7 +258,7 @@ class ResourceGenerator: if "maxLength" in schema: # pragma: no cover LOG.warning("found maxLength used with pattern") - return from_regex(terminate_regex(regex)) + return from_regex(re.compile(terminate_regex(regex), re.ASCII)) if "pattern" in schema: # pragma: no cover LOG.warning("found pattern used with format") @@ -257,4 +268,4 @@ class ResourceGenerator: LOG.warning("found maxLength used with format") regex = STRING_FORMATS[string_format] - return from_regex(regex) + return from_regex(re.compile(regex, re.ASCII)) diff --git a/src/rpdk/core/data_loaders.py b/src/rpdk/core/data_loaders.py index c4a6fd1..61f36f7 100644 --- a/src/rpdk/core/data_loaders.py +++ b/src/rpdk/core/data_loaders.py @@ -249,7 +249,10 @@ def load_resource_spec(resource_spec_file): # pylint: disable=R # noqa: C901 pattern, ) try: - re.compile(pattern) + # http://json-schema.org/understanding-json-schema/reference/regular_expressions.html + # ECMA-262 has \w, \W, \b, \B, \d, \D, \s and \S perform ASCII-only matching + # instead of full Unicode matching. Unicode matching is the default in Python + re.compile(pattern, re.ASCII) except re.error: LOG.warning("Could not validate regular expression: %s", pattern)
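Two behaviors from this patch are worth seeing side by side: anchor replacement (so a generated string cannot smuggle a newline past `$`, which in Python also matches just before a trailing `'\n'`) and `re.ASCII` (so `\w` follows ECMA-262's ASCII-only semantics rather than full Unicode). A self-contained check mirroring `terminate_regex`:

```python
import re

def terminate_regex(regex: str) -> str:
    # Swap the ^/$ anchors for \A/\Z, mirroring the patch above.
    if regex.startswith("^"):
        regex = r"\A" + regex[1:]
    if regex.endswith("$"):
        regex = regex[:-1] + r"\Z"
    return regex

pattern = re.compile(terminate_regex(r"^\w{1,6}$"), re.ASCII)
assert pattern.fullmatch("Abc_12")
assert not pattern.match("Abc\n")          # \Z rejects the trailing newline
assert re.match(r"^\w{1,6}$", "Abc\n")     # ...which plain $ would accept
assert not pattern.fullmatch("héllo")      # re.ASCII keeps \w ASCII-only
```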
aws-cloudformation/cloudformation-cli
86f6528c9285f4fce33bc56f3f836c2e68d0b825
diff --git a/tests/contract/test_resource_generator.py b/tests/contract/test_resource_generator.py index c02f34a..0ba9ea8 100644 --- a/tests/contract/test_resource_generator.py +++ b/tests/contract/test_resource_generator.py @@ -22,9 +22,28 @@ def test_terminate_regex_end_of_line_like_a_normal_person(): assert re.match(modified_regex, "dfqh3eqefhq") -def test_terminate_regex_no_termination_needed(): +def test_terminate_regex_line_start_change(): original_regex = r"^[a-zA-Z0-9]{1,219}\Z" - assert terminate_regex(original_regex) == original_regex + terminated_regex = r"\A[a-zA-Z0-9]{1,219}\Z" + assert terminate_regex(original_regex) == terminated_regex + + +def test_terminate_regex_line_end_change(): + original_regex = r"\A[a-zA-Z0-9]{1,219}$" + terminated_regex = r"\A[a-zA-Z0-9]{1,219}\Z" + assert terminate_regex(original_regex) == terminated_regex + + +def test_terminate_regex_line_start_and_end_change(): + original_regex = r"^[a-zA-Z0-9]{1,219}$" + terminated_regex = r"\A[a-zA-Z0-9]{1,219}\Z" + assert terminate_regex(original_regex) == terminated_regex + + +def test_terminate_regex_no_termination_needed(): + original_regex = r"\A[a-zA-Z0-9]{1,219}\Z" + terminated_regex = r"\A[a-zA-Z0-9]{1,219}\Z" + assert terminate_regex(original_regex) == terminated_regex @pytest.mark.parametrize("schema_type", ["integer", "number"]) @@ -68,11 +87,34 @@ def test_generate_string_strategy_regex(): assert re.fullmatch(schema["pattern"], regex_strategy.example()) +def test_generate_string_strategy_ascii(): + schema = {"type": "string", "pattern": "^\\w{1,6}$"} + strategy = ResourceGenerator(schema).generate_schema_strategy(schema) + for _ in range(100): + assert re.match("^[A-Za-z0-9_]{1,6}$", strategy.example()) + + def test_generate_string_strategy_format(): schema = {"type": "string", "format": "arn"} strategy = ResourceGenerator(schema).generate_schema_strategy(schema) assert re.fullmatch(STRING_FORMATS["arn"], strategy.example()) + schema = {"type": "string", "format": "date-time"} + strategy = ResourceGenerator(schema).generate_schema_strategy(schema) + assert re.match(STRING_FORMATS["date-time"], strategy.example()) + + schema = {"type": "string", "format": "time"} + strategy = ResourceGenerator(schema).generate_schema_strategy(schema) + assert re.match(STRING_FORMATS["time"], strategy.example()) + + schema = {"type": "string", "format": "date"} + strategy = ResourceGenerator(schema).generate_schema_strategy(schema) + assert re.match(STRING_FORMATS["date"], strategy.example()) + + schema = {"type": "string", "format": "email"} + strategy = ResourceGenerator(schema).generate_schema_strategy(schema) + assert re.match(STRING_FORMATS["email"], strategy.example()) + def test_generate_string_strategy_length(): schema = {"type": "string", "minLength": 5, "maxLength": 10}
The pypi release github action needs to be updated ``` You are using "pypa/gh-action-pypi-publish@master". The "master" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use "pypa/gh-action-pypi-publish@release/v1". If you feel adventurous, you may opt to use use "pypa/gh-action-pypi-publish@unstable/v1" instead. A more general recommendation is to pin to exact tags or commit shas. ```
0.0
86f6528c9285f4fce33bc56f3f836c2e68d0b825
[ "tests/contract/test_resource_generator.py::test_terminate_regex_line_start_change", "tests/contract/test_resource_generator.py::test_terminate_regex_line_start_and_end_change", "tests/contract/test_resource_generator.py::test_generate_string_strategy_ascii", "tests/contract/test_resource_generator.py::test_generate_string_strategy_format" ]
[ "tests/contract/test_resource_generator.py::test_terminate_regex_end_of_line_like_a_normal_person", "tests/contract/test_resource_generator.py::test_terminate_regex_line_end_change", "tests/contract/test_resource_generator.py::test_terminate_regex_no_termination_needed", "tests/contract/test_resource_generator.py::test_generate_number_strategy_inclusive[integer]", "tests/contract/test_resource_generator.py::test_generate_number_strategy_inclusive[number]", "tests/contract/test_resource_generator.py::test_generate_number_strategy_exclusive[integer]", "tests/contract/test_resource_generator.py::test_generate_number_strategy_exclusive[number]", "tests/contract/test_resource_generator.py::test_generate_number_strategy_no_inf_or_nan", "tests/contract/test_resource_generator.py::test_generate_string_strategy_regex", "tests/contract/test_resource_generator.py::test_generate_string_strategy_length", "tests/contract/test_resource_generator.py::test_generate_string_strategy_no_constraints", "tests/contract/test_resource_generator.py::test_generate_boolean_strategy", "tests/contract/test_resource_generator.py::test_generate_array_strategy_simple", "tests/contract/test_resource_generator.py::test_generate_array_strategy_items[items]", "tests/contract/test_resource_generator.py::test_generate_array_strategy_items[contains]", "tests/contract/test_resource_generator.py::test_generate_array_strategy_multiple_items", "tests/contract/test_resource_generator.py::test_generate_object_strategy_simple_combiner[allOf]", "tests/contract/test_resource_generator.py::test_generate_object_strategy_simple_combiner[oneOf]", "tests/contract/test_resource_generator.py::test_generate_object_strategy_simple_combiner[anyOf]", "tests/contract/test_resource_generator.py::test_generate_object_strategy_one_of[oneOf]", "tests/contract/test_resource_generator.py::test_generate_object_strategy_one_of[anyOf]", "tests/contract/test_resource_generator.py::test_generate_object_strategy_all_of", "tests/contract/test_resource_generator.py::test_generate_object_strategy_properties", "tests/contract/test_resource_generator.py::test_generate_object_strategy_empty", "tests/contract/test_resource_generator.py::test_generate_const_strategy[schema0]", "tests/contract/test_resource_generator.py::test_generate_const_strategy[schema1]", "tests/contract/test_resource_generator.py::test_generate_enum_strategy[schema0]", "tests/contract/test_resource_generator.py::test_generate_enum_strategy[schema1]", "tests/contract/test_resource_generator.py::test_generate_strategy_with_refs" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-11-08 03:11:35+00:00
apache-2.0
1,276
aws-cloudformation__cloudformation-cli-958
diff --git a/.pylintrc b/.pylintrc index 75a3f8c..2b4fb64 100644 --- a/.pylintrc +++ b/.pylintrc @@ -22,3 +22,6 @@ good-names=e,ex,f,fp,i,j,k,n,_ indent-string=' ' max-line-length=160 + +[DESIGN] +max-locals=16 diff --git a/README.md b/README.md index 703c35f..64d4fa0 100644 --- a/README.md +++ b/README.md @@ -60,7 +60,10 @@ cfn test --log-group-name cw_log_group --log-role-arn log_delivery_role_arn # Ha ``` Note: -* To use your type configuration in contract tests, you will need to save your type configuration json file in `~/.cfn-cli/typeConfiguration.json`. +* To use your type configuration in contract tests, you will need to save your type configuration json file in `~/.cfn-cli/typeConfiguration.json` or specify the file you would like to use + * `--typeconfig ./myResourceTypeConfig.json` + * `--typeconfig /test/myresource/config1.json` + * `--typeconfig C:\MyResource\typeconf.json` * To use `propertyTransform` in schema, you will need to install [PYJQ](https://pypi.org/project/pyjq/). This feature will not be available to use with contract tests on Windows OS diff --git a/src/rpdk/core/contract/hook_client.py b/src/rpdk/core/contract/hook_client.py index 74b8ad6..782ec51 100644 --- a/src/rpdk/core/contract/hook_client.py +++ b/src/rpdk/core/contract/hook_client.py @@ -57,6 +57,7 @@ class HookClient: # pylint: disable=too-many-instance-attributes log_group_name=None, log_role_arn=None, docker_image=None, + typeconfig=None, executable_entrypoint=None, target_info=None, profile=None, @@ -101,6 +102,7 @@ class HookClient: # pylint: disable=too-many-instance-attributes self._executable_entrypoint = executable_entrypoint self._target_info = self._setup_target_info(target_info) self._resolved_targets = {} + self._typeconfig = typeconfig @staticmethod def _properties_to_paths(schema, key): @@ -491,7 +493,7 @@ class HookClient: # pylint: disable=too-many-instance-attributes invocation_point, target, target_model, - TypeConfiguration.get_hook_configuration(), + TypeConfiguration.get_hook_configuration(self._typeconfig), **kwargs, ) start_time = time.time() diff --git a/src/rpdk/core/contract/resource_client.py b/src/rpdk/core/contract/resource_client.py index 40e3937..0e0fd37 100644 --- a/src/rpdk/core/contract/resource_client.py +++ b/src/rpdk/core/contract/resource_client.py @@ -171,6 +171,7 @@ class ResourceClient: # pylint: disable=too-many-instance-attributes log_group_name=None, log_role_arn=None, docker_image=None, + typeconfig=None, executable_entrypoint=None, profile=None, ): # pylint: disable=too-many-arguments @@ -213,6 +214,7 @@ class ResourceClient: # pylint: disable=too-many-instance-attributes self._docker_image = docker_image self._docker_client = docker.from_env() if self._docker_image else None self._executable_entrypoint = executable_entrypoint + self._typeconfig = typeconfig def _properties_to_paths(self, key): return {fragment_decode(prop, prefix="") for prop in self._schema.get(key, [])} @@ -736,7 +738,7 @@ class ResourceClient: # pylint: disable=too-many-instance-attributes action, current_model, previous_model, - TypeConfiguration.get_type_configuration(), + TypeConfiguration.get_type_configuration(self._typeconfig), **kwargs, ) start_time = time.time() diff --git a/src/rpdk/core/contract/type_configuration.py b/src/rpdk/core/contract/type_configuration.py index 1500318..209b257 100644 --- a/src/rpdk/core/contract/type_configuration.py +++ b/src/rpdk/core/contract/type_configuration.py @@ -6,48 +6,50 @@ from rpdk.core.exceptions import InvalidProjectError LOG = 
logging.getLogger(__name__) -TYPE_CONFIGURATION_FILE_PATH = "~/.cfn-cli/typeConfiguration.json" - class TypeConfiguration: TYPE_CONFIGURATION = None @staticmethod - def get_type_configuration(): + def get_type_configuration(typeconfigloc): + if typeconfigloc: + type_config_file_path = typeconfigloc + else: + type_config_file_path = "~/.cfn-cli/typeConfiguration.json" + LOG.debug( - "Loading type configuration setting file at '~/.cfn-cli/typeConfiguration.json'" + "Loading type configuration setting file at %s", + type_config_file_path, ) if TypeConfiguration.TYPE_CONFIGURATION is None: try: with open( - os.path.expanduser(TYPE_CONFIGURATION_FILE_PATH), encoding="utf-8" + os.path.expanduser(type_config_file_path), encoding="utf-8" ) as f: TypeConfiguration.TYPE_CONFIGURATION = json.load(f) except json.JSONDecodeError as json_decode_error: LOG.debug( "Type configuration file '%s' is invalid", - TYPE_CONFIGURATION_FILE_PATH, + type_config_file_path, ) raise InvalidProjectError( - "Type configuration file '%s' is invalid" - % TYPE_CONFIGURATION_FILE_PATH + "Type configuration file '%s' is invalid" % type_config_file_path ) from json_decode_error except FileNotFoundError: LOG.debug( "Type configuration file '%s' not Found, do nothing", - TYPE_CONFIGURATION_FILE_PATH, + type_config_file_path, ) return TypeConfiguration.TYPE_CONFIGURATION @staticmethod - def get_hook_configuration(): - # pylint: disable=unsubscriptable-object - type_configuration = TypeConfiguration.get_type_configuration() + def get_hook_configuration(typeconfigloc): + type_configuration = TypeConfiguration.get_type_configuration(typeconfigloc) if type_configuration: try: - return type_configuration["CloudFormationConfiguration"][ + return type_configuration.get("CloudFormationConfiguration", {})[ "HookConfiguration" - ].get("Properties") + ]["Properties"] except KeyError as e: LOG.warning("Hook configuration is invalid") raise InvalidProjectError("Hook configuration is invalid") from e diff --git a/src/rpdk/core/hook/init_hook.py b/src/rpdk/core/hook/init_hook.py index 308719b..279bf1c 100644 --- a/src/rpdk/core/hook/init_hook.py +++ b/src/rpdk/core/hook/init_hook.py @@ -32,6 +32,9 @@ def init_hook(args, project): project.init_hook(type_name, language, settings) project.generate(args.endpoint_url, args.region, args.target_schemas, args.profile) + # Reload the generated example schema + project.load_configuration_schema() + # generate the docs based on the example schema loaded project.generate_docs()
aws-cloudformation/cloudformation-cli
b8f69bb0345a033ba44516637e5d8a42d87a1608
diff --git a/src/rpdk/core/test.py b/src/rpdk/core/test.py index 19a5143..5d64e75 100644 --- a/src/rpdk/core/test.py +++ b/src/rpdk/core/test.py @@ -342,6 +342,7 @@ def get_contract_plugin_client(args, project, overrides, inputs): args.log_role_arn, executable_entrypoint=project.executable_entrypoint, docker_image=args.docker_image, + typeconfig=args.typeconfig, target_info=project._load_target_info( # pylint: disable=protected-access args.cloudformation_endpoint_url, args.region ), @@ -362,6 +363,7 @@ def get_contract_plugin_client(args, project, overrides, inputs): project.type_name, args.log_group_name, args.log_role_arn, + typeconfig=args.typeconfig, executable_entrypoint=project.executable_entrypoint, docker_image=args.docker_image, profile=args.profile, @@ -441,7 +443,7 @@ def setup_subparser(subparsers, parents): _sam_arguments(parser) # this parameter can be used to pass additional arguments to pytest after `--` - # for example, + # for example, cfn test -- -k contract_delete_update # to have pytest run a single test parser.add_argument( "--role-arn", help="Role used when performing handler operations." @@ -477,6 +479,11 @@ def setup_subparser(subparsers, parents): "of SAM", ) + parser.add_argument( + "--typeconfig", + help="typeConfiguration file to use. Default: '~/.cfn-cli/typeConfiguration.json.'", + ) + def _sam_arguments(parser): parser.add_argument( diff --git a/tests/contract/test_type_configuration.py b/tests/contract/test_type_configuration.py index d026099..6c4b452 100644 --- a/tests/contract/test_type_configuration.py +++ b/tests/contract/test_type_configuration.py @@ -1,3 +1,4 @@ +import os from unittest.mock import mock_open, patch import pytest @@ -29,17 +30,35 @@ def teardown_function(): def test_get_type_configuration_with_not_exist_file(): with patch("builtins.open", mock_open()) as f: f.side_effect = FileNotFoundError() - assert TypeConfiguration.get_type_configuration() is None + assert TypeConfiguration.get_type_configuration(None) is None + + +def test_get_type_configuration_with_default_typeconfig_location(): + with patch( + "builtins.open", mock_open(read_data=TYPE_CONFIGURATION_TEST_SETTING) + ) as f: + TypeConfiguration.get_type_configuration(None) + f.assert_called_with( + os.path.expanduser("~/.cfn-cli/typeConfiguration.json"), encoding="utf-8" + ) + + +def test_get_type_configuration_with_set_typeconfig_location(): + with patch( + "builtins.open", mock_open(read_data=TYPE_CONFIGURATION_TEST_SETTING) + ) as f: + TypeConfiguration.get_type_configuration("./test.json") + f.assert_called_with("./test.json", encoding="utf-8") @patch("builtins.open", mock_open(read_data=TYPE_CONFIGURATION_TEST_SETTING)) def test_get_type_configuration(): - type_configuration = TypeConfiguration.get_type_configuration() + type_configuration = TypeConfiguration.get_type_configuration(None) assert type_configuration["Credentials"]["ApiKey"] == "123" assert type_configuration["Credentials"]["ApplicationKey"] == "123" # get type config again, should be the same config - type_configuration = TypeConfiguration.get_type_configuration() + type_configuration = TypeConfiguration.get_type_configuration(None) assert type_configuration["Credentials"]["ApiKey"] == "123" assert type_configuration["Credentials"]["ApplicationKey"] == "123" @@ -47,19 +66,19 @@ @patch("builtins.open", mock_open(read_data=TYPE_CONFIGURATION_INVALID)) def test_get_type_configuration_with_invalid_json(): try: - TypeConfiguration.get_type_configuration() + TypeConfiguration.get_type_configuration(None) except InvalidProjectError: pass @patch("builtins.open", mock_open(read_data=HOOK_CONFIGURATION_TEST_SETTING)) def test_get_hook_configuration(): - hook_configuration = TypeConfiguration.get_hook_configuration() + hook_configuration = TypeConfiguration.get_hook_configuration(None) assert hook_configuration["Credentials"]["ApiKey"] == "123" assert hook_configuration["Credentials"]["ApplicationKey"] == "123" # get type config again, should be the same config - hook_configuration = TypeConfiguration.get_hook_configuration() + hook_configuration = TypeConfiguration.get_hook_configuration(None) assert hook_configuration["Credentials"]["ApiKey"] == "123" assert hook_configuration["Credentials"]["ApplicationKey"] == "123" @@ -67,7 +86,7 @@ @patch("builtins.open", mock_open(read_data=HOOK_CONFIGURATION_INVALID)) def test_get_hook_configuration_with_invalid_json(): with pytest.raises(InvalidProjectError) as execinfo: - TypeConfiguration.get_hook_configuration() + TypeConfiguration.get_hook_configuration(None) assert "Hook configuration is invalid" in str(execinfo.value) @@ -75,4 +94,4 @@ def test_get_hook_configuration_with_not_exist_file(): with patch("builtins.open", mock_open()) as f: f.side_effect = FileNotFoundError() - assert TypeConfiguration.get_hook_configuration() is None + assert TypeConfiguration.get_hook_configuration(None) is None diff --git a/tests/test_test.py b/tests/test_test.py index 35dfe07..6f17496 100644 --- a/tests/test_test.py +++ b/tests/test_test.py @@ -227,6 +227,7 @@ def test_test_command_happy_path_resource( mock_project.type_name, None, None, + typeconfig=None, executable_entrypoint=None, docker_image=None, profile=profile, @@ -334,6 +335,7 @@ def test_test_command_happy_path_hook( mock_project.type_name, None, None, + typeconfig=None, executable_entrypoint=None, docker_image=None, target_info=HOOK_TARGET_INFO,
Add option to specify the typeconfiguration.json file to use. Testing or submitting hooks/resources currently requires copying specific typeconfiguration.json files over to ~/.cfn-cli/typeConfiguration.json. It would be useful to either default to the local resource/hook directory where the cfn commands are being run, and/or be able to specify which typeConfiguration.json file to use.
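A minimal sketch of the lookup order this request implies, with a hypothetical helper name; the default path matches the one asserted in the accompanying tests:

```python
import os

def resolve_typeconfig_path(cli_value):
    # Prefer an explicitly passed --typeconfig value; otherwise fall back
    # to the documented default location under the user's home directory.
    if cli_value:
        return cli_value
    return os.path.expanduser("~/.cfn-cli/typeConfiguration.json")
```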
0.0
b8f69bb0345a033ba44516637e5d8a42d87a1608
[ "tests/contract/test_type_configuration.py::test_get_type_configuration_with_not_exist_file", "tests/contract/test_type_configuration.py::test_get_type_configuration_with_default_typeconfig_location", "tests/contract/test_type_configuration.py::test_get_type_configuration_with_set_typeconfig_location", "tests/contract/test_type_configuration.py::test_get_type_configuration", "tests/contract/test_type_configuration.py::test_get_type_configuration_with_invalid_json", "tests/contract/test_type_configuration.py::test_get_hook_configuration", "tests/contract/test_type_configuration.py::test_get_hook_configuration_with_invalid_json", "tests/contract/test_type_configuration.py::test_get_hook_configuration_with_not_exist_file", "tests/test_test.py::test_test_command_happy_path_resource[args_in0-pytest_args0-plugin_args0]", "tests/test_test.py::test_test_command_happy_path_resource[args_in1-pytest_args1-plugin_args1]", "tests/test_test.py::test_test_command_happy_path_resource[args_in2-pytest_args2-plugin_args2]", "tests/test_test.py::test_test_command_happy_path_resource[args_in3-pytest_args3-plugin_args3]", "tests/test_test.py::test_test_command_happy_path_resource[args_in4-pytest_args4-plugin_args4]", "tests/test_test.py::test_test_command_happy_path_resource[args_in5-pytest_args5-plugin_args5]", "tests/test_test.py::test_test_command_happy_path_hook[args_in0-pytest_args0-plugin_args0]", "tests/test_test.py::test_test_command_happy_path_hook[args_in1-pytest_args1-plugin_args1]", "tests/test_test.py::test_test_command_happy_path_hook[args_in2-pytest_args2-plugin_args2]", "tests/test_test.py::test_test_command_happy_path_hook[args_in3-pytest_args3-plugin_args3]", "tests/test_test.py::test_test_command_happy_path_hook[args_in4-pytest_args4-plugin_args4]", "tests/test_test.py::test_test_command_happy_path_hook[args_in5-pytest_args5-plugin_args5]", "tests/test_test.py::test_test_command_return_code_on_error" ]
[ "tests/test_test.py::test_test_command_module_project_succeeds", "tests/test_test.py::test_temporary_ini_file", "tests/test_test.py::test_get_overrides_no_root", "tests/test_test.py::test_get_overrides_file_not_found", "tests/test_test.py::test_get_overrides_invalid_file", "tests/test_test.py::test_get_overrides_empty_overrides", "tests/test_test.py::test_get_overrides_invalid_pointer_skipped", "tests/test_test.py::test_get_overrides_good_path", "tests/test_test.py::test_get_hook_overrides_no_root", "tests/test_test.py::test_get_hook_overrides_file_not_found", "tests/test_test.py::test_get_hook_overrides_invalid_file", "tests/test_test.py::test_get_hook_overrides_good_path", "tests/test_test.py::test_get_overrides_with_jinja[{\"CREATE\":", "tests/test_test.py::test_get_marker_options[schema0-]", "tests/test_test.py::test_get_marker_options[schema1-expected_marker_keywords1]", "tests/test_test.py::test_get_marker_options[schema2-expected_marker_keywords2]", "tests/test_test.py::test_with_inputs[{\"Name\":", "tests/test_test.py::test_with_inputs_invalid", "tests/test_test.py::test_get_input_invalid_root", "tests/test_test.py::test_get_input_input_folder_does_not_exist", "tests/test_test.py::test_get_input_file_not_found", "tests/test_test.py::test_use_both_sam_and_docker_arguments" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-12-29 23:19:11+00:00
apache-2.0
1,277
aws-cloudformation__cloudformation-cli-python-plugin-236
diff --git a/README.md b/README.md index cd5ec66..6854fe5 100644 --- a/README.md +++ b/README.md @@ -12,6 +12,14 @@ This plugin library helps to provide runtime bindings for the execution of your [![Build Status](https://travis-ci.com/aws-cloudformation/cloudformation-cli-python-plugin.svg?branch=master)](https://travis-ci.com/aws-cloudformation/cloudformation-cli-python-plugin) +## Community + +Join us on Discord! Connect & interact with CloudFormation developers & +experts, find channels to discuss and get help for our CLIs, cfn-lint, CloudFormation registry, StackSets, +Guard and more: + +[![Join our Discord](https://discordapp.com/api/guilds/981586120448020580/widget.png?style=banner3)](https://discord.gg/9zpd7TTRwq) + Installation ------------ diff --git a/python/rpdk/python/codegen.py b/python/rpdk/python/codegen.py index f5c49cf..60174aa 100644 --- a/python/rpdk/python/codegen.py +++ b/python/rpdk/python/codegen.py @@ -334,14 +334,24 @@ class Python36LanguagePlugin(LanguagePlugin): LOG.warning("Starting pip build.") try: - completed_proc = subprocess_run( # nosec - command, - stdout=PIPE, - stderr=PIPE, - cwd=base_path, - check=True, - shell=True, - ) + # On windows run pip command through the default shell (CMD) + if os.name == "nt": + completed_proc = subprocess_run( # nosec + command, + stdout=PIPE, + stderr=PIPE, + cwd=base_path, + check=True, + shell=True, + ) + else: + completed_proc = subprocess_run( # nosec + command, + stdout=PIPE, + stderr=PIPE, + cwd=base_path, + check=True, + ) LOG.warning("pip build finished.") except (FileNotFoundError, CalledProcessError) as e: raise DownstreamError("pip build failed") from e
aws-cloudformation/cloudformation-cli-python-plugin
1866ad7cd1b3c000cf9ce07ee90421a8788dc766
diff --git a/tests/plugin/codegen_test.py b/tests/plugin/codegen_test.py index a36d04b..aa64261 100644 --- a/tests/plugin/codegen_test.py +++ b/tests/plugin/codegen_test.py @@ -416,6 +416,43 @@ def test__build_pip(plugin): mock_pip.assert_called_once_with(sentinel.base_path) +def test__build_pip_posix(plugin): + patch_os_name = patch("rpdk.python.codegen.os.name", "posix") + patch_subproc = patch("rpdk.python.codegen.subprocess_run") + + # Path must be set outside simulated os.name + temppath = Path(str(sentinel.base_path)) + with patch_os_name, patch_subproc as mock_subproc: + plugin._pip_build(temppath) + + mock_subproc.assert_called_once_with( + plugin._make_pip_command(temppath), + stdout=ANY, + stderr=ANY, + cwd=temppath, + check=ANY, + ) + + +def test__build_pip_windows(plugin): + patch_os_name = patch("rpdk.python.codegen.os.name", "nt") + patch_subproc = patch("rpdk.python.codegen.subprocess_run") + + # Path must be set outside simulated os.name + temppath = Path(str(sentinel.base_path)) + with patch_os_name, patch_subproc as mock_subproc: + plugin._pip_build(temppath) + + mock_subproc.assert_called_once_with( + plugin._make_pip_command(temppath), + stdout=ANY, + stderr=ANY, + cwd=temppath, + check=ANY, + shell=True, + ) + + def test__build_docker(plugin): plugin._use_docker = True
It looks like the upgrade to 2.1.6 has broken dependency installation in `cfn submit --dry-run`. The pip installation in `cfn submit` appears to log pip usage instructions to stdout instead of actually installing the dependencies. If we downgrade to 2.1.5, then the dependencies are included in the build dir. If we run 2.1.6, then the build directory does not contain any of the pip dependencies that we'd expect. This is consistent if I downgrade and then re-upgrade, but I don't have a public project I can share to demonstrate the behaviour. Example .rpdk-config: ```{ "artifact_type": "HOOK", "typeName": "XXX::XXX::XXX", "language": "python37", "runtime": "python3.7", "entrypoint": "xxx.handlers.hook", "testEntrypoint": "xxx.handlers.hook", "settings": { "version": false, "subparser_name": null, "verbose": 0, "force": false, "type_name": null, "artifact_type": null, "endpoint_url": null, "region": null, "target_schemas": [], "use_docker": false, "protocolVersion": "2.0.0" } } ```
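The reported symptom matches a known `subprocess` pitfall that the fix above addresses; a minimal sketch reproducing it (not the plugin's actual code):

```python
import subprocess

# On POSIX, combining an argument *list* with shell=True makes only the
# first element the shell command string; the remaining elements become
# positional parameters of /bin/sh. A pip invocation therefore degrades
# to a bare `pip`, which just prints usage instructions instead of
# installing anything -- consistent with the bug report.
subprocess.run(["pip", "--version"], shell=True)   # effectively runs bare `pip`
subprocess.run(["pip", "--version"], shell=False)  # runs `pip --version`
```

On Windows (`os.name == "nt"`) the list is joined into a single command line first, which is why the fix keeps `shell=True` only on that platform.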
0.0
1866ad7cd1b3c000cf9ce07ee90421a8788dc766
[ "tests/plugin/codegen_test.py::test__build_pip_posix" ]
[ "tests/plugin/codegen_test.py::test_validate_no[y-True]", "tests/plugin/codegen_test.py::test_validate_no[Y-True]", "tests/plugin/codegen_test.py::test_validate_no[yes-True]", "tests/plugin/codegen_test.py::test_validate_no[Yes-True]", "tests/plugin/codegen_test.py::test_validate_no[YES-True]", "tests/plugin/codegen_test.py::test_validate_no[asdf-True]", "tests/plugin/codegen_test.py::test_validate_no[no-False]", "tests/plugin/codegen_test.py::test_validate_no[No-False0]", "tests/plugin/codegen_test.py::test_validate_no[No-False1]", "tests/plugin/codegen_test.py::test_validate_no[n-False]", "tests/plugin/codegen_test.py::test_validate_no[N-False]", "tests/plugin/codegen_test.py::test__remove_build_artifacts_file_found", "tests/plugin/codegen_test.py::test__remove_build_artifacts_file_not_found", "tests/plugin/codegen_test.py::test_initialize_resource", "tests/plugin/codegen_test.py::test_initialize_hook", "tests/plugin/codegen_test.py::test_package_resource_pip", "tests/plugin/codegen_test.py::test__pip_build_executable_not_found", "tests/plugin/codegen_test.py::test__pip_build_called_process_error", "tests/plugin/codegen_test.py::test__build_pip", "tests/plugin/codegen_test.py::test__build_pip_windows", "tests/plugin/codegen_test.py::test__build_docker", "tests/plugin/codegen_test.py::test__build_docker_posix", "tests/plugin/codegen_test.py::test__build_docker_windows", "tests/plugin/codegen_test.py::test__build_docker_no_euid", "tests/plugin/codegen_test.py::test__docker_build_good_path", "tests/plugin/codegen_test.py::test_get_plugin_information", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>0]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[ImageLoadError]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>1]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>2]" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-12-19 18:37:41+00:00
apache-2.0
1,278
aws-cloudformation__cloudformation-cli-python-plugin-238
diff --git a/python/rpdk/python/codegen.py b/python/rpdk/python/codegen.py index f5c49cf..60174aa 100644 --- a/python/rpdk/python/codegen.py +++ b/python/rpdk/python/codegen.py @@ -334,14 +334,24 @@ class Python36LanguagePlugin(LanguagePlugin): LOG.warning("Starting pip build.") try: - completed_proc = subprocess_run( # nosec - command, - stdout=PIPE, - stderr=PIPE, - cwd=base_path, - check=True, - shell=True, - ) + # On windows run pip command through the default shell (CMD) + if os.name == "nt": + completed_proc = subprocess_run( # nosec + command, + stdout=PIPE, + stderr=PIPE, + cwd=base_path, + check=True, + shell=True, + ) + else: + completed_proc = subprocess_run( # nosec + command, + stdout=PIPE, + stderr=PIPE, + cwd=base_path, + check=True, + ) LOG.warning("pip build finished.") except (FileNotFoundError, CalledProcessError) as e: raise DownstreamError("pip build failed") from e diff --git a/src/cloudformation_cli_python_lib/resource.py b/src/cloudformation_cli_python_lib/resource.py index bf7ba30..bfc9384 100644 --- a/src/cloudformation_cli_python_lib/resource.py +++ b/src/cloudformation_cli_python_lib/resource.py @@ -164,9 +164,13 @@ class Resource: try: return UnmodelledRequest( clientRequestToken=request.bearerToken, - desiredResourceState=request.requestData.resourceProperties, + desiredResourceState=request.requestData.resourceProperties + if request.requestData.resourceProperties + else {}, previousResourceState=request.requestData.previousResourceProperties, - desiredResourceTags=request.requestData.stackTags, + desiredResourceTags=request.requestData.stackTags + if request.requestData.stackTags + else {}, previousResourceTags=request.requestData.previousStackTags, systemTags=request.requestData.systemTags, previousSystemTags=request.requestData.previousSystemTags,
aws-cloudformation/cloudformation-cli-python-plugin
87c31b86cfd21470c9b4b9dced3bfb391a0671b8
diff --git a/tests/plugin/codegen_test.py b/tests/plugin/codegen_test.py index a36d04b..aa64261 100644 --- a/tests/plugin/codegen_test.py +++ b/tests/plugin/codegen_test.py @@ -416,6 +416,43 @@ def test__build_pip(plugin): mock_pip.assert_called_once_with(sentinel.base_path) +def test__build_pip_posix(plugin): + patch_os_name = patch("rpdk.python.codegen.os.name", "posix") + patch_subproc = patch("rpdk.python.codegen.subprocess_run") + + # Path must be set outside simulated os.name + temppath = Path(str(sentinel.base_path)) + with patch_os_name, patch_subproc as mock_subproc: + plugin._pip_build(temppath) + + mock_subproc.assert_called_once_with( + plugin._make_pip_command(temppath), + stdout=ANY, + stderr=ANY, + cwd=temppath, + check=ANY, + ) + + +def test__build_pip_windows(plugin): + patch_os_name = patch("rpdk.python.codegen.os.name", "nt") + patch_subproc = patch("rpdk.python.codegen.subprocess_run") + + # Path must be set outside simulated os.name + temppath = Path(str(sentinel.base_path)) + with patch_os_name, patch_subproc as mock_subproc: + plugin._pip_build(temppath) + + mock_subproc.assert_called_once_with( + plugin._make_pip_command(temppath), + stdout=ANY, + stderr=ANY, + cwd=temppath, + check=ANY, + shell=True, + ) + + def test__build_docker(plugin): plugin._use_docker = True
If a resource is created without Properties, request.desiredResourceState is None. This is not technically against the type annotations, but it would be nice if this wasn't the case. Some snippets to test this with: Schema: ```json { "properties": { "Identifier": {"type": "string"}, }, "required": [], "readOnlyProperties": ["/properties/Identifier"], "primaryIdentifier": ["/properties/Identifier"], } ``` handlers.py, inside of the create handler: ```python # this fails with `AttributeError: 'NoneType' object has no attribute 'Identifier'` model = request.desiredResourceState model.Identifier = identifier_utils.generate_resource_identifier( request.stackId, request.logicalResourceIdentifier, request.clientRequestToken, 255 ) # this works model = request.desiredResourceState if request.desiredResourceState else ResourceModel(Identifier=None) model.Identifier = identifier_utils.generate_resource_identifier( request.stackId, request.logicalResourceIdentifier, request.clientRequestToken, 255 ) ``` I believe this can be solved by changing [this code in resource.py](https://github.com/aws-cloudformation/cloudformation-cli-python-plugin/blob/master/src/cloudformation_cli_python_lib/resource.py#L167) from ```python return UnmodelledRequest( # [...] desiredResourceState=request.requestData.resourceProperties, # [...] ).to_modelled(self._model_cls, self._type_configuration_model_cls) ``` to ```python return UnmodelledRequest( # [...] desiredResourceState=request.requestData.resourceProperties if request.requestData.resourceProperties else {}, # [...] ).to_modelled(self._model_cls, self._type_configuration_model_cls) ``` This probably also applies to previousResourceState, and maybe even to the Tag-related properties.
0.0
87c31b86cfd21470c9b4b9dced3bfb391a0671b8
[ "tests/plugin/codegen_test.py::test__build_pip_posix" ]
[ "tests/plugin/codegen_test.py::test_validate_no[y-True]", "tests/plugin/codegen_test.py::test_validate_no[Y-True]", "tests/plugin/codegen_test.py::test_validate_no[yes-True]", "tests/plugin/codegen_test.py::test_validate_no[Yes-True]", "tests/plugin/codegen_test.py::test_validate_no[YES-True]", "tests/plugin/codegen_test.py::test_validate_no[asdf-True]", "tests/plugin/codegen_test.py::test_validate_no[no-False]", "tests/plugin/codegen_test.py::test_validate_no[No-False0]", "tests/plugin/codegen_test.py::test_validate_no[No-False1]", "tests/plugin/codegen_test.py::test_validate_no[n-False]", "tests/plugin/codegen_test.py::test_validate_no[N-False]", "tests/plugin/codegen_test.py::test__remove_build_artifacts_file_found", "tests/plugin/codegen_test.py::test__remove_build_artifacts_file_not_found", "tests/plugin/codegen_test.py::test_initialize_resource", "tests/plugin/codegen_test.py::test_initialize_hook", "tests/plugin/codegen_test.py::test_package_resource_pip", "tests/plugin/codegen_test.py::test__pip_build_executable_not_found", "tests/plugin/codegen_test.py::test__pip_build_called_process_error", "tests/plugin/codegen_test.py::test__build_pip", "tests/plugin/codegen_test.py::test__build_pip_windows", "tests/plugin/codegen_test.py::test__build_docker", "tests/plugin/codegen_test.py::test__build_docker_posix", "tests/plugin/codegen_test.py::test__build_docker_windows", "tests/plugin/codegen_test.py::test__build_docker_no_euid", "tests/plugin/codegen_test.py::test__docker_build_good_path", "tests/plugin/codegen_test.py::test_get_plugin_information", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>0]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[ImageLoadError]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>1]", "tests/plugin/codegen_test.py::test__docker_build_bad_path[<lambda>2]" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2022-12-21 14:34:13+00:00
apache-2.0
1,279
aws__aws-cli-3139
diff --git a/.changes/next-release/bugfix-Configuration-17224.json b/.changes/next-release/bugfix-Configuration-17224.json new file mode 100644 index 000000000..6c0270eb1 --- /dev/null +++ b/.changes/next-release/bugfix-Configuration-17224.json @@ -0,0 +1,5 @@ +{ + "type": "bugfix", + "category": "Configuration", + "description": "Fixes `#2996 <https://github.com/aws/aws-cli/issues/2996>`__. Fixed a bug where config file updates would sometimes append new sections to the previous section without adding a newline." +} diff --git a/awscli/customizations/configure/writer.py b/awscli/customizations/configure/writer.py index 2ab842c7a..4aedabc43 100644 --- a/awscli/customizations/configure/writer.py +++ b/awscli/customizations/configure/writer.py @@ -76,8 +76,22 @@ class ConfigFileWriter(object): os.O_WRONLY | os.O_CREAT, 0o600), 'w'): pass + def _check_file_needs_newline(self, filename): + # check if the last byte is a newline + with open(filename, 'rb') as f: + # check if the file is empty + f.seek(0, os.SEEK_END) + if not f.tell(): + return False + f.seek(-1, os.SEEK_END) + last = f.read() + return last != b'\n' + def _write_new_section(self, section_name, new_values, config_filename): + needs_newline = self._check_file_needs_newline(config_filename) with open(config_filename, 'a') as f: + if needs_newline: + f.write('\n') f.write('[%s]\n' % section_name) contents = [] self._insert_new_values(line_number=0,
aws/aws-cli
9fe8025a3b29925037f09bdf823cb5202347c6c8
diff --git a/tests/unit/customizations/configure/test_writer.py b/tests/unit/customizations/configure/test_writer.py index 45ae72329..92fd0f477 100644 --- a/tests/unit/customizations/configure/test_writer.py +++ b/tests/unit/customizations/configure/test_writer.py @@ -356,3 +356,16 @@ class TestConfigFileWriter(unittest.TestCase): '[preview]\n' 'cloudfront = true\n' ) + + def test_appends_newline_on_new_section(self): + original = ( + '[preview]\n' + 'cloudfront = true' + ) + self.assert_update_config( + original, {'region': 'us-west-2', '__section__': 'new-section'}, + '[preview]\n' + 'cloudfront = true\n' + '[new-section]\n' + 'region = us-west-2\n' + )
Adding a new profile should add a newline before it. Have multiple IAM users configured in `~/.aws/credentials`. Added an IAM user as follows: `aws configure --profile NEW_PROFILE_NAME`. Completed successfully as expected. However, the next step of running a command resulted in an error: ``` PS C:\WINDOWS\system32> aws --profile NEW_PROFILE_NAME s3 ls Unable to locate credentials. You can configure credentials by running "aws configure". ``` Checking the `~/.aws/credentials` file showed the new creds were appended to the file without a newline, as follows: ``` [OLD_PROFILE_NAME] aws_access_key_id = AAAAAAAAAAAAAAAAAAAA aws_secret_access_key = KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK region=XX-east-X toolkit_artifact_guid=11111111-1111-1111-1111-111111111111[NEW_PROFILE_NAME] aws_access_key_id = BBBBBBBBBBBBBBBBBBBB aws_secret_access_key = JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ ``` IMHO this is easily fixable by echoing a newline before the new creds. CLI version: `aws-cli/1.12.2 Python/2.7.14 Windows/10 botocore/1.8.2`
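The fix above boils down to inspecting the config file's final byte before appending a new section; a condensed sketch of that check:

```python
import os

def needs_leading_newline(path):
    # An appended section must start on its own line, so check whether
    # the existing file already ends with a newline byte.
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        if f.tell() == 0:        # empty file needs no separator
            return False
        f.seek(-1, os.SEEK_END)  # read only the last byte
        return f.read() != b"\n"
```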
0.0
9fe8025a3b29925037f09bdf823cb5202347c6c8
[ "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_appends_newline_on_new_section" ]
[ "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_add_to_nested_with_nested_in_the_end", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_add_to_nested_with_nested_in_the_middle", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_can_handle_empty_section", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_config_file_does_not_exist", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_double_quoted_profile_name", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_handles_no_spaces", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_insert_new_value_in_middle_section", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_insert_values_in_middle_section", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_nested_attributes_new_file", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_new_config_file", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_permissions_on_new_file", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_profile_with_multiple_spaces", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_section_does_not_exist", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_spaces_around_key_names", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_unquoted_profile_name", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_config_with_commented_section", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_config_with_comments", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_nested_attr_no_prior_nesting", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_nested_attribute", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_single_existing_value", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_single_existing_value_no_spaces", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_single_new_values", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_update_value_with_square_brackets", "tests/unit/customizations/configure/test_writer.py::TestConfigFileWriter::test_updated_nested_attribute_new_section" ]
{ "failed_lite_validators": [ "has_added_files" ], "has_test_patch": true, "is_lite": false }
2018-02-08 18:54:24+00:00
apache-2.0
1,280
aws__aws-cli-6603
diff --git a/.changes/next-release/bugfix-s3-66408.json b/.changes/next-release/bugfix-s3-66408.json new file mode 100644 index 000000000..e629c877e --- /dev/null +++ b/.changes/next-release/bugfix-s3-66408.json @@ -0,0 +1,5 @@ +{ + "type": "bugfix", + "category": "s3", + "description": "Support for S3 Glacer Instant Retrieval storage class. Fixes `#6587 <https://github.com/aws/aws-cli/issues/6587>`__" +} diff --git a/awscli/customizations/s3/subcommands.py b/awscli/customizations/s3/subcommands.py index b8dc4b86e..57ed1ea92 100644 --- a/awscli/customizations/s3/subcommands.py +++ b/awscli/customizations/s3/subcommands.py @@ -249,12 +249,12 @@ SSE_C_COPY_SOURCE_KEY = { STORAGE_CLASS = {'name': 'storage-class', 'choices': ['STANDARD', 'REDUCED_REDUNDANCY', 'STANDARD_IA', 'ONEZONE_IA', 'INTELLIGENT_TIERING', 'GLACIER', - 'DEEP_ARCHIVE'], + 'DEEP_ARCHIVE', 'GLACIER_IR'], 'help_text': ( "The type of storage to use for the object. " "Valid choices are: STANDARD | REDUCED_REDUNDANCY " "| STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING " - "| GLACIER | DEEP_ARCHIVE. " + "| GLACIER | DEEP_ARCHIVE | GLACIER_IR. " "Defaults to 'STANDARD'")}
aws/aws-cli
23ef06b14f3b783d62ee618c942bd4f0014cffe3
diff --git a/tests/unit/customizations/s3/test_copy_params.py b/tests/unit/customizations/s3/test_copy_params.py index f9a2a82f9..1f735f96d 100644 --- a/tests/unit/customizations/s3/test_copy_params.py +++ b/tests/unit/customizations/s3/test_copy_params.py @@ -80,6 +80,15 @@ class TestGetObject(BaseAWSCommandParamsTest): 'StorageClass': u'STANDARD_IA'} self.assert_params(cmdline, result) + def test_glacier_ir_storage_class(self): + cmdline = self.prefix + cmdline += self.file_path + cmdline += ' s3://mybucket/mykey' + cmdline += ' --storage-class GLACIER_IR' + result = {'Bucket': u'mybucket', 'Key': u'mykey', + 'StorageClass': u'GLACIER_IR'} + self.assert_params(cmdline, result) + def test_website_redirect(self): cmdline = self.prefix cmdline += self.file_path
aws s3 cp with GLACIER_IR storage class. Confirm by changing [ ] to [x] below: - [X] I've gone through the [User Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) and the [API reference](https://docs.aws.amazon.com/cli/latest/reference/) - [X] I've searched for [previous similar issues](https://github.com/aws/aws-cli/issues) and didn't find any solution. The issue is about usage of: aws s3 cp storage class **Platform/OS/Hardware/Device** Cent 7.5 aws-cli/2.4.4 **Describe the question** Does anybody know when the AWS CLI will support the new Glacier Instant Retrieval class? It works with the API, but not via s3 cp. root ~ $ `aws s3 cp --follow-symlinks /data/emfs1/.projects/premiere/test/IMG_6723.MOV s3://XXX --storage-class GLACIER_IR` ``` aws: error: argument --storage-class: Invalid choice, valid choices are: STANDARD | REDUCED_REDUNDANCY STANDARD_IA | ONEZONE_IA INTELLIGENT_TIERING | GLACIER DEEP_ARCHIVE Invalid choice: 'GLACIER_IR', maybe you meant: STANDARD | REDUCED_REDUNDANCY STANDARD_IA | ONEZONE_IA INTELLIGENT_TIERING | GLACIER DEEP_ARCHIVE GLACIER ``` **Logs/output** ``` Invalid choice: 'GLACIER_IR', maybe you meant: STANDARD | REDUCED_REDUNDANCY STANDARD_IA | ONEZONE_IA INTELLIGENT_TIERING | GLACIER DEEP_ARCHIVE ```
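As the report notes, the storage class already works through the API; a sketch using boto3 with placeholder bucket/key names:

```python
import boto3

s3 = boto3.client("s3")

# GLACIER_IR is accepted by the PutObject API itself; only the CLI's
# --storage-class choice list needed the new value added (the fix above).
s3.put_object(
    Bucket="mybucket",          # placeholder
    Key="mykey",                # placeholder
    Body=b"example data",
    StorageClass="GLACIER_IR",
)
```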
0.0
23ef06b14f3b783d62ee618c942bd4f0014cffe3
[ "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_glacier_ir_storage_class" ]
[ "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_acl", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_content_params", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_content_type", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_grants", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_grants_bad", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_simple", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_sse", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_standard_ia_storage_class", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_storage_class", "tests/unit/customizations/s3/test_copy_params.py::TestGetObject::test_website_redirect" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_media", "has_added_files" ], "has_test_patch": true, "is_lite": false }
2021-12-08 21:14:42+00:00
apache-2.0
1,281
aws__aws-cli-7342
diff --git a/.changes/next-release/bugfix-docs-61070.json b/.changes/next-release/bugfix-docs-61070.json new file mode 100644 index 000000000..b8005c2aa --- /dev/null +++ b/.changes/next-release/bugfix-docs-61070.json @@ -0,0 +1,5 @@ +{ + "type": "bugfix", + "category": "docs", + "description": "Fixes `#7338 <https://github.com/aws/aws-cli/issues/7338>`__. Remove global options from topic tags." +} diff --git a/awscli/clidocs.py b/awscli/clidocs.py index 3f65041be..9d8fb9d59 100644 --- a/awscli/clidocs.py +++ b/awscli/clidocs.py @@ -655,6 +655,9 @@ class TopicListerDocumentEventHandler(CLIDocumentEventHandler): def doc_options_end(self, help_command, **kwargs): pass + def doc_global_option(self, help_command, **kwargs): + pass + def doc_subitems_start(self, help_command, **kwargs): doc = help_command.doc doc.style.h2('Available Topics')
aws/aws-cli
1df6ee16f4472be368173f176e1c2c7a67e22dfb
diff --git a/tests/unit/test_clidocs.py b/tests/unit/test_clidocs.py index 600657310..be6a3ec7a 100644 --- a/tests/unit/test_clidocs.py +++ b/tests/unit/test_clidocs.py @@ -652,6 +652,11 @@ class TestTopicDocumentEventHandler(TestTopicDocumentEventHandlerBase): contents = self.cmd.doc.getvalue().decode('utf-8') self.assertIn(ref_body, contents) + def test_excludes_global_options(self): + self.doc_handler.doc_global_option(self.cmd) + global_options = self.cmd.doc.getvalue().decode('utf-8') + self.assertNotIn('Global Options', global_options) + class TestGlobalOptionsDocumenter(unittest.TestCase): def create_help_command(self):
Unknown interpreted text role "abc" while calling aws help config-vars ### Describe the bug An error is shown instead of information on how to configure variables. ### Expected Behavior Display help on how to configure variables. ### Current Behavior ``` <string>:450: (ERROR/3) Unknown interpreted text role "doc". <string>:567: (SEVERE/4) Title level inconsistent: ============== Global Options ============== <string>:567: (SEVERE/4) Title level inconsistent: ============== Global Options ============== ``` ### Reproduction Steps aws help config-vars ### Possible Solution _No response_ ### Additional Information/Context _No response_ ### CLI version used aws-cli/2.8.2 Python/3.9.11 Windows/10 exe/AMD64 prompt/off ### Environment details (OS name and version, etc.) Windows 11 Home 22H2 (build 22621.608)
0.0
1df6ee16f4472be368173f176e1c2c7a67e22dfb
[ "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_excludes_global_options" ]
[ "tests/unit/test_clidocs.py::TestRecursiveShapes::test_handle_empty_nested_struct", "tests/unit/test_clidocs.py::TestRecursiveShapes::test_handle_memberless_output_shape", "tests/unit/test_clidocs.py::TestRecursiveShapes::test_handle_no_output_shape", "tests/unit/test_clidocs.py::TestRecursiveShapes::test_handle_recursive_input", "tests/unit/test_clidocs.py::TestRecursiveShapes::test_handle_recursive_output", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_breadcrumbs_html", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_breadcrumbs_man", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_breadcrumbs_operation_command_html", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_breadcrumbs_service_command_html", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_breadcrumbs_wait_command_html", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_description_only_for_crosslink_manpage", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_enum_values", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_json_header_shape", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_nested_list", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_nested_map", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_nested_structure", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_documents_recursive_input", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_includes_streaming_blob_options", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_includes_tagged_union_options", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_includes_webapi_crosslink_in_html", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_streaming_blob_comes_after_docstring", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_tagged_union_comes_after_docstring_options", "tests/unit/test_clidocs.py::TestCLIDocumentEventHandler::test_tagged_union_comes_after_docstring_output", "tests/unit/test_clidocs.py::TestTopicListerDocumentEventHandler::test_breadcrumbs", "tests/unit/test_clidocs.py::TestTopicListerDocumentEventHandler::test_description", "tests/unit/test_clidocs.py::TestTopicListerDocumentEventHandler::test_subitems_start", "tests/unit/test_clidocs.py::TestTopicListerDocumentEventHandler::test_subitems_start_html", "tests/unit/test_clidocs.py::TestTopicListerDocumentEventHandler::test_title", "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_breadcrumbs", "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_description", "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_description_no_tags", "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_description_tags_in_body", "tests/unit/test_clidocs.py::TestTopicDocumentEventHandler::test_title", "tests/unit/test_clidocs.py::TestGlobalOptionsDocumenter::test_doc_global_options", "tests/unit/test_clidocs.py::TestGlobalOptionsDocumenter::test_doc_global_synopsis" ]
{ "failed_lite_validators": [ "has_added_files" ], "has_test_patch": true, "is_lite": false }
2022-10-14 01:18:24+00:00
apache-2.0
1,282
aws__aws-cli-7612
diff --git a/awscli/customizations/eks/get_token.py b/awscli/customizations/eks/get_token.py index 0b10f2bc8..aedc1022f 100644 --- a/awscli/customizations/eks/get_token.py +++ b/awscli/customizations/eks/get_token.py @@ -20,6 +20,7 @@ from datetime import datetime, timedelta from botocore.signers import RequestSigner from botocore.model import ServiceId +from awscli.formatter import get_formatter from awscli.customizations.commands import BasicCommand from awscli.customizations.utils import uni_print from awscli.customizations.utils import validate_mutually_exclusive @@ -142,7 +143,11 @@ class GetTokenCommand(BasicCommand): }, } - uni_print(json.dumps(full_object)) + output = self._session.get_config_variable('output') + formatter = get_formatter(output, parsed_globals) + formatter.query = parsed_globals.query + + formatter(self.NAME, full_object) uni_print('\n') return 0
aws/aws-cli
4316b69807cd69d2e323f4552139ccf920ec568e
diff --git a/tests/functional/eks/test_get_token.py b/tests/functional/eks/test_get_token.py index bf2165899..f22211fa0 100644 --- a/tests/functional/eks/test_get_token.py +++ b/tests/functional/eks/test_get_token.py @@ -95,6 +95,48 @@ class TestGetTokenCommand(BaseAWSCommandParamsTest): }, ) + @mock.patch('awscli.customizations.eks.get_token.datetime') + def test_query_nested_object(self, mock_datetime): + mock_datetime.utcnow.return_value = datetime(2019, 10, 23, 23, 0, 0, 0) + cmd = 'eks get-token --cluster-name %s' % self.cluster_name + cmd += ' --query status' + response = self.run_get_token(cmd) + self.assertEqual( + response, + { + "expirationTimestamp": "2019-10-23T23:14:00Z", + "token": mock.ANY, # This is asserted in later cases + }, + ) + + def test_query_value(self): + cmd = 'eks get-token --cluster-name %s' % self.cluster_name + cmd += ' --query apiVersion' + response = self.run_get_token(cmd) + self.assertEqual( + response, "client.authentication.k8s.io/v1beta1", + ) + + @mock.patch('awscli.customizations.eks.get_token.datetime') + def test_output_text(self, mock_datetime): + mock_datetime.utcnow.return_value = datetime(2019, 10, 23, 23, 0, 0, 0) + cmd = 'eks get-token --cluster-name %s' % self.cluster_name + cmd += ' --output text' + stdout, _, _ = self.run_cmd(cmd) + self.assertIn("ExecCredential", stdout) + self.assertIn("client.authentication.k8s.io/v1beta1", stdout) + self.assertIn("2019-10-23T23:14:00Z", stdout) + + @mock.patch('awscli.customizations.eks.get_token.datetime') + def test_output_table(self, mock_datetime): + mock_datetime.utcnow.return_value = datetime(2019, 10, 23, 23, 0, 0, 0) + cmd = 'eks get-token --cluster-name %s' % self.cluster_name + cmd += ' --output table' + stdout, _, _ = self.run_cmd(cmd) + self.assertIn("ExecCredential", stdout) + self.assertIn("client.authentication.k8s.io/v1beta1", stdout) + self.assertIn("2019-10-23T23:14:00Z", stdout) + def test_url(self): cmd = 'eks get-token --cluster-name %s' % self.cluster_name response = self.run_get_token(cmd)
--query does not work with aws eks get-token. It looks like `aws eks get-token` returns JSON-like output, but it is not pretty-printed and does not work with `--query`; for example, ``` aws eks get-token --cluster-name myclustername --query status.token ``` still returns the complete output. The output format cannot be changed either. Tested with ``` aws --version aws-cli/1.16.218 Python/3.6.8 Linux/4.15.0-1047-aws botocore/1.12.208 ``` but others reported the same for `1.16.230`: https://stackoverflow.com/a/57878048/1545325 Thank you!
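Until the fix above routes the command's output through the standard CLI formatter, one client-side workaround is to parse the JSON directly; a sketch with a placeholder cluster name:

```python
import json
import subprocess

# Capture the raw ExecCredential document the command prints.
out = subprocess.run(
    ["aws", "eks", "get-token", "--cluster-name", "myclustername"],
    capture_output=True, text=True, check=True,
).stdout

# The bearer token targeted by the `--query status.token` example
# lives under the "status" key of the document.
token = json.loads(out)["status"]["token"]
```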
0.0
4316b69807cd69d2e323f4552139ccf920ec568e
[ "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_query_nested_object", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_query_value" ]
[ "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_deprecated", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_empty", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_malformed", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_unknown", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_v1", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_api_version_discovery_v1beta1", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_get_token", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_output_table", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_output_text", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_token_has_no_padding", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_url", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_url_different_partition", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_url_with_arn", "tests/functional/eks/test_get_token.py::TestGetTokenCommand::test_url_with_region" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2023-01-24 17:07:14+00:00
apache-2.0
1,283
aws__aws-xray-sdk-python-93
diff --git a/aws_xray_sdk/ext/resources/aws_para_whitelist.json b/aws_xray_sdk/ext/resources/aws_para_whitelist.json index bc6642a..3a89b2e 100644 --- a/aws_xray_sdk/ext/resources/aws_para_whitelist.json +++ b/aws_xray_sdk/ext/resources/aws_para_whitelist.json @@ -1,5 +1,14 @@ { "services": { + "sns": { + "operations": { + "Publish": { + "request_parameters": [ + "TopicArn" + ] + } + } + }, "dynamodb": { "operations": { "BatchGetItem": {
aws/aws-xray-sdk-python
303861097a2dc401dfe0c3fdafed184a0aefb2b2
diff --git a/tests/ext/botocore/test_botocore.py b/tests/ext/botocore/test_botocore.py index 8d48785..f006de2 100644 --- a/tests/ext/botocore/test_botocore.py +++ b/tests/ext/botocore/test_botocore.py @@ -150,3 +150,26 @@ def test_pass_through_on_context_missing(): assert result is not None xray_recorder.configure(context_missing='RUNTIME_ERROR') + + +def test_sns_publish_parameters(): + sns = session.create_client('sns', region_name='us-west-2') + response = { + 'ResponseMetadata': { + 'RequestId': REQUEST_ID, + 'HTTPStatusCode': 200, + } + } + + with Stubber(sns) as stubber: + stubber.add_response('publish', response, {'TopicArn': 'myAmazingTopic', 'Message': 'myBodaciousMessage'}) + sns.publish(TopicArn='myAmazingTopic', Message='myBodaciousMessage') + + subsegment = xray_recorder.current_segment().subsegments[0] + assert subsegment.http['response']['status'] == 200 + + aws_meta = subsegment.aws + assert aws_meta['topic_arn'] == 'myAmazingTopic' + assert aws_meta['request_id'] == REQUEST_ID + assert aws_meta['region'] == 'us-west-2' + assert aws_meta['operation'] == 'Publish'
Add SNS Service "Publish" operation to the aws_para_whitelist. Currently the SNS Publish operation shows up with a minimal set of metadata: ``` "aws": { "operation": "Publish", "region": "us-east-1", "request_id": "a939cee1-7c48-5675-b385-9ae2206dc121" } ``` This should include at least the known internal AWS resources like `TopicArn` or `TargetArn`, and maybe even the `PhoneNumber`.
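A sketch of how the new whitelist entry surfaces once botocore is patched, loosely following the test above (topic name and message reuse the test's placeholder values):

```python
import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # instrument botocore so AWS SDK calls record subsegments

sns = boto3.client("sns", region_name="us-west-2")
with xray_recorder.in_segment("example"):
    # With "TopicArn" whitelisted for Publish, the subsegment's aws
    # metadata now carries topic_arn alongside request_id and region.
    sns.publish(TopicArn="myAmazingTopic", Message="myBodaciousMessage")
```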
0.0
303861097a2dc401dfe0c3fdafed184a0aefb2b2
[ "tests/ext/botocore/test_botocore.py::test_sns_publish_parameters" ]
[ "tests/ext/botocore/test_botocore.py::test_ddb_table_name", "tests/ext/botocore/test_botocore.py::test_s3_bucket_name_capture", "tests/ext/botocore/test_botocore.py::test_list_parameter_counting", "tests/ext/botocore/test_botocore.py::test_map_parameter_grouping", "tests/ext/botocore/test_botocore.py::test_pass_through_on_context_missing" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2018-09-07 11:44:11+00:00
apache-2.0
1,284
awslabs__aws-cfn-template-flip-22
diff --git a/cfn_flip/__init__.py b/cfn_flip/__init__.py index 34e9257..d5b112a 100644 --- a/cfn_flip/__init__.py +++ b/cfn_flip/__init__.py @@ -10,19 +10,11 @@ or in the "license" file accompanying this file. This file is distributed on an from .clean import clean from .custom_json import DateTimeAwareJsonEncoder -from .custom_yaml import custom_yaml +from .custom_yaml import CustomDumper, CustomLoader import collections import json +import yaml -class MyDumper(custom_yaml.Dumper): - """ - Indent block sequences from parent using more common style - (" - entry" vs "- entry"). - Causes fewer problems with validation and tools. - """ - - def increase_indent(self,flow=False, indentless=False): - return super(MyDumper,self).increase_indent(flow, False) def to_json(template, clean_up=False): """ @@ -30,7 +22,7 @@ undoing yaml short syntax where detected """ - data = custom_yaml.load(template) + data = yaml.load(template, Loader=CustomLoader) if clean_up: data = clean(data) @@ -48,7 +40,7 @@ if clean_up: data = clean(data) - return custom_yaml.dump(data, Dumper=MyDumper, default_flow_style=False) + return yaml.dump(data, Dumper=CustomDumper, default_flow_style=False) def flip(template, clean_up=False): """ diff --git a/cfn_flip/clean.py b/cfn_flip/clean.py index 318e445..2b731b2 100644 --- a/cfn_flip/clean.py +++ b/cfn_flip/clean.py @@ -1,11 +1,11 @@ -""" -Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at - - http://aws.amazon.com/apache2.0/ - -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +""" +Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at + + http://aws.amazon.com/apache2.0/ + +or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import json @@ -27,7 +27,7 @@ def convert_join(sep, parts): parts[i] = "${{{}}}".format(part["Ref"]) elif "Fn::GetAtt" in part: params = part["Fn::GetAtt"] - parts[i] = "${{{}.{}}}".format(params[0], params[1]) + parts[i] = "${{{}}}".format(".".join(params)) else: param_name = "Param{}".format(len(args) + 1) args[param_name] = part @@ -42,7 +42,7 @@ return { "Fn::Sub": [source, args], } - + return { "Fn::Sub": source, } diff --git a/cfn_flip/custom_yaml.py b/cfn_flip/custom_yaml.py index a2ff89c..6ddb14c 100644 --- a/cfn_flip/custom_yaml.py +++ b/cfn_flip/custom_yaml.py @@ -1,23 +1,37 @@ -""" -Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at - - http://aws.amazon.com/apache2.0/ - -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +""" +Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at + + http://aws.amazon.com/apache2.0/ + +or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ -import imp import six import collections +import yaml -custom_yaml = imp.load_module("custom_yaml", *imp.find_module("yaml")) TAG_MAP = "tag:yaml.org,2002:map" TAG_STRING = "tag:yaml.org,2002:str" UNCONVERTED_SUFFIXES = ["Ref", "Condition"] +class CustomDumper(yaml.Dumper): + """ + Indent block sequences from parent using more common style + (" - entry" vs "- entry"). + Causes fewer problems with validation and tools. + """ + + def increase_indent(self,flow=False, indentless=False): + return super(CustomDumper,self).increase_indent(flow, False) + + +class CustomLoader(yaml.Loader): + pass + + def multi_constructor(loader, tag_suffix, node): """ Deal with !Ref style function format @@ -30,11 +44,11 @@ if tag_suffix == "Fn::GetAtt": constructor = construct_getatt - elif isinstance(node, custom_yaml.ScalarNode): + elif isinstance(node, yaml.ScalarNode): constructor = loader.construct_scalar - elif isinstance(node, custom_yaml.SequenceNode): + elif isinstance(node, yaml.SequenceNode): constructor = loader.construct_sequence - elif isinstance(node, custom_yaml.MappingNode): + elif isinstance(node, yaml.MappingNode): constructor = loader.construct_mapping else: raise "Bad tag: !{}".format(tag_suffix) @@ -116,7 +130,7 @@ data = data[key] if tag == "!GetAtt": - data = "{}.{}".format(data[0], data[1]) + data = ".".join(data) if isinstance(data, dict): return dumper.represent_mapping(tag, data, flow_style=False) @@ -126,8 +140,8 @@ return dumper.represent_scalar(tag, data) # Customise our yaml -custom_yaml.add_representer(six.text_type, lambda dumper, value: dumper.represent_scalar(TAG_STRING, value)) -custom_yaml.add_constructor(TAG_MAP, construct_mapping) -custom_yaml.add_multi_constructor("!", multi_constructor) -custom_yaml.add_representer(collections.OrderedDict, representer) -custom_yaml.add_representer(dict, representer) +CustomDumper.add_representer(six.text_type, lambda dumper, value: dumper.represent_scalar(TAG_STRING, value)) +CustomLoader.add_constructor(TAG_MAP, construct_mapping) +CustomLoader.add_multi_constructor("!", multi_constructor) +CustomDumper.add_representer(collections.OrderedDict, representer) +CustomDumper.add_representer(dict, representer) diff --git a/setup.py b/setup.py index 52de37c..e6acc7c 100644 --- a/setup.py +++ b/setup.py @@ -24,6 +24,7 @@ setup( "six", ], zip_safe=False, + test_suite="tests", entry_points={ "console_scripts": ["cfn-flip=cfn_flip.main:main"], },
awslabs/aws-cfn-template-flip
68a80c5903ecae27703165db35f8693aed5fff85
diff --git a/tests/__init__.py b/tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/test_flip.py b/tests/test_flip.py index 17d7236..526e9a4 100644 --- a/tests/test_flip.py +++ b/tests/test_flip.py @@ -1,17 +1,18 @@ -""" -Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at - - http://aws.amazon.com/apache2.0/ - -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +""" +Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at + + http://aws.amazon.com/apache2.0/ + +or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import cfn_flip +from cfn_flip.custom_yaml import CustomLoader import json import unittest -from cfn_flip.custom_yaml import custom_yaml +import yaml class CfnFlipTestCase(unittest.TestCase): @@ -33,10 +34,10 @@ self.clean_yaml = f.read() self.parsed_json = json.loads(self.input_json) - self.parsed_yaml = custom_yaml.load(self.input_yaml) + self.parsed_yaml = yaml.load(self.input_yaml, Loader=CustomLoader) self.parsed_clean_json = json.loads(self.clean_json) - self.parsed_clean_yaml = custom_yaml.load(self.clean_yaml) + self.parsed_clean_yaml = yaml.load(self.clean_yaml, Loader=CustomLoader) self.bad_data = "<!DOCTYPE html>\n\n<html>\n\tThis isn't right!\n</html>" @@ -76,7 +77,7 @@ with self.assertRaises(ValueError): json.loads(actual) - parsed_actual = custom_yaml.load(actual) + parsed_actual = yaml.load(actual, Loader=CustomLoader) self.assertDictEqual(parsed_actual, self.parsed_yaml) @@ -111,7 +112,7 @@ with self.assertRaises(ValueError): json.loads(actual) - parsed_actual = custom_yaml.load(actual) + parsed_actual = yaml.load(actual, Loader=CustomLoader) self.assertDictEqual(parsed_actual, self.parsed_yaml) @@ -139,7 +140,7 @@ with self.assertRaises(ValueError): json.loads(actual) - parsed_actual = custom_yaml.load(actual) + parsed_actual = yaml.load(actual, Loader=CustomLoader) self.assertDictEqual(parsed_actual, self.parsed_clean_yaml) @@ -183,12 +184,43 @@ } """ - + expected = "!GetAtt 'Left.Right'\n" self.assertEqual(cfn_flip.to_yaml(data, clean_up=False), expected) self.assertEqual(cfn_flip.to_yaml(data, clean_up=True), expected) + def test_flip_to_yaml_with_multi_level_getatt(self): + """ + Test that we correctly convert multi-level Fn::GetAtt + from JSON to YAML format + """ + + data = """ + { + "Fn::GetAtt": ["First", "Second", "Third"] + } + """ + + expected = "!GetAtt 'First.Second.Third'\n" + + self.assertEqual(cfn_flip.to_yaml(data), expected) + + def test_flip_to_json_with_multi_level_getatt(self): + """ + Test that we correctly convert multi-level Fn::GetAtt + from YAML to JSON format + """ + + data = "!GetAtt 'First.Second.Third'\n" + + expected = { + "Fn::GetAtt": ["First", "Second", "Third"] + } + + actual = cfn_flip.to_json(data, clean_up=True) + self.assertEqual(expected, json.loads(actual)) + def test_getatt_from_yaml(self): """ Test that we correctly convert the short form of GetAtt diff --git a/tests/test_joins.py b/tests/test_joins.py index 60b28e6..b158f5f 100644 --- a/tests/test_joins.py +++ b/tests/test_joins.py @@ -1,11 +1,11 @@ -""" -Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at - - http://aws.amazon.com/apache2.0/ - -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +""" +Copyright 2016-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at + + http://aws.amazon.com/apache2.0/ + +or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import cfn_flip @@ -77,6 +77,26 @@ self.assertEqual(expected, actual) + def test_multi_level_get_att(self): + """ + Base64 etc should be replaced by parameters to Sub + """ + + source = { + "Fn::Join": [ + " ", + ["The", {"Fn::GetAtt": ["First", "Second", "Third"]}, "is", "a", "lie"], + ], + } + + expected = { + "Fn::Sub": "The ${First.Second.Third} is a lie", + } + + actual = cfn_flip.clean(source) + + self.assertEqual(expected, actual) + def test_others(self): """ GetAtt should be replaced by ${Thing.Property} diff --git a/tests/test_yaml_patching.py b/tests/test_yaml_patching.py index 5079357..db23442 100644 --- a/tests/test_yaml_patching.py +++ b/tests/test_yaml_patching.py @@ -9,6 +9,7 @@ or in the "license" file accompanying this file. This file is distributed on an """ import cfn_flip +import collections import json import unittest import yaml @@ -19,7 +20,7 @@ class YamlPatchTestCase(unittest.TestCase): Check that we don't patch yaml for everybody """ - def test_yaml_ordered_dict(self): + def test_yaml_no_ordered_dict(self): """ cfn-flip patches yaml to use OrderedDict by default Check that we don't do this for folks who import cfn_flip and yaml @@ -29,3 +30,14 @@ data = yaml.load(yaml_string) self.assertEqual(type(data), dict) + + def test_yaml_no_ordered_dict(self): + """ + cfn-flip patches yaml to use OrderedDict by default + Check that we do this for normal cfn_flip use cases + """ + + yaml_string = "key: value" + data = yaml.load(yaml_string, Loader=cfn_flip.CustomLoader) + + self.assertEqual(type(data), collections.OrderedDict)
yaml ordereddict fix breaks when yaml is a .egg The fix for #14 doesn't work in all cases. When PyYAML is installed in a .egg file, loading yaml as custom_yaml fails. For troposphere, [here](https://travis-ci.org/cloudtools/troposphere/jobs/256102858) is an example of the tests failing. The issue is that the imp module does not know how to handle hooks to zipimport. I have yet to find a good alternative solution to this [code](https://github.com/awslabs/aws-cfn-template-flip/blob/master/cfn_flip/custom_yaml.py#L15).
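One way around the `imp`-based copy of the yaml module is to subclass the PyYAML loader and register the `OrderedDict` constructor on the subclass only, which is the direction the `CustomLoader` import in the test patch above points to. A minimal sketch under that assumption (the constructor body is illustrative, not necessarily cfn_flip's exact implementation):

```python
import collections
import yaml


class CustomLoader(yaml.SafeLoader):
    """A loader subclass: constructors registered here do not leak into the
    default yaml.Loader, and no module duplication via imp is needed."""


def _construct_ordered_mapping(loader, node):
    # Build mappings as OrderedDict instead of plain dict.
    loader.flatten_mapping(node)
    return collections.OrderedDict(loader.construct_pairs(node))


CustomLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _construct_ordered_mapping
)

data = yaml.load("key: value", Loader=CustomLoader)
assert isinstance(data, collections.OrderedDict)
# plain yaml.load(...) elsewhere still returns a regular dict
```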
0.0
68a80c5903ecae27703165db35f8693aed5fff85
[ "tests/test_flip.py::CfnFlipTestCase::test_flip_to_clean_json", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_clean_yaml", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_json", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_json_with_condition", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_json_with_datetimes", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_json_with_multi_level_getatt", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_yaml", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_yaml_with_clean_getatt", "tests/test_flip.py::CfnFlipTestCase::test_flip_to_yaml_with_multi_level_getatt", "tests/test_flip.py::CfnFlipTestCase::test_flip_with_bad_data", "tests/test_flip.py::CfnFlipTestCase::test_getatt_from_yaml", "tests/test_flip.py::CfnFlipTestCase::test_to_json_with_json", "tests/test_flip.py::CfnFlipTestCase::test_to_json_with_yaml", "tests/test_flip.py::CfnFlipTestCase::test_to_yaml_with_json", "tests/test_flip.py::CfnFlipTestCase::test_to_yaml_with_yaml", "tests/test_joins.py::ReplaceJoinTestCase::test_basic_case", "tests/test_joins.py::ReplaceJoinTestCase::test_get_att", "tests/test_joins.py::ReplaceJoinTestCase::test_in_array", "tests/test_joins.py::ReplaceJoinTestCase::test_multi_level_get_att", "tests/test_joins.py::ReplaceJoinTestCase::test_others", "tests/test_joins.py::ReplaceJoinTestCase::test_ref", "tests/test_yaml_patching.py::YamlPatchTestCase::test_yaml_no_ordered_dict" ]
[]
{ "failed_lite_validators": [ "has_hyperlinks", "has_issue_reference", "has_many_modified_files", "has_many_hunks", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2017-07-23 18:44:01+00:00
apache-2.0
1,287
awslabs__aws-cfn-template-flip-43
diff --git a/cfn_flip/yaml_dumper.py b/cfn_flip/yaml_dumper.py index 85b287d..2a3a764 100644 --- a/cfn_flip/yaml_dumper.py +++ b/cfn_flip/yaml_dumper.py @@ -15,7 +15,9 @@ See the License for the specific language governing permissions and limitations from cfn_clean.yaml_dumper import CleanCfnYamlDumper from cfn_tools.odict import ODict from cfn_tools.yaml_dumper import CfnYamlDumper +import six +TAG_STR = "tag:yaml.org,2002:str" TAG_MAP = "tag:yaml.org,2002:map" CONVERTED_SUFFIXES = ["Ref", "Condition"] @@ -46,6 +48,13 @@ class LongCleanDumper(CleanCfnYamlDumper): """ +def string_representer(dumper, value): + if value.startswith("0"): + return dumper.represent_scalar(TAG_STR, value, style="'") + + return dumper.represent_scalar(TAG_STR, value) + + def fn_representer(dumper, fn_name, value): tag = "!{}".format(fn_name) @@ -82,6 +91,7 @@ def map_representer(dumper, value): # Customise our dumpers Dumper.add_representer(ODict, map_representer) +Dumper.add_representer(six.text_type, string_representer) CleanDumper.add_representer(ODict, map_representer)
awslabs/aws-cfn-template-flip
168476fed202b08221f163de22adb9cb859d937e
diff --git a/tests/test_flip.py b/tests/test_flip.py index c479a20..5ac0cee 100644 --- a/tests/test_flip.py +++ b/tests/test_flip.py @@ -502,5 +502,39 @@ def test_get_dumper(): When invoking get_dumper use clean_up & long_form :return: LongCleanDumper """ + resp = cfn_flip.get_dumper(clean_up=True, long_form=True) assert resp == cfn_flip.yaml_dumper.LongCleanDumper + + +def test_quoted_digits(): + """ + Any value that is composed entirely of digits + should be quoted for safety. + CloudFormation is happy for numbers to appear as strings. + But the opposite (e.g. account numbers as numbers) can cause issues + See https://github.com/awslabs/aws-cfn-template-flip/issues/41 + """ + + value = dump_json(ODict(( + ("int", 123456), + ("float", 123.456), + ("oct", "0123456"), + ("bad-oct", "012345678"), + ("safe-oct", "0o123456"), + ("string", "abcdef"), + ))) + + expected = "\n".join(( + "int: 123456", + "float: 123.456", + "oct: '0123456'", + "bad-oct: '012345678'", + "safe-oct: '0o123456'", + "string: abcdef", + "" + )) + + actual = cfn_flip.to_yaml(value) + + assert actual == expected
Inconsistent conversion of strings from json to yaml I am converting a document from JSON to YAML as part of a CloudFormation template, and am noticing an odd error where some IDs that are marked as strings in the JSON sometimes come out quoted as strings in the YAML and other times do not. Here's the JSON snippet I'm working with right now, which contains the mappings for some of the generic Elastic Load Balancer IDs for AWS:

```
"Mappings": {
    "Regions": {
        "us-east-1": {
            "ELBID": "127311923021",
            "Name": "ue1"
        },
        "us-east-2": {
            "ELBID": "033677994240",
            "Name": "ue2"
        },
        "us-west-1": {
            "ELBID": "027434742980",
            "Name": "uw1"
        },
        "us-west-2": {
            "ELBID": "797873946194",
            "Name": "uw2"
        }
    }
}
```

And this is the resulting YAML I'm getting after calling to_yaml:

```
Mappings:
  Regions:
    us-east-1:
      ELBID: '127311923021'
      Name: ue1
    us-east-2:
      ELBID: 033677994240
      Name: ue2
    us-west-1:
      ELBID: 027434742980
      Name: uw1
    us-west-2:
      ELBID: '797873946194'
      Name: uw2
```

Strangely enough, any ID beginning with 0 loses its quotes, while the ones beginning with other digits keep them. I'm not sure what the expected behavior should be in this case (either all quoted or none), but having it half and half is inconsistent, and I believe it is a bug. Currently I'm getting errors using this YAML with sceptre/CloudFormation because some of the Elastic Load Balancer IDs are not treated as strings.
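The inconsistency comes from the YAML resolver: digit strings that would be re-read as numbers get quoted automatically, while most leading-zero strings do not resolve as numbers and are emitted plain. The patch above fixes this with a custom string representer; here is the same idea as a standalone sketch (registering on the global `yaml` dumper is for illustration only):

```python
# Force single quotes around strings starting with "0" so consumers cannot
# reinterpret them (e.g. as octal account numbers); other strings pass through.
import yaml

TAG_STR = "tag:yaml.org,2002:str"


def string_representer(dumper, value):
    if value.startswith("0"):
        return dumper.represent_scalar(TAG_STR, value, style="'")
    return dumper.represent_scalar(TAG_STR, value)


yaml.add_representer(str, string_representer)  # illustration: patches globally

print(yaml.dump({"ELBID": "033677994240"}))  # -> ELBID: '033677994240'
```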
0.0
168476fed202b08221f163de22adb9cb859d937e
[ "tests/test_flip.py::test_quoted_digits" ]
[ "tests/test_flip.py::test_flip_to_json_with_datetimes", "tests/test_flip.py::test_flip_to_yaml_with_clean_getatt", "tests/test_flip.py::test_flip_to_yaml_with_multi_level_getatt", "tests/test_flip.py::test_flip_to_yaml_with_dotted_getatt", "tests/test_flip.py::test_flip_to_json_with_multi_level_getatt", "tests/test_flip.py::test_getatt_from_yaml", "tests/test_flip.py::test_flip_to_json_with_condition", "tests/test_flip.py::test_flip_to_yaml_with_newlines", "tests/test_flip.py::test_clean_flip_to_yaml_with_newlines", "tests/test_flip.py::test_unconverted_types", "tests/test_flip.py::test_get_dumper" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2018-03-12 14:21:52+00:00
apache-2.0
1,288
awslabs__aws-cfn-template-flip-64
diff --git a/cfn_clean/__init__.py b/cfn_clean/__init__.py index 782002b..3fdb6d4 100644 --- a/cfn_clean/__init__.py +++ b/cfn_clean/__init__.py @@ -53,6 +53,13 @@ def convert_join(value): new_parts.append("${{{}}}".format(".".join(params))) else: for key, val in args.items(): + # we want to bail if a conditional can evaluate to AWS::NoValue + if isinstance(val, dict): + if "Fn::If" in val and "AWS::NoValue" in val["Fn::If"]: + return { + "Fn::Join": value, + } + if val == part: param_name = key break
awslabs/aws-cfn-template-flip
4b0f0936b5895db1ab74c124db140624deb6a7db
diff --git a/tests/test_clean.py b/tests/test_clean.py index 24e14a0..6fb9000 100644 --- a/tests/test_clean.py +++ b/tests/test_clean.py @@ -251,6 +251,84 @@ def test_deep_nested_join(): assert expected == actual +def test_gh_63_no_value(): + """ + Test that Joins with conditionals that can evaluate to AWS::NoValue + are not converted to Fn::Sub + """ + + source = { + "Fn::Join": [ + ",", + [ + { + "Fn::If": [ + "Condition1", + "True1", + "AWS::NoValue" + ] + }, + { + "Fn::If": [ + "Condition2", + "True2", + "False2" + ] + } + ] + ] + } + + assert source == clean(source) + + +def test_gh_63_value(): + """ + Test that Joins with conditionals that cannot evaluate to AWS::NoValue + are converted to Fn::Sub + """ + + source = { + "Fn::Join": [ + ",", + [ + { + "Fn::If": [ + "Condition1", + "True1", + "False1" + ] + }, + { + "Fn::If": [ + "Condition2", + "True2", + "False2" + ] + } + ] + ] + } + + expected = ODict(( + ("Fn::Sub", [ + "${Param1},${Param2}", + ODict(( + ("Param1", ODict(( + ("Fn::If", ["Condition1", "True1", "False1"]), + ))), + ("Param2", ODict(( + ("Fn::If", ["Condition2", "True2", "False2"]), + ))), + )), + ]), + )) + + actual = clean(source) + + assert actual == expected + + def test_misused_join(): """ Test that we don't break in the case that there is
using clean produces invalid template We are using `new_template = to_yaml(to_json(orig_template))` to clean up the YAML we produce with Jinja2. With `clean_up=False` we see:

```
PublicSubnetIds:
  Description: The public subnetids
  Condition: HasPublicSubnet1AZ1
  Value:
    Fn::Join:
      - ','
      - - Fn::If:
            - HasPublicSubnet1AZ1
            - !Ref PublicSubnet1AZ1
            - !Ref AWS::NoValue
        - Fn::If:
            - HasPublicSubnet1AZ2
            - !Ref PublicSubnet1AZ2
            - !Ref AWS::NoValue
        - Fn::If:
            - HasPublicSubnet1AZ3
            - !Ref PublicSubnet1AZ3
            - !Ref AWS::NoValue
  Export:
    Name: !Sub vpc-${Environment}-PublicSubnetIds
```

but with `clean_up=True` we get:

```
PublicSubnetIds:
  Description: The public subnetids
  Condition: HasPublicSubnet1AZ1
  Value: !Sub
    - ${Param1},${Param2},${Param3}
    - Param1: !If
        - HasPublicSubnet1AZ1
        - !Ref 'PublicSubnet1AZ1'
        - !Ref 'AWS::NoValue'
      Param2: !If
        - HasPublicSubnet1AZ2
        - !Ref 'PublicSubnet1AZ2'
        - !Ref 'AWS::NoValue'
      Param3: !If
        - HasPublicSubnet1AZ3
        - !Ref 'PublicSubnet1AZ3'
        - !Ref 'AWS::NoValue'
  Export:
    Name: !Sub 'vpc-${Environment}-PublicSubnetIds'
```

which CloudFormation complains about. Any suggestions?
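The root cause is that `Fn::Sub` parameters must always resolve to a value, so a conditional that can evaluate to `AWS::NoValue` cannot safely become a Sub parameter. A standalone sketch of the guard the patch above adds (function name is illustrative):

```python
# Before rewriting Fn::Join as Fn::Sub, bail out if any joined part is an
# Fn::If that can evaluate to AWS::NoValue.
def join_is_sub_safe(parts):
    for part in parts:
        if isinstance(part, dict):
            branches = part.get("Fn::If", [])
            if "AWS::NoValue" in branches:
                return False
    return True


assert not join_is_sub_safe([{"Fn::If": ["HasAZ1", "subnet-1", "AWS::NoValue"]}])
assert join_is_sub_safe([{"Fn::If": ["HasAZ1", "subnet-1", "subnet-2"]}])
```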
0.0
4b0f0936b5895db1ab74c124db140624deb6a7db
[ "tests/test_clean.py::test_gh_63_no_value" ]
[ "tests/test_clean.py::test_basic_case", "tests/test_clean.py::test_ref", "tests/test_clean.py::test_get_att", "tests/test_clean.py::test_multi_level_get_att", "tests/test_clean.py::test_others", "tests/test_clean.py::test_in_array", "tests/test_clean.py::test_literals", "tests/test_clean.py::test_nested_join", "tests/test_clean.py::test_deep_nested_join", "tests/test_clean.py::test_gh_63_value", "tests/test_clean.py::test_misused_join", "tests/test_clean.py::test_yaml_dumper", "tests/test_clean.py::test_reused_sub_params" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2018-12-17 10:37:39+00:00
apache-2.0
1,289
awslabs__aws-embedded-metrics-python-44
diff --git a/aws_embedded_metrics/sinks/tcp_client.py b/aws_embedded_metrics/sinks/tcp_client.py index a1e3a93..5ff737d 100644 --- a/aws_embedded_metrics/sinks/tcp_client.py +++ b/aws_embedded_metrics/sinks/tcp_client.py @@ -15,6 +15,7 @@ from aws_embedded_metrics.sinks import SocketClient import logging import socket import threading +import errno from urllib.parse import ParseResult log = logging.getLogger(__name__) @@ -25,24 +26,44 @@ log = logging.getLogger(__name__) class TcpClient(SocketClient): def __init__(self, endpoint: ParseResult): self._endpoint = endpoint - self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - self._write_lock = threading.Lock() + # using reentrant lock so that we can retry through recursion + self._write_lock = threading.RLock() + self._connect_lock = threading.RLock() self._should_connect = True def connect(self) -> "TcpClient": - try: - self._sock.connect((self._endpoint.hostname, self._endpoint.port)) - self._should_connect = False - except socket.timeout as e: - log.error("Socket timeout durring connect %s" % (e,)) - self._should_connect = True - except Exception as e: - log.error("Failed to connect to the socket. %s" % (e,)) - self._should_connect = True - return self - - def send_message(self, message: bytes) -> None: - if self._sock._closed or self._should_connect: # type: ignore + with self._connect_lock: + try: + self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self._sock.connect((self._endpoint.hostname, self._endpoint.port)) + self._should_connect = False + except socket.timeout as e: + log.error("Socket timeout durring connect %s" % (e,)) + except OSError as e: + if e.errno == errno.EISCONN: + log.debug("Socket is already connected.") + self._should_connect = False + else: + log.error("Failed to connect to the socket. %s" % (e,)) + self._should_connect = True + except Exception as e: + log.error("Failed to connect to the socket. %s" % (e,)) + self._should_connect = True + return self + + # TODO: once #21 lands, we should increase the max retries + # the reason this is only 1 is to allow for a single + # reconnect attempt in case the agent disconnects + # additional retries and backoff would impose back + # pressure on the caller that may not be accounted + # for. Before we do that, we need to run the I/O + # operations on a background thread.s + def send_message(self, message: bytes, retry: int = 1) -> None: + if retry < 0: + log.error("Max retries exhausted, dropping message") + return + + if self._sock is None or self._sock._closed or self._should_connect: # type: ignore self.connect() with self._write_lock: @@ -52,9 +73,12 @@ class TcpClient(SocketClient): except socket.timeout as e: log.error("Socket timeout durring send %s" % (e,)) self.connect() + self.send_message(message, retry - 1) except socket.error as e: log.error("Failed to write metrics to the socket due to socket.error. %s" % (e,)) self.connect() + self.send_message(message, retry - 1) except Exception as e: log.error("Failed to write metrics to the socket due to exception. %s" % (e,)) self.connect() + self.send_message(message, retry - 1) diff --git a/setup.py b/setup.py index f6490c8..8f8a4a7 100644 --- a/setup.py +++ b/setup.py @@ -5,7 +5,7 @@ with open("README.md", "r") as fh: setup( name="aws-embedded-metrics", - version="1.0.3", + version="1.0.4", author="Amazon Web Services", author_email="[email protected]", description="AWS Embedded Metrics Package",
awslabs/aws-embedded-metrics-python
ac27573d8779406dcbab4455fe95d09dd31b7659
diff --git a/tests/sinks/test_tcp_client.py b/tests/sinks/test_tcp_client.py new file mode 100644 index 0000000..b3f7621 --- /dev/null +++ b/tests/sinks/test_tcp_client.py @@ -0,0 +1,126 @@ +from aws_embedded_metrics.sinks.tcp_client import TcpClient +from urllib.parse import urlparse +import socket +import threading +import time +import logging + +log = logging.getLogger(__name__) + +test_host = '0.0.0.0' +test_port = 9999 +endpoint = urlparse("tcp://0.0.0.0:9999") +message = "_16-Byte-String_".encode('utf-8') + + +def test_can_send_message(): + # arrange + agent = InProcessAgent().start() + client = TcpClient(endpoint) + + # act + client.connect() + client.send_message(message) + + # assert + time.sleep(1) + messages = agent.messages + assert 1 == len(messages) + assert message == messages[0] + agent.shutdown() + + +def test_can_connect_concurrently_from_threads(): + # arrange + concurrency = 10 + agent = InProcessAgent().start() + client = TcpClient(endpoint) + barrier = threading.Barrier(concurrency, timeout=5) + + def run(): + barrier.wait() + client.connect() + client.send_message(message) + + def start_thread(): + thread = threading.Thread(target=run, args=()) + thread.daemon = True + thread.start() + + # act + for _ in range(concurrency): + start_thread() + + # assert + time.sleep(1) + messages = agent.messages + assert concurrency == len(messages) + for i in range(concurrency): + assert message == messages[i] + agent.shutdown() + + +def test_can_recover_from_agent_shutdown(): + # arrange + agent = InProcessAgent().start() + client = TcpClient(endpoint) + + # act + client.connect() + client.send_message(message) + agent.shutdown() + time.sleep(5) + client.send_message(message) + agent = InProcessAgent().start() + client.send_message(message) + + # assert + time.sleep(1) + messages = agent.messages + assert 1 == len(messages) + assert message == messages[0] + agent.shutdown() + + +class InProcessAgent(object): + """ Agent that runs on a background thread and collects + messages in memory. + """ + + def __init__(self): + self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + self.sock.bind((test_host, test_port)) + self.sock.listen() + self.is_shutdown = False + self.messages = [] + + def start(self) -> "InProcessAgent": + thread = threading.Thread(target=self.run, args=()) + thread.daemon = True + thread.start() + return self + + def run(self): + while not self.is_shutdown: + connection, client_address = self.sock.accept() + self.connection = connection + + try: + while not self.is_shutdown: + data = self.connection.recv(16) + if data: + self.messages.append(data) + else: + break + finally: + log.error("Exited the recv loop") + + def shutdown(self): + try: + self.is_shutdown = True + self.connection.shutdown(socket.SHUT_RDWR) + self.connection.close() + self.sock.close() + except Exception as e: + log.error("Failed to shutdown %s" % (e,))
TCP Socket errors - Transport endpoint is already connected Hi there! We're using this module to log metrics for a high-frequency ETL service running on ECS, where we have been noticing TCP socket `106` errors. A CloudWatch Agent container is set up as a sidecar alongside the application server in the ECS Task Definition and is responsible for publishing the metrics. I'm trying to understand the root cause of these errors and how to avoid them. Any help is appreciated. Thanks!

_Exact error:_

```
[ERROR][tcp_client] Failed to connect to the socket. [Errno 106] Transport endpoint is already connected
```
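Errno 106 is `errno.EISCONN`: `connect()` was called on a socket that is already connected, which can happen when a reconnect path runs concurrently or retries. The patch above treats that case as success; a minimal sketch of the same idea (helper name is illustrative):

```python
# If connect() raises EISCONN, the socket is already usable, so return
# instead of logging a failure; any other OSError is a real error.
import errno
import socket


def ensure_connected(sock: socket.socket, host: str, port: int) -> None:
    try:
        sock.connect((host, port))
    except OSError as e:
        if e.errno == errno.EISCONN:
            return  # already connected: nothing to do
        raise  # genuine connection failure
```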
0.0
ac27573d8779406dcbab4455fe95d09dd31b7659
[ "tests/sinks/test_tcp_client.py::test_can_recover_from_agent_shutdown" ]
[ "tests/sinks/test_tcp_client.py::test_can_send_message", "tests/sinks/test_tcp_client.py::test_can_connect_concurrently_from_threads" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2020-07-12 03:00:52+00:00
apache-2.0
1,290
aymanizz__smalld-click-12
diff --git a/README.md b/README.md index 0eef38e..54ca7ed 100644 --- a/README.md +++ b/README.md @@ -97,7 +97,10 @@ prompts that are hidden, using `hide_input=True`, are sent to the user DM, and c Note that, echo and prompt will send a message in the same channel as the message that triggered the command invocation. -Calls to echo are buffered, the buffer is flushed either when there is a prompt or when the command finishes execution. +Calls to echo are buffered. When the buffer is flushed, its content is sent in 2K chunks (limit set by discord.) +The buffer can be flushed automatically when there is a prompt, or the command finishes execution, or the content +in the buffer exceeds the 2K limit. + It's also possible to flush the buffer by passing `flush=True` to `click.echo` call. ## Acknowledgements diff --git a/smalld_click/conversation.py b/smalld_click/conversation.py index fcacfb1..a094244 100644 --- a/smalld_click/conversation.py +++ b/smalld_click/conversation.py @@ -6,6 +6,9 @@ click_prompt = click.prompt click_echo = click.echo +MESSAGE_CHARACTERS_LIMIT = 2000 + + class Conversation: def __init__(self, runner, message, timeout): self.runner = runner @@ -44,11 +47,11 @@ class Conversation: def flush(self): content = self.echo_buffer.getvalue() self.echo_buffer = io.StringIO() - if not content.strip(): - return smalld, channel_id = self.runner.smalld, self.channel_id - smalld.post(f"/channels/{channel_id}/messages", {"content": content}) + for message in chunked(content, MESSAGE_CHARACTERS_LIMIT): + if message.strip(): + smalld.post(f"/channels/{channel_id}/messages", {"content": message}) def wait_for_message(self): handle = self.runner.add_listener(self.user_id, self.channel_id) @@ -72,3 +75,8 @@ class Conversation: def get_conversation(): return click.get_current_context().find_object(Conversation) + + +def chunked(it, n): + for i in range(0, len(it), n): + yield it[i : i + n]
aymanizz/smalld-click
63373fe3096a3b2556f57089de360f673be13deb
diff --git a/test/test_smalld_click.py b/test/test_smalld_click.py index ac4461f..bdecef9 100644 --- a/test/test_smalld_click.py +++ b/test/test_smalld_click.py @@ -251,3 +251,21 @@ def test_patches_click_functions_in_context_only(smalld): assert click.echo is click_echo assert click.prompt is click_prompt + + +def test_sends_chunked_messages_not_exceeding_message_length_limit(subject, smalld): + @click.command() + def command(): + click.echo("a" * 3000) + + subject.cli = command + + subject.on_message(make_message("command")) + + assert smalld.post.call_count == 2 + smalld.post.assert_has_calls( + [ + call(POST_MESSAGE_ROUTE, {"content": "a" * 2000}), + call(POST_MESSAGE_ROUTE, {"content": "a" * 1000 + "\n"}), + ] + )
Send messages in 2K-character chunks to avoid hitting the message length limit There is a limit of 2,000 characters for messages sent through Discord.
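The fix in the patch above slices the flushed buffer into pieces of at most 2,000 characters before posting; the same helper as a standalone sketch:

```python
# Yield the text in fixed-size slices so each Discord message stays under
# the limit; the final chunk may be shorter.
MESSAGE_CHARACTERS_LIMIT = 2000


def chunked(text, n=MESSAGE_CHARACTERS_LIMIT):
    for i in range(0, len(text), n):
        yield text[i : i + n]


assert [len(c) for c in chunked("a" * 3000)] == [2000, 1000]
```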
0.0
63373fe3096a3b2556f57089de360f673be13deb
[ "test/test_smalld_click.py::test_sends_chunked_messages_not_exceeding_message_length_limit" ]
[ "test/test_smalld_click.py::test_exposes_correct_context", "test/test_smalld_click.py::test_parses_command", "test/test_smalld_click.py::test_handles_echo", "test/test_smalld_click.py::test_buffers_calls_to_echo", "test/test_smalld_click.py::test_should_not_send_empty_messages", "test/test_smalld_click.py::test_handles_prompt", "test/test_smalld_click.py::test_sends_prompts_without_buffering", "test/test_smalld_click.py::test_drops_conversation_when_timed_out", "test/test_smalld_click.py::test_prompts_in_DM_for_hidden_prompts", "test/test_smalld_click.py::test_only_responds_to_hidden_prompts_answers_in_DM", "test/test_smalld_click.py::test_continues_conversation_in_DM_after_hidden_prompt", "test/test_smalld_click.py::test_patches_click_functions_in_context_only" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2020-07-14 10:35:52+00:00
mit
1,291
aymanizz__smalld-click-24
diff --git a/smalld_click/smalld_click.py b/smalld_click/smalld_click.py index dbf3fa6..cbc56ac 100644 --- a/smalld_click/smalld_click.py +++ b/smalld_click/smalld_click.py @@ -26,10 +26,14 @@ class SmallDCliRunner: create_message=None, executor=None, ): + self.prefix = prefix.strip() + self.name = name.strip() if name is not None else cli.name or "" + + if not self.prefix and not self.name: + raise ValueError("either prefix or name must be non empty") + self.smalld = smalld self.cli = cli - self.prefix = prefix - self.name = name if name is not None else cli.name or "" self.timeout = timeout self.create_message = create_message if create_message else plain_message self.executor = executor if executor else ThreadPoolExecutor() @@ -45,18 +49,18 @@ class SmallDCliRunner: self.executor.__exit__(*args) def on_message(self, msg): - content = msg["content"] + content = msg.get("content") or "" user_id = msg["author"]["id"] channel_id = msg["channel_id"] handle = self.pending.pop((user_id, channel_id), None) if handle is not None: handle.complete_with(msg) - return + return None args = parse_command(self.prefix, self.name, content) if args is None: - return + return None return self.executor.submit(self.handle_command, msg, args) @@ -90,14 +94,16 @@ def plain_message(msg): def parse_command(prefix, name, message): if not message.startswith(prefix): - return + return None + cmd = message[len(prefix) :].lstrip() if not name: return cmd - cmd_name, *rest = cmd.split(maxsplit=1) - if cmd_name != name: - return - return "".join(rest) + elif not cmd: + return None + + cmd_name, *args = cmd.split(maxsplit=1) + return "".join(args) if cmd_name == name else None def split_args(command):
aymanizz/smalld-click
332eeb88a7e6717c89e2509bb4a1d428c1006629
diff --git a/test/test_smalld_click.py b/test/test_smalld_click.py index 83f512a..34121ac 100644 --- a/test/test_smalld_click.py +++ b/test/test_smalld_click.py @@ -55,6 +55,15 @@ def make_subject(request, smalld): return factory +def test_raises_error_for_empty_prefix_and_name(make_subject): + @click.command() + def command(): + pass + + with pytest.raises(ValueError) as exc_info: + make_subject(command, prefix="", name="") + + def test_exposes_correct_context(make_subject): conversation = None @@ -110,10 +119,10 @@ def test_parses_multicommands(make_subject): create_command(1) cli_collection = click.CommandCollection(sources=[cli]) - subject = make_subject(cli_collection) + subject = make_subject(cli_collection, prefix="$") - f1 = subject.on_message(make_message("cmd0 --opt")) - f2 = subject.on_message(make_message("cmd1 --opt")) + f1 = subject.on_message(make_message("$cmd0 --opt")) + f2 = subject.on_message(make_message("$cmd1 --opt")) assert_completes([f1, f2]) assert all(slots) @@ -121,38 +130,42 @@ def test_parses_multicommands(make_subject): @pytest.mark.parametrize( "prefix, name, message, expected", [ - ("", "", "command", True), - ("++", "", "++command", True), - ("++", "invoke", "++invoke command", True), - ("++", "", "++ command", True), - ("++", "", "++--opt command", True), - ("", "invoke", "invokecommand", False), - ("", "invoke", "invoke--opt command", False), - ("", "invoke", "invoke command", True), + ("++", "", "", False), + ("++", "", "++", True), + ("++", "", "++arg", True), + ("++", "", "++ arg", True), + ("++", "", "++--opt arg", True), + ("", "invoke", "", False), + ("", "invoke", "invoke", True), + ("", "invoke", "invoke arg", True), + ("", "invoke", "invokearg", False), + ("", "invoke", "invoke --opt", True), + ("", "invoke", "invoke--opt arg", False), + ("++", "invoke", "", False), + ("++", "invoke", "++", False), + ("++", "invoke", "++invoke", True), + ("++", "invoke", "++ invoke", True), + ("++", "invoke", "++invoke arg", True), + ("++", "invoke", None, False), ], ) def test_parses_name_and_prefix_correctly( make_subject, prefix, name, message, expected ): - cli_called = False - command_called = False + called = False - @click.group() + @click.command() + @click.argument("arg", required=False) @click.option("--opt", is_flag=True) - def cli(opt): - nonlocal cli_called - cli_called = True - - @cli.command() - def command(): - nonlocal command_called - command_called = True + def cli(arg, opt): + nonlocal called + called = True subject = make_subject(cli, prefix=prefix, name=name) f = subject.on_message(make_message(message)) - assert_completes(f) if expected else time.sleep(0.5) - assert cli_called is command_called is expected + assert_completes(f) if expected else time.sleep(0.2) + assert called is expected def test_handles_echo(make_subject, smalld):
Messages with empty content throw an error For bots running with a prefix and name, messages with only a prefix would also cause an error.
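The payload's `content` key can be missing or null, and a message that is only the prefix leaves nothing to dispatch. A simplified sketch of the defensive parsing (the real `parse_command` in the patch also handles a command name):

```python
# Normalise a missing or null "content" to "" before parsing, and return
# None when nothing remains after stripping the prefix.
def parse_command(msg, prefix):
    content = msg.get("content") or ""
    if not content.startswith(prefix):
        return None
    rest = content[len(prefix):].lstrip()
    return rest or None


assert parse_command({"author": {"id": "1"}}, "++") is None  # no content key
assert parse_command({"content": "++"}, "++") is None        # prefix only
assert parse_command({"content": "++ping"}, "++") == "ping"
```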
0.0
332eeb88a7e6717c89e2509bb4a1d428c1006629
[ "test/test_smalld_click.py::test_raises_error_for_empty_prefix_and_name", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[-invoke--False]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke-++-False]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke-None-False]" ]
[ "test/test_smalld_click.py::test_exposes_correct_context", "test/test_smalld_click.py::test_parses_command", "test/test_smalld_click.py::test_parses_multicommands", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++---False]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++--++-True]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++--++arg-True]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++--++", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++--++--opt", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[-invoke-invoke-True]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[-invoke-invoke", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[-invoke-invokearg-False]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[-invoke-invoke--opt", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke--False]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke-++invoke-True]", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke-++", "test/test_smalld_click.py::test_parses_name_and_prefix_correctly[++-invoke-++invoke", "test/test_smalld_click.py::test_handles_echo", "test/test_smalld_click.py::test_buffers_calls_to_echo", "test/test_smalld_click.py::test_should_not_send_empty_messages", "test/test_smalld_click.py::test_handles_prompt", "test/test_smalld_click.py::test_sends_prompts_without_buffering", "test/test_smalld_click.py::test_drops_conversation_when_timed_out", "test/test_smalld_click.py::test_prompts_in_DM_for_hidden_prompts", "test/test_smalld_click.py::test_only_responds_to_hidden_prompts_answers_in_DM", "test/test_smalld_click.py::test_continues_conversation_in_DM_after_hidden_prompt", "test/test_smalld_click.py::test_patches_click_functions_in_context_only", "test/test_smalld_click.py::test_sends_chunked_messages_not_exceeding_message_length_limit", "test/test_smalld_click.py::test_message_is_latest_message_payload" ]
{ "failed_lite_validators": [ "has_short_problem_statement" ], "has_test_patch": true, "is_lite": false }
2020-08-05 09:59:52+00:00
mit
1,292
barbosa__clorox-21
diff --git a/clorox/clorox.py b/clorox/clorox.py index b194817..0d0910e 100755 --- a/clorox/clorox.py +++ b/clorox/clorox.py @@ -44,7 +44,7 @@ class Clorox: def _process_file(self, file_path): self.all_files.append(file_path) - has_header, updated_content = self._has_xcode_header(file_path) + has_header, updated_content = self._find_xcode_header(file_path) if has_header: succeeded = True if not self.args.inspection: @@ -53,19 +53,14 @@ class Clorox: self.modified_files.append(file_path) self.printer.print_file(file_path, succeeded) - def _has_xcode_header(self, file_path): + def _find_xcode_header(self, file_path): with open(file_path, 'r') as file: - content = file.readlines() - header_height = Matcher.HEADER_TEMPLATE.count('\n') - for line in range(header_height, len(content)): - if content[line] == '\n': - header_height = header_height + 1 - else: - break - - header = ''.join(content[:header_height]) - updated_content = content[header_height:] - return Matcher(header).matches(), updated_content + content = ''.join(file.readlines()) + header = Matcher(content, trim_new_lines=self.args.trim).match() + if header is None: + return False, None + + return True, content.replace(header, '') def _remove_header(self, file_path, updated_content): try: @@ -78,10 +73,16 @@ class Clorox: def main(): parser = argparse.ArgumentParser() - parser.add_argument('-p', '--path', nargs='+', required=True) - parser.add_argument('-i', '--inspection', dest='inspection', action='store_true') - parser.add_argument('-q', '--quiet', dest='quiet', action='store_true') - parser.add_argument('-r', '--reporter', choices=['json']) + parser.add_argument('-p', '--path', nargs='+', required=True, + help='directory of file to run clorox') + parser.add_argument('-t', '--trim', dest='trim', action='store_true', + default=True, help='trim new lines around header') + parser.add_argument('-i', '--inspection', dest='inspection', + action='store_true', help='do not change files (only inspect them)') + parser.add_argument('-q', '--quiet', dest='quiet', action='store_true', + help='do not print any output') + parser.add_argument('-r', '--reporter', choices=['json'], + help='render output using a custom report') args = parser.parse_args() if not args.path: diff --git a/clorox/matcher.py b/clorox/matcher.py index 83ac73a..1f10fa3 100644 --- a/clorox/matcher.py +++ b/clorox/matcher.py @@ -5,7 +5,7 @@ import re class Matcher: - HEADER_TEMPLATE = (r"" + _DEFAULT_HEADER_TEMPLATE = (r"" "\/\/\n" "\/\/.*\..*\n" "\/\/.*\n" @@ -15,8 +15,18 @@ class Matcher: "\/\/\n" ) - def __init__(self, header): - self.header = header + def __init__(self, content, trim_new_lines=False): + self.content = content + self.trim_new_lines = trim_new_lines - def matches(self): - return re.match(self.HEADER_TEMPLATE, self.header) is not None + @property + def header(self): + trim_regex = r"\s*" if self.trim_new_lines else r"" + return r"{trim_regex}{core}{trim_regex}".format( + trim_regex=trim_regex, + core=self._DEFAULT_HEADER_TEMPLATE + ) + + def match(self): + result = re.match(self.header, self.content) + return result.group(0) if result else None diff --git a/requirements-dev.txt b/requirements-dev.txt new file mode 100644 index 0000000..469e4c7 --- /dev/null +++ b/requirements-dev.txt @@ -0,0 +1,3 @@ +-r requirements.txt +nose +ipdb
barbosa/clorox
4a694a34146a80e21935a603549b549707a494f7
diff --git a/tests/test_matcher.py b/tests/test_matcher.py index 6665fff..4694a94 100644 --- a/tests/test_matcher.py +++ b/tests/test_matcher.py @@ -17,7 +17,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_1_digit_month(self): header = ("" @@ -29,7 +29,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_1_digit_day(self): header = ("" @@ -41,7 +41,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_objc_header_file(self): header = ("" @@ -53,7 +53,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_objc_implementation_file(self): header = ("" @@ -65,7 +65,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_objc_implementation_file(self): header = ("" @@ -77,7 +77,7 @@ class MatcherTestCase(unittest.TestCase): "// Copyright (c) 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() def test_matcher_with_special_copyright_character(self): header = ("" @@ -89,7 +89,39 @@ class MatcherTestCase(unittest.TestCase): "// Copyright © 2015 MyCompany. All rights reserved.\n" "//\n") - assert Matcher(header).matches() + assert Matcher(header).match() + + def test_matcher_with_trim_new_lines_on(self): + header = ("" + "\n" + "\n" + "//\n" + "// MyFile.m\n" + "// MyCompany\n" + "//\n" + "// Created by John Appleseed on 12/18/15.\n" + "// Copyright © 2015 MyCompany. All rights reserved.\n" + "//\n" + "\n" + "\n") + + assert Matcher(header, trim_new_lines=True).match() + + def test_matcher_with_trim_new_lines_off(self): + header = ("" + "\n" + "\n" + "//\n" + "// MyFile.m\n" + "// MyCompany\n" + "//\n" + "// Created by John Appleseed on 12/18/15.\n" + "// Copyright © 2015 MyCompany. All rights reserved.\n" + "//\n" + "\n" + "\n") + + assert not Matcher(header, trim_new_lines=False).match() if __name__ == '__main__': unittest.main()
Add option to remove all leading whitespace I really enjoy Clorox and we even enforce its usage using the [danger plugin](https://github.com/barbosa/danger-clorox), however I have some files with leading whitespace/newlines before the actual code, so when I run it these files start being flagged with errors due to the [SwiftLint Leading Whitespace rule](https://github.com/realm/SwiftLint/blob/master/Source/SwiftLintFramework/Rules/LeadingWhitespaceRule.swift). How do you feel about adding an option to enable the complete removal of leading whitespace? Thanks for the awesome job so far! 🎉
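The patch above implements this as a trim option that wraps the header regex in optional whitespace; a simplified, self-contained sketch of the idea (the header pattern below is a stand-in for the real template):

```python
import re

# HEADER_CORE is a simplified stand-in for the real 7-line Xcode header
# template; the real Matcher pattern also constrains each comment line.
HEADER_CORE = r"//\n(?://.*\n){5}//\n"


def header_pattern(trim_new_lines):
    # Optional surrounding whitespace swallows blank lines around the header.
    trim = r"\s*" if trim_new_lines else r""
    return trim + HEADER_CORE + trim


header = "\n\n" + "\n".join([
    "//",
    "// MyFile.m",
    "// MyCompany",
    "//",
    "// Created by John Appleseed on 12/18/15.",
    "// Copyright (c) 2015 MyCompany. All rights reserved.",
    "//",
]) + "\n\n\n"

assert re.match(header_pattern(True), header).end() == len(header)
assert re.match(header_pattern(False), header) is None
```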
0.0
4a694a34146a80e21935a603549b549707a494f7
[ "tests/test_matcher.py::MatcherTestCase::test_matcher_with_1_digit_day", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_1_digit_month", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_2_digits_dates", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_objc_header_file", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_objc_implementation_file", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_special_copyright_character", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_trim_new_lines_off", "tests/test_matcher.py::MatcherTestCase::test_matcher_with_trim_new_lines_on" ]
[]
{ "failed_lite_validators": [ "has_hyperlinks", "has_added_files", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2016-12-27 17:51:13+00:00
mit
1,293
barrust__pyprobables-115
diff --git a/probables/blooms/bloom.py b/probables/blooms/bloom.py index 912cc92..1da0311 100644 --- a/probables/blooms/bloom.py +++ b/probables/blooms/bloom.py @@ -315,21 +315,9 @@ class BloomFilter: with open(filename, "w", encoding="utf-8") as file: print(f"/* BloomFilter Export of a {bloom_type} */", file=file) print("#include <inttypes.h>", file=file) - print( - "const uint64_t estimated_elements = ", - self.estimated_elements, - ";", - sep="", - file=file, - ) + print("const uint64_t estimated_elements = ", self.estimated_elements, ";", sep="", file=file) print("const uint64_t elements_added = ", self.elements_added, ";", sep="", file=file) - print( - "const float false_positive_rate = ", - self.false_positive_rate, - ";", - sep="", - file=file, - ) + print("const float false_positive_rate = ", self.false_positive_rate, ";", sep="", file=file) print("const uint64_t number_bits = ", self.number_bits, ";", sep="", file=file) print("const unsigned int number_hashes = ", self.number_hashes, ";", sep="", file=file) print("const unsigned char bloom[] = {", *data, "};", sep="\n", file=file) diff --git a/probables/quotientfilter/quotientfilter.py b/probables/quotientfilter/quotientfilter.py index 3411954..7f5fce9 100644 --- a/probables/quotientfilter/quotientfilter.py +++ b/probables/quotientfilter/quotientfilter.py @@ -4,7 +4,7 @@ """ from array import array -from typing import Optional +from typing import Iterator, List, Optional from probables.hashes import KeyT, SimpleHashT, fnv_1a_32 from probables.utilities import Bitarray @@ -15,6 +15,7 @@ class QuotientFilter: Args: quotient (int): The size of the quotient to use + auto_expand (bool): Automatically expand or not hash_function (function): Hashing strategy function to use `hf(key, number)` Returns: QuotientFilter: The initialized filter @@ -35,18 +36,27 @@ class QuotientFilter: "_is_continuation", "_is_shifted", "_filter", + "_max_load_factor", + "_auto_resize", ) - def __init__(self, quotient: int = 20, hash_function: Optional[SimpleHashT] = None): # needs to be parameterized + def __init__( + self, quotient: int = 20, auto_expand: bool = True, hash_function: Optional[SimpleHashT] = None + ): # needs to be parameterized if quotient < 3 or quotient > 31: raise ValueError( f"Quotient filter: Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" ) - self._q = quotient - self._r = 32 - quotient - self._size = 1 << self._q # same as 2**q - self._elements_added = 0 + self.__set_params(quotient, auto_expand, hash_function) + + def __set_params(self, quotient: int, auto_expand: bool, hash_function: Optional[SimpleHashT]): + self._q: int = quotient + self._r: int = 32 - quotient + self._size: int = 1 << self._q # same as 2**q + self._elements_added: int = 0 + self._auto_resize: bool = auto_expand self._hash_func: SimpleHashT = fnv_1a_32 if hash_function is None else hash_function # type: ignore + self._max_load_factor: float = 0.85 # ensure we use the smallest type possible to reduce memory wastage if self._r <= 8: @@ -89,21 +99,61 @@ class QuotientFilter: return self._elements_added @property - def bits_per_elm(self): + def bits_per_elm(self) -> int: """int: The number of bits used per element""" return self._bits_per_elm + @property + def size(self) -> int: + """int: The number of bins available in the filter + + Note: + same as `num_elements`""" + return self._size + + @property + def load_factor(self) -> float: + """float: The load factor of the filter""" + return self._elements_added / self._size + + 
@property + def auto_expand(self) -> bool: + """bool: Will the quotient filter automatically expand""" + return self._auto_resize + + @auto_expand.setter + def auto_expand(self, val: bool): + """change the auto expand property""" + self._auto_resize = bool(val) + + @property + def max_load_factor(self) -> float: + """float: The maximum allowed load factor after which auto expanding should occur""" + return self._max_load_factor + + @max_load_factor.setter + def max_load_factor(self, val: float): + """set the maximum load factor""" + self._max_load_factor = float(val) + def add(self, key: KeyT) -> None: """Add key to the quotient filter Args: key (str|bytes): The element to add""" _hash = self._hash_func(key, 0) + self.add_alt(_hash) + + def add_alt(self, _hash: int) -> None: + """Add the pre-hashed value to the quotient filter + + Args: + _hash (int): The element to add""" key_quotient = _hash >> self._r key_remainder = _hash & ((1 << self._r) - 1) - - if not self._contains(key_quotient, key_remainder): - # TODO, add it here + if self._contained_at_loc(key_quotient, key_remainder) == -1: + if self._auto_resize and self.load_factor >= self._max_load_factor: + self.resize() self._add(key_quotient, key_remainder) def check(self, key: KeyT) -> bool: @@ -114,9 +164,92 @@ class QuotientFilter: Return: bool: True if likely encountered, False if definately not""" _hash = self._hash_func(key, 0) + return self.check_alt(_hash) + + def check_alt(self, _hash: int) -> bool: + """Check to see if the pre-calculated hash is likely in the quotient filter + + Args: + _hash (int): The element to add + Return: + bool: True if likely encountered, False if definately not""" key_quotient = _hash >> self._r key_remainder = _hash & ((1 << self._r) - 1) - return self._contains(key_quotient, key_remainder) + return not self._contained_at_loc(key_quotient, key_remainder) == -1 + + def iter_hashes(self) -> Iterator[int]: + """A generator over the hashes in the quotient filter + + Yields: + int: The next hash stored in the quotient filter""" + queue: List[int] = [] + + # find first empty location + start = 0 + while True: + is_occupied = self._is_occupied.check_bit(start) + is_continuation = self._is_continuation.check_bit(start) + is_shifted = self._is_shifted.check_bit(start) + if is_occupied + is_continuation + is_shifted == 0: + break + start += 1 + + cur_quot = 0 + for i in range(start, self._size + start): # this will allow for wrap-arounds + idx = i % self._size + is_occupied = self._is_occupied.check_bit(idx) + is_continuation = self._is_continuation.check_bit(idx) + is_shifted = self._is_shifted.check_bit(idx) + # Nothing here, keep going + if is_occupied + is_continuation + is_shifted == 0: + assert len(queue) == 0 + continue + + if is_occupied == 1: # keep track of the indicies that match a hashed quotient + queue.append(idx) + + # run start + if not is_continuation and (is_occupied or is_shifted): + cur_quot = queue.pop(0) + + if self._filter[idx] != 0: + yield (cur_quot << self._r) + self._filter[idx] + + def get_hashes(self) -> List[int]: + """Get the hashes from the quotient filter as a list + + Returns: + list(int): The hash values stored in the quotient filter""" + return list(self.iter_hashes()) + + def resize(self, quotient: Optional[int] = None) -> None: + """Resize the quotient filter to use the new quotient size + + Args: + int: The new quotient to use + Note: + If `None` is provided, the quotient filter will double in size (quotient + 1) + Raises: + ValueError: When the new quotient will not 
accommodate the elements already added""" + if quotient is None: + quotient = self._q + 1 + + if self.elements_added >= (1 << quotient): + raise ValueError("Unable to shrink since there will be too many elements in the quotient filter") + if quotient < 3 or quotient > 31: + raise ValueError( + f"Quotient filter: Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" + ) + + hashes = self.get_hashes() + + for i in range(self._size): + self._filter[i] = 0 + + self.__set_params(quotient, self._auto_resize, self._hash_func) + + for _h in hashes: + self.add_alt(_h) def _shift_insert(self, k, v, start, j, flag): if self._is_occupied[j] == 0 and self._is_continuation[j] == 0 and self._is_shifted[j] == 0: @@ -215,9 +348,10 @@ class QuotientFilter: self._shift_insert(q, r, orig_start_idx, start_idx, 1) self._elements_added += 1 - def _contains(self, q: int, r: int) -> bool: + def _contained_at_loc(self, q: int, r: int) -> int: + """returns the index location of the element, or -1 if not present""" if self._is_occupied[q] == 0: - return False + return -1 start_idx = self._get_start_index(q) @@ -236,7 +370,7 @@ class QuotientFilter: break if self._filter[start_idx] == r: - return True + return start_idx start_idx = (start_idx + 1) & (self._size - 1) meta_bits = ( @@ -245,4 +379,4 @@ class QuotientFilter: + self._is_shifted.check_bit(start_idx) ) - return False + return -1 diff --git a/pyproject.toml b/pyproject.toml index ae50c6a..c697c86 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -15,6 +15,7 @@ keywords = [ "bloom-filter", "count-min-sketch", "cuckoo-filter", + "quotient-filter", ] readme = "README.rst" classifiers = [
barrust/pyprobables
84dbffc9a5a27d5daeed37137efc0b2efc0e8ecc
diff --git a/tests/quotientfilter_test.py b/tests/quotientfilter_test.py index 1f0f1a1..292c5ba 100644 --- a/tests/quotientfilter_test.py +++ b/tests/quotientfilter_test.py @@ -38,14 +38,16 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.remainder, 24) self.assertEqual(qf.elements_added, 0) self.assertEqual(qf.num_elements, 256) # 2**qf.quotient + self.assertTrue(qf.auto_expand) - qf = QuotientFilter(quotient=24) + qf = QuotientFilter(quotient=24, auto_expand=False) self.assertEqual(qf.bits_per_elm, 8) self.assertEqual(qf.quotient, 24) self.assertEqual(qf.remainder, 8) self.assertEqual(qf.elements_added, 0) self.assertEqual(qf.num_elements, 16777216) # 2**qf.quotient + self.assertFalse(qf.auto_expand) def test_qf_add_check(self): "test that the qf is able to add and check elements" @@ -54,7 +56,7 @@ class TestQuotientFilter(unittest.TestCase): for i in range(0, 200, 2): qf.add(str(i)) self.assertEqual(qf.elements_added, 100) - + self.assertEqual(qf.load_factor, 100 / qf.size) found_no = False for i in range(0, 200, 2): if not qf.check(str(i)): @@ -87,6 +89,102 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.elements_added, 100) - def test_qf_errors(self): + def test_qf_init_errors(self): + """test quotient filter initialization errors""" self.assertRaises(ValueError, lambda: QuotientFilter(quotient=2)) self.assertRaises(ValueError, lambda: QuotientFilter(quotient=32)) + + def test_retrieve_hashes(self): + """test retrieving hashes back from the quotient filter""" + qf = QuotientFilter(quotient=8, auto_expand=False) + hashes = [] + for i in range(255): + hashes.append(qf._hash_func(str(i), 0)) # use the private function here.. + qf.add(str(i)) + self.assertEqual(qf.size, 256) + self.assertEqual(qf.load_factor, 255 / qf.size) + out_hashes = qf.get_hashes() + self.assertEqual(qf.elements_added, len(out_hashes)) + self.assertEqual(set(hashes), set(out_hashes)) + + def test_resize(self): + """test resizing the quotient filter""" + qf = QuotientFilter(quotient=8, auto_expand=False) + for i in range(200): + qf.add(str(i)) + + self.assertEqual(qf.elements_added, 200) + self.assertEqual(qf.load_factor, 200 / qf.size) + self.assertEqual(qf.quotient, 8) + self.assertEqual(qf.remainder, 24) + self.assertEqual(qf.bits_per_elm, 32) + self.assertFalse(qf.auto_expand) + + self.assertRaises(ValueError, lambda: qf.resize(7)) # should be too small to fit + + qf.resize(17) + self.assertEqual(qf.elements_added, 200) + self.assertEqual(qf.load_factor, 200 / qf.size) + self.assertEqual(qf.quotient, 17) + self.assertEqual(qf.remainder, 15) + self.assertEqual(qf.bits_per_elm, 16) + # ensure everything is still accessable + for i in range(200): + self.assertTrue(qf.check(str(i))) + + def test_auto_resize(self): + """test resizing the quotient filter automatically""" + qf = QuotientFilter(quotient=8, auto_expand=True) + self.assertEqual(qf.max_load_factor, 0.85) + self.assertEqual(qf.elements_added, 0) + self.assertEqual(qf.load_factor, 0 / qf.size) + self.assertEqual(qf.quotient, 8) + self.assertEqual(qf.remainder, 24) + self.assertEqual(qf.bits_per_elm, 32) + self.assertTrue(qf.auto_expand) + + for i in range(220): + qf.add(str(i)) + + self.assertEqual(qf.max_load_factor, 0.85) + self.assertEqual(qf.elements_added, 220) + self.assertEqual(qf.load_factor, 220 / qf.size) + self.assertEqual(qf.quotient, 9) + self.assertEqual(qf.remainder, 23) + self.assertEqual(qf.bits_per_elm, 32) + + def test_auto_resize_changed_max_load_factor(self): + """test resizing the quotient filter 
with a different load factor""" + qf = QuotientFilter(quotient=8, auto_expand=True) + self.assertEqual(qf.max_load_factor, 0.85) + self.assertTrue(qf.auto_expand) + qf.max_load_factor = 0.65 + self.assertEqual(qf.max_load_factor, 0.65) + + self.assertEqual(qf.elements_added, 0) + self.assertEqual(qf.load_factor, 0 / qf.size) + self.assertEqual(qf.quotient, 8) + self.assertEqual(qf.remainder, 24) + self.assertEqual(qf.bits_per_elm, 32) + self.assertTrue(qf.auto_expand) + + for i in range(200): + qf.add(str(i)) + + self.assertEqual(qf.max_load_factor, 0.85) + self.assertEqual(qf.elements_added, 200) + self.assertEqual(qf.load_factor, 200 / qf.size) + self.assertEqual(qf.quotient, 9) + self.assertEqual(qf.remainder, 23) + self.assertEqual(qf.bits_per_elm, 32) + + def test_resize_errors(self): + """test resizing errors""" + + qf = QuotientFilter(quotient=8, auto_expand=True) + for i in range(200): + qf.add(str(i)) + + self.assertRaises(ValueError, lambda: qf.resize(quotient=2)) + self.assertRaises(ValueError, lambda: qf.resize(quotient=32)) + self.assertRaises(ValueError, lambda: qf.resize(quotient=6))
quotient filter: additional functionality Additional functionality to add to the quotient filter:

- Resize / Merge
- Delete element
- Import / Export

Something to consider would be to use a form of bit packing to make it more compact, perhaps as a second class
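The resize implemented in the patch above works because every stored element can be reassembled into its original 32-bit hash and reinserted under a new quotient/remainder split. A sketch of that rebuild, where `_reset` is a hypothetical helper standing in for reallocating the bins and metadata bit arrays:

```python
def split(h32, q):
    # A 32-bit hash splits into a q-bit quotient and an r = 32 - q bit
    # remainder; the element is fully recoverable from the two parts.
    r = 32 - q
    return h32 >> r, h32 & ((1 << r) - 1)


def resize(qf, new_quotient):
    hashes = qf.get_hashes()  # each reassembled as (quotient << r) + remainder
    qf._reset(new_quotient)   # hypothetical helper: reallocate bins + metadata
    for h in hashes:
        qf.add_alt(h)         # reinsert using the new quotient/remainder split
```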
0.0
84dbffc9a5a27d5daeed37137efc0b2efc0e8ecc
[ "tests/quotientfilter_test.py::TestQuotientFilter::test_auto_resize", "tests/quotientfilter_test.py::TestQuotientFilter::test_auto_resize_changed_max_load_factor", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_add_check", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_init", "tests/quotientfilter_test.py::TestQuotientFilter::test_resize", "tests/quotientfilter_test.py::TestQuotientFilter::test_resize_errors", "tests/quotientfilter_test.py::TestQuotientFilter::test_retrieve_hashes" ]
[ "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_add_check_in", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_init_errors" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2024-01-13 15:59:00+00:00
mit
1,294
barrust__pyprobables-116
diff --git a/CHANGELOG.md b/CHANGELOG.md index 0dd8b03..6cbec4f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,13 @@ # PyProbables Changelog +### Version 0.6.1 + +* Quotient Filter: + * Add ability to get hashes from the filter either as a list, or as a generator + * Add quotient filter expand capability, auto and on request + * Add QuotientFilterError exception + * Add merge functionality + ### Version 0.6.0 * Add `QuotientFilter` implementation; [see issue #37](https://github.com/barrust/pyprobables/issues/37) diff --git a/probables/exceptions.py b/probables/exceptions.py index ff516e4..b76eb37 100644 --- a/probables/exceptions.py +++ b/probables/exceptions.py @@ -68,3 +68,14 @@ class CountMinSketchError(ProbablesBaseException): def __init__(self, message: str) -> None: self.message = message super().__init__(self.message) + + +class QuotientFilterError(ProbablesBaseException): + """Quotient Filter Exception + + Args: + message (str): The error message to be reported""" + + def __init__(self, message: str) -> None: + self.message = message + super().__init__(self.message) diff --git a/probables/quotientfilter/quotientfilter.py b/probables/quotientfilter/quotientfilter.py index 7f5fce9..5f70056 100644 --- a/probables/quotientfilter/quotientfilter.py +++ b/probables/quotientfilter/quotientfilter.py @@ -6,6 +6,7 @@ from array import array from typing import Iterator, List, Optional +from probables.exceptions import QuotientFilterError from probables.hashes import KeyT, SimpleHashT, fnv_1a_32 from probables.utilities import Bitarray @@ -20,7 +21,7 @@ class QuotientFilter: Returns: QuotientFilter: The initialized filter Raises: - ValueError: + QuotientFilterError: Raised when unable to initialize Note: The size of the QuotientFilter will be 2**q""" @@ -44,8 +45,8 @@ class QuotientFilter: self, quotient: int = 20, auto_expand: bool = True, hash_function: Optional[SimpleHashT] = None ): # needs to be parameterized if quotient < 3 or quotient > 31: - raise ValueError( - f"Quotient filter: Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" + raise QuotientFilterError( + f"Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" ) self.__set_params(quotient, auto_expand, hash_function) @@ -140,7 +141,9 @@ class QuotientFilter: """Add key to the quotient filter Args: - key (str|bytes): The element to add""" + key (str|bytes): The element to add + Raises: + QuotientFilterError: Raised when no locations are available in which to insert""" _hash = self._hash_func(key, 0) self.add_alt(_hash) @@ -148,12 +151,14 @@ class QuotientFilter: """Add the pre-hashed value to the quotient filter Args: - _hash (int): The element to add""" + _hash (int): The element to add + Raises: + QuotientFilterError: Raised when no locations are available in which to insert""" + if self._auto_resize and self.load_factor >= self._max_load_factor: + self.resize() key_quotient = _hash >> self._r key_remainder = _hash & ((1 << self._r) - 1) if self._contained_at_loc(key_quotient, key_remainder) == -1: - if self._auto_resize and self.load_factor >= self._max_load_factor: - self.resize() self._add(key_quotient, key_remainder) def check(self, key: KeyT) -> bool: @@ -177,7 +182,7 @@ class QuotientFilter: key_remainder = _hash & ((1 << self._r) - 1) return not self._contained_at_loc(key_quotient, key_remainder) == -1 - def iter_hashes(self) -> Iterator[int]: + def hashes(self) -> Iterator[int]: """A generator over the hashes in the quotient filter Yields: @@ 
-220,25 +225,25 @@ class QuotientFilter: Returns: list(int): The hash values stored in the quotient filter""" - return list(self.iter_hashes()) + return list(self.hashes()) def resize(self, quotient: Optional[int] = None) -> None: """Resize the quotient filter to use the new quotient size Args: - int: The new quotient to use + quotient (int): The new quotient to use Note: If `None` is provided, the quotient filter will double in size (quotient + 1) Raises: - ValueError: When the new quotient will not accommodate the elements already added""" + QuotientFilterError: When the new quotient will not accommodate the elements already added""" if quotient is None: quotient = self._q + 1 if self.elements_added >= (1 << quotient): - raise ValueError("Unable to shrink since there will be too many elements in the quotient filter") + raise QuotientFilterError("Unable to shrink since there will be too many elements in the quotient filter") if quotient < 3 or quotient > 31: - raise ValueError( - f"Quotient filter: Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" + raise QuotientFilterError( + f"Invalid quotient setting; quotient must be between 3 and 31; {quotient} was provided" ) hashes = self.get_hashes() @@ -251,6 +256,19 @@ class QuotientFilter: for _h in hashes: self.add_alt(_h) + def merge(self, second: "QuotientFilter") -> None: + """Merge the `second` quotient filter into the first + + Args: + second (QuotientFilter): The quotient filter to merge + Note: + The hashing function between the two filters should match + Note: + Errors can occur if the quotient filter being inserted into does not expand (i.e., auto_expand=False)""" + + for _h in second.hashes(): + self.add_alt(_h) + def _shift_insert(self, k, v, start, j, flag): if self._is_occupied[j] == 0 and self._is_continuation[j] == 0 and self._is_shifted[j] == 0: self._filter[j] = v @@ -311,6 +329,8 @@ class QuotientFilter: return j def _add(self, q: int, r: int): + if self._size == self._elements_added: + raise QuotientFilterError("Unable to insert the element due to insufficient space") if self._is_occupied[q] == 0 and self._is_continuation[q] == 0 and self._is_shifted[q] == 0: self._filter[q] = r self._is_occupied[q] = 1
barrust/pyprobables
28a58b088b1d5856a09e82c72117a368c7e5b7bf
diff --git a/tests/quotientfilter_test.py b/tests/quotientfilter_test.py index 292c5ba..602ad8d 100644 --- a/tests/quotientfilter_test.py +++ b/tests/quotientfilter_test.py @@ -9,6 +9,8 @@ import unittest from pathlib import Path from tempfile import NamedTemporaryFile +from probables.exceptions import QuotientFilterError + this_dir = Path(__file__).parent sys.path.insert(0, str(this_dir)) sys.path.insert(0, str(this_dir.parent)) @@ -49,6 +51,10 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.num_elements, 16777216) # 2**qf.quotient self.assertFalse(qf.auto_expand) + # reset auto_expand + qf.auto_expand = True + self.assertTrue(qf.auto_expand) + def test_qf_add_check(self): "test that the qf is able to add and check elements" qf = QuotientFilter(quotient=8) @@ -91,10 +97,10 @@ class TestQuotientFilter(unittest.TestCase): def test_qf_init_errors(self): """test quotient filter initialization errors""" - self.assertRaises(ValueError, lambda: QuotientFilter(quotient=2)) - self.assertRaises(ValueError, lambda: QuotientFilter(quotient=32)) + self.assertRaises(QuotientFilterError, lambda: QuotientFilter(quotient=2)) + self.assertRaises(QuotientFilterError, lambda: QuotientFilter(quotient=32)) - def test_retrieve_hashes(self): + def test_qf_retrieve_hashes(self): """test retrieving hashes back from the quotient filter""" qf = QuotientFilter(quotient=8, auto_expand=False) hashes = [] @@ -107,7 +113,7 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.elements_added, len(out_hashes)) self.assertEqual(set(hashes), set(out_hashes)) - def test_resize(self): + def test_qf_resize(self): """test resizing the quotient filter""" qf = QuotientFilter(quotient=8, auto_expand=False) for i in range(200): @@ -120,7 +126,7 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.bits_per_elm, 32) self.assertFalse(qf.auto_expand) - self.assertRaises(ValueError, lambda: qf.resize(7)) # should be too small to fit + self.assertRaises(QuotientFilterError, lambda: qf.resize(7)) # should be too small to fit qf.resize(17) self.assertEqual(qf.elements_added, 200) @@ -132,7 +138,7 @@ class TestQuotientFilter(unittest.TestCase): for i in range(200): self.assertTrue(qf.check(str(i))) - def test_auto_resize(self): + def test_qf_auto_resize(self): """test resizing the quotient filter automatically""" qf = QuotientFilter(quotient=8, auto_expand=True) self.assertEqual(qf.max_load_factor, 0.85) @@ -153,7 +159,7 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.remainder, 23) self.assertEqual(qf.bits_per_elm, 32) - def test_auto_resize_changed_max_load_factor(self): + def test_qf_auto_resize_changed_max_load_factor(self): """test resizing the quotient filter with a different load factor""" qf = QuotientFilter(quotient=8, auto_expand=True) self.assertEqual(qf.max_load_factor, 0.85) @@ -178,13 +184,46 @@ class TestQuotientFilter(unittest.TestCase): self.assertEqual(qf.remainder, 23) self.assertEqual(qf.bits_per_elm, 32) - def test_resize_errors(self): + def test_qf_resize_errors(self): """test resizing errors""" qf = QuotientFilter(quotient=8, auto_expand=True) for i in range(200): qf.add(str(i)) - self.assertRaises(ValueError, lambda: qf.resize(quotient=2)) - self.assertRaises(ValueError, lambda: qf.resize(quotient=32)) - self.assertRaises(ValueError, lambda: qf.resize(quotient=6)) + self.assertRaises(QuotientFilterError, lambda: qf.resize(quotient=2)) + self.assertRaises(QuotientFilterError, lambda: qf.resize(quotient=32)) + self.assertRaises(QuotientFilterError, 
lambda: qf.resize(quotient=6)) + + def test_qf_merge(self): + """test merging two quotient filters together""" + qf = QuotientFilter(quotient=8, auto_expand=True) + for i in range(200): + qf.add(str(i)) + + fq = QuotientFilter(quotient=8) + for i in range(300, 500): + fq.add(str(i)) + + qf.merge(fq) + + for i in range(200): + self.assertTrue(qf.check(str(i))) + for i in range(200, 300): + self.assertFalse(qf.check(str(i))) + for i in range(300, 500): + self.assertTrue(qf.check(str(i))) + + self.assertEqual(qf.elements_added, 400) + + def test_qf_merge_error(self): + """test unable to merge due to inability to grow""" + qf = QuotientFilter(quotient=8, auto_expand=False) + for i in range(200): + qf.add(str(i)) + + fq = QuotientFilter(quotient=8) + for i in range(300, 400): + fq.add(str(i)) + + self.assertRaises(QuotientFilterError, lambda: qf.merge(fq))
quotient filter: Merge Add functionality to merge multiple quotient filters; see #112.
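For context, here is a minimal usage sketch of the new merge capability, mirroring the patch and tests above. It assumes `QuotientFilter` is importable from `probables` as in the test suite; both filters should use the same hash function, since `merge` simply replays the second filter's stored hashes into the first via `add_alt`.

```python
from probables import QuotientFilter

# Two filters built with the same quotient and default hash function.
qf = QuotientFilter(quotient=8, auto_expand=True)
fq = QuotientFilter(quotient=8)
for i in range(200):
    qf.add(str(i))
for i in range(300, 500):
    fq.add(str(i))

# merge() iterates fq.hashes() and re-inserts each hash into qf; with
# auto_expand=False this could raise QuotientFilterError once qf fills up.
qf.merge(fq)

assert qf.check("150") and qf.check("450")
print(qf.elements_added)  # 400, per the test above
```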
0.0
28a58b088b1d5856a09e82c72117a368c7e5b7bf
[ "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_add_check", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_add_check_in", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_auto_resize", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_auto_resize_changed_max_load_factor", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_init", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_init_errors", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_merge", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_merge_error", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_resize", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_resize_errors", "tests/quotientfilter_test.py::TestQuotientFilter::test_qf_retrieve_hashes" ]
[]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_issue_reference", "has_many_modified_files", "has_many_hunks", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2024-01-13 21:44:48+00:00
mit
1,295
barrust__pyspellchecker-101
diff --git a/spellchecker/utils.py b/spellchecker/utils.py index 437a7aa..7260d42 100644 --- a/spellchecker/utils.py +++ b/spellchecker/utils.py @@ -1,8 +1,8 @@ """ Additional utility functions """ import contextlib import gzip -import re import functools +import re import warnings from .info import __version__ @@ -135,4 +135,4 @@ def _parse_into_words(text): text (str): The text to split into words """ # see: https://stackoverflow.com/a/12705513 - return re.findall(r"(\w[\w']*\w|\w)", text.lower()) + return re.findall(r"(\w[\w']*\w|\w)", text)
barrust/pyspellchecker
e48c4350685a134994fad716dea0a5b626ff759e
diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py index bf13fa4..772ac1b 100644 --- a/tests/spellchecker_test.py +++ b/tests/spellchecker_test.py @@ -36,13 +36,13 @@ class TestSpellChecker(unittest.TestCase): def test_words(self): ''' test the parsing of words ''' spell = SpellChecker() - res = ['this', 'is', 'a', 'test', 'of', 'this'] + res = ['This', 'is', 'a', 'test', 'of', 'this'] self.assertEqual(spell.split_words('This is a test of this'), res) def test_words_more_complete(self): ''' test the parsing of words ''' spell = SpellChecker() - res = ['this', 'is', 'a', 'test', 'of', 'the', 'word', 'parser', 'it', 'should', 'work', 'correctly'] + res = ['This', 'is', 'a', 'test', 'of', 'the', 'word', 'parser', 'It', 'should', 'work', 'correctly'] self.assertEqual(spell.split_words('This is a test of the word parser. It should work correctly!!!'), res) def test_word_frequency(self): @@ -413,7 +413,7 @@ class TestSpellChecker(unittest.TestCase): ''' test using split_words ''' spell = SpellChecker() res = spell.split_words("This isn't a good test, but it is a test!!!!") - self.assertEqual(set(res), set(["this", "isn't", "a", "good", "test", "but", "it", "is", "a", "test"])) + self.assertEqual(set(res), set(["This", "isn't", "a", "good", "test", "but", "it", "is", "a", "test"])) def test_iter_spellchecker(self): """ Test using the iterator on the SpellChecker """ @@ -440,3 +440,17 @@ class TestSpellChecker(unittest.TestCase): self.assertTrue(word in spell) cnt += 1 self.assertEqual(cnt, len(spell.word_frequency.dictionary)) + + def test_case_sensitive_parse_words(self): + """ Test using the parse words to generate a case sensitive dict """ + spell = SpellChecker(language=None, case_sensitive=True) + spell.word_frequency.load_text("This is a Test of the test!") + self.assertTrue("This" in spell) + self.assertFalse("this" in spell) + + def test_case_insensitive_parse_words(self): + """ Test using the parse words to generate a case insensitive dict """ + spell = SpellChecker(language=None, case_sensitive=False) + spell.word_frequency.load_text("This is a Test of the test!") + # in makes sure it is lower case in this instance + self.assertTrue("this" in spell)
Case Sensitive bug The case_sensitive parameter has no effect because the `_parse_into_words` method lowercases the text irrespective of the case_sensitive setting; this lowercasing needs to be removed for the parameter to be honored.
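For reference, a minimal sketch of the fix direction taken by the patch above: the tokenizer stops lowercasing, leaving it to the caller to lowercase when case_sensitive is False.

```python
import re

def _parse_into_words(text):
    """Split text into words, preserving case and in-word apostrophes."""
    # see: https://stackoverflow.com/a/12705513
    return re.findall(r"(\w[\w']*\w|\w)", text)  # note: no text.lower()

print(_parse_into_words("This is a Test of the test!"))
# -> ['This', 'is', 'a', 'Test', 'of', 'the', 'test']
```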
0.0
e48c4350685a134994fad716dea0a5b626ff759e
[ "tests/spellchecker_test.py::TestSpellChecker::test_case_sensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_split_words", "tests/spellchecker_test.py::TestSpellChecker::test_words", "tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete" ]
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word", "tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode", "tests/spellchecker_test.py::TestSpellChecker::test_bytes_input", "tests/spellchecker_test.py::TestSpellChecker::test_candidates", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set", "tests/spellchecker_test.py::TestSpellChecker::test_case_insensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word", "tests/spellchecker_test.py::TestSpellChecker::test_correction", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two", "tests/spellchecker_test.py::TestSpellChecker::test_extremely_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_json", "tests/spellchecker_test.py::TestSpellChecker::test_iter_spellchecker", "tests/spellchecker_test.py::TestSpellChecker::test_iter_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file", "tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_pop", "tests/spellchecker_test.py::TestSpellChecker::test_pop_default", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items", "tests/spellchecker_test.py::TestSpellChecker::test_remove_word", "tests/spellchecker_test.py::TestSpellChecker::test_remove_words", "tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided", "tests/spellchecker_test.py::TestSpellChecker::test_unique_words", "tests/spellchecker_test.py::TestSpellChecker::test_unknown_words", "tests/spellchecker_test.py::TestSpellChecker::test_word_contains", "tests/spellchecker_test.py::TestSpellChecker::test_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_word_in", "tests/spellchecker_test.py::TestSpellChecker::test_word_known", "tests/spellchecker_test.py::TestSpellChecker::test_word_probability_calc", "tests/spellchecker_test.py::TestSpellChecker::test_word_usage_frequency" ]
{ "failed_lite_validators": [ "has_short_problem_statement" ], "has_test_patch": true, "is_lite": false }
2021-03-23 02:36:42+00:00
mit
1,296
barrust__pyspellchecker-132
diff --git a/spellchecker/spellchecker.py b/spellchecker/spellchecker.py index 760b9c2..7c2fb8e 100644 --- a/spellchecker/spellchecker.py +++ b/spellchecker/spellchecker.py @@ -471,13 +471,14 @@ class WordFrequency(object): self._dictionary.update([word if self._case_sensitive else word.lower() for word in words]) self._update_dictionary() - def add(self, word: KeyT) -> None: + def add(self, word: KeyT, val: int = 1) -> None: """Add a word to the word frequency list Args: - word (str): The word to add""" + word (str): The word to add + val (int): The number of times to insert the word""" word = ensure_unicode(word) - self.load_words([word]) + self.load_json({word if self._case_sensitive else word.lower(): val}) def remove_words(self, words: typing.Iterable[KeyT]) -> None: """Remove a list of words from the word frequency list
barrust/pyspellchecker
35b2c4e1f4c8c50da9f7e30b7370c62847c4f513
diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py index 2e3ee57..e9db470 100644 --- a/tests/spellchecker_test.py +++ b/tests/spellchecker_test.py @@ -269,6 +269,13 @@ class TestSpellChecker(unittest.TestCase): spell.word_frequency.add("appt") self.assertEqual(spell["appt"], 1) + def test_add_word_priority(self): + """test adding a word with larger priority""" + spell = SpellChecker() + self.assertEqual(spell["appt"], 0) + spell.word_frequency.add("appt", 5000) + self.assertEqual(spell["appt"], 5000) + def test_checking_odd_word(self): """test checking a word that is really a number""" spell = SpellChecker() @@ -334,7 +341,7 @@ class TestSpellChecker(unittest.TestCase): def test_large_words(self): """test checking for words that are clearly larger than the largest dictionary word""" spell = SpellChecker(language=None, distance=2) - spell.word_frequency.add("Bob") + spell.word_frequency.add("Bob", 1) words = ["Bb", "bb", "BB"] self.assertEqual(spell.unknown(words), {"bb"})
load_words is not prioritized It looks like words loaded via `load_words` are not prioritized in the spellchecking.

```python
from spellchecker import SpellChecker

known_words = ['covid', 'Covid19']
spell = SpellChecker(language='en')
spell.word_frequency.load_words(known_words)

word = 'coved'
misspelled = spell.unknown([word])
print(spell.correction(word))
```

The output of this is `loved`.
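A hedged sketch of the fix this enables: the two-argument `add(word, val)` from the patch above lets a custom word carry a large count so it can win the frequency comparison inside `correction()`. The count needed to outrank common words like 'loved' depends on the bundled dictionary, so the value below is only illustrative.

```python
from spellchecker import SpellChecker

spell = SpellChecker(language='en')
# add(word, val) inserts the word with the given count (val defaults to 1);
# a large count lets it outrank frequent dictionary words such as 'loved'.
spell.word_frequency.add('covid', 10**8)

print(spell.correction('coved'))  # expected: 'covid'
```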
0.0
35b2c4e1f4c8c50da9f7e30b7370c62847c4f513
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word_priority", "tests/spellchecker_test.py::TestSpellChecker::test_large_words" ]
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word", "tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode", "tests/spellchecker_test.py::TestSpellChecker::test_bytes_input", "tests/spellchecker_test.py::TestSpellChecker::test_candidates", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set", "tests/spellchecker_test.py::TestSpellChecker::test_case_insensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_case_sensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word", "tests/spellchecker_test.py::TestSpellChecker::test_correction", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two", "tests/spellchecker_test.py::TestSpellChecker::test_extremely_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_json", "tests/spellchecker_test.py::TestSpellChecker::test_iter_spellchecker", "tests/spellchecker_test.py::TestSpellChecker::test_iter_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_language_list", "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file", "tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_multiple_dicts", "tests/spellchecker_test.py::TestSpellChecker::test_nan_correction", "tests/spellchecker_test.py::TestSpellChecker::test_pop", "tests/spellchecker_test.py::TestSpellChecker::test_pop_default", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items", "tests/spellchecker_test.py::TestSpellChecker::test_remove_word", "tests/spellchecker_test.py::TestSpellChecker::test_remove_words", "tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict", "tests/spellchecker_test.py::TestSpellChecker::test_split_words", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided", "tests/spellchecker_test.py::TestSpellChecker::test_unique_words", "tests/spellchecker_test.py::TestSpellChecker::test_unknown_words", "tests/spellchecker_test.py::TestSpellChecker::test_word_contains", "tests/spellchecker_test.py::TestSpellChecker::test_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_word_in", "tests/spellchecker_test.py::TestSpellChecker::test_word_known", "tests/spellchecker_test.py::TestSpellChecker::test_word_usage_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_words", "tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2022-08-29 23:47:38+00:00
mit
1,297
barrust__pyspellchecker-156
diff --git a/CHANGELOG.md b/CHANGELOG.md index 21fd9df..bf6effd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,10 @@ # pyspellchecker +## Version 0.7.3 +* Remove relative imports in favor of absolute imports +* Add `Path` support for files + + ## Version 0.7.2 * Added `Latvian` language support; see [#145](https://github.com/barrust/pyspellchecker/pull/145) * Added `Basque` language support; see [#146](https://github.com/barrust/pyspellchecker/pull/146) diff --git a/spellchecker/__init__.py b/spellchecker/__init__.py index 0426be6..52a5b4b 100644 --- a/spellchecker/__init__.py +++ b/spellchecker/__init__.py @@ -1,6 +1,6 @@ """ SpellChecker Module """ -from .spellchecker import SpellChecker, WordFrequency -from .info import ( +from spellchecker.spellchecker import SpellChecker, WordFrequency +from spellchecker.info import ( __author__, __maintainer__, __email__, diff --git a/spellchecker/spellchecker.py b/spellchecker/spellchecker.py index 9558c41..7707d4f 100644 --- a/spellchecker/spellchecker.py +++ b/spellchecker/spellchecker.py @@ -7,8 +7,9 @@ import string import typing from collections import Counter from collections.abc import Iterable +from pathlib import Path -from .utils import KeyT, _parse_into_words, ensure_unicode, load_file, write_file +from spellchecker.utils import KeyT, PathOrStr, _parse_into_words, ensure_unicode, load_file, write_file class SpellChecker: @@ -33,7 +34,7 @@ class SpellChecker: def __init__( self, language: typing.Union[str, typing.Iterable[str]] = "en", - local_dictionary: typing.Optional[str] = None, + local_dictionary: typing.Optional[PathOrStr] = None, distance: int = 2, tokenizer: typing.Optional[typing.Callable[[str], typing.Iterable[str]]] = None, case_sensitive: bool = False, @@ -122,7 +123,7 @@ class SpellChecker: text = ensure_unicode(text) return self._tokenizer(text) - def export(self, filepath: str, encoding: str = "utf-8", gzipped: bool = True) -> None: + def export(self, filepath: PathOrStr, encoding: str = "utf-8", gzipped: bool = True) -> None: """Export the word frequency list for import in the future Args: @@ -330,7 +331,7 @@ class WordFrequency: @property def total_words(self) -> int: - """int: The sum of all word occurances in the word frequency dictionary + """int: The sum of all word occurrences in the word frequency dictionary Note: Not settable""" @@ -401,7 +402,7 @@ class WordFrequency: This is the same as `dict.items()`""" yield from self._dictionary.items() - def load_dictionary(self, filename: str, encoding: str = "utf-8") -> None: + def load_dictionary(self, filename: PathOrStr, encoding: str = "utf-8") -> None: """Load in a pre-built word frequency list Args: @@ -422,7 +423,7 @@ class WordFrequency: def load_text_file( self, - filename: str, + filename: PathOrStr, encoding: str = "utf-8", tokenizer: typing.Optional[typing.Callable[[str], typing.Iterable[str]]] = None, ) -> None: diff --git a/spellchecker/utils.py b/spellchecker/utils.py index 050415f..fd7db15 100644 --- a/spellchecker/utils.py +++ b/spellchecker/utils.py @@ -5,10 +5,12 @@ import gzip import re import typing import warnings +from pathlib import Path -from .info import __version__ +from spellchecker.info import __version__ KeyT = typing.Union[str, bytes] +PathOrStr = typing.Union[Path, str] def fail_after(version: str) -> typing.Callable: @@ -77,7 +79,7 @@ def ensure_unicode(_str: KeyT, encoding: str = "utf-8") -> str: @contextlib.contextmanager -def __gzip_read(filename: str, mode: str = "rb", encoding: str = "UTF-8") -> typing.Generator[KeyT, None, 
None]: +def __gzip_read(filename: PathOrStr, mode: str = "rb", encoding: str = "UTF-8") -> typing.Generator[KeyT, None, None]: """Context manager to correctly handle the decoding of the output of the gzip file Args: @@ -92,7 +94,7 @@ def __gzip_read(filename: str, mode: str = "rb", encoding: str = "UTF-8") -> typ @contextlib.contextmanager -def load_file(filename: str, encoding: str) -> typing.Generator[KeyT, None, None]: +def load_file(filename: PathOrStr, encoding: str) -> typing.Generator[KeyT, None, None]: """Context manager to handle opening a gzip or text file correctly and reading all the data @@ -102,6 +104,9 @@ def load_file(filename: str, encoding: str) -> typing.Generator[KeyT, None, None Yields: str: The string data from the file read """ + if isinstance(filename, Path): + filename = str(filename) + if filename[-3:].lower() == ".gz": with __gzip_read(filename, mode="rt", encoding=encoding) as data: yield data @@ -110,7 +115,7 @@ def load_file(filename: str, encoding: str) -> typing.Generator[KeyT, None, None yield fobj.read() -def write_file(filepath: str, encoding: str, gzipped: bool, data: str) -> None: +def write_file(filepath: PathOrStr, encoding: str, gzipped: bool, data: str) -> None: """Write the data to file either as a gzip file or text based on the gzipped parameter @@ -130,7 +135,7 @@ def write_file(filepath: str, encoding: str, gzipped: bool, data: str) -> None: def _parse_into_words(text: str) -> typing.Iterable[str]: """Parse the text into words; currently removes punctuation except for - apostrophies. + apostrophizes. Args: text (str): The text to split into words
barrust/pyspellchecker
29c9210aae75db6d0621552f2ec3f1bcb87f35ad
diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py index a00b054..9a97096 100644 --- a/tests/spellchecker_test.py +++ b/tests/spellchecker_test.py @@ -1,7 +1,8 @@ """ Unittest class """ -import unittest import os +import unittest +from pathlib import Path from spellchecker import SpellChecker @@ -175,6 +176,14 @@ class TestSpellChecker(unittest.TestCase): self.assertEqual(spell["a"], 1) self.assertTrue("apple" in spell) + def test_load_external_dictionary_path(self): + """test loading a local dictionary""" + here = os.path.dirname(__file__) + filepath = Path(f"{here}/resources/small_dictionary.json") + spell = SpellChecker(language=None, local_dictionary=filepath) + self.assertEqual(spell["a"], 1) + self.assertTrue("apple" in spell) + def test_edit_distance_one(self): """test a case where edit distance must be one""" here = os.path.dirname(__file__) @@ -217,6 +226,18 @@ class TestSpellChecker(unittest.TestCase): self.assertTrue(spell["whale"]) self.assertTrue("waves" in spell) + def test_load_text_file_path(self): + """test loading a text file""" + here = os.path.dirname(__file__) + filepath = Path(f"{here}/resources/small_doc.txt") + spell = SpellChecker(language=None) # just from this doc! + spell.word_frequency.load_text_file(filepath) + self.assertEqual(spell["a"], 3) + self.assertEqual(spell["storm"], 2) + self.assertFalse("awesome" in spell) + self.assertTrue(spell["whale"]) + self.assertTrue("waves" in spell) + def test_remove_words(self): """test is a word is removed""" spell = SpellChecker() @@ -431,6 +452,23 @@ class TestSpellChecker(unittest.TestCase): self.assertTrue(spell["whale"]) self.assertTrue("sea." in spell) + def test_tokenizer_file_path(self): + """def using a custom tokenizer for file loading""" + + def tokens(txt): + yield from txt.split() + + here = os.path.dirname(__file__) + filepath = Path(f"{here}/resources/small_doc.txt") + spell = SpellChecker(language=None) # just from this doc! + spell.word_frequency.load_text_file(filepath, tokenizer=tokens) + self.assertEqual(spell["a"], 3) + self.assertEqual(spell["storm"], 1) + self.assertEqual(spell["storm."], 1) + self.assertFalse("awesome" in spell) + self.assertTrue(spell["whale"]) + self.assertTrue("sea." in spell) + def test_tokenizer_provided(self): """Test passing in a tokenizer"""
Error in load file function I stumbled upon an error raised by the `load_file` function in utils.py (line 95), which is called from spellchecker.py (line 436). The error I got:

"..\spellchecker\utils.py", line 105, in load_file
    if filename[-3:].lower() == ".gz":
       ~~~~~~~~^^^^^
TypeError: 'WindowsPath' object is not subscriptable
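A minimal sketch of the fix applied in `load_file` by the patch above: normalize `pathlib.Path` inputs to `str` before the slicing-based `.gz` suffix check (the helper name here is hypothetical):

```python
from pathlib import Path

def _as_str_path(filename):
    # Path objects are not subscriptable, so convert before slicing.
    if isinstance(filename, Path):
        filename = str(filename)
    return filename

name = _as_str_path(Path("resources/small_dictionary.json.gz"))
assert name[-3:].lower() == ".gz"  # no TypeError for Path inputs anymore
```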
0.0
29c9210aae75db6d0621552f2ec3f1bcb87f35ad
[ "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary_path", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file_path", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file_path" ]
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word", "tests/spellchecker_test.py::TestSpellChecker::test_add_word_priority", "tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode", "tests/spellchecker_test.py::TestSpellChecker::test_bytes_input", "tests/spellchecker_test.py::TestSpellChecker::test_candidates", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set", "tests/spellchecker_test.py::TestSpellChecker::test_case_insensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_case_sensitive_parse_words", "tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word", "tests/spellchecker_test.py::TestSpellChecker::test_correction", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two", "tests/spellchecker_test.py::TestSpellChecker::test_extremely_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_json", "tests/spellchecker_test.py::TestSpellChecker::test_iter_spellchecker", "tests/spellchecker_test.py::TestSpellChecker::test_iter_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_language_list", "tests/spellchecker_test.py::TestSpellChecker::test_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file", "tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_multiple_dicts", "tests/spellchecker_test.py::TestSpellChecker::test_nan_correction", "tests/spellchecker_test.py::TestSpellChecker::test_pop", "tests/spellchecker_test.py::TestSpellChecker::test_pop_default", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items", "tests/spellchecker_test.py::TestSpellChecker::test_remove_word", "tests/spellchecker_test.py::TestSpellChecker::test_remove_words", "tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict", "tests/spellchecker_test.py::TestSpellChecker::test_split_words", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided", "tests/spellchecker_test.py::TestSpellChecker::test_unique_words", "tests/spellchecker_test.py::TestSpellChecker::test_unknown_words", "tests/spellchecker_test.py::TestSpellChecker::test_word_contains", "tests/spellchecker_test.py::TestSpellChecker::test_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_word_in", "tests/spellchecker_test.py::TestSpellChecker::test_word_known", "tests/spellchecker_test.py::TestSpellChecker::test_word_usage_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_words", "tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-08-19 17:58:39+00:00
mit
1,298
barrust__pyspellchecker-87
diff --git a/CHANGELOG.md b/CHANGELOG.md index e4a891d..7aa5b30 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,10 +1,9 @@ # pyspellchecker -## Future Release -* Updated automated `scripts/build_dictionary.py` script to support adding missing words - -## Version 0.6.0 +## Version 0.6.0 (future version) * Remove **python 2.7** support +* Updated automated `scripts/build_dictionary.py` script to support adding missing words +* Updated `split_words()` to attempt to better handle punctuation; [#84](https://github.com/barrust/pyspellchecker/issues/84) ## Version 0.5.6 * ***NOTE:*** Last planned support for **Python 2.7** diff --git a/spellchecker/utils.py b/spellchecker/utils.py index deafce4..e00484d 100644 --- a/spellchecker/utils.py +++ b/spellchecker/utils.py @@ -64,9 +64,11 @@ def write_file(filepath, encoding, gzipped, data): def _parse_into_words(text): - """ Parse the text into words; currently removes punctuation + """ Parse the text into words; currently removes punctuation except for + apostrophies. Args: text (str): The text to split into words """ - return re.findall(r"\w+", text.lower()) + # see: https://stackoverflow.com/a/12705513 + return re.findall(r"(\w[\w']*\w|\w)", text.lower())
barrust/pyspellchecker
aa9668243fef58ff62c505a727b4a7284b81f42a
diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py index 165371a..b403117 100644 --- a/tests/spellchecker_test.py +++ b/tests/spellchecker_test.py @@ -191,7 +191,6 @@ class TestSpellChecker(unittest.TestCase): cnt += 1 self.assertEqual(cnt, 0) - def test_remove_by_threshold_using_items(self): ''' test removing everything below a certain threshold; using items to test ''' spell = SpellChecker() @@ -398,3 +397,9 @@ class TestSpellChecker(unittest.TestCase): self.assertTrue(var in spell) self.assertEqual(spell[var], 60) + + def test_split_words(self): + ''' test using split_words ''' + spell = SpellChecker() + res = spell.split_words("This isn't a good test, but it is a test!!!!") + self.assertEqual(set(res), set(["this", "isn't", "a", "good", "test", "but", "it", "is", "a", "test"]))
English spellchecking Hello Team! I am new to the project and I have a question. I use Python 3.7 and ran into a problem with this test program:

```python
from spellchecker import SpellChecker

spell = SpellChecker()
split_words = spell.split_words
spell_unknown = spell.unknown
words = split_words("That's how t and s don't fit.")
print(words)
misspelled = spell_unknown(words)
print(misspelled)
```

With pyspellchecker ver 0.5.4 the printout is:

```python
['that', 's', 'how', 't', 'and', 's', 'don', 't', 'fit']
set()
```

So free-standing 't' and 's' are not marked as errors, and neither are contractions. If I change the phrase to:

```python
words = split_words("That is how that's and don't do not fit.")
```

and use pyspellchecker ver 0.5.6, the printout is:

```python
['that', 'is', 'how', 'that', 's', 'and', 'don', 't', 'do', 'not', 'fit']
{'t', 's'}
```

So contractions are marked as mistakes again. (I read barrust's comment on Oct 22, 2019.) Please assist.
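A small sketch contrasting the pattern being replaced with the one introduced by the patch above; the new pattern keeps apostrophes inside words, so contractions survive as single tokens while free-standing letters are still produced:

```python
import re

text = "That's how t and s don't fit.".lower()

old = re.findall(r"\w+", text)              # pattern being replaced
new = re.findall(r"(\w[\w']*\w|\w)", text)  # pattern from this patch

print(old)  # ['that', 's', 'how', 't', 'and', 's', 'don', 't', 'fit']
print(new)  # ["that's", 'how', 't', 'and', 's', "don't", 'fit']
```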
0.0
aa9668243fef58ff62c505a727b4a7284b81f42a
[ "tests/spellchecker_test.py::TestSpellChecker::test_split_words" ]
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word", "tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode", "tests/spellchecker_test.py::TestSpellChecker::test_bytes_input", "tests/spellchecker_test.py::TestSpellChecker::test_candidates", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set", "tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word", "tests/spellchecker_test.py::TestSpellChecker::test_correction", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two", "tests/spellchecker_test.py::TestSpellChecker::test_extremely_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_json", "tests/spellchecker_test.py::TestSpellChecker::test_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file", "tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_pop", "tests/spellchecker_test.py::TestSpellChecker::test_pop_default", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items", "tests/spellchecker_test.py::TestSpellChecker::test_remove_word", "tests/spellchecker_test.py::TestSpellChecker::test_remove_words", "tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided", "tests/spellchecker_test.py::TestSpellChecker::test_unique_words", "tests/spellchecker_test.py::TestSpellChecker::test_unknown_words", "tests/spellchecker_test.py::TestSpellChecker::test_word_contains", "tests/spellchecker_test.py::TestSpellChecker::test_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_word_in", "tests/spellchecker_test.py::TestSpellChecker::test_word_known", "tests/spellchecker_test.py::TestSpellChecker::test_word_probability", "tests/spellchecker_test.py::TestSpellChecker::test_words", "tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2021-02-22 20:09:34+00:00
mit
1,299
barrust__pyspellchecker-92
diff --git a/CHANGELOG.md b/CHANGELOG.md index eefac8f..8208ff8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,8 @@ # pyspellchecker +## Version 0.6.1 (Future) +* Deprecated `spell.word_probability` since the name makes it seem that it is building a true probability; use `spell.word_usage_frequency` instead + ## Version 0.6.0 * Remove **python 2.7** support * Updated automated `scripts/build_dictionary.py` script to support adding missing words diff --git a/spellchecker/spellchecker.py b/spellchecker/spellchecker.py index 336e405..85df817 100644 --- a/spellchecker/spellchecker.py +++ b/spellchecker/spellchecker.py @@ -2,12 +2,11 @@ Peter Norvig. See: https://norvig.com/spell-correct.html """ import gzip import json -import os import pkgutil import string from collections import Counter -from .utils import _parse_into_words, ensure_unicode, load_file, write_file +from .utils import _parse_into_words, ensure_unicode, load_file, write_file, deprecated class SpellChecker(object): @@ -52,14 +51,14 @@ class SpellChecker(object): elif language: filename = "resources/{}.json.gz".format(language.lower()) try: - json_open = pkgutil.get_data('spellchecker', filename) + json_open = pkgutil.get_data("spellchecker", filename) except FileNotFoundError: msg = ( "The provided dictionary language ({}) does not " "exist!" ).format(language.lower()) raise ValueError(msg) - lang_dict = json.loads(gzip.decompress(json_open).decode('utf-8')) + lang_dict = json.loads(gzip.decompress(json_open).decode("utf-8")) self._word_frequency.load_json(lang_dict) def __contains__(self, key): @@ -122,9 +121,9 @@ class SpellChecker(object): data = json.dumps(self.word_frequency.dictionary, sort_keys=True) write_file(filepath, encoding, gzipped, data) - def word_probability(self, word, total_words=None): - """ Calculate the probability of the `word` being the desired, correct - word + def word_usage_frequency(self, word, total_words=None): + """ Calculate the frequency to the `word` provided as seen across the + entire dictionary Args: word (str): The word for which the word probability is \ @@ -134,11 +133,32 @@ class SpellChecker(object): frequency Returns: float: The probability that the word is the correct word """ - if total_words is None: + if not total_words: total_words = self._word_frequency.total_words word = ensure_unicode(word) return self._word_frequency.dictionary[word] / total_words + @deprecated("Deprecated as of version 0.6.1; use word_usage_frequency instead") + def word_probability(self, word, total_words=None): + """ Calculate the frequency to the `word` provided as seen across the + entire dictionary; function was a misnomar and is therefore + deprecated! 
+ + Args: + word (str): The word for which the word probability is \ + calculated + total_words (int): The total number of words to use in the \ + calculation; use the default for using the whole word \ + frequency + Returns: + float: The probability that the word is the correct word + Note: + Deprecated as of version 0.6.1; use `word_usage_frequency` \ + instead + Note: + Will be removed in version 0.6.3 """ + return self.word_usage_frequency(word, total_words) + def correction(self, word): """ The most probable correct spelling for the word @@ -148,7 +168,7 @@ class SpellChecker(object): str: The most likely candidate """ word = ensure_unicode(word) candidates = list(self.candidates(word)) - return max(sorted(candidates), key=self.word_probability) + return max(sorted(candidates), key=self.__getitem__) def candidates(self, word): """ Generate possible spelling corrections for the provided word up to @@ -191,8 +211,7 @@ class SpellChecker(object): return set( w for w in tmp - if w in self._word_frequency.dictionary - and self._check_if_should_check(w) + if w in self._word_frequency.dictionary and self._check_if_should_check(w) ) def unknown(self, words): @@ -221,7 +240,11 @@ class SpellChecker(object): Returns: set: The set of strings that are edit distance one from the \ provided word """ - word = ensure_unicode(word).lower() if not self._case_sensitive else ensure_unicode(word) + word = ( + ensure_unicode(word).lower() + if not self._case_sensitive + else ensure_unicode(word) + ) if self._check_if_should_check(word) is False: return {word} letters = self._word_frequency.letters @@ -241,7 +264,11 @@ class SpellChecker(object): Returns: set: The set of strings that are edit distance two from the \ provided word """ - word = ensure_unicode(word).lower() if not self._case_sensitive else ensure_unicode(word) + word = ( + ensure_unicode(word).lower() + if not self._case_sensitive + else ensure_unicode(word) + ) return [ e2 for e1 in self.edit_distance_1(word) for e2 in self.edit_distance_1(e1) ] @@ -266,7 +293,9 @@ class SpellChecker(object): def _check_if_should_check(self, word): if len(word) == 1 and word in string.punctuation: return False - if len(word) > self._word_frequency.longest_word_length + 3: # magic number to allow removal of up to 2 letters. + if ( + len(word) > self._word_frequency.longest_word_length + 3 + ): # magic number to allow removal of up to 2 letters. 
return False try: # check if it is a number (int, float, etc) float(word) @@ -288,7 +317,7 @@ class WordFrequency(object): "_letters", "_tokenizer", "_case_sensitive", - "_longest_word_length" + "_longest_word_length", ] def __init__(self, tokenizer=None, case_sensitive=False): diff --git a/spellchecker/utils.py b/spellchecker/utils.py index 93a3560..437a7aa 100644 --- a/spellchecker/utils.py +++ b/spellchecker/utils.py @@ -2,25 +2,80 @@ import contextlib import gzip import re +import functools +import warnings +from .info import __version__ -def ensure_unicode(s, encoding='utf-8'): + +def fail_after(version): + """ Decorator to add to tests to ensure that they fail if a deprecated + feature is not removed before the specified version + + Args: + version (str): The version to check against """ + + def decorator_wrapper(func): + @functools.wraps(func) + def test_inner(*args, **kwargs): + if [int(x) for x in version.split(".")] <= [ + int(x) for x in __version__.split(".") + ]: + msg = "The function {} must be fully removed as it is depricated and must be removed by version {}".format( + func.__name__, version + ) + raise AssertionError(msg) + return func(*args, **kwargs) + + return test_inner + + return decorator_wrapper + + +def deprecated(message=""): + """ A simplistic decorator to mark functions as deprecated. The function + will pass a message to the user on the first use of the function + + Args: + message (str): The message to display if the function is deprecated + """ + + def decorator_wrapper(func): + @functools.wraps(func) + def function_wrapper(*args, **kwargs): + func_name = func.__name__ + if func_name not in function_wrapper.deprecated_items: + msg = "Function {} is now deprecated! {}".format(func.__name__, message) + warnings.warn(msg, category=DeprecationWarning, stacklevel=2) + function_wrapper.deprecated_items.add(func_name) + + return func(*args, **kwargs) + + # set this up the first time the decorator is called + function_wrapper.deprecated_items = set() + + return function_wrapper + + return decorator_wrapper + + +def ensure_unicode(_str, encoding="utf-8"): """ Simplify checking if passed in data are bytes or a string and decode bytes into unicode. Args: - s (str): The input string (possibly bytes) + _str (str): The input string (possibly bytes) encoding (str): The encoding to use if input is bytes Returns: str: The encoded string """ - if isinstance(s, bytes): - return s.decode(encoding) - return s + if isinstance(_str, bytes): + return _str.decode(encoding) + return _str @contextlib.contextmanager -def __gzip_read(filename, mode='rb', encoding='UTF-8'): +def __gzip_read(filename, mode="rb", encoding="UTF-8"): """ Context manager to correctly handle the decoding of the output of \ the gzip file @@ -47,7 +102,7 @@ def load_file(filename, encoding): str: The string data from the file read """ if filename[-3:].lower() == ".gz": - with __gzip_read(filename, mode='rt', encoding=encoding) as data: + with __gzip_read(filename, mode="rt", encoding=encoding) as data: yield data else: with open(filename, mode="r", encoding=encoding) as fobj: @@ -65,7 +120,7 @@ def write_file(filepath, encoding, gzipped, data): data (str): The data to be written out """ if gzipped: - with gzip.open(filepath, 'wt') as fobj: + with gzip.open(filepath, "wt") as fobj: fobj.write(data) else: with open(filepath, "w", encoding=encoding) as fobj:
barrust/pyspellchecker
304c2662cedd6ee9cec6a9d9009e3941911ab00b
diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py index b403117..ba0254a 100644 --- a/tests/spellchecker_test.py +++ b/tests/spellchecker_test.py @@ -5,6 +5,7 @@ import unittest import os from spellchecker import SpellChecker +from spellchecker.utils import fail_after class TestSpellChecker(unittest.TestCase): ''' test the spell checker class ''' @@ -50,7 +51,17 @@ class TestSpellChecker(unittest.TestCase): # if the default load changes so will this... self.assertEqual(spell.word_frequency['the'], 76138318) - def test_word_probability(self): + def test_word_usage_frequency(self): + ''' test the word usage frequency calculation ''' + spell = SpellChecker() + # if the default load changes so will this... + num = spell.word_frequency['the'] + denom = spell.word_frequency.total_words + self.assertEqual(spell.word_usage_frequency('the'), num / denom) + + # deprecated! + @fail_after("0.6.3") + def test_word_probability_calc(self): ''' test the word probability calculation ''' spell = SpellChecker() # if the default load changes so will this...
Deprecate `word_probability` `word_probability` is really a misnomer and should be deprecated; something like `word_usage_frequency` should be used instead. The value is actually the ratio of the given word's count to the total count of all words in the dictionary, not a true probability. I believe issue #68 is partly due to the misleading name.
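A sketch of the resulting migration path, per the patch above: the old name delegates to `word_usage_frequency` and emits a one-time `DeprecationWarning` (assumes pyspellchecker 0.6.1+ with the default English dictionary):

```python
import warnings

from spellchecker import SpellChecker

spell = SpellChecker()
freq = spell.word_usage_frequency('the')  # new, accurately named API

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert spell.word_probability('the') == freq  # deprecated alias
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```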
0.0
304c2662cedd6ee9cec6a9d9009e3941911ab00b
[ "tests/spellchecker_test.py::TestSpellChecker::test_add_word", "tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode", "tests/spellchecker_test.py::TestSpellChecker::test_bytes_input", "tests/spellchecker_test.py::TestSpellChecker::test_candidates", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true", "tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set", "tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word", "tests/spellchecker_test.py::TestSpellChecker::test_correction", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property", "tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two", "tests/spellchecker_test.py::TestSpellChecker::test_extremely_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip", "tests/spellchecker_test.py::TestSpellChecker::test_import_export_json", "tests/spellchecker_test.py::TestSpellChecker::test_large_words", "tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_load_text_file", "tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary", "tests/spellchecker_test.py::TestSpellChecker::test_pop", "tests/spellchecker_test.py::TestSpellChecker::test_pop_default", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold", "tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items", "tests/spellchecker_test.py::TestSpellChecker::test_remove_word", "tests/spellchecker_test.py::TestSpellChecker::test_remove_words", "tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict", "tests/spellchecker_test.py::TestSpellChecker::test_split_words", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file", "tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided", "tests/spellchecker_test.py::TestSpellChecker::test_unique_words", "tests/spellchecker_test.py::TestSpellChecker::test_unknown_words", "tests/spellchecker_test.py::TestSpellChecker::test_word_contains", "tests/spellchecker_test.py::TestSpellChecker::test_word_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_word_in", "tests/spellchecker_test.py::TestSpellChecker::test_word_known", "tests/spellchecker_test.py::TestSpellChecker::test_word_probability_calc", "tests/spellchecker_test.py::TestSpellChecker::test_word_usage_frequency", "tests/spellchecker_test.py::TestSpellChecker::test_words", "tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete" ]
[]
{ "failed_lite_validators": [ "has_issue_reference", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2021-03-07 23:54:17+00:00
mit
1,300
batfish__pybatfish-232
diff --git a/jupyter_notebooks/Introduction to Forwarding Analysis.ipynb b/jupyter_notebooks/Introduction to Forwarding Analysis.ipynb index 968bcf5..cfc6dce 100644 --- a/jupyter_notebooks/Introduction to Forwarding Analysis.ipynb +++ b/jupyter_notebooks/Introduction to Forwarding Analysis.ipynb @@ -309,7 +309,7 @@ "<span style=\"color:#7c020e; text-weight:bold;\">DENIED_IN</span><br><strong>1</strong>. node: as3core1<br>&nbsp;&nbsp;ORIGINATED(default)<br>&nbsp;&nbsp;FORWARDED(Routes: ibgp [Network: 2.128.0.0/16, Next Hop IP:10.23.21.2])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet1/0)<br><strong>2</strong>. node: as3border1<br>&nbsp;&nbsp;RECEIVED(GigabitEthernet0/0)<br>&nbsp;&nbsp;FORWARDED(Routes: bgp [Network: 2.128.0.0/16, Next Hop IP:10.23.21.2])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet1/0)<br><strong>3</strong>. node: as2border2<br>&nbsp;&nbsp;RECEIVED(GigabitEthernet0/0: OUTSIDE_TO_INSIDE)<br>&nbsp;&nbsp;FORWARDED(Routes: ibgp [Network: 2.128.0.0/24, Next Hop IP:2.34.101.4],ibgp [Network: 2.128.0.0/24, Next Hop IP:2.34.201.4])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet1/0)<br><strong>4</strong>. node: as2core2<br>&nbsp;&nbsp;RECEIVED(GigabitEthernet0/0)<br>&nbsp;&nbsp;FORWARDED(Routes: ibgp [Network: 2.128.0.0/24, Next Hop IP:2.34.101.4],ibgp [Network: 2.128.0.0/24, Next Hop IP:2.34.201.4])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet2/0)<br><strong>5</strong>. node: as2dist2<br>&nbsp;&nbsp;RECEIVED(GigabitEthernet0/0)<br>&nbsp;&nbsp;FORWARDED(Routes: bgp [Network: 2.128.0.0/24, Next Hop IP:2.34.201.4])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet2/0)<br><strong>6</strong>. node: as2dept1<br>&nbsp;&nbsp;RECEIVED(GigabitEthernet1/0)<br>&nbsp;&nbsp;FORWARDED(Routes: connected [Network: 2.128.0.0/24, Next Hop IP:AUTO/NONE(-1l)])<br>&nbsp;&nbsp;TRANSMITTED(GigabitEthernet2/0)<br><strong>7</strong>. 
node: host1<br>&nbsp;&nbsp;DENIED(eth0: filter::INPUT)" ], "text/plain": [ - "Trace(disposition='DENIED_IN', hops=[Hop(node='as3core1', steps=[Step(detail=OriginateStepDetail(originatingVrf='default'), action='ORIGINATED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/16', 'nextHopIp': '10.23.21.2', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='as3border1', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/16', 'nextHopIp': '10.23.21.2', 'protocol': 'bgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2border2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter='OUTSIDE_TO_INSIDE'), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.101.4', 'protocol': 'ibgp'}, {'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2core2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.101.4', 'protocol': 'ibgp'}, {'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2dist2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'bgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2dept1', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet1/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': 'AUTO/NONE(-1l)', 'protocol': 'connected'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, transformedFlow=None), action='TRANSMITTED')]), Hop(node='host1', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='eth0', inputVrf='default', inputFilter='filter::INPUT'), action='DENIED')])])" + "Trace(disposition='DENIED_IN', hops=[Hop(node='as3core1', steps=[Step(detail=OriginateStepDetail(originatingVrf='default'), action='ORIGINATED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/16', 'nextHopIp': '10.23.21.2', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='as3border1', 
steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/16', 'nextHopIp': '10.23.21.2', 'protocol': 'bgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2border2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter='OUTSIDE_TO_INSIDE'), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.101.4', 'protocol': 'ibgp'}, {'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet1/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2core2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.101.4', 'protocol': 'ibgp'}, {'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'ibgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2dist2', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet0/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': '2.34.201.4', 'protocol': 'bgp'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='as2dept1', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='GigabitEthernet1/0', inputVrf='default', inputFilter=None), action='RECEIVED'), Step(detail=RoutingStepDetail(routes=[{'network': '2.128.0.0/24', 'nextHopIp': 'AUTO/NONE(-1l)', 'protocol': 'connected'}]), action='FORWARDED'), Step(detail=ExitOutputIfaceStepDetail(outputInterface='GigabitEthernet2/0', outputFilter=None, flowDiffs=[], transformedFlow=None), action='TRANSMITTED')]), Hop(node='host1', steps=[Step(detail=EnterInputIfaceStepDetail(inputInterface='eth0', inputVrf='default', inputFilter='filter::INPUT'), action='DENIED')])])" ] }, "execution_count": 6, diff --git a/pybatfish/datamodel/flow.py b/pybatfish/datamodel/flow.py index 2ae6fbb..813eaf8 100644 --- a/pybatfish/datamodel/flow.py +++ b/pybatfish/datamodel/flow.py @@ -194,6 +194,34 @@ class Flow(DataModelElement): return ip [email protected](frozen=True) +class FlowDiff(DataModelElement): + """A difference between two Flows. + + :ivar fieldName: A Flow field name that has changed. + :ivar oldValue: The old value of the field. + :ivar newValue: The new value of the field. 
+ """ + + fieldName = attr.ib(type=str) + oldValue = attr.ib(type=str) + newValue = attr.ib(type=str) + + @classmethod + def from_dict(cls, json_dict): + # type: (Dict) -> FlowDiff + return FlowDiff(json_dict["fieldName"], + json_dict["oldValue"], + json_dict["newValue"]) + + def __str__(self): + # type: () -> str + return "{fieldName}: {oldValue} -> {newValue}".format( + fieldName=self.fieldName, + oldValue=self.oldValue, + newValue=self.newValue) + + @attr.s(frozen=True) class FlowTrace(DataModelElement): """A trace of a flow through the network. @@ -325,11 +353,13 @@ class ExitOutputIfaceStepDetail(DataModelElement): :ivar outputInterface: Interface of the Hop from which the flow exits :ivar outputFilter: Filter associated with the output interface + :ivar flowDiff: Set of changed flow fields :ivar transformedFlow: Transformed Flow if a source NAT was applied on the Flow """ outputInterface = attr.ib(type=str) outputFilter = attr.ib(type=Optional[str]) + flowDiffs = attr.ib(type=List[FlowDiff]) transformedFlow = attr.ib(type=Optional[str]) @classmethod @@ -338,6 +368,7 @@ class ExitOutputIfaceStepDetail(DataModelElement): return ExitOutputIfaceStepDetail( json_dict.get("outputInterface", {}).get("interface"), json_dict.get("outputFilter"), + [FlowDiff.from_dict(fd) for fd in json_dict.get("flowDiffs", [])], json_dict.get("transformedFlow")) def __str__(self): @@ -345,6 +376,9 @@ class ExitOutputIfaceStepDetail(DataModelElement): str_output = str(self.outputInterface) if self.outputFilter: str_output += ": {}".format(self.outputFilter) + if self.flowDiffs: + str_output += " " + ", ".join( + [str(flowDiff) for flowDiff in self.flowDiffs]) return str_output
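A minimal standalone sketch of how the new FlowDiff.__str__ from the patch above composes into a step string. The field names and values below are invented for illustration; the real class lives in pybatfish.datamodel.flow:

```python
# Reproduces the FlowDiff rendering added by the patch, without pybatfish.
class FlowDiff:
    def __init__(self, fieldName, oldValue, newValue):
        self.fieldName = fieldName
        self.oldValue = oldValue
        self.newValue = newValue

    def __str__(self):
        # Same format string as the patched pybatfish FlowDiff.__str__
        return "{fieldName}: {oldValue} -> {newValue}".format(
            fieldName=self.fieldName,
            oldValue=self.oldValue,
            newValue=self.newValue)

diffs = [FlowDiff("srcIp", "10.0.0.1", "192.0.2.1"),
         FlowDiff("srcPort", "49152", "1024")]

# ExitOutputIfaceStepDetail.__str__ appends the diffs joined by ", ":
print("out_iface1: out_filter1 " + ", ".join(str(d) for d in diffs))
# -> out_iface1: out_filter1 srcIp: 10.0.0.1 -> 192.0.2.1, srcPort: 49152 -> 1024
```

This is the same rendering asserted in testExitOutputIfaceStepDetail_str in the test patch below.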
batfish/pybatfish
9c84fb0aa8f77ca8f097ba45d7c641d7cb429cf8
diff --git a/tests/datamodel/test_flow.py b/tests/datamodel/test_flow.py index f312aa8..11b4a06 100644 --- a/tests/datamodel/test_flow.py +++ b/tests/datamodel/test_flow.py @@ -20,12 +20,43 @@ import attr import pytest from pybatfish.datamodel.flow import (EnterInputIfaceStepDetail, - ExitOutputIfaceStepDetail, Flow, + ExitOutputIfaceStepDetail, Flow, FlowDiff, FlowTraceHop, HeaderConstraints, Hop, MatchTcpFlags, PreSourceNatOutgoingFilterStepDetail, RoutingStepDetail, Step, TcpFlags) +def testExitOutputIfaceStepDetail_str(): + noDiffDetail = ExitOutputIfaceStepDetail( + "iface", + "filter", + None, + None) + oneDiffDetail = ExitOutputIfaceStepDetail( + "iface", + "filter", + [FlowDiff("field", "old", "new")], + None) + twoDiffDetail = ExitOutputIfaceStepDetail( + "iface", + "filter", + [FlowDiff("field1", "old1", "new1"), + FlowDiff("field2", "old2", "new2")], + None) + + step = Step(noDiffDetail, "ACTION") + assert str(step) == "ACTION(iface: filter)" + + step = Step(oneDiffDetail, "ACTION") + assert str(step) == "ACTION(iface: filter field: old -> new)" + + step = Step(twoDiffDetail, "ACTION") + assert str(step) == ''.join([ + "ACTION(iface: filter ", + "field1: old1 -> new1, ", + "field2: old2 -> new2)"]) + + def testFlowDeserialization(): hopDict = { "dscp": 0, @@ -191,9 +222,10 @@ def test_hop_repr_str(): "nextHopIp": "1.2.3.4"}, {"network": "1.1.1.2/24", "protocol": "static", "nextHopIp": "1.2.3.5"}]), "FORWARDED"), - Step(PreSourceNatOutgoingFilterStepDetail("out_iface1", "preSourceNat_filter"), + Step(PreSourceNatOutgoingFilterStepDetail("out_iface1", + "preSourceNat_filter"), "PERMITTED"), - Step(ExitOutputIfaceStepDetail("out_iface1", "out_filter1", None), + Step(ExitOutputIfaceStepDetail("out_iface1", "out_filter1", None, None), "SENT_OUT") ])
Pretty-printed traces should include NAT steps Currently there's no indication when a flow is NATted in pretty-printed traces. The traces themselves include the NAT steps and make clear what changed. It would be helpful if pretty-printed flows included those steps.
0.0
9c84fb0aa8f77ca8f097ba45d7c641d7cb429cf8
[ "tests/datamodel/test_flow.py::testExitOutputIfaceStepDetail_str", "tests/datamodel/test_flow.py::testFlowDeserialization", "tests/datamodel/test_flow.py::testFlowDeserializationOptionalMissing", "tests/datamodel/test_flow.py::test_flow_trace_hop_no_transformed_flow", "tests/datamodel/test_flow.py::test_get_ip_protocol_str", "tests/datamodel/test_flow.py::test_header_constraints_serialization", "tests/datamodel/test_flow.py::test_hop_repr_str", "tests/datamodel/test_flow.py::test_match_tcp_generators", "tests/datamodel/test_flow.py::test_flow_repr_html_ports", "tests/datamodel/test_flow.py::test_flow_repr_html_start_location", "tests/datamodel/test_flow.py::test_flow_str_ports" ]
[]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2018-12-12 20:49:59+00:00
apache-2.0
1,301
batfish__pybatfish-244
diff --git a/pybatfish/question/question.py b/pybatfish/question/question.py index 12085aa..4c88c01 100644 --- a/pybatfish/question/question.py +++ b/pybatfish/question/question.py @@ -76,17 +76,21 @@ class QuestionMeta(type): """Creates a new class for a specific question.""" new_cls = super(QuestionMeta, cls).__new__(cls, name, base, dct) - def constructor(self, question_name=None, - exclusions=None, **kwargs): + def constructor(self, *args, **kwargs): """Create a new question.""" + # Reject positional args; this way is PY2-compliant + if args: + raise TypeError("Please use keyword arguments") + # Call super (i.e., QuestionBase) super(new_cls, self).__init__(new_cls.template) # Update well-known params, if passed in - if exclusions is not None: - self._dict['exclusions'] = exclusions - if question_name: - self._dict['instance']['instanceName'] = question_name + if "exclusions" in kwargs: + self._dict['exclusions'] = kwargs.get("exclusions") + if "question_name" in kwargs: + self._dict['instance']['instanceName'] = kwargs.get( + "question_name") else: self._dict['instance']['instanceName'] = ( "__{}_{}".format( @@ -94,14 +98,18 @@ class QuestionMeta(type): # Validate that we are not accepting invalid kwargs/variables instance_vars = self._dict['instance'].get('variables', {}) - var_difference = set(kwargs.keys()).difference(instance_vars) + additional_kwargs = {'exclusions', 'question_name'} + allowed_kwargs = set(instance_vars) + allowed_kwargs.update(additional_kwargs) + var_difference = set(kwargs.keys()).difference(allowed_kwargs) if var_difference: raise QuestionValidationException( "Received unsupported parameters/variables: {}".format( var_difference)) # Set question-specific parameters for var_name, var_value in kwargs.items(): - instance_vars[var_name]['value'] = var_value + if var_name not in additional_kwargs: + instance_vars[var_name]['value'] = var_value # Define signature. Helps with tab completion. Python3 centric if PY3:
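A self-contained sketch of the validation contract this patch establishes: positional arguments are rejected outright, and only declared question variables plus the two well-known kwargs are accepted. The question variables below are illustrative, not a real pybatfish template:

```python
class QuestionValidationException(Exception):
    pass

def make_constructor(instance_vars):
    def constructor(*args, **kwargs):
        if args:  # reject positional args; works on both PY2 and PY3
            raise TypeError("Please use keyword arguments")
        allowed = set(instance_vars) | {"exclusions", "question_name"}
        unknown = set(kwargs) - allowed
        if unknown:
            raise QuestionValidationException(
                "Received unsupported parameters/variables: {}".format(unknown))
        # only question-specific variables become parameter values
        return {k: v for k, v in kwargs.items() if k in instance_vars}
    return constructor

node_properties = make_constructor({"nodes", "properties"})
node_properties(nodes="as1border1")           # OK
node_properties(question_name="q1")           # OK: well-known kwarg
# node_properties("garbage")                  # -> TypeError
# node_properties(garbage=1)                  # -> QuestionValidationException
```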
batfish/pybatfish
570de5864e949a647337100f5f4d14b2961cd71a
diff --git a/tests/question/test_question.py b/tests/question/test_question.py index 55ed682..0246b08 100644 --- a/tests/question/test_question.py +++ b/tests/question/test_question.py @@ -273,3 +273,10 @@ def test_question_name(): inferred_name = qclass() assert inferred_name.get_name().startswith( '__{}_'.format(TEST_QUESTION_NAME)) + + +def test_question_positional_args(): + """Test that a question constructor rejects positional arguments.""" + qname, qclass = _load_question_dict(TEST_QUESTION_DICT) + with pytest.raises(TypeError): + qclass("positional")
Error out on default parameters pybatfish lets me ask `bfq.nodeProperties("garbage")` and returns an answer happily. It should instead produce an error.
0.0
570de5864e949a647337100f5f4d14b2961cd71a
[ "tests/question/test_question.py::test_question_positional_args" ]
[ "tests/question/test_question.py::test_min_length", "tests/question/test_question.py::test_valid_comparator", "tests/question/test_question.py::test_validate_allowed_values", "tests/question/test_question.py::test_validate_old_allowed_values", "tests/question/test_question.py::test_validate_allowed_values_list", "tests/question/test_question.py::test_validate_old_allowed_values_list", "tests/question/test_question.py::test_compute_docstring", "tests/question/test_question.py::test_compute_var_help_with_no_allowed_values", "tests/question/test_question.py::test_compute_var_help_with_allowed_values", "tests/question/test_question.py::test_compute_var_help_with_new_and_old_allowed_values", "tests/question/test_question.py::test_compute_var_help_with_old_allowed_values", "tests/question/test_question.py::test_process_variables", "tests/question/test_question.py::test_load_dir_questions", "tests/question/test_question.py::test_list_questions", "tests/question/test_question.py::test_make_check", "tests/question/test_question.py::test_question_name" ]
{ "failed_lite_validators": [ "has_short_problem_statement" ], "has_test_patch": true, "is_lite": false }
2018-12-26 21:31:32+00:00
apache-2.0
1,302
bbc__nmos-common-55
diff --git a/CHANGELOG.md b/CHANGELOG.md index a7aaaaf..e7a86bc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,9 @@ # NMOS Common Library Changelog +## 0.6.8 +- Resolve issue where interactions of aggregator.py with Registration API + failed to set Content-Type + ## 0.6.7 - Updated stdeb.cfg to include dependencies on mediajson and mediatimestamp diff --git a/nmoscommon/aggregator.py b/nmoscommon/aggregator.py index 249666e..359e24d 100644 --- a/nmoscommon/aggregator.py +++ b/nmoscommon/aggregator.py @@ -290,8 +290,10 @@ class Aggregator(object): if self.aggregator == "": self.aggregator = self.mdnsbridge.getHref(REGISTRATION_MDNSTYPE) + headers = None if data is not None: data = json.dumps(data) + headers = {"Content-Type": "application/json"} url = AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url for i in range(0, 3): @@ -308,9 +310,9 @@ class Aggregator(object): # majority of the time... try: if nmoscommonconfig.config.get('prefer_ipv6',False) == False: - R = requests.request(method, urljoin(self.aggregator, url), data=data, timeout=1.0) + R = requests.request(method, urljoin(self.aggregator, url), data=data, timeout=1.0, headers=headers) else: - R = requests.request(method, urljoin(self.aggregator, url), data=data, timeout=1.0, proxies={'http':''}) + R = requests.request(method, urljoin(self.aggregator, url), data=data, timeout=1.0, headers=headers, proxies={'http':''}) if R is None: # Try another aggregator self.logger.writeWarning("No response from aggregator {}".format(self.aggregator)) diff --git a/setup.py b/setup.py index da2426d..58cb34a 100644 --- a/setup.py +++ b/setup.py @@ -146,7 +146,7 @@ deps_required = [ setup(name="nmoscommon", - version="0.6.7", + version="0.6.8", description="Common components for the BBC's NMOS implementations", url='https://github.com/bbc/nmos-common', author='Peter Brightwell',
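A minimal sketch of the header handling the patch adds: Content-Type is set only when a JSON body is actually sent, and headers stays None for body-less calls. The URL and payload here are illustrative, not a real Registration API endpoint:

```python
import json
import requests

def send(method, url, data=None):
    headers = None
    if data is not None:
        data = json.dumps(data)
        headers = {"Content-Type": "application/json"}
    # headers=None leaves requests' defaults untouched for body-less calls
    return requests.request(method, url, data=data, headers=headers,
                            timeout=1.0)

# send("POST", "http://registry.example/x-nmos/registration/v1.2/resource",
#      data={"type": "node", "data": {"id": "..."}})
```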
bbc/nmos-common
b35788d29bdcfb4b3a9cfbb3f34360641b3547b2
diff --git a/tests/test_aggregator.py b/tests/test_aggregator.py index c0e4188..035b138 100644 --- a/tests/test_aggregator.py +++ b/tests/test_aggregator.py @@ -159,7 +159,7 @@ class TestMDNSUpdater(unittest.TestCase): UUT.mdns.update.assert_not_called() def test_update_mdns(self): - """A call to MDNSUpdater.update_mdns when P2P is enabled ought to call mdns.update to increment version numbers for devices. Device + """A call to MDNSUpdater.update_mdns when P2P is enabled ought to call mdns.update to increment version numbers for devices. Device version numbers should be 8-bit integers which roll over to 0 when incremented beyond the limits of 1 byte.""" mappings = {"device": "ver_dvc", "flow": "ver_flw", "source": "ver_src", "sender":"ver_snd", "receiver":"ver_rcv", "self":"ver_slf"} mdnstype = "_nmos-node._tcp" @@ -211,7 +211,7 @@ class TestAggregator(unittest.TestCase): # self.mocks['nmoscommon.aggregator.Logger'].return_value.writeDebug.side_effect = printmsg("DEBUG") # self.mocks['nmoscommon.aggregator.Logger'].return_value.writeError.side_effect = printmsg("ERROR") # self.mocks['nmoscommon.aggregator.Logger'].return_value.writeFatal.side_effect = printmsg("FATAL") - + def test_init(self): """Test a call to Aggregator()""" self.mocks['gevent.spawn'].side_effect = lambda f : mock.MagicMock(thread_function=f) @@ -299,7 +299,7 @@ class TestAggregator(unittest.TestCase): def test_heartbeat_registers(self): """The heartbeat thread should trigger a registration of the node if the node is not yet registered when it is run.""" a = Aggregator(mdns_updater=mock.MagicMock()) - a._registered["registered"] = False + a._registered["registered"] = False def killloop(*args, **kwargs): a._running = False @@ -879,7 +879,7 @@ class TestAggregator(unittest.TestCase): SEND_ITERATION_2 = 6 SEND_TOO_MANY_RETRIES = 7 - def assert_send_runs_correctly(self, method, url, data=None, to_point=SEND_ITERATION_0, initial_aggregator="", aggregator_urls=["http://example0.com/aggregator/", "http://example1.com/aggregator/", "http://example2.com/aggregator/"], request=None, expected_return=None, expected_exception=None, prefer_ipv6=False): + def assert_send_runs_correctly(self, method, url, data=None, headers=None, to_point=SEND_ITERATION_0, initial_aggregator="", aggregator_urls=["http://example0.com/aggregator/", "http://example1.com/aggregator/", "http://example2.com/aggregator/"], request=None, expected_return=None, expected_exception=None, prefer_ipv6=False): """This method checks that the SEND routine runs through its state machine as expected: The states are: @@ -921,23 +921,23 @@ class TestAggregator(unittest.TestCase): expected_request_calls = [] if to_point >= self.SEND_ITERATION_0: if not prefer_ipv6: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[0], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0)) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[0], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, headers=headers, timeout=1.0)) else: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[0], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0, proxies={'http':''})) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[0], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), 
data=expected_data, headers=headers, timeout=1.0, proxies={'http':''})) if to_point > self.SEND_ITERATION_0: expected_gethref_calls.append(mock.call(REGISTRATION_MDNSTYPE)) if to_point >= self.SEND_ITERATION_1: if not prefer_ipv6: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[1], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0)) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[1], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, headers=headers, timeout=1.0)) else: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[1], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0, proxies={'http':''})) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[1], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, headers=headers, timeout=1.0, proxies={'http':''})) if to_point > self.SEND_ITERATION_1: expected_gethref_calls.append(mock.call(REGISTRATION_MDNSTYPE)) if to_point >= self.SEND_ITERATION_2: if not prefer_ipv6: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[2], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0)) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[2], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, headers=headers, timeout=1.0)) else: - expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[2], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, timeout=1.0, proxies={'http':''})) + expected_request_calls.append(mock.call(method, urljoin(aggregator_urls[2], AGGREGATOR_APINAMESPACE + "/" + AGGREGATOR_APINAME + "/" + AGGREGATOR_APIVERSION + url), data=expected_data, headers=headers, timeout=1.0, proxies={'http':''})) if to_point > self.SEND_ITERATION_2: expected_gethref_calls.append(mock.call(REGISTRATION_MDNSTYPE)) @@ -984,7 +984,7 @@ class TestAggregator(unittest.TestCase): "dummy2" : [ "dummy3", "dummy4" ] } def request(*args, **kwargs): return mock.MagicMock(status_code = 204) - self.assert_send_runs_correctly("PUT", "/dummy/url", data=data, to_point=self.SEND_ITERATION_0, request=request, expected_return=None) + self.assert_send_runs_correctly("PUT", "/dummy/url", data=data, headers={"Content-Type": "application/json"}, to_point=self.SEND_ITERATION_0, request=request, expected_return=None) def test_send_get_which_returns_200_returns_content(self): """If the first attempt at sending gives a 200 success then the SEND method will return normally with a body."""
Aggregator.py fails to set Content-Type header in interactions with Registration API Noted by Tektronix. This is a breach of the spec, but our registry isn't strictly checking this as it doesn't expect to receive anything other than JSON.
0.0
b35788d29bdcfb4b3a9cfbb3f34360641b3547b2
[ "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_then_returns_201_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_put_which_returns_204_returns_nothing", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_then_returns_204_returns_nothing", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_then_returns_400_raises_exception", "tests/test_aggregator.py::TestAggregator::test_send_get_which_raises_with_only_one_aggregator_fails_at_second_checkpoint", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_twice_then_returns_201_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_on_three_aggregators_raises", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_400_raises_exception", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_204_returns_nothing", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_twice_then_returns_400_raises_exception", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_201_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_twice_then_returns_200_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_500_with_only_one_aggregator_fails_at_second_checkpoint", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_with_only_two_aggregators_fails_at_third_checkpoint", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_twice_then_returns_204_returns_nothing", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_200_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_returns_200_and_json_returns_json", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_twice_then_returns_200_and_json_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_then_returns_200_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_over_ipv6_get_which_returns_200_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_then_returns_200_and_json_returns_content", "tests/test_aggregator.py::TestAggregator::test_send_get_which_fails_with_only_one_aggregator_fails_at_second_checkpoint" ]
[ "tests/test_aggregator.py::TestMDNSUpdater::test_update_mdns_does_nothing_when_not_enabled", "tests/test_aggregator.py::TestMDNSUpdater::test_inc_P2P_enable_count", "tests/test_aggregator.py::TestMDNSUpdater::test_init", "tests/test_aggregator.py::TestMDNSUpdater::test_update_mdns", "tests/test_aggregator.py::TestMDNSUpdater::test_P2P_disable_when_enabled", "tests/test_aggregator.py::TestMDNSUpdater::test_P2P_disable_resets_enable_count", "tests/test_aggregator.py::TestMDNSUpdater::test_p2p_enable", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_running_and_ignores_unknown_methods", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_running_and_aborts_on_exception_in_general_register", "tests/test_aggregator.py::TestAggregator::test_register", "tests/test_aggregator.py::TestAggregator::test_process_reregister_continues_when_delete_fails", "tests/test_aggregator.py::TestAggregator::test_stop", "tests/test_aggregator.py::TestAggregator::test_process_queue_does_nothing_when_queue_empty", "tests/test_aggregator.py::TestAggregator::test_process_queue_does_nothing_when_not_registered", "tests/test_aggregator.py::TestAggregator::test_process_queue_handles_exception_in_unqueueing", "tests/test_aggregator.py::TestAggregator::test_heartbeat_registers", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_not_running", "tests/test_aggregator.py::TestAggregator::test_process_reregister_bails_if_node_not_registered", "tests/test_aggregator.py::TestAggregator::test_send_get_with_no_aggregators_fails_at_first_checkpoint", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_running_and_aborts_on_exception_in_general_unregister", "tests/test_aggregator.py::TestAggregator::test_process_reregister_bails_if_delete_throws_unknown_exception", "tests/test_aggregator.py::TestAggregator::test_heartbeat_with_other_exception", "tests/test_aggregator.py::TestAggregator::test_heartbeat_correctly", "tests/test_aggregator.py::TestAggregator::test_heartbeat_with_500_exception", "tests/test_aggregator.py::TestAggregator::test_heartbeat_with_404_exception", "tests/test_aggregator.py::TestAggregator::test_process_reregister_handles_queue_exception", "tests/test_aggregator.py::TestAggregator::test_unregister", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_running", "tests/test_aggregator.py::TestAggregator::test_init", "tests/test_aggregator.py::TestAggregator::test_register_into", "tests/test_aggregator.py::TestAggregator::test_process_queue_processes_queue_when_running_and_aborts_on_exception_in_node_register", "tests/test_aggregator.py::TestAggregator::test_heartbeat_unregisters_when_no_node", "tests/test_aggregator.py::TestAggregator::test_process_reregister", "tests/test_aggregator.py::TestAggregator::test_process_reregister_bails_if_first_post_throws_unknown_exception" ]
{ "failed_lite_validators": [ "has_short_problem_statement", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2018-09-05 15:36:53+00:00
apache-2.0
1,303
beancount__fava-1760
diff --git a/src/fava/help/options.md b/src/fava/help/options.md index 0c716bdf..22361dce 100644 --- a/src/fava/help/options.md +++ b/src/fava/help/options.md @@ -78,13 +78,15 @@ Default: `12-31` The last day of the fiscal (financial or tax) period for accounting purposes in `%m-%d` format. Allows for the use of `FY2018`, `FY2018-Q3`, `fiscal_year` and `fiscal_quarter` in the time filter, and `FY2018` as the start date, end date, -or both dates in a date range in the time filter. +or both dates in a date range in the time filter. Month can be a value larger +than `12` to have `FY2018` end in 2019 for example. Examples are: -- `09-30` - US federal government -- `06-30` - Australia / NZ - `04-05` - UK +- `06-30` - Australia / NZ +- `09-30` - US federal government +- `15-31` - Japan See [Fiscal Year on WikiPedia](https://en.wikipedia.org/wiki/Fiscal_year) for more examples. diff --git a/src/fava/util/date.py b/src/fava/util/date.py index 8c63eef1..62c5791a 100644 --- a/src/fava/util/date.py +++ b/src/fava/util/date.py @@ -30,13 +30,13 @@ DAY_RE = re.compile(r"^(\d{4})-(\d{2})-(\d{2})$") WEEK_RE = re.compile(r"^(\d{4})-w(\d{2})$") # this matches a quarter like 2016-Q1 for the first quarter of 2016 -QUARTER_RE = re.compile(r"^(\d{4})-q(\d)$") +QUARTER_RE = re.compile(r"^(\d{4})-q([1234])$") # this matches a financial year like FY2018 for the financial year ending 2018 FY_RE = re.compile(r"^fy(\d{4})$") # this matches a quarter in a financial year like FY2018-Q2 -FY_QUARTER_RE = re.compile(r"^fy(\d{4})-q(\d)$") +FY_QUARTER_RE = re.compile(r"^fy(\d{4})-q([1234])$") VARIABLE_RE = re.compile( r"\(?(fiscal_year|year|fiscal_quarter|quarter" @@ -51,6 +51,32 @@ class FiscalYearEnd: month: int day: int + @property + def month_of_year(self) -> int: + """Actual month of the year.""" + return (self.month - 1) % 12 + 1 + + @property + def year_offset(self) -> int: + """Number of years that this is offset into the future.""" + return (self.month - 1) // 12 + + def has_quarters(self) -> bool: + """Whether this fiscal year end supports fiscal quarters.""" + return ( + datetime.date(2001, self.month_of_year, self.day) + ONE_DAY + ).day == 1 + + +class FyeHasNoQuartersError(ValueError): + """Only fiscal year that start on the first of a month have quarters.""" + + def __init__(self) -> None: + super().__init__( + "Cannot use fiscal quarter if fiscal year " + "does not start on first of the month" + ) + END_OF_YEAR = FiscalYearEnd(12, 31) @@ -229,7 +255,7 @@ def local_today() -> datetime.date: return datetime.date.today() # noqa: DTZ011 -def substitute( # noqa: PLR0914 +def substitute( string: str, fye: FiscalYearEnd | None = None, ) -> str: @@ -246,57 +272,44 @@ def substitute( # noqa: PLR0914 """ # pylint: disable=too-many-locals today = local_today() + fye = fye or END_OF_YEAR for match in VARIABLE_RE.finditer(string): complete_match, interval, plusminus_, mod_ = match.group(0, 1, 2, 3) mod = int(mod_) if mod_ else 0 - plusminus = 1 if plusminus_ == "+" else -1 + offset = mod if plusminus_ == "+" else -mod if interval == "fiscal_year": - year = today.year - start, end = get_fiscal_period(year, fye) - if end and today >= end: - year += 1 - year += plusminus * mod - string = string.replace(complete_match, f"FY{year}") + after_fye = (today.month, today.day) > (fye.month_of_year, fye.day) + year = today.year + (1 if after_fye else 0) - fye.year_offset + string = string.replace(complete_match, f"FY{year + offset}") if interval == "year": - year = today.year + plusminus * mod - string = 
string.replace(complete_match, str(year)) + string = string.replace(complete_match, str(today.year + offset)) if interval == "fiscal_quarter": - target = month_offset(today.replace(day=1), plusminus * mod * 3) - start, end = get_fiscal_period(target.year, fye) - if start and start.day != 1: - raise ValueError( - "Cannot use fiscal_quarter if fiscal year " - "does not start on first of the month", - ) - if end and target >= end: - start = end - if start: - quarter = int(((target.month - start.month) % 12) / 3) - string = string.replace( - complete_match, - f"FY{start.year + 1}-Q{(quarter % 4) + 1}", - ) + if not fye.has_quarters(): + raise FyeHasNoQuartersError + target = month_offset(today.replace(day=1), offset * 3) + after_fye = (target.month) > (fye.month_of_year) + year = target.year + (1 if after_fye else 0) - fye.year_offset + quarter = ((target.month - fye.month_of_year - 1) // 3) % 4 + 1 + string = string.replace(complete_match, f"FY{year}-Q{quarter}") if interval == "quarter": quarter_today = (today.month - 1) // 3 + 1 - year = today.year + (quarter_today + plusminus * mod - 1) // 4 - quarter = (quarter_today + plusminus * mod - 1) % 4 + 1 + year = today.year + (quarter_today + offset - 1) // 4 + quarter = (quarter_today + offset - 1) % 4 + 1 string = string.replace(complete_match, f"{year}-Q{quarter}") if interval == "month": - year = today.year + (today.month + plusminus * mod - 1) // 12 - month = (today.month + plusminus * mod - 1) % 12 + 1 + year = today.year + (today.month + offset - 1) // 12 + month = (today.month + offset - 1) % 12 + 1 string = string.replace(complete_match, f"{year}-{month:02}") if interval == "week": - delta = timedelta(plusminus * mod * 7) string = string.replace( complete_match, - (today + delta).strftime("%Y-W%W"), + (today + timedelta(offset * 7)).strftime("%Y-W%W"), ) if interval == "day": - delta = timedelta(plusminus * mod) string = string.replace( complete_match, - (today + delta).isoformat(), + (today + timedelta(offset)).isoformat(), ) return string @@ -404,11 +417,16 @@ def parse_fye_string(fye: str) -> FiscalYearEnd | None: Args: fye: The end of the fiscal year to parse. """ + match = re.match(r"^(?P<month>\d{2})-(?P<day>\d{2})$", fye) + if not match: + return None + month = int(match.group("month")) + day = int(match.group("day")) try: - date = datetime.date.fromisoformat(f"2001-{fye}") + _ = datetime.date(2001, (month - 1) % 12 + 1, day) + return FiscalYearEnd(month, day) except ValueError: return None - return FiscalYearEnd(date.month, date.day) def get_fiscal_period( @@ -430,34 +448,27 @@ def get_fiscal_period( A tuple (start, end) of dates. 
""" - if fye is None: - start_date = datetime.date(year=year, month=1, day=1) - else: - start_date = datetime.date( - year=year - 1, - month=fye.month, - day=fye.day, - ) + timedelta(days=1) - # Special case 02-28 because of leap years - if fye.month == 2 and fye.day == 28: - start_date = start_date.replace(month=3, day=1) + fye = fye or END_OF_YEAR + start = ( + datetime.date(year - 1 + fye.year_offset, fye.month_of_year, fye.day) + + ONE_DAY + ) + # Special case 02-28 because of leap years + if fye.month_of_year == 2 and fye.day == 28: + start = start.replace(month=3, day=1) if quarter is None: - return start_date, start_date.replace(year=start_date.year + 1) + return start, start.replace(year=start.year + 1) - if start_date.day != 1: - # quarters make no sense in jurisdictions where period starts - # on a date (UK etc) + if not fye.has_quarters(): return None, None if quarter < 1 or quarter > 4: return None, None - if quarter > 1: - start_date = month_offset(start_date, (quarter - 1) * 3) + start = month_offset(start, (quarter - 1) * 3) - end_date = month_offset(start_date, 3) - return start_date, end_date + return start, month_offset(start, 3) def days_in_daterange(
beancount/fava
4ee3106596cf91dc20cbceb509d5427d76a97350
diff --git a/tests/test_core_attributes.py b/tests/test_core_attributes.py index d6e0fdbc..63aacd40 100644 --- a/tests/test_core_attributes.py +++ b/tests/test_core_attributes.py @@ -37,6 +37,11 @@ def test_get_active_years(load_doc_entries: list[Directive]) -> None: "FY2012", "FY2011", ] + assert get_active_years(load_doc_entries, FiscalYearEnd(15, 31)) == [ + "FY2012", + "FY2011", + "FY2010", + ] def test_payee_accounts(example_ledger: FavaLedger) -> None: diff --git a/tests/test_util_date.py b/tests/test_util_date.py index d96d1564..23aeb2d4 100644 --- a/tests/test_util_date.py +++ b/tests/test_util_date.py @@ -179,6 +179,17 @@ def test_substitute(string: str, output: str) -> None: ("06-30", "2018-02-02", "fiscal_quarter", "FY2018-Q3"), ("06-30", "2018-07-03", "fiscal_quarter-1", "FY2018-Q4"), ("06-30", "2018-07-03", "fiscal_quarter+6", "FY2020-Q3"), + ("15-31", "2018-02-02", "fiscal_year", "FY2017"), + ("15-31", "2018-05-02", "fiscal_year", "FY2018"), + ("15-31", "2018-05-02", "fiscal_year-1", "FY2017"), + ("15-31", "2018-02-02", "fiscal_year+6", "FY2023"), + ("15-31", "2018-05-02", "fiscal_year+6", "FY2024"), + ("15-31", "2018-02-02", "fiscal_quarter", "FY2017-Q4"), + ("15-31", "2018-05-02", "fiscal_quarter", "FY2018-Q1"), + ("15-31", "2018-08-02", "fiscal_quarter", "FY2018-Q2"), + ("15-31", "2018-11-02", "fiscal_quarter", "FY2018-Q3"), + ("15-31", "2018-05-02", "fiscal_quarter-1", "FY2017-Q4"), + ("15-31", "2018-05-02", "fiscal_quarter+6", "FY2019-Q3"), ("04-05", "2018-07-03", "fiscal_quarter", None), ], ) @@ -195,7 +206,7 @@ def test_fiscal_substitute( if output is None: with pytest.raises( ValueError, - match="Cannot use fiscal_quarter if fiscal year", + match="Cannot use fiscal quarter if fiscal year", ): substitute(string, fye) else: @@ -329,6 +340,10 @@ def test_month_offset( # 28th February - consider leap years [FYE=02-28] (2016, None, "02-28", "2015-03-01", "2016-03-01"), (2017, None, "02-28", "2016-03-01", "2017-03-01"), + # 1st Apr (last year) - JP [FYE=15-31] + (2018, None, "15-31", "2018-04-01", "2019-04-01"), + (2018, 1, "15-31", "2018-04-01", "2018-07-01"), + (2018, 4, "15-31", "2019-01-01", "2019-04-01"), # None (2018, None, None, "2018-01-01", "2019-01-01"), # expected errors @@ -355,6 +370,7 @@ def test_get_fiscal_period( ("12-31", 12, 31), ("06-30", 6, 30), ("02-28", 2, 28), + ("15-31", 15, 31), ], ) def test_parse_fye_string(fye_str: str, month: int, day: int) -> None:
Fiscal Year cannot end in the next Calendar Year In my country (Japan), FY2023 starts at 2023-04-01 and ends at 2024-03-31, but the Fava option cannot describe this. `custom "fava-option" "fiscal-year-end" "03-31"` defines FY2023 as `[2022-04-01, 2023-04-01)`, which is not my intention.
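A standalone sketch of the month-greater-than-12 arithmetic the patch above introduces, using the Japanese case from this report. The computation mirrors FiscalYearEnd.month_of_year/year_offset and get_fiscal_period:

```python
import datetime

# fiscal-year-end "15-31": month 15 wraps past December, so the year
# labelled FY2018 ends in calendar 2019 (Japan's April-to-March year).
month, day = 15, 31
month_of_year = (month - 1) % 12 + 1   # 3  -> March
year_offset = (month - 1) // 12        # 1  -> end falls one year later

year = 2018
last_day = datetime.date(year + year_offset, month_of_year, day)
start = (datetime.date(year - 1 + year_offset, month_of_year, day)
         + datetime.timedelta(days=1))
print(start, last_day)  # 2018-04-01 2019-03-31
```

get_fiscal_period itself returns an exclusive end date (2019-04-01 here), matching the new test cases.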
0.0
4ee3106596cf91dc20cbceb509d5427d76a97350
[ "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-02-02-fiscal_year-FY2017]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-02-02-fiscal_year+6-FY2023]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-02-02-fiscal_quarter-FY2017-Q4]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_quarter-FY2018-Q1]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-08-02-fiscal_quarter-FY2018-Q2]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-11-02-fiscal_quarter-FY2018-Q3]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_quarter-1-FY2017-Q4]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_quarter+6-FY2019-Q3]", "tests/test_util_date.py::test_fiscal_substitute[04-05-2018-07-03-fiscal_quarter-None]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-15-31-2018-04-01-2019-04-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-1-15-31-2018-04-01-2018-07-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-4-15-31-2019-01-01-2019-04-01]", "tests/test_util_date.py::test_parse_fye_string[15-31-15-31]" ]
[ "tests/test_core_attributes.py::test_get_active_years", "tests/test_core_attributes.py::test_payee_accounts", "tests/test_core_attributes.py::test_payee_transaction", "tests/test_util_date.py::test_interval", "tests/test_util_date.py::test_interval_format[2016-01-01-Interval.DAY-2016-01-01-2016-01-01]", "tests/test_util_date.py::test_interval_format[2016-01-04-Interval.WEEK-2016W01-2016-W01]", "tests/test_util_date.py::test_interval_format[2016-01-04-Interval.MONTH-Jan", "tests/test_util_date.py::test_interval_format[2016-01-04-Interval.QUARTER-2016Q1-2016-Q1]", "tests/test_util_date.py::test_interval_format[2016-01-04-Interval.YEAR-2016-2016]", "tests/test_util_date.py::test_get_next_interval[2016-01-01-Interval.DAY-2016-01-02]", "tests/test_util_date.py::test_get_next_interval[2016-01-01-Interval.WEEK-2016-01-04]", "tests/test_util_date.py::test_get_next_interval[2016-01-01-Interval.MONTH-2016-02-01]", "tests/test_util_date.py::test_get_next_interval[2016-01-01-Interval.QUARTER-2016-04-01]", "tests/test_util_date.py::test_get_next_interval[2016-01-01-Interval.YEAR-2017-01-01]", "tests/test_util_date.py::test_get_next_interval[2016-12-31-Interval.DAY-2017-01-01]", "tests/test_util_date.py::test_get_next_interval[2016-12-31-Interval.WEEK-2017-01-02]", "tests/test_util_date.py::test_get_next_interval[2016-12-31-Interval.MONTH-2017-01-01]", "tests/test_util_date.py::test_get_next_interval[2016-12-31-Interval.QUARTER-2017-01-01]", "tests/test_util_date.py::test_get_next_interval[2016-12-31-Interval.YEAR-2017-01-01]", "tests/test_util_date.py::test_get_next_interval[9999-12-31-Interval.QUARTER-9999-12-31]", "tests/test_util_date.py::test_get_next_interval[9999-12-31-Interval.YEAR-9999-12-31]", "tests/test_util_date.py::test_get_prev_interval[2016-01-01-Interval.DAY-2016-01-01]", "tests/test_util_date.py::test_get_prev_interval[2016-01-01-Interval.WEEK-2015-12-28]", "tests/test_util_date.py::test_get_prev_interval[2016-01-01-Interval.MONTH-2016-01-01]", "tests/test_util_date.py::test_get_prev_interval[2016-01-01-Interval.QUARTER-2016-01-01]", "tests/test_util_date.py::test_get_prev_interval[2016-01-01-Interval.YEAR-2016-01-01]", "tests/test_util_date.py::test_get_prev_interval[2016-12-31-Interval.DAY-2016-12-31]", "tests/test_util_date.py::test_get_prev_interval[2016-12-31-Interval.WEEK-2016-12-26]", "tests/test_util_date.py::test_get_prev_interval[2016-12-31-Interval.MONTH-2016-12-01]", "tests/test_util_date.py::test_get_prev_interval[2016-12-31-Interval.QUARTER-2016-10-01]", "tests/test_util_date.py::test_get_prev_interval[2016-12-31-Interval.YEAR-2016-01-01]", "tests/test_util_date.py::test_get_prev_interval[9999-12-31-Interval.QUARTER-9999-10-01]", "tests/test_util_date.py::test_get_prev_interval[9999-12-31-Interval.YEAR-9999-01-01]", "tests/test_util_date.py::test_interval_tuples", "tests/test_util_date.py::test_substitute[year-2016]", "tests/test_util_date.py::test_substitute[(year-1)-2015]", "tests/test_util_date.py::test_substitute[year-1-2-2015-2]", "tests/test_util_date.py::test_substitute[(year)-1-2-2016-1-2]", "tests/test_util_date.py::test_substitute[(year+3)-2019]", "tests/test_util_date.py::test_substitute[(year+3)month-20192016-06]", "tests/test_util_date.py::test_substitute[(year-1000)-1016]", "tests/test_util_date.py::test_substitute[quarter-2016-Q2]", "tests/test_util_date.py::test_substitute[quarter+2-2016-Q4]", "tests/test_util_date.py::test_substitute[quarter+20-2021-Q2]", "tests/test_util_date.py::test_substitute[(month)-2016-06]", 
"tests/test_util_date.py::test_substitute[month+6-2016-12]", "tests/test_util_date.py::test_substitute[(month+24)-2018-06]", "tests/test_util_date.py::test_substitute[week-2016-W25]", "tests/test_util_date.py::test_substitute[week+20-2016-W45]", "tests/test_util_date.py::test_substitute[week+2000-2054-W42]", "tests/test_util_date.py::test_substitute[day-2016-06-24]", "tests/test_util_date.py::test_substitute[day+20-2016-07-14]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-02-02-fiscal_year-FY2018]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-08-02-fiscal_year-FY2019]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-07-01-fiscal_year-FY2019]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-08-02-fiscal_year-1-FY2018]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-02-02-fiscal_year+6-FY2024]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-08-02-fiscal_year+6-FY2025]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-08-02-fiscal_quarter-FY2019-Q1]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-10-01-fiscal_quarter-FY2019-Q2]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-12-30-fiscal_quarter-FY2019-Q2]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-02-02-fiscal_quarter-FY2018-Q3]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-07-03-fiscal_quarter-1-FY2018-Q4]", "tests/test_util_date.py::test_fiscal_substitute[06-30-2018-07-03-fiscal_quarter+6-FY2020-Q3]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_year-FY2018]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_year-1-FY2017]", "tests/test_util_date.py::test_fiscal_substitute[15-31-2018-05-02-fiscal_year+6-FY2024]", "tests/test_util_date.py::test_parse_date[2000-01-01-2001-01-01-", "tests/test_util_date.py::test_parse_date[2010-10-01-2010-11-01-2010-10]", "tests/test_util_date.py::test_parse_date[2000-01-03-2000-01-04-2000-01-03]", "tests/test_util_date.py::test_parse_date[2015-01-05-2015-01-12-2015-W01]", "tests/test_util_date.py::test_parse_date[2015-04-01-2015-07-01-2015-Q2]", "tests/test_util_date.py::test_parse_date[2014-01-01-2016-01-01-2014", "tests/test_util_date.py::test_parse_date[2014-01-01-2016-01-01-2014-2015]", "tests/test_util_date.py::test_parse_date[2011-10-01-2016-01-01-2011-10", "tests/test_util_date.py::test_parse_date[2018-07-01-2020-07-01-FY2019", "tests/test_util_date.py::test_parse_date[2018-07-01-2021-01-01-FY2019", "tests/test_util_date.py::test_parse_date[2010-07-01-2015-07-01-FY2011", "tests/test_util_date.py::test_parse_date[2011-01-01-2015-07-01-2011", "tests/test_util_date.py::test_parse_date_empty", "tests/test_util_date.py::test_parse_date_relative[2014-01-01-2016-06-27-year-2-day+2]", "tests/test_util_date.py::test_parse_date_relative[2016-01-01-2016-06-25-year-day]", "tests/test_util_date.py::test_parse_date_relative[2015-01-01-2017-01-01-2015-year]", "tests/test_util_date.py::test_parse_date_relative[2016-01-01-2016-04-01-quarter-1]", "tests/test_util_date.py::test_parse_date_relative[2013-07-01-2014-07-01-fiscal_year-2]", "tests/test_util_date.py::test_parse_date_relative[2016-04-01-2016-07-01-fiscal_quarter]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.DAY-2016-05-01-1]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.DAY-2016-05-31-1]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.WEEK-2016-05-01-7]", 
"tests/test_util_date.py::test_number_of_days_in_period[Interval.WEEK-2016-05-31-7]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-05-02-31]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-05-31-31]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-06-11-30]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-07-31-31]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-02-01-29]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2015-02-01-28]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.MONTH-2016-01-01-31]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.QUARTER-2015-02-01-90]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.QUARTER-2015-05-01-91]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.QUARTER-2016-02-01-91]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.QUARTER-2016-12-01-92]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.YEAR-2015-02-01-365]", "tests/test_util_date.py::test_number_of_days_in_period[Interval.YEAR-2016-01-01-366]", "tests/test_util_date.py::test_month_offset[2018-01-12-0-2018-01-12]", "tests/test_util_date.py::test_month_offset[2018-01-01--3-2017-10-01]", "tests/test_util_date.py::test_month_offset[2018-01-30-1-None]", "tests/test_util_date.py::test_month_offset[2018-01-12-13-2019-02-12]", "tests/test_util_date.py::test_month_offset[2018-01-12--13-2016-12-12]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-12-31-2018-01-01-2019-01-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-1-12-31-2018-01-01-2018-04-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-3-12-31-2018-07-01-2018-10-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-4-12-31-2018-10-01-2019-01-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-09-30-2017-10-01-2018-10-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-3-09-30-2018-04-01-2018-07-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-06-30-2017-07-01-2018-07-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-1-06-30-2017-07-01-2017-10-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-2-06-30-2017-10-01-2018-01-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-4-06-30-2018-04-01-2018-07-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-04-05-2017-04-06-2018-04-06]", "tests/test_util_date.py::test_get_fiscal_period[2018-1-04-05-None-None]", "tests/test_util_date.py::test_get_fiscal_period[2016-None-02-28-2015-03-01-2016-03-01]", "tests/test_util_date.py::test_get_fiscal_period[2017-None-02-28-2016-03-01-2017-03-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-None-None-2018-01-01-2019-01-01]", "tests/test_util_date.py::test_get_fiscal_period[2018-0-12-31-None-None]", "tests/test_util_date.py::test_get_fiscal_period[2018-5-12-31-None-None]", "tests/test_util_date.py::test_parse_fye_string[12-31-12-31]", "tests/test_util_date.py::test_parse_fye_string[06-30-6-30]", "tests/test_util_date.py::test_parse_fye_string[02-28-2-28]", "tests/test_util_date.py::test_parse_fye_invalid_string[12-32]", "tests/test_util_date.py::test_parse_fye_invalid_string[asdfasdf]", "tests/test_util_date.py::test_parse_fye_invalid_string[02-29]" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2024-02-13 20:01:57+00:00
mit
1,304
bear__python-twitter-416
diff --git a/twitter/__init__.py b/twitter/__init__.py index 87bb718..0534776 100644 --- a/twitter/__init__.py +++ b/twitter/__init__.py @@ -23,7 +23,7 @@ __author__ = 'The Python-Twitter Developers' __email__ = '[email protected]' __copyright__ = 'Copyright (c) 2007-2016 The Python-Twitter Developers' __license__ = 'Apache License 2.0' -__version__ = '3.2' +__version__ = '3.2.1' __url__ = 'https://github.com/bear/python-twitter' __download_url__ = 'https://pypi.python.org/pypi/python-twitter' __description__ = 'A Python wrapper around the Twitter API' diff --git a/twitter/twitter_utils.py b/twitter/twitter_utils.py index 081d1ed..0b2af5b 100644 --- a/twitter/twitter_utils.py +++ b/twitter/twitter_utils.py @@ -161,12 +161,13 @@ def calc_expected_status_length(status, short_url_length=23): Expected length of the status message as an integer. """ - replaced_chars = 0 - status_length = len(status) - match = re.findall(URL_REGEXP, status) - if len(match) >= 1: - replaced_chars = len(''.join(match)) - status_length = status_length - replaced_chars + (short_url_length * len(match)) + status_length = 0 + for word in re.split(r'\s', status): + if is_url(word): + status_length += short_url_length + else: + status_length += len(word) + status_length += len(re.findall(r'\s', status)) return status_length
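A standalone sketch of the fixed algorithm: split on whitespace, count each URL-looking word as short_url_length (Twitter's t.co length, 23), count everything else at its literal length, then add the whitespace back. The URL test here is a simplified stand-in for twitter.twitter_utils.is_url:

```python
import re

# Simplified URL heuristic for illustration only.
URL = re.compile(r"^(?:https?://)?\w[\w.-]*\.[a-z]{2,}(?:/\S*)?$", re.I)

def calc_expected_status_length(status, short_url_length=23):
    length = 0
    for word in re.split(r"\s", status):
        length += short_url_length if URL.match(word) else len(word)
    length += len(re.findall(r"\s", status))  # whitespace counts too
    return length

print(calc_expected_status_length("hi a tweet there"))              # 16
print(calc_expected_status_length("hi a tweet there example.com"))  # 40
```

The two printed values match the new unit tests below.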
bear/python-twitter
ae88240b902d857ba099dfd17f820e640c67557d
diff --git a/tests/test_twitter_utils.py b/tests/test_twitter_utils.py index 3ca619f..b021e34 100644 --- a/tests/test_twitter_utils.py +++ b/tests/test_twitter_utils.py @@ -5,6 +5,7 @@ import unittest import twitter from twitter.twitter_utils import ( + calc_expected_status_length, parse_media_file ) @@ -58,3 +59,18 @@ class ApiTest(unittest.TestCase): self.assertRaises( twitter.TwitterError, lambda: twitter.twitter_utils.enf_type('test', int, 'hi')) + + def test_calc_expected_status_length(self): + status = 'hi a tweet there' + len_status = calc_expected_status_length(status) + self.assertEqual(len_status, 16) + + def test_calc_expected_status_length_with_url(self): + status = 'hi a tweet there example.com' + len_status = calc_expected_status_length(status) + self.assertEqual(len_status, 40) + + def test_calc_expected_status_length_with_url_and_extra_spaces(self): + status = 'hi a tweet there example.com' + len_status = calc_expected_status_length(status) + self.assertEqual(len_status, 63)
calc_expected_status_length does not work calc_expected_status_length is broken in two ways right now. 1. URL_REGEXP only recognizes URLs at the start of a string, which is correct for is_url, but for calc_expected_status_length, all URLs should be detected, not just URLs at the start of the tweet. There should be a different URL_REGEXP for calc_expected_status_length without the start-of-string markers. 2. The URL regex has multiple groups, so findall returns a list of tuples, not strings. If there are matches, replaced_chars = len(''.join(match)) crashes; it should be replaced_chars = len(''.join(map(lambda x: x[0], match))) instead.
0.0
ae88240b902d857ba099dfd17f820e640c67557d
[ "tests/test_twitter_utils.py::ApiTest::test_calc_expected_status_length_with_url", "tests/test_twitter_utils.py::ApiTest::test_calc_expected_status_length_with_url_and_extra_spaces" ]
[ "tests/test_twitter_utils.py::ApiTest::test_calc_expected_status_length", "tests/test_twitter_utils.py::ApiTest::test_parse_media_file_fileobj", "tests/test_twitter_utils.py::ApiTest::test_parse_media_file_http", "tests/test_twitter_utils.py::ApiTest::test_parse_media_file_local_file", "tests/test_twitter_utils.py::ApiTest::test_utils_error_checking" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2016-11-28 14:30:54+00:00
apache-2.0
1,305
beardypig__pymp4-30
diff --git a/src/pymp4/parser.py b/src/pymp4/parser.py index 1c2311c..bfe2b50 100644 --- a/src/pymp4/parser.py +++ b/src/pymp4/parser.py @@ -694,18 +694,26 @@ ProtectionSystemHeaderBox = Struct( TrackEncryptionBox = Struct( "type" / If(this._.type != b"uuid", Const(b"tenc")), - "version" / Default(Int8ub, 0), + "version" / Default(OneOf(Int8ub, (0, 1)), 0), "flags" / Default(Int24ub, 0), - "_reserved0" / Const(Int8ub, 0), - "_reserved1" / Const(Int8ub, 0), - "is_encrypted" / Int8ub, - "iv_size" / Int8ub, + "_reserved" / Const(Int8ub, 0), + "default_byte_blocks" / Default(IfThenElse( + this.version > 0, + BitStruct( + # count of encrypted blocks in the protection pattern, where each block is 16-bytes + "crypt" / Nibble, + # count of unencrypted blocks in the protection pattern + "skip" / Nibble + ), + Const(Int8ub, 0) + ), 0), + "is_encrypted" / OneOf(Int8ub, (0, 1)), + "iv_size" / OneOf(Int8ub, (0, 8, 16)), "key_ID" / UUIDBytes(Bytes(16)), - "constant_iv" / Default(If(this.is_encrypted and this.iv_size == 0, - PrefixedArray(Int8ub, Byte), - ), - None) - + "constant_iv" / Default(If( + this.is_encrypted and this.iv_size == 0, + PrefixedArray(Int8ub, Byte) + ), None) ) SampleEncryptionBox = Struct(
beardypig/pymp4
47628e8abeb3a7ed0f0c2e45a407fdd205d93d24
diff --git a/tests/test_dashboxes.py b/tests/test_dashboxes.py index e1b014b..3500619 100644 --- a/tests/test_dashboxes.py +++ b/tests/test_dashboxes.py @@ -32,6 +32,8 @@ class BoxTests(unittest.TestCase): (type=b"tenc") (version=0) (flags=0) + (_reserved=0) + (default_byte_blocks=0) (is_encrypted=1) (iv_size=8) (key_ID=UUID('337b9643-21b6-4355-9e59-3eccb46c7ef7'))
Definition of tenc box seems invalid or too lenient I've attached a sample `tenc` box from https://sho.com, specifically the show Yellowjackets S02E05 on the 2160p track. ![image](https://user-images.githubusercontent.com/17136956/233861866-8042fd19-2094-44e3-819c-dc47a88829d8.png) It fails to parse with the following error: ``` Traceback (most recent call last): File "C:\Program Files\Python310\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Program Files\Python310\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Program Files\Python310\Scripts\mp4dump.exe\__main__.py", line 7, in <module> File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\pymp4\cli.py", line 26, in dump box = Box.parse_stream(fd) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 186, in parse_stream return self._parse(stream, context, "parsing") File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\pymp4\parser.py", line 46, in _parse obj = self.subcon._parse(stream2, context, path) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 855, in _parse subobj = list(sc._parse(stream, context, path).items()) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 297, in _parse return self.subcon._parse(stream, context, path) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 1544, in _parse obj = self.cases.get(key, self.default)._parse(stream, context, path) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 859, in _parse subobj = sc._parse(stream, context, path) File "C:\Users\User\AppData\Roaming\Python\Python310\site-packages\construct\core.py", line 2700, in _parse raise e.__class__("%s\n %s" % (e, path)) construct.core.ConstError: expected 0 but parsed 25 parsing -> _reserved1 ``` It fails at byte offset 0xD, which holds the value 0x19 (25 decimal). This offset would be the `_reserved1` definition. [tenc_box_from_showtime_exact.zip](https://github.com/beardypig/pymp4/files/11304733/tenc_box_from_showtime_exact.zip)
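A quick check of why that byte is legal in a version-1 'tenc' box: under the patched schema it is the default_byte_blocks field, a pair of 4-bit pattern counts, rather than a reserved zero byte:

```python
byte = 0x19          # the value at offset 0xD in the attached sample
crypt = byte >> 4    # high nibble: 1 encrypted 16-byte block
skip = byte & 0x0F   # low nibble:  9 unencrypted 16-byte blocks
print(crypt, skip)   # 1 9 -- the 1:9 pattern commonly used by 'cbcs'
```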
0.0
47628e8abeb3a7ed0f0c2e45a407fdd205d93d24
[ "tests/test_dashboxes.py::BoxTests::test_tenc_parse" ]
[ "tests/test_dashboxes.py::BoxTests::test_tenc_build" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_media" ], "has_test_patch": true, "is_lite": false }
2023-04-23 19:09:01+00:00
apache-2.0
1,306
beartype__beartype-88
diff --git a/beartype/_util/func/utilfuncmake.py b/beartype/_util/func/utilfuncmake.py index 24cc669c..2bf3b6de 100644 --- a/beartype/_util/func/utilfuncmake.py +++ b/beartype/_util/func/utilfuncmake.py @@ -20,6 +20,8 @@ from beartype._util.text.utiltextmunge import number_lines from collections.abc import Callable from functools import update_wrapper from typing import Optional, Type +from weakref import finalize +import linecache # See the "beartype.cave" submodule for further commentary. __all__ = ['STAR_IMPORTS_CONSIDERED_HARMFUL'] @@ -152,33 +154,29 @@ def make_func( # Python fails to capture that function (i.e., expose that function to # this function) when the locals() dictionary is passed; instead, a # unique local dictionary *MUST* be passed. - # - # Note that the same result may also be achieved via the compile() - # builtin and "types.FunctionType" class: e.g., - # func_code_compiled = compile( - # func_code, "<string>", "exec").co_consts[0] - # return types.FunctionType( - # code=func_code_compiled, - # globals=_GLOBAL_ATTRS, - # argdefs=('__beartype_func', func) - # ) - # - # Since doing so is both more verbose and obfuscatory for no tangible - # gain, the current circumspect approach is preferred. - exec(func_code, func_globals, func_locals) - - #FIXME: See above. - #FIXME: Should "exec" be "single" instead? Does it matter? Is there any - #performance gap between the two? - # func_code_compiled = compile( - # func_code, func_wrapper_filename, "exec").co_consts[0] - # return FunctionType( - # code=func_code_compiled, - # globals=_GLOBAL_ATTRS, - # - # #FIXME: This really doesn't seem right, but... *shrug* - # argdefs=tuple(local_attrs.values()), - # ) + + # Make up a filename for compilation and possibly the linecache entry + # (if we make one). A fully-qualified name and ID *should* be unique + # for the life of the process. + func_full_name = ( + f"{func_wrapped.__module__}{func_wrapped.__name__}" + if func_wrapped else + func_name + ) + linecache_file_name = f"<@beartype({func_full_name}) at {id(func_wrapped):#x}>" + + # We use the more verbose and obfuscatory compile() builtin instead of + # simply calling exec(func_code, func_globals, func_locals) because + # exec does not provide a way to set the resulting function object's + # .__code__.co_filename read-only attribute. We can use "single" + # instead of "exec" if we are willing to accept that func_code is + # constrained to a single statement. In casual testing, there is very + # little performance difference (with an imperceptibly slight edge + # going to "single"). + func_code_compiled = compile( + func_code, linecache_file_name, "exec") + assert func_name not in func_locals + exec(func_code_compiled, func_globals, func_locals) # If doing so fails for any reason... except Exception as exception: # Raise an exception suffixed by that function's declaration such that @@ -235,6 +233,23 @@ def make_func( func.__doc__ = func_doc # Else, that function is undocumented. + # Since we went through the trouble of printing its definition, we might + # as well make its compiled version debuggable, too. + if is_debug: + linecache.cache[linecache_file_name] = ( # type: ignore[assignment] + len(func_code), # type: ignore[assignment] # Y u gotta b diff'rnt Python 3.7? WHY?! + None, # mtime, but should be None to avoid being discarded + func_code.splitlines(keepends=True), + linecache_file_name, + ) + + # Define and register a cleanup callback for removing func's linecache + # entry if func is ever garbage collected. 
+ def remove_linecache_entry_for_func(): + linecache.cache.pop(linecache_file_name, None) + + finalize(func, remove_linecache_entry_for_func) + # Return that function. return func
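A standalone sketch of the linecache trick used above: compile a synthetic function under a made-up '<...>' filename, register its source in linecache with mtime None, and standard tooling (tracebacks, pdb, inspect) can then retrieve the source. The function and filename here are illustrative:

```python
import inspect
import linecache

func_code = 'def wrapper(x):\n    return x + 1\n'
fake_name = "<@beartype(demo.f) at 0xdeadbeef>"

exec(compile(func_code, fake_name, "exec"), globals())

linecache.cache[fake_name] = (
    len(func_code),                       # size
    None,                                 # mtime=None: survives checkcache()
    func_code.splitlines(keepends=True),  # the synthetic "file" contents
    fake_name,                            # filename
)

print(inspect.getsource(wrapper))  # works despite no file on disk
```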
beartype/beartype
4190bc3112d0f56d81af586a5f01a10c3e19feae
diff --git a/beartype_test/a00_unit/a90_decor/test_decorconf.py b/beartype_test/a00_unit/a90_decor/test_decorconf.py index 98735bd7..0a33a891 100644 --- a/beartype_test/a00_unit/a90_decor/test_decorconf.py +++ b/beartype_test/a00_unit/a90_decor/test_decorconf.py @@ -118,6 +118,52 @@ def test_decor_conf_is_debug(capsys) -> None: assert '# is <function _earthquake ' in standard_captured.out +def test_decor_conf_is_debug_updates_linecache(capsys) -> None: + ''' + Test the :func:`beartype.beartype` decorator passed the optional ``conf`` + parameter passed the optional ``is_debug`` parameter results + in an updated linecache. + + Parameters + ---------- + capsys + :mod:`pytest` fixture enabling standard output and error to be reliably + captured and tested against from within unit tests and fixtures. + + Parameters + ---------- + https://docs.pytest.org/en/latest/how-to/capture-stdout-stderr.html#accessing-captured-output-from-a-test-function + Official ``capsys`` reference documentation. + ''' + + # Defer heavyweight imports. + from beartype import BeartypeConf, beartype + import linecache + + # @beartype subdecorator printing wrapper function definitions. + beartype_printing = beartype(conf=BeartypeConf(is_debug=True)) + + beartyped_earthquake = beartype_printing(_earthquake) + + # Pytest object freezing the current state of standard output and error as + # uniquely written to by this unit test up to this statement. + standard_captured = capsys.readouterr() + standard_lines = standard_captured.out.splitlines(keepends=True) + + # This is probably overkill, but check to see that we generated lines in + # our linecache that correspond to the ones we printed. This a fragile + # coupling, but we can relax this later to avoid making those line-by-line + # comparisons and just check for the decorated function's filename's + # presence in the cache. + assert beartyped_earthquake.__code__.co_filename in linecache.cache + code_len, mtime, code_lines, code_filename = linecache.cache[beartyped_earthquake.__code__.co_filename] + assert mtime is None + assert len(code_lines) == len(standard_lines) + for code_line, standard_line in zip(code_lines, standard_lines): + assert code_line in standard_line + assert code_filename == beartyped_earthquake.__code__.co_filename + + def test_decor_conf_strategy() -> None: ''' Test the :func:`beartype.beartype` decorator passed the optional ``conf``
[Feature Request] Debuggable wrapper functions `@beartype` dynamically wraps each decorated callable with a unique wrapper function efficiently type-checking that callable. That's great, because efficient. But that's also currently invisible to debuggers, because `@beartype` fails to register the Python code underlying these wrapper functions with [the little-known standard `linecache` module](https://docs.python.org/3/library/linecache.html). This feature request tracks work towards rendering `@beartype` compatible with debuggers. For now, please refer to [this superb writeup by witty QA strongman @posita for a detailed discussion of the exact issue under discussion here – complete with the inevitable working solution infused with wisdom by cautionary AI futurologist @TeamSpen210](https://github.com/beartype/beartype/discussions/84). Humble gratitude to both @posita and @TeamSpen210 for finally shaming me into doing this thing that desperately needs doing.
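To make the requested mechanism concrete, here is a minimal, self-contained sketch (not beartype's actual implementation; the `greet` function and the synthetic filename are invented for illustration) of how registering dynamically generated source in `linecache.cache` makes it visible to tracebacks and, by extension, debuggers:

```python
import linecache
import traceback

source = "def greet():\n    raise RuntimeError('hello from dynamic code')\n"
filename = "<dynamic: greet>"  # made-up synthetic filename

# Compile with the synthetic filename so frames point at it.
namespace = {}
exec(compile(source, filename, "exec"), namespace)

# Register the source under that filename so linecache can serve it.
linecache.cache[filename] = (
    len(source),                       # size
    None,                              # mtime
    source.splitlines(keepends=True),  # lines
    filename,                          # "full" name
)

try:
    namespace["greet"]()
except RuntimeError:
    traceback.print_exc()  # the traceback now shows the dynamic source line
```

An `mtime` of `None` marks the entry as loader-provided, so `linecache.checkcache()` will not discard it, which is the same trick the patch above relies on.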
0.0
4190bc3112d0f56d81af586a5f01a10c3e19feae
[ "beartype_test/a00_unit/a90_decor/test_decorconf.py::test_decor_conf_is_debug_updates_linecache" ]
[ "beartype_test/a00_unit/a90_decor/test_decorconf.py::test_decor_conf", "beartype_test/a00_unit/a90_decor/test_decorconf.py::test_decor_conf_is_debug", "beartype_test/a00_unit/a90_decor/test_decorconf.py::test_decor_conf_strategy" ]
{ "failed_lite_validators": [ "has_hyperlinks" ], "has_test_patch": true, "is_lite": false }
2022-01-27 14:32:29+00:00
mit
1,307
beartype__plum-106
diff --git a/plum/function.py b/plum/function.py index 7b97c00..15b4b9c 100644 --- a/plum/function.py +++ b/plum/function.py @@ -1,3 +1,4 @@ +import os import textwrap from functools import wraps from types import MethodType @@ -141,6 +142,11 @@ class Function(metaclass=_FunctionMeta): # clearing the cache. self.clear_cache(reregister=False) + # Don't do any fancy appending of docstrings when the environment variable + # `PLUM_SIMPLE_DOC` is set to `1`. + if "PLUM_SIMPLE_DOC" in os.environ and os.environ["PLUM_SIMPLE_DOC"] == "1": + return self._doc + # Derive the basis of the docstring from `self._f`, removing any indentation. doc = self._doc.strip() if doc: diff --git a/plum/parametric.py b/plum/parametric.py index ec69cf3..12aafa3 100644 --- a/plum/parametric.py +++ b/plum/parametric.py @@ -240,7 +240,7 @@ def parametric(original_class=None): return original_class.__new__(cls) cls.__new__ = class_new - original_class.__init_subclass__(**kw_args) + super(original_class, cls).__init_subclass__(**kw_args) # Create parametric class. parametric_class = meta( diff --git a/pyproject.toml b/pyproject.toml index 6671c47..addea61 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -19,7 +19,7 @@ dynamic = ["version"] requires-python = ">=3.8" dependencies = [ - "beartype", + "beartype >= 0.16", "typing-extensions; python_version<='3.10'", ]
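A side note on the first hunk: the environment-variable gate is spelled out verbosely in the patch. A sketch of an equivalent check (`simple_doc_enabled` is a hypothetical helper name, not part of plum's API):

```python
import os

def simple_doc_enabled() -> bool:
    # Same truth table as the patch's:
    #   "PLUM_SIMPLE_DOC" in os.environ and os.environ["PLUM_SIMPLE_DOC"] == "1"
    return os.environ.get("PLUM_SIMPLE_DOC") == "1"
```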
beartype/plum
57e11e65cd91cb45a1e66eb6c2e0c19a7f2a1523
diff --git a/tests/test_function.py b/tests/test_function.py index c4f8229..234875d 100644 --- a/tests/test_function.py +++ b/tests/test_function.py @@ -1,4 +1,5 @@ import abc +import os import textwrap import typing @@ -221,6 +222,31 @@ def test_doc(monkeypatch): assert g.__doc__ == textwrap.dedent(expected_doc).strip() +def test_simple_doc(monkeypatch): + @dispatch + def f(x: int): + """First.""" + + @dispatch + def f(x: str): + """Second.""" + + monkeypatch.setitem(os.environ, "PLUM_SIMPLE_DOC", "1") + assert f.__doc__ == "First." + + monkeypatch.setitem(os.environ, "PLUM_SIMPLE_DOC", "0") + expected_doc = """ + First. + + ----------- + + f(x: str) + + Second. + """ + assert f.__doc__ == textwrap.dedent(expected_doc).strip() + + def test_methods(): dispatch = Dispatcher() diff --git a/tests/test_parametric.py b/tests/test_parametric.py index d9fa79c..00bbcfa 100644 --- a/tests/test_parametric.py +++ b/tests/test_parametric.py @@ -507,3 +507,25 @@ def test_val(): Val[1].__init__(MockVal()) assert repr(Val[1]()) == "plum.parametric.Val[1]()" + + +def test_init_subclass_correct_args(): + # See issue https://github.com/beartype/plum/issues/105 + + from plum import parametric + + register = set() + + class Pytree: + def __init_subclass__(cls, **kwargs): + if cls in register: + raise ValueError("duplicate") + else: + register.add(cls) + + @parametric + class Wrapper(Pytree): + pass + + Wrapper[int] + assert Wrapper[int] in register
`parametric` conflicts with certain usages of customized `__init_subclass__`

Hi! I'm using JAX, and also using `plum` -- in my library, I've defined a mixin class called `Pytree` which automatically implements the Pytree interface for classes that mix it in. It's quite simple:

```python
class Pytree:
    def __init_subclass__(cls, **kwargs):
        jtu.register_pytree_node(
            cls,
            cls.flatten,
            cls.unflatten,
        )
```

If I use this mixin together with `parametric`, I run into problems: I get a duplicate registration:

```
ERROR ... ValueError: Duplicate custom PyTreeDef type registration for <class...>
```

I'm not exactly sure why this occurs, but I'm hoping to find a fix, because I'd like to use parametric classes to guide some of the dispatch in my library functions.
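A plausible minimal reproduction of the mechanism, with no JAX dependency (`Base`, `Child`, and `register` are stand-ins for `Pytree`, the wrapped class, and JAX's pytree registry): calling `SomeClass.__init_subclass__()` explicitly re-invokes the inherited hook with `cls=SomeClass` itself, which is what the pre-fix `parametric` code effectively did.

```python
register = set()

class Base:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls in register:
            raise ValueError(f"duplicate registration: {cls!r}")
        register.add(cls)

class Child(Base):  # the hook runs once here: register == {Child}
    pass

# What the old parametric() code effectively did when building Child[...]:
try:
    Child.__init_subclass__()  # re-invokes the hook with cls=Child
except ValueError as exc:
    print(exc)  # duplicate registration: <class '...Child'>
```

The fix's `super(original_class, cls).__init_subclass__(**kw_args)` instead starts the method lookup above the wrapped class, so the hook fires once for the new concrete subclass rather than a second time for the wrapper.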
0.0
57e11e65cd91cb45a1e66eb6c2e0c19a7f2a1523
[ "tests/test_function.py::test_simple_doc", "tests/test_parametric.py::test_init_subclass_correct_args" ]
[ "tests/test_function.py::test_convert_reference", "tests/test_function.py::test_change_function_name", "tests/test_function.py::test_function", "tests/test_function.py::test_repr", "tests/test_function.py::test_owner", "tests/test_function.py::test_resolve_method_with_cache_no_arguments", "tests/test_function.py::test_owner_transfer", "tests/test_function.py::test_functionmeta", "tests/test_function.py::test_doc", "tests/test_function.py::test_methods", "tests/test_function.py::test_function_dispatch", "tests/test_function.py::test_function_multi_dispatch", "tests/test_function.py::test_register", "tests/test_function.py::test_resolve_pending_registrations", "tests/test_function.py::test_enhance_exception", "tests/test_function.py::test_call_exception_enhancement", "tests/test_function.py::test_call_mro", "tests/test_function.py::test_call_abstract", "tests/test_function.py::test_call_object", "tests/test_function.py::test_call_type", "tests/test_function.py::test_call_convert", "tests/test_function.py::test_invoke", "tests/test_function.py::test_invoke_convert", "tests/test_function.py::test_invoke_wrapping", "tests/test_function.py::test_bound", "tests/test_parametric.py::test_covariantmeta", "tests/test_parametric.py::test_parametric[type]", "tests/test_parametric.py::test_parametric[MyType]", "tests/test_parametric.py::test_parametric_inheritance", "tests/test_parametric.py::test_parametric_covariance", "tests/test_parametric.py::test_parametric_constructor", "tests/test_parametric.py::test_parametric_override_infer_type_parameter", "tests/test_parametric.py::test_parametric_override_init_type_parameter", "tests/test_parametric.py::test_parametric_override_le_type_parameter", "tests/test_parametric.py::test_parametric_custom_metaclass", "tests/test_parametric.py::test_parametric_custom_metaclass_name_metaclass", "tests/test_parametric.py::test_parametric_owner_inference", "tests/test_parametric.py::test_is_concrete", "tests/test_parametric.py::test_is_type", "tests/test_parametric.py::test_type_parameter", "tests/test_parametric.py::test_kind", "tests/test_parametric.py::test_val" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-09-18 21:04:17+00:00
mit
1,308
beetbox__beets-2270
diff --git a/beets/mediafile.py b/beets/mediafile.py index 87a9d10a..7d1b0728 100644 --- a/beets/mediafile.py +++ b/beets/mediafile.py @@ -920,7 +920,16 @@ class MP3ImageStorageStyle(ListStorageStyle, MP3StorageStyle): frame.data = image.data frame.mime = image.mime_type frame.desc = image.desc or u'' - frame.encoding = 3 # UTF-8 encoding of desc + + # For compatibility with OS X/iTunes prefer latin-1 if possible. + # See issue #899 + try: + frame.desc.encode("latin-1") + except UnicodeEncodeError: + frame.encoding = mutagen.id3.Encoding.UTF16 + else: + frame.encoding = mutagen.id3.Encoding.LATIN1 + frame.type = image.type_index return frame
beetbox/beets
02bd7946c1f6dd84e0fd28d152f4bca5c09d9e0a
diff --git a/test/test_mediafile_edge.py b/test/test_mediafile_edge.py index 0be17769..ae758f14 100644 --- a/test/test_mediafile_edge.py +++ b/test/test_mediafile_edge.py @@ -19,6 +19,7 @@ from __future__ import division, absolute_import, print_function import os import shutil +import mutagen.id3 from test import _common from test._common import unittest @@ -375,30 +376,30 @@ class ID3v23Test(unittest.TestCase, TestHelper): finally: self._delete_test() - def test_v24_image_encoding(self): - mf = self._make_test(id3v23=False) - try: - mf.images = [beets.mediafile.Image(b'test data')] - mf.save() - frame = mf.mgfile.tags.getall('APIC')[0] - self.assertEqual(frame.encoding, 3) - finally: - self._delete_test() + def test_image_encoding(self): + """For compatibility with OS X/iTunes. - @unittest.skip("a bug, see #899") - def test_v23_image_encoding(self): - """For compatibility with OS X/iTunes (and strict adherence to - the standard), ID3v2.3 tags need to use an inferior text - encoding: UTF-8 is not supported. + See https://github.com/beetbox/beets/issues/899#issuecomment-62437773 """ - mf = self._make_test(id3v23=True) - try: - mf.images = [beets.mediafile.Image(b'test data')] - mf.save() - frame = mf.mgfile.tags.getall('APIC')[0] - self.assertEqual(frame.encoding, 1) - finally: - self._delete_test() + + for v23 in [True, False]: + mf = self._make_test(id3v23=v23) + try: + mf.images = [ + beets.mediafile.Image(b'data', desc=u""), + beets.mediafile.Image(b'data', desc=u"foo"), + beets.mediafile.Image(b'data', desc=u"\u0185"), + ] + mf.save() + apic_frames = mf.mgfile.tags.getall('APIC') + encodings = dict([(f.desc, f.encoding) for f in apic_frames]) + self.assertEqual(encodings, { + u"": mutagen.id3.Encoding.LATIN1, + u"foo": mutagen.id3.Encoding.LATIN1, + u"\u0185": mutagen.id3.Encoding.UTF16, + }) + finally: + self._delete_test() def suite():
MediaFile: use older text encodings in ID3v2.3 mode I am trying to create an auto-tagging configuration in which my tags are saved in ID3v2.3 (as 2.4 lacks compatibility with some players I use) and I like the cover art to be embedded in each music file. However, the cover art of the output files is not recognised by OS X 10.9.4 (i.e. in Finder) or iTunes. Here is a simple configuration with which the problem occurs: ``` yaml directory: /mnt/data/home/Music plugins: mbsync fetchart embedart per_disc_numbering: yes id3v23: yes import: copy: yes write: yes paths: default: $albumartist/$album%aunique{}/$disc-$track $title ``` When I comment the `id3v23: yes` option the covers of the output files are correctly recognised in Mac. The cover art of the ID3v2.3 output files is recognised in Windows, so it seems a Mac-specific issue. Strangely enough, the input files I used already have ID3v2.3 tags and embedded cover art and are correctly recognised in Mac. Below you have a diff of the ID3v2.3 tags (sorted by name) between an input and an output file taken with `mid3v2`: ``` diff 1c1,2 < APIC= (image/jpeg, 32205 bytes) --- > APIC= (image/jpeg, 111083 bytes) > COMM=iTunNORM='eng'= 00001700 00001700 00003981 00003981 00000000 00000000 00008187 00008187 00000000 00000000 3a5,6 > TBPM=0 > TCMP=0 5a9 > TLAN=eng 12a17,19 > TXXX=Album Artist Credit=The Beatles > TXXX=ALBUMARTISTSORT=Beatles, The > TXXX=Artist Credit=The Beatles 14a22 > TXXX=MusicBrainz Album Comment=UK mono 17c25 < TXXX=MusicBrainz Album Status=official --- > TXXX=MusicBrainz Album Status=Official 27a36 > USLT=[unrepresentable data] ```
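The patch above reduces to an encode-and-fall-back decision. A minimal sketch of that choice (requires `mutagen`; the `pick_apic_encoding` helper is hypothetical, extracted here only to illustrate the logic):

```python
import mutagen.id3

def pick_apic_encoding(desc: str) -> int:
    # Prefer latin-1 for the APIC description (for OS X/iTunes
    # compatibility); fall back to UTF-16 only when necessary.
    try:
        desc.encode("latin-1")
    except UnicodeEncodeError:
        return mutagen.id3.Encoding.UTF16
    return mutagen.id3.Encoding.LATIN1

assert pick_apic_encoding("") == mutagen.id3.Encoding.LATIN1
assert pick_apic_encoding("foo") == mutagen.id3.Encoding.LATIN1
assert pick_apic_encoding("\u0185") == mutagen.id3.Encoding.UTF16
```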
0.0
02bd7946c1f6dd84e0fd28d152f4bca5c09d9e0a
[ "test/test_mediafile_edge.py::ID3v23Test::test_image_encoding" ]
[ "test/test_mediafile_edge.py::EdgeTest::test_discc_alternate_field", "test/test_mediafile_edge.py::EdgeTest::test_emptylist", "test/test_mediafile_edge.py::EdgeTest::test_old_ape_version_bitrate", "test/test_mediafile_edge.py::EdgeTest::test_only_magic_bytes_jpeg", "test/test_mediafile_edge.py::EdgeTest::test_release_time_with_space", "test/test_mediafile_edge.py::EdgeTest::test_release_time_with_t", "test/test_mediafile_edge.py::EdgeTest::test_soundcheck_non_ascii", "test/test_mediafile_edge.py::EdgeTest::test_tempo_with_bpm", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_float_with_dot_only", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_float_with_multiple_dots", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_float_with_no_numbers", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_int_string_to_int", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_int_to_float", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_intstring_to_bool", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_negative_string_to_float", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_special_chars_to_unicode", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_string_to_bool", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_string_to_float", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_string_to_int", "test/test_mediafile_edge.py::InvalidValueToleranceTest::test_safe_cast_string_with_cruft_to_float", "test/test_mediafile_edge.py::SafetyTest::test_broken_symlink", "test/test_mediafile_edge.py::SafetyTest::test_corrupt_flac_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_corrupt_monkeys_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_corrupt_mp3_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_corrupt_mp4_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_corrupt_ogg_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_invalid_extension_raises_filetypeerror", "test/test_mediafile_edge.py::SafetyTest::test_invalid_ogg_header_raises_unreadablefileerror", "test/test_mediafile_edge.py::SafetyTest::test_magic_xml_raises_unreadablefileerror", "test/test_mediafile_edge.py::SideEffectsTest::test_opening_tagless_file_leaves_untouched", "test/test_mediafile_edge.py::MP4EncodingTest::test_unicode_label_in_m4a", "test/test_mediafile_edge.py::MP3EncodingTest::test_comment_with_latin1_encoding", "test/test_mediafile_edge.py::MissingAudioDataTest::test_bitrate_with_zero_length", "test/test_mediafile_edge.py::TypeTest::test_set_date_to_none", "test/test_mediafile_edge.py::TypeTest::test_set_replaygain_gain_to_none", "test/test_mediafile_edge.py::TypeTest::test_set_replaygain_peak_to_none", "test/test_mediafile_edge.py::TypeTest::test_set_track_to_none", "test/test_mediafile_edge.py::TypeTest::test_set_year_to_none", "test/test_mediafile_edge.py::TypeTest::test_year_integer_in_string", "test/test_mediafile_edge.py::SoundCheckTest::test_decode_handles_unicode", "test/test_mediafile_edge.py::SoundCheckTest::test_decode_zero", "test/test_mediafile_edge.py::SoundCheckTest::test_malformatted", "test/test_mediafile_edge.py::SoundCheckTest::test_round_trip", "test/test_mediafile_edge.py::SoundCheckTest::test_special_characters", 
"test/test_mediafile_edge.py::ID3v23Test::test_v23_on_non_mp3_is_noop", "test/test_mediafile_edge.py::ID3v23Test::test_v23_year_tag", "test/test_mediafile_edge.py::ID3v23Test::test_v24_year_tag" ]
{ "failed_lite_validators": [], "has_test_patch": true, "is_lite": true }
2016-11-18 00:14:15+00:00
mit
1,309
beetbox__beets-3167
diff --git a/beetsplug/hook.py b/beetsplug/hook.py index de44c1b8..ac0c4aca 100644 --- a/beetsplug/hook.py +++ b/beetsplug/hook.py @@ -18,7 +18,6 @@ from __future__ import division, absolute_import, print_function import string import subprocess -import six from beets.plugins import BeetsPlugin from beets.util import shlex_split, arg_encoding @@ -46,10 +45,8 @@ class CodingFormatter(string.Formatter): See str.format and string.Formatter.format. """ - try: + if isinstance(format_string, bytes): format_string = format_string.decode(self._coding) - except UnicodeEncodeError: - pass return super(CodingFormatter, self).format(format_string, *args, **kwargs) @@ -96,10 +93,7 @@ class HookPlugin(BeetsPlugin): return # Use a string formatter that works on Unicode strings. - if six.PY2: - formatter = CodingFormatter(arg_encoding()) - else: - formatter = string.Formatter() + formatter = CodingFormatter(arg_encoding()) command_pieces = shlex_split(command) diff --git a/docs/changelog.rst b/docs/changelog.rst index f311571d..7e98a836 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -150,6 +150,8 @@ Fixes: * :doc:`/plugins/badfiles`: Avoid a crash when the underlying tool emits undecodable output. :bug:`3165` +* :doc:`/plugins/hook`: Fix byte string interpolation in hook commands. + :bug:`2967` :bug:`3167` .. _python-itunes: https://github.com/ocelma/python-itunes diff --git a/setup.py b/setup.py index 648e6d4d..ae8f76ff 100755 --- a/setup.py +++ b/setup.py @@ -88,10 +88,14 @@ setup( install_requires=[ 'six>=1.9', 'mutagen>=1.33', - 'munkres~=1.0.0', 'unidecode', 'musicbrainzngs>=0.4', 'pyyaml', + ] + [ + # Avoid a version of munkres incompatible with Python 3. + 'munkres~=1.0.0' if sys.version_info < (3, 5, 0) else + 'munkres!=1.1.0,!=1.1.1' if sys.version_info < (3, 6, 0) else + 'munkres>=1.0.0', ] + ( # Use the backport of Python 3.4's `enum` module. ['enum34>=1.0.4'] if sys.version_info < (3, 4, 0) else []
beetbox/beets
80f4f0a0f235b9764f516990c174f6b73695175b
diff --git a/test/test_hook.py b/test/test_hook.py index 39fd0895..81363c73 100644 --- a/test/test_hook.py +++ b/test/test_hook.py @@ -110,6 +110,25 @@ class HookTest(_common.TestCase, TestHelper): self.assertTrue(os.path.isfile(path)) os.remove(path) + def test_hook_bytes_interpolation(self): + temporary_paths = [ + get_temporary_path().encode('utf-8') + for i in range(self.TEST_HOOK_COUNT) + ] + + for index, path in enumerate(temporary_paths): + self._add_hook('test_bytes_event_{0}'.format(index), + 'touch "{path}"') + + self.load_plugins('hook') + + for index, path in enumerate(temporary_paths): + plugins.send('test_bytes_event_{0}'.format(index), path=path) + + for path in temporary_paths: + self.assertTrue(os.path.isfile(path)) + os.remove(path) + def suite(): return unittest.TestLoader().loadTestsFromName(__name__)
hook: Interpolate paths (and other bytestrings) correctly into commands ### Problem I have the following configuration for the Hook plugin in my config.yaml: ``` hook: hooks: - event: album_imported command: /usr/bin/ls -l "{album.path}" ``` This is just a test to see how beets presents the path values. It appears that the paths are returned as bytes objects rather than strings. This is problematic when using the path values as arguments for external shell scripts. As can be seen below, the shell is unable to use the value provided by {album.path}. ```sh $ beet -vv import /tmp/music/new/Al\ Di\ Meola\ -\ Elegant\ Gypsy/ ``` Led to this problem: ``` hook: running command "/usr/bin/ls -l b'/tmp/music/FLAC/Al Di Meola/Elegant Gypsy'" for event album_imported /usr/bin/ls: cannot access "b'/tmp/music/FLAC/Al Di Meola/Elegant Gypsy'": No such file or directory ``` The path "/tmp/music/FLAC/Al Di Meola/Elegant Gypsy" does exist on the filesystem after the import is complete. ### Setup * OS: Arch Linux * Python version: 3.4.5 * beets version: 1.4.7 * Turning off plugins made problem go away (yes/no): This issue is related to a plugin, so I didn't turn them off My configuration (output of `beet config`) is: ```yaml plugins: inline convert badfiles info missing lastgenre fetchart mbsync scrub smartplaylist hook directory: /tmp/music/FLAC library: ~/.config/beets/library.db import: copy: yes write: yes log: ~/.config/beets/import.log languages: en per_disc_numbering: yes paths: default: $albumartist/$album%aunique{}/$disc_and_track - $title comp: $albumartist/$album%aunique{}/$disc_and_track - $title item_fields: disc_and_track: u'%01i-%02i' % (disc, track) if disctotal > 1 else u'%02i' % (track) ui: color: yes match: ignored: missing_tracks unmatched_tracks convert: copy_album_art: yes dest: /tmp/music/ogg embed: yes never_convert_lossy_files: yes format: ogg formats: ogg: command: oggenc -Q -q 4 -o $dest $source extension: ogg aac: command: ffmpeg -i $source -y -vn -acodec aac -aq 1 $dest extension: m4a alac: command: ffmpeg -i $source -y -vn -acodec alac $dest extension: m4a flac: ffmpeg -i $source -y -vn -acodec flac $dest mp3: ffmpeg -i $source -y -vn -aq 2 $dest opus: ffmpeg -i $source -y -vn -acodec libopus -ab 96k $dest wma: ffmpeg -i $source -y -vn -acodec wmav2 -vn $dest pretend: no threads: 4 max_bitrate: 500 auto: no tmpdir: quiet: no paths: {} no_convert: '' album_art_maxwidth: 0 lastgenre: force: yes prefer_specific: no min_weight: 20 count: 4 separator: '; ' whitelist: yes fallback: canonical: no source: album auto: yes fetchart: sources: filesystem coverart amazon albumart auto: yes minwidth: 0 maxwidth: 0 enforce_ratio: no cautious: no cover_names: - cover - front - art - album - folder google_key: REDACTED google_engine: 001442825323518660753:hrh5ch1gjzm fanarttv_key: REDACTED store_source: no hook: hooks: [{event: album_imported, command: '/usr/bin/ls -l "{album.path}"'}] pathfields: {} album_fields: {} scrub: auto: yes missing: count: no total: no album: no smartplaylist: relative_to: playlist_dir: . auto: yes playlists: [] ``` I created a Python 2 virtual environment, installed beets and any dependencies in to that virtualenv, cleaned my test library, and imported the same files using the same config.yaml. 
This time the shell was able to use the path value returned by the hook configuration: ``` hook: running command "/usr/bin/ls -l /tmp/music/FLAC/Al Di Meola/Elegant Gypsy" for event album_imported total 254944 -rw-r--r-- 1 mike mike 50409756 Jun 24 13:46 01 - Flight Over Rio.flac -rw-r--r-- 1 mike mike 43352354 Jun 24 13:46 02 - Midnight Tango.flac -rw-r--r-- 1 mike mike 7726389 Jun 24 13:46 03 - Percussion Intro.flac -rw-r--r-- 1 mike mike 32184646 Jun 24 13:46 04 - Mediterranean Sundance.flac -rw-r--r-- 1 mike mike 45770796 Jun 24 13:46 05 - Race With Devil on Spanish Highway.flac -rw-r--r-- 1 mike mike 10421006 Jun 24 13:46 06 - Lady of Rome, Sister of Brazil.flac -rw-r--r-- 1 mike mike 65807504 Jun 24 13:46 07 - Elegant Gypsy Suite.flac -rw-r--r-- 1 mike mike 5366515 Jun 24 13:46 cover.jpg ``` I'm guessing this is due to a data type difference between Python 2 and Python 3.
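For reference, the fix routes every hook command through a bytes-aware formatter. A simplified sketch of the idea (a stand-in for the plugin's `CodingFormatter`, which in beets also decodes bytes field values via `convert_field`):

```python
import string

class CodingFormatter(string.Formatter):
    """Accept bytes format strings and decode bytes field values, so
    paths interpolate as text rather than as b'...' reprs."""

    def __init__(self, coding):
        self._coding = coding

    def format(self, format_string, *args, **kwargs):
        if isinstance(format_string, bytes):
            format_string = format_string.decode(self._coding)
        return super().format(format_string, *args, **kwargs)

    def convert_field(self, value, conversion):
        converted = super().convert_field(value, conversion)
        if isinstance(converted, bytes):
            return converted.decode(self._coding)
        return converted

fmt = CodingFormatter("utf-8")
print(fmt.format('touch "{path}"', path=b"/tmp/m\xc3\xbcsic"))
# touch "/tmp/müsic"  -- not touch "b'/tmp/m\xc3\xbcsic'"
```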
0.0
80f4f0a0f235b9764f516990c174f6b73695175b
[ "test/test_hook.py::HookTest::test_hook_bytes_interpolation" ]
[ "test/test_hook.py::HookTest::test_hook_argument_substitution", "test/test_hook.py::HookTest::test_hook_event_substitution", "test/test_hook.py::HookTest::test_hook_no_arguments" ]
{ "failed_lite_validators": [ "has_media", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2019-02-25 10:02:14+00:00
mit
1,310
beetbox__beets-3209
diff --git a/beets/random.py b/beets/random.py new file mode 100644 index 00000000..5387da4d --- /dev/null +++ b/beets/random.py @@ -0,0 +1,115 @@ +# -*- coding: utf-8 -*- +# This file is part of beets. +# Copyright 2016, Philippe Mongeau. +# +# Permission is hereby granted, free of charge, to any person obtaining +# a copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, +# distribute, sublicense, and/or sell copies of the Software, and to +# permit persons to whom the Software is furnished to do so, subject to +# the following conditions: +# +# The above copyright notice and this permission notice shall be +# included in all copies or substantial portions of the Software. + +"""Get a random song or album from the library. +""" +from __future__ import division, absolute_import, print_function + +import random +from operator import attrgetter +from itertools import groupby + + +def _length(obj, album): + """Get the duration of an item or album. + """ + if album: + return sum(i.length for i in obj.items()) + else: + return obj.length + + +def _equal_chance_permutation(objs, field='albumartist', random_gen=None): + """Generate (lazily) a permutation of the objects where every group + with equal values for `field` have an equal chance of appearing in + any given position. + """ + rand = random_gen or random + + # Group the objects by artist so we can sample from them. + key = attrgetter(field) + objs.sort(key=key) + objs_by_artists = {} + for artist, v in groupby(objs, key): + objs_by_artists[artist] = list(v) + + # While we still have artists with music to choose from, pick one + # randomly and pick a track from that artist. + while objs_by_artists: + # Choose an artist and an object for that artist, removing + # this choice from the pool. + artist = rand.choice(list(objs_by_artists.keys())) + objs_from_artist = objs_by_artists[artist] + i = rand.randint(0, len(objs_from_artist) - 1) + yield objs_from_artist.pop(i) + + # Remove the artist if we've used up all of its objects. + if not objs_from_artist: + del objs_by_artists[artist] + + +def _take(iter, num): + """Return a list containing the first `num` values in `iter` (or + fewer, if the iterable ends early). + """ + out = [] + for val in iter: + out.append(val) + num -= 1 + if num <= 0: + break + return out + + +def _take_time(iter, secs, album): + """Return a list containing the first values in `iter`, which should + be Item or Album objects, that add up to the given amount of time in + seconds. + """ + out = [] + total_time = 0.0 + for obj in iter: + length = _length(obj, album) + if total_time + length <= secs: + out.append(obj) + total_time += length + return out + + +def random_objs(objs, album, number=1, time=None, equal_chance=False, + random_gen=None): + """Get a random subset of the provided `objs`. + + If `number` is provided, produce that many matches. Otherwise, if + `time` is provided, instead select a list whose total time is close + to that number of minutes. If `equal_chance` is true, give each + artist an equal chance of being included so that artists with more + songs are not represented disproportionately. + """ + rand = random_gen or random + + # Permute the objects either in a straightforward way or an + # artist-balanced way. + if equal_chance: + perm = _equal_chance_permutation(objs) + else: + perm = objs + rand.shuffle(perm) # N.B. This shuffles the original list. 
+ + # Select objects by time our count. + if time: + return _take_time(perm, time * 60, album) + else: + return _take(perm, number) diff --git a/beets/util/__init__.py b/beets/util/__init__.py index f3dedcb4..f5ad2da2 100644 --- a/beets/util/__init__.py +++ b/beets/util/__init__.py @@ -283,13 +283,13 @@ def prune_dirs(path, root=None, clutter=('.DS_Store', 'Thumbs.db')): continue clutter = [bytestring_path(c) for c in clutter] match_paths = [bytestring_path(d) for d in os.listdir(directory)] - if fnmatch_all(match_paths, clutter): - # Directory contains only clutter (or nothing). - try: + try: + if fnmatch_all(match_paths, clutter): + # Directory contains only clutter (or nothing). shutil.rmtree(directory) - except OSError: + else: break - else: + except OSError: break diff --git a/beetsplug/mpdstats.py b/beetsplug/mpdstats.py index 423cde2b..876dcacd 100644 --- a/beetsplug/mpdstats.py +++ b/beetsplug/mpdstats.py @@ -256,10 +256,6 @@ class MPDStats(object): if not path: return - if is_url(path): - self._log.info(u'playing stream {0}', displayable_path(path)) - return - played, duration = map(int, status['time'].split(':', 1)) remaining = duration - played @@ -276,6 +272,14 @@ class MPDStats(object): if diff <= self.time_threshold: return + if self.now_playing['path'] == path and played == 0: + self.handle_song_change(self.now_playing) + + if is_url(path): + self._log.info(u'playing stream {0}', displayable_path(path)) + self.now_playing = None + return + self._log.info(u'playing {0}', displayable_path(path)) self.now_playing = { diff --git a/beetsplug/random.py b/beetsplug/random.py index 65caaf90..a8e29313 100644 --- a/beetsplug/random.py +++ b/beetsplug/random.py @@ -19,97 +19,7 @@ from __future__ import division, absolute_import, print_function from beets.plugins import BeetsPlugin from beets.ui import Subcommand, decargs, print_ -import random -from operator import attrgetter -from itertools import groupby - - -def _length(obj, album): - """Get the duration of an item or album. - """ - if album: - return sum(i.length for i in obj.items()) - else: - return obj.length - - -def _equal_chance_permutation(objs, field='albumartist'): - """Generate (lazily) a permutation of the objects where every group - with equal values for `field` have an equal chance of appearing in - any given position. - """ - # Group the objects by artist so we can sample from them. - key = attrgetter(field) - objs.sort(key=key) - objs_by_artists = {} - for artist, v in groupby(objs, key): - objs_by_artists[artist] = list(v) - - # While we still have artists with music to choose from, pick one - # randomly and pick a track from that artist. - while objs_by_artists: - # Choose an artist and an object for that artist, removing - # this choice from the pool. - artist = random.choice(list(objs_by_artists.keys())) - objs_from_artist = objs_by_artists[artist] - i = random.randint(0, len(objs_from_artist) - 1) - yield objs_from_artist.pop(i) - - # Remove the artist if we've used up all of its objects. - if not objs_from_artist: - del objs_by_artists[artist] - - -def _take(iter, num): - """Return a list containing the first `num` values in `iter` (or - fewer, if the iterable ends early). - """ - out = [] - for val in iter: - out.append(val) - num -= 1 - if num <= 0: - break - return out - - -def _take_time(iter, secs, album): - """Return a list containing the first values in `iter`, which should - be Item or Album objects, that add up to the given amount of time in - seconds. 
- """ - out = [] - total_time = 0.0 - for obj in iter: - length = _length(obj, album) - if total_time + length <= secs: - out.append(obj) - total_time += length - return out - - -def random_objs(objs, album, number=1, time=None, equal_chance=False): - """Get a random subset of the provided `objs`. - - If `number` is provided, produce that many matches. Otherwise, if - `time` is provided, instead select a list whose total time is close - to that number of minutes. If `equal_chance` is true, give each - artist an equal chance of being included so that artists with more - songs are not represented disproportionately. - """ - # Permute the objects either in a straightforward way or an - # artist-balanced way. - if equal_chance: - perm = _equal_chance_permutation(objs) - else: - perm = objs - random.shuffle(perm) # N.B. This shuffles the original list. - - # Select objects by time our count. - if time: - return _take_time(perm, time * 60, album) - else: - return _take(perm, number) +from beets.random import random_objs def random_func(lib, opts, args): diff --git a/docs/changelog.rst b/docs/changelog.rst index b2c8437b..43b6b20f 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -195,6 +195,8 @@ Fixes: is long. Thanks to :user:`ray66`. :bug:`3207` :bug:`2752` +* Fix an unhandled exception when pruning empty directories. + :bug:`1996` :bug:`3209` .. _python-itunes: https://github.com/ocelma/python-itunes
beetbox/beets
ebed21f31905bd61cf89a28f07a800b78d8c5656
diff --git a/test/test_random.py b/test/test_random.py new file mode 100644 index 00000000..4c31acdd --- /dev/null +++ b/test/test_random.py @@ -0,0 +1,82 @@ +# -*- coding: utf-8 -*- +# This file is part of beets. +# Copyright 2019, Carl Suster +# +# Permission is hereby granted, free of charge, to any person obtaining +# a copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, +# distribute, sublicense, and/or sell copies of the Software, and to +# permit persons to whom the Software is furnished to do so, subject to +# the following conditions: +# +# The above copyright notice and this permission notice shall be +# included in all copies or substantial portions of the Software. + +"""Test the beets.random utilities associated with the random plugin. +""" + +from __future__ import division, absolute_import, print_function + +import unittest +from test.helper import TestHelper + +import math +from random import Random + +from beets import random + + +class RandomTest(unittest.TestCase, TestHelper): + def setUp(self): + self.lib = None + self.artist1 = 'Artist 1' + self.artist2 = 'Artist 2' + self.item1 = self.create_item(artist=self.artist1) + self.item2 = self.create_item(artist=self.artist2) + self.items = [self.item1, self.item2] + for _ in range(8): + self.items.append(self.create_item(artist=self.artist2)) + self.random_gen = Random() + self.random_gen.seed(12345) + + def tearDown(self): + pass + + def _stats(self, data): + mean = sum(data) / len(data) + stdev = math.sqrt( + sum((p - mean) ** 2 for p in data) / (len(data) - 1)) + quot, rem = divmod(len(data), 2) + if rem: + median = sorted(data)[quot] + else: + median = sum(sorted(data)[quot - 1:quot + 1]) / 2 + return mean, stdev, median + + def test_equal_permutation(self): + """We have a list of items where only one item is from artist1 and the + rest are from artist2. If we permute weighted by the artist field then + the solo track will almost always end up near the start. If we use a + different field then it'll be in the middle on average. + """ + def experiment(field, histogram=False): + """Permutes the list of items 500 times and calculates the position + of self.item1 each time. Returns stats about that position. + """ + positions = [] + for _ in range(500): + shuffled = list(random._equal_chance_permutation( + self.items, field=field, random_gen=self.random_gen)) + positions.append(shuffled.index(self.item1)) + # Print a histogram (useful for debugging). + if histogram: + for i in range(len(self.items)): + print('{:2d} {}'.format(i, '*' * positions.count(i))) + return self._stats(positions) + + mean1, stdev1, median1 = experiment('artist') + mean2, stdev2, median2 = experiment('track') + self.assertAlmostEqual(0, median1, delta=1) + self.assertAlmostEqual(len(self.items) // 2, median2, delta=1) + self.assertGreater(stdev2, stdev1)
program abortion due to exception in fnmatch_all()

Sorry, I don't know much about how to compose this kind of message! In the `__init__.py` used by duplicates, under `def prune_dirs()`, I've added a try-except clause in order to avoid the error and the aborted run. It now looks like this:

```
for directory in ancestors:
    directory = syspath(directory)
    if not os.path.exists(directory):
        # Directory gone already.
        continue
    try:
        if fnmatch_all(os.listdir(directory), clutter):
            # Directory contains only clutter (or nothing).
            try:
                shutil.rmtree(directory)
            except OSError:
                break
            except BaseException as foo:
                print(foo)
                break
        else:
            break
    except BaseException as foo:
        print(foo)
        break
```

### Problem

I haven't really understood exactly what's going wrong. (Maybe some day when I have more energy?) The program aborts with a non-intuitive error message about too many levels of symlinks.

### Setup

- Linux Mint 64 bits, 17.3, xfce
- Python 2.7
- beets version: beets version 1.3.17 plugins: duplicates

My configuration (output of `beet config`) is:

```yaml
user configuration: /home/johan/.config/beets/config.yaml
data directory: /home/johan/.config/beets
plugin paths:
Sending event: pluginload
library database: /tmp/tracks.db
library directory: /mnt/qnap-212
Sending event: library_opened
verbose: 1
library: /tmp/tracks.db
per_disc_numbering: yes
statefile: prints.pickle
chroma:
    auto: yes
original_date: yes
ftintitle:
    format: (feat. {0})
lastgenre:
    auto: no
    count: 5
    force: no
    source: track
    separator: '/ '
    whitelist: ~/.config/beets/whitelist.txt
    canonical: ~/.config/beets/canonical.txt
duplicates:
    keys: [acoustid_fingerprint]
    merge: yes
    tiebreak:
        items: [bitrate]
    count: no
    full: no
    format: ''
    move: ''
    tag: ''
    path: no
    copy: ''
    album: no
    strict: no
    checksum: ''
    delete: no
plugins: chroma duplicates
directory: /mnt/qnap-212/
import:
    write: yes
    copy: no
    move: no
    link: no
    delete: no
    resume: ask
    incremental: yes
    timid: no
    log: prints_importlog.txt
    autotag: no
    singletons: yes
    default_action: apply
    detail: no
    flat: no
    group_albums: no
    pretend: no
languages:
- en
- de
- es
- sv
- fr
- pt
- fi
- it
log: printslog.txt
paths:
    singleton: '%if{$mb_trackid,mb_}Singles/%if{$artist,%lower{%asciify{$artist}},_}/%if{$album,%lower{%asciify{$album}},_}%if{$year, [$year]}/%if{$disc,$disc-}%if{$track,$track. }%if{$title,$title,_}%if{$album, [$album]}'
    comp: '%if{$mb_albumid,mb_}Collections/%lower{%asciify{$album}}_%aunique{}/%if{$disc,$disc-}%if{$track,$track. }%if{$title,$title,_}%if{$artist, [$artist]}'
    default: '%if{$mb_albumid,mb_}Albums/%if{$albumartist,%lower{%asciify{$albumartist}}_}/%lower{%asciify{$album}}_%aunique{}/%if{$disc,$disc-}%if{$track,$track. }%if{$title,$title,_}'
replace:
    '[\\/\xa0]': _
    '[`\x27]': "\u2019"
    '[\"]': "\u201D"
    \.\.\.: "\u2026"
    ^\-: _
    ^\.: _
    '[\x00-\x1f]': _
    '[<>:"\?\*\|]': _
    \.$: _
    \s+$: ''
    ^\s+: ''
match:
    distance_weights:
        artist: 2.0
        album: 2.5
        year: 1.0
        label: 0.5
        catalognum: 0.5
        albumdisambig: 0.5
        album_id: 2.0
        tracks: 2.0
        missing_tracks: 0.1
        unmatched_tracks: 5.0
        track_title: 2.0
        track_artist: 2.0
        track_index: 1.0
        track_length: 9.0
        track_id: 2.0
    preferred:
        countries: []
        media: []
        original_year: yes
    ignored: [track_length unmatched_tracks]
    track_length_grace: 3
    track_length_max: 15
```
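The library-side fix in the patch above accomplishes the same goal without bare `except BaseException` clauses: it widens the `try` block so that an `OSError` raised while listing the directory (e.g. the "too many levels of symbolic links" case) stops the pruning walk instead of aborting the run. A standalone sketch of that control flow (`prune_dir` is a hypothetical, simplified stand-in for beets' `prune_dirs`):

```python
import fnmatch
import os
import shutil

def prune_dir(directory, clutter=(".DS_Store", "Thumbs.db")):
    """Remove `directory` if it contains only clutter (or nothing)."""
    try:
        entries = os.listdir(directory)  # may raise OSError (symlink loops, ...)
        if all(any(fnmatch.fnmatch(e, pat) for pat in clutter)
               for e in entries):
            shutil.rmtree(directory)     # may also raise OSError
            return True
    except OSError:
        pass  # treat unreadable/unremovable directories as non-prunable
    return False
```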
0.0
ebed21f31905bd61cf89a28f07a800b78d8c5656
[ "test/test_random.py::RandomTest::test_equal_permutation" ]
[]
{ "failed_lite_validators": [ "has_added_files", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2019-04-07 06:36:36+00:00
mit
1,311
beetbox__beets-3688
diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml index f9cce8d2..08e7548f 100644 --- a/.github/workflows/ci.yaml +++ b/.github/workflows/ci.yaml @@ -6,7 +6,7 @@ jobs: strategy: matrix: platform: [ubuntu-latest] - python-version: [2.7, 3.5, 3.6, 3.7, 3.8] + python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9-dev] env: PY_COLORS: 1 @@ -82,4 +82,4 @@ jobs: python -m pip install tox sphinx - name: Lint with flake8 - run: tox -e py-lint \ No newline at end of file + run: tox -e py-lint diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 6c024e3d..dc861771 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -247,7 +247,7 @@ guidelines to follow: with the ``logging`` module, feed it through this function. Editor Settings -^^^^^^^^^^^^^^^ +--------------- Personally, I work on beets with `vim <http://www.vim.org/>`__. Here are some ``.vimrc`` lines that might help with PEP 8-compliant Python @@ -285,6 +285,11 @@ Other ways to run the tests: You can also see the latest test results on `Linux`_ and on `Windows`_. +Note, if you are on Windows and are seeing errors running tox, it may be related to `this issue`_, +in which case you may have to install tox v3.8.3 e.g. ``python -m pip install tox=3.8.3`` + +.. _this issue: https://github.com/tox-dev/tox/issues/1550 + Coverage ^^^^^^^^ diff --git a/beets/config_default.yaml b/beets/config_default.yaml index 0fd6eb59..c75778b8 100644 --- a/beets/config_default.yaml +++ b/beets/config_default.yaml @@ -44,6 +44,7 @@ replace: '^\s+': '' '^-': _ path_sep_replace: _ +drive_sep_replace: _ asciify_paths: false art_filename: cover max_filename_length: 0 diff --git a/beets/dbcore/db.py b/beets/dbcore/db.py index b13f2638..46b47a2e 100755 --- a/beets/dbcore/db.py +++ b/beets/dbcore/db.py @@ -19,6 +19,7 @@ from __future__ import division, absolute_import, print_function import time import os +import re from collections import defaultdict import threading import sqlite3 @@ -84,6 +85,11 @@ class FormattedMapping(Mapping): if self.for_path: sep_repl = beets.config['path_sep_replace'].as_str() + sep_drive = beets.config['drive_sep_replace'].as_str() + + if re.match(r'^\w:', value): + value = re.sub(r'(?<=^\w):', sep_drive, value) + for sep in (os.path.sep, os.path.altsep): if sep: value = value.replace(sep, sep_repl) diff --git a/docs/changelog.rst b/docs/changelog.rst index 0f41c38e..64e6ab85 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -235,6 +235,9 @@ Fixes: * :doc:`/plugins/ipfs`: Fix Python 3 compatibility. Thanks to :user:`musoke`. :bug:`2554` +* Fix a bug that caused metadata starting with something resembling a drive + letter to be incorrectly split into an extra directory after the colon. + :bug:`3685` For plugin developers:
beetbox/beets
a2f66a042727964e18d698a7b1676a802b5f7050
diff --git a/.github/workflows/integration_test.yaml b/.github/workflows/integration_test.yaml index 05623bf2..1e8b3a77 100644 --- a/.github/workflows/integration_test.yaml +++ b/.github/workflows/integration_test.yaml @@ -12,10 +12,10 @@ jobs: steps: - uses: actions/checkout@v2 - - name: Set up Python 3.8 + - name: Set up latest Python version uses: actions/setup-python@v2 with: - python-version: 3.8 + python-version: 3.9-dev - name: Install base dependencies run: | diff --git a/test/test_files.py b/test/test_files.py index f3177967..13a8b440 100644 --- a/test/test_files.py +++ b/test/test_files.py @@ -102,6 +102,25 @@ class MoveTest(_common.TestCase): self.i.move() self.assertEqual(self.i.path, old_path) + def test_move_file_with_colon(self): + self.i.artist = u'C:DOS' + self.i.move() + self.assertIn('C_DOS', self.i.path.decode()) + + def test_move_file_with_multiple_colons(self): + print(beets.config['replace']) + self.i.artist = u'COM:DOS' + self.i.move() + self.assertIn('COM_DOS', self.i.path.decode()) + + def test_move_file_with_colon_alt_separator(self): + old = beets.config['drive_sep_replace'] + beets.config["drive_sep_replace"] = '0' + self.i.artist = u'C:DOS' + self.i.move() + self.assertIn('C0DOS', self.i.path.decode()) + beets.config["drive_sep_replace"] = old + def test_read_only_file_copied_writable(self): # Make the source file read-only. os.chmod(self.path, 0o444)
Sanitize colons in drive-letter-like positions in Windows filenames like path separators I have an album containing an artist named "F:A.R.". When doing any activity that involves moving this artist, the file is not moved to "M:\F:A.R." (I'm using unicode replacements instead of underscores), but to "M:\F:\A.R.". ### Problem Running this command in verbose (`-vv`) mode: ```sh ~ ❯ beet -vv move "artist::^F.A.R" user configuration: C:\Users\Gunther Schmidl\AppData\Roaming\beets\config.yaml data directory: C:\Users\Gunther Schmidl\AppData\Roaming\beets plugin paths: Sending event: pluginload lyrics: Disabling google source: no API key configured. library database: M:\lib.db library directory: M:\ Sending event: library_opened Moving 1 item. moving: M:\F꞉A․R․ / Muslimgauze\[1990] Manipulation (Live Extracts) / Death of Saint Jarnaii Singh Bhindranwaie\01-01 - Manipulation (Live Extracts).mp3 Sending event: before_item_moved Sending event: item_moved Sending event: database_change Sending event: database_change Sending event: cli_exit ``` Led to this problem: Here's what it looks like in the database: https://i.imgur.com/Z5lvGqE.png Here's a link to the music files that trigger the bug (if relevant): https://www.dropbox.com/s/pastihu349z6kce/01-01%20-%20Manipulation%20%28Live%20Extracts%29.mp3?dl=0 ### Setup * OS: Windows 10 v2004 * Python version: 3.8.0 * beets version: latest trunk, but also happens in latest released version * Turning off plugins made problem go away (yes/no): no My configuration (output of `beet config`) is: ```yaml lyrics: bing_lang_from: [] auto: yes bing_client_secret: REDACTED bing_lang_to: google_API_key: REDACTED google_engine_ID: REDACTED genius_api_key: REDACTED fallback: force: no local: no sources: - google - lyricwiki - musixmatch - genius directory: M:/ library: M:/lib.db paths: default: $albumartist/[$year] $album%aunique{}/$disc-$track - $title singleton: Non-Album/$artist/$title comp: Compilations/[$year] $album%aunique{}/$disc-$track - $title plugins: web mbsync discogs chroma lastgenre duplicates lyrics lastgenre: source: artist force: no whitelist: yes min_weight: 10 count: 1 fallback: canonical: no auto: yes separator: ', ' prefer_specific: no replace: \\: "\uFF3C" /: "\uFF0F" \.{3}: "\u2026" \.: "\u2024" '[\x00-\x1f]': '' <: "\uFE64" '>': "\uFE65" ':': "\uA789" '"': "\uFF02" \?: "\uFF1F" \*: "\u204E" \|: "\u2502" \s+$: '' ^\s+: '' ^-: "\u2012" path_sep_replace: "\uFF0F" discogs: apikey: REDACTED apisecret: REDACTED tokenfile: discogs_token.json source_weight: 0.5 user_token: REDACTED separator: ', ' index_tracks: no web: host: 127.0.0.1 port: 8337 cors: '' cors_supports_credentials: no reverse_proxy: no include_paths: no duplicates: album: no checksum: '' copy: '' count: no delete: no format: '' full: no keys: [] merge: no move: '' path: no tiebreak: {} strict: no tag: '' chroma: auto: yes ```
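The core of the eventual fix is a two-step regex: detect a drive-letter-like prefix, then replace only that colon, keeping the letter in place via a lookbehind. A runnable sketch (`replace_drive_sep` is a hypothetical helper isolating the logic added to `FormattedMapping`):

```python
import re

def replace_drive_sep(value, sep_drive="_"):
    # Only a single leading word character followed by ':' counts as a
    # drive-letter-like position; the lookbehind preserves the letter.
    if re.match(r"^\w:", value):
        value = re.sub(r"(?<=^\w):", sep_drive, value)
    return value

print(replace_drive_sep("C:DOS"))    # C_DOS
print(replace_drive_sep("COM:DOS"))  # COM:DOS -- left for the normal replace rules
```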
0.0
a2f66a042727964e18d698a7b1676a802b5f7050
[ "test/test_files.py::MoveTest::test_move_file_with_colon_alt_separator" ]
[ "test/test_files.py::MoveTest::test_copy_already_at_destination", "test/test_files.py::MoveTest::test_copy_arrives", "test/test_files.py::MoveTest::test_copy_does_not_depart", "test/test_files.py::MoveTest::test_hardlink_arrives", "test/test_files.py::MoveTest::test_hardlink_changes_path", "test/test_files.py::MoveTest::test_hardlink_does_not_depart", "test/test_files.py::MoveTest::test_link_arrives", "test/test_files.py::MoveTest::test_link_changes_path", "test/test_files.py::MoveTest::test_link_does_not_depart", "test/test_files.py::MoveTest::test_move_already_at_destination", "test/test_files.py::MoveTest::test_move_arrives", "test/test_files.py::MoveTest::test_move_avoids_collision_with_existing_file", "test/test_files.py::MoveTest::test_move_changes_path", "test/test_files.py::MoveTest::test_move_departs", "test/test_files.py::MoveTest::test_move_file_with_colon", "test/test_files.py::MoveTest::test_move_file_with_multiple_colons", "test/test_files.py::MoveTest::test_move_in_lib_prunes_empty_dir", "test/test_files.py::MoveTest::test_move_to_custom_dir", "test/test_files.py::MoveTest::test_read_only_file_copied_writable", "test/test_files.py::HelperTest::test_ancestry_works_on_dir", "test/test_files.py::HelperTest::test_ancestry_works_on_file", "test/test_files.py::HelperTest::test_ancestry_works_on_relative", "test/test_files.py::HelperTest::test_components_works_on_dir", "test/test_files.py::HelperTest::test_components_works_on_file", "test/test_files.py::HelperTest::test_components_works_on_relative", "test/test_files.py::HelperTest::test_forward_slash", "test/test_files.py::AlbumFileTest::test_albuminfo_move_changes_paths", "test/test_files.py::AlbumFileTest::test_albuminfo_move_copies_file", "test/test_files.py::AlbumFileTest::test_albuminfo_move_moves_file", "test/test_files.py::AlbumFileTest::test_albuminfo_move_to_custom_dir", "test/test_files.py::ArtFileTest::test_art_deleted_when_items_deleted", "test/test_files.py::ArtFileTest::test_art_moves_with_album", "test/test_files.py::ArtFileTest::test_art_moves_with_album_to_custom_dir", "test/test_files.py::ArtFileTest::test_move_last_file_moves_albumart", "test/test_files.py::ArtFileTest::test_move_not_last_file_does_not_move_albumart", "test/test_files.py::ArtFileTest::test_setart_copies_image", "test/test_files.py::ArtFileTest::test_setart_sets_permissions", "test/test_files.py::ArtFileTest::test_setart_to_conflicting_file_gets_new_path", "test/test_files.py::ArtFileTest::test_setart_to_existing_art_works", "test/test_files.py::ArtFileTest::test_setart_to_existing_but_unset_art_works", "test/test_files.py::RemoveTest::test_removing_item_outside_of_library_deletes_nothing", "test/test_files.py::RemoveTest::test_removing_last_item_in_album_with_albumart_prunes_dir", "test/test_files.py::RemoveTest::test_removing_last_item_preserves_library_dir", "test/test_files.py::RemoveTest::test_removing_last_item_preserves_nonempty_dir", "test/test_files.py::RemoveTest::test_removing_last_item_prunes_dir_with_blacklisted_file", "test/test_files.py::RemoveTest::test_removing_last_item_prunes_empty_dir", "test/test_files.py::RemoveTest::test_removing_without_delete_leaves_file", "test/test_files.py::SoftRemoveTest::test_soft_remove_deletes_file", "test/test_files.py::SoftRemoveTest::test_soft_remove_silent_on_no_file", "test/test_files.py::SafeMoveCopyTest::test_self_copy", "test/test_files.py::SafeMoveCopyTest::test_self_move", "test/test_files.py::SafeMoveCopyTest::test_successful_copy", 
"test/test_files.py::SafeMoveCopyTest::test_successful_move", "test/test_files.py::SafeMoveCopyTest::test_unsuccessful_copy", "test/test_files.py::SafeMoveCopyTest::test_unsuccessful_move", "test/test_files.py::PruneTest::test_prune_existent_directory", "test/test_files.py::PruneTest::test_prune_nonexistent_directory", "test/test_files.py::WalkTest::test_ignore_directory", "test/test_files.py::WalkTest::test_ignore_everything", "test/test_files.py::WalkTest::test_ignore_file", "test/test_files.py::WalkTest::test_sorted_files", "test/test_files.py::UniquePathTest::test_conflicting_file_appends_1", "test/test_files.py::UniquePathTest::test_conflicting_file_appends_higher_number", "test/test_files.py::UniquePathTest::test_conflicting_file_with_number_increases_number", "test/test_files.py::UniquePathTest::test_new_file_unchanged", "test/test_files.py::MkDirAllTest::test_child_does_not_exist", "test/test_files.py::MkDirAllTest::test_parent_exists" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_media", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2020-07-25 18:22:12+00:00
mit
1,312
beetbox__beets-3702
diff --git a/beetsplug/the.py b/beetsplug/the.py index 238aec32..dfc58817 100644 --- a/beetsplug/the.py +++ b/beetsplug/the.py @@ -23,7 +23,7 @@ from beets.plugins import BeetsPlugin __author__ = '[email protected]' __version__ = '1.1' -PATTERN_THE = u'^[the]{3}\\s' +PATTERN_THE = u'^the\\s' PATTERN_A = u'^[a][n]?\\s' FORMAT = u'{0}, {1}' diff --git a/docs/changelog.rst b/docs/changelog.rst index 64e6ab85..4a87f7f0 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -145,6 +145,9 @@ New features: Fixes: +* :doc:`/plugins/the`: Fixed incorrect regex for 'the' that matched any + 3-letter combination of the letters t, h, e. + :bug:`3701` * :doc:`/plugins/fetchart`: Fixed a bug that caused fetchart to not take environment variables such as proxy servers into account when making requests :bug:`3450`
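The one-character-class bug fixed above is easy to demonstrate with the standard `re` module (the plugin applies its patterns case-insensitively, so lowercase inputs suffice here):

```python
import re

# '[the]{3}' is a character class repeated three times, so the old pattern
# matches ANY three letters drawn from {t, h, e} followed by whitespace.
print(bool(re.match(r"^[the]{3}\s", "tet travailleur")))  # True  (the bug)
print(bool(re.match(r"^[the]{3}\s", "the office")))       # True
print(bool(re.match(r"^the\s", "tet travailleur")))       # False (the fix)
print(bool(re.match(r"^the\s", "the office")))            # True
```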
beetbox/beets
4e9346942136d86b2250b9299bcb6e616a57c41f
diff --git a/test/test_the.py b/test/test_the.py index 263446b9..1fc48895 100644 --- a/test/test_the.py +++ b/test/test_the.py @@ -36,6 +36,8 @@ class ThePluginTest(_common.TestCase): u'A Thing, An') self.assertEqual(ThePlugin().unthe(u'the An Arse', PATTERN_A), u'the An Arse') + self.assertEqual(ThePlugin().unthe(u'TET - Travailleur', PATTERN_THE), + u'TET - Travailleur') def test_unthe_with_strip(self): config['the']['strip'] = True
"the" plugin uses incorrect regex ### Problem Running this command: ```sh ~ ❯ beet move artist:trance -p Moving 40 items. M:\TET - Travailleur En Trance\[2008] Cobra Coded Escalation\01-01 - Cobra Reporting In.mp3 -> M:\‒ Travailleur En Trance, TET\[2008] Cobra Coded Escalation\01-01 - Cobra Reporting In.mp3 ``` Led to this problem: "TET" is recognized by the "the" plugin as something it should move. This is because the regex used in the.py, line 26, is ``` PATTERN_THE = u'^[the]{3}\\s' ``` which matches "TET". It should probably be: ``` PATTERN_THE = u'^the\\s' ``` ### Setup * OS: Windows 10 2004 * Python version: 3.8 * beets version: latest trunk * Turning off plugins made problem go away (yes/no): obviously, if I disable 'the' it no longer does this
0.0
4e9346942136d86b2250b9299bcb6e616a57c41f
[ "test/test_the.py::ThePluginTest::test_unthe_with_default_patterns" ]
[ "test/test_the.py::ThePluginTest::test_custom_format", "test/test_the.py::ThePluginTest::test_custom_pattern", "test/test_the.py::ThePluginTest::test_template_function_with_defaults", "test/test_the.py::ThePluginTest::test_unthe_with_strip" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2020-07-30 20:48:38+00:00
mit
1,313
beetbox__beets-3805
diff --git a/beetsplug/keyfinder.py b/beetsplug/keyfinder.py index a75b8d97..702003f0 100644 --- a/beetsplug/keyfinder.py +++ b/beetsplug/keyfinder.py @@ -76,7 +76,14 @@ class KeyFinderPlugin(BeetsPlugin): item.path) continue - key_raw = output.rsplit(None, 1)[-1] + try: + key_raw = output.rsplit(None, 1)[-1] + except IndexError: + # Sometimes keyfinder-cli returns 0 but with no key, usually + # when the file is silent or corrupt, so we log and skip. + self._log.error(u'no key returned for path: {0}', item.path) + continue + try: key = util.text_string(key_raw) except UnicodeDecodeError: diff --git a/docs/changelog.rst b/docs/changelog.rst index 0de3b15a..41221b1f 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -282,6 +282,8 @@ Fixes: :bug:`3773` :bug:`3774` * Fix a bug causing PIL to generate poor quality JPEGs when resizing artwork. :bug:`3743` +* :doc:`plugins/keyfinder`: Catch output from ``keyfinder-cli`` that is missing key. + :bug:`2242` For plugin developers:
beetbox/beets
9657919968989d67f3d09a72ef045e5b9d057939
diff --git a/test/test_keyfinder.py b/test/test_keyfinder.py index a9ac43a2..c8735e47 100644 --- a/test/test_keyfinder.py +++ b/test/test_keyfinder.py @@ -76,6 +76,16 @@ class KeyFinderTest(unittest.TestCase, TestHelper): item.load() self.assertEqual(item['initial_key'], 'F') + def test_no_key(self, command_output): + item = Item(path='/file') + item.add(self.lib) + + command_output.return_value = util.CommandOutput(b"", b"") + self.run_command('keyfinder') + + item.load() + self.assertEqual(item['initial_key'], None) + def suite(): return unittest.TestLoader().loadTestsFromName(__name__)
keyfinder: Output parsing error ### Problem Running this command in verbose (`-vv`) mode: ``` sh $ beet -vv keyfinder anything ``` Led to this problem: ``` user configuration: /home/diomekes/.config/beets/config.yaml data directory: /home/diomekes/.config/beets plugin paths: Sending event: pluginload inline: adding item field disc_and_track library database: /home/diomekes/.config/beets/library.db library directory: /home/diomekes/media/music Sending event: library_opened Traceback (most recent call last): File "/usr/bin/beet", line 9, in <module> load_entry_point('beets==1.3.19', 'console_scripts', 'beet')() File "/usr/lib/python2.7/site-packages/beets/ui/__init__.py", line 1266, in main _raw_main(args) File "/usr/lib/python2.7/site-packages/beets/ui/__init__.py", line 1253, in _raw_main subcommand.func(lib, suboptions, subargs) File "/usr/lib/python2.7/site-packages/beetsplug/keyfinder.py", line 48, in command self.find_key(lib.items(ui.decargs(args)), write=ui.should_write()) File "/usr/lib/python2.7/site-packages/beetsplug/keyfinder.py", line 74, in find_key key_raw = output.rsplit(None, 1)[-1] IndexError: list index out of range ``` keyfinder-cli works if run directly ### Setup - OS: archlinux - Python version: 2.7.12 - beets version: 1.3.19 - Turning off plugins made problem go away (yes/no): problem is with keyfinder plugin only - libkeyfinder-git 239.0a5ec7f-1 - keyfinder-cli-git 49.40a41ab-1 My configuration (output of `beet config`) is: ``` yaml ... keyfinder: bin: keyfinder-cli auto: yes overwrite: no plugins: badfiles chroma convert duplicates fetchart fromfilename fuzzy info inline keyfinder lastgenre lyrics mbcollection mbsync missing play random scrub smartplaylist zero ... ```
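The crash is reproducible in two lines: `rsplit` on empty output returns an empty list, and indexing it raises. A sketch of the failure and of the behavior the fix substitutes for it:

```python
output = b""  # what keyfinder-cli can emit for silent or corrupt files

# Before the fix: rsplit(None, 1) on empty output returns [], so [-1]
# raises IndexError and aborts the whole run.
try:
    key_raw = output.rsplit(None, 1)[-1]
except IndexError:
    key_raw = None  # the fix logs an error for the path and skips the item

print(key_raw)  # None
```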
0.0
9657919968989d67f3d09a72ef045e5b9d057939
[ "test/test_keyfinder.py::KeyFinderTest::test_no_key" ]
[ "test/test_keyfinder.py::KeyFinderTest::test_add_key", "test/test_keyfinder.py::KeyFinderTest::test_add_key_on_import", "test/test_keyfinder.py::KeyFinderTest::test_do_not_overwrite", "test/test_keyfinder.py::KeyFinderTest::test_force_overwrite" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2020-12-08 03:08:01+00:00
mit
1,314
beetbox__beets-3863
diff --git a/beets/autotag/mb.py b/beets/autotag/mb.py index 03ea5b38..3ca5463c 100644 --- a/beets/autotag/mb.py +++ b/beets/autotag/mb.py @@ -223,6 +223,8 @@ def track_info(recording, index=None, medium=None, medium_index=None, if recording.get('length'): info.length = int(recording['length']) / (1000.0) + info.trackdisambig = recording.get('disambiguation') + lyricist = [] composer = [] composer_sort = [] diff --git a/beets/library.py b/beets/library.py index a060e93d..78552bb6 100644 --- a/beets/library.py +++ b/beets/library.py @@ -477,6 +477,7 @@ class Item(LibModel): 'mb_artistid': types.STRING, 'mb_albumartistid': types.STRING, 'mb_releasetrackid': types.STRING, + 'trackdisambig': types.STRING, 'albumtype': types.STRING, 'label': types.STRING, 'acoustid_fingerprint': types.STRING, diff --git a/docs/changelog.rst b/docs/changelog.rst index f8debb3d..2f31ecfe 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -180,6 +180,9 @@ New features: :bug:`3478` * Removes usage of the bs1770gain replaygain backend. Thanks to :user:`SamuelCook`. +* Added ``trackdisambig`` which stores the recording disambiguation from + MusicBrainz for each track. + :bug:`1904` Fixes:
beetbox/beets
5b6dff34a16d1c0a666877e375c7b30c5104187c
diff --git a/test/test_mb.py b/test/test_mb.py index de1ffd9a..9eca57c8 100644 --- a/test/test_mb.py +++ b/test/test_mb.py @@ -111,7 +111,8 @@ class MBAlbumInfoTest(_common.TestCase): }) return release - def _make_track(self, title, tr_id, duration, artist=False, video=False): + def _make_track(self, title, tr_id, duration, artist=False, video=False, + disambiguation=None): track = { 'title': title, 'id': tr_id, @@ -131,6 +132,8 @@ class MBAlbumInfoTest(_common.TestCase): ] if video: track['video'] = 'true' + if disambiguation: + track['disambiguation'] = disambiguation return track def test_parse_release_with_year(self): @@ -445,6 +448,18 @@ class MBAlbumInfoTest(_common.TestCase): self.assertEqual(d.tracks[1].title, 'TITLE TWO') self.assertEqual(d.tracks[2].title, 'TITLE VIDEO') + def test_track_disambiguation(self): + tracks = [self._make_track('TITLE ONE', 'ID ONE', 100.0 * 1000.0), + self._make_track('TITLE TWO', 'ID TWO', 200.0 * 1000.0, + disambiguation="SECOND TRACK")] + release = self._make_release(tracks=tracks) + + d = mb.album_info(release) + t = d.tracks + self.assertEqual(len(t), 2) + self.assertEqual(t[0].trackdisambig, None) + self.assertEqual(t[1].trackdisambig, "SECOND TRACK") + class ParseIDTest(_common.TestCase): def test_parse_id_correct(self):
Fetch MusicBrainz track disambiguation field

I download a lot of singletons, and I use the track disambiguation notes frequently. I saw that Beets has a way to note the **album** disambig, but not the **track** disambig. Can this be added in a future release?

This way, my beets.yaml config could look something like this:

```yaml
paths:
    default: $albumartist_sort/[$year] $album/$track. $artist- $title
    comp: Various Artists/[$year] $album/$track. $artist- $title
    singleton: $artist_sort/$artist- $title [$trackdisambig]
```
0.0
5b6dff34a16d1c0a666877e375c7b30c5104187c
[ "test/test_mb.py::MBAlbumInfoTest::test_track_disambiguation" ]
[ "test/test_mb.py::MBAlbumInfoTest::test_data_source", "test/test_mb.py::MBAlbumInfoTest::test_detect_various_artists", "test/test_mb.py::MBAlbumInfoTest::test_ignored_media", "test/test_mb.py::MBAlbumInfoTest::test_missing_language", "test/test_mb.py::MBAlbumInfoTest::test_no_durations", "test/test_mb.py::MBAlbumInfoTest::test_no_ignored_media", "test/test_mb.py::MBAlbumInfoTest::test_no_release_date", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_audio_data_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_video_data_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_video_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_parse_artist_sort_name", "test/test_mb.py::MBAlbumInfoTest::test_parse_asin", "test/test_mb.py::MBAlbumInfoTest::test_parse_catalognum", "test/test_mb.py::MBAlbumInfoTest::test_parse_country", "test/test_mb.py::MBAlbumInfoTest::test_parse_disambig", "test/test_mb.py::MBAlbumInfoTest::test_parse_disctitle", "test/test_mb.py::MBAlbumInfoTest::test_parse_media", "test/test_mb.py::MBAlbumInfoTest::test_parse_medium_numbers_single_medium", "test/test_mb.py::MBAlbumInfoTest::test_parse_medium_numbers_two_mediums", "test/test_mb.py::MBAlbumInfoTest::test_parse_recording_artist", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_full_date", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_type", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_with_year", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_year_month_only", "test/test_mb.py::MBAlbumInfoTest::test_parse_releasegroupid", "test/test_mb.py::MBAlbumInfoTest::test_parse_status", "test/test_mb.py::MBAlbumInfoTest::test_parse_textrepr", "test/test_mb.py::MBAlbumInfoTest::test_parse_track_indices", "test/test_mb.py::MBAlbumInfoTest::test_parse_tracks", "test/test_mb.py::MBAlbumInfoTest::test_skip_audio_data_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_skip_data_track", "test/test_mb.py::MBAlbumInfoTest::test_skip_video_data_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_skip_video_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_track_artist_overrides_recording_artist", "test/test_mb.py::MBAlbumInfoTest::test_track_length_overrides_recording_length", "test/test_mb.py::MBAlbumInfoTest::test_various_artists_defaults_false", "test/test_mb.py::ParseIDTest::test_parse_id_correct", "test/test_mb.py::ParseIDTest::test_parse_id_non_id_returns_none", "test/test_mb.py::ParseIDTest::test_parse_id_url_finds_id", "test/test_mb.py::ArtistFlatteningTest::test_alias", "test/test_mb.py::ArtistFlatteningTest::test_single_artist", "test/test_mb.py::ArtistFlatteningTest::test_two_artists", "test/test_mb.py::MBLibraryTest::test_match_album", "test/test_mb.py::MBLibraryTest::test_match_album_empty", "test/test_mb.py::MBLibraryTest::test_match_track", "test/test_mb.py::MBLibraryTest::test_match_track_empty" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2021-02-27 20:05:59+00:00
mit
1,315
beetbox__beets-3868
diff --git a/beetsplug/web/__init__.py b/beetsplug/web/__init__.py index a982809c..e80c8c29 100644 --- a/beetsplug/web/__init__.py +++ b/beetsplug/web/__init__.py @@ -59,7 +59,10 @@ def _rep(obj, expand=False): return out elif isinstance(obj, beets.library.Album): - del out['artpath'] + if app.config.get('INCLUDE_PATHS', False): + out['artpath'] = util.displayable_path(out['artpath']) + else: + del out['artpath'] if expand: out['items'] = [_rep(item) for item in obj.items()] return out diff --git a/docs/changelog.rst b/docs/changelog.rst index 2f31ecfe..b9020621 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -186,6 +186,9 @@ New features: Fixes: +* :bug:`/plugins/web`: Fixed a small bug which caused album artpath to be + redacted even when ``include_paths`` option is set. + :bug:`3866` * :bug:`/plugins/discogs`: Fixed a bug with ``index_tracks`` options that sometimes caused the index to be discarded. Also remove the extra semicolon that was added when there is no index track. diff --git a/docs/plugins/web.rst b/docs/plugins/web.rst index 4b069a94..16dd4317 100644 --- a/docs/plugins/web.rst +++ b/docs/plugins/web.rst @@ -261,6 +261,8 @@ For albums, the following endpoints are provided: * ``GET /album/5`` +* ``GET /album/5/art`` + * ``DELETE /album/5`` * ``GET /album/5,7``
beetbox/beets
461fee45ec7cb29d562d6a90e64a1adca916c1be
diff --git a/test/test_web.py b/test/test_web.py index e9ca028d..88be3136 100644 --- a/test/test_web.py +++ b/test/test_web.py @@ -31,7 +31,7 @@ class WebPluginTest(_common.LibTestCase): self.lib.add(Item(title=u'and a third')) # The following adds will create albums #1 and #2 self.lib.add(Album(album=u'album')) - self.lib.add(Album(album=u'other album')) + self.lib.add(Album(album=u'other album', artpath='/art_path_2')) web.app.config['TESTING'] = True web.app.config['lib'] = self.lib @@ -46,6 +46,14 @@ class WebPluginTest(_common.LibTestCase): self.assertEqual(response.status_code, 200) self.assertEqual(res_json['path'], u'/path_1') + def test_config_include_artpaths_true(self): + web.app.config['INCLUDE_PATHS'] = True + response = self.client.get('/album/2') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(res_json['artpath'], u'/art_path_2') + def test_config_include_paths_false(self): web.app.config['INCLUDE_PATHS'] = False response = self.client.get('/item/1') @@ -54,6 +62,14 @@ class WebPluginTest(_common.LibTestCase): self.assertEqual(response.status_code, 200) self.assertNotIn('path', res_json) + def test_config_include_artpaths_false(self): + web.app.config['INCLUDE_PATHS'] = False + response = self.client.get('/album/2') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertNotIn('artpath', res_json) + def test_get_all_items(self): response = self.client.get('/item/') res_json = json.loads(response.data.decode('utf-8'))
web: GET /album/<n> hides artpath even when INCLUDE_PATHS is set

### Problem

`beet web` provides GET operations to fetch track items and albums. By default this removes paths from the returned data, but the config setting INCLUDE_PATHS should allow the paths to be returned. This works correctly for GET /item/... but not for GET /album/...: in the album case, the artpath is unconditionally deleted from the results.

### Setup

Add to the config file:

```yaml
web:
    include_paths: true
```

Use `beet web` to make a webserver available and do a GET /album/N, where N is the album id of an album in the database which has cover art set. The JSON result should include the 'artpath' value but does not.

* OS: Linux (Debian Testing)
* Python version: 3.9.1-1
* beets version: 1.4.9-7
* Turning off plugins made problem go away (yes/no): bug in the web plugin

Note this is a small issue, although I have hit it. I have a fix (and a regression test) which I will submit as a small PR once my first PR has been finished (so I can learn from the mistakes I made in that one!).
0.0
461fee45ec7cb29d562d6a90e64a1adca916c1be
[ "test/test_web.py::WebPluginTest::test_config_include_artpaths_true" ]
[ "test/test_web.py::WebPluginTest::test_config_include_artpaths_false", "test/test_web.py::WebPluginTest::test_config_include_paths_false", "test/test_web.py::WebPluginTest::test_config_include_paths_true", "test/test_web.py::WebPluginTest::test_get_album_details", "test/test_web.py::WebPluginTest::test_get_album_empty_query", "test/test_web.py::WebPluginTest::test_get_all_albums", "test/test_web.py::WebPluginTest::test_get_all_items", "test/test_web.py::WebPluginTest::test_get_item_empty_query", "test/test_web.py::WebPluginTest::test_get_multiple_albums_by_id", "test/test_web.py::WebPluginTest::test_get_multiple_items_by_id", "test/test_web.py::WebPluginTest::test_get_simple_album_query", "test/test_web.py::WebPluginTest::test_get_simple_item_query", "test/test_web.py::WebPluginTest::test_get_single_album_by_id", "test/test_web.py::WebPluginTest::test_get_single_item_by_id", "test/test_web.py::WebPluginTest::test_get_single_item_by_path_not_found_if_not_in_library", "test/test_web.py::WebPluginTest::test_get_single_item_not_found", "test/test_web.py::WebPluginTest::test_get_stats" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2021-03-06 23:18:07+00:00
mit
1,316
beetbox__beets-3869
diff --git a/beetsplug/web/__init__.py b/beetsplug/web/__init__.py index e80c8c29..c8f979fa 100644 --- a/beetsplug/web/__init__.py +++ b/beetsplug/web/__init__.py @@ -244,7 +244,9 @@ class QueryConverter(PathConverter): def to_python(self, value): queries = value.split('/') - return [query.replace('\\', os.sep) for query in queries] + """Do not do path substitution on regex value tests""" + return [query if '::' in query else query.replace('\\', os.sep) + for query in queries] def to_url(self, value): return ','.join([v.replace(os.sep, '\\') for v in value]) diff --git a/docs/changelog.rst b/docs/changelog.rst index 929ab8cb..f39c4158 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -206,6 +206,8 @@ Other new things: Fixes: +* :bug:`/plugins/web`: Allow use of backslash in regex web queries. + :bug:`3867` * :bug:`/plugins/web`: Fixed a small bug which caused album artpath to be redacted even when ``include_paths`` option is set. :bug:`3866` diff --git a/docs/plugins/lyrics.rst b/docs/plugins/lyrics.rst index b7176404..b4990967 100644 --- a/docs/plugins/lyrics.rst +++ b/docs/plugins/lyrics.rst @@ -58,7 +58,7 @@ configuration file. The available options are: sources known to be scrapeable. - **sources**: List of sources to search for lyrics. An asterisk ``*`` expands to all available sources. - Default: ``google lyricwiki musixmatch genius``, i.e., all the + Default: ``google musixmatch genius``, i.e., all the available sources. The ``google`` source will be automatically deactivated if no ``google_API_key`` is setup. Both it and the ``genius`` source will only be enabled if BeautifulSoup is
beetbox/beets
debd382837ef1d30574c2234710d536bb299f979
diff --git a/test/test_web.py b/test/test_web.py index 88be3136..606f1e24 100644 --- a/test/test_web.py +++ b/test/test_web.py @@ -13,11 +13,22 @@ from test import _common from beets.library import Item, Album from beetsplug import web +import platform + +from beets import logging + class WebPluginTest(_common.LibTestCase): def setUp(self): + super(WebPluginTest, self).setUp() + self.log = logging.getLogger('beets.web') + + if platform.system() == 'Windows': + self.path_prefix = u'C:' + else: + self.path_prefix = u'' # Add fixtures for track in self.lib.items(): @@ -26,12 +37,30 @@ class WebPluginTest(_common.LibTestCase): # Add library elements. Note that self.lib.add overrides any "id=<n>" # and assigns the next free id number. # The following adds will create items #1, #2 and #3 - self.lib.add(Item(title=u'title', path='/path_1', album_id=2)) - self.lib.add(Item(title=u'another title', path='/path_2')) - self.lib.add(Item(title=u'and a third')) + path1 = self.path_prefix + os.sep + \ + os.path.join(b'path_1').decode('utf-8') + self.lib.add(Item(title=u'title', + path=path1, + album_id=2, + artist='AAA Singers')) + path2 = self.path_prefix + os.sep + \ + os.path.join(b'somewhere', b'a').decode('utf-8') + self.lib.add(Item(title=u'another title', + path=path2, + artist='AAA Singers')) + path3 = self.path_prefix + os.sep + \ + os.path.join(b'somewhere', b'abc').decode('utf-8') + self.lib.add(Item(title=u'and a third', + testattr='ABC', + path=path3, + album_id=2)) # The following adds will create albums #1 and #2 - self.lib.add(Album(album=u'album')) - self.lib.add(Album(album=u'other album', artpath='/art_path_2')) + self.lib.add(Album(album=u'album', + albumtest='xyz')) + path4 = self.path_prefix + os.sep + \ + os.path.join(b'somewhere2', b'art_path_2').decode('utf-8') + self.lib.add(Album(album=u'other album', + artpath=path4)) web.app.config['TESTING'] = True web.app.config['lib'] = self.lib @@ -42,17 +71,25 @@ class WebPluginTest(_common.LibTestCase): web.app.config['INCLUDE_PATHS'] = True response = self.client.get('/item/1') res_json = json.loads(response.data.decode('utf-8')) + expected_path = self.path_prefix + os.sep \ + + os.path.join(b'path_1').decode('utf-8') self.assertEqual(response.status_code, 200) - self.assertEqual(res_json['path'], u'/path_1') + self.assertEqual(res_json['path'], expected_path) + + web.app.config['INCLUDE_PATHS'] = False def test_config_include_artpaths_true(self): web.app.config['INCLUDE_PATHS'] = True response = self.client.get('/album/2') res_json = json.loads(response.data.decode('utf-8')) + expected_path = self.path_prefix + os.sep \ + + os.path.join(b'somewhere2', b'art_path_2').decode('utf-8') self.assertEqual(response.status_code, 200) - self.assertEqual(res_json['artpath'], u'/art_path_2') + self.assertEqual(res_json['artpath'], expected_path) + + web.app.config['INCLUDE_PATHS'] = False def test_config_include_paths_false(self): web.app.config['INCLUDE_PATHS'] = False @@ -91,8 +128,8 @@ class WebPluginTest(_common.LibTestCase): self.assertEqual(response.status_code, 200) self.assertEqual(len(res_json['items']), 2) - response_titles = [item['title'] for item in res_json['items']] - assertCountEqual(self, response_titles, [u'title', u'another title']) + response_titles = {item['title'] for item in res_json['items']} + self.assertEqual(response_titles, {u'title', u'another title'}) def test_get_single_item_not_found(self): response = self.client.get('/item/4') @@ -116,6 +153,7 @@ class WebPluginTest(_common.LibTestCase): 
self.assertEqual(response.status_code, 404) def test_get_item_empty_query(self): + """ testing item query: <empty> """ response = self.client.get('/item/query/') res_json = json.loads(response.data.decode('utf-8')) @@ -123,6 +161,7 @@ class WebPluginTest(_common.LibTestCase): self.assertEqual(len(res_json['items']), 3) def test_get_simple_item_query(self): + """ testing item query: another """ response = self.client.get('/item/query/another') res_json = json.loads(response.data.decode('utf-8')) @@ -131,6 +170,52 @@ class WebPluginTest(_common.LibTestCase): self.assertEqual(res_json['results'][0]['title'], u'another title') + def test_query_item_string(self): + """ testing item query: testattr:ABC """ + response = self.client.get('/item/query/testattr%3aABC') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['title'], + u'and a third') + + def test_query_item_regex(self): + """ testing item query: testattr::[A-C]+ """ + response = self.client.get('/item/query/testattr%3a%3a[A-C]%2b') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['title'], + u'and a third') + + def test_query_item_regex_backslash(self): + # """ testing item query: testattr::\w+ """ + response = self.client.get('/item/query/testattr%3a%3a%5cw%2b') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['title'], + u'and a third') + + def test_query_item_path(self): + # """ testing item query: path:\somewhere\a """ + """ Note: path queries are special: the query item must match the path + from the root all the way to a directory, so this matches 1 item """ + """ Note: filesystem separators in the query must be '\' """ + + response = self.client.get('/item/query/path:' + + self.path_prefix + + '\\somewhere\\a') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['title'], + u'another title') + def test_get_all_albums(self): response = self.client.get('/album/') res_json = json.loads(response.data.decode('utf-8')) @@ -177,10 +262,43 @@ class WebPluginTest(_common.LibTestCase): res_json = json.loads(response.data.decode('utf-8')) self.assertEqual(response.status_code, 200) - self.assertEqual(len(res_json['items']), 1) + self.assertEqual(len(res_json['items']), 2) self.assertEqual(res_json['items'][0]['album'], u'other album') - self.assertEqual(res_json['items'][0]['id'], 1) + self.assertEqual(res_json['items'][1]['album'], + u'other album') + response_track_titles = {item['title'] for item in res_json['items']} + self.assertEqual(response_track_titles, {u'title', u'and a third'}) + + def test_query_album_string(self): + """ testing query: albumtest:xy """ + response = self.client.get('/album/query/albumtest%3axy') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['album'], + u'album') + + def test_query_album_artpath_regex(self): + """ testing query: artpath::art_ """ + response = self.client.get('/album/query/artpath%3a%3aart_') + 
res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['album'], + u'other album') + + def test_query_album_regex_backslash(self): + # """ testing query: albumtest::\w+ """ + response = self.client.get('/album/query/albumtest%3a%3a%5cw%2b') + res_json = json.loads(response.data.decode('utf-8')) + + self.assertEqual(response.status_code, 200) + self.assertEqual(len(res_json['results']), 1) + self.assertEqual(res_json['results'][0]['album'], + u'album') def test_get_stats(self): response = self.client.get('/stats')
web: web page search box doesn't work for regex searches

### Problem

This is not a problem in the web API itself, but in the web pages that provide the simple web user interface. Bringing up the web interface and entering a query such as `somefield::.` never returns any results. The problem is that the web page ends up double-URI-encoding the search before passing it to GET /item/query.

I have a fix (in `static/beets.js`) which I can submit once the current PR is done. However, I have no idea how to create a test for this, as it would mean starting the webserver, submitting an HTTP request, and checking the resulting (complex) HTML. Does anyone have an example of doing that in the beets pytest environment? I know very little Python and nothing about pytest, but I may be able to steal a similar test if one exists!

EDIT: Actually, it is the last step, parsing and checking the resulting HTML, which is hard (the rest is what the tests already do, but they are dealing with JSON responses, not HTML responses). Does anyone have any tools or examples of checking HTML responses? Or do I just do some simple string searches and hope the page doesn't change too much in the future?
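The double-encoding failure described above is easy to demonstrate in plain Python (this is only an illustration of the mechanism; the actual fix lives in `static/beets.js`):

```python
from urllib.parse import quote, unquote

query = "somefield::."

once = quote(query)    # 'somefield%3A%3A.'
twice = quote(once)    # 'somefield%253A%253A.' -- the '%' itself is re-escaped

# The server decodes exactly once, so a double-encoded query arrives mangled:
print(unquote(twice))  # 'somefield%3A%3A.' instead of 'somefield::.'
```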
0.0
debd382837ef1d30574c2234710d536bb299f979
[ "test/test_web.py::WebPluginTest::test_query_album_regex_backslash", "test/test_web.py::WebPluginTest::test_query_item_regex_backslash" ]
[ "test/test_web.py::WebPluginTest::test_config_include_artpaths_false", "test/test_web.py::WebPluginTest::test_config_include_artpaths_true", "test/test_web.py::WebPluginTest::test_config_include_paths_false", "test/test_web.py::WebPluginTest::test_config_include_paths_true", "test/test_web.py::WebPluginTest::test_get_album_details", "test/test_web.py::WebPluginTest::test_get_album_empty_query", "test/test_web.py::WebPluginTest::test_get_all_albums", "test/test_web.py::WebPluginTest::test_get_all_items", "test/test_web.py::WebPluginTest::test_get_item_empty_query", "test/test_web.py::WebPluginTest::test_get_multiple_albums_by_id", "test/test_web.py::WebPluginTest::test_get_multiple_items_by_id", "test/test_web.py::WebPluginTest::test_get_simple_album_query", "test/test_web.py::WebPluginTest::test_get_simple_item_query", "test/test_web.py::WebPluginTest::test_get_single_album_by_id", "test/test_web.py::WebPluginTest::test_get_single_item_by_id", "test/test_web.py::WebPluginTest::test_get_single_item_by_path_not_found_if_not_in_library", "test/test_web.py::WebPluginTest::test_get_single_item_not_found", "test/test_web.py::WebPluginTest::test_get_stats", "test/test_web.py::WebPluginTest::test_query_album_artpath_regex", "test/test_web.py::WebPluginTest::test_query_album_string", "test/test_web.py::WebPluginTest::test_query_item_path", "test/test_web.py::WebPluginTest::test_query_item_regex", "test/test_web.py::WebPluginTest::test_query_item_string" ]
{ "failed_lite_validators": [ "has_many_modified_files" ], "has_test_patch": true, "is_lite": false }
2021-03-08 17:19:51+00:00
mit
1,317
beetbox__beets-3982
diff --git a/beets/library.py b/beets/library.py index 6e13bf82..dcd5a6a1 100644 --- a/beets/library.py +++ b/beets/library.py @@ -1753,7 +1753,7 @@ class DefaultTemplateFunctions(object): :param falseval: The string if the condition is false :return: The string, based on condition """ - if self.item.formatted().get(field): + if field in self.item: return trueval if trueval else self.item.formatted().get(field) else: return falseval diff --git a/docs/changelog.rst b/docs/changelog.rst index 26cf39ee..69e2f01a 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -377,6 +377,9 @@ Fixes: * :doc`/reference/cli`: Remove reference to rarfile version in link * Fix :bug:`2873`. Duplicates can now generate checksums. Thanks user:`wisp3rwind` for the pointer to how to solve. Thanks to :user:`arogl`. +* Templates that use ``%ifdef`` now produce the expected behavior when used in + conjunction with non-string fields from the :doc:`/plugins/types`. + :bug:`3852` For plugin developers:
beetbox/beets
5fad8ee0b2db11557a1beb492f526ac42d4c8eba
diff --git a/test/test_types_plugin.py b/test/test_types_plugin.py index 77d6c8bc..65ad7bee 100644 --- a/test/test_types_plugin.py +++ b/test/test_types_plugin.py @@ -145,6 +145,39 @@ class TypesPluginTest(unittest.TestCase, TestHelper): with self.assertRaises(ConfigValueError): self.run_command(u'ls') + def test_template_if_def(self): + # Tests for a subtle bug when using %ifdef in templates along with + # types that have truthy default values (e.g. '0', '0.0', 'False') + # https://github.com/beetbox/beets/issues/3852 + self.config['types'] = {'playcount': u'int', 'rating': u'float', + 'starred': u'bool'} + + with_fields = self.add_item(artist=u'prince') + self.modify(u'playcount=10', u'artist=prince') + self.modify(u'rating=5.0', u'artist=prince') + self.modify(u'starred=yes', u'artist=prince') + with_fields.load() + + without_fields = self.add_item(artist=u'britney') + + int_template = u'%ifdef{playcount,Play count: $playcount,Not played}' + self.assertEqual(with_fields.evaluate_template(int_template), + u'Play count: 10') + self.assertEqual(without_fields.evaluate_template(int_template), + u'Not played') + + float_template = u'%ifdef{rating,Rating: $rating,Not rated}' + self.assertEqual(with_fields.evaluate_template(float_template), + u'Rating: 5.0') + self.assertEqual(without_fields.evaluate_template(float_template), + u'Not rated') + + bool_template = u'%ifdef{starred,Starred: $starred,Not starred}' + self.assertIn(with_fields.evaluate_template(bool_template).lower(), + (u'starred: true', u'starred: yes', u'starred: y')) + self.assertEqual(without_fields.evaluate_template(bool_template), + u'Not starred') + def modify(self, *args): return self.run_with_output(u'modify', u'--yes', u'--nowrite', u'--nomove', *args)
%ifdef does not work on flexattr fields with int or float

### Problem

If I try

```
beet ls -a -f '$albumartist - (%if{$original_year,$original_year,$year}) - $album%ifdef{albumdisambig, (%title{$albumdisambig}),}%ifdef{albumrating, ($albumrating),}' year:2021
```

then beets will return the album rating for the albums that have the field defined. For albums without the rating, the literal `$albumrating` is returned. Example:

```
Artist1 - (2021) - Albumname1 (2.0)
Artist2 - (2021) - Albumname2 ($albumrating)
```

Further information on the forum: https://discourse.beets.io/t/album-rating-as-a-flexible-attribute-in-5-increments/1687/3?u=pandatroubles

Comment from adrian:

_reading over the implementation of %ifdef, I can see now how it would be confused about types with a default value. That is, int or float fields will produce a '0' string for unset values through the formatter, which gets interpreted as non-missing. We clearly need better logic for this._

### Setup

* OS: Arch Linux
* beets version 1.5.0
* Python version 3.9.1
* plugins: edit, extrafiles, fetchart, importadded, importfeeds, info, inline, lastimport, mbcollection, mbsync, mpdstats, mpdupdate, originquery, scrub, smartplaylist, types, zero
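Adrian's observation can be reproduced in a few lines of plain Python (a simplified model of the formatter behaviour, not the actual beets code):

```python
# Simplified model: typed flexible fields format missing values to a
# default string such as '0', so a truthiness test cannot distinguish
# "unset" from "set to zero".
formatted = {"playcount": "0"}  # what the formatter yields for an unset int field

# Truthiness check: '0' is a non-empty string, hence truthy, so the
# field looks "defined" even though it was never actually set.
print(bool(formatted.get("playcount")))  # True

# A membership test against the item's actually-set fields is what
# distinguishes the two cases (this is what the patch above switches to):
fields = {}                     # the field was never set on this item
print("playcount" in fields)    # False -> correctly treated as undefined
```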
0.0
5fad8ee0b2db11557a1beb492f526ac42d4c8eba
[ "test/test_types_plugin.py::TypesPluginTest::test_template_if_def" ]
[ "test/test_types_plugin.py::TypesPluginTest::test_album_integer_modify_and_query", "test/test_types_plugin.py::TypesPluginTest::test_bool_modify_and_query", "test/test_types_plugin.py::TypesPluginTest::test_date_modify_and_query", "test/test_types_plugin.py::TypesPluginTest::test_float_modify_and_query", "test/test_types_plugin.py::TypesPluginTest::test_integer_modify_and_query", "test/test_types_plugin.py::TypesPluginTest::test_unknown_type_error" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_pytest_match_arg" ], "has_test_patch": true, "is_lite": false }
2021-06-16 16:04:17+00:00
mit
1,318
beetbox__beets-4359
diff --git a/beets/library.py b/beets/library.py index 69fcd34c..c8fa2b5f 100644 --- a/beets/library.py +++ b/beets/library.py @@ -1382,7 +1382,7 @@ def parse_query_parts(parts, model_cls): `Query` and `Sort` they represent. Like `dbcore.parse_sorted_query`, with beets query prefixes and - special path query detection. + ensuring that implicit path queries are made explicit with 'path::<query>' """ # Get query types and their prefix characters. prefixes = { @@ -1394,28 +1394,14 @@ def parse_query_parts(parts, model_cls): # Special-case path-like queries, which are non-field queries # containing path separators (/). - path_parts = [] - non_path_parts = [] - for s in parts: - if PathQuery.is_path_query(s): - path_parts.append(s) - else: - non_path_parts.append(s) + parts = [f"path:{s}" if PathQuery.is_path_query(s) else s for s in parts] case_insensitive = beets.config['sort_case_insensitive'].get(bool) - query, sort = dbcore.parse_sorted_query( - model_cls, non_path_parts, prefixes, case_insensitive + return dbcore.parse_sorted_query( + model_cls, parts, prefixes, case_insensitive ) - # Add path queries to aggregate query. - # Match field / flexattr depending on whether the model has the path field - fast_path_query = 'path' in model_cls._fields - query.subqueries += [PathQuery('path', s, fast_path_query) - for s in path_parts] - - return query, sort - def parse_query_string(s, model_cls): """Given a beets query string, return the `Query` and `Sort` they diff --git a/docs/changelog.rst b/docs/changelog.rst index 72b1cf1f..d6c74e45 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -28,6 +28,9 @@ New features: Bug fixes: +* Fix implicit paths OR queries (e.g. ``beet list /path/ , /other-path/``) + which have previously been returning the entire library. + :bug:`1865` * The Discogs release ID is now populated correctly to the discogs_albumid field again (it was no longer working after Discogs changed their release URL format). diff --git a/docs/plugins/beatport.rst b/docs/plugins/beatport.rst index 6117c4a1..f44fdeb3 100644 --- a/docs/plugins/beatport.rst +++ b/docs/plugins/beatport.rst @@ -32,7 +32,7 @@ from MusicBrainz and other sources. If you have a Beatport ID or a URL for a release or track you want to tag, you can just enter one of the two at the "enter Id" prompt in the importer. You can -also search for an id like so: +also search for an id like so:: beet import path/to/music/library --search-id id
beetbox/beets
988bf2672cac36fb625e0fc87983ff641a0c214e
diff --git a/test/test_query.py b/test/test_query.py index 0be4b7d7..8a9043fa 100644 --- a/test/test_query.py +++ b/test/test_query.py @@ -31,7 +31,10 @@ from beets.dbcore.query import (NoneQuery, ParsingError, InvalidQueryArgumentValueError) from beets.library import Library, Item from beets import util -import platform + +# Because the absolute path begins with something like C:, we +# can't disambiguate it from an ordinary query. +WIN32_NO_IMPLICIT_PATHS = 'Implicit paths are not supported on Windows' class TestHelper(helper.TestHelper): @@ -521,6 +524,7 @@ class PathQueryTest(_common.LibTestCase, TestHelper, AssertsMixin): results = self.lib.albums(q) self.assert_albums_matched(results, ['path album']) + @unittest.skipIf(sys.platform == 'win32', WIN32_NO_IMPLICIT_PATHS) def test_slashed_query_matches_path(self): q = '/a/b' results = self.lib.items(q) @@ -529,7 +533,7 @@ class PathQueryTest(_common.LibTestCase, TestHelper, AssertsMixin): results = self.lib.albums(q) self.assert_albums_matched(results, ['path album']) - @unittest.skip('unfixed (#1865)') + @unittest.skipIf(sys.platform == 'win32', WIN32_NO_IMPLICIT_PATHS) def test_path_query_in_or_query(self): q = '/a/b , /a/b' results = self.lib.items(q) @@ -649,12 +653,8 @@ class PathQueryTest(_common.LibTestCase, TestHelper, AssertsMixin): self.assertFalse(is_path('foo:bar/')) self.assertFalse(is_path('foo:/bar')) + @unittest.skipIf(sys.platform == 'win32', WIN32_NO_IMPLICIT_PATHS) def test_detect_absolute_path(self): - if platform.system() == 'Windows': - # Because the absolute path begins with something like C:, we - # can't disambiguate it from an ordinary query. - self.skipTest('Windows absolute paths do not work as queries') - # Don't patch `os.path.exists`; we'll actually create a file when # it exists. self.patcher_exists.stop()
Implicit path queries in a boolean OR yield a match-all query

This is best illustrated by example:

- `beet list /path/to/library/A` — works
- `beet list /path/to/library/B` — works
- `beet list /path/to/library/A , /path/to/library/B` — **lists the whole library**
- `beet list path:/path/to/library/A , path:/path/to/library/B` — works
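A rough sketch of why the old implementation matched everything (a hypothetical simplification in plain Python, not beets' actual dbcore query classes): the comma query was parsed with its path terms stripped out, leaving OR branches that match everything, and the path conditions were then appended as extra OR branches, which can only broaden the result.

```python
# Hypothetical simplification: an OR query evaluates its subqueries with any().
def or_query(subqueries, item):
    return any(q(item) for q in subqueries)

match_all = lambda item: True          # an emptied-out branch of 'A , B'
in_path_a = lambda item: "/a/" in item
in_path_b = lambda item: "/b/" in item

branches = [match_all, match_all]      # what parsing left after stripping paths
branches += [in_path_a, in_path_b]     # the old code appended path queries here

print(or_query(branches, "/elsewhere/x"))  # True -- every item matched
```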
0.0
988bf2672cac36fb625e0fc87983ff641a0c214e
[ "test/test_query.py::PathQueryTest::test_path_query_in_or_query" ]
[ "test/test_query.py::AnyFieldQueryTest::test_eq", "test/test_query.py::AnyFieldQueryTest::test_no_restriction", "test/test_query.py::AnyFieldQueryTest::test_restriction_completeness", "test/test_query.py::AnyFieldQueryTest::test_restriction_soundness", "test/test_query.py::GetTest::test_album_field_fallback", "test/test_query.py::GetTest::test_compilation_false", "test/test_query.py::GetTest::test_compilation_true", "test/test_query.py::GetTest::test_get_empty", "test/test_query.py::GetTest::test_get_no_matches", "test/test_query.py::GetTest::test_get_no_matches_exact", "test/test_query.py::GetTest::test_get_none", "test/test_query.py::GetTest::test_get_one_keyed_exact", "test/test_query.py::GetTest::test_get_one_keyed_exact_nocase", "test/test_query.py::GetTest::test_get_one_keyed_regexp", "test/test_query.py::GetTest::test_get_one_keyed_term", "test/test_query.py::GetTest::test_get_one_unkeyed_exact", "test/test_query.py::GetTest::test_get_one_unkeyed_exact_nocase", "test/test_query.py::GetTest::test_get_one_unkeyed_regexp", "test/test_query.py::GetTest::test_get_one_unkeyed_term", "test/test_query.py::GetTest::test_invalid_key", "test/test_query.py::GetTest::test_invalid_query", "test/test_query.py::GetTest::test_item_field_name_matches_nothing_in_album_query", "test/test_query.py::GetTest::test_key_case_insensitive", "test/test_query.py::GetTest::test_keyed_matches_exact_nocase", "test/test_query.py::GetTest::test_keyed_regexp_matches_only_one_column", "test/test_query.py::GetTest::test_keyed_term_matches_only_one_column", "test/test_query.py::GetTest::test_mixed_terms_regexps_narrow_search", "test/test_query.py::GetTest::test_multiple_regexps_narrow_search", "test/test_query.py::GetTest::test_multiple_terms_narrow_search", "test/test_query.py::GetTest::test_numeric_search_negative", "test/test_query.py::GetTest::test_numeric_search_positive", "test/test_query.py::GetTest::test_regexp_case_sensitive", "test/test_query.py::GetTest::test_single_year", "test/test_query.py::GetTest::test_singleton_false", "test/test_query.py::GetTest::test_singleton_true", "test/test_query.py::GetTest::test_term_case_insensitive", "test/test_query.py::GetTest::test_term_case_insensitive_with_key", "test/test_query.py::GetTest::test_unicode_query", "test/test_query.py::GetTest::test_unkeyed_regexp_matches_multiple_columns", "test/test_query.py::GetTest::test_unkeyed_term_matches_multiple_columns", "test/test_query.py::GetTest::test_unknown_field_name_no_results", "test/test_query.py::GetTest::test_unknown_field_name_no_results_in_album_query", "test/test_query.py::GetTest::test_year_range", "test/test_query.py::MatchTest::test_bitrate_range_negative", "test/test_query.py::MatchTest::test_bitrate_range_positive", "test/test_query.py::MatchTest::test_eq", "test/test_query.py::MatchTest::test_exact_match_nocase_negative", "test/test_query.py::MatchTest::test_exact_match_nocase_positive", "test/test_query.py::MatchTest::test_open_range", "test/test_query.py::MatchTest::test_regex_match_negative", "test/test_query.py::MatchTest::test_regex_match_non_string_value", "test/test_query.py::MatchTest::test_regex_match_positive", "test/test_query.py::MatchTest::test_substring_match_negative", "test/test_query.py::MatchTest::test_substring_match_non_string_value", "test/test_query.py::MatchTest::test_substring_match_positive", "test/test_query.py::MatchTest::test_year_match_negative", "test/test_query.py::MatchTest::test_year_match_positive", "test/test_query.py::PathQueryTest::test_case_sensitivity", 
"test/test_query.py::PathQueryTest::test_detect_absolute_path", "test/test_query.py::PathQueryTest::test_detect_relative_path", "test/test_query.py::PathQueryTest::test_escape_backslash", "test/test_query.py::PathQueryTest::test_escape_percent", "test/test_query.py::PathQueryTest::test_escape_underscore", "test/test_query.py::PathQueryTest::test_fragment_no_match", "test/test_query.py::PathQueryTest::test_no_match", "test/test_query.py::PathQueryTest::test_non_slashed_does_not_match_path", "test/test_query.py::PathQueryTest::test_nonnorm_path", "test/test_query.py::PathQueryTest::test_parent_directory_no_slash", "test/test_query.py::PathQueryTest::test_parent_directory_with_slash", "test/test_query.py::PathQueryTest::test_path_album_regex", "test/test_query.py::PathQueryTest::test_path_exact_match", "test/test_query.py::PathQueryTest::test_path_item_regex", "test/test_query.py::PathQueryTest::test_path_sep_detection", "test/test_query.py::PathQueryTest::test_slashed_query_matches_path", "test/test_query.py::PathQueryTest::test_slashes_in_explicit_field_does_not_match_path", "test/test_query.py::IntQueryTest::test_exact_value_match", "test/test_query.py::IntQueryTest::test_flex_dont_match_missing", "test/test_query.py::IntQueryTest::test_flex_range_match", "test/test_query.py::IntQueryTest::test_no_substring_match", "test/test_query.py::IntQueryTest::test_range_match", "test/test_query.py::BoolQueryTest::test_flex_parse_0", "test/test_query.py::BoolQueryTest::test_flex_parse_1", "test/test_query.py::BoolQueryTest::test_flex_parse_any_string", "test/test_query.py::BoolQueryTest::test_flex_parse_false", "test/test_query.py::BoolQueryTest::test_flex_parse_true", "test/test_query.py::BoolQueryTest::test_parse_true", "test/test_query.py::DefaultSearchFieldsTest::test_albums_matches_album", "test/test_query.py::DefaultSearchFieldsTest::test_albums_matches_albumartist", "test/test_query.py::DefaultSearchFieldsTest::test_items_does_not_match_year", "test/test_query.py::DefaultSearchFieldsTest::test_items_matches_title", "test/test_query.py::NoneQueryTest::test_match_after_set_none", "test/test_query.py::NoneQueryTest::test_match_singletons", "test/test_query.py::NoneQueryTest::test_match_slow", "test/test_query.py::NoneQueryTest::test_match_slow_after_set_none", "test/test_query.py::NotQueryMatchTest::test_bitrate_range_negative", "test/test_query.py::NotQueryMatchTest::test_bitrate_range_positive", "test/test_query.py::NotQueryMatchTest::test_open_range", "test/test_query.py::NotQueryMatchTest::test_regex_match_negative", "test/test_query.py::NotQueryMatchTest::test_regex_match_non_string_value", "test/test_query.py::NotQueryMatchTest::test_regex_match_positive", "test/test_query.py::NotQueryMatchTest::test_substring_match_negative", "test/test_query.py::NotQueryMatchTest::test_substring_match_non_string_value", "test/test_query.py::NotQueryMatchTest::test_substring_match_positive", "test/test_query.py::NotQueryMatchTest::test_year_match_negative", "test/test_query.py::NotQueryMatchTest::test_year_match_positive", "test/test_query.py::NotQueryTest::test_fast_vs_slow", "test/test_query.py::NotQueryTest::test_get_mixed_terms", "test/test_query.py::NotQueryTest::test_get_multiple_terms", "test/test_query.py::NotQueryTest::test_get_one_keyed_regexp", "test/test_query.py::NotQueryTest::test_get_one_unkeyed_regexp", "test/test_query.py::NotQueryTest::test_get_prefixes_keyed", "test/test_query.py::NotQueryTest::test_get_prefixes_unkeyed", "test/test_query.py::NotQueryTest::test_type_and", 
"test/test_query.py::NotQueryTest::test_type_anyfield", "test/test_query.py::NotQueryTest::test_type_boolean", "test/test_query.py::NotQueryTest::test_type_date", "test/test_query.py::NotQueryTest::test_type_false", "test/test_query.py::NotQueryTest::test_type_match", "test/test_query.py::NotQueryTest::test_type_none", "test/test_query.py::NotQueryTest::test_type_numeric", "test/test_query.py::NotQueryTest::test_type_or", "test/test_query.py::NotQueryTest::test_type_regexp", "test/test_query.py::NotQueryTest::test_type_substring", "test/test_query.py::NotQueryTest::test_type_true" ]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-05-31 20:57:57+00:00
mit
1,319
beetbox__beets-4549
diff --git a/beets/autotag/mb.py b/beets/autotag/mb.py index 3bd7e8c8..5b8d4513 100644 --- a/beets/autotag/mb.py +++ b/beets/autotag/mb.py @@ -202,6 +202,19 @@ def _flatten_artist_credit(credit): ) +def _get_related_artist_names(relations, relation_type): + """Given a list representing the artist relationships extract the names of + the remixers and concatenate them. + """ + related_artists = [] + + for relation in relations: + if relation['type'] == relation_type: + related_artists.append(relation['artist']['name']) + + return ', '.join(related_artists) + + def track_info(recording, index=None, medium=None, medium_index=None, medium_total=None): """Translates a MusicBrainz recording result dictionary into a beets @@ -231,6 +244,12 @@ def track_info(recording, index=None, medium=None, medium_index=None, artist = recording['artist-credit'][0]['artist'] info.artist_id = artist['id'] + if recording.get('artist-relation-list'): + info.remixer = _get_related_artist_names( + recording['artist-relation-list'], + relation_type='remixer' + ) + if recording.get('length'): info.length = int(recording['length']) / (1000.0) diff --git a/beets/library.py b/beets/library.py index becf1939..98156397 100644 --- a/beets/library.py +++ b/beets/library.py @@ -466,6 +466,7 @@ class Item(LibModel): 'artist': types.STRING, 'artist_sort': types.STRING, 'artist_credit': types.STRING, + 'remixer': types.STRING, 'album': types.STRING, 'albumartist': types.STRING, 'albumartist_sort': types.STRING, diff --git a/docs/changelog.rst b/docs/changelog.rst index 646417f2..3703f422 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -8,6 +8,8 @@ Changelog goes here! New features: +* We now import the remixer field from Musicbrainz into the library. + :bug:`4428` * :doc:`/plugins/mbsubmit`: Added a new `mbsubmit` command to print track information to be submitted to MusicBrainz after initial import. :bug:`4455` * Added `spotify_updated` field to track when the information was last updated. diff --git a/docs/reference/config.rst b/docs/reference/config.rst index afabe1aa..e59937dc 100644 --- a/docs/reference/config.rst +++ b/docs/reference/config.rst @@ -582,7 +582,7 @@ from_scratch ~~~~~~~~~~~~ Either ``yes`` or ``no`` (default), controlling whether existing metadata is -discarded when a match is applied. This corresponds to the ``--from_scratch`` +discarded when a match is applied. This corresponds to the ``--from-scratch`` flag to ``beet import``. .. _quiet:
beetbox/beets
e201dd4fe57b0aa2e80890dc3939b0a803e3448d
diff --git a/test/test_mb.py b/test/test_mb.py index 0b39d6ce..f005c741 100644 --- a/test/test_mb.py +++ b/test/test_mb.py @@ -109,8 +109,8 @@ class MBAlbumInfoTest(_common.TestCase): }) return release - def _make_track(self, title, tr_id, duration, artist=False, video=False, - disambiguation=None): + def _make_track(self, title, tr_id, duration, artist=False, + video=False, disambiguation=None, remixer=False): track = { 'title': title, 'id': tr_id, @@ -128,6 +128,22 @@ class MBAlbumInfoTest(_common.TestCase): 'name': 'RECORDING ARTIST CREDIT', } ] + if remixer: + track['artist-relation-list'] = [ + { + 'type': 'remixer', + 'type-id': 'RELATION TYPE ID', + 'target': 'RECORDING REMIXER ARTIST ID', + 'direction': 'RECORDING RELATION DIRECTION', + 'artist': + { + 'id': 'RECORDING REMIXER ARTIST ID', + 'type': 'RECORDING REMIXER ARTIST TYPE', + 'name': 'RECORDING REMIXER ARTIST NAME', + 'sort-name': 'RECORDING REMIXER ARTIST SORT NAME' + } + } + ] if video: track['video'] = 'true' if disambiguation: @@ -339,6 +355,12 @@ class MBAlbumInfoTest(_common.TestCase): self.assertEqual(track.artist_sort, 'TRACK ARTIST SORT NAME') self.assertEqual(track.artist_credit, 'TRACK ARTIST CREDIT') + def test_parse_recording_remixer(self): + tracks = [self._make_track('a', 'b', 1, remixer=True)] + release = self._make_release(None, tracks=tracks) + track = mb.album_info(release).tracks[0] + self.assertEqual(track.remixer, 'RECORDING REMIXER ARTIST NAME') + def test_data_source(self): release = self._make_release() d = mb.album_info(release)
Fetch `remixer` field from MusicBrainz

### Problem

MusicBrainz seems to have a field called `remixer` for remixes, but this info isn't written to my files. Example: https://musicbrainz.org/release/7c7f7ddf-c021-4ee8-993d-d1c330b4a36a

Using it for the artist field, similarly to the `artist_credit` option, would also be nice. According to the [Picard docs](https://picard-docs.musicbrainz.org/en/appendices/tag_mapping.html) there would even be a proper place to store it in ID3.

### Setup

* OS: Arch Linux
* Python version: 3.10
* beets version: 1.6.0
* Turning off plugins made problem go away (yes/no): no
0.0
e201dd4fe57b0aa2e80890dc3939b0a803e3448d
[ "test/test_mb.py::MBAlbumInfoTest::test_parse_recording_remixer" ]
[ "test/test_mb.py::MBAlbumInfoTest::test_data_source", "test/test_mb.py::MBAlbumInfoTest::test_detect_various_artists", "test/test_mb.py::MBAlbumInfoTest::test_ignored_media", "test/test_mb.py::MBAlbumInfoTest::test_missing_language", "test/test_mb.py::MBAlbumInfoTest::test_no_durations", "test/test_mb.py::MBAlbumInfoTest::test_no_ignored_media", "test/test_mb.py::MBAlbumInfoTest::test_no_release_date", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_audio_data_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_video_data_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_no_skip_video_tracks_if_configured", "test/test_mb.py::MBAlbumInfoTest::test_parse_artist_sort_name", "test/test_mb.py::MBAlbumInfoTest::test_parse_asin", "test/test_mb.py::MBAlbumInfoTest::test_parse_catalognum", "test/test_mb.py::MBAlbumInfoTest::test_parse_country", "test/test_mb.py::MBAlbumInfoTest::test_parse_disambig", "test/test_mb.py::MBAlbumInfoTest::test_parse_disctitle", "test/test_mb.py::MBAlbumInfoTest::test_parse_media", "test/test_mb.py::MBAlbumInfoTest::test_parse_medium_numbers_single_medium", "test/test_mb.py::MBAlbumInfoTest::test_parse_medium_numbers_two_mediums", "test/test_mb.py::MBAlbumInfoTest::test_parse_recording_artist", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_full_date", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_type", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_with_year", "test/test_mb.py::MBAlbumInfoTest::test_parse_release_year_month_only", "test/test_mb.py::MBAlbumInfoTest::test_parse_releasegroupid", "test/test_mb.py::MBAlbumInfoTest::test_parse_status", "test/test_mb.py::MBAlbumInfoTest::test_parse_textrepr", "test/test_mb.py::MBAlbumInfoTest::test_parse_track_indices", "test/test_mb.py::MBAlbumInfoTest::test_parse_tracks", "test/test_mb.py::MBAlbumInfoTest::test_skip_audio_data_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_skip_data_track", "test/test_mb.py::MBAlbumInfoTest::test_skip_video_data_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_skip_video_tracks_by_default", "test/test_mb.py::MBAlbumInfoTest::test_track_artist_overrides_recording_artist", "test/test_mb.py::MBAlbumInfoTest::test_track_disambiguation", "test/test_mb.py::MBAlbumInfoTest::test_track_length_overrides_recording_length", "test/test_mb.py::MBAlbumInfoTest::test_various_artists_defaults_false", "test/test_mb.py::ParseIDTest::test_parse_id_correct", "test/test_mb.py::ParseIDTest::test_parse_id_non_id_returns_none", "test/test_mb.py::ParseIDTest::test_parse_id_url_finds_id", "test/test_mb.py::ArtistFlatteningTest::test_alias", "test/test_mb.py::ArtistFlatteningTest::test_single_artist", "test/test_mb.py::ArtistFlatteningTest::test_two_artists", "test/test_mb.py::MBLibraryTest::test_match_album", "test/test_mb.py::MBLibraryTest::test_match_album_empty", "test/test_mb.py::MBLibraryTest::test_match_track", "test/test_mb.py::MBLibraryTest::test_match_track_empty" ]
{ "failed_lite_validators": [ "has_hyperlinks", "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2022-11-15 16:59:57+00:00
mit
1,320
beetbox__beets-5022
diff --git a/beets/ui/commands.py b/beets/ui/commands.py index 63f25fca..26eb5320 100755 --- a/beets/ui/commands.py +++ b/beets/ui/commands.py @@ -1506,6 +1506,20 @@ import_cmd.parser.add_option( action="store_false", help="do not skip already-imported directories", ) +import_cmd.parser.add_option( + "-R", + "--incremental-skip-later", + action="store_true", + dest="incremental_skip_later", + help="do not record skipped files during incremental import", +) +import_cmd.parser.add_option( + "-r", + "--noincremental-skip-later", + action="store_false", + dest="incremental_skip_later", + help="record skipped files during incremental import", +) import_cmd.parser.add_option( "--from-scratch", dest="from_scratch", diff --git a/beetsplug/mbsubmit.py b/beetsplug/mbsubmit.py index e4c0f372..d215e616 100644 --- a/beetsplug/mbsubmit.py +++ b/beetsplug/mbsubmit.py @@ -21,11 +21,13 @@ implemented by MusicBrainz yet. [1] https://wiki.musicbrainz.org/History:How_To_Parse_Track_Listings """ +import subprocess from beets import ui from beets.autotag import Recommendation from beets.plugins import BeetsPlugin from beets.ui.commands import PromptChoice +from beets.util import displayable_path from beetsplug.info import print_data @@ -37,6 +39,7 @@ class MBSubmitPlugin(BeetsPlugin): { "format": "$track. $title - $artist ($length)", "threshold": "medium", + "picard_path": "picard", } ) @@ -56,7 +59,21 @@ class MBSubmitPlugin(BeetsPlugin): def before_choose_candidate_event(self, session, task): if task.rec <= self.threshold: - return [PromptChoice("p", "Print tracks", self.print_tracks)] + return [ + PromptChoice("p", "Print tracks", self.print_tracks), + PromptChoice("o", "Open files with Picard", self.picard), + ] + + def picard(self, session, task): + paths = [] + for p in task.paths: + paths.append(displayable_path(p)) + try: + picard_path = self.config["picard_path"].as_str() + subprocess.Popen([picard_path] + paths) + self._log.info("launched picard from\n{}", picard_path) + except OSError as exc: + self._log.error(f"Could not open picard, got error:\n{exc}") def print_tracks(self, session, task): for i in sorted(task.items, key=lambda i: i.track): diff --git a/docs/changelog.rst b/docs/changelog.rst index c69ec82f..74b925e2 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -17,6 +17,7 @@ Major new features: New features: +* :doc:`plugins/mbsubmit`: add new prompt choices helping further to submit unmatched tracks to MusicBrainz faster. * :doc:`plugins/spotify`: We now fetch track's ISRC, EAN, and UPC identifiers from Spotify when using the ``spotifysync`` command. :bug:`4992` * :doc:`plugins/discogs`: supply a value for the `cover_art_url` attribute, for use by `fetchart`. @@ -146,6 +147,7 @@ New features: * :doc:`/plugins/lyrics`: Add LRCLIB as a new lyrics provider and a new `synced` option to prefer synced lyrics over plain lyrics. * :ref:`import-cmd`: Expose import.quiet_fallback as CLI option. +* :ref:`import-cmd`: Expose `import.incremental_skip_later` as CLI option. Bug fixes: diff --git a/docs/plugins/mbsubmit.rst b/docs/plugins/mbsubmit.rst index 5cb9be8f..0e86ddc6 100644 --- a/docs/plugins/mbsubmit.rst +++ b/docs/plugins/mbsubmit.rst @@ -1,23 +1,40 @@ MusicBrainz Submit Plugin ========================= -The ``mbsubmit`` plugin provides an extra prompt choice during an import -session and a ``mbsubmit`` command that prints the tracks of the current -album in a format that is parseable by MusicBrainz's `track parser`_. 
+The ``mbsubmit`` plugin provides extra prompt choices when an import session +fails to find a good enough match for a release. Additionally, it provides an +``mbsubmit`` command that prints the tracks of the current album in a format +that is parseable by MusicBrainz's `track parser`_. The prompt choices are: + +- Print the tracks to stdout in a format suitable for MusicBrainz's `track + parser`_. + +- Open the program `Picard`_ with the unmatched folder as an input, allowing + you to start submitting the unmatched release to MusicBrainz with many input + fields already filled in, thanks to Picard reading the preexisting tags of + the files. + +For the last option, `Picard`_ is assumed to be installed and available on the +machine including a ``picard`` executable. Picard developers list `download +options`_. `other GNU/Linux distributions`_ may distribute Picard via their +package manager as well. .. _track parser: https://wiki.musicbrainz.org/History:How_To_Parse_Track_Listings +.. _Picard: https://picard.musicbrainz.org/ +.. _download options: https://picard.musicbrainz.org/downloads/ +.. _other GNU/Linux distributions: https://repology.org/project/picard-tagger/versions Usage ----- Enable the ``mbsubmit`` plugin in your configuration (see :ref:`using-plugins`) -and select the ``Print tracks`` choice which is by default displayed when no -strong recommendations are found for the album:: +and select one of the options mentioned above. Here the option ``Print tracks`` +choice is demonstrated:: No matching release found for 3 tracks. For help, see: https://beets.readthedocs.org/en/latest/faq.html#nomatch [U]se as-is, as Tracks, Group albums, Skip, Enter search, enter Id, aBort, - Print tracks? p + Print tracks, Open files with Picard? p 01. An Obscure Track - An Obscure Artist (3:37) 02. Another Obscure Track - An Obscure Artist (2:05) 03. The Third Track - Another Obscure Artist (3:02) @@ -53,6 +70,11 @@ file. The following options are available: Default: ``medium`` (causing the choice to be displayed for all albums that have a recommendation of medium strength or lower). Valid values: ``none``, ``low``, ``medium``, ``strong``. +- **picard_path**: The path to the ``picard`` executable. Could be an absolute + path, and if not, ``$PATH`` is consulted. The default value is simply + ``picard``. Windows users will have to find and specify the absolute path to + their ``picard.exe``. That would probably be: + ``C:\Program Files\MusicBrainz Picard\picard.exe``. Please note that some values of the ``threshold`` configuration option might require other ``beets`` command line switches to be enabled in order to work as diff --git a/docs/reference/cli.rst b/docs/reference/cli.rst index a2997c70..8caf7076 100644 --- a/docs/reference/cli.rst +++ b/docs/reference/cli.rst @@ -115,6 +115,15 @@ Optional command flags: time, when no subdirectories will be skipped. So consider enabling the ``incremental`` configuration option. +* If you don't want to record skipped files during an *incremental* import, use + the ``--incremental-skip-later`` flag which corresponds to the + ``incremental_skip_later`` configuration option. + Setting the flag prevents beets from persisting skip decisions during a + non-interactive import so that a user can make a decision regarding + previously skipped files during a subsequent interactive import run. + To record skipped files during incremental import explicitly, use the + ``--noincremental-skip-later`` option. 
+ * When beets applies metadata to your music, it will retain the value of any existing tags that weren't overwritten, and import them into the database. You may prefer to only use existing metadata for finding matches, and to erase it
beetbox/beets
a384bee6bfb97c8b07fd9435a00a3641cbb21e49
diff --git a/test/plugins/test_mbsubmit.py b/test/plugins/test_mbsubmit.py index 6f9c81c0..e495a73a 100644 --- a/test/plugins/test_mbsubmit.py +++ b/test/plugins/test_mbsubmit.py @@ -45,7 +45,7 @@ class MBSubmitPluginTest( # Manually build the string for comparing the output. tracklist = ( - "Print tracks? " + "Open files with Picard? " "01. Tag Title 1 - Tag Artist (0:01)\n" "02. Tag Title 2 - Tag Artist (0:01)" ) @@ -61,7 +61,9 @@ class MBSubmitPluginTest( self.importer.run() # Manually build the string for comparing the output. - tracklist = "Print tracks? " "02. Tag Title 2 - Tag Artist (0:01)" + tracklist = ( + "Open files with Picard? " "02. Tag Title 2 - Tag Artist (0:01)" + ) self.assertIn(tracklist, output.getvalue())
Expose import.incremental_skip_later config option as CLI option

To reduce the time needed for manual prompting during the import of a large number of tracks, and to import as much as possible without prompting, I run the import once with the `-i --quiet` options and the `import.incremental_skip_later` config option enabled, and then again without the `--quiet` option and with `incremental_skip_later` disabled in the configuration, letting beets prompt for the previously skipped tracks only. However, this currently requires me to change the configuration file during the workflow, which I'd like to avoid.

### Proposed solution

In addition to the configuration file, the `import.incremental_skip_later` option should be configurable via the CLI, e.g. as an `--incremental-skip-later` option of the `import` command.

Usage example:

```sh
beet import -siq --incremental-skip-later # no questions asked, tracks skipped in doubt
beet import -si # prompting for previously skipped tracks only
```

### Objective

#### Goals

Improving the import workflow / making the import command more flexible.

#### Non-goals

Changing the import behaviour internally.

#### Anti-goals

Any additional complexity or more prompts. :smile:
0.0
a384bee6bfb97c8b07fd9435a00a3641cbb21e49
[ "test/plugins/test_mbsubmit.py::MBSubmitPluginTest::test_print_tracks_output", "test/plugins/test_mbsubmit.py::MBSubmitPluginTest::test_print_tracks_output_as_tracks" ]
[]
{ "failed_lite_validators": [ "has_many_modified_files", "has_many_hunks" ], "has_test_patch": true, "is_lite": false }
2023-12-04 02:14:43+00:00
mit
1,321