Dataset schema (column name, type, and the observed range of lengths or values):

| Column | Type | Observed range |
| :--- | :--- | :--- |
| repo | string | 5–69 chars |
| instance_id | string | 11–74 chars |
| base_commit | string | 40 chars (full commit SHA) |
| patch | string | 169–823k chars |
| test_patch | string | 1 distinct value |
| problem_statement | string | 22–84.7k chars |
| hints_text | string | 0–274k chars |
| created_at | timestamp[ns] | 2013-07-02 23:04:30 – 2024-12-13 21:22:22 |
| environment_setup_commit | string | 1 distinct value |
| version | string | 1 distinct value |
| FAIL_TO_PASS | sequence | length 0 |
| PASS_TO_PASS | sequence | length 0 |
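To work with rows programmatically, here is a minimal loading sketch using the HuggingFace `datasets` library; the dataset ID is a placeholder, since this page does not state the actual hub path:

```python
# Minimal sketch: load the dataset and inspect one row's fields.
# "your-org/your-dataset" is a hypothetical hub ID -- substitute the
# real path of this dataset.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")
row = ds[0]
print(row["repo"], row["instance_id"], row["base_commit"])
print(row["patch"][:200])  # the gold patch is plain unified-diff text
```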
Example row:

- repo: `open-mmlab/mmyolo`
- instance_id: `open-mmlab__mmyolo-735`
- base_commit: `1aa1ecd27b0af8bf7c027bf09098213104364bff`
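Because `base_commit` pins the exact tree the gold patch applies to, a row like this one can be replayed locally. A minimal sketch, assuming a local clone of `open-mmlab/mmyolo` and the row's `patch` text saved to `fix.patch` (both names are illustrative):

```python
# Minimal sketch: check out the pinned base commit, then apply the
# row's gold patch. REPO_DIR and fix.patch are illustrative names.
import subprocess

REPO_DIR = "mmyolo"  # assumed local clone of open-mmlab/mmyolo
BASE_COMMIT = "1aa1ecd27b0af8bf7c027bf09098213104364bff"  # base_commit above

def run(*cmd: str) -> None:
    subprocess.run(cmd, cwd=REPO_DIR, check=True)

run("git", "checkout", BASE_COMMIT)
run("git", "apply", "--check", "fix.patch")  # dry run: verify it applies
run("git", "apply", "fix.patch")
```

The row's `patch` field follows in full below.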
diff --git a/configs/yolov5/README.md b/configs/yolov5/README.md index cc6eff2bc..c5980658e 100644 --- a/configs/yolov5/README.md +++ b/configs/yolov5/README.md @@ -53,6 +53,20 @@ YOLOv5-l-P6 model structure 7. The performance of `Mask Refine` training is for the weight performance officially released by YOLOv5. `Mask Refine` means refining bbox by mask while loading annotations and transforming after `YOLOv5RandomAffine`, `Copy Paste` means using `YOLOv5CopyPaste`. 8. `YOLOv5u` models use the same loss functions and split Detect head as `YOLOv8` models for improved performance, but only requires 300 epochs. +### COCO Instance segmentation + +| Backbone | Arch | size | SyncBN | AMP | Mem (GB) | Box AP | Mask AP | Config | Download | +| :-------------------: | :--: | :--: | :----: | :-: | :------: | :----: | :-----: | :--------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | +| YOLOv5-n | P5 | 640 | Yes | Yes | 3.3 | 27.9 | 23.7 | [config](./ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_104807-84cc9240.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_104807.log.json) | +| YOLOv5-s | P5 | 640 | Yes | Yes | 4.8 | 38.1 | 32.0 | [config](./ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance_20230426_012542-3e570436.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance_20230426_012542.log.json) | +| YOLOv5-s(non-overlap) | P5 | 640 | Yes | Yes | 4.8 | 38.0 | 32.1 | [config](./ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance_20230424_104642-6780d34e.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance_20230424_104642.log.json) | +| YOLOv5-m | P5 | 640 | Yes | Yes | 7.3 | 45.1 | 37.3 | [config](./ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_111529-ef5ba1a9.pth) \| 
[log](https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_111529.log.json) | + +**Note**: + +1. `Non-overlap` refers to the instance-level masks being stored in the format (num_instances, h, w) instead of (h, w). Storing masks in overlap format consumes less memory and GPU memory. +2. We found that the mAP of the N/S/M model is higher than the official version, but the L/X model is lower than the official version. We will resolve this issue as soon as possible. + ### VOC | Backbone | size | Batchsize | AMP | Mem (GB) | box AP(COCO metric) | Config | Download | diff --git a/configs/yolov5/ins_seg/yolov5_ins_l-v61_syncbn_fast_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_l-v61_syncbn_fast_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..dd15b1bf5 --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_l-v61_syncbn_fast_8xb16-300e_coco_instance.py @@ -0,0 +1,15 @@ +_base_ = './yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py' # noqa + +deepen_factor = 1.0 +widen_factor = 1.0 + +model = dict( + backbone=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + neck=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) diff --git a/configs/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..2951c9e3d --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py @@ -0,0 +1,88 @@ +_base_ = './yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py' # noqa + +# ========================modified parameters====================== +deepen_factor = 0.67 +widen_factor = 0.75 +lr_factor = 0.1 +affine_scale = 0.9 +loss_cls_weight = 0.3 +loss_obj_weight = 0.7 +mixup_prob = 0.1 + +# =======================Unmodified in most cases================== +num_classes = _base_.num_classes +num_det_layers = _base_.num_det_layers +img_scale = _base_.img_scale + +model = dict( + backbone=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + neck=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + bbox_head=dict( + head_module=dict(widen_factor=widen_factor), + loss_cls=dict(loss_weight=loss_cls_weight * + (num_classes / 80 * 3 / num_det_layers)), + loss_obj=dict(loss_weight=loss_obj_weight * + ((img_scale[0] / 640)**2 * 3 / num_det_layers)))) + +pre_transform = _base_.pre_transform +albu_train_transforms = _base_.albu_train_transforms + +mosaic_affine_pipeline = [ + dict( + type='Mosaic', + img_scale=img_scale, + pad_val=114.0, + pre_transform=pre_transform), + dict( + type='YOLOv5RandomAffine', + max_rotate_degree=0.0, + max_shear_degree=0.0, + scaling_ratio_range=(1 - _base_.affine_scale, 1 + _base_.affine_scale), + border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2), + border_val=(114, 114, 114), + min_area_ratio=_base_.min_area_ratio, + max_aspect_ratio=_base_.max_aspect_ratio, + use_mask_refine=_base_.use_mask2refine), +] + +# enable mixup +train_pipeline = [ + *pre_transform, + *mosaic_affine_pipeline, + dict( + type='YOLOv5MixUp', + prob=mixup_prob, + pre_transform=[*pre_transform, *mosaic_affine_pipeline]), + # TODO: support mask transform in albu + # Geometric transformations are not supported in albu now. 
+ dict( + type='mmdet.Albu', + transforms=albu_train_transforms, + bbox_params=dict( + type='BboxParams', + format='pascal_voc', + label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), + keymap={ + 'img': 'image', + 'gt_bboxes': 'bboxes' + }), + dict(type='YOLOv5HSVRandomAug'), + dict(type='mmdet.RandomFlip', prob=0.5), + dict( + type='Polygon2Mask', + downsample_ratio=_base_.downsample_ratio, + mask_overlap=_base_.mask_overlap), + dict( + type='PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', + 'flip_direction')) +] + +train_dataloader = dict(dataset=dict(pipeline=train_pipeline)) +default_hooks = dict(param_scheduler=dict(lr_factor=lr_factor)) diff --git a/configs/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..e06130bd3 --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance.py @@ -0,0 +1,15 @@ +_base_ = './yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py' # noqa + +deepen_factor = 0.33 +widen_factor = 0.25 + +model = dict( + backbone=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + neck=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) diff --git a/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..bd73139e4 --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py @@ -0,0 +1,126 @@ +_base_ = '../yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py' # noqa + +# ========================modified parameters====================== +# YOLOv5RandomAffine +use_mask2refine = True +max_aspect_ratio = 100 +min_area_ratio = 0.01 +# Polygon2Mask +downsample_ratio = 4 +mask_overlap = True +# LetterResize +# half_pad_param: if set to True, top/bottom pad_param will be given by +# dividing padding_h by 2, and left/right pad_param by dividing +# padding_w by 2. If set to False, pad_param is +# in int format. We recommend setting this to False for object +# detection tasks, and True for instance segmentation tasks. +# Defaults to False. +half_pad_param = True + +# Testing takes a long time due to model_test_cfg. +# If you want to speed it up, you can increase score_thr +# or decrease nms_pre and max_per_img +model_test_cfg = dict( + multi_label=True, + nms_pre=30000, + min_bbox_size=0, + score_thr=0.001, + nms=dict(type='nms', iou_threshold=0.6), + max_per_img=300, + mask_thr_binary=0.5, + # fast_test: Whether to use fast test methods. When set + # to False, the implementation here is the same as the + # official one, with higher mAP. If set to True, the mask will first + # be upsampled to the original image shape through PyTorch, and + # then use mask_thr_binary to determine which pixels belong + # to the object. If set to False, will first use + # mask_thr_binary to determine which pixels belong to the + # object, and then use opencv to upsample the mask to the original + # image shape. Defaults to False.
+ fast_test=True) + +# ===============================Unmodified in most cases==================== +model = dict( + type='YOLODetector', + bbox_head=dict( + type='YOLOv5InsHead', + head_module=dict( + type='YOLOv5InsHeadModule', mask_channels=32, proto_channels=256), + mask_overlap=mask_overlap, + loss_mask=dict( + type='mmdet.CrossEntropyLoss', use_sigmoid=True, reduction='none'), + loss_mask_weight=0.05), + test_cfg=model_test_cfg) + +pre_transform = [ + dict(type='LoadImageFromFile', backend_args=_base_.backend_args), + dict( + type='LoadAnnotations', + with_bbox=True, + with_mask=True, + mask2bbox=use_mask2refine) +] + +train_pipeline = [ + *pre_transform, + dict( + type='Mosaic', + img_scale=_base_.img_scale, + pad_val=114.0, + pre_transform=pre_transform), + dict( + type='YOLOv5RandomAffine', + max_rotate_degree=0.0, + max_shear_degree=0.0, + scaling_ratio_range=(1 - _base_.affine_scale, 1 + _base_.affine_scale), + border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2), + border_val=(114, 114, 114), + min_area_ratio=min_area_ratio, + max_aspect_ratio=max_aspect_ratio, + use_mask_refine=use_mask2refine), + # TODO: support mask transform in albu + # Geometric transformations are not supported in albu now. + dict( + type='mmdet.Albu', + transforms=_base_.albu_train_transforms, + bbox_params=dict( + type='BboxParams', + format='pascal_voc', + label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), + keymap={ + 'img': 'image', + 'gt_bboxes': 'bboxes', + }), + dict(type='YOLOv5HSVRandomAug'), + dict(type='mmdet.RandomFlip', prob=0.5), + dict( + type='Polygon2Mask', + downsample_ratio=downsample_ratio, + mask_overlap=mask_overlap), + dict( + type='PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', + 'flip_direction')) +] + +test_pipeline = [ + dict(type='LoadImageFromFile', backend_args=_base_.backend_args), + dict(type='YOLOv5KeepRatioResize', scale=_base_.img_scale), + dict( + type='LetterResize', + scale=_base_.img_scale, + allow_scale_up=False, + half_pad_param=half_pad_param, + pad_val=dict(img=114)), + dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), + dict( + type='mmdet.PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', + 'scale_factor', 'pad_param')) +] + +train_dataloader = dict(dataset=dict(pipeline=train_pipeline)) +val_dataloader = dict(dataset=dict(pipeline=test_pipeline)) +test_dataloader = val_dataloader + +val_evaluator = dict(metric=['bbox', 'segm']) +test_evaluator = val_evaluator diff --git a/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..83b48cab6 --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance.py @@ -0,0 +1,49 @@ +_base_ = './yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py' # noqa + +# ========================modified parameters====================== +mask_overlap = False # Polygon2Mask + +# ===============================Unmodified in most cases==================== +model = dict(bbox_head=dict(mask_overlap=mask_overlap)) + +train_pipeline = [ + *_base_.pre_transform, + dict( + type='Mosaic', + img_scale=_base_.img_scale, + pad_val=114.0, + pre_transform=_base_.pre_transform), + dict( + type='YOLOv5RandomAffine', + max_rotate_degree=0.0, + max_shear_degree=0.0, + scaling_ratio_range=(1 - _base_.affine_scale, 1 + _base_.affine_scale), + 
border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2), + border_val=(114, 114, 114), + min_area_ratio=_base_.min_area_ratio, + max_aspect_ratio=_base_.max_aspect_ratio, + use_mask_refine=True), + dict( + type='mmdet.Albu', + transforms=_base_.albu_train_transforms, + bbox_params=dict( + type='BboxParams', + format='pascal_voc', + label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), + keymap={ + 'img': 'image', + 'gt_bboxes': 'bboxes', + }), + dict(type='YOLOv5HSVRandomAug'), + dict(type='mmdet.RandomFlip', prob=0.5), + dict( + type='Polygon2Mask', + downsample_ratio=_base_.downsample_ratio, + mask_overlap=mask_overlap), + dict( + type='PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', + 'flip_direction')) +] + +train_dataloader = dict(dataset=dict(pipeline=train_pipeline)) diff --git a/configs/yolov5/ins_seg/yolov5_ins_x-v61_syncbn_fast_8xb16-300e_coco_instance.py b/configs/yolov5/ins_seg/yolov5_ins_x-v61_syncbn_fast_8xb16-300e_coco_instance.py new file mode 100644 index 000000000..e08d43047 --- /dev/null +++ b/configs/yolov5/ins_seg/yolov5_ins_x-v61_syncbn_fast_8xb16-300e_coco_instance.py @@ -0,0 +1,15 @@ +_base_ = './yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py' # noqa + +deepen_factor = 1.33 +widen_factor = 1.25 + +model = dict( + backbone=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + neck=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) diff --git a/configs/yolov5/metafile.yml b/configs/yolov5/metafile.yml index bfa92bdbe..97a5416bf 100644 --- a/configs/yolov5/metafile.yml +++ b/configs/yolov5/metafile.yml @@ -248,3 +248,67 @@ Models: Metrics: box AP: 50.9 Weights: https://download.openmmlab.com/mmyolo/v0/yolov5/mask_refine/yolov5_x_mask-refine-v61_syncbn_fast_8xb16-300e_coco/yolov5_x_mask-refine-v61_syncbn_fast_8xb16-300e_coco_20230305_154321-07edeb62.pth + - Name: yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance + In Collection: YOLOv5 + Config: configs/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance.py + Metadata: + Training Memory (GB): 3.3 + Epochs: 300 + Results: + - Task: Object Detection + Dataset: COCO + Metrics: + box AP: 27.9 + - Task: Instance Segmentation + Dataset: COCO + Metrics: + mask AP: 23.7 + Weights: https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_n-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_104807-84cc9240.pth + - Name: yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance + In Collection: YOLOv5 + Config: configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance.py + Metadata: + Training Memory (GB): 4.8 + Epochs: 300 + Results: + - Task: Object Detection + Dataset: COCO + Metrics: + box AP: 38.1 + - Task: Instance Segmentation + Dataset: COCO + Metrics: + mask AP: 32.0 + Weights: https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_8xb16-300e_coco_instance_20230426_012542-3e570436.pth + - Name: yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance + In Collection: YOLOv5 + Config: configs/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance.py + Metadata: + Training Memory (GB): 4.8 + Epochs: 300 + Results: + - Task: Object Detection + Dataset: COCO + Metrics: + box AP: 38.0 + - Task: Instance Segmentation + Dataset: COCO + Metrics: + mask AP: 32.1 + 
Weights: https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance/yolov5_ins_s-v61_syncbn_fast_non_overlap_8xb16-300e_coco_instance_20230424_104642-6780d34e.pth + - Name: yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance + In Collection: YOLOv5 + Config: configs/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance.py + Metadata: + Training Memory (GB): 7.3 + Epochs: 300 + Results: + - Task: Object Detection + Dataset: COCO + Metrics: + box AP: 45.1 + - Task: Instance Segmentation + Dataset: COCO + Metrics: + mask AP: 37.3 + Weights: https://download.openmmlab.com/mmyolo/v0/yolov5/ins_seg/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance/yolov5_ins_m-v61_syncbn_fast_8xb16-300e_coco_instance_20230424_111529-ef5ba1a9.pth diff --git a/mmyolo/datasets/transforms/__init__.py b/mmyolo/datasets/transforms/__init__.py index 58f4e6fdb..6719ac337 100644 --- a/mmyolo/datasets/transforms/__init__.py +++ b/mmyolo/datasets/transforms/__init__.py @@ -1,14 +1,16 @@ # Copyright (c) OpenMMLab. All rights reserved. +from .formatting import PackDetInputs from .mix_img_transforms import Mosaic, Mosaic9, YOLOv5MixUp, YOLOXMixUp -from .transforms import (LetterResize, LoadAnnotations, PPYOLOERandomCrop, - PPYOLOERandomDistort, RegularizeRotatedBox, - RemoveDataElement, YOLOv5CopyPaste, - YOLOv5HSVRandomAug, YOLOv5KeepRatioResize, - YOLOv5RandomAffine) +from .transforms import (LetterResize, LoadAnnotations, Polygon2Mask, + PPYOLOERandomCrop, PPYOLOERandomDistort, + RegularizeRotatedBox, RemoveDataElement, + YOLOv5CopyPaste, YOLOv5HSVRandomAug, + YOLOv5KeepRatioResize, YOLOv5RandomAffine) __all__ = [ 'YOLOv5KeepRatioResize', 'LetterResize', 'Mosaic', 'YOLOXMixUp', 'YOLOv5MixUp', 'YOLOv5HSVRandomAug', 'LoadAnnotations', 'YOLOv5RandomAffine', 'PPYOLOERandomDistort', 'PPYOLOERandomCrop', - 'Mosaic9', 'YOLOv5CopyPaste', 'RemoveDataElement', 'RegularizeRotatedBox' + 'Mosaic9', 'YOLOv5CopyPaste', 'RemoveDataElement', 'RegularizeRotatedBox', + 'Polygon2Mask', 'PackDetInputs' ] diff --git a/mmyolo/datasets/transforms/formatting.py b/mmyolo/datasets/transforms/formatting.py new file mode 100644 index 000000000..0185d78c3 --- /dev/null +++ b/mmyolo/datasets/transforms/formatting.py @@ -0,0 +1,102 @@ +# Copyright (c) OpenMMLab. All rights reserved. +import numpy as np +from mmcv.transforms import to_tensor +from mmdet.datasets.transforms import PackDetInputs as MMDET_PackDetInputs +from mmdet.structures import DetDataSample +from mmdet.structures.bbox import BaseBoxes +from mmengine.structures import InstanceData, PixelData + +from mmyolo.registry import TRANSFORMS + + [email protected]_module() +class PackDetInputs(MMDET_PackDetInputs): + """Pack the input data for detection / semantic segmentation / + panoptic segmentation. + + Compared to mmdet, we just add the `gt_panoptic_seg` field and logic. + """ + + def transform(self, results: dict) -> dict: + """Method to pack the input data. + Args: + results (dict): Result dict from the data pipeline. + Returns: + dict: + - 'inputs' (obj:`torch.Tensor`): The forward data of models. + - 'data_sample' (obj:`DetDataSample`): The annotation info of the + sample.
+ """ + packed_results = dict() + if 'img' in results: + img = results['img'] + if len(img.shape) < 3: + img = np.expand_dims(img, -1) + # To improve the computational speed by by 3-5 times, apply: + # If image is not contiguous, use + # `numpy.transpose()` followed by `numpy.ascontiguousarray()` + # If image is already contiguous, use + # `torch.permute()` followed by `torch.contiguous()` + # Refer to https://github.com/open-mmlab/mmdetection/pull/9533 + # for more details + if not img.flags.c_contiguous: + img = np.ascontiguousarray(img.transpose(2, 0, 1)) + img = to_tensor(img) + else: + img = to_tensor(img).permute(2, 0, 1).contiguous() + + packed_results['inputs'] = img + + if 'gt_ignore_flags' in results: + valid_idx = np.where(results['gt_ignore_flags'] == 0)[0] + ignore_idx = np.where(results['gt_ignore_flags'] == 1)[0] + + data_sample = DetDataSample() + instance_data = InstanceData() + ignore_instance_data = InstanceData() + + for key in self.mapping_table.keys(): + if key not in results: + continue + if key == 'gt_masks' or isinstance(results[key], BaseBoxes): + if 'gt_ignore_flags' in results: + instance_data[ + self.mapping_table[key]] = results[key][valid_idx] + ignore_instance_data[ + self.mapping_table[key]] = results[key][ignore_idx] + else: + instance_data[self.mapping_table[key]] = results[key] + else: + if 'gt_ignore_flags' in results: + instance_data[self.mapping_table[key]] = to_tensor( + results[key][valid_idx]) + ignore_instance_data[self.mapping_table[key]] = to_tensor( + results[key][ignore_idx]) + else: + instance_data[self.mapping_table[key]] = to_tensor( + results[key]) + data_sample.gt_instances = instance_data + data_sample.ignored_instances = ignore_instance_data + + if 'gt_seg_map' in results: + gt_sem_seg_data = dict( + sem_seg=to_tensor(results['gt_seg_map'][None, ...].copy())) + data_sample.gt_sem_seg = PixelData(**gt_sem_seg_data) + + # In order to unify the support for the overlap mask annotations + # i.e. mask overlap annotations in (h,w) format, + # we use the gt_panoptic_seg field to unify the modeling + if 'gt_panoptic_seg' in results: + data_sample.gt_panoptic_seg = PixelData( + pan_seg=results['gt_panoptic_seg']) + + img_meta = {} + for key in self.meta_keys: + assert key in results, f'`{key}` is not found in `results`, ' \ + f'the valid keys are {list(results)}.' 
+ img_meta[key] = results[key] + + data_sample.set_metainfo(img_meta) + packed_results['data_samples'] = data_sample + + return packed_results diff --git a/mmyolo/datasets/transforms/mix_img_transforms.py b/mmyolo/datasets/transforms/mix_img_transforms.py index 4a25f6f7e..4753ecc3a 100644 --- a/mmyolo/datasets/transforms/mix_img_transforms.py +++ b/mmyolo/datasets/transforms/mix_img_transforms.py @@ -374,7 +374,7 @@ def mix_img_transform(self, results: dict) -> dict: mosaic_ignore_flags.append(gt_ignore_flags_i) if with_mask and results_patch.get('gt_masks', None) is not None: gt_masks_i = results_patch['gt_masks'] - gt_masks_i = gt_masks_i.rescale(float(scale_ratio_i)) + gt_masks_i = gt_masks_i.resize(img_i.shape[:2]) gt_masks_i = gt_masks_i.translate( out_shape=(int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), diff --git a/mmyolo/datasets/transforms/transforms.py b/mmyolo/datasets/transforms/transforms.py index 2cdc6a5f8..30dfdb3f7 100644 --- a/mmyolo/datasets/transforms/transforms.py +++ b/mmyolo/datasets/transforms/transforms.py @@ -13,7 +13,7 @@ from mmdet.datasets.transforms import Resize as MMDET_Resize from mmdet.structures.bbox import (HorizontalBoxes, autocast_box_type, get_box_type) -from mmdet.structures.mask import PolygonMasks +from mmdet.structures.mask import PolygonMasks, polygon_to_bitmap from numpy import random from mmyolo.registry import TRANSFORMS @@ -99,17 +99,21 @@ def _resize_img(self, results: dict): self.scale) if ratio != 1: - # resize image according to the ratio - image = mmcv.imrescale( + # resize image according to the shape + # NOTE: Our testing on COCO shows that this modification + # does not affect the results. + # If you find that it has an effect on your results, + # please feel free to contact us. + image = mmcv.imresize( img=image, - scale=ratio, + size=(int(original_w * ratio), int(original_h * ratio)), interpolation='area' if ratio < 1 else 'bilinear', backend=self.backend) resized_h, resized_w = image.shape[:2] - scale_ratio = resized_h / original_h - - scale_factor = (scale_ratio, scale_ratio) + scale_ratio_h = resized_h / original_h + scale_ratio_w = resized_w / original_w + scale_factor = (scale_ratio_w, scale_ratio_h) results['img'] = image results['img_shape'] = image.shape[:2] @@ -142,6 +146,11 @@ class LetterResize(MMDET_Resize): stretch_only (bool): Whether stretch to the specified size directly. Defaults to False allow_scale_up (bool): Allow scale up when ratio > 1. Defaults to True + half_pad_param (bool): If set to True, top/bottom pad_param will + be given by dividing padding_h by 2, and left/right pad_param + by dividing padding_w by 2. If set to False, pad_param is + in int format. We recommend setting this to False for object + detection tasks, and True for instance segmentation tasks. + Defaults to False.
""" def __init__(self, @@ -150,6 +159,7 @@ def __init__(self, use_mini_pad: bool = False, stretch_only: bool = False, allow_scale_up: bool = True, + half_pad_param: bool = False, **kwargs): super().__init__(scale=scale, keep_ratio=True, **kwargs) @@ -162,6 +172,7 @@ def __init__(self, self.use_mini_pad = use_mini_pad self.stretch_only = stretch_only self.allow_scale_up = allow_scale_up + self.half_pad_param = half_pad_param def _resize_img(self, results: dict): """Resize images with ``results['scale']``.""" @@ -212,7 +223,8 @@ def _resize_img(self, results: dict): interpolation=self.interpolation, backend=self.backend) - scale_factor = (ratio[1], ratio[0]) # mmcv scale factor is (w, h) + scale_factor = (no_pad_shape[1] / image_shape[1], + no_pad_shape[0] / image_shape[0]) if 'scale_factor' in results: results['scale_factor_origin'] = results['scale_factor'] @@ -246,7 +258,15 @@ def _resize_img(self, results: dict): if 'pad_param' in results: results['pad_param_origin'] = results['pad_param'] * \ np.repeat(ratio, 2) - results['pad_param'] = np.array(padding_list, dtype=np.float32) + + if self.half_pad_param: + results['pad_param'] = np.array( + [padding_h / 2, padding_h / 2, padding_w / 2, padding_w / 2], + dtype=np.float32) + else: + # We found in object detection, using padding list with + # int type can get higher mAP. + results['pad_param'] = np.array(padding_list, dtype=np.float32) def _resize_masks(self, results: dict): """Resize masks with ``results['scale']``""" @@ -370,13 +390,26 @@ def __repr__(self) -> str: class LoadAnnotations(MMDET_LoadAnnotations): """Because the yolo series does not need to consider ignore bboxes for the time being, in order to speed up the pipeline, it can be excluded in - advance.""" + advance. + + Args: + mask2bbox (bool): Whether to use mask annotation to get bbox. + Defaults to False. + poly2mask (bool): Whether to transform the polygons to bitmaps. + Defaults to False. + merge_polygons (bool): Whether to merge polygons into one polygon. + If merged, the storage structure is simpler and training is more + effcient, especially if the mask inside a bbox is divided into + multiple polygons. Defaults to True. + """ def __init__(self, mask2bbox: bool = False, poly2mask: bool = False, - **kwargs) -> None: + merge_polygons: bool = True, + **kwargs): self.mask2bbox = mask2bbox + self.merge_polygons = merge_polygons assert not poly2mask, 'Does not support BitmapMasks considering ' \ 'that bitmap consumes more memory.' super().__init__(poly2mask=poly2mask, **kwargs) @@ -485,6 +518,8 @@ def _load_masks(self, results: dict) -> None: # ignore self._mask_ignore_flag.append(0) else: + if len(gt_mask) > 1 and self.merge_polygons: + gt_mask = self.merge_multi_segment(gt_mask) gt_masks.append(gt_mask) gt_ignore_flags.append(instance['ignore_flag']) self._mask_ignore_flag.append(1) @@ -503,6 +538,79 @@ def _load_masks(self, results: dict) -> None: gt_masks = PolygonMasks([mask for mask in gt_masks], h, w) results['gt_masks'] = gt_masks + def merge_multi_segment(self, + gt_masks: List[np.ndarray]) -> List[np.ndarray]: + """Merge multi segments to one list. + + Find the coordinates with min distance between each segment, + then connect these coordinates with one thin line to merge all + segments into one. + Args: + gt_masks(List(np.array)): + original segmentations in coco's json file. + like [segmentation1, segmentation2,...], + each segmentation is a list of coordinates. 
+ Return: + gt_masks(List(np.array)): merged gt_masks + """ + s = [] + segments = [np.array(i).reshape(-1, 2) for i in gt_masks] + idx_list = [[] for _ in range(len(gt_masks))] + + # record the indexes with min distance between each segment + for i in range(1, len(segments)): + idx1, idx2 = self.min_index(segments[i - 1], segments[i]) + idx_list[i - 1].append(idx1) + idx_list[i].append(idx2) + + # use two rounds to connect all the segments + # first round: first to end, i.e. A->B(partial)->C + # second round: end to first, i.e. C->B(remaining)->A + for k in range(2): + # forward first round + if k == 0: + for i, idx in enumerate(idx_list): + # middle segments have two indexes + # reverse the index of middle segments + if len(idx) == 2 and idx[0] > idx[1]: + idx = idx[::-1] + segments[i] = segments[i][::-1, :] + # add the idx[0] point to connect the next segment + segments[i] = np.roll(segments[i], -idx[0], axis=0) + segments[i] = np.concatenate( + [segments[i], segments[i][:1]]) + # deal with the first segment and the last one + if i in [0, len(idx_list) - 1]: + s.append(segments[i]) + # deal with the middle segment + # Note that in the first round, only partial segments + # are appended. + else: + idx = [0, idx[1] - idx[0]] + s.append(segments[i][idx[0]:idx[1] + 1]) + # forward second round + else: + for i in range(len(idx_list) - 1, -1, -1): + # deal with the middle segment + # append the remaining points + if i not in [0, len(idx_list) - 1]: + idx = idx_list[i] + nidx = abs(idx[1] - idx[0]) + s.append(segments[i][nidx:]) + return [np.concatenate(s).reshape(-1, )] + + def min_index(self, arr1: np.ndarray, arr2: np.ndarray) -> Tuple[int, int]: + """Find a pair of indexes with the shortest distance. + + Args: + arr1: (N, 2). + arr2: (M, 2). + Return: + tuple: a pair of indexes. + """ + dis = ((arr1[:, None, :] - arr2[None, :, :])**2).sum(-1) + return np.unravel_index(np.argmin(dis, axis=None), dis.shape) + def __repr__(self) -> str: repr_str = self.__class__.__name__ repr_str += f'(with_bbox={self.with_bbox}, ' @@ -571,7 +679,7 @@ class YOLOv5RandomAffine(BaseTransform): min_area_ratio (float): Threshold of area ratio between original bboxes and wrapped bboxes. If smaller than this value, the box will be removed. Defaults to 0.1. - use_mask_refine (bool): Whether to refine bbox by mask. + use_mask_refine (bool): Whether to refine bbox by mask. Deprecated. max_aspect_ratio (float): Aspect ratio of width and height threshold to filter bboxes. If max(h/w, w/h) larger than this value, the box will be removed. Defaults to 20. @@ -603,6 +711,7 @@ def __init__(self, self.bbox_clip_border = bbox_clip_border self.min_bbox_size = min_bbox_size self.min_area_ratio = min_area_ratio + # The use_mask_refine parameter has been deprecated. self.use_mask_refine = use_mask_refine self.max_aspect_ratio = max_aspect_ratio self.resample_num = resample_num @@ -644,7 +753,7 @@ def transform(self, results: dict) -> dict: num_bboxes = len(bboxes) if num_bboxes: orig_bboxes = bboxes.clone() - if self.use_mask_refine and 'gt_masks' in results: + if 'gt_masks' in results: # If the dataset has annotations of mask, # the mask will be used to refine bbox.
gt_masks = results['gt_masks'] @@ -654,10 +763,13 @@ def transform(self, results: dict) -> dict: img_h, img_w) # refine bboxes by masks - bboxes = gt_masks.get_bboxes(dst_type='hbox') + bboxes = self.segment2box(gt_masks, height, width) # filter bboxes outside image valid_index = self.filter_gt_bboxes(orig_bboxes, bboxes).numpy() + if self.bbox_clip_border: + bboxes.clip_([height - 1e-3, width - 1e-3]) + gt_masks = self.clip_polygons(gt_masks, height, width) results['gt_masks'] = gt_masks[valid_index] else: bboxes.project_(warp_matrix) @@ -671,18 +783,84 @@ def transform(self, results: dict) -> dict: # otherwise it will raise out of bounds when len(valid_index)=1 valid_index = self.filter_gt_bboxes(orig_bboxes, bboxes).numpy() - if 'gt_masks' in results: - results['gt_masks'] = PolygonMasks( - results['gt_masks'].masks, img_h, img_w) results['gt_bboxes'] = bboxes[valid_index] results['gt_bboxes_labels'] = results['gt_bboxes_labels'][ valid_index] results['gt_ignore_flags'] = results['gt_ignore_flags'][ valid_index] + else: + if 'gt_masks' in results: + results['gt_masks'] = PolygonMasks([], img_h, img_w) return results + def segment2box(self, gt_masks: PolygonMasks, height: int, + width: int) -> HorizontalBoxes: + """ + Convert 1 segment label to 1 box label, applying inside-image + constraint i.e. (xy1, xy2, ...) to (xyxy) + Args: + gt_masks (torch.Tensor): the segment label + width (int): the width of the image. Defaults to 640 + height (int): The height of the image. Defaults to 640 + Returns: + HorizontalBoxes: the clip bboxes from gt_masks. + """ + bboxes = [] + for _, poly_per_obj in enumerate(gt_masks): + # simply use a number that is big enough for comparison with + # coordinates + xy_min = np.array([width * 2, height * 2], dtype=np.float32) + xy_max = np.zeros(2, dtype=np.float32) - 1 + + for p in poly_per_obj: + xy = np.array(p).reshape(-1, 2).astype(np.float32) + x, y = xy.T + inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) + x, y = x[inside], y[inside] + if not any(x): + continue + xy = np.stack([x, y], axis=0).T + + xy_min = np.minimum(xy_min, np.min(xy, axis=0)) + xy_max = np.maximum(xy_max, np.max(xy, axis=0)) + if xy_max[0] == -1: + bbox = np.zeros(4, dtype=np.float32) + else: + bbox = np.concatenate([xy_min, xy_max], axis=0) + bboxes.append(bbox) + + return HorizontalBoxes(np.stack(bboxes, axis=0)) + + # TODO: Move to mmdet + def clip_polygons(self, gt_masks: PolygonMasks, height: int, + width: int) -> PolygonMasks: + """Function to clip points of polygons with height and width. + + Args: + gt_masks (PolygonMasks): Annotations of instance segmentation. + height (int): height of clip border. + width (int): width of clip border. + Return: + clipped_masks (PolygonMasks): + Clip annotations of instance segmentation. 
+ """ + if len(gt_masks) == 0: + clipped_masks = PolygonMasks([], height, width) + else: + clipped_masks = [] + for poly_per_obj in gt_masks: + clipped_poly_per_obj = [] + for p in poly_per_obj: + p = p.copy() + p[0::2] = p[0::2].clip(0, width) + p[1::2] = p[1::2].clip(0, height) + clipped_poly_per_obj.append(p) + clipped_masks.append(clipped_poly_per_obj) + clipped_masks = PolygonMasks(clipped_masks, height, width) + return clipped_masks + @staticmethod def warp_poly(poly: np.ndarray, warp_matrix: np.ndarray, img_w: int, img_h: int) -> np.ndarray: @@ -707,10 +885,7 @@ def warp_poly(poly: np.ndarray, warp_matrix: np.ndarray, img_w: int, poly = poly @ warp_matrix.T poly = poly[:, :2] / poly[:, 2:3] - # filter point outside image - x, y = poly.T - valid_ind_point = (x >= 0) & (y >= 0) & (x <= img_w) & (y <= img_h) - return poly[valid_ind_point].reshape(-1) + return poly.reshape(-1) def warp_mask(self, gt_masks: PolygonMasks, warp_matrix: np.ndarray, img_w: int, img_h: int) -> PolygonMasks: @@ -1374,7 +1549,7 @@ def transform(self, results: dict) -> Union[dict, None]: if len(results.get('gt_masks', [])) == 0: return results gt_masks = results['gt_masks'] - assert isinstance(gt_masks, PolygonMasks),\ + assert isinstance(gt_masks, PolygonMasks), \ 'only support type of PolygonMasks,' \ ' but get type: %s' % type(gt_masks) gt_bboxes = results['gt_bboxes'] @@ -1555,3 +1730,145 @@ def transform(self, results: dict) -> dict: results['gt_bboxes'] = self.box_type( results['gt_bboxes'].regularize_boxes(self.angle_version)) return results + + [email protected]_module() +class Polygon2Mask(BaseTransform): + """Polygons to bitmaps in YOLOv5. + + Args: + downsample_ratio (int): Downsample ratio of mask. + mask_overlap (bool): Whether to use maskoverlap in mask process. + When set to True, the implementation here is the same as the + official, with higher training speed. If set to True, all gt masks + will compress into one overlap mask, the value of mask indicates + the index of gt masks. If set to False, one mask is a binary mask. + Default to True. + coco_style (bool): Whether to use coco_style to convert the polygons to + bitmaps. Note that this option is only used to test if there is an + improvement in training speed and we recommend setting it to False. + """ + + def __init__(self, + downsample_ratio: int = 4, + mask_overlap: bool = True, + coco_style: bool = False): + self.downsample_ratio = downsample_ratio + self.mask_overlap = mask_overlap + self.coco_style = coco_style + + def polygon2mask(self, + img_shape: Tuple[int, int], + polygons: np.ndarray, + color: int = 1) -> np.ndarray: + """ + Args: + img_shape (tuple): The image size. + polygons (np.ndarray): [N, M], N is the number of polygons, + M is the number of points(Be divided by 2). + color (int): color in fillPoly. + Return: + np.ndarray: the overlap mask. 
+ """ + nh, nw = (img_shape[0] // self.downsample_ratio, + img_shape[1] // self.downsample_ratio) + if self.coco_style: + # This practice can lead to the loss of small objects + # polygons = polygons.resize((nh, nw)).masks + # polygons = np.asarray(polygons).reshape(-1) + # mask = polygon_to_bitmap([polygons], nh, nw) + + polygons = np.asarray(polygons).reshape(-1) + mask = polygon_to_bitmap([polygons], img_shape[0], + img_shape[1]).astype(np.uint8) + mask = mmcv.imresize(mask, (nw, nh)) + else: + mask = np.zeros(img_shape, dtype=np.uint8) + polygons = np.asarray(polygons) + polygons = polygons.astype(np.int32) + shape = polygons.shape + polygons = polygons.reshape(shape[0], -1, 2) + cv2.fillPoly(mask, polygons, color=color) + # NOTE: fillPoly firstly then resize is trying the keep the same + # way of loss calculation when mask-ratio=1. + mask = mmcv.imresize(mask, (nw, nh)) + return mask + + def polygons2masks(self, + img_shape: Tuple[int, int], + polygons: PolygonMasks, + color: int = 1) -> np.ndarray: + """Return a list of bitmap masks. + + Args: + img_shape (tuple): The image size. + polygons (PolygonMasks): The mask annotations. + color (int): color in fillPoly. + Return: + List[np.ndarray]: the list of masks in bitmaps. + """ + if self.coco_style: + nh, nw = (img_shape[0] // self.downsample_ratio, + img_shape[1] // self.downsample_ratio) + masks = polygons.resize((nh, nw)).to_ndarray() + return masks + else: + masks = [] + for si in range(len(polygons)): + mask = self.polygon2mask(img_shape, polygons[si], color) + masks.append(mask) + return np.array(masks) + + def polygons2masks_overlap( + self, img_shape: Tuple[int, int], + polygons: PolygonMasks) -> Tuple[np.ndarray, np.ndarray]: + """Return a overlap mask and the sorted idx of area. + + Args: + img_shape (tuple): The image size. + polygons (PolygonMasks): The mask annotations. + color (int): color in fillPoly. + Return: + Tuple[np.ndarray, np.ndarray]: + the overlap mask and the sorted idx of area. 
+ """ + masks = np.zeros((img_shape[0] // self.downsample_ratio, + img_shape[1] // self.downsample_ratio), + dtype=np.int32 if len(polygons) > 255 else np.uint8) + areas = [] + ms = [] + for si in range(len(polygons)): + mask = self.polygon2mask(img_shape, polygons[si], color=1) + ms.append(mask) + areas.append(mask.sum()) + areas = np.asarray(areas) + index = np.argsort(-areas) + ms = np.array(ms)[index] + for i in range(len(polygons)): + mask = ms[i] * (i + 1) + masks = masks + mask + masks = np.clip(masks, a_min=0, a_max=i + 1) + return masks, index + + def transform(self, results: dict) -> dict: + gt_masks = results['gt_masks'] + assert isinstance(gt_masks, PolygonMasks) + + if self.mask_overlap: + masks, sorted_idx = self.polygons2masks_overlap( + (gt_masks.height, gt_masks.width), gt_masks) + results['gt_bboxes'] = results['gt_bboxes'][sorted_idx] + results['gt_bboxes_labels'] = results['gt_bboxes_labels'][ + sorted_idx] + + # In this case we put gt_masks in gt_panoptic_seg + results.pop('gt_masks') + results['gt_panoptic_seg'] = torch.from_numpy(masks[None]) + else: + masks = self.polygons2masks((gt_masks.height, gt_masks.width), + gt_masks, + color=1) + masks = torch.from_numpy(masks) + # Consistent logic with mmdet + results['gt_masks'] = masks + return results diff --git a/mmyolo/datasets/utils.py b/mmyolo/datasets/utils.py index 62fe5484b..d50207c8d 100644 --- a/mmyolo/datasets/utils.py +++ b/mmyolo/datasets/utils.py @@ -4,6 +4,7 @@ import numpy as np import torch from mmengine.dataset import COLLATE_FUNCTIONS +from mmengine.dist import get_dist_info from ..registry import TASK_UTILS @@ -28,9 +29,10 @@ def yolov5_collate(data_batch: Sequence, gt_bboxes = datasamples.gt_instances.bboxes.tensor gt_labels = datasamples.gt_instances.labels if 'masks' in datasamples.gt_instances: - masks = datasamples.gt_instances.masks.to_tensor( - dtype=torch.bool, device=gt_bboxes.device) + masks = datasamples.gt_instances.masks batch_masks.append(masks) + if 'gt_panoptic_seg' in datasamples: + batch_masks.append(datasamples.gt_panoptic_seg.pan_seg) batch_idx = gt_labels.new_full((len(gt_labels), 1), i) bboxes_labels = torch.cat((batch_idx, gt_labels[:, None], gt_bboxes), dim=1) @@ -70,10 +72,14 @@ def __init__(self, img_size: int = 640, size_divisor: int = 32, extra_pad_ratio: float = 0.5): - self.batch_size = batch_size self.img_size = img_size self.size_divisor = size_divisor self.extra_pad_ratio = extra_pad_ratio + _, world_size = get_dist_info() + # During multi-gpu testing, the batchsize should be multiplied by + # worldsize, so that the number of batches can be calculated correctly. + # The index of batches will affect the calculation of batch shape. 
+ self.batch_size = batch_size * world_size def __call__(self, data_list: List[dict]) -> List[dict]: image_shapes = [] diff --git a/mmyolo/models/dense_heads/__init__.py b/mmyolo/models/dense_heads/__init__.py index a95abd611..ac65c42e5 100644 --- a/mmyolo/models/dense_heads/__init__.py +++ b/mmyolo/models/dense_heads/__init__.py @@ -5,6 +5,7 @@ from .rtmdet_rotated_head import (RTMDetRotatedHead, RTMDetRotatedSepBNHeadModule) from .yolov5_head import YOLOv5Head, YOLOv5HeadModule +from .yolov5_ins_head import YOLOv5InsHead, YOLOv5InsHeadModule from .yolov6_head import YOLOv6Head, YOLOv6HeadModule from .yolov7_head import YOLOv7Head, YOLOv7HeadModule, YOLOv7p6HeadModule from .yolov8_head import YOLOv8Head, YOLOv8HeadModule @@ -16,5 +17,5 @@ 'RTMDetSepBNHeadModule', 'YOLOv7Head', 'PPYOLOEHead', 'PPYOLOEHeadModule', 'YOLOv7HeadModule', 'YOLOv7p6HeadModule', 'YOLOv8Head', 'YOLOv8HeadModule', 'RTMDetRotatedHead', 'RTMDetRotatedSepBNHeadModule', 'RTMDetInsSepBNHead', - 'RTMDetInsSepBNHeadModule' + 'RTMDetInsSepBNHeadModule', 'YOLOv5InsHead', 'YOLOv5InsHeadModule' ] diff --git a/mmyolo/models/dense_heads/yolov5_head.py b/mmyolo/models/dense_heads/yolov5_head.py index c49d08518..fb24617fc 100644 --- a/mmyolo/models/dense_heads/yolov5_head.py +++ b/mmyolo/models/dense_heads/yolov5_head.py @@ -95,7 +95,12 @@ def init_weights(self): b = mi.bias.data.view(self.num_base_priors, -1) # obj (8 objects per 640 image) b.data[:, 4] += math.log(8 / (640 / s)**2) - b.data[:, 5:] += math.log(0.6 / (self.num_classes - 0.999999)) + # NOTE: The following initialization can only be performed on the + # bias of the category, if the following initialization is + # performed on the bias of mask coefficient, + # there will be a significant decrease in mask AP. + b.data[:, 5:5 + self.num_classes] += math.log( + 0.6 / (self.num_classes - 0.999999)) mi.bias.data = b.view(-1) diff --git a/mmyolo/models/dense_heads/yolov5_ins_head.py b/mmyolo/models/dense_heads/yolov5_ins_head.py new file mode 100644 index 000000000..df94f422e --- /dev/null +++ b/mmyolo/models/dense_heads/yolov5_ins_head.py @@ -0,0 +1,740 @@ +# Copyright (c) OpenMMLab. All rights reserved. +import copy +from typing import List, Optional, Sequence, Tuple, Union + +import mmcv +import torch +import torch.nn as nn +import torch.nn.functional as F +from mmcv.cnn import ConvModule +from mmdet.models.utils import filter_scores_and_topk, multi_apply +from mmdet.structures.bbox import bbox_cxcywh_to_xyxy +from mmdet.utils import ConfigType, OptInstanceList +from mmengine.config import ConfigDict +from mmengine.dist import get_dist_info +from mmengine.model import BaseModule +from mmengine.structures import InstanceData +from torch import Tensor + +from mmyolo.registry import MODELS +from ..utils import make_divisible +from .yolov5_head import YOLOv5Head, YOLOv5HeadModule + + +class ProtoModule(BaseModule): + """Mask Proto module for segmentation models of YOLOv5. + + Args: + in_channels (int): Number of channels in the input feature map. + middle_channels (int): Number of channels in the middle feature map. + mask_channels (int): Number of channels in the output mask feature + map. This is the channel count of the mask. + norm_cfg (:obj:`ConfigDict` or dict): Config dict for normalization + layer. Defaults to ``dict(type='BN', momentum=0.03, eps=0.001)``. + act_cfg (:obj:`ConfigDict` or dict): Config dict for activation layer. + Default: dict(type='SiLU', inplace=True). 
+ """ + + def __init__(self, + *args, + in_channels: int = 32, + middle_channels: int = 256, + mask_channels: int = 32, + norm_cfg: ConfigType = dict( + type='BN', momentum=0.03, eps=0.001), + act_cfg: ConfigType = dict(type='SiLU', inplace=True), + **kwargs): + super().__init__(*args, **kwargs) + self.conv1 = ConvModule( + in_channels, + middle_channels, + kernel_size=3, + padding=1, + norm_cfg=norm_cfg, + act_cfg=act_cfg) + self.upsample = nn.Upsample(scale_factor=2, mode='nearest') + self.conv2 = ConvModule( + middle_channels, + middle_channels, + kernel_size=3, + padding=1, + norm_cfg=norm_cfg, + act_cfg=act_cfg) + self.conv3 = ConvModule( + middle_channels, + mask_channels, + kernel_size=1, + norm_cfg=norm_cfg, + act_cfg=act_cfg) + + def forward(self, x: Tensor) -> Tensor: + return self.conv3(self.conv2(self.upsample(self.conv1(x)))) + + [email protected]_module() +class YOLOv5InsHeadModule(YOLOv5HeadModule): + """Detection and Instance Segmentation Head of YOLOv5. + + Args: + num_classes (int): Number of categories excluding the background + category. + mask_channels (int): Number of channels in the mask feature map. + This is the channel count of the mask. + proto_channels (int): Number of channels in the proto feature map. + widen_factor (float): Width multiplier, multiply number of + channels in each layer by this amount. Defaults to 1.0. + norm_cfg (:obj:`ConfigDict` or dict): Config dict for normalization + layer. Defaults to ``dict(type='BN', momentum=0.03, eps=0.001)``. + act_cfg (:obj:`ConfigDict` or dict): Config dict for activation layer. + Default: dict(type='SiLU', inplace=True). + """ + + def __init__(self, + *args, + num_classes: int, + mask_channels: int = 32, + proto_channels: int = 256, + widen_factor: float = 1.0, + norm_cfg: ConfigType = dict( + type='BN', momentum=0.03, eps=0.001), + act_cfg: ConfigType = dict(type='SiLU', inplace=True), + **kwargs): + self.mask_channels = mask_channels + self.num_out_attrib_with_proto = 5 + num_classes + mask_channels + self.proto_channels = make_divisible(proto_channels, widen_factor) + self.norm_cfg = norm_cfg + self.act_cfg = act_cfg + super().__init__( + *args, + num_classes=num_classes, + widen_factor=widen_factor, + **kwargs) + + def _init_layers(self): + """initialize conv layers in YOLOv5 Ins head.""" + self.convs_pred = nn.ModuleList() + for i in range(self.num_levels): + conv_pred = nn.Conv2d( + self.in_channels[i], + self.num_base_priors * self.num_out_attrib_with_proto, 1) + self.convs_pred.append(conv_pred) + + self.proto_pred = ProtoModule( + in_channels=self.in_channels[0], + middle_channels=self.proto_channels, + mask_channels=self.mask_channels, + norm_cfg=self.norm_cfg, + act_cfg=self.act_cfg) + + def forward(self, x: Tuple[Tensor]) -> Tuple[List]: + """Forward features from the upstream network. + + Args: + x (Tuple[Tensor]): Features from the upstream network, each is + a 4D-tensor. + Returns: + Tuple[List]: A tuple of multi-level classification scores, bbox + predictions, objectnesses, and mask predictions. 
+ """ + assert len(x) == self.num_levels + cls_scores, bbox_preds, objectnesses, coeff_preds = multi_apply( + self.forward_single, x, self.convs_pred) + mask_protos = self.proto_pred(x[0]) + return cls_scores, bbox_preds, objectnesses, coeff_preds, mask_protos + + def forward_single( + self, x: Tensor, + convs_pred: nn.Module) -> Tuple[Tensor, Tensor, Tensor, Tensor]: + """Forward feature of a single scale level.""" + + pred_map = convs_pred(x) + bs, _, ny, nx = pred_map.shape + pred_map = pred_map.view(bs, self.num_base_priors, + self.num_out_attrib_with_proto, ny, nx) + + cls_score = pred_map[:, :, 5:self.num_classes + 5, + ...].reshape(bs, -1, ny, nx) + bbox_pred = pred_map[:, :, :4, ...].reshape(bs, -1, ny, nx) + objectness = pred_map[:, :, 4:5, ...].reshape(bs, -1, ny, nx) + coeff_pred = pred_map[:, :, self.num_classes + 5:, + ...].reshape(bs, -1, ny, nx) + + return cls_score, bbox_pred, objectness, coeff_pred + + [email protected]_module() +class YOLOv5InsHead(YOLOv5Head): + """YOLOv5 Instance Segmentation and Detection head. + + Args: + mask_overlap(bool): Defaults to True. + loss_mask (:obj:`ConfigDict` or dict): Config of mask loss. + loss_mask_weight (float): The weight of mask loss. + """ + + def __init__(self, + *args, + mask_overlap: bool = True, + loss_mask: ConfigType = dict( + type='mmdet.CrossEntropyLoss', + use_sigmoid=True, + reduction='none'), + loss_mask_weight=0.05, + **kwargs): + super().__init__(*args, **kwargs) + self.mask_overlap = mask_overlap + self.loss_mask: nn.Module = MODELS.build(loss_mask) + self.loss_mask_weight = loss_mask_weight + + def loss(self, x: Tuple[Tensor], batch_data_samples: Union[list, + dict]) -> dict: + """Perform forward propagation and loss calculation of the detection + head on the features of the upstream network. + + Args: + x (tuple[Tensor]): Features from the upstream network, each is + a 4D-tensor. + batch_data_samples (List[:obj:`DetDataSample`], dict): The Data + Samples. It usually includes information such as + `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. + + Returns: + dict: A dictionary of loss components. + """ + + if isinstance(batch_data_samples, list): + # TODO: support non-fast version ins segmention + raise NotImplementedError + else: + outs = self(x) + # Fast version + loss_inputs = outs + (batch_data_samples['bboxes_labels'], + batch_data_samples['masks'], + batch_data_samples['img_metas']) + losses = self.loss_by_feat(*loss_inputs) + + return losses + + def loss_by_feat( + self, + cls_scores: Sequence[Tensor], + bbox_preds: Sequence[Tensor], + objectnesses: Sequence[Tensor], + coeff_preds: Sequence[Tensor], + proto_preds: Tensor, + batch_gt_instances: Sequence[InstanceData], + batch_gt_masks: Sequence[Tensor], + batch_img_metas: Sequence[dict], + batch_gt_instances_ignore: OptInstanceList = None) -> dict: + """Calculate the loss based on the features extracted by the detection + head. + + Args: + cls_scores (Sequence[Tensor]): Box scores for each scale level, + each is a 4D-tensor, the channel number is + num_priors * num_classes. + bbox_preds (Sequence[Tensor]): Box energies / deltas for each scale + level, each is a 4D-tensor, the channel number is + num_priors * 4. + objectnesses (Sequence[Tensor]): Score factor for + all scale level, each is a 4D-tensor, has shape + (batch_size, 1, H, W). + coeff_preds (Sequence[Tensor]): Mask coefficient for each scale + level, each is a 4D-tensor, the channel number is + num_priors * mask_channels. 
+ proto_preds (Tensor): Mask prototype features extracted from the + mask head, has shape (batch_size, mask_channels, H, W). + batch_gt_instances (Sequence[InstanceData]): Batch of + gt_instance. It usually includes ``bboxes`` and ``labels`` + attributes. + batch_gt_masks (Sequence[Tensor]): Batch of gt_mask. + batch_img_metas (Sequence[dict]): Meta information of each image, + e.g., image size, scaling factor, etc. + batch_gt_instances_ignore (list[:obj:`InstanceData`], optional): + Batch of gt_instances_ignore. It includes ``bboxes`` attribute + data that is ignored during training and testing. + Defaults to None. + Returns: + dict[str, Tensor]: A dictionary of losses. + """ + # 1. Convert gt to norm format + batch_targets_normed = self._convert_gt_to_norm_format( + batch_gt_instances, batch_img_metas) + + device = cls_scores[0].device + loss_cls = torch.zeros(1, device=device) + loss_box = torch.zeros(1, device=device) + loss_obj = torch.zeros(1, device=device) + loss_mask = torch.zeros(1, device=device) + scaled_factor = torch.ones(8, device=device) + + for i in range(self.num_levels): + batch_size, _, h, w = bbox_preds[i].shape + target_obj = torch.zeros_like(objectnesses[i]) + + # empty gt bboxes + if batch_targets_normed.shape[1] == 0: + loss_box += bbox_preds[i].sum() * 0 + loss_cls += cls_scores[i].sum() * 0 + loss_obj += self.loss_obj( + objectnesses[i], target_obj) * self.obj_level_weights[i] + loss_mask += coeff_preds[i].sum() * 0 + continue + + priors_base_sizes_i = self.priors_base_sizes[i] + # feature map scale whwh + scaled_factor[2:6] = torch.tensor( + bbox_preds[i].shape)[[3, 2, 3, 2]] + # Scale batch_targets from range 0-1 to range 0-features_maps size. + # (num_base_priors, num_bboxes, 8) + batch_targets_scaled = batch_targets_normed * scaled_factor + + # 2. Shape match + wh_ratio = batch_targets_scaled[..., + 4:6] / priors_base_sizes_i[:, None] + match_inds = torch.max( + wh_ratio, 1 / wh_ratio).max(2)[0] < self.prior_match_thr + batch_targets_scaled = batch_targets_scaled[match_inds] + + # no gt bbox matches anchor + if batch_targets_scaled.shape[0] == 0: + loss_box += bbox_preds[i].sum() * 0 + loss_cls += cls_scores[i].sum() * 0 + loss_obj += self.loss_obj( + objectnesses[i], target_obj) * self.obj_level_weights[i] + loss_mask += coeff_preds[i].sum() * 0 + continue + + # 3. Positive samples with additional neighbors + + # check the left, up, right, bottom sides of the + # targets grid, and determine whether assigned + # them as positive samples as well. 
+ batch_targets_cxcy = batch_targets_scaled[:, 2:4] + grid_xy = scaled_factor[[2, 3]] - batch_targets_cxcy + left, up = ((batch_targets_cxcy % 1 < self.near_neighbor_thr) & + (batch_targets_cxcy > 1)).T + right, bottom = ((grid_xy % 1 < self.near_neighbor_thr) & + (grid_xy > 1)).T + offset_inds = torch.stack( + (torch.ones_like(left), left, up, right, bottom)) + + batch_targets_scaled = batch_targets_scaled.repeat( + (5, 1, 1))[offset_inds] + retained_offsets = self.grid_offset.repeat(1, offset_inds.shape[1], + 1)[offset_inds] + + # prepare pred results and positive sample indexes to + # calculate class loss and bbox loss + _chunk_targets = batch_targets_scaled.chunk(4, 1) + img_class_inds, grid_xy, grid_wh,\ + priors_targets_inds = _chunk_targets + (priors_inds, targets_inds) = priors_targets_inds.long().T + (img_inds, class_inds) = img_class_inds.long().T + + grid_xy_long = (grid_xy - + retained_offsets * self.near_neighbor_thr).long() + grid_x_inds, grid_y_inds = grid_xy_long.T + bboxes_targets = torch.cat((grid_xy - grid_xy_long, grid_wh), 1) + + # 4. Calculate loss + # bbox loss + retained_bbox_pred = bbox_preds[i].reshape( + batch_size, self.num_base_priors, -1, h, + w)[img_inds, priors_inds, :, grid_y_inds, grid_x_inds] + priors_base_sizes_i = priors_base_sizes_i[priors_inds] + decoded_bbox_pred = self._decode_bbox_to_xywh( + retained_bbox_pred, priors_base_sizes_i) + loss_box_i, iou = self.loss_bbox(decoded_bbox_pred, bboxes_targets) + loss_box += loss_box_i + + # obj loss + iou = iou.detach().clamp(0) + target_obj[img_inds, priors_inds, grid_y_inds, + grid_x_inds] = iou.type(target_obj.dtype) + loss_obj += self.loss_obj(objectnesses[i], + target_obj) * self.obj_level_weights[i] + + # cls loss + if self.num_classes > 1: + pred_cls_scores = cls_scores[i].reshape( + batch_size, self.num_base_priors, -1, h, + w)[img_inds, priors_inds, :, grid_y_inds, grid_x_inds] + + target_class = torch.full_like(pred_cls_scores, 0.) + target_class[range(batch_targets_scaled.shape[0]), + class_inds] = 1.
+ loss_cls += self.loss_cls(pred_cls_scores, target_class) + else: + loss_cls += cls_scores[i].sum() * 0 + + # mask regression + retained_coeff_preds = coeff_preds[i].reshape( + batch_size, self.num_base_priors, -1, h, + w)[img_inds, priors_inds, :, grid_y_inds, grid_x_inds] + + _, c, mask_h, mask_w = proto_preds.shape + if batch_gt_masks.shape[-2:] != (mask_h, mask_w): + batch_gt_masks = F.interpolate( + batch_gt_masks[None], (mask_h, mask_w), mode='nearest')[0] + + xywh_normed = batch_targets_scaled[:, 2:6] / scaled_factor[2:6] + area_normed = xywh_normed[:, 2:].prod(1) + xywh_scaled = xywh_normed * torch.tensor( + proto_preds.shape, device=device)[[3, 2, 3, 2]] + xyxy_scaled = bbox_cxcywh_to_xyxy(xywh_scaled) + + for bs in range(batch_size): + match_inds = (img_inds == bs) # matching index + if not match_inds.any(): + continue + + if self.mask_overlap: + mask_gti = torch.where( + batch_gt_masks[bs][None] == + targets_inds[match_inds].view(-1, 1, 1), 1.0, 0.0) + else: + mask_gti = batch_gt_masks[targets_inds][match_inds] + + mask_preds = (retained_coeff_preds[match_inds] + @ proto_preds[bs].view(c, -1)).view( + -1, mask_h, mask_w) + loss_mask_full = self.loss_mask(mask_preds, mask_gti) + loss_mask += ( + self.crop_mask(loss_mask_full[None], + xyxy_scaled[match_inds]).mean(dim=(2, 3)) / + area_normed[match_inds]).mean() + + _, world_size = get_dist_info() + return dict( + loss_cls=loss_cls * batch_size * world_size, + loss_obj=loss_obj * batch_size * world_size, + loss_bbox=loss_box * batch_size * world_size, + loss_mask=loss_mask * self.loss_mask_weight * world_size) + + def _convert_gt_to_norm_format(self, + batch_gt_instances: Sequence[InstanceData], + batch_img_metas: Sequence[dict]) -> Tensor: + """Add target_inds for instance segmentation.""" + batch_targets_normed = super()._convert_gt_to_norm_format( + batch_gt_instances, batch_img_metas) + + if self.mask_overlap: + batch_size = len(batch_img_metas) + target_inds = [] + for i in range(batch_size): + # find number of targets of each image + num_gts = (batch_gt_instances[:, 0] == i).sum() + # (num_anchor, num_gts) + target_inds.append( + torch.arange(num_gts, device=batch_gt_instances.device). + float().view(1, num_gts).repeat(self.num_base_priors, 1) + + 1) + target_inds = torch.cat(target_inds, 1) + else: + num_gts = batch_gt_instances.shape[0] + target_inds = torch.arange( + num_gts, device=batch_gt_instances.device).float().view( + 1, num_gts).repeat(self.num_base_priors, 1) + batch_targets_normed = torch.cat( + [batch_targets_normed, target_inds[..., None]], 2) + return batch_targets_normed + + def predict_by_feat(self, + cls_scores: List[Tensor], + bbox_preds: List[Tensor], + objectnesses: Optional[List[Tensor]] = None, + coeff_preds: Optional[List[Tensor]] = None, + proto_preds: Optional[Tensor] = None, + batch_img_metas: Optional[List[dict]] = None, + cfg: Optional[ConfigDict] = None, + rescale: bool = True, + with_nms: bool = True) -> List[InstanceData]: + """Transform a batch of output features extracted from the head into + bbox results. + Note: When score_factors is not None, the cls_scores are + usually multiplied by it then obtain the real score used in NMS. + Args: + cls_scores (list[Tensor]): Classification scores for all + scale levels, each is a 4D-tensor, has shape + (batch_size, num_priors * num_classes, H, W). + bbox_preds (list[Tensor]): Box energies / deltas for all + scale levels, each is a 4D-tensor, has shape + (batch_size, num_priors * 4, H, W). 
+ objectnesses (list[Tensor], Optional): Score factor for + all scale level, each is a 4D-tensor, has shape + (batch_size, 1, H, W). + coeff_preds (list[Tensor]): Mask coefficients predictions + for all scale levels, each is a 4D-tensor, has shape + (batch_size, mask_channels, H, W). + proto_preds (Tensor): Mask prototype features extracted from the + mask head, has shape (batch_size, mask_channels, H, W). + batch_img_metas (list[dict], Optional): Batch image meta info. + Defaults to None. + cfg (ConfigDict, optional): Test / postprocessing + configuration, if None, test_cfg would be used. + Defaults to None. + rescale (bool): If True, return boxes in original image space. + Defaults to False. + with_nms (bool): If True, do nms before return boxes. + Defaults to True. + Returns: + list[:obj:`InstanceData`]: Object detection and instance + segmentation results of each image after the post process. + Each item usually contains following keys. + - scores (Tensor): Classification scores, has a shape + (num_instance, ) + - labels (Tensor): Labels of bboxes, has a shape + (num_instances, ). + - bboxes (Tensor): Has a shape (num_instances, 4), + the last dimension 4 arrange as (x1, y1, x2, y2). + - masks (Tensor): Has a shape (num_instances, h, w). + """ + assert len(cls_scores) == len(bbox_preds) == len(coeff_preds) + if objectnesses is None: + with_objectnesses = False + else: + with_objectnesses = True + assert len(cls_scores) == len(objectnesses) + + cfg = self.test_cfg if cfg is None else cfg + cfg = copy.deepcopy(cfg) + + multi_label = cfg.multi_label + multi_label &= self.num_classes > 1 + cfg.multi_label = multi_label + + num_imgs = len(batch_img_metas) + featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] + + # If the shape does not change, use the previous mlvl_priors + if featmap_sizes != self.featmap_sizes: + self.mlvl_priors = self.prior_generator.grid_priors( + featmap_sizes, + dtype=cls_scores[0].dtype, + device=cls_scores[0].device) + self.featmap_sizes = featmap_sizes + flatten_priors = torch.cat(self.mlvl_priors) + + mlvl_strides = [ + flatten_priors.new_full( + (featmap_size.numel() * self.num_base_priors, ), stride) for + featmap_size, stride in zip(featmap_sizes, self.featmap_strides) + ] + flatten_stride = torch.cat(mlvl_strides) + + # flatten cls_scores, bbox_preds and objectness + flatten_cls_scores = [ + cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, + self.num_classes) + for cls_score in cls_scores + ] + flatten_bbox_preds = [ + bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) + for bbox_pred in bbox_preds + ] + flatten_coeff_preds = [ + coeff_pred.permute(0, 2, 3, + 1).reshape(num_imgs, -1, + self.head_module.mask_channels) + for coeff_pred in coeff_preds + ] + + flatten_cls_scores = torch.cat(flatten_cls_scores, dim=1).sigmoid() + flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1) + flatten_decoded_bboxes = self.bbox_coder.decode( + flatten_priors.unsqueeze(0), flatten_bbox_preds, flatten_stride) + + flatten_coeff_preds = torch.cat(flatten_coeff_preds, dim=1) + + if with_objectnesses: + flatten_objectness = [ + objectness.permute(0, 2, 3, 1).reshape(num_imgs, -1) + for objectness in objectnesses + ] + flatten_objectness = torch.cat(flatten_objectness, dim=1).sigmoid() + else: + flatten_objectness = [None for _ in range(len(featmap_sizes))] + + results_list = [] + for (bboxes, scores, objectness, coeffs, mask_proto, + img_meta) in zip(flatten_decoded_bboxes, flatten_cls_scores, + flatten_objectness, flatten_coeff_preds, + proto_preds, 
batch_img_metas): + ori_shape = img_meta['ori_shape'] + batch_input_shape = img_meta['batch_input_shape'] + input_shape_h, input_shape_w = batch_input_shape + if 'pad_param' in img_meta: + pad_param = img_meta['pad_param'] + input_shape_withoutpad = (input_shape_h - pad_param[0] - + pad_param[1], input_shape_w - + pad_param[2] - pad_param[3]) + else: + pad_param = None + input_shape_withoutpad = batch_input_shape + scale_factor = (input_shape_withoutpad[1] / ori_shape[1], + input_shape_withoutpad[0] / ori_shape[0]) + + score_thr = cfg.get('score_thr', -1) + # yolox_style does not require the following operations + if objectness is not None and score_thr > 0 and not cfg.get( + 'yolox_style', False): + conf_inds = objectness > score_thr + bboxes = bboxes[conf_inds, :] + scores = scores[conf_inds, :] + objectness = objectness[conf_inds] + coeffs = coeffs[conf_inds] + + if objectness is not None: + # conf = obj_conf * cls_conf + scores *= objectness[:, None] + # NOTE: Important + coeffs *= objectness[:, None] + + if scores.shape[0] == 0: + empty_results = InstanceData() + empty_results.bboxes = bboxes + empty_results.scores = scores[:, 0] + empty_results.labels = scores[:, 0].int() + h, w = ori_shape[:2] if rescale else img_meta['img_shape'][:2] + empty_results.masks = torch.zeros( + size=(0, h, w), dtype=torch.bool, device=bboxes.device) + results_list.append(empty_results) + continue + + nms_pre = cfg.get('nms_pre', 100000) + if cfg.multi_label is False: + scores, labels = scores.max(1, keepdim=True) + scores, _, keep_idxs, results = filter_scores_and_topk( + scores, + score_thr, + nms_pre, + results=dict(labels=labels[:, 0], coeffs=coeffs)) + labels = results['labels'] + coeffs = results['coeffs'] + else: + out = filter_scores_and_topk( + scores, score_thr, nms_pre, results=dict(coeffs=coeffs)) + scores, labels, keep_idxs, filtered_results = out + coeffs = filtered_results['coeffs'] + + results = InstanceData( + scores=scores, + labels=labels, + bboxes=bboxes[keep_idxs], + coeffs=coeffs) + + if cfg.get('yolox_style', False): + # do not need max_per_img + cfg.max_per_img = len(results) + + results = self._bbox_post_process( + results=results, + cfg=cfg, + rescale=False, + with_nms=with_nms, + img_meta=img_meta) + + if len(results.bboxes): + masks = self.process_mask(mask_proto, results.coeffs, + results.bboxes, + (input_shape_h, input_shape_w), True) + if rescale: + if pad_param is not None: + # bbox minus pad param + top_pad, _, left_pad, _ = pad_param + results.bboxes -= results.bboxes.new_tensor( + [left_pad, top_pad, left_pad, top_pad]) + # mask crop pad param + top, left = int(top_pad), int(left_pad) + bottom, right = int(input_shape_h - + top_pad), int(input_shape_w - + left_pad) + masks = masks[:, :, top:bottom, left:right] + results.bboxes /= results.bboxes.new_tensor( + scale_factor).repeat((1, 2)) + + fast_test = cfg.get('fast_test', False) + if fast_test: + masks = F.interpolate( + masks, + size=ori_shape, + mode='bilinear', + align_corners=False) + masks = masks.squeeze(0) + masks = masks > cfg.mask_thr_binary + else: + masks.gt_(cfg.mask_thr_binary) + masks = torch.as_tensor(masks, dtype=torch.uint8) + masks = masks[0].permute(1, 2, + 0).contiguous().cpu().numpy() + masks = mmcv.imresize(masks, + (ori_shape[1], ori_shape[0])) + + if len(masks.shape) == 2: + masks = masks[:, :, None] + masks = torch.from_numpy(masks).permute(2, 0, 1) + + results.bboxes[:, 0::2].clamp_(0, ori_shape[1]) + results.bboxes[:, 1::2].clamp_(0, ori_shape[0]) + + results.masks = masks.bool() + 
results_list.append(results) + else: + h, w = ori_shape[:2] if rescale else img_meta['img_shape'][:2] + results.masks = torch.zeros( + size=(0, h, w), dtype=torch.bool, device=bboxes.device) + results_list.append(results) + return results_list + + def process_mask(self, + mask_proto: Tensor, + mask_coeff_pred: Tensor, + bboxes: Tensor, + shape: Tuple[int, int], + upsample: bool = False) -> Tensor: + """Generate mask logits results. + + Args: + mask_proto (Tensor): Mask prototype features. + Has shape (num_instance, mask_channels). + mask_coeff_pred (Tensor): Mask coefficients prediction for + single image. Has shape (mask_channels, H, W) + bboxes (Tensor): Tensor of the bbox. Has shape (num_instance, 4). + shape (Tuple): Batch input shape of image. + upsample (bool): Whether upsample masks results to batch input + shape. Default to False. + Return: + Tensor: Instance segmentation masks for each instance. + Has shape (num_instance, H, W). + """ + c, mh, mw = mask_proto.shape # CHW + masks = ( + mask_coeff_pred @ mask_proto.float().view(c, -1)).sigmoid().view( + -1, mh, mw)[None] + if upsample: + masks = F.interpolate( + masks, shape, mode='bilinear', align_corners=False) # 1CHW + masks = self.crop_mask(masks, bboxes) + return masks + + def crop_mask(self, masks: Tensor, boxes: Tensor) -> Tensor: + """Crop mask by the bounding box. + + Args: + masks (Tensor): Predicted mask results. Has shape + (1, num_instance, H, W). + boxes (Tensor): Tensor of the bbox. Has shape (num_instance, 4). + Returns: + (torch.Tensor): The masks are being cropped to the bounding box. + """ + _, n, h, w = masks.shape + x1, y1, x2, y2 = torch.chunk(boxes[:, :, None], 4, 1) + r = torch.arange( + w, device=masks.device, + dtype=x1.dtype)[None, None, None, :] # rows shape(1, 1, w, 1) + c = torch.arange( + h, device=masks.device, + dtype=x1.dtype)[None, None, :, None] # cols shape(1, h, 1, 1) + + return masks * ((r >= x1) * (r < x2) * (c >= y1) * (c < y2)) diff --git a/tools/model_converters/yolov5_to_mmyolo.py b/tools/model_converters/yolov5_to_mmyolo.py index c1d4e41d4..a4e62a2f7 100644 --- a/tools/model_converters/yolov5_to_mmyolo.py +++ b/tools/model_converters/yolov5_to_mmyolo.py @@ -25,6 +25,7 @@ 'model.21': 'neck.downsample_layers.1', 'model.23': 'neck.bottom_up_layers.1', 'model.24.m': 'bbox_head.head_module.convs_pred', + 'model.24.proto': 'bbox_head.head_module.proto_preds', } convert_dict_p6 = { @@ -54,6 +55,7 @@ 'model.30': 'neck.downsample_layers.2', 'model.32': 'neck.bottom_up_layers.2', 'model.33.m': 'bbox_head.head_module.convs_pred', + 'model.33.proto': 'bbox_head.head_module.proto_preds', } @@ -94,6 +96,10 @@ def convert(src, dst): if '.m.' in new_key: new_key = new_key.replace('.m.', '.blocks.') new_key = new_key.replace('.cv', '.conv') + elif 'bbox_head.head_module.proto_preds.cv' in new_key: + new_key = new_key.replace( + 'bbox_head.head_module.proto_preds.cv', + 'bbox_head.head_module.proto_preds.conv') else: new_key = new_key.replace('.cv1', '.main_conv') new_key = new_key.replace('.cv2', '.short_conv') diff --git a/tools/model_converters/yolov8_to_mmyolo.py b/tools/model_converters/yolov8_to_mmyolo.py index df0c514b0..4ed64f249 100644 --- a/tools/model_converters/yolov8_to_mmyolo.py +++ b/tools/model_converters/yolov8_to_mmyolo.py @@ -53,6 +53,19 @@ def convert(src, dst): if '.m.' 
in new_key: new_key = new_key.replace('.m.', '.blocks.') new_key = new_key.replace('.cv', '.conv') + elif 'bbox_head.head_module.proto.cv' in new_key: + new_key = new_key.replace( + 'bbox_head.head_module.proto.cv', + 'bbox_head.head_module.proto_preds.conv') + elif 'bbox_head.head_module.proto' in new_key: + new_key = new_key.replace('bbox_head.head_module.proto', + 'bbox_head.head_module.proto_preds') + elif 'bbox_head.head_module.cv4.' in new_key: + new_key = new_key.replace( + 'bbox_head.head_module.cv4', + 'bbox_head.head_module.mask_coeff_preds') + new_key = new_key.replace('.2.weight', '.2.conv.weight') + new_key = new_key.replace('.2.bias', '.2.conv.bias') elif 'bbox_head.head_module' in new_key: new_key = new_key.replace('.cv2', '.reg_preds') new_key = new_key.replace('.cv3', '.cls_preds')
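The converter changes above boil down to prefix-based key remapping over the official checkpoint's state dict: each official prefix (e.g. `model.24.proto`) is swapped for its MMYOLO counterpart, then a few suffix rewrites align the conv naming. A minimal, self-contained sketch of the idea, assuming a plain PyTorch state dict; the mapping shown is only a small excerpt of the tables above, and the checkpoint paths in the usage comment are hypothetical:

```python
import torch

# Illustrative excerpt of the prefix table; the real converter defines the
# full `convert_dict_p5` / `convert_dict_p6` mappings shown above.
CONVERT_DICT = {
    'model.24.m': 'bbox_head.head_module.convs_pred',
    'model.24.proto': 'bbox_head.head_module.proto_preds',
}


def remap_state_dict(src: dict) -> dict:
    """Rename official YOLOv5 checkpoint keys to MMYOLO names."""
    dst = {}
    for key, weight in src.items():
        for prefix, target in CONVERT_DICT.items():
            if not key.startswith(prefix):
                continue
            new_key = target + key[len(prefix):]
            # YOLOv5 names the convs inside `proto` as `cv1`/`cv2`/...,
            # while MMYOLO wraps them in ConvModule children named `conv`
            if 'proto_preds' in new_key:
                new_key = new_key.replace('.cv', '.conv')
            dst[new_key] = weight
            break
    return dst


# Usage sketch (paths are hypothetical, not shipped by this PR):
# ckpt = torch.load('yolov5s-seg.pt', map_location='cpu')
# state = ckpt['model'].state_dict()
# torch.save({'state_dict': remap_state_dict(state)}, 'mmyolo_yolov5s-seg.pth')
```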
[Feature] Support YOLOv8 Ins Segmentation Inference

## Motivation

Support instance segmentation inference for YOLOv8.

## Modification

Support YOLOv8 instance segmentation inference. The bbox and mask mAP of the official yolov8-s model can be reproduced:

![image](https://user-images.githubusercontent.com/40284075/223897540-f0bddcb6-ebf5-44e7-8cb6-876fe3b8df9b.png)

Official mAP results:

![image](https://user-images.githubusercontent.com/40284075/223652928-fb1fc63f-0028-484f-b7fb-aebcdd13ba69.png)
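For reference, a sketch of how such a converted checkpoint would typically be run through the high-level mmdet inference API that MMYOLO reuses. Both file paths below are placeholders (this PR does not ship them), and depending on the installed versions the module registration step may differ:

```python
# Hedged inference sketch: assumes a YOLOv8 ins-seg config and a converted
# checkpoint exist locally; the two paths are hypothetical placeholders.
from mmdet.apis import inference_detector, init_detector

from mmyolo.utils import register_all_modules

register_all_modules()  # make MMYOLO models/transforms visible to mmdet

model = init_detector(
    'configs/yolov8/yolov8_ins_s_coco.py',     # hypothetical config path
    'checkpoints/yolov8_s_ins_converted.pth',  # converted official weights
    device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')

# pred_instances carries bboxes, labels, scores and boolean masks
print(result.pred_instances.bboxes.shape)
print(result.pred_instances.masks.shape)
```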
2023-04-18T11:08:42
0.0
[]
[]
open-mmlab/mmyolo
open-mmlab__mmyolo-694
600343eb083e148e9c8da1301a1559ef105d2e0f
diff --git a/.dev_scripts/gather_models.py b/.dev_scripts/gather_models.py index ba5039c22..f05e2b5b3 100644 --- a/.dev_scripts/gather_models.py +++ b/.dev_scripts/gather_models.py @@ -108,6 +108,7 @@ def get_dataset_name(config): name_map = dict( CityscapesDataset='Cityscapes', CocoDataset='COCO', + PoseCocoDataset='COCO Person', YOLOv5CocoDataset='COCO', CocoPanopticDataset='COCO', YOLOv5DOTADataset='DOTA 1.0', diff --git a/configs/_base_/pose/coco.py b/configs/_base_/pose/coco.py new file mode 100644 index 000000000..865a95bc0 --- /dev/null +++ b/configs/_base_/pose/coco.py @@ -0,0 +1,181 @@ +dataset_info = dict( + dataset_name='coco', + paper_info=dict( + author='Lin, Tsung-Yi and Maire, Michael and ' + 'Belongie, Serge and Hays, James and ' + 'Perona, Pietro and Ramanan, Deva and ' + r'Doll{\'a}r, Piotr and Zitnick, C Lawrence', + title='Microsoft coco: Common objects in context', + container='European conference on computer vision', + year='2014', + homepage='http://cocodataset.org/', + ), + keypoint_info={ + 0: + dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''), + 1: + dict( + name='left_eye', + id=1, + color=[51, 153, 255], + type='upper', + swap='right_eye'), + 2: + dict( + name='right_eye', + id=2, + color=[51, 153, 255], + type='upper', + swap='left_eye'), + 3: + dict( + name='left_ear', + id=3, + color=[51, 153, 255], + type='upper', + swap='right_ear'), + 4: + dict( + name='right_ear', + id=4, + color=[51, 153, 255], + type='upper', + swap='left_ear'), + 5: + dict( + name='left_shoulder', + id=5, + color=[0, 255, 0], + type='upper', + swap='right_shoulder'), + 6: + dict( + name='right_shoulder', + id=6, + color=[255, 128, 0], + type='upper', + swap='left_shoulder'), + 7: + dict( + name='left_elbow', + id=7, + color=[0, 255, 0], + type='upper', + swap='right_elbow'), + 8: + dict( + name='right_elbow', + id=8, + color=[255, 128, 0], + type='upper', + swap='left_elbow'), + 9: + dict( + name='left_wrist', + id=9, + color=[0, 255, 0], + type='upper', + swap='right_wrist'), + 10: + dict( + name='right_wrist', + id=10, + color=[255, 128, 0], + type='upper', + swap='left_wrist'), + 11: + dict( + name='left_hip', + id=11, + color=[0, 255, 0], + type='lower', + swap='right_hip'), + 12: + dict( + name='right_hip', + id=12, + color=[255, 128, 0], + type='lower', + swap='left_hip'), + 13: + dict( + name='left_knee', + id=13, + color=[0, 255, 0], + type='lower', + swap='right_knee'), + 14: + dict( + name='right_knee', + id=14, + color=[255, 128, 0], + type='lower', + swap='left_knee'), + 15: + dict( + name='left_ankle', + id=15, + color=[0, 255, 0], + type='lower', + swap='right_ankle'), + 16: + dict( + name='right_ankle', + id=16, + color=[255, 128, 0], + type='lower', + swap='left_ankle') + }, + skeleton_info={ + 0: + dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]), + 1: + dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]), + 2: + dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]), + 3: + dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]), + 4: + dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]), + 5: + dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]), + 6: + dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]), + 7: + dict( + link=('left_shoulder', 'right_shoulder'), + id=7, + color=[51, 153, 255]), + 8: + dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]), + 9: + dict( + link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]), + 
10: + dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]), + 11: + dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]), + 12: + dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]), + 13: + dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]), + 14: + dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]), + 15: + dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]), + 16: + dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]), + 17: + dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]), + 18: + dict( + link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255]) + }, + joint_weights=[ + 1., 1., 1., 1., 1., 1., 1., 1.2, 1.2, 1.5, 1.5, 1., 1., 1.2, 1.2, 1.5, + 1.5 + ], + sigmas=[ + 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062, + 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089 + ]) diff --git a/configs/yolox/README.md b/configs/yolox/README.md index e646dd20e..7d5dc683c 100644 --- a/configs/yolox/README.md +++ b/configs/yolox/README.md @@ -45,6 +45,35 @@ The modified training parameters are as follows: 1. The test score threshold is 0.001. 2. Due to the need for pre-training weights, we cannot reproduce the performance of the `yolox-nano` model. Please refer to https://github.com/Megvii-BaseDetection/YOLOX/issues/674 for more information. +## YOLOX-Pose + +Based on [MMPose](https://github.com/open-mmlab/mmpose/blob/main/projects/yolox-pose/README.md), we have implemented a YOLOX-based human pose estimator, utilizing the approach outlined in **YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss (CVPRW 2022)**. This pose estimator is lightweight and quick, making it well-suited for crowded scenes. 
+ +<div align=center> +<img src="https://user-images.githubusercontent.com/26127467/226655503-3cee746e-6e42-40be-82ae-6e7cae2a4c7e.jpg"/> +</div> + +### Results + +| Backbone | Size | Batch Size | AMP | RTMDet-Hyp | Mem (GB) | AP | Config | Download | +| :--------: | :--: | :--------: | :-: | :--------: | :------: | :--: | :------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | +| YOLOX-tiny | 416 | 8xb32 | Yes | Yes | 5.3 | 52.8 | [config](./pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco_20230427_080351-2117af67.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco_20230427_080351.log.json) | +| YOLOX-s | 640 | 8xb32 | Yes | Yes | 10.7 | 63.7 | [config](./pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco_20230427_005150-e87d843a.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco_20230427_005150.log.json) | +| YOLOX-m | 640 | 8xb32 | Yes | Yes | 19.2 | 69.3 | [config](./pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco_20230427_094024-bbeacc1c.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco_20230427_094024.log.json) | +| YOLOX-l | 640 | 8xb32 | Yes | Yes | 30.3 | 71.1 | [config](./pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco.py) | [model](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco_20230427_041140-82d65ac8.pth) \| [log](https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco_20230427_041140.log.json) | + +**Note** + +1. The performance is unstable and may fluctuate and the highest performance weight in `COCO` training may not be the last epoch. The performance shown above is the best model. 
+ +### Installation + +Install MMPose + +``` +mim install -r requirements/mmpose.txt +``` + ## Citation ```latex diff --git a/configs/yolox/metafile.yml b/configs/yolox/metafile.yml index 0926519ec..78ede704a 100644 --- a/configs/yolox/metafile.yml +++ b/configs/yolox/metafile.yml @@ -116,3 +116,51 @@ Models: Metrics: box AP: 47.5 Weights: https://download.openmmlab.com/mmyolo/v0/yolox/yolox_m_fast_8xb32-300e-rtmdet-hyp_coco/yolox_m_fast_8xb32-300e-rtmdet-hyp_coco_20230210_144328-e657e182.pth + - Name: yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco + In Collection: YOLOX + Config: yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco.py + Metadata: + Training Memory (GB): 5.3 + Epochs: 300 + Results: + - Task: Human Pose Estimation + Dataset: COCO + Metrics: + AP: 52.8 + Weights: https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco_20230427_080351-2117af67.pth + - Name: yolox-pose_s_8xb32-300e-rtmdet-hyp_coco + In Collection: YOLOX + Config: yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py + Metadata: + Training Memory (GB): 10.7 + Epochs: 300 + Results: + - Task: Human Pose Estimation + Dataset: COCO + Metrics: + AP: 63.7 + Weights: https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco_20230427_005150-e87d843a.pth + - Name: yolox-pose_m_8xb32-300e-rtmdet-hyp_coco + In Collection: YOLOX + Config: yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py + Metadata: + Training Memory (GB): 19.2 + Epochs: 300 + Results: + - Task: Human Pose Estimation + Dataset: COCO + Metrics: + AP: 69.3 + Weights: https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco_20230427_094024-bbeacc1c.pth + - Name: yolox-pose_l_8xb32-300e-rtmdet-hyp_coco + In Collection: YOLOX + Config: yolox-pose_l_8xb32-300e-rtmdet-hyp_coco.py + Metadata: + Training Memory (GB): 30.3 + Epochs: 300 + Results: + - Task: Human Pose Estimation + Dataset: COCO + Metrics: + AP: 71.1 + Weights: https://download.openmmlab.com/mmyolo/v0/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco_20230427_041140-82d65ac8.pth diff --git a/configs/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco.py b/configs/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco.py new file mode 100644 index 000000000..96de5e981 --- /dev/null +++ b/configs/yolox/pose/yolox-pose_l_8xb32-300e-rtmdet-hyp_coco.py @@ -0,0 +1,14 @@ +_base_ = ['./yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py'] + +load_from = 'https://download.openmmlab.com/mmyolo/v0/yolox/yolox_l_fast_8xb8-300e_coco/yolox_l_fast_8xb8-300e_coco_20230213_160715-c731eb1c.pth' # noqa + +# ========================modified parameters====================== +deepen_factor = 1.0 +widen_factor = 1.0 + +# =======================Unmodified in most cases================== +# model settings +model = dict( + backbone=dict(deepen_factor=deepen_factor, widen_factor=widen_factor), + neck=dict(deepen_factor=deepen_factor, widen_factor=widen_factor), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) diff --git a/configs/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py b/configs/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py new file mode 100644 index 000000000..f78d6a3a2 --- /dev/null +++ b/configs/yolox/pose/yolox-pose_m_8xb32-300e-rtmdet-hyp_coco.py @@ -0,0 +1,14 @@ +_base_ = ['./yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py'] + +load_from = 
'https://download.openmmlab.com/mmyolo/v0/yolox/yolox_m_fast_8xb32-300e-rtmdet-hyp_coco/yolox_m_fast_8xb32-300e-rtmdet-hyp_coco_20230210_144328-e657e182.pth' # noqa + +# ========================modified parameters====================== +deepen_factor = 0.67 +widen_factor = 0.75 + +# =======================Unmodified in most cases================== +# model settings +model = dict( + backbone=dict(deepen_factor=deepen_factor, widen_factor=widen_factor), + neck=dict(deepen_factor=deepen_factor, widen_factor=widen_factor), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) diff --git a/configs/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py b/configs/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py new file mode 100644 index 000000000..8fa2172c9 --- /dev/null +++ b/configs/yolox/pose/yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py @@ -0,0 +1,136 @@ +_base_ = '../yolox_s_fast_8xb32-300e-rtmdet-hyp_coco.py' + +load_from = 'https://download.openmmlab.com/mmyolo/v0/yolox/yolox_s_fast_8xb32-300e-rtmdet-hyp_coco/yolox_s_fast_8xb32-300e-rtmdet-hyp_coco_20230210_134645-3a8dfbd7.pth' # noqa + +num_keypoints = 17 +scaling_ratio_range = (0.75, 1.0) +mixup_ratio_range = (0.8, 1.6) +num_last_epochs = 20 + +# model settings +model = dict( + bbox_head=dict( + type='YOLOXPoseHead', + head_module=dict( + type='YOLOXPoseHeadModule', + num_classes=1, + num_keypoints=num_keypoints, + ), + loss_pose=dict( + type='OksLoss', + metainfo='configs/_base_/pose/coco.py', + loss_weight=30.0)), + train_cfg=dict( + assigner=dict( + type='PoseSimOTAAssigner', + center_radius=2.5, + oks_weight=3.0, + iou_calculator=dict(type='mmdet.BboxOverlaps2D'), + oks_calculator=dict( + type='OksLoss', metainfo='configs/_base_/pose/coco.py'))), + test_cfg=dict(score_thr=0.01)) + +# pipelines +pre_transform = [ + dict(type='LoadImageFromFile', backend_args=_base_.backend_args), + dict(type='LoadAnnotations', with_keypoints=True) +] + +img_scale = _base_.img_scale + +train_pipeline_stage1 = [ + *pre_transform, + dict( + type='Mosaic', + img_scale=img_scale, + pad_val=114.0, + pre_transform=pre_transform), + dict( + type='RandomAffine', + scaling_ratio_range=scaling_ratio_range, + border=(-img_scale[0] // 2, -img_scale[1] // 2)), + dict( + type='YOLOXMixUp', + img_scale=img_scale, + ratio_range=mixup_ratio_range, + pad_val=114.0, + pre_transform=pre_transform), + dict(type='mmdet.YOLOXHSVRandomAug'), + dict(type='RandomFlip', prob=0.5), + dict(type='FilterAnnotations', by_keypoints=True, keep_empty=False), + dict( + type='PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape')) +] + +train_pipeline_stage2 = [ + *pre_transform, + dict(type='Resize', scale=img_scale, keep_ratio=True), + dict( + type='mmdet.Pad', + pad_to_square=True, + pad_val=dict(img=(114.0, 114.0, 114.0))), + dict(type='mmdet.YOLOXHSVRandomAug'), + dict(type='RandomFlip', prob=0.5), + dict(type='FilterAnnotations', by_keypoints=True, keep_empty=False), + dict(type='PackDetInputs') +] + +test_pipeline = [ + *pre_transform, + dict(type='Resize', scale=img_scale, keep_ratio=True), + dict( + type='mmdet.Pad', + pad_to_square=True, + pad_val=dict(img=(114.0, 114.0, 114.0))), + dict( + type='PackDetInputs', + meta_keys=('id', 'img_id', 'img_path', 'ori_shape', 'img_shape', + 'scale_factor', 'flip_indices')) +] + +# dataset settings +dataset_type = 'PoseCocoDataset' + +train_dataloader = dict( + dataset=dict( + type=dataset_type, + data_mode='bottomup', + ann_file='annotations/person_keypoints_train2017.json', + 
pipeline=train_pipeline_stage1)) + +val_dataloader = dict( + dataset=dict( + type=dataset_type, + data_mode='bottomup', + ann_file='annotations/person_keypoints_val2017.json', + pipeline=test_pipeline)) +test_dataloader = val_dataloader + +# evaluators +val_evaluator = dict( + _delete_=True, + type='mmpose.CocoMetric', + ann_file=_base_.data_root + 'annotations/person_keypoints_val2017.json', + score_mode='bbox') +test_evaluator = val_evaluator + +default_hooks = dict(checkpoint=dict(save_best='coco/AP', rule='greater')) + +visualizer = dict(type='mmpose.PoseLocalVisualizer') + +custom_hooks = [ + dict( + type='YOLOXModeSwitchHook', + num_last_epochs=num_last_epochs, + new_train_pipeline=train_pipeline_stage2, + priority=48), + dict(type='mmdet.SyncNormHook', priority=48), + dict( + type='EMAHook', + ema_type='ExpMomentumEMA', + momentum=0.0002, + update_buffers=True, + strict_load=False, + priority=49) +] diff --git a/configs/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco.py b/configs/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco.py new file mode 100644 index 000000000..a7399065e --- /dev/null +++ b/configs/yolox/pose/yolox-pose_tiny_8xb32-300e-rtmdet-hyp_coco.py @@ -0,0 +1,70 @@ +_base_ = './yolox-pose_s_8xb32-300e-rtmdet-hyp_coco.py' + +load_from = 'https://download.openmmlab.com/mmyolo/v0/yolox/yolox_tiny_fast_8xb32-300e-rtmdet-hyp_coco/yolox_tiny_fast_8xb32-300e-rtmdet-hyp_coco_20230210_143637-4c338102.pth' # noqa + +deepen_factor = 0.33 +widen_factor = 0.375 +scaling_ratio_range = (0.75, 1.0) + +# model settings +model = dict( + data_preprocessor=dict(batch_augments=[ + dict( + type='YOLOXBatchSyncRandomResize', + random_size_range=(320, 640), + size_divisor=32, + interval=1) + ]), + backbone=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + neck=dict( + deepen_factor=deepen_factor, + widen_factor=widen_factor, + ), + bbox_head=dict(head_module=dict(widen_factor=widen_factor))) + +# data settings +img_scale = _base_.img_scale +pre_transform = _base_.pre_transform + +train_pipeline_stage1 = [ + *pre_transform, + dict( + type='Mosaic', + img_scale=img_scale, + pad_val=114.0, + pre_transform=pre_transform), + dict( + type='RandomAffine', + scaling_ratio_range=scaling_ratio_range, + border=(-img_scale[0] // 2, -img_scale[1] // 2)), + dict(type='mmdet.YOLOXHSVRandomAug'), + dict(type='RandomFlip', prob=0.5), + dict( + type='FilterAnnotations', + by_keypoints=True, + min_gt_bbox_wh=(1, 1), + keep_empty=False), + dict( + type='PackDetInputs', + meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape')) +] + +test_pipeline = [ + *pre_transform, + dict(type='Resize', scale=(416, 416), keep_ratio=True), + dict( + type='mmdet.Pad', + pad_to_square=True, + pad_val=dict(img=(114.0, 114.0, 114.0))), + dict( + type='PackDetInputs', + meta_keys=('id', 'img_id', 'img_path', 'ori_shape', 'img_shape', + 'scale_factor', 'flip_indices')) +] + +train_dataloader = dict(dataset=dict(pipeline=train_pipeline_stage1)) +val_dataloader = dict(dataset=dict(pipeline=test_pipeline)) +test_dataloader = val_dataloader diff --git a/mmyolo/datasets/__init__.py b/mmyolo/datasets/__init__.py index b3b6b9719..9db439045 100644 --- a/mmyolo/datasets/__init__.py +++ b/mmyolo/datasets/__init__.py @@ -1,4 +1,5 @@ # Copyright (c) OpenMMLab. All rights reserved. 
+from .pose_coco import PoseCocoDataset
 from .transforms import *  # noqa: F401,F403
 from .utils import BatchShapePolicy, yolov5_collate
 from .yolov5_coco import YOLOv5CocoDataset
@@ -8,5 +9,6 @@
 __all__ = [
     'YOLOv5CocoDataset', 'YOLOv5VOCDataset', 'BatchShapePolicy',
-    'yolov5_collate', 'YOLOv5CrowdHumanDataset', 'YOLOv5DOTADataset'
+    'yolov5_collate', 'YOLOv5CrowdHumanDataset', 'YOLOv5DOTADataset',
+    'PoseCocoDataset'
 ]
diff --git a/mmyolo/datasets/pose_coco.py b/mmyolo/datasets/pose_coco.py
new file mode 100644
index 000000000..85041f14c
--- /dev/null
+++ b/mmyolo/datasets/pose_coco.py
@@ -0,0 +1,24 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+from typing import Any
+
+from mmengine.dataset import force_full_init
+
+try:
+    from mmpose.datasets import CocoDataset as MMPoseCocoDataset
+except ImportError:
+    raise ImportError('Please run "mim install -r requirements/mmpose.txt" '
+                      'to install mmpose first for pose estimation.')
+
+from ..registry import DATASETS
+
+
[email protected]_module()
+class PoseCocoDataset(MMPoseCocoDataset):
+
+    METAINFO: dict = dict(from_file='configs/_base_/pose/coco.py')
+
+    @force_full_init
+    def prepare_data(self, idx) -> Any:
+        data_info = self.get_data_info(idx)
+        data_info['dataset'] = self
+        return self.pipeline(data_info)
diff --git a/mmyolo/datasets/transforms/__init__.py b/mmyolo/datasets/transforms/__init__.py
index 6719ac337..7cdcf8625 100644
--- a/mmyolo/datasets/transforms/__init__.py
+++ b/mmyolo/datasets/transforms/__init__.py
@@ -1,16 +1,18 @@
 # Copyright (c) OpenMMLab. All rights reserved.
 from .formatting import PackDetInputs
 from .mix_img_transforms import Mosaic, Mosaic9, YOLOv5MixUp, YOLOXMixUp
-from .transforms import (LetterResize, LoadAnnotations, Polygon2Mask,
-                         PPYOLOERandomCrop, PPYOLOERandomDistort,
-                         RegularizeRotatedBox, RemoveDataElement,
-                         YOLOv5CopyPaste, YOLOv5HSVRandomAug,
-                         YOLOv5KeepRatioResize, YOLOv5RandomAffine)
+from .transforms import (FilterAnnotations, LetterResize, LoadAnnotations,
+                         Polygon2Mask, PPYOLOERandomCrop, PPYOLOERandomDistort,
+                         RandomAffine, RandomFlip, RegularizeRotatedBox,
+                         RemoveDataElement, Resize, YOLOv5CopyPaste,
+                         YOLOv5HSVRandomAug, YOLOv5KeepRatioResize,
+                         YOLOv5RandomAffine)
 
 __all__ = [
     'YOLOv5KeepRatioResize', 'LetterResize', 'Mosaic', 'YOLOXMixUp',
     'YOLOv5MixUp', 'YOLOv5HSVRandomAug', 'LoadAnnotations',
     'YOLOv5RandomAffine', 'PPYOLOERandomDistort', 'PPYOLOERandomCrop',
     'Mosaic9', 'YOLOv5CopyPaste', 'RemoveDataElement', 'RegularizeRotatedBox',
-    'Polygon2Mask', 'PackDetInputs'
+    'Polygon2Mask', 'PackDetInputs', 'RandomAffine', 'RandomFlip', 'Resize',
+    'FilterAnnotations'
 ]
diff --git a/mmyolo/datasets/transforms/formatting.py b/mmyolo/datasets/transforms/formatting.py
index 0185d78c3..07eb0121e 100644
--- a/mmyolo/datasets/transforms/formatting.py
+++ b/mmyolo/datasets/transforms/formatting.py
@@ -16,6 +16,13 @@ class PackDetInputs(MMDET_PackDetInputs):
 
     Compared to mmdet, we just add the `gt_panoptic_seg` field and logic.
     """
+    mapping_table = {
+        'gt_bboxes': 'bboxes',
+        'gt_bboxes_labels': 'labels',
+        'gt_masks': 'masks',
+        'gt_keypoints': 'keypoints',
+        'gt_keypoints_visible': 'keypoints_visible'
+    }
 
     def transform(self, results: dict) -> dict:
         """Method to pack the input data.
@@ -50,6 +57,10 @@ def transform(self, results: dict) -> dict: if 'gt_ignore_flags' in results: valid_idx = np.where(results['gt_ignore_flags'] == 0)[0] ignore_idx = np.where(results['gt_ignore_flags'] == 1)[0] + if 'gt_keypoints' in results: + results['gt_keypoints_visible'] = results[ + 'gt_keypoints'].keypoints_visible + results['gt_keypoints'] = results['gt_keypoints'].keypoints data_sample = DetDataSample() instance_data = InstanceData() diff --git a/mmyolo/datasets/transforms/keypoint_structure.py b/mmyolo/datasets/transforms/keypoint_structure.py new file mode 100644 index 000000000..7b8402be9 --- /dev/null +++ b/mmyolo/datasets/transforms/keypoint_structure.py @@ -0,0 +1,248 @@ +# Copyright (c) OpenMMLab. All rights reserved. +from abc import ABCMeta +from copy import deepcopy +from typing import List, Optional, Sequence, Tuple, Type, TypeVar, Union + +import numpy as np +import torch +from torch import Tensor + +DeviceType = Union[str, torch.device] +T = TypeVar('T') +IndexType = Union[slice, int, list, torch.LongTensor, torch.cuda.LongTensor, + torch.BoolTensor, torch.cuda.BoolTensor, np.ndarray] + + +class Keypoints(metaclass=ABCMeta): + """The Keypoints class is for keypoints representation. + + Args: + keypoints (Tensor or np.ndarray): The keypoint data with shape of + (N, K, 2). + keypoints_visible (Tensor or np.ndarray): The visibility of keypoints + with shape of (N, K). + device (str or torch.device, Optional): device of keypoints. + Default to None. + clone (bool): Whether clone ``keypoints`` or not. Defaults to True. + flip_indices (list, Optional): The indices of keypoints when the + images is flipped. Defaults to None. + + Notes: + N: the number of instances. + K: the number of keypoints. + """ + + def __init__(self, + keypoints: Union[Tensor, np.ndarray], + keypoints_visible: Union[Tensor, np.ndarray], + device: Optional[DeviceType] = None, + clone: bool = True, + flip_indices: Optional[List] = None) -> None: + + assert len(keypoints_visible) == len(keypoints) + assert keypoints.ndim == 3 + assert keypoints_visible.ndim == 2 + + keypoints = torch.as_tensor(keypoints) + keypoints_visible = torch.as_tensor(keypoints_visible) + + if device is not None: + keypoints = keypoints.to(device=device) + keypoints_visible = keypoints_visible.to(device=device) + + if clone: + keypoints = keypoints.clone() + keypoints_visible = keypoints_visible.clone() + + self.keypoints = keypoints + self.keypoints_visible = keypoints_visible + self.flip_indices = flip_indices + + def flip_(self, + img_shape: Tuple[int, int], + direction: str = 'horizontal') -> None: + """Flip boxes & kpts horizontally in-place. + + Args: + img_shape (Tuple[int, int]): A tuple of image height and width. + direction (str): Flip direction, options are "horizontal", + "vertical" and "diagonal". Defaults to "horizontal" + """ + assert direction == 'horizontal' + self.keypoints[..., 0] = img_shape[1] - self.keypoints[..., 0] + self.keypoints = self.keypoints[:, self.flip_indices] + self.keypoints_visible = self.keypoints_visible[:, self.flip_indices] + + def translate_(self, distances: Tuple[float, float]) -> None: + """Translate boxes and keypoints in-place. + + Args: + distances (Tuple[float, float]): translate distances. The first + is horizontal distance and the second is vertical distance. 
+ """ + assert len(distances) == 2 + distances = self.keypoints.new_tensor(distances).reshape(1, 1, 2) + self.keypoints = self.keypoints + distances + + def rescale_(self, scale_factor: Tuple[float, float]) -> None: + """Rescale boxes & keypoints w.r.t. rescale_factor in-place. + + Note: + Both ``rescale_`` and ``resize_`` will enlarge or shrink boxes + w.r.t ``scale_facotr``. The difference is that ``resize_`` only + changes the width and the height of boxes, but ``rescale_`` also + rescales the box centers simultaneously. + + Args: + scale_factor (Tuple[float, float]): factors for scaling boxes. + The length should be 2. + """ + assert len(scale_factor) == 2 + + scale_factor = self.keypoints.new_tensor(scale_factor).reshape(1, 1, 2) + self.keypoints = self.keypoints * scale_factor + + def clip_(self, img_shape: Tuple[int, int]) -> None: + """Clip bounding boxes and set invisible keypoints outside the image + boundary in-place. + + Args: + img_shape (Tuple[int, int]): A tuple of image height and width. + """ + + kpt_outside = torch.logical_or( + torch.logical_or(self.keypoints[..., 0] < 0, + self.keypoints[..., 1] < 0), + torch.logical_or(self.keypoints[..., 0] > img_shape[1], + self.keypoints[..., 1] > img_shape[0])) + self.keypoints_visible[kpt_outside] *= 0 + + def project_(self, homography_matrix: Union[Tensor, np.ndarray]) -> None: + """Geometrically transform bounding boxes and keypoints in-place using + a homography matrix. + + Args: + homography_matrix (Tensor or np.ndarray): A 3x3 tensor or ndarray + representing the homography matrix for the transformation. + """ + keypoints = self.keypoints + if isinstance(homography_matrix, np.ndarray): + homography_matrix = keypoints.new_tensor(homography_matrix) + + # Convert keypoints to homogeneous coordinates + keypoints = torch.cat([ + self.keypoints, + self.keypoints.new_ones(*self.keypoints.shape[:-1], 1) + ], + dim=-1) + + # Transpose keypoints for matrix multiplication + keypoints_T = torch.transpose(keypoints, -1, 0).contiguous().flatten(1) + + # Apply homography matrix to corners and keypoints + keypoints_T = torch.matmul(homography_matrix, keypoints_T) + + # Transpose back to original shape + keypoints_T = keypoints_T.reshape(3, self.keypoints.shape[1], -1) + keypoints = torch.transpose(keypoints_T, -1, 0).contiguous() + + # Convert corners and keypoints back to non-homogeneous coordinates + keypoints = keypoints[..., :2] / keypoints[..., 2:3] + + # Convert corners back to bounding boxes and update object attributes + self.keypoints = keypoints + + @classmethod + def cat(cls: Type[T], kps_list: Sequence[T], dim: int = 0) -> T: + """Cancatenates an instance list into one single instance. Similar to + ``torch.cat``. + + Args: + box_list (Sequence[T]): A sequence of instances. + dim (int): The dimension over which the box and keypoint are + concatenated. Defaults to 0. + + Returns: + T: Concatenated instance. 
+ """ + assert isinstance(kps_list, Sequence) + if len(kps_list) == 0: + raise ValueError('kps_list should not be a empty list.') + + assert dim == 0 + assert all(isinstance(keypoints, cls) for keypoints in kps_list) + + th_kpt_list = torch.cat( + [keypoints.keypoints for keypoints in kps_list], dim=dim) + th_kpt_vis_list = torch.cat( + [keypoints.keypoints_visible for keypoints in kps_list], dim=dim) + flip_indices = kps_list[0].flip_indices + return cls( + th_kpt_list, + th_kpt_vis_list, + clone=False, + flip_indices=flip_indices) + + def __getitem__(self: T, index: IndexType) -> T: + """Rewrite getitem to protect the last dimension shape.""" + if isinstance(index, np.ndarray): + index = torch.as_tensor(index, device=self.device) + if isinstance(index, Tensor) and index.dtype == torch.bool: + assert index.dim() < self.keypoints.dim() - 1 + elif isinstance(index, tuple): + assert len(index) < self.keypoints.dim() - 1 + # `Ellipsis`(...) is commonly used in index like [None, ...]. + # When `Ellipsis` is in index, it must be the last item. + if Ellipsis in index: + assert index[-1] is Ellipsis + + keypoints = self.keypoints[index] + keypoints_visible = self.keypoints_visible[index] + if self.keypoints.dim() == 2: + keypoints = keypoints.reshape(1, -1, 2) + keypoints_visible = keypoints_visible.reshape(1, -1) + return type(self)( + keypoints, + keypoints_visible, + flip_indices=self.flip_indices, + clone=False) + + def __repr__(self) -> str: + """Return a strings that describes the object.""" + return self.__class__.__name__ + '(\n' + str(self.keypoints) + ')' + + @property + def num_keypoints(self) -> Tensor: + """Compute the number of visible keypoints for each object.""" + return self.keypoints_visible.sum(dim=1).int() + + def __deepcopy__(self, memo): + """Only clone the tensors when applying deepcopy.""" + cls = self.__class__ + other = cls.__new__(cls) + memo[id(self)] = other + other.keypoints = self.keypoints.clone() + other.keypoints_visible = self.keypoints_visible.clone() + other.flip_indices = deepcopy(self.flip_indices) + return other + + def clone(self: T) -> T: + """Reload ``clone`` for tensors.""" + return type(self)( + self.keypoints, + self.keypoints_visible, + flip_indices=self.flip_indices, + clone=True) + + def to(self: T, *args, **kwargs) -> T: + """Reload ``to`` for tensors.""" + return type(self)( + self.keypoints.to(*args, **kwargs), + self.keypoints_visible.to(*args, **kwargs), + flip_indices=self.flip_indices, + clone=False) + + @property + def device(self) -> torch.device: + """Reload ``device`` from self.tensor.""" + return self.keypoints.device diff --git a/mmyolo/datasets/transforms/mix_img_transforms.py b/mmyolo/datasets/transforms/mix_img_transforms.py index 4753ecc3a..29e4a4057 100644 --- a/mmyolo/datasets/transforms/mix_img_transforms.py +++ b/mmyolo/datasets/transforms/mix_img_transforms.py @@ -318,7 +318,9 @@ def mix_img_transform(self, results: dict) -> dict: mosaic_bboxes_labels = [] mosaic_ignore_flags = [] mosaic_masks = [] + mosaic_kps = [] with_mask = True if 'gt_masks' in results else False + with_kps = True if 'gt_keypoints' in results else False # self.img_scale is wh format img_scale_w, img_scale_h = self.img_scale @@ -386,6 +388,12 @@ def mix_img_transform(self, results: dict) -> dict: offset=padh, direction='vertical') mosaic_masks.append(gt_masks_i) + if with_kps and results_patch.get('gt_keypoints', + None) is not None: + gt_kps_i = results_patch['gt_keypoints'] + gt_kps_i.rescale_([scale_ratio_i, scale_ratio_i]) + gt_kps_i.translate_([padw, 
padh]) + mosaic_kps.append(gt_kps_i) mosaic_bboxes = mosaic_bboxes[0].cat(mosaic_bboxes, 0) mosaic_bboxes_labels = np.concatenate(mosaic_bboxes_labels, 0) @@ -396,6 +404,10 @@ def mix_img_transform(self, results: dict) -> dict: if with_mask: mosaic_masks = mosaic_masks[0].cat(mosaic_masks) results['gt_masks'] = mosaic_masks + if with_kps: + mosaic_kps = mosaic_kps[0].cat(mosaic_kps, 0) + mosaic_kps.clip_([2 * img_scale_h, 2 * img_scale_w]) + results['gt_keypoints'] = mosaic_kps else: # remove outside bboxes inside_inds = mosaic_bboxes.is_inside( @@ -406,6 +418,10 @@ def mix_img_transform(self, results: dict) -> dict: if with_mask: mosaic_masks = mosaic_masks[0].cat(mosaic_masks)[inside_inds] results['gt_masks'] = mosaic_masks + if with_kps: + mosaic_kps = mosaic_kps[0].cat(mosaic_kps, 0) + mosaic_kps = mosaic_kps[inside_inds] + results['gt_keypoints'] = mosaic_kps results['img'] = mosaic_img results['img_shape'] = mosaic_img.shape @@ -1131,6 +1147,31 @@ def mix_img_transform(self, results: dict) -> dict: mixup_gt_bboxes_labels = mixup_gt_bboxes_labels[inside_inds] mixup_gt_ignore_flags = mixup_gt_ignore_flags[inside_inds] + if 'gt_keypoints' in results: + # adjust kps + retrieve_gt_keypoints = retrieve_results['gt_keypoints'] + retrieve_gt_keypoints.rescale_([scale_ratio, scale_ratio]) + if self.bbox_clip_border: + retrieve_gt_keypoints.clip_([origin_h, origin_w]) + + if is_filp: + retrieve_gt_keypoints.flip_([origin_h, origin_w], + direction='horizontal') + + # filter + cp_retrieve_gt_keypoints = retrieve_gt_keypoints.clone() + cp_retrieve_gt_keypoints.translate_([-x_offset, -y_offset]) + if self.bbox_clip_border: + cp_retrieve_gt_keypoints.clip_([target_h, target_w]) + + # mixup + mixup_gt_keypoints = cp_retrieve_gt_keypoints.cat( + (results['gt_keypoints'], cp_retrieve_gt_keypoints), dim=0) + if not self.bbox_clip_border: + # remove outside bbox + mixup_gt_keypoints = mixup_gt_keypoints[inside_inds] + results['gt_keypoints'] = mixup_gt_keypoints + results['img'] = mixup_img.astype(np.uint8) results['img_shape'] = mixup_img.shape results['gt_bboxes'] = mixup_gt_bboxes diff --git a/mmyolo/datasets/transforms/transforms.py b/mmyolo/datasets/transforms/transforms.py index 30dfdb3f7..12d15c960 100644 --- a/mmyolo/datasets/transforms/transforms.py +++ b/mmyolo/datasets/transforms/transforms.py @@ -7,9 +7,13 @@ import mmcv import numpy as np import torch +from mmcv.image.geometric import _scale_size from mmcv.transforms import BaseTransform, Compose from mmcv.transforms.utils import cache_randomness +from mmdet.datasets.transforms import FilterAnnotations as FilterDetAnnotations from mmdet.datasets.transforms import LoadAnnotations as MMDET_LoadAnnotations +from mmdet.datasets.transforms import RandomAffine as MMDET_RandomAffine +from mmdet.datasets.transforms import RandomFlip as MMDET_RandomFlip from mmdet.datasets.transforms import Resize as MMDET_Resize from mmdet.structures.bbox import (HorizontalBoxes, autocast_box_type, get_box_type) @@ -17,6 +21,7 @@ from numpy import random from mmyolo.registry import TRANSFORMS +from .keypoint_structure import Keypoints # TODO: Waiting for MMCV support TRANSFORMS.register_module(module=Compose, force=True) @@ -435,6 +440,11 @@ def transform(self, results: dict) -> dict: self._update_mask_ignore_data(results) gt_bboxes = results['gt_masks'].get_bboxes(dst_type='hbox') results['gt_bboxes'] = gt_bboxes + elif self.with_keypoints: + self._load_kps(results) + _, box_type_cls = get_box_type(self.box_type) + results['gt_bboxes'] = box_type_cls( + 
results.get('bbox', []), dtype=torch.float32)
         else:
             results = super().transform(results)
             self._update_mask_ignore_data(results)
@@ -611,6 +621,36 @@ def min_index(self, arr1: np.ndarray, arr2: np.ndarray) -> Tuple[int, int]:
         dis = ((arr1[:, None, :] - arr2[None, :, :])**2).sum(-1)
         return np.unravel_index(np.argmin(dis, axis=None), dis.shape)
 
+    def _load_kps(self, results: dict) -> None:
+        """Private function to load keypoints annotations.
+
+        Args:
+            results (dict): Result dict from
+                :class:`mmengine.dataset.BaseDataset`.
+
+        Returns:
+            None: ``results`` is updated in place with keypoints annotations.
+        """
+        results['height'] = results['img_shape'][0]
+        results['width'] = results['img_shape'][1]
+        num_instances = len(results.get('bbox', []))
+
+        if num_instances == 0:
+            results['keypoints'] = np.empty(
+                (0, len(results['flip_indices']), 2), dtype=np.float32)
+            results['keypoints_visible'] = np.empty(
+                (0, len(results['flip_indices'])), dtype=np.int32)
+            results['category_id'] = []
+
+        results['gt_keypoints'] = Keypoints(
+            keypoints=results['keypoints'],
+            keypoints_visible=results['keypoints_visible'],
+            flip_indices=results['flip_indices'],
+        )
+
+        results['gt_ignore_flags'] = np.array([False] * num_instances)
+        results['gt_bboxes_labels'] = np.array(results['category_id']) - 1
+
     def __repr__(self) -> str:
         repr_str = self.__class__.__name__
         repr_str += f'(with_bbox={self.with_bbox}, '
@@ -1872,3 +1912,192 @@ def transform(self, results: dict) -> dict:
         # Consistent logic with mmdet
         results['gt_masks'] = masks
         return results
+
+
[email protected]_module()
+class FilterAnnotations(FilterDetAnnotations):
+    """Filter invalid annotations.
+
+    In addition to the conditions checked by ``FilterDetAnnotations``, this
+    filter adds a new condition requiring instances to have at least one
+    visible keypoint.
+    """
+
+    def __init__(self, by_keypoints: bool = False, **kwargs) -> None:
+        # TODO: add more filter options
+        super().__init__(**kwargs)
+        self.by_keypoints = by_keypoints
+
+    @autocast_box_type()
+    def transform(self, results: dict) -> Union[dict, None]:
+        """Transform function to filter annotations.
+
+        Args:
+            results (dict): Result dict.
+        Returns:
+            dict: Updated result dict.
+        """
+        assert 'gt_bboxes' in results
+        gt_bboxes = results['gt_bboxes']
+        if gt_bboxes.shape[0] == 0:
+            return results
+
+        tests = []
+        if self.by_box:
+            tests.append(
+                ((gt_bboxes.widths > self.min_gt_bbox_wh[0]) &
+                 (gt_bboxes.heights > self.min_gt_bbox_wh[1])).numpy())
+
+        if self.by_mask:
+            assert 'gt_masks' in results
+            gt_masks = results['gt_masks']
+            tests.append(gt_masks.areas >= self.min_gt_mask_area)
+
+        if self.by_keypoints:
+            assert 'gt_keypoints' in results
+            num_keypoints = results['gt_keypoints'].num_keypoints
+            tests.append((num_keypoints > 0).numpy())
+
+        keep = tests[0]
+        for t in tests[1:]:
+            keep = keep & t
+
+        if not keep.any():
+            if self.keep_empty:
+                return None
+
+        keys = ('gt_bboxes', 'gt_bboxes_labels', 'gt_masks', 'gt_ignore_flags',
+                'gt_keypoints')
+        for key in keys:
+            if key in results:
+                results[key] = results[key][keep]
+
+        return results
+
+
+# TODO: Check if it can be merged with mmdet.RandomAffine
[email protected]_module()
+class RandomAffine(MMDET_RandomAffine):
+
+    def __init__(self, **kwargs) -> None:
+        super().__init__(**kwargs)
+
+    @autocast_box_type()
+    def transform(self, results: dict) -> dict:
+        img = results['img']
+        height = img.shape[0] + self.border[1] * 2
+        width = img.shape[1] + self.border[0] * 2
+
+        warp_matrix = self._get_random_homography_matrix(height, width)
+
+        img = cv2.warpPerspective(
+            img,
+            warp_matrix,
+            dsize=(width, height),
+            borderValue=self.border_val)
+        results['img'] = img
+        results['img_shape'] = img.shape
+
+        bboxes = results['gt_bboxes']
+        num_bboxes = len(bboxes)
+        if num_bboxes:
+            bboxes.project_(warp_matrix)
+            if self.bbox_clip_border:
+                bboxes.clip_([height, width])
+            # remove outside bbox
+            valid_index = bboxes.is_inside([height, width]).numpy()
+            results['gt_bboxes'] = bboxes[valid_index]
+            results['gt_bboxes_labels'] = results['gt_bboxes_labels'][
+                valid_index]
+            results['gt_ignore_flags'] = results['gt_ignore_flags'][
+                valid_index]
+
+            if 'gt_masks' in results:
+                raise NotImplementedError('RandomAffine only supports bbox.')
+
+            if 'gt_keypoints' in results:
+                keypoints = results['gt_keypoints']
+                keypoints.project_(warp_matrix)
+                if self.bbox_clip_border:
+                    keypoints.clip_([height, width])
+                results['gt_keypoints'] = keypoints[valid_index]
+
+        return results
+
+    def __repr__(self) -> str:
+        repr_str = self.__class__.__name__
+        repr_str += f'(max_rotate_degree={self.max_rotate_degree}, '
+        repr_str += f'max_translate_ratio={self.max_translate_ratio}, '
+        repr_str += f'scaling_ratio_range={self.scaling_ratio_range})'
+        return repr_str
+
+
+# TODO: Check if it can be merged with mmdet.RandomFlip
[email protected]_module()
+class RandomFlip(MMDET_RandomFlip):
+
+    @autocast_box_type()
+    def _flip(self, results: dict) -> None:
+        """Flip images, bounding boxes, keypoints and semantic seg map."""
+        # flip image
+        results['img'] = mmcv.imflip(
+            results['img'], direction=results['flip_direction'])
+
+        img_shape = results['img'].shape[:2]
+
+        # flip bboxes
+        if results.get('gt_bboxes', None) is not None:
+            results['gt_bboxes'].flip_(img_shape, results['flip_direction'])
+
+        # flip keypoints
+        if results.get('gt_keypoints', None) is not None:
+            results['gt_keypoints'].flip_(img_shape, results['flip_direction'])
+
+        # flip masks
+        if results.get('gt_masks', None) is not None:
+            results['gt_masks'] = results['gt_masks'].flip(
+                results['flip_direction'])
+
+        # flip segs
+        if results.get('gt_seg_map', None) is not None:
+            results['gt_seg_map'] = mmcv.imflip(
+                results['gt_seg_map'], direction=results['flip_direction'])
+
+        # record homography matrix for flip
+        self._record_homography_matrix(results)
+
+
[email protected]_module()
+class Resize(MMDET_Resize):
+
+    def _resize_keypoints(self, results: dict) -> None:
+        """Resize keypoints with ``results['scale_factor']``."""
+        if results.get('gt_keypoints', None) is not None:
+            results['gt_keypoints'].rescale_(results['scale_factor'])
+            if self.clip_object_border:
+                results['gt_keypoints'].clip_(results['img_shape'])
+
+    @autocast_box_type()
+    def transform(self, results: dict) -> dict:
+        """Transform function to resize images, bounding boxes, keypoints and
+        semantic segmentation map.
+
+        Args:
+            results (dict): Result dict from loading pipeline.
+        Returns:
+            dict: Resized results, 'img', 'gt_bboxes', 'gt_seg_map',
+            'scale', 'scale_factor', 'height', 'width', and 'keep_ratio' keys
+            are updated in result dict.
+        """
+        if self.scale:
+            results['scale'] = self.scale
+        else:
+            img_shape = results['img'].shape[:2]
+            results['scale'] = _scale_size(img_shape[::-1], self.scale_factor)
+        self._resize_img(results)
+        self._resize_bboxes(results)
+        self._resize_keypoints(results)
+        self._resize_masks(results)
+        self._resize_seg(results)
+        self._record_homography_matrix(results)
+        return results
diff --git a/mmyolo/datasets/utils.py b/mmyolo/datasets/utils.py
index d50207c8d..efa2ff5ef 100644
--- a/mmyolo/datasets/utils.py
+++ b/mmyolo/datasets/utils.py
@@ -21,6 +21,8 @@ def yolov5_collate(data_batch: Sequence,
     batch_imgs = []
     batch_bboxes_labels = []
     batch_masks = []
+    batch_keypoints = []
+    batch_keypoints_visible = []
     for i in range(len(data_batch)):
         datasamples = data_batch[i]['data_samples']
         inputs = data_batch[i]['inputs']
@@ -33,11 +35,16 @@ def yolov5_collate(data_batch: Sequence,
             batch_masks.append(masks)
         if 'gt_panoptic_seg' in datasamples:
             batch_masks.append(datasamples.gt_panoptic_seg.pan_seg)
+        if 'keypoints' in datasamples.gt_instances:
+            keypoints = datasamples.gt_instances.keypoints
+            keypoints_visible = datasamples.gt_instances.keypoints_visible
+            batch_keypoints.append(keypoints)
+            batch_keypoints_visible.append(keypoints_visible)
+
         batch_idx = gt_labels.new_full((len(gt_labels), 1), i)
         bboxes_labels = torch.cat((batch_idx, gt_labels[:, None], gt_bboxes),
                                   dim=1)
         batch_bboxes_labels.append(bboxes_labels)
-
     collated_results = {
         'data_samples': {
             'bboxes_labels': torch.cat(batch_bboxes_labels, 0)
@@ -46,6 +53,12 @@ def yolov5_collate(data_batch: Sequence,
     if len(batch_masks) > 0:
         collated_results['data_samples']['masks'] = torch.cat(batch_masks, 0)
 
+    if len(batch_keypoints) > 0:
+        collated_results['data_samples']['keypoints'] = torch.cat(
+            batch_keypoints, 0)
+        collated_results['data_samples']['keypoints_visible'] = torch.cat(
+            batch_keypoints_visible, 0)
+
     if use_ms_training:
         collated_results['inputs'] = batch_imgs
     else:
diff --git a/mmyolo/models/data_preprocessors/data_preprocessor.py b/mmyolo/models/data_preprocessors/data_preprocessor.py
index f09fd8e74..a29b90844 100644
--- a/mmyolo/models/data_preprocessors/data_preprocessor.py
+++ b/mmyolo/models/data_preprocessors/data_preprocessor.py
@@ -49,6 +49,10 @@ def forward(self, inputs: Tensor, data_samples: dict) -> Tensor and dict:
         data_samples['bboxes_labels'][:, 2::2] *= scale_x
         data_samples['bboxes_labels'][:, 3::2] *= scale_y
 
+        if 'keypoints' in data_samples:
+            data_samples['keypoints'][..., 0] *= scale_x
+            data_samples['keypoints'][..., 1] *= scale_y
+
         message_hub = MessageHub.get_current_instance()
         if (message_hub.get_info('iter') + 1) % self._interval == 0:
             self._input_size = self._get_random_size(
@@ -102,6 +106,10 @@ def forward(self, data: dict, training: bool = False) -> dict:
         }
         if 'masks' in data_samples:
             data_samples_output['masks'] = data_samples['masks']
+        if 'keypoints' in data_samples:
+            data_samples_output['keypoints'] = data_samples['keypoints']
+            data_samples_output['keypoints_visible'] = data_samples[
+                'keypoints_visible']
 
         return {'inputs': inputs, 'data_samples': data_samples_output}
diff --git a/mmyolo/models/dense_heads/__init__.py b/mmyolo/models/dense_heads/__init__.py
index ac65c42e5..90587c3fb 100644
--- a/mmyolo/models/dense_heads/__init__.py
+++ b/mmyolo/models/dense_heads/__init__.py
@@ -10,6 +10,7 @@
 from .yolov7_head import YOLOv7Head, YOLOv7HeadModule, YOLOv7p6HeadModule
 from .yolov8_head import YOLOv8Head, YOLOv8HeadModule
 from .yolox_head import YOLOXHead, YOLOXHeadModule
+from .yolox_pose_head import YOLOXPoseHead, YOLOXPoseHeadModule
 
 __all__ = [
     'YOLOv5Head', 'YOLOv6Head', 'YOLOXHead', 'YOLOv5HeadModule',
@@ -17,5 +18,6 @@
     'RTMDetSepBNHeadModule', 'YOLOv7Head', 'PPYOLOEHead', 'PPYOLOEHeadModule',
     'YOLOv7HeadModule', 'YOLOv7p6HeadModule', 'YOLOv8Head', 'YOLOv8HeadModule',
     'RTMDetRotatedHead', 'RTMDetRotatedSepBNHeadModule', 'RTMDetInsSepBNHead',
-    'RTMDetInsSepBNHeadModule', 'YOLOv5InsHead', 'YOLOv5InsHeadModule'
+    'RTMDetInsSepBNHeadModule', 'YOLOv5InsHead', 'YOLOv5InsHeadModule',
+    'YOLOXPoseHead', 'YOLOXPoseHeadModule'
 ]
diff --git a/mmyolo/models/dense_heads/yolox_pose_head.py b/mmyolo/models/dense_heads/yolox_pose_head.py
new file mode 100644
index 000000000..96264e552
--- /dev/null
+++ b/mmyolo/models/dense_heads/yolox_pose_head.py
@@ -0,0 +1,409 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+from collections import defaultdict
+from typing import List, Optional, Sequence, Tuple, Union
+
+import torch
+import torch.nn as nn
+from mmcv.ops import batched_nms
+from mmdet.models.utils import filter_scores_and_topk
+from mmdet.utils import ConfigType, OptInstanceList
+from mmengine.config import ConfigDict
+from mmengine.model import ModuleList, bias_init_with_prob
+from mmengine.structures import InstanceData
+from torch import Tensor
+
+from mmyolo.registry import MODELS
+from ..utils import OutputSaveFunctionWrapper, OutputSaveObjectWrapper
+from .yolox_head import YOLOXHead, YOLOXHeadModule
+
+
[email protected]_module()
+class YOLOXPoseHeadModule(YOLOXHeadModule):
+    """YOLOXPoseHeadModule serves as a head module for `YOLOX-Pose`.
+
+    In comparison to `YOLOXHeadModule`, this module introduces branches for
+    keypoint prediction.
+    """
+
+    def __init__(self, num_keypoints: int, *args, **kwargs):
+        self.num_keypoints = num_keypoints
+        super().__init__(*args, **kwargs)
+
+    def _init_layers(self):
+        """Initializes the layers in the head module."""
+        super()._init_layers()
+
+        # The pose branch requires additional layers for precise regression
+        self.stacked_convs *= 2
+
+        # Create separate layers for each level of feature maps
+        pose_convs, offsets_preds, vis_preds = [], [], []
+        for _ in self.featmap_strides:
+            pose_convs.append(self._build_stacked_convs())
+            offsets_preds.append(
+                nn.Conv2d(self.feat_channels, self.num_keypoints * 2, 1))
+            vis_preds.append(
+                nn.Conv2d(self.feat_channels, self.num_keypoints, 1))
+
+        self.multi_level_pose_convs = ModuleList(pose_convs)
+        self.multi_level_conv_offsets = ModuleList(offsets_preds)
+        self.multi_level_conv_vis = ModuleList(vis_preds)
+
+    def init_weights(self):
+        """Initialize weights of the head."""
+        super().init_weights()
+
+        # Use prior in model initialization to improve stability
+        bias_init = bias_init_with_prob(0.01)
+        for conv_vis in self.multi_level_conv_vis:
+            conv_vis.bias.data.fill_(bias_init)
+
+    def forward(self, x: Tuple[Tensor]) -> Tuple[List]:
+        """Forward features from the upstream network."""
+        offsets_pred, vis_pred = [], []
+        for i in range(len(x)):
+            pose_feat = self.multi_level_pose_convs[i](x[i])
+            offsets_pred.append(self.multi_level_conv_offsets[i](pose_feat))
+            vis_pred.append(self.multi_level_conv_vis[i](pose_feat))
+        return (*super().forward(x), offsets_pred, vis_pred)
+
+
[email protected]_module()
+class YOLOXPoseHead(YOLOXHead):
+    """YOLOXPoseHead head used in `YOLO-Pose
+    <https://arxiv.org/abs/2204.06806>`_.
+
+    Args:
+        loss_pose (ConfigDict, optional): Config of keypoint OKS loss.
+    """
+
+    def __init__(
+        self,
+        loss_pose: Optional[ConfigType] = None,
+        *args,
+        **kwargs,
+    ):
+        super().__init__(*args, **kwargs)
+        self.loss_pose = MODELS.build(loss_pose)
+        self.num_keypoints = self.head_module.num_keypoints
+
+        # set up buffers to save variables generated in methods of
+        # the class's base class.
+        self._log = defaultdict(list)
+        self.sampler = OutputSaveObjectWrapper(self.sampler)
+
+        # ensure that the `sigmas` in self.assigner.oks_calculator
+        # is on the same device as the model
+        if hasattr(self.assigner, 'oks_calculator'):
+            self.add_module('assigner_oks_calculator',
+                            self.assigner.oks_calculator)
+
+    def _clear(self):
+        """Clear variable buffers."""
+        self.sampler.clear()
+        self._log.clear()
+
+    def loss(self, x: Tuple[Tensor], batch_data_samples: Union[list,
+                                                               dict]) -> dict:
+
+        if isinstance(batch_data_samples, list):
+            losses = super().loss(x, batch_data_samples)
+        else:
+            outs = self(x)
+            # Fast version
+            loss_inputs = outs + (batch_data_samples['bboxes_labels'],
+                                  batch_data_samples['keypoints'],
+                                  batch_data_samples['keypoints_visible'],
+                                  batch_data_samples['img_metas'])
+            losses = self.loss_by_feat(*loss_inputs)
+
+        return losses
+
+    def loss_by_feat(
+            self,
+            cls_scores: Sequence[Tensor],
+            bbox_preds: Sequence[Tensor],
+            objectnesses: Sequence[Tensor],
+            kpt_preds: Sequence[Tensor],
+            vis_preds: Sequence[Tensor],
+            batch_gt_instances: Tensor,
+            batch_gt_keypoints: Tensor,
+            batch_gt_keypoints_visible: Tensor,
+            batch_img_metas: Sequence[dict],
+            batch_gt_instances_ignore: OptInstanceList = None) -> dict:
+        """Calculate the loss based on the features extracted by the detection
+        head.
+
+        In addition to the base class method, keypoint losses are also
+        calculated in this method.
+ """ + + self._clear() + batch_gt_instances = self.gt_kps_instances_preprocess( + batch_gt_instances, batch_gt_keypoints, batch_gt_keypoints_visible, + len(batch_img_metas)) + + # collect keypoints coordinates and visibility from model predictions + kpt_preds = torch.cat([ + kpt_pred.flatten(2).permute(0, 2, 1).contiguous() + for kpt_pred in kpt_preds + ], + dim=1) + + featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] + mlvl_priors = self.prior_generator.grid_priors( + featmap_sizes, + dtype=cls_scores[0].dtype, + device=cls_scores[0].device, + with_stride=True) + grid_priors = torch.cat(mlvl_priors) + + flatten_kpts = self.decode_pose(grid_priors[..., :2], kpt_preds, + grid_priors[..., 2]) + + vis_preds = torch.cat([ + vis_pred.flatten(2).permute(0, 2, 1).contiguous() + for vis_pred in vis_preds + ], + dim=1) + + # compute detection losses and collect targets for keypoints + # predictions simultaneously + self._log['pred_keypoints'] = list(flatten_kpts.detach().split( + 1, dim=0)) + self._log['pred_keypoints_vis'] = list(vis_preds.detach().split( + 1, dim=0)) + + losses = super().loss_by_feat(cls_scores, bbox_preds, objectnesses, + batch_gt_instances, batch_img_metas, + batch_gt_instances_ignore) + + kpt_targets, vis_targets = [], [] + sampling_results = self.sampler.log['sample'] + sampling_result_idx = 0 + for gt_instances in batch_gt_instances: + if len(gt_instances) > 0: + sampling_result = sampling_results[sampling_result_idx] + kpt_target = gt_instances['keypoints'][ + sampling_result.pos_assigned_gt_inds] + vis_target = gt_instances['keypoints_visible'][ + sampling_result.pos_assigned_gt_inds] + sampling_result_idx += 1 + kpt_targets.append(kpt_target) + vis_targets.append(vis_target) + + if len(kpt_targets) > 0: + kpt_targets = torch.cat(kpt_targets, 0) + vis_targets = torch.cat(vis_targets, 0) + + # compute keypoint losses + if len(kpt_targets) > 0: + vis_targets = (vis_targets > 0).float() + pos_masks = torch.cat(self._log['foreground_mask'], 0) + bbox_targets = torch.cat(self._log['bbox_target'], 0) + loss_kpt = self.loss_pose( + flatten_kpts.view(-1, self.num_keypoints, 2)[pos_masks], + kpt_targets, vis_targets, bbox_targets) + loss_vis = self.loss_cls( + vis_preds.view(-1, self.num_keypoints)[pos_masks], + vis_targets) / vis_targets.sum() + else: + loss_kpt = kpt_preds.sum() * 0 + loss_vis = vis_preds.sum() * 0 + + losses.update(dict(loss_kpt=loss_kpt, loss_vis=loss_vis)) + + self._clear() + return losses + + @torch.no_grad() + def _get_targets_single( + self, + priors: Tensor, + cls_preds: Tensor, + decoded_bboxes: Tensor, + objectness: Tensor, + gt_instances: InstanceData, + img_meta: dict, + gt_instances_ignore: Optional[InstanceData] = None) -> tuple: + """Calculates targets for a single image, and saves them to the log. + + This method is similar to the _get_targets_single method in the base + class, but additionally saves the foreground mask and bbox targets to + the log. 
+ """ + + # Construct a combined representation of bboxes and keypoints to + # ensure keypoints are also involved in the positive sample + # assignment process + kpt = self._log['pred_keypoints'].pop(0).squeeze(0) + kpt_vis = self._log['pred_keypoints_vis'].pop(0).squeeze(0) + kpt = torch.cat((kpt, kpt_vis.unsqueeze(-1)), dim=-1) + decoded_bboxes = torch.cat((decoded_bboxes, kpt.flatten(1)), dim=1) + + targets = super()._get_targets_single(priors, cls_preds, + decoded_bboxes, objectness, + gt_instances, img_meta, + gt_instances_ignore) + self._log['foreground_mask'].append(targets[0]) + self._log['bbox_target'].append(targets[3]) + return targets + + def predict_by_feat(self, + cls_scores: List[Tensor], + bbox_preds: List[Tensor], + objectnesses: Optional[List[Tensor]] = None, + kpt_preds: Optional[List[Tensor]] = None, + vis_preds: Optional[List[Tensor]] = None, + batch_img_metas: Optional[List[dict]] = None, + cfg: Optional[ConfigDict] = None, + rescale: bool = True, + with_nms: bool = True) -> List[InstanceData]: + """Transform a batch of output features extracted by the head into bbox + and keypoint results. + + In addition to the base class method, keypoint predictions are also + calculated in this method. + """ + """calculate predicted bboxes and get the kept instances indices. + + use OutputSaveFunctionWrapper as context manager to obtain + intermediate output from a parent class without copying a + arge block of code + """ + with OutputSaveFunctionWrapper( + filter_scores_and_topk, + super().predict_by_feat.__globals__) as outputs_1: + with OutputSaveFunctionWrapper( + batched_nms, + super()._bbox_post_process.__globals__) as outputs_2: + results_list = super().predict_by_feat(cls_scores, bbox_preds, + objectnesses, + batch_img_metas, cfg, + rescale, with_nms) + keep_indices_topk = [ + out[2][:cfg.max_per_img] for out in outputs_1 + ] + keep_indices_nms = [ + out[1][:cfg.max_per_img] for out in outputs_2 + ] + + num_imgs = len(batch_img_metas) + + # recover keypoints coordinates from model predictions + featmap_sizes = [vis_pred.shape[2:] for vis_pred in vis_preds] + priors = torch.cat(self.mlvl_priors) + strides = [ + priors.new_full((featmap_size.numel() * self.num_base_priors, ), + stride) for featmap_size, stride in zip( + featmap_sizes, self.featmap_strides) + ] + strides = torch.cat(strides) + kpt_preds = torch.cat([ + kpt_pred.permute(0, 2, 3, 1).reshape( + num_imgs, -1, self.num_keypoints * 2) for kpt_pred in kpt_preds + ], + dim=1) + flatten_decoded_kpts = self.decode_pose(priors, kpt_preds, strides) + + vis_preds = torch.cat([ + vis_pred.permute(0, 2, 3, 1).reshape( + num_imgs, -1, self.num_keypoints) for vis_pred in vis_preds + ], + dim=1).sigmoid() + + # select keypoints predictions according to bbox scores and nms result + keep_indices_nms_idx = 0 + for pred_instances, kpts, kpts_vis, img_meta, keep_idxs \ + in zip( + results_list, flatten_decoded_kpts, vis_preds, + batch_img_metas, keep_indices_topk): + + pred_instances.bbox_scores = pred_instances.scores + + if len(pred_instances) == 0: + pred_instances.keypoints = kpts[:0] + pred_instances.keypoint_scores = kpts_vis[:0] + continue + + kpts = kpts[keep_idxs] + kpts_vis = kpts_vis[keep_idxs] + + if rescale: + pad_param = img_meta.get('img_meta', None) + scale_factor = img_meta['scale_factor'] + if pad_param is not None: + kpts -= kpts.new_tensor([pad_param[2], pad_param[0]]) + kpts /= kpts.new_tensor(scale_factor).repeat( + (1, self.num_keypoints, 1)) + + keep_idxs_nms = keep_indices_nms[keep_indices_nms_idx] + kpts = 
kpts[keep_idxs_nms] + kpts_vis = kpts_vis[keep_idxs_nms] + keep_indices_nms_idx += 1 + + pred_instances.keypoints = kpts + pred_instances.keypoint_scores = kpts_vis + + results_list = [r.numpy() for r in results_list] + return results_list + + def decode_pose(self, grids: torch.Tensor, offsets: torch.Tensor, + strides: Union[torch.Tensor, int]) -> torch.Tensor: + """Decode regression offsets to keypoints. + + Args: + grids (torch.Tensor): The coordinates of the feature map grids. + offsets (torch.Tensor): The predicted offset of each keypoint + relative to its corresponding grid. + strides (torch.Tensor | int): The stride of the feature map for + each instance. + Returns: + torch.Tensor: The decoded keypoints coordinates. + """ + + if isinstance(strides, int): + strides = torch.tensor([strides]).to(offsets) + + strides = strides.reshape(1, -1, 1, 1) + offsets = offsets.reshape(*offsets.shape[:2], -1, 2) + xy_coordinates = (offsets[..., :2] * strides) + grids.unsqueeze(1) + return xy_coordinates + + @staticmethod + def gt_kps_instances_preprocess(batch_gt_instances: Tensor, + batch_gt_keypoints, + batch_gt_keypoints_visible, + batch_size: int) -> List[InstanceData]: + """Split batch_gt_instances with batch size. + + Args: + batch_gt_instances (Tensor): Ground truth + a 2D-Tensor for whole batch, shape [all_gt_bboxes, 6] + batch_size (int): Batch size. + + Returns: + List: batch gt instances data, shape [batch_size, InstanceData] + """ + # faster version + batch_instance_list = [] + for i in range(batch_size): + batch_gt_instance_ = InstanceData() + single_batch_instance = \ + batch_gt_instances[batch_gt_instances[:, 0] == i, :] + keypoints = \ + batch_gt_keypoints[batch_gt_instances[:, 0] == i, :] + keypoints_visible = \ + batch_gt_keypoints_visible[batch_gt_instances[:, 0] == i, :] + batch_gt_instance_.bboxes = single_batch_instance[:, 2:] + batch_gt_instance_.labels = single_batch_instance[:, 1] + batch_gt_instance_.keypoints = keypoints + batch_gt_instance_.keypoints_visible = keypoints_visible + batch_instance_list.append(batch_gt_instance_) + + return batch_instance_list + + @staticmethod + def gt_instances_preprocess(batch_gt_instances: List[InstanceData], *args, + **kwargs) -> List[InstanceData]: + return batch_gt_instances diff --git a/mmyolo/models/losses/__init__.py b/mmyolo/models/losses/__init__.py index ee192921b..c89fe4dc4 100644 --- a/mmyolo/models/losses/__init__.py +++ b/mmyolo/models/losses/__init__.py @@ -1,4 +1,5 @@ # Copyright (c) OpenMMLab. All rights reserved. from .iou_loss import IoULoss, bbox_overlaps +from .oks_loss import OksLoss -__all__ = ['IoULoss', 'bbox_overlaps'] +__all__ = ['IoULoss', 'bbox_overlaps', 'OksLoss'] diff --git a/mmyolo/models/losses/oks_loss.py b/mmyolo/models/losses/oks_loss.py new file mode 100644 index 000000000..8440f06ed --- /dev/null +++ b/mmyolo/models/losses/oks_loss.py @@ -0,0 +1,88 @@ +# Copyright (c) OpenMMLab. All rights reserved. 
+from typing import Optional + +import torch +import torch.nn as nn +from torch import Tensor + +from mmyolo.registry import MODELS + +try: + from mmpose.datasets.datasets.utils import parse_pose_metainfo +except ImportError: + raise ImportError('Please run "mim install -r requirements/mmpose.txt" ' + 'to install mmpose first for pose estimation.') + + [email protected]_module() +class OksLoss(nn.Module): + """A PyTorch implementation of the Object Keypoint Similarity (OKS) loss as + described in the paper "YOLO-Pose: Enhancing YOLO for Multi Person Pose + Estimation Using Object Keypoint Similarity Loss" by Debapriya et al. (2022). + The OKS loss is used for keypoint-based object recognition and consists + of a measure of the similarity between predicted and ground truth + keypoint locations, adjusted by the size of the object in the image. + The loss function takes as input the predicted keypoint locations, the + ground truth keypoint locations, a mask indicating which keypoints are + valid, and bounding boxes for the objects. + Args: + metainfo (Optional[str]): Path to a JSON file containing information + about the dataset's annotations. + loss_weight (float): Weight for the loss. + """ + + def __init__(self, + metainfo: Optional[str] = None, + loss_weight: float = 1.0): + super().__init__() + + if metainfo is not None: + metainfo = parse_pose_metainfo(dict(from_file=metainfo)) + sigmas = metainfo.get('sigmas', None) + if sigmas is not None: + self.register_buffer('sigmas', torch.as_tensor(sigmas)) + self.loss_weight = loss_weight + + def forward(self, + output: Tensor, + target: Tensor, + target_weights: Tensor, + bboxes: Optional[Tensor] = None) -> Tensor: + oks = self.compute_oks(output, target, target_weights, bboxes) + loss = 1 - oks + return loss * self.loss_weight + + def compute_oks(self, + output: Tensor, + target: Tensor, + target_weights: Tensor, + bboxes: Optional[Tensor] = None) -> Tensor: + """Calculates the Object Keypoint Similarity (OKS). + + Args: + output (Tensor): Predicted keypoints in shape N x k x 2, where N + is batch size, k is the number of keypoints, and 2 are the + xy coordinates. + target (Tensor): Ground truth keypoints in the same shape as + output. + target_weights (Tensor): Mask of valid keypoints in shape N x k, + with 1 for valid and 0 for invalid. + bboxes (Optional[Tensor]): Bounding boxes in shape N x 4, + where 4 are the xyxy coordinates. + Returns: + Tensor: The calculated OKS.
+ """ + + dist = torch.norm(output - target, dim=-1) + + if hasattr(self, 'sigmas'): + sigmas = self.sigmas.reshape(*((1, ) * (dist.ndim - 1)), -1) + dist = dist / sigmas + if bboxes is not None: + area = torch.norm(bboxes[..., 2:] - bboxes[..., :2], dim=-1) + dist = dist / area.clip(min=1e-8).unsqueeze(-1) + + return (torch.exp(-dist.pow(2) / 2) * target_weights).sum( + dim=-1) / target_weights.sum(dim=-1).clip(min=1e-8) diff --git a/mmyolo/models/task_modules/assigners/__init__.py b/mmyolo/models/task_modules/assigners/__init__.py index e74ab728b..7b2e2e69c 100644 --- a/mmyolo/models/task_modules/assigners/__init__.py +++ b/mmyolo/models/task_modules/assigners/__init__.py @@ -2,11 +2,13 @@ from .batch_atss_assigner import BatchATSSAssigner from .batch_dsl_assigner import BatchDynamicSoftLabelAssigner from .batch_task_aligned_assigner import BatchTaskAlignedAssigner +from .pose_sim_ota_assigner import PoseSimOTAAssigner from .utils import (select_candidates_in_gts, select_highest_overlaps, yolov6_iou_calculator) __all__ = [ 'BatchATSSAssigner', 'BatchTaskAlignedAssigner', 'select_candidates_in_gts', 'select_highest_overlaps', - 'yolov6_iou_calculator', 'BatchDynamicSoftLabelAssigner' + 'yolov6_iou_calculator', 'BatchDynamicSoftLabelAssigner', + 'PoseSimOTAAssigner' ] diff --git a/mmyolo/models/task_modules/assigners/pose_sim_ota_assigner.py b/mmyolo/models/task_modules/assigners/pose_sim_ota_assigner.py new file mode 100644 index 000000000..e66a9bf15 --- /dev/null +++ b/mmyolo/models/task_modules/assigners/pose_sim_ota_assigner.py @@ -0,0 +1,210 @@ +# Copyright (c) OpenMMLab. All rights reserved. +from typing import Optional, Tuple + +import torch +import torch.nn.functional as F +from mmdet.models.task_modules.assigners import AssignResult, SimOTAAssigner +from mmdet.utils import ConfigType +from mmengine.structures import InstanceData +from torch import Tensor + +from mmyolo.registry import MODELS, TASK_UTILS + +INF = 100000.0 +EPS = 1.0e-7 + + +@TASK_UTILS.register_module() +class PoseSimOTAAssigner(SimOTAAssigner): + + def __init__(self, + center_radius: float = 2.5, + candidate_topk: int = 10, + iou_weight: float = 3.0, + cls_weight: float = 1.0, + oks_weight: float = 0.0, + vis_weight: float = 0.0, + iou_calculator: ConfigType = dict(type='BboxOverlaps2D'), + oks_calculator: ConfigType = dict(type='OksLoss')): + + self.center_radius = center_radius + self.candidate_topk = candidate_topk + self.iou_weight = iou_weight + self.cls_weight = cls_weight + self.oks_weight = oks_weight + self.vis_weight = vis_weight + + self.iou_calculator = TASK_UTILS.build(iou_calculator) + self.oks_calculator = MODELS.build(oks_calculator) + + def assign(self, + pred_instances: InstanceData, + gt_instances: InstanceData, + gt_instances_ignore: Optional[InstanceData] = None, + **kwargs) -> AssignResult: + """Assign gt to priors using SimOTA. + + Args: + pred_instances (:obj:`InstanceData`): Instances of model + predictions. It includes ``priors``, and the priors can + be anchors or points, or the bboxes predicted by the + previous stage, has shape (n, 4). The bboxes predicted by + the current model or stage will be named ``bboxes``, + ``labels``, and ``scores``, the same as the ``InstanceData`` + in other places. + gt_instances (:obj:`InstanceData`): Ground truth of instance + annotations. It usually includes ``bboxes``, with shape (k, 4), + and ``labels``, with shape (k, ). + gt_instances_ignore (:obj:`InstanceData`, optional): Instances + to be ignored during training. 
It includes ``bboxes`` + attribute data that is ignored during training and testing. + Defaults to None. + Returns: + obj:`AssignResult`: The assigned result. + """ + gt_bboxes = gt_instances.bboxes + gt_labels = gt_instances.labels + gt_keypoints = gt_instances.keypoints + gt_keypoints_visible = gt_instances.keypoints_visible + num_gt = gt_bboxes.size(0) + + decoded_bboxes = pred_instances.bboxes[..., :4] + pred_kpts = pred_instances.bboxes[..., 4:] + pred_kpts = pred_kpts.reshape(*pred_kpts.shape[:-1], -1, 3) + pred_kpts_vis = pred_kpts[..., -1] + pred_kpts = pred_kpts[..., :2] + pred_scores = pred_instances.scores + priors = pred_instances.priors + num_bboxes = decoded_bboxes.size(0) + + # assign 0 by default + assigned_gt_inds = decoded_bboxes.new_full((num_bboxes, ), + 0, + dtype=torch.long) + if num_gt == 0 or num_bboxes == 0: + # No ground truth or boxes, return empty assignment + max_overlaps = decoded_bboxes.new_zeros((num_bboxes, )) + assigned_labels = decoded_bboxes.new_full((num_bboxes, ), + -1, + dtype=torch.long) + return AssignResult( + num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) + + valid_mask, is_in_boxes_and_center = self.get_in_gt_and_in_center_info( + priors, gt_bboxes) + valid_decoded_bbox = decoded_bboxes[valid_mask] + valid_pred_scores = pred_scores[valid_mask] + valid_pred_kpts = pred_kpts[valid_mask] + valid_pred_kpts_vis = pred_kpts_vis[valid_mask] + num_valid = valid_decoded_bbox.size(0) + if num_valid == 0: + # No valid bboxes, return empty assignment + max_overlaps = decoded_bboxes.new_zeros((num_bboxes, )) + assigned_labels = decoded_bboxes.new_full((num_bboxes, ), + -1, + dtype=torch.long) + return AssignResult( + num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) + + cost_matrix = (~is_in_boxes_and_center) * INF + + # calculate iou + pairwise_ious = self.iou_calculator(valid_decoded_bbox, gt_bboxes) + if self.iou_weight > 0: + iou_cost = -torch.log(pairwise_ious + EPS) + cost_matrix = cost_matrix + iou_cost * self.iou_weight + + # calculate oks + pairwise_oks = self.oks_calculator.compute_oks( + valid_pred_kpts.unsqueeze(1), # [num_valid, -1, k, 2] + gt_keypoints.unsqueeze(0), # [1, num_gt, k, 2] + gt_keypoints_visible.unsqueeze(0), # [1, num_gt, k] + bboxes=gt_bboxes.unsqueeze(0), # [1, num_gt, 4] + ) # -> [num_valid, num_gt] + if self.oks_weight > 0: + oks_cost = -torch.log(pairwise_oks + EPS) + cost_matrix = cost_matrix + oks_cost * self.oks_weight + + # calculate cls + if self.cls_weight > 0: + gt_onehot_label = ( + F.one_hot(gt_labels.to(torch.int64), + pred_scores.shape[-1]).float().unsqueeze(0).repeat( + num_valid, 1, 1)) + + valid_pred_scores = valid_pred_scores.unsqueeze(1).repeat( + 1, num_gt, 1) + # disable AMP autocast to avoid overflow + with torch.cuda.amp.autocast(enabled=False): + cls_cost = ( + F.binary_cross_entropy( + valid_pred_scores.to(dtype=torch.float32), + gt_onehot_label, + reduction='none', + ).sum(-1).to(dtype=valid_pred_scores.dtype)) + cost_matrix = cost_matrix + cls_cost * self.cls_weight + + # calculate vis + if self.vis_weight > 0: + valid_pred_kpts_vis = valid_pred_kpts_vis.sigmoid().unsqueeze( + 1).repeat(1, num_gt, 1) # [num_valid, 1, k] + gt_kpt_vis = gt_keypoints_visible.unsqueeze( + 0).float() # [1, num_gt, k] + with torch.cuda.amp.autocast(enabled=False): + vis_cost = ( + F.binary_cross_entropy( + valid_pred_kpts_vis.to(dtype=torch.float32), + gt_kpt_vis.repeat(num_valid, 1, 1), + reduction='none', + ).sum(-1).to(dtype=valid_pred_kpts_vis.dtype)) + cost_matrix = cost_matrix + vis_cost * 
self.vis_weight + + # mixed metric + pairwise_oks = pairwise_oks.pow(0.5) + matched_pred_oks, matched_gt_inds = \ + self.dynamic_k_matching( + cost_matrix, pairwise_ious, pairwise_oks, num_gt, valid_mask) + + # convert to AssignResult format + assigned_gt_inds[valid_mask] = matched_gt_inds + 1 + assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) + assigned_labels[valid_mask] = gt_labels[matched_gt_inds].long() + max_overlaps = assigned_gt_inds.new_full((num_bboxes, ), + -INF, + dtype=torch.float32) + max_overlaps[valid_mask] = matched_pred_oks + return AssignResult( + num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) + + def dynamic_k_matching(self, cost: Tensor, pairwise_ious: Tensor, + pairwise_oks: Tensor, num_gt: int, + valid_mask: Tensor) -> Tuple[Tensor, Tensor]: + """Use IoU and matching cost to calculate the dynamic top-k positive + targets.""" + matching_matrix = torch.zeros_like(cost, dtype=torch.uint8) + # select candidate topk ious for dynamic-k calculation + candidate_topk = min(self.candidate_topk, pairwise_ious.size(0)) + topk_ious, _ = torch.topk(pairwise_ious, candidate_topk, dim=0) + # calculate dynamic k for each gt + dynamic_ks = torch.clamp(topk_ious.sum(0).int(), min=1) + for gt_idx in range(num_gt): + _, pos_idx = torch.topk( + cost[:, gt_idx], k=dynamic_ks[gt_idx], largest=False) + matching_matrix[:, gt_idx][pos_idx] = 1 + + del topk_ious, dynamic_ks, pos_idx + + prior_match_gt_mask = matching_matrix.sum(1) > 1 + if prior_match_gt_mask.sum() > 0: + cost_min, cost_argmin = torch.min( + cost[prior_match_gt_mask, :], dim=1) + matching_matrix[prior_match_gt_mask, :] *= 0 + matching_matrix[prior_match_gt_mask, cost_argmin] = 1 + # get foreground mask inside box and center prior + fg_mask_inboxes = matching_matrix.sum(1) > 0 + valid_mask[valid_mask.clone()] = fg_mask_inboxes + + matched_gt_inds = matching_matrix[fg_mask_inboxes, :].argmax(1) + matched_pred_oks = (matching_matrix * + pairwise_oks).sum(1)[fg_mask_inboxes] + return matched_pred_oks, matched_gt_inds diff --git a/mmyolo/models/utils/__init__.py b/mmyolo/models/utils/__init__.py index cdfeaaf0f..d62ff80e2 100644 --- a/mmyolo/models/utils/__init__.py +++ b/mmyolo/models/utils/__init__.py @@ -1,4 +1,8 @@ # Copyright (c) OpenMMLab. All rights reserved. -from .misc import gt_instances_preprocess, make_divisible, make_round +from .misc import (OutputSaveFunctionWrapper, OutputSaveObjectWrapper, + gt_instances_preprocess, make_divisible, make_round) -__all__ = ['make_divisible', 'make_round', 'gt_instances_preprocess'] +__all__ = [ + 'make_divisible', 'make_round', 'gt_instances_preprocess', + 'OutputSaveFunctionWrapper', 'OutputSaveObjectWrapper' +] diff --git a/mmyolo/models/utils/misc.py b/mmyolo/models/utils/misc.py index 531558b69..96cd1195a 100644 --- a/mmyolo/models/utils/misc.py +++ b/mmyolo/models/utils/misc.py @@ -1,6 +1,8 @@ # Copyright (c) OpenMMLab. All rights reserved. 
import math -from typing import Sequence, Union +from collections import defaultdict +from copy import deepcopy +from typing import Any, Callable, Dict, Optional, Sequence, Tuple, Union import torch from mmdet.structures.bbox.transforms import get_box_tensor @@ -95,3 +97,90 @@ def gt_instances_preprocess(batch_gt_instances: Union[Tensor, Sequence], device=batch_gt_instances.device) return batch_instance + + +class OutputSaveObjectWrapper: + """A wrapper class that saves the output of function calls on an object.""" + + def __init__(self, obj: Any) -> None: + self.obj = obj + self.log = defaultdict(list) + + def __getattr__(self, attr: str) -> Any: + """Overrides the default behavior when an attribute is accessed. + + - If the attribute is callable, hooks the attribute and saves the + returned value of the function call to the log. + - If the attribute is not callable, saves the attribute's value to the + log and returns the value. + """ + orig_attr = getattr(self.obj, attr) + + if not callable(orig_attr): + self.log[attr].append(orig_attr) + return orig_attr + + def hooked(*args: Tuple, **kwargs: Dict) -> Any: + """The hooked function that logs the return value of the original + function.""" + result = orig_attr(*args, **kwargs) + self.log[attr].append(result) + return result + + return hooked + + def clear(self): + """Clears the log of function call outputs.""" + self.log.clear() + + def __deepcopy__(self, memo): + """Only copy the object when applying deepcopy.""" + other = type(self)(deepcopy(self.obj)) + memo[id(self)] = other + return other + + +class OutputSaveFunctionWrapper: + """A class that wraps a function and saves its outputs. + + This class can be used to decorate a function to save its outputs. It wraps + the function with a `__call__` method that calls the original function and + saves the results in a log attribute. + Args: + func (Callable): A function to wrap. + spec (Optional[Dict]): A dictionary of global variables to use as the + namespace for the wrapper. If `None`, the global namespace of the + original function is used. 
+ """ + + def __init__(self, func: Callable, spec: Optional[Dict]) -> None: + """Initializes the OutputSaveFunctionWrapper instance.""" + assert callable(func) + self.log = [] + self.func = func + self.func_name = func.__name__ + + if isinstance(spec, dict): + self.spec = spec + elif hasattr(func, '__globals__'): + self.spec = func.__globals__ + else: + raise ValueError + + def __call__(self, *args, **kwargs) -> Any: + """Calls the wrapped function with the given arguments and saves the + results in the `log` attribute.""" + results = self.func(*args, **kwargs) + self.log.append(results) + return results + + def __enter__(self) -> None: + """Enters the context and sets the wrapped function to be a global + variable in the specified namespace.""" + self.spec[self.func_name] = self + return self.log + + def __exit__(self, exc_type, exc_val, exc_tb) -> None: + """Exits the context and resets the wrapped function to its original + value in the specified namespace.""" + self.spec[self.func_name] = self.func diff --git a/requirements/mmpose.txt b/requirements/mmpose.txt new file mode 100644 index 000000000..8e4726e68 --- /dev/null +++ b/requirements/mmpose.txt @@ -0,0 +1,1 @@ +mmpose>=1.0.0 diff --git a/tools/analysis_tools/browse_dataset.py b/tools/analysis_tools/browse_dataset.py index 42bcade3f..21a1d709d 100644 --- a/tools/analysis_tools/browse_dataset.py +++ b/tools/analysis_tools/browse_dataset.py @@ -19,6 +19,7 @@ # TODO: Support for printing the change in key of results +# TODO: Some bug. If you meet some bug, please use the original def parse_args(): parser = argparse.ArgumentParser(description='Browse a dataset') parser.add_argument('config', help='train config file path') diff --git a/tools/analysis_tools/browse_dataset_simple.py b/tools/analysis_tools/browse_dataset_simple.py new file mode 100644 index 000000000..ebacbde3a --- /dev/null +++ b/tools/analysis_tools/browse_dataset_simple.py @@ -0,0 +1,89 @@ +# Copyright (c) OpenMMLab. All rights reserved. +import argparse +import os.path as osp + +from mmdet.models.utils import mask2ndarray +from mmdet.structures.bbox import BaseBoxes +from mmengine.config import Config, DictAction +from mmengine.registry import init_default_scope +from mmengine.utils import ProgressBar + +from mmyolo.registry import DATASETS, VISUALIZERS + + +def parse_args(): + parser = argparse.ArgumentParser(description='Browse a dataset') + parser.add_argument('config', help='train config file path') + parser.add_argument( + '--output-dir', + default=None, + type=str, + help='If there is no display interface, you can save it') + parser.add_argument('--not-show', default=False, action='store_true') + parser.add_argument( + '--show-interval', + type=float, + default=0, + help='the interval of show (s)') + parser.add_argument( + '--cfg-options', + nargs='+', + action=DictAction, + help='override some settings in the used config, the key-value pair ' + 'in xxx=yyy format will be merged into config file. If the value to ' + 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' + 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' + 'Note that the quotation marks are necessary and that no white space ' + 'is allowed.') + args = parser.parse_args() + return args + + +def main(): + args = parse_args() + cfg = Config.fromfile(args.config) + if args.cfg_options is not None: + cfg.merge_from_dict(args.cfg_options) + + # register all modules in mmdet into the registries + init_default_scope(cfg.get('default_scope', 'mmyolo')) + + dataset = DATASETS.build(cfg.train_dataloader.dataset) + visualizer = VISUALIZERS.build(cfg.visualizer) + visualizer.dataset_meta = dataset.metainfo + + progress_bar = ProgressBar(len(dataset)) + for item in dataset: + img = item['inputs'].permute(1, 2, 0).numpy() + data_sample = item['data_samples'].numpy() + gt_instances = data_sample.gt_instances + img_path = osp.basename(item['data_samples'].img_path) + + out_file = osp.join( + args.output_dir, + osp.basename(img_path)) if args.output_dir is not None else None + + img = img[..., [2, 1, 0]] # bgr to rgb + gt_bboxes = gt_instances.get('bboxes', None) + if gt_bboxes is not None and isinstance(gt_bboxes, BaseBoxes): + gt_instances.bboxes = gt_bboxes.tensor + gt_masks = gt_instances.get('masks', None) + if gt_masks is not None: + masks = mask2ndarray(gt_masks) + gt_instances.masks = masks.astype(bool) + data_sample.gt_instances = gt_instances + + visualizer.add_datasample( + osp.basename(img_path), + img, + data_sample, + draw_pred=False, + show=not args.not_show, + wait_time=args.show_interval, + out_file=out_file) + + progress_bar.update() + + +if __name__ == '__main__': + main()
[Feature] Support YOLOv8 Ins Segmentation Inference ## Modification Support YOLOv8 Ins Segmentation Inference Can reproduce bbox and mask mAP on yolov8-s model. ![image](https://user-images.githubusercontent.com/40284075/223897540-f0bddcb6-ebf5-44e7-8cb6-876fe3b8df9b.png) Official mAP results: ![image](https://user-images.githubusercontent.com/40284075/223652928-fb1fc63f-0028-484f-b7fb-aebcdd13ba69.png)
2023-03-25T03:06:45
0.0
[]
[]
aogier/starlette-authlib
aogier__starlette-authlib-177
5d83c8641bd8fba50b25d9821dafddfd689e17d8
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 0429f72..8590a47 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -9,7 +9,7 @@ repos: - id: check-yaml - id: check-added-large-files - repo: https://github.com/PyCQA/isort - rev: 5.12.0 + rev: 5.11.5 hooks: - id: isort args: @@ -29,7 +29,7 @@ repos: - --follow-imports - skip - repo: https://github.com/PyCQA/autoflake - rev: v2.2.0 + rev: v2.1.1 hooks: - id: autoflake args: diff --git a/starlette_authlib/middleware.py b/starlette_authlib/middleware.py index 9aa14b0..a6ab75a 100644 --- a/starlette_authlib/middleware.py +++ b/starlette_authlib/middleware.py @@ -9,7 +9,12 @@ from collections import namedtuple from authlib.jose import jwt -from authlib.jose.errors import BadSignatureError, DecodeError, ExpiredTokenError +from authlib.jose.errors import ( + BadSignatureError, + DecodeError, + ExpiredTokenError, + InvalidTokenError, +) from starlette.config import Config from starlette.datastructures import MutableHeaders, Secret from starlette.requests import HTTPConnection @@ -86,9 +91,15 @@ async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ), ) jwt_payload.validate_exp(time.time(), 0) + jwt_payload.validate_nbf(time.time(), 0) scope["session"] = jwt_payload initial_session_was_empty = False - except (BadSignatureError, ExpiredTokenError, DecodeError): + except ( + BadSignatureError, + ExpiredTokenError, + DecodeError, + InvalidTokenError, + ): scope["session"] = {} else: scope["session"] = {}
Middleware Does Not Honor nbf Claim Issue: If an [nbf](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5) registered claim is provided to this middleware, it is not honored. Expected: If an [nbf](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5) claim is provided and it is in the future, then the session dict should be empty since the session is not technically valid.
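For illustration, a minimal sketch of the expected behaviour (the secret and payload are hypothetical); it mirrors the `validate_nbf` call and the `InvalidTokenError` handling that the fix adds to the middleware:

```python
import time

from authlib.jose import jwt
from authlib.jose.errors import InvalidTokenError

secret = "example-secret"  # hypothetical signing key

# Token whose nbf claim lies one hour in the future: not yet valid.
token = jwt.encode(
    {"alg": "HS256"},
    {"user": "alice", "nbf": int(time.time()) + 3600},
    secret,
)

claims = jwt.decode(token, secret)
try:
    claims.validate_nbf(time.time(), 0)
except InvalidTokenError:
    # This is what the patched middleware catches; the session
    # is then treated as empty instead of being trusted.
    print("nbf is in the future: session rejected")
```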
2023-08-26T08:24:26
0.0
[]
[]
aogier/starlette-authlib
aogier__starlette-authlib-176
cb833ed91f1564a8049bdc4453458f2407fff139
diff --git a/starlette_authlib/middleware.py b/starlette_authlib/middleware.py index 9aa14b0..6106a9a 100644 --- a/starlette_authlib/middleware.py +++ b/starlette_authlib/middleware.py @@ -9,7 +9,7 @@ from collections import namedtuple from authlib.jose import jwt -from authlib.jose.errors import BadSignatureError, DecodeError, ExpiredTokenError +from authlib.jose.errors import BadSignatureError, DecodeError, ExpiredTokenError, InvalidTokenError from starlette.config import Config from starlette.datastructures import MutableHeaders, Secret from starlette.requests import HTTPConnection @@ -86,9 +86,10 @@ async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ), ) jwt_payload.validate_exp(time.time(), 0) + jwt_payload.validate_nbf(time.time(), 0) scope["session"] = jwt_payload initial_session_was_empty = False - except (BadSignatureError, ExpiredTokenError, DecodeError): + except (BadSignatureError, ExpiredTokenError, DecodeError, InvalidTokenError): scope["session"] = {} else: scope["session"] = {}
Middleware Does Not Honor nbf Claim Issue: If an [nbf](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5) registered claim is provided to this middleware, it is not honored. Expected: If an [nbf](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5) claim is provided and it is in the future, then the session dict should be empty since the session is not technically valid.
2023-08-26T04:05:04
0.0
[]
[]
camlab-bioml/starling
camlab-bioml__starling-54
eb2a0521bbe58642e7973778900680100b79586f
diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 8b81995..3227acd 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -7,9 +7,11 @@ on: branches: [ "main" ] jobs: - build: + test: runs-on: ubuntu-latest - + strategy: + matrix: + python-version: [ '3.9', '3.10', '3.11', '3.12' ] steps: - uses: actions/checkout@v3 - name: Install Poetry @@ -19,7 +21,7 @@ jobs: uses: snok/install-poetry@v1 - uses: actions/setup-python@v4 with: - python-version: '3.9' + python-version: ${{ matrix.python-version }} cache: 'poetry' - name: Install project if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true' @@ -31,6 +33,23 @@ jobs: - name: Pytest run: | poetry run pytest + + docs: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - name: Install Poetry + with: + virtualenvs-create: true + virtualenvs-in-project: true + uses: snok/install-poetry@v1 + - uses: actions/setup-python@v4 + with: + python-version: '3.12' + cache: 'poetry' + - name: Install project + if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true' + run: poetry install --with docs,dev --no-interaction - name: Build documentation run: | mkdir gh-pages diff --git a/Dockerfile b/Dockerfile index 52d7df5..c4348b3 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -FROM python:3.10-slim +FROM python:3.12-slim WORKDIR /code @@ -25,7 +25,7 @@ ENV POETRY_HOME=/opt/poetry # install poetry into its own venv RUN python3 -m venv $POETRY_HOME && \ - $POETRY_HOME/bin/pip install poetry==1.7.1 + $POETRY_HOME/bin/pip install poetry==1.8.0 ENV VIRTUAL_ENV=/poetry-env \ PATH="/poetry-env/bin:$POETRY_HOME/bin:$PATH" @@ -39,6 +39,6 @@ USER $USERNAME COPY --chown=${USER_UID}:${USER_GID} pyproject.toml poetry.lock README.md /code/ COPY --chown=${USER_UID}:${USER_GID} starling/__init__.py /code/starling/__init__.py -RUN poetry install --with docs,dev +RUN poetry install --with docs,dev && poetry self add poetry-plugin-export COPY --chown=${USER_UID}:${USER_GID} . . diff --git a/README.md b/README.md index 11c0864..a70ba5c 100644 --- a/README.md +++ b/README.md @@ -3,6 +3,8 @@ ![build](https://github.com/camlab-bioml/starling/actions/workflows/main.yml/badge.svg) ![](https://img.shields.io/badge/Python-3.9-blue) ![](https://img.shields.io/badge/Python-3.10-blue) +![](https://img.shields.io/badge/Python-3.11-blue) +![](https://img.shields.io/badge/Python-3.12-blue) STARLING is a probabilistic model for clustering cells measured with spatial expression assays (e.g. IMC, MIBI, etc...) while accounting for segmentation errors. diff --git a/poetry.lock b/poetry.lock index 34a8004..4b4d259 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,4 +1,4 @@ -# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand. +# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand. 
[[package]] name = "absl-py" @@ -11,92 +11,104 @@ files = [ {file = "absl_py-2.1.0-py3-none-any.whl", hash = "sha256:526a04eadab8b4ee719ce68f204172ead1027549089702d99b9059f129ff1308"}, ] +[[package]] +name = "aiohappyeyeballs" +version = "2.3.5" +description = "Happy Eyeballs for asyncio" +optional = false +python-versions = ">=3.8" +files = [ + {file = "aiohappyeyeballs-2.3.5-py3-none-any.whl", hash = "sha256:4d6dea59215537dbc746e93e779caea8178c866856a721c9c660d7a5a7b8be03"}, + {file = "aiohappyeyeballs-2.3.5.tar.gz", hash = "sha256:6fa48b9f1317254f122a07a131a86b71ca6946ca989ce6326fff54a99a920105"}, +] + [[package]] name = "aiohttp" -version = "3.9.5" +version = "3.10.3" description = "Async http client/server framework (asyncio)" optional = false python-versions = ">=3.8" files = [ - {file = "aiohttp-3.9.5-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:fcde4c397f673fdec23e6b05ebf8d4751314fa7c24f93334bf1f1364c1c69ac7"}, - {file = "aiohttp-3.9.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d6b3f1fabe465e819aed2c421a6743d8debbde79b6a8600739300630a01bf2c"}, - {file = "aiohttp-3.9.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6ae79c1bc12c34082d92bf9422764f799aee4746fd7a392db46b7fd357d4a17a"}, - {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d3ebb9e1316ec74277d19c5f482f98cc65a73ccd5430540d6d11682cd857430"}, - {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84dabd95154f43a2ea80deffec9cb44d2e301e38a0c9d331cc4aa0166fe28ae3"}, - {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c8a02fbeca6f63cb1f0475c799679057fc9268b77075ab7cf3f1c600e81dd46b"}, - {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c26959ca7b75ff768e2776d8055bf9582a6267e24556bb7f7bd29e677932be72"}, - {file = "aiohttp-3.9.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:714d4e5231fed4ba2762ed489b4aec07b2b9953cf4ee31e9871caac895a839c0"}, - {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e7a6a8354f1b62e15d48e04350f13e726fa08b62c3d7b8401c0a1314f02e3558"}, - {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:c413016880e03e69d166efb5a1a95d40f83d5a3a648d16486592c49ffb76d0db"}, - {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ff84aeb864e0fac81f676be9f4685f0527b660f1efdc40dcede3c251ef1e867f"}, - {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:ad7f2919d7dac062f24d6f5fe95d401597fbb015a25771f85e692d043c9d7832"}, - {file = "aiohttp-3.9.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:702e2c7c187c1a498a4e2b03155d52658fdd6fda882d3d7fbb891a5cf108bb10"}, - {file = "aiohttp-3.9.5-cp310-cp310-win32.whl", hash = "sha256:67c3119f5ddc7261d47163ed86d760ddf0e625cd6246b4ed852e82159617b5fb"}, - {file = "aiohttp-3.9.5-cp310-cp310-win_amd64.whl", hash = "sha256:471f0ef53ccedec9995287f02caf0c068732f026455f07db3f01a46e49d76bbb"}, - {file = "aiohttp-3.9.5-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ae53e33ee7476dd3d1132f932eeb39bf6125083820049d06edcdca4381f342"}, - {file = "aiohttp-3.9.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c088c4d70d21f8ca5c0b8b5403fe84a7bc8e024161febdd4ef04575ef35d474d"}, - {file = "aiohttp-3.9.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:639d0042b7670222f33b0028de6b4e2fad6451462ce7df2af8aee37dcac55424"}, - 
{file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f26383adb94da5e7fb388d441bf09c61e5e35f455a3217bfd790c6b6bc64b2ee"}, - {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:66331d00fb28dc90aa606d9a54304af76b335ae204d1836f65797d6fe27f1ca2"}, - {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4ff550491f5492ab5ed3533e76b8567f4b37bd2995e780a1f46bca2024223233"}, - {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f22eb3a6c1080d862befa0a89c380b4dafce29dc6cd56083f630073d102eb595"}, - {file = "aiohttp-3.9.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a81b1143d42b66ffc40a441379387076243ef7b51019204fd3ec36b9f69e77d6"}, - {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f64fd07515dad67f24b6ea4a66ae2876c01031de91c93075b8093f07c0a2d93d"}, - {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:93e22add827447d2e26d67c9ac0161756007f152fdc5210277d00a85f6c92323"}, - {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:55b39c8684a46e56ef8c8d24faf02de4a2b2ac60d26cee93bc595651ff545de9"}, - {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4715a9b778f4293b9f8ae7a0a7cef9829f02ff8d6277a39d7f40565c737d3771"}, - {file = "aiohttp-3.9.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:afc52b8d969eff14e069a710057d15ab9ac17cd4b6753042c407dcea0e40bf75"}, - {file = "aiohttp-3.9.5-cp311-cp311-win32.whl", hash = "sha256:b3df71da99c98534be076196791adca8819761f0bf6e08e07fd7da25127150d6"}, - {file = "aiohttp-3.9.5-cp311-cp311-win_amd64.whl", hash = "sha256:88e311d98cc0bf45b62fc46c66753a83445f5ab20038bcc1b8a1cc05666f428a"}, - {file = "aiohttp-3.9.5-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:c7a4b7a6cf5b6eb11e109a9755fd4fda7d57395f8c575e166d363b9fc3ec4678"}, - {file = "aiohttp-3.9.5-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:0a158704edf0abcac8ac371fbb54044f3270bdbc93e254a82b6c82be1ef08f3c"}, - {file = "aiohttp-3.9.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d153f652a687a8e95ad367a86a61e8d53d528b0530ef382ec5aaf533140ed00f"}, - {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:82a6a97d9771cb48ae16979c3a3a9a18b600a8505b1115cfe354dfb2054468b4"}, - {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:60cdbd56f4cad9f69c35eaac0fbbdf1f77b0ff9456cebd4902f3dd1cf096464c"}, - {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8676e8fd73141ded15ea586de0b7cda1542960a7b9ad89b2b06428e97125d4fa"}, - {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da00da442a0e31f1c69d26d224e1efd3a1ca5bcbf210978a2ca7426dfcae9f58"}, - {file = "aiohttp-3.9.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:18f634d540dd099c262e9f887c8bbacc959847cfe5da7a0e2e1cf3f14dbf2daf"}, - {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:320e8618eda64e19d11bdb3bd04ccc0a816c17eaecb7e4945d01deee2a22f95f"}, - {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:2faa61a904b83142747fc6a6d7ad8fccff898c849123030f8e75d5d967fd4a81"}, - {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_ppc64le.whl", 
hash = "sha256:8c64a6dc3fe5db7b1b4d2b5cb84c4f677768bdc340611eca673afb7cf416ef5a"}, - {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:393c7aba2b55559ef7ab791c94b44f7482a07bf7640d17b341b79081f5e5cd1a"}, - {file = "aiohttp-3.9.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:c671dc117c2c21a1ca10c116cfcd6e3e44da7fcde37bf83b2be485ab377b25da"}, - {file = "aiohttp-3.9.5-cp312-cp312-win32.whl", hash = "sha256:5a7ee16aab26e76add4afc45e8f8206c95d1d75540f1039b84a03c3b3800dd59"}, - {file = "aiohttp-3.9.5-cp312-cp312-win_amd64.whl", hash = "sha256:5ca51eadbd67045396bc92a4345d1790b7301c14d1848feaac1d6a6c9289e888"}, - {file = "aiohttp-3.9.5-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:694d828b5c41255e54bc2dddb51a9f5150b4eefa9886e38b52605a05d96566e8"}, - {file = "aiohttp-3.9.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0605cc2c0088fcaae79f01c913a38611ad09ba68ff482402d3410bf59039bfb8"}, - {file = "aiohttp-3.9.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4558e5012ee03d2638c681e156461d37b7a113fe13970d438d95d10173d25f78"}, - {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dbc053ac75ccc63dc3a3cc547b98c7258ec35a215a92bd9f983e0aac95d3d5b"}, - {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4109adee842b90671f1b689901b948f347325045c15f46b39797ae1bf17019de"}, - {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a6ea1a5b409a85477fd8e5ee6ad8f0e40bf2844c270955e09360418cfd09abac"}, - {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3c2890ca8c59ee683fd09adf32321a40fe1cf164e3387799efb2acebf090c11"}, - {file = "aiohttp-3.9.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3916c8692dbd9d55c523374a3b8213e628424d19116ac4308e434dbf6d95bbdd"}, - {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8d1964eb7617907c792ca00b341b5ec3e01ae8c280825deadbbd678447b127e1"}, - {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:d5ab8e1f6bee051a4bf6195e38a5c13e5e161cb7bad83d8854524798bd9fcd6e"}, - {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:52c27110f3862a1afbcb2af4281fc9fdc40327fa286c4625dfee247c3ba90156"}, - {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:7f64cbd44443e80094309875d4f9c71d0401e966d191c3d469cde4642bc2e031"}, - {file = "aiohttp-3.9.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8b4f72fbb66279624bfe83fd5eb6aea0022dad8eec62b71e7bf63ee1caadeafe"}, - {file = "aiohttp-3.9.5-cp38-cp38-win32.whl", hash = "sha256:6380c039ec52866c06d69b5c7aad5478b24ed11696f0e72f6b807cfb261453da"}, - {file = "aiohttp-3.9.5-cp38-cp38-win_amd64.whl", hash = "sha256:da22dab31d7180f8c3ac7c7635f3bcd53808f374f6aa333fe0b0b9e14b01f91a"}, - {file = "aiohttp-3.9.5-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1732102949ff6087589408d76cd6dea656b93c896b011ecafff418c9661dc4ed"}, - {file = "aiohttp-3.9.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c6021d296318cb6f9414b48e6a439a7f5d1f665464da507e8ff640848ee2a58a"}, - {file = "aiohttp-3.9.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:239f975589a944eeb1bad26b8b140a59a3a320067fb3cd10b75c3092405a1372"}, - {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b7b30258348082826d274504fbc7c849959f1989d86c29bc355107accec6cfb"}, - {file = 
"aiohttp-3.9.5-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cd2adf5c87ff6d8b277814a28a535b59e20bfea40a101db6b3bdca7e9926bc24"}, - {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e9a3d838441bebcf5cf442700e3963f58b5c33f015341f9ea86dcd7d503c07e2"}, - {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9e3a1ae66e3d0c17cf65c08968a5ee3180c5a95920ec2731f53343fac9bad106"}, - {file = "aiohttp-3.9.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9c69e77370cce2d6df5d12b4e12bdcca60c47ba13d1cbbc8645dd005a20b738b"}, - {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0cbf56238f4bbf49dab8c2dc2e6b1b68502b1e88d335bea59b3f5b9f4c001475"}, - {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d1469f228cd9ffddd396d9948b8c9cd8022b6d1bf1e40c6f25b0fb90b4f893ed"}, - {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:45731330e754f5811c314901cebdf19dd776a44b31927fa4b4dbecab9e457b0c"}, - {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:3fcb4046d2904378e3aeea1df51f697b0467f2aac55d232c87ba162709478c46"}, - {file = "aiohttp-3.9.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8cf142aa6c1a751fcb364158fd710b8a9be874b81889c2bd13aa8893197455e2"}, - {file = "aiohttp-3.9.5-cp39-cp39-win32.whl", hash = "sha256:7b179eea70833c8dee51ec42f3b4097bd6370892fa93f510f76762105568cf09"}, - {file = "aiohttp-3.9.5-cp39-cp39-win_amd64.whl", hash = "sha256:38d80498e2e169bc61418ff36170e0aad0cd268da8b38a17c4cf29d254a8b3f1"}, - {file = "aiohttp-3.9.5.tar.gz", hash = "sha256:edea7d15772ceeb29db4aff55e482d4bcfb6ae160ce144f2682de02f6d693551"}, + {file = "aiohttp-3.10.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cc36cbdedf6f259371dbbbcaae5bb0e95b879bc501668ab6306af867577eb5db"}, + {file = "aiohttp-3.10.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:85466b5a695c2a7db13eb2c200af552d13e6a9313d7fa92e4ffe04a2c0ea74c1"}, + {file = "aiohttp-3.10.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:71bb1d97bfe7e6726267cea169fdf5df7658831bb68ec02c9c6b9f3511e108bb"}, + {file = "aiohttp-3.10.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:baec1eb274f78b2de54471fc4c69ecbea4275965eab4b556ef7a7698dee18bf2"}, + {file = "aiohttp-3.10.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:13031e7ec1188274bad243255c328cc3019e36a5a907978501256000d57a7201"}, + {file = "aiohttp-3.10.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2bbc55a964b8eecb341e492ae91c3bd0848324d313e1e71a27e3d96e6ee7e8e8"}, + {file = "aiohttp-3.10.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e8cc0564b286b625e673a2615ede60a1704d0cbbf1b24604e28c31ed37dc62aa"}, + {file = "aiohttp-3.10.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f817a54059a4cfbc385a7f51696359c642088710e731e8df80d0607193ed2b73"}, + {file = "aiohttp-3.10.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:8542c9e5bcb2bd3115acdf5adc41cda394e7360916197805e7e32b93d821ef93"}, + {file = "aiohttp-3.10.3-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:671efce3a4a0281060edf9a07a2f7e6230dca3a1cbc61d110eee7753d28405f7"}, + {file = "aiohttp-3.10.3-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = 
"sha256:0974f3b5b0132edcec92c3306f858ad4356a63d26b18021d859c9927616ebf27"}, + {file = "aiohttp-3.10.3-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:44bb159b55926b57812dca1b21c34528e800963ffe130d08b049b2d6b994ada7"}, + {file = "aiohttp-3.10.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:6ae9ae382d1c9617a91647575255ad55a48bfdde34cc2185dd558ce476bf16e9"}, + {file = "aiohttp-3.10.3-cp310-cp310-win32.whl", hash = "sha256:aed12a54d4e1ee647376fa541e1b7621505001f9f939debf51397b9329fd88b9"}, + {file = "aiohttp-3.10.3-cp310-cp310-win_amd64.whl", hash = "sha256:b51aef59370baf7444de1572f7830f59ddbabd04e5292fa4218d02f085f8d299"}, + {file = "aiohttp-3.10.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e021c4c778644e8cdc09487d65564265e6b149896a17d7c0f52e9a088cc44e1b"}, + {file = "aiohttp-3.10.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:24fade6dae446b183e2410a8628b80df9b7a42205c6bfc2eff783cbeedc224a2"}, + {file = "aiohttp-3.10.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:bc8e9f15939dacb0e1f2d15f9c41b786051c10472c7a926f5771e99b49a5957f"}, + {file = "aiohttp-3.10.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d5a9ec959b5381271c8ec9310aae1713b2aec29efa32e232e5ef7dcca0df0279"}, + {file = "aiohttp-3.10.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2a5d0ea8a6467b15d53b00c4e8ea8811e47c3cc1bdbc62b1aceb3076403d551f"}, + {file = "aiohttp-3.10.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9ed607dbbdd0d4d39b597e5bf6b0d40d844dfb0ac6a123ed79042ef08c1f87e"}, + {file = "aiohttp-3.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3e66d5b506832e56add66af88c288c1d5ba0c38b535a1a59e436b300b57b23e"}, + {file = "aiohttp-3.10.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fda91ad797e4914cca0afa8b6cccd5d2b3569ccc88731be202f6adce39503189"}, + {file = "aiohttp-3.10.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:61ccb867b2f2f53df6598eb2a93329b5eee0b00646ee79ea67d68844747a418e"}, + {file = "aiohttp-3.10.3-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:6d881353264e6156f215b3cb778c9ac3184f5465c2ece5e6fce82e68946868ef"}, + {file = "aiohttp-3.10.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:b031ce229114825f49cec4434fa844ccb5225e266c3e146cb4bdd025a6da52f1"}, + {file = "aiohttp-3.10.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:5337cc742a03f9e3213b097abff8781f79de7190bbfaa987bd2b7ceb5bb0bdec"}, + {file = "aiohttp-3.10.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ab3361159fd3dcd0e48bbe804006d5cfb074b382666e6c064112056eb234f1a9"}, + {file = "aiohttp-3.10.3-cp311-cp311-win32.whl", hash = "sha256:05d66203a530209cbe40f102ebaac0b2214aba2a33c075d0bf825987c36f1f0b"}, + {file = "aiohttp-3.10.3-cp311-cp311-win_amd64.whl", hash = "sha256:70b4a4984a70a2322b70e088d654528129783ac1ebbf7dd76627b3bd22db2f17"}, + {file = "aiohttp-3.10.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:166de65e2e4e63357cfa8417cf952a519ac42f1654cb2d43ed76899e2319b1ee"}, + {file = "aiohttp-3.10.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:7084876352ba3833d5d214e02b32d794e3fd9cf21fdba99cff5acabeb90d9806"}, + {file = "aiohttp-3.10.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d98c604c93403288591d7d6d7d6cc8a63459168f8846aeffd5b3a7f3b3e5e09"}, + {file = "aiohttp-3.10.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:d73b073a25a0bb8bf014345374fe2d0f63681ab5da4c22f9d2025ca3e3ea54fc"}, + {file = "aiohttp-3.10.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8da6b48c20ce78f5721068f383e0e113dde034e868f1b2f5ee7cb1e95f91db57"}, + {file = "aiohttp-3.10.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3a9dcdccf50284b1b0dc72bc57e5bbd3cc9bf019060dfa0668f63241ccc16aa7"}, + {file = "aiohttp-3.10.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56fb94bae2be58f68d000d046172d8b8e6b1b571eb02ceee5535e9633dcd559c"}, + {file = "aiohttp-3.10.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bf75716377aad2c718cdf66451c5cf02042085d84522aec1f9246d3e4b8641a6"}, + {file = "aiohttp-3.10.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6c51ed03e19c885c8e91f574e4bbe7381793f56f93229731597e4a499ffef2a5"}, + {file = "aiohttp-3.10.3-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:b84857b66fa6510a163bb083c1199d1ee091a40163cfcbbd0642495fed096204"}, + {file = "aiohttp-3.10.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c124b9206b1befe0491f48185fd30a0dd51b0f4e0e7e43ac1236066215aff272"}, + {file = "aiohttp-3.10.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:3461d9294941937f07bbbaa6227ba799bc71cc3b22c40222568dc1cca5118f68"}, + {file = "aiohttp-3.10.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08bd0754d257b2db27d6bab208c74601df6f21bfe4cb2ec7b258ba691aac64b3"}, + {file = "aiohttp-3.10.3-cp312-cp312-win32.whl", hash = "sha256:7f9159ae530297f61a00116771e57516f89a3de6ba33f314402e41560872b50a"}, + {file = "aiohttp-3.10.3-cp312-cp312-win_amd64.whl", hash = "sha256:e1128c5d3a466279cb23c4aa32a0f6cb0e7d2961e74e9e421f90e74f75ec1edf"}, + {file = "aiohttp-3.10.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:d1100e68e70eb72eadba2b932b185ebf0f28fd2f0dbfe576cfa9d9894ef49752"}, + {file = "aiohttp-3.10.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a541414578ff47c0a9b0b8b77381ea86b0c8531ab37fc587572cb662ccd80b88"}, + {file = "aiohttp-3.10.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d5548444ef60bf4c7b19ace21f032fa42d822e516a6940d36579f7bfa8513f9c"}, + {file = "aiohttp-3.10.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ba2e838b5e6a8755ac8297275c9460e729dc1522b6454aee1766c6de6d56e5e"}, + {file = "aiohttp-3.10.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:48665433bb59144aaf502c324694bec25867eb6630fcd831f7a893ca473fcde4"}, + {file = "aiohttp-3.10.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bac352fceed158620ce2d701ad39d4c1c76d114255a7c530e057e2b9f55bdf9f"}, + {file = "aiohttp-3.10.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2b0f670502100cdc567188c49415bebba947eb3edaa2028e1a50dd81bd13363f"}, + {file = "aiohttp-3.10.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:43b09f38a67679e32d380fe512189ccb0b25e15afc79b23fbd5b5e48e4fc8fd9"}, + {file = "aiohttp-3.10.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:cd788602e239ace64f257d1c9d39898ca65525583f0fbf0988bcba19418fe93f"}, + {file = "aiohttp-3.10.3-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:214277dcb07ab3875f17ee1c777d446dcce75bea85846849cc9d139ab8f5081f"}, + {file = "aiohttp-3.10.3-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:32007fdcaab789689c2ecaaf4b71f8e37bf012a15cd02c0a9db8c4d0e7989fa8"}, + {file = 
"aiohttp-3.10.3-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:123e5819bfe1b87204575515cf448ab3bf1489cdeb3b61012bde716cda5853e7"}, + {file = "aiohttp-3.10.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:812121a201f0c02491a5db335a737b4113151926a79ae9ed1a9f41ea225c0e3f"}, + {file = "aiohttp-3.10.3-cp38-cp38-win32.whl", hash = "sha256:b97dc9a17a59f350c0caa453a3cb35671a2ffa3a29a6ef3568b523b9113d84e5"}, + {file = "aiohttp-3.10.3-cp38-cp38-win_amd64.whl", hash = "sha256:3731a73ddc26969d65f90471c635abd4e1546a25299b687e654ea6d2fc052394"}, + {file = "aiohttp-3.10.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38d91b98b4320ffe66efa56cb0f614a05af53b675ce1b8607cdb2ac826a8d58e"}, + {file = "aiohttp-3.10.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9743fa34a10a36ddd448bba8a3adc2a66a1c575c3c2940301bacd6cc896c6bf1"}, + {file = "aiohttp-3.10.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7c126f532caf238031c19d169cfae3c6a59129452c990a6e84d6e7b198a001dc"}, + {file = "aiohttp-3.10.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:926e68438f05703e500b06fe7148ef3013dd6f276de65c68558fa9974eeb59ad"}, + {file = "aiohttp-3.10.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:434b3ab75833accd0b931d11874e206e816f6e6626fd69f643d6a8269cd9166a"}, + {file = "aiohttp-3.10.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d35235a44ec38109b811c3600d15d8383297a8fab8e3dec6147477ec8636712a"}, + {file = "aiohttp-3.10.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59c489661edbd863edb30a8bd69ecb044bd381d1818022bc698ba1b6f80e5dd1"}, + {file = "aiohttp-3.10.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:50544fe498c81cb98912afabfc4e4d9d85e89f86238348e3712f7ca6a2f01dab"}, + {file = "aiohttp-3.10.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:09bc79275737d4dc066e0ae2951866bb36d9c6b460cb7564f111cc0427f14844"}, + {file = "aiohttp-3.10.3-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:af4dbec58e37f5afff4f91cdf235e8e4b0bd0127a2a4fd1040e2cad3369d2f06"}, + {file = "aiohttp-3.10.3-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:b22cae3c9dd55a6b4c48c63081d31c00fc11fa9db1a20c8a50ee38c1a29539d2"}, + {file = "aiohttp-3.10.3-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:ba562736d3fbfe9241dad46c1a8994478d4a0e50796d80e29d50cabe8fbfcc3f"}, + {file = "aiohttp-3.10.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:f25d6c4e82d7489be84f2b1c8212fafc021b3731abdb61a563c90e37cced3a21"}, + {file = "aiohttp-3.10.3-cp39-cp39-win32.whl", hash = "sha256:b69d832e5f5fa15b1b6b2c8eb6a9fd2c0ec1fd7729cb4322ed27771afc9fc2ac"}, + {file = "aiohttp-3.10.3-cp39-cp39-win_amd64.whl", hash = "sha256:673bb6e3249dc8825df1105f6ef74e2eab779b7ff78e96c15cadb78b04a83752"}, + {file = "aiohttp-3.10.3.tar.gz", hash = "sha256:21650e7032cc2d31fc23d353d7123e771354f2a3d5b05a5647fc30fea214e696"}, ] [package.dependencies] +aiohappyeyeballs = ">=2.3.0" aiosignal = ">=1.1.2" async-timeout = {version = ">=4.0,<5.0", markers = "python_version < \"3.11\""} attrs = ">=17.3.0" @@ -105,7 +117,7 @@ multidict = ">=4.5,<7.0" yarl = ">=1.0,<2.0" [package.extras] -speedups = ["Brotli", "aiodns", "brotlicffi"] +speedups = ["Brotli", "aiodns (>=3.2.0)", "brotlicffi"] [[package]] name = "aiosignal" @@ -220,22 +232,22 @@ files = [ [[package]] name = "attrs" -version = "23.2.0" +version = "24.2.0" description = "Classes Without Boilerplate" optional = false python-versions = ">=3.7" files = 
[ - {file = "attrs-23.2.0-py3-none-any.whl", hash = "sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1"}, - {file = "attrs-23.2.0.tar.gz", hash = "sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30"}, + {file = "attrs-24.2.0-py3-none-any.whl", hash = "sha256:81921eb96de3191c8258c199618104dd27ac608d9366f5e35d011eae1867ede2"}, + {file = "attrs-24.2.0.tar.gz", hash = "sha256:5cfb1b9148b5b086569baec03f20d7b6bf3bcacc9a42bebf87ffaaca362f6346"}, ] [package.extras] -cov = ["attrs[tests]", "coverage[toml] (>=5.3)"] -dev = ["attrs[tests]", "pre-commit"] -docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"] -tests = ["attrs[tests-no-zope]", "zope-interface"] -tests-mypy = ["mypy (>=1.6)", "pytest-mypy-plugins"] -tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "pytest (>=4.3.0)", "pytest-xdist[psutil]"] +benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier (<24.7)"] +tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"] [[package]] name = "autodoc" @@ -253,27 +265,27 @@ webtest = "*" [[package]] name = "autodocsumm" -version = "0.2.12" +version = "0.2.13" description = "Extended sphinx autodoc including automatic autosummaries" optional = false python-versions = ">=3.7" files = [ - {file = "autodocsumm-0.2.12-py3-none-any.whl", hash = "sha256:b842b53c686c07a4f174721ca4e729b027367703dbf42e2508863a3c6d6c049c"}, - {file = "autodocsumm-0.2.12.tar.gz", hash = "sha256:848fe8c38df433c6635489499b969cb47cc389ed3d7b6e75c8ccbc94d4b3bf9e"}, + {file = "autodocsumm-0.2.13-py3-none-any.whl", hash = "sha256:bf4d82ea7acb3e7d9a3ad8c135e097eca1d3f0bd00800d7804127e848e66741d"}, + {file = "autodocsumm-0.2.13.tar.gz", hash = "sha256:ac5f0cf1adbe957acb136fe0d9e16c38fb74fcaefb45c148204aba26dbb12ee2"}, ] [package.dependencies] -Sphinx = ">=2.2,<8.0" +Sphinx = ">=2.2,<9.0" [[package]] name = "babel" -version = "2.15.0" +version = "2.16.0" description = "Internationalization utilities" optional = false python-versions = ">=3.8" files = [ - {file = "Babel-2.15.0-py3-none-any.whl", hash = "sha256:08706bdad8d0a3413266ab61bd6c34d0c28d6e1e7badf40a2cebe67644e2e1fb"}, - {file = "babel-2.15.0.tar.gz", hash = "sha256:8daf0e265d05768bc6c7a314cf1321e9a123afc328cc635c18622a2f30a04413"}, + {file = "babel-2.16.0-py3-none-any.whl", hash = "sha256:368b5b98b37c06b7daf6696391c3240c938b37767d4584413e8438c5c435fa8b"}, + {file = "babel-2.16.0.tar.gz", hash = "sha256:d1f3554ca26605fe173f3de0c65f750f5a42f924499bf134de6423582298e316"}, ] [package.extras] @@ -399,63 +411,78 @@ files = [ [[package]] name = "cffi" -version = "1.16.0" +version = "1.17.0" description = "Foreign Function Interface for Python calling C code." 
optional = false python-versions = ">=3.8" files = [ - {file = "cffi-1.16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b3d6606d369fc1da4fd8c357d026317fbb9c9b75d36dc16e90e84c26854b088"}, - {file = "cffi-1.16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ac0f5edd2360eea2f1daa9e26a41db02dd4b0451b48f7c318e217ee092a213e9"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7e61e3e4fa664a8588aa25c883eab612a188c725755afff6289454d6362b9673"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a72e8961a86d19bdb45851d8f1f08b041ea37d2bd8d4fd19903bc3083d80c896"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5b50bf3f55561dac5438f8e70bfcdfd74543fd60df5fa5f62d94e5867deca684"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7651c50c8c5ef7bdb41108b7b8c5a83013bfaa8a935590c5d74627c047a583c7"}, - {file = "cffi-1.16.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4108df7fe9b707191e55f33efbcb2d81928e10cea45527879a4749cbe472614"}, - {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:32c68ef735dbe5857c810328cb2481e24722a59a2003018885514d4c09af9743"}, - {file = "cffi-1.16.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:673739cb539f8cdaa07d92d02efa93c9ccf87e345b9a0b556e3ecc666718468d"}, - {file = "cffi-1.16.0-cp310-cp310-win32.whl", hash = "sha256:9f90389693731ff1f659e55c7d1640e2ec43ff725cc61b04b2f9c6d8d017df6a"}, - {file = "cffi-1.16.0-cp310-cp310-win_amd64.whl", hash = "sha256:e6024675e67af929088fda399b2094574609396b1decb609c55fa58b028a32a1"}, - {file = "cffi-1.16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b84834d0cf97e7d27dd5b7f3aca7b6e9263c56308ab9dc8aae9784abb774d404"}, - {file = "cffi-1.16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b8ebc27c014c59692bb2664c7d13ce7a6e9a629be20e54e7271fa696ff2b417"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee07e47c12890ef248766a6e55bd38ebfb2bb8edd4142d56db91b21ea68b7627"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8a9d3ebe49f084ad71f9269834ceccbf398253c9fac910c4fd7053ff1386936"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e70f54f1796669ef691ca07d046cd81a29cb4deb1e5f942003f401c0c4a2695d"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5bf44d66cdf9e893637896c7faa22298baebcd18d1ddb6d2626a6e39793a1d56"}, - {file = "cffi-1.16.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b78010e7b97fef4bee1e896df8a4bbb6712b7f05b7ef630f9d1da00f6444d2e"}, - {file = "cffi-1.16.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:c6a164aa47843fb1b01e941d385aab7215563bb8816d80ff3a363a9f8448a8dc"}, - {file = "cffi-1.16.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e09f3ff613345df5e8c3667da1d918f9149bd623cd9070c983c013792a9a62eb"}, - {file = "cffi-1.16.0-cp311-cp311-win32.whl", hash = "sha256:2c56b361916f390cd758a57f2e16233eb4f64bcbeee88a4881ea90fca14dc6ab"}, - {file = "cffi-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:db8e577c19c0fda0beb7e0d4e09e0ba74b1e4c092e0e40bfa12fe05b6f6d75ba"}, - {file = "cffi-1.16.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = 
"sha256:fa3a0128b152627161ce47201262d3140edb5a5c3da88d73a1b790a959126956"}, - {file = "cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:68e7c44931cc171c54ccb702482e9fc723192e88d25a0e133edd7aff8fcd1f6e"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd808f9c129ba2beda4cfc53bde801e5bcf9d6e0f22f095e45327c038bfe68e"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88e2b3c14bdb32e440be531ade29d3c50a1a59cd4e51b1dd8b0865c54ea5d2e2"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcc8eb6d5902bb1cf6dc4f187ee3ea80a1eba0a89aba40a5cb20a5087d961357"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b7be2d771cdba2942e13215c4e340bfd76398e9227ad10402a8767ab1865d2e6"}, - {file = "cffi-1.16.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e715596e683d2ce000574bae5d07bd522c781a822866c20495e52520564f0969"}, - {file = "cffi-1.16.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2d92b25dbf6cae33f65005baf472d2c245c050b1ce709cc4588cdcdd5495b520"}, - {file = "cffi-1.16.0-cp312-cp312-win32.whl", hash = "sha256:b2ca4e77f9f47c55c194982e10f058db063937845bb2b7a86c84a6cfe0aefa8b"}, - {file = "cffi-1.16.0-cp312-cp312-win_amd64.whl", hash = "sha256:68678abf380b42ce21a5f2abde8efee05c114c2fdb2e9eef2efdb0257fba1235"}, - {file = "cffi-1.16.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0c9ef6ff37e974b73c25eecc13952c55bceed9112be2d9d938ded8e856138bcc"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a09582f178759ee8128d9270cd1344154fd473bb77d94ce0aeb2a93ebf0feaf0"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e760191dd42581e023a68b758769e2da259b5d52e3103c6060ddc02c9edb8d7b"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:80876338e19c951fdfed6198e70bc88f1c9758b94578d5a7c4c91a87af3cf31c"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a6a14b17d7e17fa0d207ac08642c8820f84f25ce17a442fd15e27ea18d67c59b"}, - {file = "cffi-1.16.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6602bc8dc6f3a9e02b6c22c4fc1e47aa50f8f8e6d3f78a5e16ac33ef5fefa324"}, - {file = "cffi-1.16.0-cp38-cp38-win32.whl", hash = "sha256:131fd094d1065b19540c3d72594260f118b231090295d8c34e19a7bbcf2e860a"}, - {file = "cffi-1.16.0-cp38-cp38-win_amd64.whl", hash = "sha256:31d13b0f99e0836b7ff893d37af07366ebc90b678b6664c955b54561fc36ef36"}, - {file = "cffi-1.16.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:582215a0e9adbe0e379761260553ba11c58943e4bbe9c36430c4ca6ac74b15ed"}, - {file = "cffi-1.16.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b29ebffcf550f9da55bec9e02ad430c992a87e5f512cd63388abb76f1036d8d2"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc9b18bf40cc75f66f40a7379f6a9513244fe33c0e8aa72e2d56b0196a7ef872"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cb4a35b3642fc5c005a6755a5d17c6c8b6bcb6981baf81cea8bfbc8903e8ba8"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:b86851a328eedc692acf81fb05444bdf1891747c25af7529e39ddafaf68a4f3f"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c0f31130ebc2d37cdd8e44605fb5fa7ad59049298b3f745c74fa74c62fbfcfc4"}, - {file = "cffi-1.16.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f8e709127c6c77446a8c0a8c8bf3c8ee706a06cd44b1e827c3e6a2ee6b8c098"}, - {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:748dcd1e3d3d7cd5443ef03ce8685043294ad6bd7c02a38d1bd367cfd968e000"}, - {file = "cffi-1.16.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8895613bcc094d4a1b2dbe179d88d7fb4a15cee43c052e8885783fac397d91fe"}, - {file = "cffi-1.16.0-cp39-cp39-win32.whl", hash = "sha256:ed86a35631f7bfbb28e108dd96773b9d5a6ce4811cf6ea468bb6a359b256b1e4"}, - {file = "cffi-1.16.0-cp39-cp39-win_amd64.whl", hash = "sha256:3686dffb02459559c74dd3d81748269ffb0eb027c39a6fc99502de37d501faa8"}, - {file = "cffi-1.16.0.tar.gz", hash = "sha256:bcb3ef43e58665bbda2fb198698fcae6776483e0c4a631aa5647806c25e02cc0"}, + {file = "cffi-1.17.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f9338cc05451f1942d0d8203ec2c346c830f8e86469903d5126c1f0a13a2bcbb"}, + {file = "cffi-1.17.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a0ce71725cacc9ebf839630772b07eeec220cbb5f03be1399e0457a1464f8e1a"}, + {file = "cffi-1.17.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c815270206f983309915a6844fe994b2fa47e5d05c4c4cef267c3b30e34dbe42"}, + {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6bdcd415ba87846fd317bee0774e412e8792832e7805938987e4ede1d13046d"}, + {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8a98748ed1a1df4ee1d6f927e151ed6c1a09d5ec21684de879c7ea6aa96f58f2"}, + {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0a048d4f6630113e54bb4b77e315e1ba32a5a31512c31a273807d0027a7e69ab"}, + {file = "cffi-1.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:24aa705a5f5bd3a8bcfa4d123f03413de5d86e497435693b638cbffb7d5d8a1b"}, + {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:856bf0924d24e7f93b8aee12a3a1095c34085600aa805693fb7f5d1962393206"}, + {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:4304d4416ff032ed50ad6bb87416d802e67139e31c0bde4628f36a47a3164bfa"}, + {file = "cffi-1.17.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:331ad15c39c9fe9186ceaf87203a9ecf5ae0ba2538c9e898e3a6967e8ad3db6f"}, + {file = "cffi-1.17.0-cp310-cp310-win32.whl", hash = "sha256:669b29a9eca6146465cc574659058ed949748f0809a2582d1f1a324eb91054dc"}, + {file = "cffi-1.17.0-cp310-cp310-win_amd64.whl", hash = "sha256:48b389b1fd5144603d61d752afd7167dfd205973a43151ae5045b35793232aa2"}, + {file = "cffi-1.17.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c5d97162c196ce54af6700949ddf9409e9833ef1003b4741c2b39ef46f1d9720"}, + {file = "cffi-1.17.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5ba5c243f4004c750836f81606a9fcb7841f8874ad8f3bf204ff5e56332b72b9"}, + {file = "cffi-1.17.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bb9333f58fc3a2296fb1d54576138d4cf5d496a2cc118422bd77835e6ae0b9cb"}, + {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:435a22d00ec7d7ea533db494da8581b05977f9c37338c80bc86314bec2619424"}, + {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d1df34588123fcc88c872f5acb6f74ae59e9d182a2707097f9e28275ec26a12d"}, + {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:df8bb0010fdd0a743b7542589223a2816bdde4d94bb5ad67884348fa2c1c67e8"}, + {file = "cffi-1.17.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8b5b9712783415695663bd463990e2f00c6750562e6ad1d28e072a611c5f2a6"}, + {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ffef8fd58a36fb5f1196919638f73dd3ae0db1a878982b27a9a5a176ede4ba91"}, + {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4e67d26532bfd8b7f7c05d5a766d6f437b362c1bf203a3a5ce3593a645e870b8"}, + {file = "cffi-1.17.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:45f7cd36186db767d803b1473b3c659d57a23b5fa491ad83c6d40f2af58e4dbb"}, + {file = "cffi-1.17.0-cp311-cp311-win32.whl", hash = "sha256:a9015f5b8af1bb6837a3fcb0cdf3b874fe3385ff6274e8b7925d81ccaec3c5c9"}, + {file = "cffi-1.17.0-cp311-cp311-win_amd64.whl", hash = "sha256:b50aaac7d05c2c26dfd50c3321199f019ba76bb650e346a6ef3616306eed67b0"}, + {file = "cffi-1.17.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:aec510255ce690d240f7cb23d7114f6b351c733a74c279a84def763660a2c3bc"}, + {file = "cffi-1.17.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2770bb0d5e3cc0e31e7318db06efcbcdb7b31bcb1a70086d3177692a02256f59"}, + {file = "cffi-1.17.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:db9a30ec064129d605d0f1aedc93e00894b9334ec74ba9c6bdd08147434b33eb"}, + {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a47eef975d2b8b721775a0fa286f50eab535b9d56c70a6e62842134cf7841195"}, + {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f3e0992f23bbb0be00a921eae5363329253c3b86287db27092461c887b791e5e"}, + {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6107e445faf057c118d5050560695e46d272e5301feffda3c41849641222a828"}, + {file = "cffi-1.17.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb862356ee9391dc5a0b3cbc00f416b48c1b9a52d252d898e5b7696a5f9fe150"}, + {file = "cffi-1.17.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c1c13185b90bbd3f8b5963cd8ce7ad4ff441924c31e23c975cb150e27c2bf67a"}, + {file = "cffi-1.17.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:17c6d6d3260c7f2d94f657e6872591fe8733872a86ed1345bda872cfc8c74885"}, + {file = "cffi-1.17.0-cp312-cp312-win32.whl", hash = "sha256:c3b8bd3133cd50f6b637bb4322822c94c5ce4bf0d724ed5ae70afce62187c492"}, + {file = "cffi-1.17.0-cp312-cp312-win_amd64.whl", hash = "sha256:dca802c8db0720ce1c49cce1149ff7b06e91ba15fa84b1d59144fef1a1bc7ac2"}, + {file = "cffi-1.17.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6ce01337d23884b21c03869d2f68c5523d43174d4fc405490eb0091057943118"}, + {file = "cffi-1.17.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cab2eba3830bf4f6d91e2d6718e0e1c14a2f5ad1af68a89d24ace0c6b17cced7"}, + {file = "cffi-1.17.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:14b9cbc8f7ac98a739558eb86fabc283d4d564dafed50216e7f7ee62d0d25377"}, + {file = 
"cffi-1.17.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b00e7bcd71caa0282cbe3c90966f738e2db91e64092a877c3ff7f19a1628fdcb"}, + {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:41f4915e09218744d8bae14759f983e466ab69b178de38066f7579892ff2a555"}, + {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e4760a68cab57bfaa628938e9c2971137e05ce48e762a9cb53b76c9b569f1204"}, + {file = "cffi-1.17.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:011aff3524d578a9412c8b3cfaa50f2c0bd78e03eb7af7aa5e0df59b158efb2f"}, + {file = "cffi-1.17.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:a003ac9edc22d99ae1286b0875c460351f4e101f8c9d9d2576e78d7e048f64e0"}, + {file = "cffi-1.17.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ef9528915df81b8f4c7612b19b8628214c65c9b7f74db2e34a646a0a2a0da2d4"}, + {file = "cffi-1.17.0-cp313-cp313-win32.whl", hash = "sha256:70d2aa9fb00cf52034feac4b913181a6e10356019b18ef89bc7c12a283bf5f5a"}, + {file = "cffi-1.17.0-cp313-cp313-win_amd64.whl", hash = "sha256:b7b6ea9e36d32582cda3465f54c4b454f62f23cb083ebc7a94e2ca6ef011c3a7"}, + {file = "cffi-1.17.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:964823b2fc77b55355999ade496c54dde161c621cb1f6eac61dc30ed1b63cd4c"}, + {file = "cffi-1.17.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:516a405f174fd3b88829eabfe4bb296ac602d6a0f68e0d64d5ac9456194a5b7e"}, + {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dec6b307ce928e8e112a6bb9921a1cb00a0e14979bf28b98e084a4b8a742bd9b"}, + {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4094c7b464cf0a858e75cd14b03509e84789abf7b79f8537e6a72152109c76e"}, + {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2404f3de742f47cb62d023f0ba7c5a916c9c653d5b368cc966382ae4e57da401"}, + {file = "cffi-1.17.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3aa9d43b02a0c681f0bfbc12d476d47b2b2b6a3f9287f11ee42989a268a1833c"}, + {file = "cffi-1.17.0-cp38-cp38-win32.whl", hash = "sha256:0bb15e7acf8ab35ca8b24b90af52c8b391690ef5c4aec3d31f38f0d37d2cc499"}, + {file = "cffi-1.17.0-cp38-cp38-win_amd64.whl", hash = "sha256:93a7350f6706b31f457c1457d3a3259ff9071a66f312ae64dc024f049055f72c"}, + {file = "cffi-1.17.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1a2ddbac59dc3716bc79f27906c010406155031a1c801410f1bafff17ea304d2"}, + {file = "cffi-1.17.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6327b572f5770293fc062a7ec04160e89741e8552bf1c358d1a23eba68166759"}, + {file = "cffi-1.17.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbc183e7bef690c9abe5ea67b7b60fdbca81aa8da43468287dae7b5c046107d4"}, + {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5bdc0f1f610d067c70aa3737ed06e2726fd9d6f7bfee4a351f4c40b6831f4e82"}, + {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6d872186c1617d143969defeadac5a904e6e374183e07977eedef9c07c8953bf"}, + {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d46ee4764b88b91f16661a8befc6bfb24806d885e27436fdc292ed7e6f6d058"}, + {file = "cffi-1.17.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:6f76a90c345796c01d85e6332e81cab6d70de83b829cf1d9762d0a3da59c7932"}, + {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e60821d312f99d3e1569202518dddf10ae547e799d75aef3bca3a2d9e8ee693"}, + {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:eb09b82377233b902d4c3fbeeb7ad731cdab579c6c6fda1f763cd779139e47c3"}, + {file = "cffi-1.17.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:24658baf6224d8f280e827f0a50c46ad819ec8ba380a42448e24459daf809cf4"}, + {file = "cffi-1.17.0-cp39-cp39-win32.whl", hash = "sha256:0fdacad9e0d9fc23e519efd5ea24a70348305e8d7d85ecbb1a5fa66dc834e7fb"}, + {file = "cffi-1.17.0-cp39-cp39-win_amd64.whl", hash = "sha256:7cbc78dc018596315d4e7841c8c3a7ae31cc4d638c9b627f87d52e8abaaf2d29"}, + {file = "cffi-1.17.0.tar.gz", hash = "sha256:f3157624b7558b914cb039fd1af735e5e8049a87c817cc215109ad1c8779df76"}, ] [package.dependencies] @@ -667,43 +694,38 @@ test-no-images = ["pytest", "pytest-cov", "pytest-xdist", "wurlitzer"] [[package]] name = "cryptography" -version = "42.0.8" +version = "43.0.0" description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." optional = false python-versions = ">=3.7" files = [ - {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"}, - {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"}, - {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"}, - {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"}, - {file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"}, - {file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"}, - {file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"}, - {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"}, - {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"}, - {file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"}, - {file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"}, - {file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"}, - {file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"}, - {file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"}, + {file = "cryptography-43.0.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:64c3f16e2a4fc51c0d06af28441881f98c5d91009b8caaff40cf3548089e9c74"}, + {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3dcdedae5c7710b9f97ac6bba7e1052b95c7083c9d0e9df96e02a1932e777895"}, + {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d9a1eca329405219b605fac09ecfc09ac09e595d6def650a437523fcd08dd22"}, + {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ea9e57f8ea880eeea38ab5abf9fbe39f923544d7884228ec67d666abd60f5a47"}, + {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9a8d6802e0825767476f62aafed40532bd435e8a5f7d23bd8b4f5fd04cc80ecf"}, + {file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = 
"sha256:cc70b4b581f28d0a254d006f26949245e3657d40d8857066c2ae22a61222ef55"}, + {file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:4a997df8c1c2aae1e1e5ac49c2e4f610ad037fc5a3aadc7b64e39dea42249431"}, + {file = "cryptography-43.0.0-cp37-abi3-win32.whl", hash = "sha256:6e2b11c55d260d03a8cf29ac9b5e0608d35f08077d8c087be96287f43af3ccdc"}, + {file = "cryptography-43.0.0-cp37-abi3-win_amd64.whl", hash = "sha256:31e44a986ceccec3d0498e16f3d27b2ee5fdf69ce2ab89b52eaad1d2f33d8778"}, + {file = "cryptography-43.0.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:7b3f5fe74a5ca32d4d0f302ffe6680fcc5c28f8ef0dc0ae8f40c0f3a1b4fca66"}, + {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac1955ce000cb29ab40def14fd1bbfa7af2017cca696ee696925615cafd0dce5"}, + {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:299d3da8e00b7e2b54bb02ef58d73cd5f55fb31f33ebbf33bd00d9aa6807df7e"}, + {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ee0c405832ade84d4de74b9029bedb7b31200600fa524d218fc29bfa371e97f5"}, + {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb013933d4c127349b3948aa8aaf2f12c0353ad0eccd715ca789c8a0f671646f"}, + {file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:fdcb265de28585de5b859ae13e3846a8e805268a823a12a4da2597f1f5afc9f0"}, + {file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2905ccf93a8a2a416f3ec01b1a7911c3fe4073ef35640e7ee5296754e30b762b"}, + {file = "cryptography-43.0.0-cp39-abi3-win32.whl", hash = "sha256:47ca71115e545954e6c1d207dd13461ab81f4eccfcb1345eac874828b5e3eaaf"}, + {file = "cryptography-43.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:0663585d02f76929792470451a5ba64424acc3cd5227b03921dab0e2f27b1709"}, + {file = "cryptography-43.0.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c6d112bf61c5ef44042c253e4859b3cbbb50df2f78fa8fae6747a7814484a70"}, + {file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:844b6d608374e7d08f4f6e6f9f7b951f9256db41421917dfb2d003dde4cd6b66"}, + {file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:51956cf8730665e2bdf8ddb8da0056f699c1a5715648c1b0144670c1ba00b48f"}, + {file = "cryptography-43.0.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:aae4d918f6b180a8ab8bf6511a419473d107df4dbb4225c7b48c5c9602c38c7f"}, + {file = "cryptography-43.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:232ce02943a579095a339ac4b390fbbe97f5b5d5d107f8a08260ea2768be8cc2"}, + {file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5bcb8a5620008a8034d39bce21dc3e23735dfdb6a33a06974739bfa04f853947"}, + {file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:08a24a7070b2b6804c1940ff0f910ff728932a9d0e80e7814234269f9d46d069"}, + {file = "cryptography-43.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:e9c5266c432a1e23738d178e51c2c7a5e2ddf790f248be939448c0ba2021f9d1"}, + {file = "cryptography-43.0.0.tar.gz", hash = "sha256:b88075ada2d51aa9f18283532c9f60e72170041bba88d7f37e49cbb10275299e"}, ] [package.dependencies] @@ -716,7 +738,7 @@ nox = ["nox"] pep8test = ["check-sdist", "click", "mypy", "ruff"] sdist = ["build"] ssh = ["bcrypt (>=3.1.5)"] -test = ["certifi", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"] +test = ["certifi", 
"cryptography-vectors (==43.0.0)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"] test-randomorder = ["pytest-randomly"] [[package]] @@ -736,33 +758,33 @@ tests = ["pytest", "pytest-cov", "pytest-xdist"] [[package]] name = "debugpy" -version = "1.8.2" +version = "1.8.5" description = "An implementation of the Debug Adapter Protocol for Python" optional = false python-versions = ">=3.8" files = [ - {file = "debugpy-1.8.2-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:7ee2e1afbf44b138c005e4380097d92532e1001580853a7cb40ed84e0ef1c3d2"}, - {file = "debugpy-1.8.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f8c3f7c53130a070f0fc845a0f2cee8ed88d220d6b04595897b66605df1edd6"}, - {file = "debugpy-1.8.2-cp310-cp310-win32.whl", hash = "sha256:f179af1e1bd4c88b0b9f0fa153569b24f6b6f3de33f94703336363ae62f4bf47"}, - {file = "debugpy-1.8.2-cp310-cp310-win_amd64.whl", hash = "sha256:0600faef1d0b8d0e85c816b8bb0cb90ed94fc611f308d5fde28cb8b3d2ff0fe3"}, - {file = "debugpy-1.8.2-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:8a13417ccd5978a642e91fb79b871baded925d4fadd4dfafec1928196292aa0a"}, - {file = "debugpy-1.8.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acdf39855f65c48ac9667b2801234fc64d46778021efac2de7e50907ab90c634"}, - {file = "debugpy-1.8.2-cp311-cp311-win32.whl", hash = "sha256:2cbd4d9a2fc5e7f583ff9bf11f3b7d78dfda8401e8bb6856ad1ed190be4281ad"}, - {file = "debugpy-1.8.2-cp311-cp311-win_amd64.whl", hash = "sha256:d3408fddd76414034c02880e891ea434e9a9cf3a69842098ef92f6e809d09afa"}, - {file = "debugpy-1.8.2-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:5d3ccd39e4021f2eb86b8d748a96c766058b39443c1f18b2dc52c10ac2757835"}, - {file = "debugpy-1.8.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:62658aefe289598680193ff655ff3940e2a601765259b123dc7f89c0239b8cd3"}, - {file = "debugpy-1.8.2-cp312-cp312-win32.whl", hash = "sha256:bd11fe35d6fd3431f1546d94121322c0ac572e1bfb1f6be0e9b8655fb4ea941e"}, - {file = "debugpy-1.8.2-cp312-cp312-win_amd64.whl", hash = "sha256:15bc2f4b0f5e99bf86c162c91a74c0631dbd9cef3c6a1d1329c946586255e859"}, - {file = "debugpy-1.8.2-cp38-cp38-macosx_11_0_x86_64.whl", hash = "sha256:5a019d4574afedc6ead1daa22736c530712465c0c4cd44f820d803d937531b2d"}, - {file = "debugpy-1.8.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40f062d6877d2e45b112c0bbade9a17aac507445fd638922b1a5434df34aed02"}, - {file = "debugpy-1.8.2-cp38-cp38-win32.whl", hash = "sha256:c78ba1680f1015c0ca7115671fe347b28b446081dada3fedf54138f44e4ba031"}, - {file = "debugpy-1.8.2-cp38-cp38-win_amd64.whl", hash = "sha256:cf327316ae0c0e7dd81eb92d24ba8b5e88bb4d1b585b5c0d32929274a66a5210"}, - {file = "debugpy-1.8.2-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:1523bc551e28e15147815d1397afc150ac99dbd3a8e64641d53425dba57b0ff9"}, - {file = "debugpy-1.8.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e24ccb0cd6f8bfaec68d577cb49e9c680621c336f347479b3fce060ba7c09ec1"}, - {file = "debugpy-1.8.2-cp39-cp39-win32.whl", hash = "sha256:7f8d57a98c5a486c5c7824bc0b9f2f11189d08d73635c326abef268f83950326"}, - {file = "debugpy-1.8.2-cp39-cp39-win_amd64.whl", hash = "sha256:16c8dcab02617b75697a0a925a62943e26a0330da076e2a10437edd9f0bf3755"}, - {file = "debugpy-1.8.2-py2.py3-none-any.whl", hash = "sha256:16e16df3a98a35c63c3ab1e4d19be4cbc7fdda92d9ddc059294f18910928e0ca"}, - {file = "debugpy-1.8.2.zip", hash = 
"sha256:95378ed08ed2089221896b9b3a8d021e642c24edc8fef20e5d4342ca8be65c00"}, + {file = "debugpy-1.8.5-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:7e4d594367d6407a120b76bdaa03886e9eb652c05ba7f87e37418426ad2079f7"}, + {file = "debugpy-1.8.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4413b7a3ede757dc33a273a17d685ea2b0c09dbd312cc03f5534a0fd4d40750a"}, + {file = "debugpy-1.8.5-cp310-cp310-win32.whl", hash = "sha256:dd3811bd63632bb25eda6bd73bea8e0521794cda02be41fa3160eb26fc29e7ed"}, + {file = "debugpy-1.8.5-cp310-cp310-win_amd64.whl", hash = "sha256:b78c1250441ce893cb5035dd6f5fc12db968cc07f91cc06996b2087f7cefdd8e"}, + {file = "debugpy-1.8.5-cp311-cp311-macosx_12_0_universal2.whl", hash = "sha256:606bccba19f7188b6ea9579c8a4f5a5364ecd0bf5a0659c8a5d0e10dcee3032a"}, + {file = "debugpy-1.8.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db9fb642938a7a609a6c865c32ecd0d795d56c1aaa7a7a5722d77855d5e77f2b"}, + {file = "debugpy-1.8.5-cp311-cp311-win32.whl", hash = "sha256:4fbb3b39ae1aa3e5ad578f37a48a7a303dad9a3d018d369bc9ec629c1cfa7408"}, + {file = "debugpy-1.8.5-cp311-cp311-win_amd64.whl", hash = "sha256:345d6a0206e81eb68b1493ce2fbffd57c3088e2ce4b46592077a943d2b968ca3"}, + {file = "debugpy-1.8.5-cp312-cp312-macosx_12_0_universal2.whl", hash = "sha256:5b5c770977c8ec6c40c60d6f58cacc7f7fe5a45960363d6974ddb9b62dbee156"}, + {file = "debugpy-1.8.5-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0a65b00b7cdd2ee0c2cf4c7335fef31e15f1b7056c7fdbce9e90193e1a8c8cb"}, + {file = "debugpy-1.8.5-cp312-cp312-win32.whl", hash = "sha256:c9f7c15ea1da18d2fcc2709e9f3d6de98b69a5b0fff1807fb80bc55f906691f7"}, + {file = "debugpy-1.8.5-cp312-cp312-win_amd64.whl", hash = "sha256:28ced650c974aaf179231668a293ecd5c63c0a671ae6d56b8795ecc5d2f48d3c"}, + {file = "debugpy-1.8.5-cp38-cp38-macosx_12_0_x86_64.whl", hash = "sha256:3df6692351172a42af7558daa5019651f898fc67450bf091335aa8a18fbf6f3a"}, + {file = "debugpy-1.8.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cd04a73eb2769eb0bfe43f5bfde1215c5923d6924b9b90f94d15f207a402226"}, + {file = "debugpy-1.8.5-cp38-cp38-win32.whl", hash = "sha256:8f913ee8e9fcf9d38a751f56e6de12a297ae7832749d35de26d960f14280750a"}, + {file = "debugpy-1.8.5-cp38-cp38-win_amd64.whl", hash = "sha256:a697beca97dad3780b89a7fb525d5e79f33821a8bc0c06faf1f1289e549743cf"}, + {file = "debugpy-1.8.5-cp39-cp39-macosx_12_0_x86_64.whl", hash = "sha256:0a1029a2869d01cb777216af8c53cda0476875ef02a2b6ff8b2f2c9a4b04176c"}, + {file = "debugpy-1.8.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84c276489e141ed0b93b0af648eef891546143d6a48f610945416453a8ad406"}, + {file = "debugpy-1.8.5-cp39-cp39-win32.whl", hash = "sha256:ad84b7cde7fd96cf6eea34ff6c4a1b7887e0fe2ea46e099e53234856f9d99a34"}, + {file = "debugpy-1.8.5-cp39-cp39-win_amd64.whl", hash = "sha256:7b0fe36ed9d26cb6836b0a51453653f8f2e347ba7348f2bbfe76bfeb670bfb1c"}, + {file = "debugpy-1.8.5-py2.py3-none-any.whl", hash = "sha256:55919dce65b471eff25901acf82d328bbd5b833526b6c1364bd5133754777a44"}, + {file = "debugpy-1.8.5.zip", hash = "sha256:b2112cfeb34b4507399d298fe7023a16656fc553ed5246536060ca7bd0e668d0"}, ] [[package]] @@ -789,13 +811,13 @@ files = [ [[package]] name = "exceptiongroup" -version = "1.2.1" +version = "1.2.2" description = "Backport of PEP 654 (exception groups)" optional = false python-versions = ">=3.7" files = [ - {file = "exceptiongroup-1.2.1-py3-none-any.whl", hash = 
"sha256:5258b9ed329c5bbdd31a309f53cbfb0b155341807f6ff7606a1e801a891b29ad"}, - {file = "exceptiongroup-1.2.1.tar.gz", hash = "sha256:a4785e48b045528f5bfe627b6ad554ff32def154f42372786903b7abcfe1aa16"}, + {file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"}, + {file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"}, ] [package.extras] @@ -1172,61 +1194,61 @@ test = ["objgraph", "psutil"] [[package]] name = "grpcio" -version = "1.64.1" +version = "1.65.4" description = "HTTP/2-based RPC framework" optional = false python-versions = ">=3.8" files = [ - {file = "grpcio-1.64.1-cp310-cp310-linux_armv7l.whl", hash = "sha256:55697ecec192bc3f2f3cc13a295ab670f51de29884ca9ae6cd6247df55df2502"}, - {file = "grpcio-1.64.1-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:3b64ae304c175671efdaa7ec9ae2cc36996b681eb63ca39c464958396697daff"}, - {file = "grpcio-1.64.1-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:bac71b4b28bc9af61efcdc7630b166440bbfbaa80940c9a697271b5e1dabbc61"}, - {file = "grpcio-1.64.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c024ffc22d6dc59000faf8ad781696d81e8e38f4078cb0f2630b4a3cf231a90"}, - {file = "grpcio-1.64.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e7cd5c1325f6808b8ae31657d281aadb2a51ac11ab081ae335f4f7fc44c1721d"}, - {file = "grpcio-1.64.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:0a2813093ddb27418a4c99f9b1c223fab0b053157176a64cc9db0f4557b69bd9"}, - {file = "grpcio-1.64.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:2981c7365a9353f9b5c864595c510c983251b1ab403e05b1ccc70a3d9541a73b"}, - {file = "grpcio-1.64.1-cp310-cp310-win32.whl", hash = "sha256:1262402af5a511c245c3ae918167eca57342c72320dffae5d9b51840c4b2f86d"}, - {file = "grpcio-1.64.1-cp310-cp310-win_amd64.whl", hash = "sha256:19264fc964576ddb065368cae953f8d0514ecc6cb3da8903766d9fb9d4554c33"}, - {file = "grpcio-1.64.1-cp311-cp311-linux_armv7l.whl", hash = "sha256:58b1041e7c870bb30ee41d3090cbd6f0851f30ae4eb68228955d973d3efa2e61"}, - {file = "grpcio-1.64.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:bbc5b1d78a7822b0a84c6f8917faa986c1a744e65d762ef6d8be9d75677af2ca"}, - {file = "grpcio-1.64.1-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:5841dd1f284bd1b3d8a6eca3a7f062b06f1eec09b184397e1d1d43447e89a7ae"}, - {file = "grpcio-1.64.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8caee47e970b92b3dd948371230fcceb80d3f2277b3bf7fbd7c0564e7d39068e"}, - {file = "grpcio-1.64.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73819689c169417a4f978e562d24f2def2be75739c4bed1992435d007819da1b"}, - {file = "grpcio-1.64.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:6503b64c8b2dfad299749cad1b595c650c91e5b2c8a1b775380fcf8d2cbba1e9"}, - {file = "grpcio-1.64.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1de403fc1305fd96cfa75e83be3dee8538f2413a6b1685b8452301c7ba33c294"}, - {file = "grpcio-1.64.1-cp311-cp311-win32.whl", hash = "sha256:d4d29cc612e1332237877dfa7fe687157973aab1d63bd0f84cf06692f04c0367"}, - {file = "grpcio-1.64.1-cp311-cp311-win_amd64.whl", hash = "sha256:5e56462b05a6f860b72f0fa50dca06d5b26543a4e88d0396259a07dc30f4e5aa"}, - {file = "grpcio-1.64.1-cp312-cp312-linux_armv7l.whl", hash = "sha256:4657d24c8063e6095f850b68f2d1ba3b39f2b287a38242dcabc166453e950c59"}, - {file = 
"grpcio-1.64.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:62b4e6eb7bf901719fce0ca83e3ed474ae5022bb3827b0a501e056458c51c0a1"}, - {file = "grpcio-1.64.1-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:ee73a2f5ca4ba44fa33b4d7d2c71e2c8a9e9f78d53f6507ad68e7d2ad5f64a22"}, - {file = "grpcio-1.64.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:198908f9b22e2672a998870355e226a725aeab327ac4e6ff3a1399792ece4762"}, - {file = "grpcio-1.64.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:39b9d0acaa8d835a6566c640f48b50054f422d03e77e49716d4c4e8e279665a1"}, - {file = "grpcio-1.64.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:5e42634a989c3aa6049f132266faf6b949ec2a6f7d302dbb5c15395b77d757eb"}, - {file = "grpcio-1.64.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:b1a82e0b9b3022799c336e1fc0f6210adc019ae84efb7321d668129d28ee1efb"}, - {file = "grpcio-1.64.1-cp312-cp312-win32.whl", hash = "sha256:55260032b95c49bee69a423c2f5365baa9369d2f7d233e933564d8a47b893027"}, - {file = "grpcio-1.64.1-cp312-cp312-win_amd64.whl", hash = "sha256:c1a786ac592b47573a5bb7e35665c08064a5d77ab88a076eec11f8ae86b3e3f6"}, - {file = "grpcio-1.64.1-cp38-cp38-linux_armv7l.whl", hash = "sha256:a011ac6c03cfe162ff2b727bcb530567826cec85eb8d4ad2bfb4bd023287a52d"}, - {file = "grpcio-1.64.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:4d6dab6124225496010bd22690f2d9bd35c7cbb267b3f14e7a3eb05c911325d4"}, - {file = "grpcio-1.64.1-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:a5e771d0252e871ce194d0fdcafd13971f1aae0ddacc5f25615030d5df55c3a2"}, - {file = "grpcio-1.64.1-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2c3c1b90ab93fed424e454e93c0ed0b9d552bdf1b0929712b094f5ecfe7a23ad"}, - {file = "grpcio-1.64.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20405cb8b13fd779135df23fabadc53b86522d0f1cba8cca0e87968587f50650"}, - {file = "grpcio-1.64.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:0cc79c982ccb2feec8aad0e8fb0d168bcbca85bc77b080d0d3c5f2f15c24ea8f"}, - {file = "grpcio-1.64.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a3a035c37ce7565b8f4f35ff683a4db34d24e53dc487e47438e434eb3f701b2a"}, - {file = "grpcio-1.64.1-cp38-cp38-win32.whl", hash = "sha256:1257b76748612aca0f89beec7fa0615727fd6f2a1ad580a9638816a4b2eb18fd"}, - {file = "grpcio-1.64.1-cp38-cp38-win_amd64.whl", hash = "sha256:0a12ddb1678ebc6a84ec6b0487feac020ee2b1659cbe69b80f06dbffdb249122"}, - {file = "grpcio-1.64.1-cp39-cp39-linux_armv7l.whl", hash = "sha256:75dbbf415026d2862192fe1b28d71f209e2fd87079d98470db90bebe57b33179"}, - {file = "grpcio-1.64.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e3d9f8d1221baa0ced7ec7322a981e28deb23749c76eeeb3d33e18b72935ab62"}, - {file = "grpcio-1.64.1-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:5f8b75f64d5d324c565b263c67dbe4f0af595635bbdd93bb1a88189fc62ed2e5"}, - {file = "grpcio-1.64.1-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c84ad903d0d94311a2b7eea608da163dace97c5fe9412ea311e72c3684925602"}, - {file = "grpcio-1.64.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:940e3ec884520155f68a3b712d045e077d61c520a195d1a5932c531f11883489"}, - {file = "grpcio-1.64.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f10193c69fc9d3d726e83bbf0f3d316f1847c3071c8c93d8090cf5f326b14309"}, - {file = "grpcio-1.64.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ac15b6c2c80a4d1338b04d42a02d376a53395ddf0ec9ab157cbaf44191f3ffdd"}, - {file = 
"grpcio-1.64.1-cp39-cp39-win32.whl", hash = "sha256:03b43d0ccf99c557ec671c7dede64f023c7da9bb632ac65dbc57f166e4970040"}, - {file = "grpcio-1.64.1-cp39-cp39-win_amd64.whl", hash = "sha256:ed6091fa0adcc7e4ff944090cf203a52da35c37a130efa564ded02b7aff63bcd"}, - {file = "grpcio-1.64.1.tar.gz", hash = "sha256:8d51dd1c59d5fa0f34266b80a3805ec29a1f26425c2a54736133f6d87fc4968a"}, + {file = "grpcio-1.65.4-cp310-cp310-linux_armv7l.whl", hash = "sha256:0e85c8766cf7f004ab01aff6a0393935a30d84388fa3c58d77849fcf27f3e98c"}, + {file = "grpcio-1.65.4-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:e4a795c02405c7dfa8affd98c14d980f4acea16ea3b539e7404c645329460e5a"}, + {file = "grpcio-1.65.4-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:d7b984a8dd975d949c2042b9b5ebcf297d6d5af57dcd47f946849ee15d3c2fb8"}, + {file = "grpcio-1.65.4-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:644a783ce604a7d7c91412bd51cf9418b942cf71896344b6dc8d55713c71ce82"}, + {file = "grpcio-1.65.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5764237d751d3031a36fafd57eb7d36fd2c10c658d2b4057c516ccf114849a3e"}, + {file = "grpcio-1.65.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:ee40d058cf20e1dd4cacec9c39e9bce13fedd38ce32f9ba00f639464fcb757de"}, + {file = "grpcio-1.65.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4482a44ce7cf577a1f8082e807a5b909236bce35b3e3897f839f2fbd9ae6982d"}, + {file = "grpcio-1.65.4-cp310-cp310-win32.whl", hash = "sha256:66bb051881c84aa82e4f22d8ebc9d1704b2e35d7867757f0740c6ef7b902f9b1"}, + {file = "grpcio-1.65.4-cp310-cp310-win_amd64.whl", hash = "sha256:870370524eff3144304da4d1bbe901d39bdd24f858ce849b7197e530c8c8f2ec"}, + {file = "grpcio-1.65.4-cp311-cp311-linux_armv7l.whl", hash = "sha256:85e9c69378af02e483bc626fc19a218451b24a402bdf44c7531e4c9253fb49ef"}, + {file = "grpcio-1.65.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2bd672e005afab8bf0d6aad5ad659e72a06dd713020554182a66d7c0c8f47e18"}, + {file = "grpcio-1.65.4-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:abccc5d73f5988e8f512eb29341ed9ced923b586bb72e785f265131c160231d8"}, + {file = "grpcio-1.65.4-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:886b45b29f3793b0c2576201947258782d7e54a218fe15d4a0468d9a6e00ce17"}, + {file = "grpcio-1.65.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:be952436571dacc93ccc7796db06b7daf37b3b56bb97e3420e6503dccfe2f1b4"}, + {file = "grpcio-1.65.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:8dc9ddc4603ec43f6238a5c95400c9a901b6d079feb824e890623da7194ff11e"}, + {file = "grpcio-1.65.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ade1256c98cba5a333ef54636095f2c09e6882c35f76acb04412f3b1aa3c29a5"}, + {file = "grpcio-1.65.4-cp311-cp311-win32.whl", hash = "sha256:280e93356fba6058cbbfc6f91a18e958062ef1bdaf5b1caf46c615ba1ae71b5b"}, + {file = "grpcio-1.65.4-cp311-cp311-win_amd64.whl", hash = "sha256:d2b819f9ee27ed4e3e737a4f3920e337e00bc53f9e254377dd26fc7027c4d558"}, + {file = "grpcio-1.65.4-cp312-cp312-linux_armv7l.whl", hash = "sha256:926a0750a5e6fb002542e80f7fa6cab8b1a2ce5513a1c24641da33e088ca4c56"}, + {file = "grpcio-1.65.4-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:2a1d4c84d9e657f72bfbab8bedf31bdfc6bfc4a1efb10b8f2d28241efabfaaf2"}, + {file = "grpcio-1.65.4-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:17de4fda50967679677712eec0a5c13e8904b76ec90ac845d83386b65da0ae1e"}, + {file = "grpcio-1.65.4-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", 
hash = "sha256:3dee50c1b69754a4228e933696408ea87f7e896e8d9797a3ed2aeed8dbd04b74"}, + {file = "grpcio-1.65.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74c34fc7562bdd169b77966068434a93040bfca990e235f7a67cdf26e1bd5c63"}, + {file = "grpcio-1.65.4-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:24a2246e80a059b9eb981e4c2a6d8111b1b5e03a44421adbf2736cc1d4988a8a"}, + {file = "grpcio-1.65.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:18c10f0d054d2dce34dd15855fcca7cc44ec3b811139437543226776730c0f28"}, + {file = "grpcio-1.65.4-cp312-cp312-win32.whl", hash = "sha256:d72962788b6c22ddbcdb70b10c11fbb37d60ae598c51eb47ec019db66ccfdff0"}, + {file = "grpcio-1.65.4-cp312-cp312-win_amd64.whl", hash = "sha256:7656376821fed8c89e68206a522522317787a3d9ed66fb5110b1dff736a5e416"}, + {file = "grpcio-1.65.4-cp38-cp38-linux_armv7l.whl", hash = "sha256:4934077b33aa6fe0b451de8b71dabde96bf2d9b4cb2b3187be86e5adebcba021"}, + {file = "grpcio-1.65.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0cef8c919a3359847c357cb4314e50ed1f0cca070f828ee8f878d362fd744d52"}, + {file = "grpcio-1.65.4-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:a925446e6aa12ca37114840d8550f308e29026cdc423a73da3043fd1603a6385"}, + {file = "grpcio-1.65.4-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf53e6247f1e2af93657e62e240e4f12e11ee0b9cef4ddcb37eab03d501ca864"}, + {file = "grpcio-1.65.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdb34278e4ceb224c89704cd23db0d902e5e3c1c9687ec9d7c5bb4c150f86816"}, + {file = "grpcio-1.65.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:e6cbdd107e56bde55c565da5fd16f08e1b4e9b0674851d7749e7f32d8645f524"}, + {file = "grpcio-1.65.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:626319a156b1f19513156a3b0dbfe977f5f93db63ca673a0703238ebd40670d7"}, + {file = "grpcio-1.65.4-cp38-cp38-win32.whl", hash = "sha256:3d1bbf7e1dd1096378bd83c83f554d3b93819b91161deaf63e03b7022a85224a"}, + {file = "grpcio-1.65.4-cp38-cp38-win_amd64.whl", hash = "sha256:a99e6dffefd3027b438116f33ed1261c8d360f0dd4f943cb44541a2782eba72f"}, + {file = "grpcio-1.65.4-cp39-cp39-linux_armv7l.whl", hash = "sha256:874acd010e60a2ec1e30d5e505b0651ab12eb968157cd244f852b27c6dbed733"}, + {file = "grpcio-1.65.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b07f36faf01fca5427d4aa23645e2d492157d56c91fab7e06fe5697d7e171ad4"}, + {file = "grpcio-1.65.4-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:b81711bf4ec08a3710b534e8054c7dcf90f2edc22bebe11c1775a23f145595fe"}, + {file = "grpcio-1.65.4-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88fcabc332a4aef8bcefadc34a02e9ab9407ab975d2c7d981a8e12c1aed92aa1"}, + {file = "grpcio-1.65.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9ba3e63108a8749994f02c7c0e156afb39ba5bdf755337de8e75eb685be244b"}, + {file = "grpcio-1.65.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:8eb485801957a486bf5de15f2c792d9f9c897a86f2f18db8f3f6795a094b4bb2"}, + {file = "grpcio-1.65.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:075f3903bc1749ace93f2b0664f72964ee5f2da5c15d4b47e0ab68e4f442c257"}, + {file = "grpcio-1.65.4-cp39-cp39-win32.whl", hash = "sha256:0a0720299bdb2cc7306737295d56e41ce8827d5669d4a3cd870af832e3b17c4d"}, + {file = "grpcio-1.65.4-cp39-cp39-win_amd64.whl", hash = "sha256:a146bc40fa78769f22e1e9ff4f110ef36ad271b79707577bf2a31e3e931141b9"}, + {file = "grpcio-1.65.4.tar.gz", hash = "sha256:2a4f476209acffec056360d3e647ae0e14ae13dcf3dfb130c227ae1c594cbe39"}, 
] [package.extras] -protobuf = ["grpcio-tools (>=1.64.1)"] +protobuf = ["grpcio-tools (>=1.65.4)"] [[package]] name = "h5py" @@ -1274,50 +1296,50 @@ files = [ [[package]] name = "igraph" -version = "0.11.5" +version = "0.11.6" description = "High performance graph data structures and algorithms" optional = false python-versions = ">=3.8" files = [ - {file = "igraph-0.11.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:964d779bea5cfb2bf1a363b3d95f129a1f9cbce8a1effd6591a2cc0c37948447"}, - {file = "igraph-0.11.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:72b60ba8e59bb684d333c6b8da53f8bdd470812233cee505ba9bd9f13cbcb072"}, - {file = "igraph-0.11.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d3ded6abc098280529dcf8a6b5990b55f30d4293263f30044a06f69ee18b563"}, - {file = "igraph-0.11.5-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e3cd69e48922fe64239fec4e6612878af231bfdfb0f2b04716c347fd8d46830"}, - {file = "igraph-0.11.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2db4afe920bbf25302ac50eeb036f45f807d8a7e769e03ff349339ed263341f"}, - {file = "igraph-0.11.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:fb74cc4798b8110b7f6f5f4ee6d041893e97c993b5fe9a5e57650e6fa13d707e"}, - {file = "igraph-0.11.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:280ab2d7f7183f024427960467026698e8ceccdc83d22ac84d02837e92d45555"}, - {file = "igraph-0.11.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a0587634c5dd927b3579abcda98a81ca433198d2dd457f043f72c23761d3113d"}, - {file = "igraph-0.11.5-cp38-cp38-win32.whl", hash = "sha256:cd0708c62df1ff40b342f42a808d5566c924ab1f6855fe05e01eca29cd92f87a"}, - {file = "igraph-0.11.5-cp38-cp38-win_amd64.whl", hash = "sha256:37ab12421688547426a4dc34246cd3a2eb5c226b4c50a7e1156b288ef3b1890d"}, - {file = "igraph-0.11.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:164718e7dcf3096c2e73d26c4c98c5f963033af1148b1fa390ff916eb31dd31d"}, - {file = "igraph-0.11.5-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:7ea75561e921e05635e67a98735aca240e4bbc8df5909eb3b8df196fca334c31"}, - {file = "igraph-0.11.5-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9bcfd3c9308150d69644744c6bec1ff75a3dd7ea9b4fa6f9587664ef88d271a"}, - {file = "igraph-0.11.5-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:845cb179a0876c6c23eb0869bb984f85cd8131f8131595e935e902020df1327e"}, - {file = "igraph-0.11.5-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb2fd96b729ce2f6eb7a2e95ec1b522ca7359e10c5609a7e3b686777bab89f4"}, - {file = "igraph-0.11.5-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:e6eac30df9294c3d9de73c1975358600b3230e408c2292a7a2e145e42faf12a8"}, - {file = "igraph-0.11.5-cp39-abi3-musllinux_1_1_i686.whl", hash = "sha256:3c98990f785117b00ff077d96aa9080d6934a21fdd1c2c06d3f4945ac212a6ac"}, - {file = "igraph-0.11.5-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ff3ba7a82a0a1b42bfcda6a379f3de78e0dabf84d0d582083ddecc52fdc81a40"}, - {file = "igraph-0.11.5-cp39-abi3-win32.whl", hash = "sha256:db8e4942574c6b4b0afb8d219a9a8e38e75d3cc3bad68aa7f438e8604a7d4d8a"}, - {file = "igraph-0.11.5-cp39-abi3-win_amd64.whl", hash = "sha256:034c1f8b50d58c911c2ff45cf299fb64255aa0ede6fa880a75d8d7493da743d8"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:e4970886a79c1641d082a31cc2bbc17de956929e091770af0c3a37971f44568f"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = 
"sha256:213ee64c8dfe03ca2d9e6a0fa87f283decc44c79d94b75121acf97753bedc21e"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8850b29d63e218da70f8a05cb29af3166c2ccdda3b7a44902cf05a5f0d234ec8"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f68f020e4f83d6764952c48c7401316e33d0336607551711282b0177df0e9732"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa5239c3b4405eddb424ffcd3c21a4e7d3e8f170d2da54b6c8e2bd980f4c1130"}, - {file = "igraph-0.11.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:59a66ff91a9d7e60b61fb842e1c1b987e258fb95d1d366b998d3e83948939a19"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:29698baa1ada3bdbb746b03e8cfedfa950c8ebf91a3ed466ec423aa5d03552fd"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:7da25d7966f5934de245370b67b7ed47bded36d32ab21a9bc78b710d07b48732"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e00ac0542f014ef966f19629112065f6366fda4cd164b6107f8245868ebb0667"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c51a9405b47f5b14ceabc4cf47ac2a440cead0a032fac6b22f2f171b09266ea8"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49aa55e8bac8eb41f712340154d37c96b3021131ba72769006aa5ce354a275f8"}, - {file = "igraph-0.11.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:2ff0162bad7e9b88d3de3afffcb509413a1173359b10b59625ecdfc69b0602cd"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:848206e8db2d47b9a252a994105f963eb03bce0c7455c5478478a02734cd4cc3"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:ba42b1b38c8cd3fd4ed73bbe9f9f7da2d8c7457e3b61ddd235deed9593fc26c8"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63074458ba8c26057cc22db46cace78276db4b3e9f231e3b2047139979647ffc"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1a7fe9a87b9b904d14cfe0167ae8f7231a4cb951969652372b4a3b2cb23ca827"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dfdd7266080e96d0a569e13ebd6219d2bd1cfabaa4db8c5ba8380360fccd0883"}, - {file = "igraph-0.11.5-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4305185b65cb6b2e8a22c9d5e834978093d4a8a65dd71dc42db7a13f8614864d"}, - {file = "igraph-0.11.5.tar.gz", hash = "sha256:2d71d645a4c3344c5910543fabbae10d3163f46a3e824ba7753c14b9036b8233"}, + {file = "igraph-0.11.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3f8b837181e8e87676be3873ce87cc92cc234efd58a2da2f6b4e050db150fcf4"}, + {file = "igraph-0.11.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:245c4b7d7657849eff80416f5df4525c8fc44c74a981ee4d44f0ef2612c3bada"}, + {file = "igraph-0.11.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bdb7be3d165073c0136295c0808e9edc57ba096cdb26e94086abb04561f7a292"}, + {file = "igraph-0.11.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58974e20df2986a1ae52a16e51ecb387cc0cbeb41c5c0ddff4d373a1bbf1d9c5"}, + {file = "igraph-0.11.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:7bef14de5e8ab70724a43808b1ed14aaa6fe1002f87e592289027a3827a8f44a"}, + {file = "igraph-0.11.6-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:86c1e98de2e32d074df8510bf18abfa1f4c5fda4cb28a009985a5d746b0c0125"}, + {file = "igraph-0.11.6-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:ebc5b3d702158abeb2e4d2414374586a2b932e1a07e48352b470600e1733d528"}, + {file = "igraph-0.11.6-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:0efe6d0fb22d3987a800eb3857ed04df9eb4c5dddd0998be05232cb646f1c337"}, + {file = "igraph-0.11.6-cp38-cp38-win32.whl", hash = "sha256:f4e68b27497b1c8ada2fb2bc35ef3fa7b0d72e84306b3d648d3de240fc618c32"}, + {file = "igraph-0.11.6-cp38-cp38-win_amd64.whl", hash = "sha256:5665b33dfbfca5f54ce9b4fea6b97903bd0e99fb1b02acf5e57e600bdfa5a355"}, + {file = "igraph-0.11.6-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:8aabef03d787b519d1075dfc0da4a1109fb113b941334883e3e7947ac30a459e"}, + {file = "igraph-0.11.6-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:1f2cc4a518d99cdf6cae514f85e93e56852bc8c325b3abb96037d1d690b5975f"}, + {file = "igraph-0.11.6-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c1e859238be52ab8ccc614d18f9362942bc88ce543afc12548f81ae99b10801d"}, + {file = "igraph-0.11.6-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d61fbe5e85eb4ae9efe08c461f9bdeedb02a2b5739fbc223d324a71f40a28be2"}, + {file = "igraph-0.11.6-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b6620ba39df29fd42151becf82309b54e57148233c9c3ef890eed62e25eed8a5"}, + {file = "igraph-0.11.6-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:59666589bb3d07f310cda2c5106a8adeeb77c2ef27fecf1c6438b6091f4ca69d"}, + {file = "igraph-0.11.6-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:8750b6d6caebf199cf7dc41c931f58e330153779707391e30f0a29f02666fb6e"}, + {file = "igraph-0.11.6-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:967d6f2c30fe94317da15e459374d0fb8ca3e56020412f201ecd07dd5b5352f2"}, + {file = "igraph-0.11.6-cp39-abi3-win32.whl", hash = "sha256:9744f95a67319eb6cb487ceabf30f5d7940de34bada51f0ba63adbd23e0f94ad"}, + {file = "igraph-0.11.6-cp39-abi3-win_amd64.whl", hash = "sha256:b80e69eb11faa9c57330a9ffebdde5808966efe1c1f638d4d4827ea04df7aca8"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0329c16092e2ea7930d5f8368666ce7cb704900cc0ea04e4afe9ea1dd46e44af"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:21752313f449bd8688e5688e95ea7231cea5e9199c7162535029be0d9af848ac"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea25e136c6c4161f53ff58868b23ff6c845193050ab0e502236d68e5d4174e32"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac84433a03aef15e4b810010b08882b09854a3669450ccf31e392dbe295d2a66"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac697a44e3573169fa2b28c9c37dcf9cf01e0f558b845dd7123860d4c7c8fb89"}, + {file = "igraph-0.11.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:bdeae8bf35316eb1fb27bf667dcf5ecf5fcfb0b8f51831bc1b00c39c09c2d73b"}, + {file = "igraph-0.11.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ad7e4aa442935de72554b96733bf6d7f09eac5cee97988a2562bdd3ca173cfa3"}, + {file = "igraph-0.11.6-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:8d2818780358a686178866d01568b9df1f29678581734ad7a78882bab54df004"}, + {file = 
"igraph-0.11.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2352276a20d979f1dea360af4202bb9f0c9a7d2c77f51815c0e625165e82013d"}, + {file = "igraph-0.11.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:687fdab543b507d622fa3043f4227e5b26dc61dcf8ff8c0919fccddcc655f8b8"}, + {file = "igraph-0.11.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:57f7f8214cd48c9a4d97f7346a4152ba2d4ac95fb5ee0df4ecf224fce4ba3d14"}, + {file = "igraph-0.11.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:2b9cc69ede53f76ffae03b066609aa90184dd68ef15da8c104a97cebb9210838"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:591e1e447c3f0092daf7613a3eaedab83f9a0b0adbaf7702724c5117ded038a5"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:ca558eb331bc687bc33e5cd23717e22676e9412f8cda3a31d30c996a0487610d"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf43c30e08debb087c9e3da69aa5cf1b6732968da34d55a614e3421b9a452146"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d38e8d7db72b187d9d2211d0d06b3271fa9f32b04d49d789e2859b5480db0d0"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a318b059051ff78144a1c3cb880f4d933c812bcdb3d833a49cd7168d0427672"}, + {file = "igraph-0.11.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2c54027add809b3c5b6685b8deca4ea4763fd000b9ea45c7ee46b7c9d61ff15e"}, + {file = "igraph-0.11.6.tar.gz", hash = "sha256:837f233256c3319f2a35a6a80d94eafe47b43791ef4c6f9e9871061341ac8e28"}, ] [package.dependencies] @@ -1345,13 +1367,13 @@ files = [ [[package]] name = "importlib-metadata" -version = "8.0.0" +version = "8.2.0" description = "Read metadata from Python packages" optional = false python-versions = ">=3.8" files = [ - {file = "importlib_metadata-8.0.0-py3-none-any.whl", hash = "sha256:15584cf2b1bf449d98ff8a6ff1abef57bf20f3ac6454f431736cd3e660921b2f"}, - {file = "importlib_metadata-8.0.0.tar.gz", hash = "sha256:188bd24e4c346d3f0a933f275c2fec67050326a856b9a359881d7c2a697e8812"}, + {file = "importlib_metadata-8.2.0-py3-none-any.whl", hash = "sha256:11901fa0c2f97919b288679932bb64febaeacf289d18ac84dd68cb2e74213369"}, + {file = "importlib_metadata-8.2.0.tar.gz", hash = "sha256:72e8d4399996132204f9a16dcc751af254a48f8d1b20b9ff0f98d4a8f901e73d"}, ] [package.dependencies] @@ -1513,21 +1535,21 @@ testing = ["portend", "pytest (>=6,!=8.1.1)", "pytest-checkdocs (>=2.4)", "pytes [[package]] name = "jaraco-functools" -version = "4.0.1" +version = "4.0.2" description = "Functools like those found in stdlib" optional = false python-versions = ">=3.8" files = [ - {file = "jaraco.functools-4.0.1-py3-none-any.whl", hash = "sha256:3b24ccb921d6b593bdceb56ce14799204f473976e2a9d4b15b04d0f2c2326664"}, - {file = "jaraco_functools-4.0.1.tar.gz", hash = "sha256:d33fa765374c0611b52f8b3a795f8900869aa88c84769d4d1746cd68fb28c3e8"}, + {file = "jaraco.functools-4.0.2-py3-none-any.whl", hash = "sha256:c9d16a3ed4ccb5a889ad8e0b7a343401ee5b2a71cee6ed192d3f68bc351e94e3"}, + {file = "jaraco_functools-4.0.2.tar.gz", hash = "sha256:3460c74cd0d32bf82b9576bbb3527c4364d5b27a21f5158a62aed6c4b42e23f5"}, ] [package.dependencies] more-itertools = "*" [package.extras] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] 
-testing = ["jaraco.classes", "pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] +doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +test = ["jaraco.classes", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] [[package]] name = "jedi" @@ -1593,13 +1615,13 @@ files = [ [[package]] name = "jsonschema" -version = "4.22.0" +version = "4.23.0" description = "An implementation of JSON Schema validation for Python" optional = false python-versions = ">=3.8" files = [ - {file = "jsonschema-4.22.0-py3-none-any.whl", hash = "sha256:ff4cfd6b1367a40e7bc6411caec72effadd3db0bbe5017de188f2d6108335802"}, - {file = "jsonschema-4.22.0.tar.gz", hash = "sha256:5b22d434a45935119af990552c862e5d6d564e8f6601206b305a61fdf661a2b7"}, + {file = "jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566"}, + {file = "jsonschema-4.23.0.tar.gz", hash = "sha256:d71497fef26351a33265337fa77ffeb82423f3ea21283cd9467bb03999266bc4"}, ] [package.dependencies] @@ -1610,7 +1632,7 @@ rpds-py = ">=0.7.1" [package.extras] format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"] -format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"] +format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=24.6.0)"] [[package]] name = "jsonschema-specifications" @@ -1698,13 +1720,13 @@ test = ["ipykernel", "pre-commit", "pytest (<8)", "pytest-cov", "pytest-timeout" [[package]] name = "jupytext" -version = "1.16.2" +version = "1.16.4" description = "Jupyter notebooks as Markdown documents, Julia, Python or R scripts" optional = false python-versions = ">=3.8" files = [ - {file = "jupytext-1.16.2-py3-none-any.whl", hash = "sha256:197a43fef31dca612b68b311e01b8abd54441c7e637810b16b6cb8f2ab66065e"}, - {file = "jupytext-1.16.2.tar.gz", hash = "sha256:8627dd9becbbebd79cc4a4ed4727d89d78e606b4b464eab72357b3b029023a14"}, + {file = "jupytext-1.16.4-py3-none-any.whl", hash = "sha256:76989d2690e65667ea6fb411d8056abe7cd0437c07bd774660b83d62acf9490a"}, + {file = "jupytext-1.16.4.tar.gz", hash = "sha256:28e33f46f2ce7a41fb9d677a4a2c95327285579b64ca104437c4b9eb1e4174e9"}, ] [package.dependencies] @@ -1716,24 +1738,24 @@ pyyaml = "*" tomli = {version = "*", markers = "python_version < \"3.11\""} [package.extras] -dev = ["autopep8", "black", "flake8", "gitpython", "ipykernel", "isort", "jupyter-fs (<0.4.0)", "jupyter-server (!=2.11)", "nbconvert", "pre-commit", "pytest", "pytest-cov (>=2.6.1)", "pytest-randomly", "pytest-xdist", "sphinx-gallery (<0.8)"] +dev = ["autopep8", "black", "flake8", "gitpython", "ipykernel", "isort", "jupyter-fs (>=1.0)", "jupyter-server (!=2.11)", "nbconvert", "pre-commit", "pytest", "pytest-cov (>=2.6.1)", "pytest-randomly", "pytest-xdist", "sphinx-gallery (<0.8)"] docs = ["myst-parser", "sphinx", "sphinx-copybutton", "sphinx-rtd-theme"] test = ["pytest", "pytest-randomly", "pytest-xdist"] test-cov = ["ipykernel", "jupyter-server (!=2.11)", "nbconvert", "pytest", "pytest-cov (>=2.6.1)", "pytest-randomly", "pytest-xdist"] -test-external = ["autopep8", "black", "flake8", "gitpython", 
"ipykernel", "isort", "jupyter-fs (<0.4.0)", "jupyter-server (!=2.11)", "nbconvert", "pre-commit", "pytest", "pytest-randomly", "pytest-xdist", "sphinx-gallery (<0.8)"] +test-external = ["autopep8", "black", "flake8", "gitpython", "ipykernel", "isort", "jupyter-fs (>=1.0)", "jupyter-server (!=2.11)", "nbconvert", "pre-commit", "pytest", "pytest-randomly", "pytest-xdist", "sphinx-gallery (<0.8)"] test-functional = ["pytest", "pytest-randomly", "pytest-xdist"] test-integration = ["ipykernel", "jupyter-server (!=2.11)", "nbconvert", "pytest", "pytest-randomly", "pytest-xdist"] test-ui = ["calysto-bash"] [[package]] name = "keyring" -version = "25.2.1" +version = "25.3.0" description = "Store and access your passwords safely." optional = false python-versions = ">=3.8" files = [ - {file = "keyring-25.2.1-py3-none-any.whl", hash = "sha256:2458681cdefc0dbc0b7eb6cf75d0b98e59f9ad9b2d4edd319d18f68bdca95e50"}, - {file = "keyring-25.2.1.tar.gz", hash = "sha256:daaffd42dbda25ddafb1ad5fec4024e5bbcfe424597ca1ca452b299861e49f1b"}, + {file = "keyring-25.3.0-py3-none-any.whl", hash = "sha256:8d963da00ccdf06e356acd9bf3b743208878751032d8599c6cc89eb51310ffae"}, + {file = "keyring-25.3.0.tar.gz", hash = "sha256:8d85a1ea5d6db8515b59e1c5d1d1678b03cf7fc8b8dcfb1651e8c4a524eb42ef"}, ] [package.dependencies] @@ -1747,8 +1769,8 @@ SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} [package.extras] completion = ["shtab (>=1.1.0)"] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] -testing = ["pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] +doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +test = ["pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-ruff (>=0.2.1)"] [[package]] name = "kiwisolver" @@ -1952,13 +1974,13 @@ test = ["codecov (==2.1.12)", "coverage (==6.5.0)", "pre-commit (==2.20.0)", "py [[package]] name = "lightning-utilities" -version = "0.11.3.post0" +version = "0.11.6" description = "Lightning toolbox for across the our ecosystem." 
optional = false python-versions = ">=3.8" files = [ - {file = "lightning_utilities-0.11.3.post0-py3-none-any.whl", hash = "sha256:2aec1d067e5ab61a8978f879998850a97f9a3764ee54aade329552706b0d189b"}, - {file = "lightning_utilities-0.11.3.post0.tar.gz", hash = "sha256:7485fad0e3c5607a6bde4507935689c553a2c91325de2127b4bb8171a601e236"}, + {file = "lightning_utilities-0.11.6-py3-none-any.whl", hash = "sha256:ecd9953c316cbaf56ad820fbe7bd062187b9973c4a23d47b076cd59dc080a310"}, + {file = "lightning_utilities-0.11.6.tar.gz", hash = "sha256:79fc27ef8ec8b8d55a537920f2c7610270c0c9e037fa6efc78f1aa34ec8cdf04"}, ] [package.dependencies] @@ -2114,40 +2136,40 @@ files = [ [[package]] name = "matplotlib" -version = "3.9.1" +version = "3.9.1.post1" description = "Python plotting package" optional = false python-versions = ">=3.9" files = [ - {file = "matplotlib-3.9.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:7ccd6270066feb9a9d8e0705aa027f1ff39f354c72a87efe8fa07632f30fc6bb"}, - {file = "matplotlib-3.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:591d3a88903a30a6d23b040c1e44d1afdd0d778758d07110eb7596f811f31842"}, - {file = "matplotlib-3.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dd2a59ff4b83d33bca3b5ec58203cc65985367812cb8c257f3e101632be86d92"}, - {file = "matplotlib-3.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0fc001516ffcf1a221beb51198b194d9230199d6842c540108e4ce109ac05cc0"}, - {file = "matplotlib-3.9.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:83c6a792f1465d174c86d06f3ae85a8fe36e6f5964633ae8106312ec0921fdf5"}, - {file = "matplotlib-3.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:421851f4f57350bcf0811edd754a708d2275533e84f52f6760b740766c6747a7"}, - {file = "matplotlib-3.9.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:b3fce58971b465e01b5c538f9d44915640c20ec5ff31346e963c9e1cd66fa812"}, - {file = "matplotlib-3.9.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a973c53ad0668c53e0ed76b27d2eeeae8799836fd0d0caaa4ecc66bf4e6676c0"}, - {file = "matplotlib-3.9.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:82cd5acf8f3ef43f7532c2f230249720f5dc5dd40ecafaf1c60ac8200d46d7eb"}, - {file = "matplotlib-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ab38a4f3772523179b2f772103d8030215b318fef6360cb40558f585bf3d017f"}, - {file = "matplotlib-3.9.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:2315837485ca6188a4b632c5199900e28d33b481eb083663f6a44cfc8987ded3"}, - {file = "matplotlib-3.9.1-cp311-cp311-win_amd64.whl", hash = "sha256:a0c977c5c382f6696caf0bd277ef4f936da7e2aa202ff66cad5f0ac1428ee15b"}, - {file = "matplotlib-3.9.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:565d572efea2b94f264dd86ef27919515aa6d629252a169b42ce5f570db7f37b"}, - {file = "matplotlib-3.9.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6d397fd8ccc64af2ec0af1f0efc3bacd745ebfb9d507f3f552e8adb689ed730a"}, - {file = "matplotlib-3.9.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26040c8f5121cd1ad712abffcd4b5222a8aec3a0fe40bc8542c94331deb8780d"}, - {file = "matplotlib-3.9.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d12cb1837cffaac087ad6b44399d5e22b78c729de3cdae4629e252067b705e2b"}, - {file = "matplotlib-3.9.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0e835c6988edc3d2d08794f73c323cc62483e13df0194719ecb0723b564e0b5c"}, - {file = "matplotlib-3.9.1-cp312-cp312-win_amd64.whl", hash = 
"sha256:44a21d922f78ce40435cb35b43dd7d573cf2a30138d5c4b709d19f00e3907fd7"}, - {file = "matplotlib-3.9.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:0c584210c755ae921283d21d01f03a49ef46d1afa184134dd0f95b0202ee6f03"}, - {file = "matplotlib-3.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:11fed08f34fa682c2b792942f8902e7aefeed400da71f9e5816bea40a7ce28fe"}, - {file = "matplotlib-3.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0000354e32efcfd86bda75729716b92f5c2edd5b947200be9881f0a671565c33"}, - {file = "matplotlib-3.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4db17fea0ae3aceb8e9ac69c7e3051bae0b3d083bfec932240f9bf5d0197a049"}, - {file = "matplotlib-3.9.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:208cbce658b72bf6a8e675058fbbf59f67814057ae78165d8a2f87c45b48d0ff"}, - {file = "matplotlib-3.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:dc23f48ab630474264276be156d0d7710ac6c5a09648ccdf49fef9200d8cbe80"}, - {file = "matplotlib-3.9.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3fda72d4d472e2ccd1be0e9ccb6bf0d2eaf635e7f8f51d737ed7e465ac020cb3"}, - {file = "matplotlib-3.9.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:84b3ba8429935a444f1fdc80ed930babbe06725bcf09fbeb5c8757a2cd74af04"}, - {file = "matplotlib-3.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b918770bf3e07845408716e5bbda17eadfc3fcbd9307dc67f37d6cf834bb3d98"}, - {file = "matplotlib-3.9.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:f1f2e5d29e9435c97ad4c36fb6668e89aee13d48c75893e25cef064675038ac9"}, - {file = "matplotlib-3.9.1.tar.gz", hash = "sha256:de06b19b8db95dd33d0dc17c926c7c9ebed9f572074b6fac4f65068a6814d010"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3779ad3e8b72df22b8a622c5796bbcfabfa0069b835412e3c1dec8ee3de92d0c"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ec400340f8628e8e2260d679078d4e9b478699f386e5cc8094e80a1cb0039c7c"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:82c18791b8862ea095081f745b81f896b011c5a5091678fb33204fef641476af"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:621a628389c09a6b9f609a238af8e66acecece1cfa12febc5fe4195114ba7446"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:9a54734ca761ebb27cd4f0b6c2ede696ab6861052d7d7e7b8f7a6782665115f5"}, + {file = "matplotlib-3.9.1.post1-cp310-cp310-win_amd64.whl", hash = "sha256:0721f93db92311bb514e446842e2b21c004541dcca0281afa495053e017c5458"}, + {file = "matplotlib-3.9.1.post1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:b08b46058fe2a31ecb81ef6aa3611f41d871f6a8280e9057cb4016cb3d8e894a"}, + {file = "matplotlib-3.9.1.post1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:22b344e84fcc574f561b5731f89a7625db8ef80cdbb0026a8ea855a33e3429d1"}, + {file = "matplotlib-3.9.1.post1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4b49fee26d64aefa9f061b575f0f7b5fc4663e51f87375c7239efa3d30d908fa"}, + {file = "matplotlib-3.9.1.post1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89eb7e89e2b57856533c5c98f018aa3254fa3789fcd86d5f80077b9034a54c9a"}, + {file = "matplotlib-3.9.1.post1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c06e742bade41fda6176d4c9c78c9ea016e176cd338e62a1686384cb1eb8de41"}, + {file = 
"matplotlib-3.9.1.post1-cp311-cp311-win_amd64.whl", hash = "sha256:c44edab5b849e0fc1f1c9d6e13eaa35ef65925f7be45be891d9784709ad95561"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:bf28b09986aee06393e808e661c3466be9c21eff443c9bc881bce04bfbb0c500"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:92aeb8c439d4831510d8b9d5e39f31c16c7f37873879767c26b147cef61e54cd"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15798b0691b45c80d3320358a88ce5a9d6f518b28575b3ea3ed31b4bd95d009"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d59fc6096da7b9c1df275f9afc3fef5cbf634c21df9e5f844cba3dd8deb1847d"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ab986817a32a70ce22302438691e7df4c6ee4a844d47289db9d583d873491e0b"}, + {file = "matplotlib-3.9.1.post1-cp312-cp312-win_amd64.whl", hash = "sha256:0d78e7d2d86c4472da105d39aba9b754ed3dfeaeaa4ac7206b82706e0a5362fa"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:bd07eba6431b4dc9253cce6374a28c415e1d3a7dc9f8aba028ea7592f06fe172"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ca230cc4482010d646827bd2c6d140c98c361e769ae7d954ebf6fff2a226f5b1"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ace27c0fdeded399cbc43f22ffa76e0f0752358f5b33106ec7197534df08725a"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a4f3aeb7ba14c497dc6f021a076c48c2e5fbdf3da1e7264a5d649683e284a2f"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:23f96fbd4ff4cfa9b8a6b685a65e7eb3c2ced724a8d965995ec5c9c2b1f7daf5"}, + {file = "matplotlib-3.9.1.post1-cp39-cp39-win_amd64.whl", hash = "sha256:2808b95452b4ffa14bfb7c7edffc5350743c31bda495f0d63d10fdd9bc69e895"}, + {file = "matplotlib-3.9.1.post1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:ffc91239f73b4179dec256b01299d46d0ffa9d27d98494bc1476a651b7821cbe"}, + {file = "matplotlib-3.9.1.post1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:f965ebca9fd4feaaca45937c4849d92b70653057497181100fcd1e18161e5f29"}, + {file = "matplotlib-3.9.1.post1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:801ee9323fd7b2da0d405aebbf98d1da77ea430bbbbbec6834c0b3af15e5db44"}, + {file = "matplotlib-3.9.1.post1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:50113e9b43ceb285739f35d43db36aa752fb8154325b35d134ff6e177452f9ec"}, + {file = "matplotlib-3.9.1.post1.tar.gz", hash = "sha256:c91e585c65092c975a44dc9d4239ba8c594ba3c193d7c478b6d178c4ef61f406"}, ] [package.dependencies] @@ -2221,13 +2243,13 @@ files = [ [[package]] name = "more-itertools" -version = "10.3.0" +version = "10.4.0" description = "More routines for operating on iterables, beyond itertools" optional = false python-versions = ">=3.8" files = [ - {file = "more-itertools-10.3.0.tar.gz", hash = "sha256:e5d93ef411224fbcef366a6e8ddc4c5781bc6359d43412a65dd5964e46111463"}, - {file = "more_itertools-10.3.0-py3-none-any.whl", hash = "sha256:ea6a02e24a9161e51faad17a8782b92a0df82c12c1c8886fec7f0c3fa1a1b320"}, + {file = "more-itertools-10.4.0.tar.gz", hash = "sha256:fe0e63c4ab068eac62410ab05cccca2dc71ec44ba8ef29916a0090df061cf923"}, + {file = "more_itertools-10.4.0-py3-none-any.whl", hash = 
"sha256:0f7d9f83a0a8dcfa8a2694a770590d98a67ea943e3d9f5298309a484758c4e27"}, ] [[package]] @@ -2500,27 +2522,27 @@ test = ["pytest (>=7.2)", "pytest-cov (>=4.0)"] [[package]] name = "nh3" -version = "0.2.17" +version = "0.2.18" description = "Python bindings to the ammonia HTML sanitization library." optional = false python-versions = "*" files = [ - {file = "nh3-0.2.17-cp37-abi3-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:551672fd71d06cd828e282abdb810d1be24e1abb7ae2543a8fa36a71c1006fe9"}, - {file = "nh3-0.2.17-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:c551eb2a3876e8ff2ac63dff1585236ed5dfec5ffd82216a7a174f7c5082a78a"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:66f17d78826096291bd264f260213d2b3905e3c7fae6dfc5337d49429f1dc9f3"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0316c25b76289cf23be6b66c77d3608a4fdf537b35426280032f432f14291b9a"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:22c26e20acbb253a5bdd33d432a326d18508a910e4dcf9a3316179860d53345a"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:85cdbcca8ef10733bd31f931956f7fbb85145a4d11ab9e6742bbf44d88b7e351"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:40015514022af31975c0b3bca4014634fa13cb5dc4dbcbc00570acc781316dcc"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ba73a2f8d3a1b966e9cdba7b211779ad8a2561d2dba9674b8a19ed817923f65f"}, - {file = "nh3-0.2.17-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c21bac1a7245cbd88c0b0e4a420221b7bfa838a2814ee5bb924e9c2f10a1120b"}, - {file = "nh3-0.2.17-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:d7a25fd8c86657f5d9d576268e3b3767c5cd4f42867c9383618be8517f0f022a"}, - {file = "nh3-0.2.17-cp37-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:c790769152308421283679a142dbdb3d1c46c79c823008ecea8e8141db1a2062"}, - {file = "nh3-0.2.17-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:b4427ef0d2dfdec10b641ed0bdaf17957eb625b2ec0ea9329b3d28806c153d71"}, - {file = "nh3-0.2.17-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a3f55fabe29164ba6026b5ad5c3151c314d136fd67415a17660b4aaddacf1b10"}, - {file = "nh3-0.2.17-cp37-abi3-win32.whl", hash = "sha256:1a814dd7bba1cb0aba5bcb9bebcc88fd801b63e21e2450ae6c52d3b3336bc911"}, - {file = "nh3-0.2.17-cp37-abi3-win_amd64.whl", hash = "sha256:1aa52a7def528297f256de0844e8dd680ee279e79583c76d6fa73a978186ddfb"}, - {file = "nh3-0.2.17.tar.gz", hash = "sha256:40d0741a19c3d645e54efba71cb0d8c475b59135c1e3c580f879ad5514cbf028"}, + {file = "nh3-0.2.18-cp37-abi3-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:14c5a72e9fe82aea5fe3072116ad4661af5cf8e8ff8fc5ad3450f123e4925e86"}, + {file = "nh3-0.2.18-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:7b7c2a3c9eb1a827d42539aa64091640bd275b81e097cd1d8d82ef91ffa2e811"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42c64511469005058cd17cc1537578eac40ae9f7200bedcfd1fc1a05f4f8c200"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0411beb0589eacb6734f28d5497ca2ed379eafab8ad8c84b31bb5c34072b7164"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = 
"sha256:5f36b271dae35c465ef5e9090e1fdaba4a60a56f0bb0ba03e0932a66f28b9189"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:34c03fa78e328c691f982b7c03d4423bdfd7da69cd707fe572f544cf74ac23ad"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:19aaba96e0f795bd0a6c56291495ff59364f4300d4a39b29a0abc9cb3774a84b"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de3ceed6e661954871d6cd78b410213bdcb136f79aafe22aa7182e028b8c7307"}, + {file = "nh3-0.2.18-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6955369e4d9f48f41e3f238a9e60f9410645db7e07435e62c6a9ea6135a4907f"}, + {file = "nh3-0.2.18-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:f0eca9ca8628dbb4e916ae2491d72957fdd35f7a5d326b7032a345f111ac07fe"}, + {file = "nh3-0.2.18-cp37-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:3a157ab149e591bb638a55c8c6bcb8cdb559c8b12c13a8affaba6cedfe51713a"}, + {file = "nh3-0.2.18-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:c8b3a1cebcba9b3669ed1a84cc65bf005728d2f0bc1ed2a6594a992e817f3a50"}, + {file = "nh3-0.2.18-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:36c95d4b70530b320b365659bb5034341316e6a9b30f0b25fa9c9eff4c27a204"}, + {file = "nh3-0.2.18-cp37-abi3-win32.whl", hash = "sha256:a7f1b5b2c15866f2db413a3649a8fe4fd7b428ae58be2c0f6bca5eefd53ca2be"}, + {file = "nh3-0.2.18-cp37-abi3-win_amd64.whl", hash = "sha256:8ce0f819d2f1933953fca255db2471ad58184a60508f03e6285e5114b6254844"}, + {file = "nh3-0.2.18.tar.gz", hash = "sha256:94a166927e53972a9698af9542ace4e38b9de50c34352b962f4d9a7d4c927af4"}, ] [[package]] @@ -2725,14 +2747,14 @@ files = [ [[package]] name = "nvidia-nvjitlink-cu12" -version = "12.5.82" +version = "12.6.20" description = "Nvidia JIT LTO Library" optional = false python-versions = ">=3" files = [ - {file = "nvidia_nvjitlink_cu12-12.5.82-py3-none-manylinux2014_aarch64.whl", hash = "sha256:98103729cc5226e13ca319a10bbf9433bbbd44ef64fe72f45f067cacc14b8d27"}, - {file = "nvidia_nvjitlink_cu12-12.5.82-py3-none-manylinux2014_x86_64.whl", hash = "sha256:f9b37bc5c8cf7509665cb6ada5aaa0ce65618f2332b7d3e78e9790511f111212"}, - {file = "nvidia_nvjitlink_cu12-12.5.82-py3-none-win_amd64.whl", hash = "sha256:e782564d705ff0bf61ac3e1bf730166da66dd2fe9012f111ede5fc49b64ae697"}, + {file = "nvidia_nvjitlink_cu12-12.6.20-py3-none-manylinux2014_aarch64.whl", hash = "sha256:84fb38465a5bc7c70cbc320cfd0963eb302ee25a5e939e9f512bbba55b6072fb"}, + {file = "nvidia_nvjitlink_cu12-12.6.20-py3-none-manylinux2014_x86_64.whl", hash = "sha256:562ab97ea2c23164823b2a89cb328d01d45cb99634b8c65fe7cd60d14562bd79"}, + {file = "nvidia_nvjitlink_cu12-12.6.20-py3-none-win_amd64.whl", hash = "sha256:ed3c43a17f37b0c922a919203d2d36cbef24d41cc3e6b625182f8b58203644f6"}, ] [[package]] @@ -2796,7 +2818,11 @@ files = [ ] [package.dependencies] -numpy = {version = ">=1.22.4", markers = "python_version < \"3.11\""} +numpy = [ + {version = ">=1.22.4", markers = "python_version < \"3.11\""}, + {version = ">=1.23.2", markers = "python_version == \"3.11\""}, + {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, +] python-dateutil = ">=2.8.2" pytz = ">=2020.1" tzdata = ">=2022.7" @@ -3061,22 +3087,22 @@ wcwidth = "*" [[package]] name = "protobuf" -version = "4.25.3" +version = "4.25.4" description = "" optional = false python-versions = ">=3.8" files = [ - {file = "protobuf-4.25.3-cp310-abi3-win32.whl", hash = 
"sha256:d4198877797a83cbfe9bffa3803602bbe1625dc30d8a097365dbc762e5790faa"}, - {file = "protobuf-4.25.3-cp310-abi3-win_amd64.whl", hash = "sha256:209ba4cc916bab46f64e56b85b090607a676f66b473e6b762e6f1d9d591eb2e8"}, - {file = "protobuf-4.25.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:f1279ab38ecbfae7e456a108c5c0681e4956d5b1090027c1de0f934dfdb4b35c"}, - {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:e7cb0ae90dd83727f0c0718634ed56837bfeeee29a5f82a7514c03ee1364c019"}, - {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:7c8daa26095f82482307bc717364e7c13f4f1c99659be82890dcfc215194554d"}, - {file = "protobuf-4.25.3-cp38-cp38-win32.whl", hash = "sha256:f4f118245c4a087776e0a8408be33cf09f6c547442c00395fbfb116fac2f8ac2"}, - {file = "protobuf-4.25.3-cp38-cp38-win_amd64.whl", hash = "sha256:c053062984e61144385022e53678fbded7aea14ebb3e0305ae3592fb219ccfa4"}, - {file = "protobuf-4.25.3-cp39-cp39-win32.whl", hash = "sha256:19b270aeaa0099f16d3ca02628546b8baefe2955bbe23224aaf856134eccf1e4"}, - {file = "protobuf-4.25.3-cp39-cp39-win_amd64.whl", hash = "sha256:e3c97a1555fd6388f857770ff8b9703083de6bf1f9274a002a332d65fbb56c8c"}, - {file = "protobuf-4.25.3-py3-none-any.whl", hash = "sha256:f0700d54bcf45424477e46a9f0944155b46fb0639d69728739c0e47bab83f2b9"}, - {file = "protobuf-4.25.3.tar.gz", hash = "sha256:25b5d0b42fd000320bd7830b349e3b696435f3b329810427a6bcce6a5492cc5c"}, + {file = "protobuf-4.25.4-cp310-abi3-win32.whl", hash = "sha256:db9fd45183e1a67722cafa5c1da3e85c6492a5383f127c86c4c4aa4845867dc4"}, + {file = "protobuf-4.25.4-cp310-abi3-win_amd64.whl", hash = "sha256:ba3d8504116a921af46499471c63a85260c1a5fc23333154a427a310e015d26d"}, + {file = "protobuf-4.25.4-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:eecd41bfc0e4b1bd3fa7909ed93dd14dd5567b98c941d6c1ad08fdcab3d6884b"}, + {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:4c8a70fdcb995dcf6c8966cfa3a29101916f7225e9afe3ced4395359955d3835"}, + {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:3319e073562e2515c6ddc643eb92ce20809f5d8f10fead3332f71c63be6a7040"}, + {file = "protobuf-4.25.4-cp38-cp38-win32.whl", hash = "sha256:7e372cbbda66a63ebca18f8ffaa6948455dfecc4e9c1029312f6c2edcd86c4e1"}, + {file = "protobuf-4.25.4-cp38-cp38-win_amd64.whl", hash = "sha256:051e97ce9fa6067a4546e75cb14f90cf0232dcb3e3d508c448b8d0e4265b61c1"}, + {file = "protobuf-4.25.4-cp39-cp39-win32.whl", hash = "sha256:90bf6fd378494eb698805bbbe7afe6c5d12c8e17fca817a646cd6a1818c696ca"}, + {file = "protobuf-4.25.4-cp39-cp39-win_amd64.whl", hash = "sha256:ac79a48d6b99dfed2729ccccee547b34a1d3d63289c71cef056653a846a2240f"}, + {file = "protobuf-4.25.4-py3-none-any.whl", hash = "sha256:bfbebc1c8e4793cfd58589acfb8a1026be0003e852b9da7db5a4285bde996978"}, + {file = "protobuf-4.25.4.tar.gz", hash = "sha256:0dc4a62cc4052a036ee2204d26fe4d835c62827c855c8a03f29fe6da146b380d"}, ] [[package]] @@ -3121,13 +3147,13 @@ files = [ [[package]] name = "pure-eval" -version = "0.2.2" +version = "0.2.3" description = "Safely evaluate AST nodes without side effects" optional = false python-versions = "*" files = [ - {file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"}, - {file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"}, + {file = "pure_eval-0.2.3-py3-none-any.whl", hash = "sha256:1db8e35b67b3d218d818ae653e27f06c3aa420901fa7b081ca98cbedc874e0d0"}, + {file = 
"pure_eval-0.2.3.tar.gz", hash = "sha256:5f4e983f40564c576c7c8635ae88db5956bb2229d7e9237d03b3c0b0190eaf42"}, ] [package.extras] @@ -3239,34 +3265,33 @@ six = ">=1.5" [[package]] name = "pytorch-lightning" -version = "2.3.3" +version = "2.4.0" description = "PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate." optional = false python-versions = ">=3.8" files = [ - {file = "pytorch-lightning-2.3.3.tar.gz", hash = "sha256:5f974015425af6873b5689246c5495ca12686b446751479273c154b73aeea843"}, - {file = "pytorch_lightning-2.3.3-py3-none-any.whl", hash = "sha256:4365e3f2874e223e63cb42628d24c88c2bdc8d1794453cac38c0619b31115fba"}, + {file = "pytorch-lightning-2.4.0.tar.gz", hash = "sha256:6aa897fd9d6dfa7b7b49f37c2f04e13592861831d08deae584dfda423fdb71c8"}, + {file = "pytorch_lightning-2.4.0-py3-none-any.whl", hash = "sha256:9ac7935229ac022ef06994c928217ed37f525ac6700f7d4fc57009624570e655"}, ] [package.dependencies] fsspec = {version = ">=2022.5.0", extras = ["http"]} lightning-utilities = ">=0.10.0" -numpy = ">=1.17.2" packaging = ">=20.0" PyYAML = ">=5.4" -torch = ">=2.0.0" +torch = ">=2.1.0" torchmetrics = ">=0.7.0" tqdm = ">=4.57.0" typing-extensions = ">=4.4.0" [package.extras] -all = ["bitsandbytes (>=0.42.0)", "deepspeed (>=0.8.2,<=0.9.3)", "hydra-core (>=1.2.0)", "ipython[all] (<8.15.0)", "jsonargparse[signatures] (>=4.27.7)", "lightning-utilities (>=0.8.0)", "matplotlib (>3.1)", "omegaconf (>=2.2.3)", "requests (<2.32.0)", "rich (>=12.3.0)", "tensorboardX (>=2.2)", "torchmetrics (>=0.10.0)", "torchvision (>=0.15.0)"] +all = ["bitsandbytes (>=0.42.0)", "deepspeed (>=0.8.2,<=0.9.3)", "hydra-core (>=1.2.0)", "ipython[all] (<8.15.0)", "jsonargparse[signatures] (>=4.27.7)", "lightning-utilities (>=0.8.0)", "matplotlib (>3.1)", "omegaconf (>=2.2.3)", "requests (<2.32.0)", "rich (>=12.3.0)", "tensorboardX (>=2.2)", "torchmetrics (>=0.10.0)", "torchvision (>=0.16.0)"] deepspeed = ["deepspeed (>=0.8.2,<=0.9.3)"] -dev = ["bitsandbytes (>=0.42.0)", "cloudpickle (>=1.3)", "coverage (==7.3.1)", "deepspeed (>=0.8.2,<=0.9.3)", "fastapi", "hydra-core (>=1.2.0)", "ipython[all] (<8.15.0)", "jsonargparse[signatures] (>=4.27.7)", "lightning-utilities (>=0.8.0)", "matplotlib (>3.1)", "omegaconf (>=2.2.3)", "onnx (>=0.14.0)", "onnxruntime (>=0.15.0)", "pandas (>1.0)", "psutil (<5.9.6)", "pytest (==7.4.0)", "pytest-cov (==4.1.0)", "pytest-random-order (==1.1.0)", "pytest-rerunfailures (==12.0)", "pytest-timeout (==2.1.0)", "requests (<2.32.0)", "rich (>=12.3.0)", "scikit-learn (>0.22.1)", "tensorboard (>=2.9.1)", "tensorboardX (>=2.2)", "torchmetrics (>=0.10.0)", "torchvision (>=0.15.0)", "uvicorn"] -examples = ["ipython[all] (<8.15.0)", "lightning-utilities (>=0.8.0)", "requests (<2.32.0)", "torchmetrics (>=0.10.0)", "torchvision (>=0.15.0)"] +dev = ["bitsandbytes (>=0.42.0)", "cloudpickle (>=1.3)", "coverage (==7.3.1)", "deepspeed (>=0.8.2,<=0.9.3)", "fastapi", "hydra-core (>=1.2.0)", "ipython[all] (<8.15.0)", "jsonargparse[signatures] (>=4.27.7)", "lightning-utilities (>=0.8.0)", "matplotlib (>3.1)", "numpy (>=1.17.2)", "omegaconf (>=2.2.3)", "onnx (>=1.12.0)", "onnxruntime (>=1.12.0)", "pandas (>1.0)", "psutil (<5.9.6)", "pytest (==7.4.0)", "pytest-cov (==4.1.0)", "pytest-random-order (==1.1.0)", "pytest-rerunfailures (==12.0)", "pytest-timeout (==2.1.0)", "requests (<2.32.0)", "rich (>=12.3.0)", "scikit-learn (>0.22.1)", "tensorboard (>=2.9.1)", "tensorboardX (>=2.2)", "torchmetrics (>=0.10.0)", "torchvision (>=0.16.0)", "uvicorn"] +examples = 
["ipython[all] (<8.15.0)", "lightning-utilities (>=0.8.0)", "requests (<2.32.0)", "torchmetrics (>=0.10.0)", "torchvision (>=0.16.0)"] extra = ["bitsandbytes (>=0.42.0)", "hydra-core (>=1.2.0)", "jsonargparse[signatures] (>=4.27.7)", "matplotlib (>3.1)", "omegaconf (>=2.2.3)", "rich (>=12.3.0)", "tensorboardX (>=2.2)"] strategies = ["deepspeed (>=0.8.2,<=0.9.3)"] -test = ["cloudpickle (>=1.3)", "coverage (==7.3.1)", "fastapi", "onnx (>=0.14.0)", "onnxruntime (>=0.15.0)", "pandas (>1.0)", "psutil (<5.9.6)", "pytest (==7.4.0)", "pytest-cov (==4.1.0)", "pytest-random-order (==1.1.0)", "pytest-rerunfailures (==12.0)", "pytest-timeout (==2.1.0)", "scikit-learn (>0.22.1)", "tensorboard (>=2.9.1)", "uvicorn"] +test = ["cloudpickle (>=1.3)", "coverage (==7.3.1)", "fastapi", "numpy (>=1.17.2)", "onnx (>=1.12.0)", "onnxruntime (>=1.12.0)", "pandas (>1.0)", "psutil (<5.9.6)", "pytest (==7.4.0)", "pytest-cov (==4.1.0)", "pytest-random-order (==1.1.0)", "pytest-rerunfailures (==12.0)", "pytest-timeout (==2.1.0)", "scikit-learn (>0.22.1)", "tensorboard (>=2.9.1)", "uvicorn"] [[package]] name = "pytz" @@ -3315,159 +3340,182 @@ files = [ [[package]] name = "pyyaml" -version = "6.0.1" +version = "6.0.2" description = "YAML parser and emitter for Python" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"}, - {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, - {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"}, - {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, - {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, - {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"}, - {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = 
"sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, - {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, - {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"}, - {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"}, - {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"}, - {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"}, - {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"}, - {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"}, - {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"}, - {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"}, - {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"}, - {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, - {file = 
"PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, - {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"}, - {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, - {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, - {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"}, - {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, - {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, - {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, + {file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"}, + {file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"}, + {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237"}, + {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b"}, + {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed"}, + {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180"}, + {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68"}, + {file = "PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99"}, + {file = "PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e"}, + {file = "PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774"}, + {file = "PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee"}, + {file = 
"PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c"}, + {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317"}, + {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85"}, + {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"}, + {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e"}, + {file = "PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5"}, + {file = "PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44"}, + {file = "PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab"}, + {file = "PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725"}, + {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5"}, + {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425"}, + {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476"}, + {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48"}, + {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b"}, + {file = "PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4"}, + {file = "PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8"}, + {file = "PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba"}, + {file = "PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1"}, + {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133"}, + {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484"}, + {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5"}, + {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc"}, + {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652"}, + {file = "PyYAML-6.0.2-cp313-cp313-win32.whl", hash = 
"sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183"}, + {file = "PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563"}, + {file = "PyYAML-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a"}, + {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5"}, + {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d"}, + {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083"}, + {file = "PyYAML-6.0.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706"}, + {file = "PyYAML-6.0.2-cp38-cp38-win32.whl", hash = "sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a"}, + {file = "PyYAML-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff"}, + {file = "PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d"}, + {file = "PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f"}, + {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290"}, + {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12"}, + {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19"}, + {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e"}, + {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725"}, + {file = "PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631"}, + {file = "PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8"}, + {file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"}, ] [[package]] name = "pyzmq" -version = "26.0.3" +version = "26.1.0" description = "Python bindings for 0MQ" optional = false python-versions = ">=3.7" files = [ - {file = "pyzmq-26.0.3-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:44dd6fc3034f1eaa72ece33588867df9e006a7303725a12d64c3dff92330f625"}, - {file = "pyzmq-26.0.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:acb704195a71ac5ea5ecf2811c9ee19ecdc62b91878528302dd0be1b9451cc90"}, - {file = "pyzmq-26.0.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dbb9c997932473a27afa93954bb77a9f9b786b4ccf718d903f35da3232317de"}, - {file = "pyzmq-26.0.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6bcb34f869d431799c3ee7d516554797f7760cb2198ecaa89c3f176f72d062be"}, - {file = 
"pyzmq-26.0.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38ece17ec5f20d7d9b442e5174ae9f020365d01ba7c112205a4d59cf19dc38ee"}, - {file = "pyzmq-26.0.3-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ba6e5e6588e49139a0979d03a7deb9c734bde647b9a8808f26acf9c547cab1bf"}, - {file = "pyzmq-26.0.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:3bf8b000a4e2967e6dfdd8656cd0757d18c7e5ce3d16339e550bd462f4857e59"}, - {file = "pyzmq-26.0.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2136f64fbb86451dbbf70223635a468272dd20075f988a102bf8a3f194a411dc"}, - {file = "pyzmq-26.0.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e8918973fbd34e7814f59143c5f600ecd38b8038161239fd1a3d33d5817a38b8"}, - {file = "pyzmq-26.0.3-cp310-cp310-win32.whl", hash = "sha256:0aaf982e68a7ac284377d051c742610220fd06d330dcd4c4dbb4cdd77c22a537"}, - {file = "pyzmq-26.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:f1a9b7d00fdf60b4039f4455afd031fe85ee8305b019334b72dcf73c567edc47"}, - {file = "pyzmq-26.0.3-cp310-cp310-win_arm64.whl", hash = "sha256:80b12f25d805a919d53efc0a5ad7c0c0326f13b4eae981a5d7b7cc343318ebb7"}, - {file = "pyzmq-26.0.3-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:a72a84570f84c374b4c287183debc776dc319d3e8ce6b6a0041ce2e400de3f32"}, - {file = "pyzmq-26.0.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7ca684ee649b55fd8f378127ac8462fb6c85f251c2fb027eb3c887e8ee347bcd"}, - {file = "pyzmq-26.0.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e222562dc0f38571c8b1ffdae9d7adb866363134299264a1958d077800b193b7"}, - {file = "pyzmq-26.0.3-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f17cde1db0754c35a91ac00b22b25c11da6eec5746431d6e5092f0cd31a3fea9"}, - {file = "pyzmq-26.0.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b7c0c0b3244bb2275abe255d4a30c050d541c6cb18b870975553f1fb6f37527"}, - {file = "pyzmq-26.0.3-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:ac97a21de3712afe6a6c071abfad40a6224fd14fa6ff0ff8d0c6e6cd4e2f807a"}, - {file = "pyzmq-26.0.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:88b88282e55fa39dd556d7fc04160bcf39dea015f78e0cecec8ff4f06c1fc2b5"}, - {file = "pyzmq-26.0.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:72b67f966b57dbd18dcc7efbc1c7fc9f5f983e572db1877081f075004614fcdd"}, - {file = "pyzmq-26.0.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f4b6cecbbf3b7380f3b61de3a7b93cb721125dc125c854c14ddc91225ba52f83"}, - {file = "pyzmq-26.0.3-cp311-cp311-win32.whl", hash = "sha256:eed56b6a39216d31ff8cd2f1d048b5bf1700e4b32a01b14379c3b6dde9ce3aa3"}, - {file = "pyzmq-26.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:3191d312c73e3cfd0f0afdf51df8405aafeb0bad71e7ed8f68b24b63c4f36500"}, - {file = "pyzmq-26.0.3-cp311-cp311-win_arm64.whl", hash = "sha256:b6907da3017ef55139cf0e417c5123a84c7332520e73a6902ff1f79046cd3b94"}, - {file = "pyzmq-26.0.3-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:068ca17214038ae986d68f4a7021f97e187ed278ab6dccb79f837d765a54d753"}, - {file = "pyzmq-26.0.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:7821d44fe07335bea256b9f1f41474a642ca55fa671dfd9f00af8d68a920c2d4"}, - {file = "pyzmq-26.0.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eeb438a26d87c123bb318e5f2b3d86a36060b01f22fbdffd8cf247d52f7c9a2b"}, - {file = "pyzmq-26.0.3-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:69ea9d6d9baa25a4dc9cef5e2b77b8537827b122214f210dd925132e34ae9b12"}, - 
{file = "pyzmq-26.0.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7daa3e1369355766dea11f1d8ef829905c3b9da886ea3152788dc25ee6079e02"}, - {file = "pyzmq-26.0.3-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:6ca7a9a06b52d0e38ccf6bca1aeff7be178917893f3883f37b75589d42c4ac20"}, - {file = "pyzmq-26.0.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1b7d0e124948daa4d9686d421ef5087c0516bc6179fdcf8828b8444f8e461a77"}, - {file = "pyzmq-26.0.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:e746524418b70f38550f2190eeee834db8850088c834d4c8406fbb9bc1ae10b2"}, - {file = "pyzmq-26.0.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:6b3146f9ae6af82c47a5282ac8803523d381b3b21caeae0327ed2f7ecb718798"}, - {file = "pyzmq-26.0.3-cp312-cp312-win32.whl", hash = "sha256:2b291d1230845871c00c8462c50565a9cd6026fe1228e77ca934470bb7d70ea0"}, - {file = "pyzmq-26.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:926838a535c2c1ea21c903f909a9a54e675c2126728c21381a94ddf37c3cbddf"}, - {file = "pyzmq-26.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:5bf6c237f8c681dfb91b17f8435b2735951f0d1fad10cc5dfd96db110243370b"}, - {file = "pyzmq-26.0.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c0991f5a96a8e620f7691e61178cd8f457b49e17b7d9cfa2067e2a0a89fc1d5"}, - {file = "pyzmq-26.0.3-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dbf012d8fcb9f2cf0643b65df3b355fdd74fc0035d70bb5c845e9e30a3a4654b"}, - {file = "pyzmq-26.0.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:01fbfbeb8249a68d257f601deb50c70c929dc2dfe683b754659569e502fbd3aa"}, - {file = "pyzmq-26.0.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c8eb19abe87029c18f226d42b8a2c9efdd139d08f8bf6e085dd9075446db450"}, - {file = "pyzmq-26.0.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:5344b896e79800af86ad643408ca9aa303a017f6ebff8cee5a3163c1e9aec987"}, - {file = "pyzmq-26.0.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:204e0f176fd1d067671157d049466869b3ae1fc51e354708b0dc41cf94e23a3a"}, - {file = "pyzmq-26.0.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a42db008d58530efa3b881eeee4991146de0b790e095f7ae43ba5cc612decbc5"}, - {file = "pyzmq-26.0.3-cp37-cp37m-win32.whl", hash = "sha256:8d7a498671ca87e32b54cb47c82a92b40130a26c5197d392720a1bce1b3c77cf"}, - {file = "pyzmq-26.0.3-cp37-cp37m-win_amd64.whl", hash = "sha256:3b4032a96410bdc760061b14ed6a33613ffb7f702181ba999df5d16fb96ba16a"}, - {file = "pyzmq-26.0.3-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:2cc4e280098c1b192c42a849de8de2c8e0f3a84086a76ec5b07bfee29bda7d18"}, - {file = "pyzmq-26.0.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bde86a2ed3ce587fa2b207424ce15b9a83a9fa14422dcc1c5356a13aed3df9d"}, - {file = "pyzmq-26.0.3-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:34106f68e20e6ff253c9f596ea50397dbd8699828d55e8fa18bd4323d8d966e6"}, - {file = "pyzmq-26.0.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ebbbd0e728af5db9b04e56389e2299a57ea8b9dd15c9759153ee2455b32be6ad"}, - {file = "pyzmq-26.0.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f6b1d1c631e5940cac5a0b22c5379c86e8df6a4ec277c7a856b714021ab6cfad"}, - {file = "pyzmq-26.0.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e891ce81edd463b3b4c3b885c5603c00141151dd9c6936d98a680c8c72fe5c67"}, - {file = "pyzmq-26.0.3-cp38-cp38-musllinux_1_1_i686.whl", hash = 
"sha256:9b273ecfbc590a1b98f014ae41e5cf723932f3b53ba9367cfb676f838038b32c"}, - {file = "pyzmq-26.0.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b32bff85fb02a75ea0b68f21e2412255b5731f3f389ed9aecc13a6752f58ac97"}, - {file = "pyzmq-26.0.3-cp38-cp38-win32.whl", hash = "sha256:f6c21c00478a7bea93caaaef9e7629145d4153b15a8653e8bb4609d4bc70dbfc"}, - {file = "pyzmq-26.0.3-cp38-cp38-win_amd64.whl", hash = "sha256:3401613148d93ef0fd9aabdbddb212de3db7a4475367f49f590c837355343972"}, - {file = "pyzmq-26.0.3-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:2ed8357f4c6e0daa4f3baf31832df8a33334e0fe5b020a61bc8b345a3db7a606"}, - {file = "pyzmq-26.0.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c1c8f2a2ca45292084c75bb6d3a25545cff0ed931ed228d3a1810ae3758f975f"}, - {file = "pyzmq-26.0.3-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:b63731993cdddcc8e087c64e9cf003f909262b359110070183d7f3025d1c56b5"}, - {file = "pyzmq-26.0.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b3cd31f859b662ac5d7f4226ec7d8bd60384fa037fc02aee6ff0b53ba29a3ba8"}, - {file = "pyzmq-26.0.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:115f8359402fa527cf47708d6f8a0f8234f0e9ca0cab7c18c9c189c194dbf620"}, - {file = "pyzmq-26.0.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:715bdf952b9533ba13dfcf1f431a8f49e63cecc31d91d007bc1deb914f47d0e4"}, - {file = "pyzmq-26.0.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:e1258c639e00bf5e8a522fec6c3eaa3e30cf1c23a2f21a586be7e04d50c9acab"}, - {file = "pyzmq-26.0.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:15c59e780be8f30a60816a9adab900c12a58d79c1ac742b4a8df044ab2a6d920"}, - {file = "pyzmq-26.0.3-cp39-cp39-win32.whl", hash = "sha256:d0cdde3c78d8ab5b46595054e5def32a755fc028685add5ddc7403e9f6de9879"}, - {file = "pyzmq-26.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:ce828058d482ef860746bf532822842e0ff484e27f540ef5c813d516dd8896d2"}, - {file = "pyzmq-26.0.3-cp39-cp39-win_arm64.whl", hash = "sha256:788f15721c64109cf720791714dc14afd0f449d63f3a5487724f024345067381"}, - {file = "pyzmq-26.0.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c18645ef6294d99b256806e34653e86236eb266278c8ec8112622b61db255de"}, - {file = "pyzmq-26.0.3-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7e6bc96ebe49604df3ec2c6389cc3876cabe475e6bfc84ced1bf4e630662cb35"}, - {file = "pyzmq-26.0.3-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:971e8990c5cc4ddcff26e149398fc7b0f6a042306e82500f5e8db3b10ce69f84"}, - {file = "pyzmq-26.0.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8416c23161abd94cc7da80c734ad7c9f5dbebdadfdaa77dad78244457448223"}, - {file = "pyzmq-26.0.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:082a2988364b60bb5de809373098361cf1dbb239623e39e46cb18bc035ed9c0c"}, - {file = "pyzmq-26.0.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d57dfbf9737763b3a60d26e6800e02e04284926329aee8fb01049635e957fe81"}, - {file = "pyzmq-26.0.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:77a85dca4c2430ac04dc2a2185c2deb3858a34fe7f403d0a946fa56970cf60a1"}, - {file = "pyzmq-26.0.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4c82a6d952a1d555bf4be42b6532927d2a5686dd3c3e280e5f63225ab47ac1f5"}, - {file = "pyzmq-26.0.3-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:4496b1282c70c442809fc1b151977c3d967bfb33e4e17cedbf226d97de18f709"}, - {file = "pyzmq-26.0.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:e4946d6bdb7ba972dfda282f9127e5756d4f299028b1566d1245fa0d438847e6"}, - {file = "pyzmq-26.0.3-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:03c0ae165e700364b266876d712acb1ac02693acd920afa67da2ebb91a0b3c09"}, - {file = "pyzmq-26.0.3-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:3e3070e680f79887d60feeda051a58d0ac36622e1759f305a41059eff62c6da7"}, - {file = "pyzmq-26.0.3-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6ca08b840fe95d1c2bd9ab92dac5685f949fc6f9ae820ec16193e5ddf603c3b2"}, - {file = "pyzmq-26.0.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e76654e9dbfb835b3518f9938e565c7806976c07b37c33526b574cc1a1050480"}, - {file = "pyzmq-26.0.3-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:871587bdadd1075b112e697173e946a07d722459d20716ceb3d1bd6c64bd08ce"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d0a2d1bd63a4ad79483049b26514e70fa618ce6115220da9efdff63688808b17"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0270b49b6847f0d106d64b5086e9ad5dc8a902413b5dbbb15d12b60f9c1747a4"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:703c60b9910488d3d0954ca585c34f541e506a091a41930e663a098d3b794c67"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74423631b6be371edfbf7eabb02ab995c2563fee60a80a30829176842e71722a"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4adfbb5451196842a88fda3612e2c0414134874bffb1c2ce83ab4242ec9e027d"}, - {file = "pyzmq-26.0.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3516119f4f9b8671083a70b6afaa0a070f5683e431ab3dc26e9215620d7ca1ad"}, - {file = "pyzmq-26.0.3.tar.gz", hash = "sha256:dba7d9f2e047dfa2bca3b01f4f84aa5246725203d6284e3790f2ca15fba6b40a"}, + {file = "pyzmq-26.1.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:263cf1e36862310bf5becfbc488e18d5d698941858860c5a8c079d1511b3b18e"}, + {file = "pyzmq-26.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d5c8b17f6e8f29138678834cf8518049e740385eb2dbf736e8f07fc6587ec682"}, + {file = "pyzmq-26.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:75a95c2358fcfdef3374cb8baf57f1064d73246d55e41683aaffb6cfe6862917"}, + {file = "pyzmq-26.1.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f99de52b8fbdb2a8f5301ae5fc0f9e6b3ba30d1d5fc0421956967edcc6914242"}, + {file = "pyzmq-26.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bcbfbab4e1895d58ab7da1b5ce9a327764f0366911ba5b95406c9104bceacb0"}, + {file = "pyzmq-26.1.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:77ce6a332c7e362cb59b63f5edf730e83590d0ab4e59c2aa5bd79419a42e3449"}, + {file = "pyzmq-26.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ba0a31d00e8616149a5ab440d058ec2da621e05d744914774c4dde6837e1f545"}, + {file = "pyzmq-26.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8b88641384e84a258b740801cd4dbc45c75f148ee674bec3149999adda4a8598"}, + {file = "pyzmq-26.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:2fa76ebcebe555cce90f16246edc3ad83ab65bb7b3d4ce408cf6bc67740c4f88"}, + {file = "pyzmq-26.1.0-cp310-cp310-win32.whl", hash = 
"sha256:fbf558551cf415586e91160d69ca6416f3fce0b86175b64e4293644a7416b81b"}, + {file = "pyzmq-26.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:a7b8aab50e5a288c9724d260feae25eda69582be84e97c012c80e1a5e7e03fb2"}, + {file = "pyzmq-26.1.0-cp310-cp310-win_arm64.whl", hash = "sha256:08f74904cb066e1178c1ec706dfdb5c6c680cd7a8ed9efebeac923d84c1f13b1"}, + {file = "pyzmq-26.1.0-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:46d6800b45015f96b9d92ece229d92f2aef137d82906577d55fadeb9cf5fcb71"}, + {file = "pyzmq-26.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5bc2431167adc50ba42ea3e5e5f5cd70d93e18ab7b2f95e724dd8e1bd2c38120"}, + {file = "pyzmq-26.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b3bb34bebaa1b78e562931a1687ff663d298013f78f972a534f36c523311a84d"}, + {file = "pyzmq-26.1.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bd3f6329340cef1c7ba9611bd038f2d523cea79f09f9c8f6b0553caba59ec562"}, + {file = "pyzmq-26.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:471880c4c14e5a056a96cd224f5e71211997d40b4bf5e9fdded55dafab1f98f2"}, + {file = "pyzmq-26.1.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:ce6f2b66799971cbae5d6547acefa7231458289e0ad481d0be0740535da38d8b"}, + {file = "pyzmq-26.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:0a1f6ea5b1d6cdbb8cfa0536f0d470f12b4b41ad83625012e575f0e3ecfe97f0"}, + {file = "pyzmq-26.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:b45e6445ac95ecb7d728604bae6538f40ccf4449b132b5428c09918523abc96d"}, + {file = "pyzmq-26.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:94c4262626424683feea0f3c34951d39d49d354722db2745c42aa6bb50ecd93b"}, + {file = "pyzmq-26.1.0-cp311-cp311-win32.whl", hash = "sha256:a0f0ab9df66eb34d58205913f4540e2ad17a175b05d81b0b7197bc57d000e829"}, + {file = "pyzmq-26.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:8efb782f5a6c450589dbab4cb0f66f3a9026286333fe8f3a084399149af52f29"}, + {file = "pyzmq-26.1.0-cp311-cp311-win_arm64.whl", hash = "sha256:f133d05aaf623519f45e16ab77526e1e70d4e1308e084c2fb4cedb1a0c764bbb"}, + {file = "pyzmq-26.1.0-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:3d3146b1c3dcc8a1539e7cc094700b2be1e605a76f7c8f0979b6d3bde5ad4072"}, + {file = "pyzmq-26.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d9270fbf038bf34ffca4855bcda6e082e2c7f906b9eb8d9a8ce82691166060f7"}, + {file = "pyzmq-26.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:995301f6740a421afc863a713fe62c0aaf564708d4aa057dfdf0f0f56525294b"}, + {file = "pyzmq-26.1.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e7eca8b89e56fb8c6c26dd3e09bd41b24789022acf1cf13358e96f1cafd8cae3"}, + {file = "pyzmq-26.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d4feb2e83dfe9ace6374a847e98ee9d1246ebadcc0cb765482e272c34e5820"}, + {file = "pyzmq-26.1.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d4fafc2eb5d83f4647331267808c7e0c5722c25a729a614dc2b90479cafa78bd"}, + {file = "pyzmq-26.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:58c33dc0e185dd97a9ac0288b3188d1be12b756eda67490e6ed6a75cf9491d79"}, + {file = "pyzmq-26.1.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:68a0a1d83d33d8367ddddb3e6bb4afbb0f92bd1dac2c72cd5e5ddc86bdafd3eb"}, + {file = "pyzmq-26.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2ae7c57e22ad881af78075e0cea10a4c778e67234adc65c404391b417a4dda83"}, + {file = "pyzmq-26.1.0-cp312-cp312-win32.whl", 
hash = "sha256:347e84fc88cc4cb646597f6d3a7ea0998f887ee8dc31c08587e9c3fd7b5ccef3"}, + {file = "pyzmq-26.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:9f136a6e964830230912f75b5a116a21fe8e34128dcfd82285aa0ef07cb2c7bd"}, + {file = "pyzmq-26.1.0-cp312-cp312-win_arm64.whl", hash = "sha256:a4b7a989c8f5a72ab1b2bbfa58105578753ae77b71ba33e7383a31ff75a504c4"}, + {file = "pyzmq-26.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:d416f2088ac8f12daacffbc2e8918ef4d6be8568e9d7155c83b7cebed49d2322"}, + {file = "pyzmq-26.1.0-cp313-cp313-macosx_10_15_universal2.whl", hash = "sha256:ecb6c88d7946166d783a635efc89f9a1ff11c33d680a20df9657b6902a1d133b"}, + {file = "pyzmq-26.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:471312a7375571857a089342beccc1a63584315188560c7c0da7e0a23afd8a5c"}, + {file = "pyzmq-26.1.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e6cea102ffa16b737d11932c426f1dc14b5938cf7bc12e17269559c458ac334"}, + {file = "pyzmq-26.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec7248673ffc7104b54e4957cee38b2f3075a13442348c8d651777bf41aa45ee"}, + {file = "pyzmq-26.1.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:0614aed6f87d550b5cecb03d795f4ddbb1544b78d02a4bd5eecf644ec98a39f6"}, + {file = "pyzmq-26.1.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:e8746ce968be22a8a1801bf4a23e565f9687088580c3ed07af5846580dd97f76"}, + {file = "pyzmq-26.1.0-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:7688653574392d2eaeef75ddcd0b2de5b232d8730af29af56c5adf1df9ef8d6f"}, + {file = "pyzmq-26.1.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:8d4dac7d97f15c653a5fedcafa82626bd6cee1450ccdaf84ffed7ea14f2b07a4"}, + {file = "pyzmq-26.1.0-cp313-cp313-win32.whl", hash = "sha256:ccb42ca0a4a46232d716779421bbebbcad23c08d37c980f02cc3a6bd115ad277"}, + {file = "pyzmq-26.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:e1e5d0a25aea8b691a00d6b54b28ac514c8cc0d8646d05f7ca6cb64b97358250"}, + {file = "pyzmq-26.1.0-cp313-cp313-win_arm64.whl", hash = "sha256:fc82269d24860cfa859b676d18850cbb8e312dcd7eada09e7d5b007e2f3d9eb1"}, + {file = "pyzmq-26.1.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:416ac51cabd54f587995c2b05421324700b22e98d3d0aa2cfaec985524d16f1d"}, + {file = "pyzmq-26.1.0-cp313-cp313t-macosx_10_15_universal2.whl", hash = "sha256:ff832cce719edd11266ca32bc74a626b814fff236824aa1aeaad399b69fe6eae"}, + {file = "pyzmq-26.1.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:393daac1bcf81b2a23e696b7b638eedc965e9e3d2112961a072b6cd8179ad2eb"}, + {file = "pyzmq-26.1.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9869fa984c8670c8ab899a719eb7b516860a29bc26300a84d24d8c1b71eae3ec"}, + {file = "pyzmq-26.1.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b3b8e36fd4c32c0825b4461372949ecd1585d326802b1321f8b6dc1d7e9318c"}, + {file = "pyzmq-26.1.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:3ee647d84b83509b7271457bb428cc347037f437ead4b0b6e43b5eba35fec0aa"}, + {file = "pyzmq-26.1.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:45cb1a70eb00405ce3893041099655265fabcd9c4e1e50c330026e82257892c1"}, + {file = "pyzmq-26.1.0-cp313-cp313t-musllinux_1_1_i686.whl", hash = "sha256:5cca7b4adb86d7470e0fc96037771981d740f0b4cb99776d5cb59cd0e6684a73"}, + {file = "pyzmq-26.1.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:91d1a20bdaf3b25f3173ff44e54b1cfbc05f94c9e8133314eb2962a89e05d6e3"}, + {file = 
"pyzmq-26.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c0665d85535192098420428c779361b8823d3d7ec4848c6af3abb93bc5c915bf"}, + {file = "pyzmq-26.1.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:96d7c1d35ee4a495df56c50c83df7af1c9688cce2e9e0edffdbf50889c167595"}, + {file = "pyzmq-26.1.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b281b5ff5fcc9dcbfe941ac5c7fcd4b6c065adad12d850f95c9d6f23c2652384"}, + {file = "pyzmq-26.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5384c527a9a004445c5074f1e20db83086c8ff1682a626676229aafd9cf9f7d1"}, + {file = "pyzmq-26.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:754c99a9840839375ee251b38ac5964c0f369306eddb56804a073b6efdc0cd88"}, + {file = "pyzmq-26.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9bdfcb74b469b592972ed881bad57d22e2c0acc89f5e8c146782d0d90fb9f4bf"}, + {file = "pyzmq-26.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:bd13f0231f4788db619347b971ca5f319c5b7ebee151afc7c14632068c6261d3"}, + {file = "pyzmq-26.1.0-cp37-cp37m-win32.whl", hash = "sha256:c5668dac86a869349828db5fc928ee3f58d450dce2c85607067d581f745e4fb1"}, + {file = "pyzmq-26.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ad875277844cfaeca7fe299ddf8c8d8bfe271c3dc1caf14d454faa5cdbf2fa7a"}, + {file = "pyzmq-26.1.0-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:65c6e03cc0222eaf6aad57ff4ecc0a070451e23232bb48db4322cc45602cede0"}, + {file = "pyzmq-26.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:038ae4ffb63e3991f386e7fda85a9baab7d6617fe85b74a8f9cab190d73adb2b"}, + {file = "pyzmq-26.1.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:bdeb2c61611293f64ac1073f4bf6723b67d291905308a7de9bb2ca87464e3273"}, + {file = "pyzmq-26.1.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:61dfa5ee9d7df297c859ac82b1226d8fefaf9c5113dc25c2c00ecad6feeeb04f"}, + {file = "pyzmq-26.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3292d384537b9918010769b82ab3e79fca8b23d74f56fc69a679106a3e2c2cf"}, + {file = "pyzmq-26.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f9499c70c19ff0fbe1007043acb5ad15c1dec7d8e84ab429bca8c87138e8f85c"}, + {file = "pyzmq-26.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:d3dd5523ed258ad58fed7e364c92a9360d1af8a9371e0822bd0146bdf017ef4c"}, + {file = "pyzmq-26.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:baba2fd199b098c5544ef2536b2499d2e2155392973ad32687024bd8572a7d1c"}, + {file = "pyzmq-26.1.0-cp38-cp38-win32.whl", hash = "sha256:ddbb2b386128d8eca92bd9ca74e80f73fe263bcca7aa419f5b4cbc1661e19741"}, + {file = "pyzmq-26.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:79e45a4096ec8388cdeb04a9fa5e9371583bcb826964d55b8b66cbffe7b33c86"}, + {file = "pyzmq-26.1.0-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:add52c78a12196bc0fda2de087ba6c876ea677cbda2e3eba63546b26e8bf177b"}, + {file = "pyzmq-26.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:98c03bd7f3339ff47de7ea9ac94a2b34580a8d4df69b50128bb6669e1191a895"}, + {file = "pyzmq-26.1.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dcc37d9d708784726fafc9c5e1232de655a009dbf97946f117aefa38d5985a0f"}, + {file = "pyzmq-26.1.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5a6ed52f0b9bf8dcc64cc82cce0607a3dfed1dbb7e8c6f282adfccc7be9781de"}, + {file = "pyzmq-26.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:451e16ae8bea3d95649317b463c9f95cd9022641ec884e3d63fc67841ae86dfe"}, + {file = "pyzmq-26.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:906e532c814e1d579138177a00ae835cd6becbf104d45ed9093a3aaf658f6a6a"}, + {file = "pyzmq-26.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:05bacc4f94af468cc82808ae3293390278d5f3375bb20fef21e2034bb9a505b6"}, + {file = "pyzmq-26.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:57bb2acba798dc3740e913ffadd56b1fcef96f111e66f09e2a8db3050f1f12c8"}, + {file = "pyzmq-26.1.0-cp39-cp39-win32.whl", hash = "sha256:f774841bb0e8588505002962c02da420bcfb4c5056e87a139c6e45e745c0e2e2"}, + {file = "pyzmq-26.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:359c533bedc62c56415a1f5fcfd8279bc93453afdb0803307375ecf81c962402"}, + {file = "pyzmq-26.1.0-cp39-cp39-win_arm64.whl", hash = "sha256:7907419d150b19962138ecec81a17d4892ea440c184949dc29b358bc730caf69"}, + {file = "pyzmq-26.1.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:b24079a14c9596846bf7516fe75d1e2188d4a528364494859106a33d8b48be38"}, + {file = "pyzmq-26.1.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59d0acd2976e1064f1b398a00e2c3e77ed0a157529779e23087d4c2fb8aaa416"}, + {file = "pyzmq-26.1.0-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:911c43a4117915203c4cc8755e0f888e16c4676a82f61caee2f21b0c00e5b894"}, + {file = "pyzmq-26.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b10163e586cc609f5f85c9b233195554d77b1e9a0801388907441aaeb22841c5"}, + {file = "pyzmq-26.1.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:28a8b2abb76042f5fd7bd720f7fea48c0fd3e82e9de0a1bf2c0de3812ce44a42"}, + {file = "pyzmq-26.1.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:bef24d3e4ae2c985034439f449e3f9e06bf579974ce0e53d8a507a1577d5b2ab"}, + {file = "pyzmq-26.1.0-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2cd0f4d314f4a2518e8970b6f299ae18cff7c44d4a1fc06fc713f791c3a9e3ea"}, + {file = "pyzmq-26.1.0-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:fa25a620eed2a419acc2cf10135b995f8f0ce78ad00534d729aa761e4adcef8a"}, + {file = "pyzmq-26.1.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef3b048822dca6d231d8a8ba21069844ae38f5d83889b9b690bf17d2acc7d099"}, + {file = "pyzmq-26.1.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:9a6847c92d9851b59b9f33f968c68e9e441f9a0f8fc972c5580c5cd7cbc6ee24"}, + {file = "pyzmq-26.1.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c9b9305004d7e4e6a824f4f19b6d8f32b3578aad6f19fc1122aaf320cbe3dc83"}, + {file = "pyzmq-26.1.0-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:63c1d3a65acb2f9c92dce03c4e1758cc552f1ae5c78d79a44e3bb88d2fa71f3a"}, + {file = "pyzmq-26.1.0-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d36b8fffe8b248a1b961c86fbdfa0129dfce878731d169ede7fa2631447331be"}, + {file = "pyzmq-26.1.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67976d12ebfd61a3bc7d77b71a9589b4d61d0422282596cf58c62c3866916544"}, + {file = "pyzmq-26.1.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:998444debc8816b5d8d15f966e42751032d0f4c55300c48cc337f2b3e4f17d03"}, + {file = "pyzmq-26.1.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:e5c88b2f13bcf55fee78ea83567b9fe079ba1a4bef8b35c376043440040f7edb"}, + {file = 
"pyzmq-26.1.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8d906d43e1592be4b25a587b7d96527cb67277542a5611e8ea9e996182fae410"}, + {file = "pyzmq-26.1.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:80b0c9942430d731c786545da6be96d824a41a51742e3e374fedd9018ea43106"}, + {file = "pyzmq-26.1.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:314d11564c00b77f6224d12eb3ddebe926c301e86b648a1835c5b28176c83eab"}, + {file = "pyzmq-26.1.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:093a1a3cae2496233f14b57f4b485da01b4ff764582c854c0f42c6dd2be37f3d"}, + {file = "pyzmq-26.1.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3c397b1b450f749a7e974d74c06d69bd22dd362142f370ef2bd32a684d6b480c"}, + {file = "pyzmq-26.1.0.tar.gz", hash = "sha256:6c5aeea71f018ebd3b9115c7cb13863dd850e98ca6b9258509de1246461a7e7f"}, ] [package.dependencies] @@ -3576,110 +3624,114 @@ jupyter = ["ipywidgets (>=7.5.1,<9)"] [[package]] name = "rpds-py" -version = "0.18.1" +version = "0.20.0" description = "Python bindings to Rust's persistent data structures (rpds)" optional = false python-versions = ">=3.8" files = [ - {file = "rpds_py-0.18.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:d31dea506d718693b6b2cffc0648a8929bdc51c70a311b2770f09611caa10d53"}, - {file = "rpds_py-0.18.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:732672fbc449bab754e0b15356c077cc31566df874964d4801ab14f71951ea80"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a98a1f0552b5f227a3d6422dbd61bc6f30db170939bd87ed14f3c339aa6c7c9"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7f1944ce16401aad1e3f7d312247b3d5de7981f634dc9dfe90da72b87d37887d"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:38e14fb4e370885c4ecd734f093a2225ee52dc384b86fa55fe3f74638b2cfb09"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08d74b184f9ab6289b87b19fe6a6d1a97fbfea84b8a3e745e87a5de3029bf944"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d70129cef4a8d979caa37e7fe957202e7eee8ea02c5e16455bc9808a59c6b2f0"}, - {file = "rpds_py-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ce0bb20e3a11bd04461324a6a798af34d503f8d6f1aa3d2aa8901ceaf039176d"}, - {file = "rpds_py-0.18.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:81c5196a790032e0fc2464c0b4ab95f8610f96f1f2fa3d4deacce6a79852da60"}, - {file = "rpds_py-0.18.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f3027be483868c99b4985fda802a57a67fdf30c5d9a50338d9db646d590198da"}, - {file = "rpds_py-0.18.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:d44607f98caa2961bab4fa3c4309724b185b464cdc3ba6f3d7340bac3ec97cc1"}, - {file = "rpds_py-0.18.1-cp310-none-win32.whl", hash = "sha256:c273e795e7a0f1fddd46e1e3cb8be15634c29ae8ff31c196debb620e1edb9333"}, - {file = "rpds_py-0.18.1-cp310-none-win_amd64.whl", hash = "sha256:8352f48d511de5f973e4f2f9412736d7dea76c69faa6d36bcf885b50c758ab9a"}, - {file = "rpds_py-0.18.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:6b5ff7e1d63a8281654b5e2896d7f08799378e594f09cf3674e832ecaf396ce8"}, - {file = "rpds_py-0.18.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8927638a4d4137a289e41d0fd631551e89fa346d6dbcfc31ad627557d03ceb6d"}, - {file = 
"rpds_py-0.18.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:154bf5c93d79558b44e5b50cc354aa0459e518e83677791e6adb0b039b7aa6a7"}, - {file = "rpds_py-0.18.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:07f2139741e5deb2c5154a7b9629bc5aa48c766b643c1a6750d16f865a82c5fc"}, - {file = "rpds_py-0.18.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8c7672e9fba7425f79019db9945b16e308ed8bc89348c23d955c8c0540da0a07"}, - {file = "rpds_py-0.18.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:489bdfe1abd0406eba6b3bb4fdc87c7fa40f1031de073d0cfb744634cc8fa261"}, - {file = "rpds_py-0.18.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3c20f05e8e3d4fc76875fc9cb8cf24b90a63f5a1b4c5b9273f0e8225e169b100"}, - {file = "rpds_py-0.18.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:967342e045564cef76dfcf1edb700b1e20838d83b1aa02ab313e6a497cf923b8"}, - {file = "rpds_py-0.18.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:2cc7c1a47f3a63282ab0f422d90ddac4aa3034e39fc66a559ab93041e6505da7"}, - {file = "rpds_py-0.18.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:f7afbfee1157e0f9376c00bb232e80a60e59ed716e3211a80cb8506550671e6e"}, - {file = "rpds_py-0.18.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9e6934d70dc50f9f8ea47081ceafdec09245fd9f6032669c3b45705dea096b88"}, - {file = "rpds_py-0.18.1-cp311-none-win32.whl", hash = "sha256:c69882964516dc143083d3795cb508e806b09fc3800fd0d4cddc1df6c36e76bb"}, - {file = "rpds_py-0.18.1-cp311-none-win_amd64.whl", hash = "sha256:70a838f7754483bcdc830444952fd89645569e7452e3226de4a613a4c1793fb2"}, - {file = "rpds_py-0.18.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:3dd3cd86e1db5aadd334e011eba4e29d37a104b403e8ca24dcd6703c68ca55b3"}, - {file = "rpds_py-0.18.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:05f3d615099bd9b13ecf2fc9cf2d839ad3f20239c678f461c753e93755d629ee"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35b2b771b13eee8729a5049c976197ff58a27a3829c018a04341bcf1ae409b2b"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ee17cd26b97d537af8f33635ef38be873073d516fd425e80559f4585a7b90c43"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b646bf655b135ccf4522ed43d6902af37d3f5dbcf0da66c769a2b3938b9d8184"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:19ba472b9606c36716062c023afa2484d1e4220548751bda14f725a7de17b4f6"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e30ac5e329098903262dc5bdd7e2086e0256aa762cc8b744f9e7bf2a427d3f8"}, - {file = "rpds_py-0.18.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d58ad6317d188c43750cb76e9deacf6051d0f884d87dc6518e0280438648a9ac"}, - {file = "rpds_py-0.18.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e1735502458621921cee039c47318cb90b51d532c2766593be6207eec53e5c4c"}, - {file = "rpds_py-0.18.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:f5bab211605d91db0e2995a17b5c6ee5edec1270e46223e513eaa20da20076ac"}, - {file = "rpds_py-0.18.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2fc24a329a717f9e2448f8cd1f960f9dac4e45b6224d60734edeb67499bab03a"}, - {file = "rpds_py-0.18.1-cp312-none-win32.whl", hash = 
"sha256:1805d5901779662d599d0e2e4159d8a82c0b05faa86ef9222bf974572286b2b6"}, - {file = "rpds_py-0.18.1-cp312-none-win_amd64.whl", hash = "sha256:720edcb916df872d80f80a1cc5ea9058300b97721efda8651efcd938a9c70a72"}, - {file = "rpds_py-0.18.1-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:c827576e2fa017a081346dce87d532a5310241648eb3700af9a571a6e9fc7e74"}, - {file = "rpds_py-0.18.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:aa3679e751408d75a0b4d8d26d6647b6d9326f5e35c00a7ccd82b78ef64f65f8"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0abeee75434e2ee2d142d650d1e54ac1f8b01e6e6abdde8ffd6eeac6e9c38e20"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed402d6153c5d519a0faf1bb69898e97fb31613b49da27a84a13935ea9164dfc"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:338dee44b0cef8b70fd2ef54b4e09bb1b97fc6c3a58fea5db6cc083fd9fc2724"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7750569d9526199c5b97e5a9f8d96a13300950d910cf04a861d96f4273d5b104"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:607345bd5912aacc0c5a63d45a1f73fef29e697884f7e861094e443187c02be5"}, - {file = "rpds_py-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:207c82978115baa1fd8d706d720b4a4d2b0913df1c78c85ba73fe6c5804505f0"}, - {file = "rpds_py-0.18.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:6d1e42d2735d437e7e80bab4d78eb2e459af48c0a46e686ea35f690b93db792d"}, - {file = "rpds_py-0.18.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:5463c47c08630007dc0fe99fb480ea4f34a89712410592380425a9b4e1611d8e"}, - {file = "rpds_py-0.18.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:06d218939e1bf2ca50e6b0ec700ffe755e5216a8230ab3e87c059ebb4ea06afc"}, - {file = "rpds_py-0.18.1-cp38-none-win32.whl", hash = "sha256:312fe69b4fe1ffbe76520a7676b1e5ac06ddf7826d764cc10265c3b53f96dbe9"}, - {file = "rpds_py-0.18.1-cp38-none-win_amd64.whl", hash = "sha256:9437ca26784120a279f3137ee080b0e717012c42921eb07861b412340f85bae2"}, - {file = "rpds_py-0.18.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:19e515b78c3fc1039dd7da0a33c28c3154458f947f4dc198d3c72db2b6b5dc93"}, - {file = "rpds_py-0.18.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7b28c5b066bca9a4eb4e2f2663012debe680f097979d880657f00e1c30875a0"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:673fdbbf668dd958eff750e500495ef3f611e2ecc209464f661bc82e9838991e"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d960de62227635d2e61068f42a6cb6aae91a7fe00fca0e3aeed17667c8a34611"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:352a88dc7892f1da66b6027af06a2e7e5d53fe05924cc2cfc56495b586a10b72"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4e0ee01ad8260184db21468a6e1c37afa0529acc12c3a697ee498d3c2c4dcaf3"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4c39ad2f512b4041343ea3c7894339e4ca7839ac38ca83d68a832fc8b3748ab"}, - {file = "rpds_py-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:aaa71ee43a703c321906813bb252f69524f02aa05bf4eec85f0c41d5d62d0f4c"}, - {file = 
"rpds_py-0.18.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6cd8098517c64a85e790657e7b1e509b9fe07487fd358e19431cb120f7d96338"}, - {file = "rpds_py-0.18.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:4adec039b8e2928983f885c53b7cc4cda8965b62b6596501a0308d2703f8af1b"}, - {file = "rpds_py-0.18.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:32b7daaa3e9389db3695964ce8e566e3413b0c43e3394c05e4b243a4cd7bef26"}, - {file = "rpds_py-0.18.1-cp39-none-win32.whl", hash = "sha256:2625f03b105328729f9450c8badda34d5243231eef6535f80064d57035738360"}, - {file = "rpds_py-0.18.1-cp39-none-win_amd64.whl", hash = "sha256:bf18932d0003c8c4d51a39f244231986ab23ee057d235a12b2684ea26a353590"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:cbfbea39ba64f5e53ae2915de36f130588bba71245b418060ec3330ebf85678e"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:a3d456ff2a6a4d2adcdf3c1c960a36f4fd2fec6e3b4902a42a384d17cf4e7a65"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7700936ef9d006b7ef605dc53aa364da2de5a3aa65516a1f3ce73bf82ecfc7ae"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:51584acc5916212e1bf45edd17f3a6b05fe0cbb40482d25e619f824dccb679de"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:942695a206a58d2575033ff1e42b12b2aece98d6003c6bc739fbf33d1773b12f"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b906b5f58892813e5ba5c6056d6a5ad08f358ba49f046d910ad992196ea61397"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6f8e3fecca256fefc91bb6765a693d96692459d7d4c644660a9fff32e517843"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7732770412bab81c5a9f6d20aeb60ae943a9b36dcd990d876a773526468e7163"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:bd1105b50ede37461c1d51b9698c4f4be6e13e69a908ab7751e3807985fc0346"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:618916f5535784960f3ecf8111581f4ad31d347c3de66d02e728de460a46303c"}, - {file = "rpds_py-0.18.1-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:17c6d2155e2423f7e79e3bb18151c686d40db42d8645e7977442170c360194d4"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:6c4c4c3f878df21faf5fac86eda32671c27889e13570645a9eea0a1abdd50922"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:fab6ce90574645a0d6c58890e9bcaac8d94dff54fb51c69e5522a7358b80ab64"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531796fb842b53f2695e94dc338929e9f9dbf473b64710c28af5a160b2a8927d"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:740884bc62a5e2bbb31e584f5d23b32320fd75d79f916f15a788d527a5e83644"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:998125738de0158f088aef3cb264a34251908dd2e5d9966774fdab7402edfab7"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e2be6e9dd4111d5b31ba3b74d17da54a8319d8168890fbaea4b9e5c3de630ae5"}, - {file = 
"rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0cee71bc618cd93716f3c1bf56653740d2d13ddbd47673efa8bf41435a60daa"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2c3caec4ec5cd1d18e5dd6ae5194d24ed12785212a90b37f5f7f06b8bedd7139"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:27bba383e8c5231cd559affe169ca0b96ec78d39909ffd817f28b166d7ddd4d8"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-musllinux_1_2_i686.whl", hash = "sha256:a888e8bdb45916234b99da2d859566f1e8a1d2275a801bb8e4a9644e3c7e7909"}, - {file = "rpds_py-0.18.1-pp38-pypy38_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:6031b25fb1b06327b43d841f33842b383beba399884f8228a6bb3df3088485ff"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:48c2faaa8adfacefcbfdb5f2e2e7bdad081e5ace8d182e5f4ade971f128e6bb3"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:d85164315bd68c0806768dc6bb0429c6f95c354f87485ee3593c4f6b14def2bd"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6afd80f6c79893cfc0574956f78a0add8c76e3696f2d6a15bca2c66c415cf2d4"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fa242ac1ff583e4ec7771141606aafc92b361cd90a05c30d93e343a0c2d82a89"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d21be4770ff4e08698e1e8e0bce06edb6ea0626e7c8f560bc08222880aca6a6f"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5c45a639e93a0c5d4b788b2613bd637468edd62f8f95ebc6fcc303d58ab3f0a8"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:910e71711d1055b2768181efa0a17537b2622afeb0424116619817007f8a2b10"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b9bb1f182a97880f6078283b3505a707057c42bf55d8fca604f70dedfdc0772a"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:1d54f74f40b1f7aaa595a02ff42ef38ca654b1469bef7d52867da474243cc633"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:8d2e182c9ee01135e11e9676e9a62dfad791a7a467738f06726872374a83db49"}, - {file = "rpds_py-0.18.1-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:636a15acc588f70fda1661234761f9ed9ad79ebed3f2125d44be0862708b666e"}, - {file = "rpds_py-0.18.1.tar.gz", hash = "sha256:dc48b479d540770c811fbd1eb9ba2bb66951863e448efec2e2c102625328e92f"}, + {file = "rpds_py-0.20.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3ad0fda1635f8439cde85c700f964b23ed5fc2d28016b32b9ee5fe30da5c84e2"}, + {file = "rpds_py-0.20.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9bb4a0d90fdb03437c109a17eade42dfbf6190408f29b2744114d11586611d6f"}, + {file = "rpds_py-0.20.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c6377e647bbfd0a0b159fe557f2c6c602c159fc752fa316572f012fc0bf67150"}, + {file = "rpds_py-0.20.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb851b7df9dda52dc1415ebee12362047ce771fc36914586b2e9fcbd7d293b3e"}, + {file = "rpds_py-0.20.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1e0f80b739e5a8f54837be5d5c924483996b603d5502bfff79bf33da06164ee2"}, + {file = 
"rpds_py-0.20.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5a8c94dad2e45324fc74dce25e1645d4d14df9a4e54a30fa0ae8bad9a63928e3"}, + {file = "rpds_py-0.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f8e604fe73ba048c06085beaf51147eaec7df856824bfe7b98657cf436623daf"}, + {file = "rpds_py-0.20.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:df3de6b7726b52966edf29663e57306b23ef775faf0ac01a3e9f4012a24a4140"}, + {file = "rpds_py-0.20.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf258ede5bc22a45c8e726b29835b9303c285ab46fc7c3a4cc770736b5304c9f"}, + {file = "rpds_py-0.20.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:55fea87029cded5df854ca7e192ec7bdb7ecd1d9a3f63d5c4eb09148acf4a7ce"}, + {file = "rpds_py-0.20.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ae94bd0b2f02c28e199e9bc51485d0c5601f58780636185660f86bf80c89af94"}, + {file = "rpds_py-0.20.0-cp310-none-win32.whl", hash = "sha256:28527c685f237c05445efec62426d285e47a58fb05ba0090a4340b73ecda6dee"}, + {file = "rpds_py-0.20.0-cp310-none-win_amd64.whl", hash = "sha256:238a2d5b1cad28cdc6ed15faf93a998336eb041c4e440dd7f902528b8891b399"}, + {file = "rpds_py-0.20.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:ac2f4f7a98934c2ed6505aead07b979e6f999389f16b714448fb39bbaa86a489"}, + {file = "rpds_py-0.20.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:220002c1b846db9afd83371d08d239fdc865e8f8c5795bbaec20916a76db3318"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8d7919548df3f25374a1f5d01fbcd38dacab338ef5f33e044744b5c36729c8db"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:758406267907b3781beee0f0edfe4a179fbd97c0be2e9b1154d7f0a1279cf8e5"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3d61339e9f84a3f0767b1995adfb171a0d00a1185192718a17af6e124728e0f5"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1259c7b3705ac0a0bd38197565a5d603218591d3f6cee6e614e380b6ba61c6f6"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c1dc0f53856b9cc9a0ccca0a7cc61d3d20a7088201c0937f3f4048c1718a209"}, + {file = "rpds_py-0.20.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7e60cb630f674a31f0368ed32b2a6b4331b8350d67de53c0359992444b116dd3"}, + {file = "rpds_py-0.20.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:dbe982f38565bb50cb7fb061ebf762c2f254ca3d8c20d4006878766e84266272"}, + {file = "rpds_py-0.20.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:514b3293b64187172bc77c8fb0cdae26981618021053b30d8371c3a902d4d5ad"}, + {file = "rpds_py-0.20.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d0a26ffe9d4dd35e4dfdd1e71f46401cff0181c75ac174711ccff0459135fa58"}, + {file = "rpds_py-0.20.0-cp311-none-win32.whl", hash = "sha256:89c19a494bf3ad08c1da49445cc5d13d8fefc265f48ee7e7556839acdacf69d0"}, + {file = "rpds_py-0.20.0-cp311-none-win_amd64.whl", hash = "sha256:c638144ce971df84650d3ed0096e2ae7af8e62ecbbb7b201c8935c370df00a2c"}, + {file = "rpds_py-0.20.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a84ab91cbe7aab97f7446652d0ed37d35b68a465aeef8fc41932a9d7eee2c1a6"}, + {file = "rpds_py-0.20.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:56e27147a5a4c2c21633ff8475d185734c0e4befd1c989b5b95a5d0db699b21b"}, + {file = 
"rpds_py-0.20.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2580b0c34583b85efec8c5c5ec9edf2dfe817330cc882ee972ae650e7b5ef739"}, + {file = "rpds_py-0.20.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b80d4a7900cf6b66bb9cee5c352b2d708e29e5a37fe9bf784fa97fc11504bf6c"}, + {file = "rpds_py-0.20.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:50eccbf054e62a7b2209b28dc7a22d6254860209d6753e6b78cfaeb0075d7bee"}, + {file = "rpds_py-0.20.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:49a8063ea4296b3a7e81a5dfb8f7b2d73f0b1c20c2af401fb0cdf22e14711a96"}, + {file = "rpds_py-0.20.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ea438162a9fcbee3ecf36c23e6c68237479f89f962f82dae83dc15feeceb37e4"}, + {file = "rpds_py-0.20.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:18d7585c463087bddcfa74c2ba267339f14f2515158ac4db30b1f9cbdb62c8ef"}, + {file = "rpds_py-0.20.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d4c7d1a051eeb39f5c9547e82ea27cbcc28338482242e3e0b7768033cb083821"}, + {file = "rpds_py-0.20.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:e4df1e3b3bec320790f699890d41c59d250f6beda159ea3c44c3f5bac1976940"}, + {file = "rpds_py-0.20.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2cf126d33a91ee6eedc7f3197b53e87a2acdac63602c0f03a02dd69e4b138174"}, + {file = "rpds_py-0.20.0-cp312-none-win32.whl", hash = "sha256:8bc7690f7caee50b04a79bf017a8d020c1f48c2a1077ffe172abec59870f1139"}, + {file = "rpds_py-0.20.0-cp312-none-win_amd64.whl", hash = "sha256:0e13e6952ef264c40587d510ad676a988df19adea20444c2b295e536457bc585"}, + {file = "rpds_py-0.20.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:aa9a0521aeca7d4941499a73ad7d4f8ffa3d1affc50b9ea11d992cd7eff18a29"}, + {file = "rpds_py-0.20.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4a1f1d51eccb7e6c32ae89243cb352389228ea62f89cd80823ea7dd1b98e0b91"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a86a9b96070674fc88b6f9f71a97d2c1d3e5165574615d1f9168ecba4cecb24"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6c8ef2ebf76df43f5750b46851ed1cdf8f109d7787ca40035fe19fbdc1acc5a7"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b74b25f024b421d5859d156750ea9a65651793d51b76a2e9238c05c9d5f203a9"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:57eb94a8c16ab08fef6404301c38318e2c5a32216bf5de453e2714c964c125c8"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1940dae14e715e2e02dfd5b0f64a52e8374a517a1e531ad9412319dc3ac7879"}, + {file = "rpds_py-0.20.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d20277fd62e1b992a50c43f13fbe13277a31f8c9f70d59759c88f644d66c619f"}, + {file = "rpds_py-0.20.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:06db23d43f26478303e954c34c75182356ca9aa7797d22c5345b16871ab9c45c"}, + {file = "rpds_py-0.20.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b2a5db5397d82fa847e4c624b0c98fe59d2d9b7cf0ce6de09e4d2e80f8f5b3f2"}, + {file = "rpds_py-0.20.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5a35df9f5548fd79cb2f52d27182108c3e6641a4feb0f39067911bf2adaa3e57"}, + {file = "rpds_py-0.20.0-cp313-none-win32.whl", hash = 
"sha256:fd2d84f40633bc475ef2d5490b9c19543fbf18596dcb1b291e3a12ea5d722f7a"}, + {file = "rpds_py-0.20.0-cp313-none-win_amd64.whl", hash = "sha256:9bc2d153989e3216b0559251b0c260cfd168ec78b1fac33dd485750a228db5a2"}, + {file = "rpds_py-0.20.0-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:f2fbf7db2012d4876fb0d66b5b9ba6591197b0f165db8d99371d976546472a24"}, + {file = "rpds_py-0.20.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1e5f3cd7397c8f86c8cc72d5a791071431c108edd79872cdd96e00abd8497d29"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce9845054c13696f7af7f2b353e6b4f676dab1b4b215d7fe5e05c6f8bb06f965"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c3e130fd0ec56cb76eb49ef52faead8ff09d13f4527e9b0c400307ff72b408e1"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4b16aa0107ecb512b568244ef461f27697164d9a68d8b35090e9b0c1c8b27752"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:aa7f429242aae2947246587d2964fad750b79e8c233a2367f71b554e9447949c"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af0fc424a5842a11e28956e69395fbbeab2c97c42253169d87e90aac2886d751"}, + {file = "rpds_py-0.20.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b8c00a3b1e70c1d3891f0db1b05292747f0dbcfb49c43f9244d04c70fbc40eb8"}, + {file = "rpds_py-0.20.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:40ce74fc86ee4645d0a225498d091d8bc61f39b709ebef8204cb8b5a464d3c0e"}, + {file = "rpds_py-0.20.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:4fe84294c7019456e56d93e8ababdad5a329cd25975be749c3f5f558abb48253"}, + {file = "rpds_py-0.20.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:338ca4539aad4ce70a656e5187a3a31c5204f261aef9f6ab50e50bcdffaf050a"}, + {file = "rpds_py-0.20.0-cp38-none-win32.whl", hash = "sha256:54b43a2b07db18314669092bb2de584524d1ef414588780261e31e85846c26a5"}, + {file = "rpds_py-0.20.0-cp38-none-win_amd64.whl", hash = "sha256:a1862d2d7ce1674cffa6d186d53ca95c6e17ed2b06b3f4c476173565c862d232"}, + {file = "rpds_py-0.20.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:3fde368e9140312b6e8b6c09fb9f8c8c2f00999d1823403ae90cc00480221b22"}, + {file = "rpds_py-0.20.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9824fb430c9cf9af743cf7aaf6707bf14323fb51ee74425c380f4c846ea70789"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:11ef6ce74616342888b69878d45e9f779b95d4bd48b382a229fe624a409b72c5"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c52d3f2f82b763a24ef52f5d24358553e8403ce05f893b5347098014f2d9eff2"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9d35cef91e59ebbeaa45214861874bc6f19eb35de96db73e467a8358d701a96c"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d72278a30111e5b5525c1dd96120d9e958464316f55adb030433ea905866f4de"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b4c29cbbba378759ac5786730d1c3cb4ec6f8ababf5c42a9ce303dc4b3d08cda"}, + {file = "rpds_py-0.20.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6632f2d04f15d1bd6fe0eedd3b86d9061b836ddca4c03d5cf5c7e9e6b7c14580"}, + {file = 
"rpds_py-0.20.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:d0b67d87bb45ed1cd020e8fbf2307d449b68abc45402fe1a4ac9e46c3c8b192b"}, + {file = "rpds_py-0.20.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:ec31a99ca63bf3cd7f1a5ac9fe95c5e2d060d3c768a09bc1d16e235840861420"}, + {file = "rpds_py-0.20.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:22e6c9976e38f4d8c4a63bd8a8edac5307dffd3ee7e6026d97f3cc3a2dc02a0b"}, + {file = "rpds_py-0.20.0-cp39-none-win32.whl", hash = "sha256:569b3ea770c2717b730b61998b6c54996adee3cef69fc28d444f3e7920313cf7"}, + {file = "rpds_py-0.20.0-cp39-none-win_amd64.whl", hash = "sha256:e6900ecdd50ce0facf703f7a00df12374b74bbc8ad9fe0f6559947fb20f82364"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:617c7357272c67696fd052811e352ac54ed1d9b49ab370261a80d3b6ce385045"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9426133526f69fcaba6e42146b4e12d6bc6c839b8b555097020e2b78ce908dcc"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:deb62214c42a261cb3eb04d474f7155279c1a8a8c30ac89b7dcb1721d92c3c02"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fcaeb7b57f1a1e071ebd748984359fef83ecb026325b9d4ca847c95bc7311c92"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d454b8749b4bd70dd0a79f428731ee263fa6995f83ccb8bada706e8d1d3ff89d"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d807dc2051abe041b6649681dce568f8e10668e3c1c6543ebae58f2d7e617855"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c20f0ddeb6e29126d45f89206b8291352b8c5b44384e78a6499d68b52ae511"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b7f19250ceef892adf27f0399b9e5afad019288e9be756d6919cb58892129f51"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:4f1ed4749a08379555cebf4650453f14452eaa9c43d0a95c49db50c18b7da075"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:dcedf0b42bcb4cfff4101d7771a10532415a6106062f005ab97d1d0ab5681c60"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:39ed0d010457a78f54090fafb5d108501b5aa5604cc22408fc1c0c77eac14344"}, + {file = "rpds_py-0.20.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:bb273176be34a746bdac0b0d7e4e2c467323d13640b736c4c477881a3220a989"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f918a1a130a6dfe1d7fe0f105064141342e7dd1611f2e6a21cd2f5c8cb1cfb3e"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:f60012a73aa396be721558caa3a6fd49b3dd0033d1675c6d59c4502e870fcf0c"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d2b1ad682a3dfda2a4e8ad8572f3100f95fad98cb99faf37ff0ddfe9cbf9d03"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:614fdafe9f5f19c63ea02817fa4861c606a59a604a77c8cdef5aa01d28b97921"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fa518bcd7600c584bf42e6617ee8132869e877db2f76bcdc281ec6a4113a53ab"}, + {file = 
"rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f0475242f447cc6cb8a9dd486d68b2ef7fbee84427124c232bff5f63b1fe11e5"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f90a4cd061914a60bd51c68bcb4357086991bd0bb93d8aa66a6da7701370708f"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:def7400461c3a3f26e49078302e1c1b38f6752342c77e3cf72ce91ca69fb1bc1"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:65794e4048ee837494aea3c21a28ad5fc080994dfba5b036cf84de37f7ad5074"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:faefcc78f53a88f3076b7f8be0a8f8d35133a3ecf7f3770895c25f8813460f08"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:5b4f105deeffa28bbcdff6c49b34e74903139afa690e35d2d9e3c2c2fba18cec"}, + {file = "rpds_py-0.20.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:fdfc3a892927458d98f3d55428ae46b921d1f7543b89382fdb483f5640daaec8"}, + {file = "rpds_py-0.20.0.tar.gz", hash = "sha256:d72a210824facfdaf8768cf2d7ca25a042c30320b3020de2fa04640920d4e121"}, ] [[package]] @@ -3873,18 +3925,19 @@ stdlib_list = "*" [[package]] name = "setuptools" -version = "70.2.0" +version = "72.1.0" description = "Easily download, build, install, upgrade, and uninstall Python packages" optional = false python-versions = ">=3.8" files = [ - {file = "setuptools-70.2.0-py3-none-any.whl", hash = "sha256:b8b8060bb426838fbe942479c90296ce976249451118ef566a5a0b7d8b78fb05"}, - {file = "setuptools-70.2.0.tar.gz", hash = "sha256:bd63e505105011b25c3c11f753f7e3b8465ea739efddaccef8f0efac2137bac1"}, + {file = "setuptools-72.1.0-py3-none-any.whl", hash = "sha256:5a03e1860cf56bb6ef48ce186b0e557fdba433237481a9a625176c2831be15d1"}, + {file = "setuptools-72.1.0.tar.gz", hash = "sha256:8d243eff56d095e5817f796ede6ae32941278f542e0f941867cc05ae52b162ec"}, ] [package.extras] +core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"] doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"] -test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.10.0)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] +test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.11.*)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", 
"pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] [[package]] name = "six" @@ -3921,27 +3974,27 @@ files = [ [[package]] name = "sphinx" -version = "7.3.7" +version = "7.4.7" description = "Python documentation generator" optional = false python-versions = ">=3.9" files = [ - {file = "sphinx-7.3.7-py3-none-any.whl", hash = "sha256:413f75440be4cacf328f580b4274ada4565fb2187d696a84970c23f77b64d8c3"}, - {file = "sphinx-7.3.7.tar.gz", hash = "sha256:a4a7db75ed37531c05002d56ed6948d4c42f473a36f46e1382b0bd76ca9627bc"}, + {file = "sphinx-7.4.7-py3-none-any.whl", hash = "sha256:c2419e2135d11f1951cd994d6eb18a1835bd8fdd8429f9ca375dc1f3281bd239"}, + {file = "sphinx-7.4.7.tar.gz", hash = "sha256:242f92a7ea7e6c5b406fdc2615413890ba9f699114a9c09192d7dfead2ee9cfe"}, ] [package.dependencies] alabaster = ">=0.7.14,<0.8.0" -babel = ">=2.9" -colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} -docutils = ">=0.18.1,<0.22" +babel = ">=2.13" +colorama = {version = ">=0.4.6", markers = "sys_platform == \"win32\""} +docutils = ">=0.20,<0.22" imagesize = ">=1.3" -importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""} -Jinja2 = ">=3.0" -packaging = ">=21.0" -Pygments = ">=2.14" -requests = ">=2.25.0" -snowballstemmer = ">=2.0" +importlib-metadata = {version = ">=6.0", markers = "python_version < \"3.10\""} +Jinja2 = ">=3.1" +packaging = ">=23.0" +Pygments = ">=2.17" +requests = ">=2.30.0" +snowballstemmer = ">=2.2" sphinxcontrib-applehelp = "*" sphinxcontrib-devhelp = "*" sphinxcontrib-htmlhelp = ">=2.0.0" @@ -3952,18 +4005,18 @@ tomli = {version = ">=2", markers = "python_version < \"3.11\""} [package.extras] docs = ["sphinxcontrib-websupport"] -lint = ["flake8 (>=3.5.0)", "importlib_metadata", "mypy (==1.9.0)", "pytest (>=6.0)", "ruff (==0.3.7)", "sphinx-lint", "tomli", "types-docutils", "types-requests"] -test = ["cython (>=3.0)", "defusedxml (>=0.7.1)", "pytest (>=6.0)", "setuptools (>=67.0)"] +lint = ["flake8 (>=6.0)", "importlib-metadata (>=6.0)", "mypy (==1.10.1)", "pytest (>=6.0)", "ruff (==0.5.2)", "sphinx-lint (>=0.9)", "tomli (>=2)", "types-docutils (==0.21.0.20240711)", "types-requests (>=2.30.0)"] +test = ["cython (>=3.0)", "defusedxml (>=0.7.1)", "pytest (>=8.0)", "setuptools (>=70.0)", "typing_extensions (>=4.9)"] [[package]] name = "sphinx-autodoc-typehints" -version = "2.2.2" +version = "2.2.3" description = "Type hints (PEP 484) support for the Sphinx autodoc extension" optional = false python-versions = ">=3.9" files = [ - {file = "sphinx_autodoc_typehints-2.2.2-py3-none-any.whl", hash = "sha256:b98337a8530c95b73ba0c65465847a8ab0a13403bdc81294d5ef396bbd1f783e"}, - {file = "sphinx_autodoc_typehints-2.2.2.tar.gz", hash = "sha256:128e600eeef63b722f3d8dac6403594592c8cade3ba66fd11dcb997465ee259d"}, + {file = "sphinx_autodoc_typehints-2.2.3-py3-none-any.whl", hash = "sha256:b7058e8c5831e5598afca1a78fda0695d3291388d954464a6e480c36198680c0"}, + {file = "sphinx_autodoc_typehints-2.2.3.tar.gz", hash = "sha256:fde3d888949bd0a91207cf1e54afda58121dbb4bf1f183d0cc78a0826654c974"}, ] [package.dependencies] @@ -3995,49 +4048,49 @@ dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"] [[package]] name = "sphinxcontrib-applehelp" -version = "1.0.8" +version = "2.0.0" description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books" optional = false 
python-versions = ">=3.9" files = [ - {file = "sphinxcontrib_applehelp-1.0.8-py3-none-any.whl", hash = "sha256:cb61eb0ec1b61f349e5cc36b2028e9e7ca765be05e49641c97241274753067b4"}, - {file = "sphinxcontrib_applehelp-1.0.8.tar.gz", hash = "sha256:c40a4f96f3776c4393d933412053962fac2b84f4c99a7982ba42e09576a70619"}, + {file = "sphinxcontrib_applehelp-2.0.0-py3-none-any.whl", hash = "sha256:4cd3f0ec4ac5dd9c17ec65e9ab272c9b867ea77425228e68ecf08d6b28ddbdb5"}, + {file = "sphinxcontrib_applehelp-2.0.0.tar.gz", hash = "sha256:2f29ef331735ce958efa4734873f084941970894c6090408b079c61b2e1c06d1"}, ] [package.extras] -lint = ["docutils-stubs", "flake8", "mypy"] +lint = ["mypy", "ruff (==0.5.5)", "types-docutils"] standalone = ["Sphinx (>=5)"] test = ["pytest"] [[package]] name = "sphinxcontrib-devhelp" -version = "1.0.6" +version = "2.0.0" description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents" optional = false python-versions = ">=3.9" files = [ - {file = "sphinxcontrib_devhelp-1.0.6-py3-none-any.whl", hash = "sha256:6485d09629944511c893fa11355bda18b742b83a2b181f9a009f7e500595c90f"}, - {file = "sphinxcontrib_devhelp-1.0.6.tar.gz", hash = "sha256:9893fd3f90506bc4b97bdb977ceb8fbd823989f4316b28c3841ec128544372d3"}, + {file = "sphinxcontrib_devhelp-2.0.0-py3-none-any.whl", hash = "sha256:aefb8b83854e4b0998877524d1029fd3e6879210422ee3780459e28a1f03a8a2"}, + {file = "sphinxcontrib_devhelp-2.0.0.tar.gz", hash = "sha256:411f5d96d445d1d73bb5d52133377b4248ec79db5c793ce7dbe59e074b4dd1ad"}, ] [package.extras] -lint = ["docutils-stubs", "flake8", "mypy"] +lint = ["mypy", "ruff (==0.5.5)", "types-docutils"] standalone = ["Sphinx (>=5)"] test = ["pytest"] [[package]] name = "sphinxcontrib-htmlhelp" -version = "2.0.5" +version = "2.1.0" description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files" optional = false python-versions = ">=3.9" files = [ - {file = "sphinxcontrib_htmlhelp-2.0.5-py3-none-any.whl", hash = "sha256:393f04f112b4d2f53d93448d4bce35842f62b307ccdc549ec1585e950bc35e04"}, - {file = "sphinxcontrib_htmlhelp-2.0.5.tar.gz", hash = "sha256:0dc87637d5de53dd5eec3a6a01753b1ccf99494bd756aafecd74b4fa9e729015"}, + {file = "sphinxcontrib_htmlhelp-2.1.0-py3-none-any.whl", hash = "sha256:166759820b47002d22914d64a075ce08f4c46818e17cfc9470a9786b759b19f8"}, + {file = "sphinxcontrib_htmlhelp-2.1.0.tar.gz", hash = "sha256:c9e2916ace8aad64cc13a0d233ee22317f2b9025b9cf3295249fa985cc7082e9"}, ] [package.extras] -lint = ["docutils-stubs", "flake8", "mypy"] +lint = ["mypy", "ruff (==0.5.5)", "types-docutils"] standalone = ["Sphinx (>=5)"] test = ["html5lib", "pytest"] @@ -4071,92 +4124,92 @@ test = ["flake8", "mypy", "pytest"] [[package]] name = "sphinxcontrib-qthelp" -version = "1.0.7" +version = "2.0.0" description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents" optional = false python-versions = ">=3.9" files = [ - {file = "sphinxcontrib_qthelp-1.0.7-py3-none-any.whl", hash = "sha256:e2ae3b5c492d58fcbd73281fbd27e34b8393ec34a073c792642cd8e529288182"}, - {file = "sphinxcontrib_qthelp-1.0.7.tar.gz", hash = "sha256:053dedc38823a80a7209a80860b16b722e9e0209e32fea98c90e4e6624588ed6"}, + {file = "sphinxcontrib_qthelp-2.0.0-py3-none-any.whl", hash = "sha256:b18a828cdba941ccd6ee8445dbe72ffa3ef8cbe7505d8cd1fa0d42d3f2d5f3eb"}, + {file = "sphinxcontrib_qthelp-2.0.0.tar.gz", hash = "sha256:4fe7d0ac8fc171045be623aba3e2a8f613f8682731f9153bb2e40ece16b9bbab"}, ] [package.extras] -lint = ["docutils-stubs", "flake8", "mypy"] +lint = ["mypy", "ruff 
(==0.5.5)", "types-docutils"] standalone = ["Sphinx (>=5)"] -test = ["pytest"] +test = ["defusedxml (>=0.7.1)", "pytest"] [[package]] name = "sphinxcontrib-serializinghtml" -version = "1.1.10" +version = "2.0.0" description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)" optional = false python-versions = ">=3.9" files = [ - {file = "sphinxcontrib_serializinghtml-1.1.10-py3-none-any.whl", hash = "sha256:326369b8df80a7d2d8d7f99aa5ac577f51ea51556ed974e7716cfd4fca3f6cb7"}, - {file = "sphinxcontrib_serializinghtml-1.1.10.tar.gz", hash = "sha256:93f3f5dc458b91b192fe10c397e324f262cf163d79f3282c158e8436a2c4511f"}, + {file = "sphinxcontrib_serializinghtml-2.0.0-py3-none-any.whl", hash = "sha256:6e2cb0eef194e10c27ec0023bfeb25badbbb5868244cf5bc5bdc04e4464bf331"}, + {file = "sphinxcontrib_serializinghtml-2.0.0.tar.gz", hash = "sha256:e9d912827f872c029017a53f0ef2180b327c3f7fd23c87229f7a8e8b70031d4d"}, ] [package.extras] -lint = ["docutils-stubs", "flake8", "mypy"] +lint = ["mypy", "ruff (==0.5.5)", "types-docutils"] standalone = ["Sphinx (>=5)"] test = ["pytest"] [[package]] name = "sqlalchemy" -version = "2.0.31" +version = "2.0.32" description = "Database Abstraction Library" optional = false python-versions = ">=3.7" files = [ - {file = "SQLAlchemy-2.0.31-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f2a213c1b699d3f5768a7272de720387ae0122f1becf0901ed6eaa1abd1baf6c"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9fea3d0884e82d1e33226935dac990b967bef21315cbcc894605db3441347443"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3ad7f221d8a69d32d197e5968d798217a4feebe30144986af71ada8c548e9fa"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f2bee229715b6366f86a95d497c347c22ddffa2c7c96143b59a2aa5cc9eebbc"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cd5b94d4819c0c89280b7c6109c7b788a576084bf0a480ae17c227b0bc41e109"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:750900a471d39a7eeba57580b11983030517a1f512c2cb287d5ad0fcf3aebd58"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-win32.whl", hash = "sha256:7bd112be780928c7f493c1a192cd8c5fc2a2a7b52b790bc5a84203fb4381c6be"}, - {file = "SQLAlchemy-2.0.31-cp310-cp310-win_amd64.whl", hash = "sha256:5a48ac4d359f058474fadc2115f78a5cdac9988d4f99eae44917f36aa1476327"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f68470edd70c3ac3b6cd5c2a22a8daf18415203ca1b036aaeb9b0fb6f54e8298"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2e2c38c2a4c5c634fe6c3c58a789712719fa1bf9b9d6ff5ebfce9a9e5b89c1ca"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd15026f77420eb2b324dcb93551ad9c5f22fab2c150c286ef1dc1160f110203"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2196208432deebdfe3b22185d46b08f00ac9d7b01284e168c212919891289396"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:352b2770097f41bff6029b280c0e03b217c2dcaddc40726f8f53ed58d8a85da4"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:56d51ae825d20d604583f82c9527d285e9e6d14f9a5516463d9705dab20c3740"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-win32.whl", hash = 
"sha256:6e2622844551945db81c26a02f27d94145b561f9d4b0c39ce7bfd2fda5776dac"}, - {file = "SQLAlchemy-2.0.31-cp311-cp311-win_amd64.whl", hash = "sha256:ccaf1b0c90435b6e430f5dd30a5aede4764942a695552eb3a4ab74ed63c5b8d3"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:3b74570d99126992d4b0f91fb87c586a574a5872651185de8297c6f90055ae42"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f77c4f042ad493cb8595e2f503c7a4fe44cd7bd59c7582fd6d78d7e7b8ec52c"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cd1591329333daf94467e699e11015d9c944f44c94d2091f4ac493ced0119449"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74afabeeff415e35525bf7a4ecdab015f00e06456166a2eba7590e49f8db940e"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b9c01990d9015df2c6f818aa8f4297d42ee71c9502026bb074e713d496e26b67"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:66f63278db425838b3c2b1c596654b31939427016ba030e951b292e32b99553e"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-win32.whl", hash = "sha256:0b0f658414ee4e4b8cbcd4a9bb0fd743c5eeb81fc858ca517217a8013d282c96"}, - {file = "SQLAlchemy-2.0.31-cp312-cp312-win_amd64.whl", hash = "sha256:fa4b1af3e619b5b0b435e333f3967612db06351217c58bfb50cee5f003db2a5a"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:f43e93057cf52a227eda401251c72b6fbe4756f35fa6bfebb5d73b86881e59b0"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d337bf94052856d1b330d5fcad44582a30c532a2463776e1651bd3294ee7e58b"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c06fb43a51ccdff3b4006aafee9fcf15f63f23c580675f7734245ceb6b6a9e05"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:b6e22630e89f0e8c12332b2b4c282cb01cf4da0d26795b7eae16702a608e7ca1"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:79a40771363c5e9f3a77f0e28b3302801db08040928146e6808b5b7a40749c88"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-win32.whl", hash = "sha256:501ff052229cb79dd4c49c402f6cb03b5a40ae4771efc8bb2bfac9f6c3d3508f"}, - {file = "SQLAlchemy-2.0.31-cp37-cp37m-win_amd64.whl", hash = "sha256:597fec37c382a5442ffd471f66ce12d07d91b281fd474289356b1a0041bdf31d"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dc6d69f8829712a4fd799d2ac8d79bdeff651c2301b081fd5d3fe697bd5b4ab9"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:23b9fbb2f5dd9e630db70fbe47d963c7779e9c81830869bd7d137c2dc1ad05fb"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a21c97efcbb9f255d5c12a96ae14da873233597dfd00a3a0c4ce5b3e5e79704"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26a6a9837589c42b16693cf7bf836f5d42218f44d198f9343dd71d3164ceeeac"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:dc251477eae03c20fae8db9c1c23ea2ebc47331bcd73927cdcaecd02af98d3c3"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:2fd17e3bb8058359fa61248c52c7b09a97cf3c820e54207a50af529876451808"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-win32.whl", hash = 
"sha256:c76c81c52e1e08f12f4b6a07af2b96b9b15ea67ccdd40ae17019f1c373faa227"}, - {file = "SQLAlchemy-2.0.31-cp38-cp38-win_amd64.whl", hash = "sha256:4b600e9a212ed59355813becbcf282cfda5c93678e15c25a0ef896b354423238"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b6cf796d9fcc9b37011d3f9936189b3c8074a02a4ed0c0fbbc126772c31a6d4"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:78fe11dbe37d92667c2c6e74379f75746dc947ee505555a0197cfba9a6d4f1a4"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2fc47dc6185a83c8100b37acda27658fe4dbd33b7d5e7324111f6521008ab4fe"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a41514c1a779e2aa9a19f67aaadeb5cbddf0b2b508843fcd7bafdf4c6864005"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:afb6dde6c11ea4525318e279cd93c8734b795ac8bb5dda0eedd9ebaca7fa23f1"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:3f9faef422cfbb8fd53716cd14ba95e2ef655400235c3dfad1b5f467ba179c8c"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-win32.whl", hash = "sha256:fc6b14e8602f59c6ba893980bea96571dd0ed83d8ebb9c4479d9ed5425d562e9"}, - {file = "SQLAlchemy-2.0.31-cp39-cp39-win_amd64.whl", hash = "sha256:3cb8a66b167b033ec72c3812ffc8441d4e9f5f78f5e31e54dcd4c90a4ca5bebc"}, - {file = "SQLAlchemy-2.0.31-py3-none-any.whl", hash = "sha256:69f3e3c08867a8e4856e92d7afb618b95cdee18e0bc1647b77599722c9a28911"}, - {file = "SQLAlchemy-2.0.31.tar.gz", hash = "sha256:b607489dd4a54de56984a0c7656247504bd5523d9d0ba799aef59d4add009484"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0c9045ecc2e4db59bfc97b20516dfdf8e41d910ac6fb667ebd3a79ea54084619"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1467940318e4a860afd546ef61fefb98a14d935cd6817ed07a228c7f7c62f389"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5954463675cb15db8d4b521f3566a017c8789222b8316b1e6934c811018ee08b"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:167e7497035c303ae50651b351c28dc22a40bb98fbdb8468cdc971821b1ae533"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:b27dfb676ac02529fb6e343b3a482303f16e6bc3a4d868b73935b8792edb52d0"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:bf2360a5e0f7bd75fa80431bf8ebcfb920c9f885e7956c7efde89031695cafb8"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-win32.whl", hash = "sha256:306fe44e754a91cd9d600a6b070c1f2fadbb4a1a257b8781ccf33c7067fd3e4d"}, + {file = "SQLAlchemy-2.0.32-cp310-cp310-win_amd64.whl", hash = "sha256:99db65e6f3ab42e06c318f15c98f59a436f1c78179e6a6f40f529c8cc7100b22"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:21b053be28a8a414f2ddd401f1be8361e41032d2ef5884b2f31d31cb723e559f"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b178e875a7a25b5938b53b006598ee7645172fccafe1c291a706e93f48499ff5"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:723a40ee2cc7ea653645bd4cf024326dea2076673fc9d3d33f20f6c81db83e1d"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:295ff8689544f7ee7e819529633d058bd458c1fd7f7e3eebd0f9268ebc56c2a0"}, + {file = 
"SQLAlchemy-2.0.32-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:49496b68cd190a147118af585173ee624114dfb2e0297558c460ad7495f9dfe2"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:acd9b73c5c15f0ec5ce18128b1fe9157ddd0044abc373e6ecd5ba376a7e5d961"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-win32.whl", hash = "sha256:9365a3da32dabd3e69e06b972b1ffb0c89668994c7e8e75ce21d3e5e69ddef28"}, + {file = "SQLAlchemy-2.0.32-cp311-cp311-win_amd64.whl", hash = "sha256:8bd63d051f4f313b102a2af1cbc8b80f061bf78f3d5bd0843ff70b5859e27924"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6bab3db192a0c35e3c9d1560eb8332463e29e5507dbd822e29a0a3c48c0a8d92"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:19d98f4f58b13900d8dec4ed09dd09ef292208ee44cc9c2fe01c1f0a2fe440e9"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cd33c61513cb1b7371fd40cf221256456d26a56284e7d19d1f0b9f1eb7dd7e8"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d6ba0497c1d066dd004e0f02a92426ca2df20fac08728d03f67f6960271feec"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2b6be53e4fde0065524f1a0a7929b10e9280987b320716c1509478b712a7688c"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:916a798f62f410c0b80b63683c8061f5ebe237b0f4ad778739304253353bc1cb"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-win32.whl", hash = "sha256:31983018b74908ebc6c996a16ad3690301a23befb643093fcfe85efd292e384d"}, + {file = "SQLAlchemy-2.0.32-cp312-cp312-win_amd64.whl", hash = "sha256:4363ed245a6231f2e2957cccdda3c776265a75851f4753c60f3004b90e69bfeb"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b8afd5b26570bf41c35c0121801479958b4446751a3971fb9a480c1afd85558e"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c750987fc876813f27b60d619b987b057eb4896b81117f73bb8d9918c14f1cad"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ada0102afff4890f651ed91120c1120065663506b760da4e7823913ebd3258be"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:78c03d0f8a5ab4f3034c0e8482cfcc415a3ec6193491cfa1c643ed707d476f16"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:3bd1cae7519283ff525e64645ebd7a3e0283f3c038f461ecc1c7b040a0c932a1"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-win32.whl", hash = "sha256:01438ebcdc566d58c93af0171c74ec28efe6a29184b773e378a385e6215389da"}, + {file = "SQLAlchemy-2.0.32-cp37-cp37m-win_amd64.whl", hash = "sha256:4979dc80fbbc9d2ef569e71e0896990bc94df2b9fdbd878290bd129b65ab579c"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c742be912f57586ac43af38b3848f7688863a403dfb220193a882ea60e1ec3a"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:62e23d0ac103bcf1c5555b6c88c114089587bc64d048fef5bbdb58dfd26f96da"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:251f0d1108aab8ea7b9aadbd07fb47fb8e3a5838dde34aa95a3349876b5a1f1d"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ef18a84e5116340e38eca3e7f9eeaaef62738891422e7c2a0b80feab165905f"}, + {file = 
"SQLAlchemy-2.0.32-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:3eb6a97a1d39976f360b10ff208c73afb6a4de86dd2a6212ddf65c4a6a2347d5"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:0c1c9b673d21477cec17ab10bc4decb1322843ba35b481585facd88203754fc5"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-win32.whl", hash = "sha256:c41a2b9ca80ee555decc605bd3c4520cc6fef9abde8fd66b1cf65126a6922d65"}, + {file = "SQLAlchemy-2.0.32-cp38-cp38-win_amd64.whl", hash = "sha256:8a37e4d265033c897892279e8adf505c8b6b4075f2b40d77afb31f7185cd6ecd"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:52fec964fba2ef46476312a03ec8c425956b05c20220a1a03703537824b5e8e1"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:328429aecaba2aee3d71e11f2477c14eec5990fb6d0e884107935f7fb6001632"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85a01b5599e790e76ac3fe3aa2f26e1feba56270023d6afd5550ed63c68552b3"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aaf04784797dcdf4c0aa952c8d234fa01974c4729db55c45732520ce12dd95b4"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4488120becf9b71b3ac718f4138269a6be99a42fe023ec457896ba4f80749525"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:14e09e083a5796d513918a66f3d6aedbc131e39e80875afe81d98a03312889e6"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-win32.whl", hash = "sha256:0d322cc9c9b2154ba7e82f7bf25ecc7c36fbe2d82e2933b3642fc095a52cfc78"}, + {file = "SQLAlchemy-2.0.32-cp39-cp39-win_amd64.whl", hash = "sha256:7dd8583df2f98dea28b5cd53a1beac963f4f9d087888d75f22fcc93a07cf8d84"}, + {file = "SQLAlchemy-2.0.32-py3-none-any.whl", hash = "sha256:e567a8793a692451f706b363ccf3c45e056b67d90ead58c3bc9471af5d212202"}, + {file = "SQLAlchemy-2.0.32.tar.gz", hash = "sha256:c1b88cc8b02b6a5f0efb0345a03672d4c897dc7d92585176f88c67346f565ea8"}, ] [package.dependencies] @@ -4272,13 +4325,13 @@ test = ["coverage[toml]", "pytest", "pytest-cov"] [[package]] name = "sympy" -version = "1.13.1" +version = "1.13.2" description = "Computer algebra system (CAS) in Python" optional = false python-versions = ">=3.8" files = [ - {file = "sympy-1.13.1-py3-none-any.whl", hash = "sha256:db36cdc64bf61b9b24578b6f7bab1ecdd2452cf008f34faa33776680c26d66f8"}, - {file = "sympy-1.13.1.tar.gz", hash = "sha256:9cebf7e04ff162015ce31c9c6c9144daa34a93bd082f54fd8f12deca4f47515f"}, + {file = "sympy-1.13.2-py3-none-any.whl", hash = "sha256:c51d75517712f1aed280d4ce58506a4a88d635d6b5dd48b39102a7ae1f3fcfe9"}, + {file = "sympy-1.13.2.tar.gz", hash = "sha256:401449d84d07be9d0c7a46a64bd54fe097667d5e7181bfe67ec777be9e01cb13"}, ] [package.dependencies] @@ -4422,13 +4475,12 @@ optree = ["optree (>=0.11.0)"] [[package]] name = "torchmetrics" -version = "1.4.0.post0" +version = "1.4.1" description = "PyTorch native Metrics" optional = false python-versions = ">=3.8" files = [ - {file = "torchmetrics-1.4.0.post0-py3-none-any.whl", hash = "sha256:ab234216598e3fbd8d62ee4541a0e74e7e8fc935d099683af5b8da50f745b3c8"}, - {file = "torchmetrics-1.4.0.post0.tar.gz", hash = "sha256:ab9bcfe80e65dbabbddb6cecd9be21f1f1d5207bb74051ef95260740f2762358"}, + {file = "torchmetrics-1.4.1-py3-none-any.whl", hash = "sha256:c2e7cd56dd8bdc60ae63d712f3bdce649f23bd174d9180bdd0b746e0230b865a"}, ] [package.dependencies] @@ -4438,15 +4490,14 @@ packaging = ">17.1" torch = ">=1.10.0" [package.extras] -all = ["SciencePlots 
(>=2.0.0)", "ipadic (>=1.0.0)", "matplotlib (>=3.3.0)", "mecab-python3 (>=1.0.6)", "mypy (==1.9.0)", "nltk (>=3.6)", "piq (<=0.8.0)", "pretty-errors (>=1.2.0)", "pycocotools (>2.0.0)", "pystoi (>=0.3.0)", "regex (>=2021.9.24)", "scipy (>1.0.0)", "sentencepiece (>=0.2.0)", "torch (==2.3.0)", "torch-fidelity (<=0.4.0)", "torchaudio (>=0.10.0)", "torchvision (>=0.8)", "tqdm (>=4.41.0)", "transformers (>4.4.0)", "transformers (>=4.10.0)", "types-PyYAML", "types-emoji", "types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] -audio = ["pystoi (>=0.3.0)", "torchaudio (>=0.10.0)"] -debug = ["pretty-errors (>=1.2.0)"] +all = ["SciencePlots (>=2.0.0)", "gammatone (>1.0.0)", "ipadic (>=1.0.0)", "matplotlib (>=3.3.0)", "mecab-python3 (>=1.0.6)", "mypy (==1.11.0)", "nltk (>=3.6)", "pesq (>=0.0.4)", "piq (<=0.8.0)", "pycocotools (>2.0.0)", "pystoi (>=0.3.0)", "regex (>=2021.9.24)", "scipy (>1.0.0)", "sentencepiece (>=0.2.0)", "torch (==2.3.1)", "torch-fidelity (<=0.4.0)", "torchaudio (>=0.10.0)", "torchvision (>=0.8)", "tqdm (>=4.41.0)", "transformers (>4.4.0)", "transformers (>=4.42.3)", "types-PyYAML", "types-emoji", "types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] +audio = ["gammatone (>1.0.0)", "pesq (>=0.0.4)", "pystoi (>=0.3.0)", "torchaudio (>=0.10.0)"] detection = ["pycocotools (>2.0.0)", "torchvision (>=0.8)"] -dev = ["SciencePlots (>=2.0.0)", "bert-score (==0.3.13)", "dython (<=0.7.5)", "fairlearn", "fast-bss-eval (>=0.1.0)", "faster-coco-eval (>=1.3.3)", "huggingface-hub (<0.23)", "ipadic (>=1.0.0)", "jiwer (>=2.3.0)", "kornia (>=0.6.7)", "lpips (<=0.1.4)", "matplotlib (>=3.3.0)", "mecab-ko (>=1.0.0)", "mecab-ko-dic (>=1.0.0)", "mecab-python3 (>=1.0.6)", "mir-eval (>=0.6)", "monai (==1.3.0)", "mypy (==1.9.0)", "netcal (>1.0.0)", "nltk (>=3.6)", "numpy (<1.27.0)", "pandas (>1.0.0)", "pandas (>=1.4.0)", "piq (<=0.8.0)", "pretty-errors (>=1.2.0)", "pycocotools (>2.0.0)", "pystoi (>=0.3.0)", "pytorch-msssim (==1.0.0)", "regex (>=2021.9.24)", "rouge-score (>0.1.0)", "sacrebleu (>=2.3.0)", "scikit-image (>=0.19.0)", "scipy (>1.0.0)", "sentencepiece (>=0.2.0)", "sewar (>=0.4.4)", "statsmodels (>0.13.5)", "torch (==2.3.0)", "torch-complex (<=0.4.3)", "torch-fidelity (<=0.4.0)", "torchaudio (>=0.10.0)", "torchvision (>=0.8)", "tqdm (>=4.41.0)", "transformers (>4.4.0)", "transformers (>=4.10.0)", "types-PyYAML", "types-emoji", "types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] +dev = ["SciencePlots (>=2.0.0)", "bert-score (==0.3.13)", "dython (<=0.7.6)", "fairlearn", "fast-bss-eval (>=0.1.0)", "faster-coco-eval (>=1.3.3)", "gammatone (>1.0.0)", "huggingface-hub (<0.25)", "ipadic (>=1.0.0)", "jiwer (>=2.3.0)", "kornia (>=0.6.7)", "lpips (<=0.1.4)", "matplotlib (>=3.3.0)", "mecab-ko (>=1.0.0)", "mecab-ko-dic (>=1.0.0)", "mecab-python3 (>=1.0.6)", "mir-eval (>=0.6)", "monai (==1.3.2)", "mypy (==1.11.0)", "netcal (>1.0.0)", "nltk (>=3.6)", "numpy (<2.1.0)", "pandas (>1.0.0)", "pandas (>=1.4.0)", "pesq (>=0.0.4)", "piq (<=0.8.0)", "pycocotools (>2.0.0)", "pystoi (>=0.3.0)", "pytorch-msssim (==1.0.0)", "regex (>=2021.9.24)", "rouge-score (>0.1.0)", "sacrebleu (>=2.3.0)", "scikit-image (>=0.19.0)", "scipy (>1.0.0)", "sentencepiece (>=0.2.0)", "sewar (>=0.4.4)", "statsmodels (>0.13.5)", "torch (==2.3.1)", "torch-complex (<0.5.0)", "torch-fidelity (<=0.4.0)", "torchaudio (>=0.10.0)", "torchvision (>=0.8)", "tqdm (>=4.41.0)", "transformers (>4.4.0)", "transformers (>=4.42.3)", "types-PyYAML", "types-emoji", 
"types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] image = ["scipy (>1.0.0)", "torch-fidelity (<=0.4.0)", "torchvision (>=0.8)"] -multimodal = ["piq (<=0.8.0)", "transformers (>=4.10.0)"] +multimodal = ["piq (<=0.8.0)", "transformers (>=4.42.3)"] text = ["ipadic (>=1.0.0)", "mecab-python3 (>=1.0.6)", "nltk (>=3.6)", "regex (>=2021.9.24)", "sentencepiece (>=0.2.0)", "tqdm (>=4.41.0)", "transformers (>4.4.0)"] -typing = ["mypy (==1.9.0)", "torch (==2.3.0)", "types-PyYAML", "types-emoji", "types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] +typing = ["mypy (==1.11.0)", "torch (==2.3.1)", "types-PyYAML", "types-emoji", "types-protobuf", "types-requests", "types-setuptools", "types-six", "types-tabulate"] visual = ["SciencePlots (>=2.0.0)", "matplotlib (>=3.3.0)"] [[package]] @@ -4471,13 +4522,13 @@ files = [ [[package]] name = "tqdm" -version = "4.66.4" +version = "4.66.5" description = "Fast, Extensible Progress Meter" optional = false python-versions = ">=3.7" files = [ - {file = "tqdm-4.66.4-py3-none-any.whl", hash = "sha256:b75ca56b413b030bc3f00af51fd2c1a1a5eac6a0c1cca83cbb37a5c52abce644"}, - {file = "tqdm-4.66.4.tar.gz", hash = "sha256:e4d936c9de8727928f3be6079590e97d9abfe8d39a590be678eb5919ffc186bb"}, + {file = "tqdm-4.66.5-py3-none-any.whl", hash = "sha256:90279a3770753eafc9194a0364852159802111925aa30eb3f9d85b0e805ac7cd"}, + {file = "tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad"}, ] [package.dependencies] @@ -4799,13 +4850,13 @@ multidict = ">=4.0" [[package]] name = "zipp" -version = "3.19.2" +version = "3.20.0" description = "Backport of pathlib-compatible object wrapper for zip files" optional = false python-versions = ">=3.8" files = [ - {file = "zipp-3.19.2-py3-none-any.whl", hash = "sha256:f091755f667055f2d02b32c53771a7a6c8b47e1fdbc4b72a8b9072b3eef8015c"}, - {file = "zipp-3.19.2.tar.gz", hash = "sha256:bf1dcf6450f873a13e952a29504887c89e6de7506209e5b1bcc3460135d4de19"}, + {file = "zipp-3.20.0-py3-none-any.whl", hash = "sha256:58da6168be89f0be59beb194da1250516fdaa062ccebd30127ac65d30045e10d"}, + {file = "zipp-3.20.0.tar.gz", hash = "sha256:0145e43d89664cfe1a2e533adc75adafed82fe2da404b4bbb6b026c0157bdb31"}, ] [package.extras] @@ -4814,5 +4865,5 @@ test = ["big-O", "importlib-resources", "jaraco.functools", "jaraco.itertools", [metadata] lock-version = "2.0" -python-versions = ">= 3.9, < 3.11" -content-hash = "78380f8f6b3d0f82d6ee40c00804542f88b7fc45f0a2ea15091a9b7557d8f23a" +python-versions = ">= 3.9, < 3.13" +content-hash = "e1a6ffd72d2d7124a20d852de1bab0149fb7334baba3ae705ac0c49119679f53" diff --git a/pyproject.toml b/pyproject.toml index 2a48f79..f37372a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,7 +1,7 @@ [tool.poetry] name = "biostarling" packages = [{ include = "starling" }] -version = "0.1.3" +version = "0.1.4" description = "Segmentation error aware clustering single-cell spatial expression data" repository = "https://github.com/camlab-bioml/starling" authors = ["Jett (Yuju) Lee <[email protected]>"] @@ -10,17 +10,21 @@ keywords = ["imaging cytometry classifier single-cell"] classifiers = [ "Intended Audience :: Science/Research", "Programming Language :: Python :: 3.9", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", ] license = "See License.txt" [tool.poetry.dependencies] -python = ">= 3.9, < 3.11" +python = ">= 3.9, < 3.13" phenograph = "^1.5.7" flowsom 
= "^0.1.1" numpy = "^1.26" -torch = "^2.4.0" +pandas = ">= 0.23.0" pytorch-lightning = "^2.3.3" scanpy = "^1.10.2" +torch = "^2.4.0" [tool.poetry.group.dev] optional = true diff --git a/requirements.txt b/requirements.txt index d874ef8..2ff666f 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,68 +1,86 @@ -aiohttp==3.9.1 ; python_version >= "3.9" and python_version < "4.0" -aiosignal==1.3.1 ; python_version >= "3.9" and python_version < "4.0" -anndata==0.10.4 ; python_version >= "3.9" and python_version < "4.0" -array-api-compat==1.4 ; python_version >= "3.9" and python_version < "4.0" +aiohappyeyeballs==2.3.5 ; python_version >= "3.9" and python_version < "3.13" +aiohttp==3.10.3 ; python_version >= "3.9" and python_version < "3.13" +aiosignal==1.3.1 ; python_version >= "3.9" and python_version < "3.13" +anndata==0.10.8 ; python_version >= "3.9" and python_version < "3.13" +array-api-compat==1.8 ; python_version >= "3.9" and python_version < "3.13" async-timeout==4.0.3 ; python_version >= "3.9" and python_version < "3.11" -attrs==23.2.0 ; python_version >= "3.9" and python_version < "4.0" -certifi==2023.11.17 ; python_version >= "3.9" and python_version < "4.0" -charset-normalizer==3.3.2 ; python_version >= "3.9" and python_version < "4.0" -colorama==0.4.6 ; python_version >= "3.9" and python_version < "4.0" and platform_system == "Windows" -contourpy==1.2.0 ; python_version >= "3.9" and python_version < "4.0" -cycler==0.12.1 ; python_version >= "3.9" and python_version < "4.0" -decorator==5.1.1 ; python_version >= "3.9" and python_version < "4.0" -exceptiongroup==1.2.0 ; python_version >= "3.9" and python_version < "3.11" -fcsparser==0.2.8 ; python_version >= "3.9" and python_version < "4.0" -flowcytometrytools==0.5.1 ; python_version >= "3.9" and python_version < "4.0" -flowsom==0.1.1 ; python_version >= "3.9" and python_version < "4.0" -fonttools==4.47.2 ; python_version >= "3.9" and python_version < "4.0" -frozenlist==1.4.1 ; python_version >= "3.9" and python_version < "4.0" -fsspec[http]==2023.12.2 ; python_version >= "3.9" and python_version < "4.0" -h5py==3.10.0 ; python_version >= "3.9" and python_version < "4.0" -idna==3.6 ; python_version >= "3.9" and python_version < "4.0" -igraph==0.10.8 ; python_version >= "3.9" and python_version < "4.0" -importlib-resources==6.1.1 ; python_version >= "3.9" and python_version < "3.10" -joblib==1.3.2 ; python_version >= "3.9" and python_version < "4.0" -kiwisolver==1.4.5 ; python_version >= "3.9" and python_version < "4.0" -leidenalg==0.10.1 ; python_version >= "3.9" and python_version < "4.0" -lightning-utilities==0.10.0 ; python_version >= "3.9" and python_version < "4.0" -llvmlite==0.41.1 ; python_version >= "3.9" and python_version < "4.0" -matplotlib==3.8.2 ; python_version >= "3.9" and python_version < "4.0" -minisom==2.3.1 ; python_version >= "3.9" and python_version < "4.0" -multidict==6.0.4 ; python_version >= "3.9" and python_version < "4.0" -natsort==8.4.0 ; python_version >= "3.9" and python_version < "4.0" -networkx==3.2.1 ; python_version >= "3.9" and python_version < "4.0" -numba==0.58.1 ; python_version >= "3.9" and python_version < "4.0" -numpy==1.26.3 ; python_version >= "3.9" and python_version < "4.0" -packaging==23.2 ; python_version >= "3.9" and python_version < "4.0" -pandas==2.1.4 ; python_version >= "3.9" and python_version < "4.0" -patsy==0.5.6 ; python_version >= "3.9" and python_version < "4.0" -phenograph==1.5.7 ; python_version >= "3.9" and python_version < "4.0" -pillow==10.2.0 ; python_version >= "3.9" 
and python_version < "4.0" -psutil==5.9.7 ; python_version >= "3.9" and python_version < "4.0" -pynndescent==0.5.11 ; python_version >= "3.9" and python_version < "4.0" -pyparsing==3.1.1 ; python_version >= "3.9" and python_version < "4.0" -python-dateutil==2.8.2 ; python_version >= "3.9" and python_version < "4.0" -pytorch-lightning==2.1.0 ; python_version >= "3.9" and python_version < "4.0" -pytz==2023.3.post1 ; python_version >= "3.9" and python_version < "4.0" -pyyaml==6.0.1 ; python_version >= "3.9" and python_version < "4.0" -requests==2.31.0 ; python_version >= "3.9" and python_version < "4.0" -scanpy==1.9.5 ; python_version >= "3.9" and python_version < "4.0" -scikit-learn==1.4.0 ; python_version >= "3.9" and python_version < "4.0" -scipy==1.11.4 ; python_version >= "3.9" and python_version < "4.0" -seaborn==0.13.1 ; python_version >= "3.9" and python_version < "4.0" -session-info==1.0.0 ; python_version >= "3.9" and python_version < "4.0" -setuptools==69.0.3 ; python_version >= "3.9" and python_version < "4.0" -six==1.16.0 ; python_version >= "3.9" and python_version < "4.0" -statsmodels==0.14.1 ; python_version >= "3.9" and python_version < "4.0" -stdlib-list==0.10.0 ; python_version >= "3.9" and python_version < "4.0" -texttable==1.7.0 ; python_version >= "3.9" and python_version < "4.0" -threadpoolctl==3.2.0 ; python_version >= "3.9" and python_version < "4.0" -torch==1.12.1 ; python_version >= "3.9" and python_version < "4.0" -torchmetrics==1.3.0.post0 ; python_version >= "3.9" and python_version < "4.0" -tqdm==4.66.1 ; python_version >= "3.9" and python_version < "4.0" -typing-extensions==4.9.0 ; python_version >= "3.9" and python_version < "4.0" -tzdata==2023.4 ; python_version >= "3.9" and python_version < "4.0" -umap-learn==0.5.5 ; python_version >= "3.9" and python_version < "4.0" -urllib3==2.1.0 ; python_version >= "3.9" and python_version < "4.0" -yarl==1.9.4 ; python_version >= "3.9" and python_version < "4.0" -zipp==3.17.0 ; python_version >= "3.9" and python_version < "3.10" +attrs==24.2.0 ; python_version >= "3.9" and python_version < "3.13" +colorama==0.4.6 ; python_version >= "3.9" and python_version < "3.13" and platform_system == "Windows" +contourpy==1.2.1 ; python_version >= "3.9" and python_version < "3.13" +cycler==0.12.1 ; python_version >= "3.9" and python_version < "3.13" +decorator==5.1.1 ; python_version >= "3.9" and python_version < "3.13" +exceptiongroup==1.2.2 ; python_version >= "3.9" and python_version < "3.11" +fcsparser==0.2.8 ; python_version >= "3.9" and python_version < "3.13" +filelock==3.15.4 ; python_version >= "3.9" and python_version < "3.13" +flowcytometrytools==0.5.1 ; python_version >= "3.9" and python_version < "3.13" +flowsom==0.1.1 ; python_version >= "3.9" and python_version < "3.13" +fonttools==4.53.1 ; python_version >= "3.9" and python_version < "3.13" +frozenlist==1.4.1 ; python_version >= "3.9" and python_version < "3.13" +fsspec==2024.6.1 ; python_version >= "3.9" and python_version < "3.13" +fsspec[http]==2024.6.1 ; python_version >= "3.9" and python_version < "3.13" +get-annotations==0.1.2 ; python_version >= "3.9" and python_version < "3.10" +h5py==3.11.0 ; python_version >= "3.9" and python_version < "3.13" +idna==3.7 ; python_version >= "3.9" and python_version < "3.13" +igraph==0.11.6 ; python_version >= "3.9" and python_version < "3.13" +importlib-resources==6.4.0 ; python_version >= "3.9" and python_version < "3.10" +jinja2==3.1.4 ; python_version >= "3.9" and python_version < "3.13" +joblib==1.4.2 ; python_version >= 
"3.9" and python_version < "3.13" +kiwisolver==1.4.5 ; python_version >= "3.9" and python_version < "3.13" +legacy-api-wrap==1.4 ; python_version >= "3.9" and python_version < "3.13" +leidenalg==0.10.2 ; python_version >= "3.9" and python_version < "3.13" +lightning-utilities==0.11.6 ; python_version >= "3.9" and python_version < "3.13" +llvmlite==0.43.0 ; python_version >= "3.9" and python_version < "3.13" +markupsafe==2.1.5 ; python_version >= "3.9" and python_version < "3.13" +matplotlib==3.9.1.post1 ; python_version >= "3.9" and python_version < "3.13" +minisom==2.3.2 ; python_version >= "3.9" and python_version < "3.13" +mpmath==1.3.0 ; python_version >= "3.9" and python_version < "3.13" +multidict==6.0.5 ; python_version >= "3.9" and python_version < "3.13" +natsort==8.4.0 ; python_version >= "3.9" and python_version < "3.13" +networkx==3.2.1 ; python_version >= "3.9" and python_version < "3.13" +numba==0.60.0 ; python_version >= "3.9" and python_version < "3.13" +numpy==1.26.4 ; python_version >= "3.9" and python_version < "3.13" +nvidia-cublas-cu12==12.1.3.1 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cuda-cupti-cu12==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cuda-nvrtc-cu12==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cuda-runtime-cu12==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cudnn-cu12==9.1.0.70 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cufft-cu12==11.0.2.54 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-curand-cu12==10.3.2.106 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cusolver-cu12==11.4.5.107 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-cusparse-cu12==12.1.0.106 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-nccl-cu12==2.20.5 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-nvjitlink-cu12==12.6.20 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +nvidia-nvtx-cu12==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version >= "3.9" and python_version < "3.13" +packaging==24.1 ; python_version >= "3.9" and python_version < "3.13" +pandas==2.2.2 ; python_version >= "3.9" and python_version < "3.13" +patsy==0.5.6 ; python_version >= "3.9" and python_version < "3.13" +phenograph==1.5.7 ; python_version >= "3.9" and python_version < "3.13" +pillow==10.4.0 ; python_version >= "3.9" and python_version < "3.13" +psutil==6.0.0 ; python_version >= "3.9" and python_version < "3.13" +pynndescent==0.5.13 ; python_version >= "3.9" and python_version < "3.13" +pyparsing==3.1.2 ; python_version >= "3.9" and python_version < "3.13" +python-dateutil==2.9.0.post0 ; python_version >= "3.9" and python_version < "3.13" +pytorch-lightning==2.4.0 ; python_version >= "3.9" and 
python_version < "3.13" +pytz==2024.1 ; python_version >= "3.9" and python_version < "3.13" +pyyaml==6.0.2 ; python_version >= "3.9" and python_version < "3.13" +scanpy==1.10.2 ; python_version >= "3.9" and python_version < "3.13" +scikit-learn==1.5.1 ; python_version >= "3.9" and python_version < "3.13" +scipy==1.13.1 ; python_version >= "3.9" and python_version < "3.13" +seaborn==0.13.2 ; python_version >= "3.9" and python_version < "3.13" +session-info==1.0.0 ; python_version >= "3.9" and python_version < "3.13" +setuptools==72.1.0 ; python_version >= "3.9" and python_version < "3.13" +six==1.16.0 ; python_version >= "3.9" and python_version < "3.13" +statsmodels==0.14.2 ; python_version >= "3.9" and python_version < "3.13" +stdlib-list==0.10.0 ; python_version >= "3.9" and python_version < "3.13" +sympy==1.13.2 ; python_version >= "3.9" and python_version < "3.13" +texttable==1.7.0 ; python_version >= "3.9" and python_version < "3.13" +threadpoolctl==3.5.0 ; python_version >= "3.9" and python_version < "3.13" +torch==2.4.0 ; python_version >= "3.9" and python_version < "3.13" +torchmetrics==1.4.1 ; python_version >= "3.9" and python_version < "3.13" +tqdm==4.66.5 ; python_version >= "3.9" and python_version < "3.13" +triton==3.0.0 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version < "3.13" and python_version >= "3.9" +typing-extensions==4.12.2 ; python_version >= "3.9" and python_version < "3.13" +tzdata==2024.1 ; python_version >= "3.9" and python_version < "3.13" +umap-learn==0.5.6 ; python_version >= "3.9" and python_version < "3.13" +yarl==1.9.4 ; python_version >= "3.9" and python_version < "3.13" +zipp==3.20.0 ; python_version >= "3.9" and python_version < "3.10" diff --git a/starling/utility.py b/starling/utility.py index b86c988..6bb449d 100644 --- a/starling/utility.py +++ b/starling/utility.py @@ -9,6 +9,8 @@ import scanpy.external as sce import torch +# patch outdated flowsom dependencies +pd.DataFrame.as_matrix = pd.DataFrame.to_numpy collections.MutableMapping = abc.MutableMapping from flowsom import flowsom from scanpy import AnnData @@ -107,7 +109,7 @@ def init_clustering( elif initial_clustering_method == "FS": ## needs to output to csv first # ofn = OPATH + "fs_" + ONAME + ".csv" - pd.DataFrame(X).to_csv("fs.csv") + pd.DataFrame(adata.X).to_csv("fs.csv") fsom = flowsom("fs.csv", if_fcs=False, if_drop=True, drop_col=["Unnamed: 0"]) fsom.som_mapping(
Error when using init_clustering with FlowSOM method

I get this error when using `init_clustering(initial_clustering_method="FS")`:

```
Traceback (most recent call last):
  File "/ddn_exa/campbell/dchan/starling/run_starling.py", line 400, in <module>
    main()
  File "/ddn_exa/campbell/dchan/starling/run_starling.py", line 130, in main
    starling_adata = utility.init_clustering(cluster_type, sc_expr_subset_adata, k=leiden_resolution, seed=seed)
  File "/ddn_exa/campbell/dchan/starling/starling/utility.py", line 105, in init_clustering
    pd.DataFrame(X).to_csv("fs.csv")
NameError: name 'X' is not defined
```

Presumably, the error is that it should be `adata.X` instead of just `X`.
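A minimal sketch of the corrected FlowSOM branch implied by the patch above. The wrapper function `run_flowsom` is hypothetical; `adata` is assumed to be an `AnnData` object, and the `flowsom(...)` call arguments are copied from `starling/utility.py`:

```python
import pandas as pd
from flowsom import flowsom

def run_flowsom(adata):
    # The bug: the original line referenced the undefined name `X`;
    # the fix writes the expression matrix `adata.X` instead.
    pd.DataFrame(adata.X).to_csv("fs.csv")
    # Same arguments as the surrounding code in starling/utility.py.
    return flowsom("fs.csv", if_fcs=False, if_drop=True, drop_col=["Unnamed: 0"])
```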
2024-08-12T16:27:10
0.0
[]
[]
camlab-bioml/starling
camlab-bioml__starling-45
a8e9266ee078ab5ad6602a2d751d51765c4f7e29
diff --git a/Dockerfile b/Dockerfile index 3390d60..a8489a7 100644 --- a/Dockerfile +++ b/Dockerfile @@ -35,6 +35,10 @@ RUN python3 -m venv $VIRTUAL_ENV && \ USER $USERNAME -COPY --chown=${USER_UID}:${USER_GID} . . +# prevent full rebuilds every time code changes +COPY --chown=${USER_UID}:${USER_GID} pyproject.toml poetry.lock README.md /code/ +COPY --chown=${USER_UID}:${USER_GID} starling/__init__.py /code/starling/__init__.py RUN poetry install --with docs,dev + +COPY . . diff --git a/starling/starling.py b/starling/starling.py index 06aabea..1fd0d29 100644 --- a/starling/starling.py +++ b/starling/starling.py @@ -160,8 +160,8 @@ def prepare_data(self) -> None: self.adata.uns["init_cell_size_variances"] = np.array(init_sv) else: # init_cell_size_centroids = None; init_cell_size_variances = None - self.adata.varm["init_cell_size_centroids"] = None - self.adata.varm["init_cell_size_variances"] = None + self.adata.uns["init_cell_size_centroids"] = None + self.adata.uns["init_cell_size_variances"] = None self.train_df = utility.ConcatDataset([self.X, tr_fy, tr_fl]) # model_params = utility.model_paramters(self.init_e, self.init_v, self.init_s, self.init_sv) diff --git a/starling/utility.py b/starling/utility.py index 5fbaecb..b86c988 100644 --- a/starling/utility.py +++ b/starling/utility.py @@ -38,7 +38,7 @@ def __len__(self): def init_clustering( initial_clustering_method: Literal["User", "KM", "GMM", "FS", "PG"], adata: AnnData, - k: Union[int, None], + k: Union[int, None] = None, labels: Optional[np.ndarray] = None, ) -> AnnData: """Compute initial cluster centroids, variances & labels @@ -49,7 +49,8 @@ def init_clustering( ``FS`` (FlowSOM), ``User`` (user-provided), or ``PG`` (PhenoGraph). :param k: The number of clusters, must be ``n_components`` when ``initial_clustering_method`` is ``GMM`` (required), ``k`` when ``initial_clustering_method`` is ``KM`` (required), ``k`` when ``initial_clustering_method`` - is ``FS`` (required), ``?`` when ``initial_clustering_method`` is ``PG`` (optional) + is ``FS`` (required), ``?`` when ``initial_clustering_method`` is ``PG`` (optional), and can be ommited when + ``initial_clustering_method`` is "User", because user will be passing in their own labels. :param labels: optional, user-provided labels :raises: ValueError @@ -67,6 +68,11 @@ def init_clustering( "k cannot be ommitted for KMeans, FlowSOM, or Gaussian Mixture" ) + if initial_clustering_method == "User" and labels is None: + raise ValueError( + "labels must be provided when initial_clustering_method is set to 'User'" + ) + if initial_clustering_method == "KM": kms = KMeans(k).fit(adata.X) init_l = kms.labels_ @@ -90,12 +96,13 @@ def init_clustering( else: init_l = labels - k = len(np.unique(init_l)) + classes = np.unique(init_l) + k = len(classes) init_e = np.zeros((k, adata.X.shape[1])) init_ev = np.zeros((k, adata.X.shape[1])) - for c in range(k): - init_e[c, :] = adata.X[init_l == c].mean(0) - init_ev[c, :] = adata.X[init_l == c].var(0) + for i, c in enumerate(classes): + init_e[i, :] = adata.X[init_l == c].mean(0) + init_ev[i, :] = adata.X[init_l == c].var(0) elif initial_clustering_method == "FS": ## needs to output to csv first
init_clustering with "User" specified doesn't work if user-supplied clusters are not consecutive integers

When `utility.init_clustering` is called with `initial_clustering_method == "User"`, it assumes that the user-supplied clusters in `labels` are consecutive integers, which then breaks at `init_e[c, :] = adata.X[init_l == c].mean(0)` if, e.g., `init_l` contains strings. The solution isn't as simple as casting `labels` to `int`, since there is no guarantee the labels are either integers or consecutive. Would recommend something like

```python
cluster_idx_map = dict(enumerate(unique_labels))
```

then doing

```python
adata.X[init_l == cluster_idx_map[c]]
```
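This is essentially what the patch above does. A minimal sketch of the enumerate-based remapping, assuming `adata.X` is the expression matrix and `init_l` holds the user-provided labels (names follow the diff in `starling/utility.py`):

```python
import numpy as np

classes = np.unique(init_l)                 # unique labels; need not be integers
k = len(classes)
init_e = np.zeros((k, adata.X.shape[1]))    # per-cluster means
init_ev = np.zeros((k, adata.X.shape[1]))   # per-cluster variances
for i, c in enumerate(classes):
    # i is the consecutive row index; c is the original (possibly string) label
    init_e[i, :] = adata.X[init_l == c].mean(0)
    init_ev[i, :] = adata.X[init_l == c].var(0)
```

Because `enumerate` supplies the consecutive row index while `c` is compared against the raw labels, the consecutive-integer assumption disappears.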
2024-06-05T17:14:57
0.0
[]
[]
voxel51/fiftyone-brain
voxel51__fiftyone-brain-142
01db15893a68bce341e8f67efc14a404861cecd9
diff --git a/fiftyone/brain/config.py b/fiftyone/brain/config.py index a64ab5a6..dbaa7a91 100644 --- a/fiftyone/brain/config.py +++ b/fiftyone/brain/config.py @@ -23,6 +23,12 @@ class BrainConfig(EnvConfig): "qdrant": { "config_cls": "fiftyone.brain.internal.core.qdrant.QdrantSimilarityConfig", }, + "milvus": { + "config_cls": "fiftyone.brain.internal.core.milvus.MilvusSimilarityConfig", + }, + "lancedb": { + "config_cls": "fiftyone.brain.internal.core.lancedb.LanceDBSimilarityConfig", + }, } def __init__(self, d=None): diff --git a/fiftyone/brain/internal/core/lancedb.py b/fiftyone/brain/internal/core/lancedb.py new file mode 100644 index 00000000..f6ceda8c --- /dev/null +++ b/fiftyone/brain/internal/core/lancedb.py @@ -0,0 +1,545 @@ +""" +LanceDB similarity backend. + +| Copyright 2017-2023, Voxel51, Inc. +| `voxel51.com <https://voxel51.com/>`_ +| +""" +import logging + +import numpy as np + +import eta.core.utils as etau + +import fiftyone.core.utils as fou +import fiftyone.brain.internal.core.utils as fbu +from fiftyone.brain.similarity import ( + SimilarityConfig, + Similarity, + SimilarityIndex, +) + +lancedb = fou.lazy_import("lancedb") +pa = fou.lazy_import("pyarrow") + + +_SUPPORTED_METRICS = { + "cosine": "cosine", + "euclidean": "l2", +} + +logger = logging.getLogger(__name__) + + +class LanceDBSimilarityConfig(SimilarityConfig): + """Configuration for a LanceDB similarity instance. + + Args: + embeddings_field (None): the name of the embeddings field to use + model (None): the name of the model to use + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known + patches_field (None): the sample field defining the patches being + analyzed, if any + supports_prompts (None): whether this run supports prompt queries + table_name (None): the name of the LanceDB table to use. If none is + provided, a new table will be created + metric ("cosine"): the embedding distance metric to use when creating a + new index. Supported values are ``("cosine", "euclidean")`` + uri ("/tmp/lancedb"): the database URI to use + **kwargs: keyword arguments for :class:`SimilarityConfig` + """ + + def __init__( + self, + embeddings_field=None, + model=None, + patches_field=None, + supports_prompts=None, + table_name=None, + metric="cosine", + uri="/tmp/lancedb", + **kwargs, + ): + if metric is not None and metric not in _SUPPORTED_METRICS: + raise ValueError( + "Unsupported metric '%s'. Supported values are %s" + % (metric, tuple(_SUPPORTED_METRICS.keys())) + ) + + super().__init__( + embeddings_field=embeddings_field, + model=model, + patches_field=patches_field, + supports_prompts=supports_prompts, + **kwargs, + ) + + self.table_name = table_name + self.metric = metric + + # store privately so these aren't serialized + self._uri = fou.normalize_path(uri) + + @property + def method(self): + """The name of the similarity backend.""" + return "lancedb" + + @property + def uri(self): + return self._uri + + @uri.setter + def uri(self, value): + self._uri = value + + @property + def max_k(self): + """A maximum k value for nearest neighbor queries, or None if there is + no limit. 
+ """ + return None + + @property + def supports_least_similarity(self): + """Whether this backend supports least similarity queries.""" + return False + + @property + def supported_aggregations(self): + return ("mean",) + + def load_credentials(self, uri=None): + self._load_parameters(uri=uri) + + +class LanceDBSimilarity(Similarity): + """LanceDB similarity factory. + + Args: + config: a :class:`LanceDBSimilarityConfig` + """ + + def ensure_requirements(self): + fou.ensure_package("lancedb") + + def ensure_usage_requirements(self): + fou.ensure_package("lancedb") + + def initialize(self, samples, brain_key): + return LanceDBSimilarityIndex( + samples, self.config, brain_key, backend=self + ) + + +class LanceDBSimilarityIndex(SimilarityIndex): + """Class for interacting with LanceDB similarity indexes. + + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`LanceDBSimilarityConfig` used + brain_key: the brain key + backend (None): a :class:`LanceDBSimilarity` instance + """ + + def __init__(self, samples, config, brain_key, backend=None): + super().__init__(samples, config, brain_key, backend=backend) + self._table = None + self._db = None + self._initialize() + + def _initialize(self): + try: + db = lancedb.connect(self.config.uri) + except Exception as e: + raise ValueError( + "Failed to connect to LanceDB backend at URI '%s'. Refer to " + "https://docs.voxel51.com/integrations/lancedb.html for more " + "information" % self.config.uri + ) from e + + table_names = db.table_names() + + if self.config.table_name is None: + root = "fiftyone-" + fou.to_slug(self.samples._root_dataset.name) + table_name = fbu.get_unique_name(root, table_names) + + self.config.table_name = table_name + self.save_config() + + if self.config.table_name in table_names: + table = db.open_table(self.config.table_name) + else: + table = None + + self._db = db + self._table = table + + @property + def table(self): + """The ``lancedb.LanceTable`` instance for this index.""" + return self._table + + @property + def total_index_size(self): + if self._table is None: + return None + + return len(self._table) + + def add_to_index( + self, + embeddings, + sample_ids, + label_ids=None, + overwrite=True, + allow_existing=True, + warn_existing=False, + reload=True, + ): + """Adds the given embeddings to the index. 
+ + Args: + embeddings: a ``num_embeddings x num_dims`` array of embeddings + sample_ids: a ``num_embeddings`` array of sample IDs + label_ids (None): a ``num_embeddings`` array of label IDs, if + applicable + overwrite (True): whether to replace (True) or ignore (False) + existing embeddings with the same sample/label IDs + allow_existing (True): whether to ignore (True) or raise an error + (False) when ``overwrite`` is False and a provided ID already + exists in the + warn_missing (False): whether to log a warning if an embedding is + not added to the index because its ID already exists + reload (True): whether to call :meth:`reload` to refresh the + current view after the update + """ + if self._table is None: + pa_table = pa.Table.from_arrays( + [[], [], []], names=["id", "sample_id", "vector"] + ) + else: + pa_table = self._table.to_arrow() + + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if warn_existing or not allow_existing or not overwrite: + existing_ids = set(pa_table["id"].to_pylist()) & set(ids) + num_existing = len(existing_ids) + + if num_existing > 0: + if not allow_existing: + raise ValueError( + "Found %d IDs (eg %s) that already exist in the index" + % (num_existing, next(iter(existing_ids))) + ) + + if warn_existing: + if overwrite: + logger.warning( + "Overwriting %d IDs that already exist in the " + "index", + num_existing, + ) + else: + logger.warning( + "Skipping %d IDs that already exist in the index", + num_existing, + ) + else: + existing_ids = set() + + if existing_ids and not overwrite: + del_inds = [i for i, _id in enumerate(ids) if _id in existing_ids] + + embeddings = np.delete(embeddings, del_inds) + sample_ids = np.delete(sample_ids, del_inds) + if label_ids is not None: + label_ids = np.delete(label_ids, del_inds) + + if label_ids is not None: + ids = list(label_ids) + else: + ids = list(sample_ids) + + dim = embeddings.shape[1] + if self._table: # update the table + prev_embeddings = np.concatenate( + pa_table["vector"].to_numpy() + ).reshape(-1, dim) + embeddings = np.concatenate([prev_embeddings, embeddings]) + ids = pa_table["id"].to_pylist() + ids + sample_ids = pa_table["sample_id"].to_pylist() + sample_ids + + embeddings = pa.array(embeddings.reshape(-1), type=pa.float32()) + embeddings = pa.FixedSizeListArray.from_arrays(embeddings, dim) + sample_ids = list(sample_ids) + pa_table = pa.Table.from_arrays( + [ids, sample_ids, embeddings], names=["id", "sample_id", "vector"] + ) + self._table = self._db.create_table( + self.config.table_name, pa_table, mode="overwrite" + ) + + if reload: + self.reload() + + def remove_from_index( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + reload=True, + ): + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if not allow_missing or warn_missing: + existing_ids = self._index.fetch(ids).vectors.keys() + missing_ids = set(existing_ids) - set(ids) + num_missing = len(missing_ids) + + if num_missing > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that are not present in the " + "index" % (num_missing, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Ignoring %d IDs that are not present in the index", + num_missing, + ) + + df = self._table.to_pandas() + df = df[~df["id"].isin(ids)] + self._table = self._db.create_table( + self.config.table_name, df, mode="overwrite" + ) + + if reload: + self.reload() + + def get_embeddings( + self, + sample_ids=None, + label_ids=None, + 
allow_missing=True, + warn_missing=False, + ): + """Retrieves the embeddings for the given IDs from the index. + + If no IDs are provided, the entire index is returned. + + Args: + sample_ids (None): a sample ID or list of sample IDs for which to + retrieve embeddings + label_ids (None): a label ID or list of label IDs for which to + retrieve embeddings + allow_missing (True): whether to allow the index to not contain IDs + that you provide (True) or whether to raise an error in this + case (False) + warn_missing (False): whether to log a warning if the index does + not contain IDs that you provide + + Returns: + a tuple of: + + - a ``num_embeddings x num_dims`` array of embeddings + - a ``num_embeddings`` array of sample IDs + - a ``num_embeddings`` array of label IDs, if applicable, or else + ``None`` + """ + if label_ids is not None: + if self.config.patches_field is None: + raise ValueError("This index does not support label IDs") + + if sample_ids is not None: + logger.warning( + "Ignoring sample IDs when label IDs are provided" + ) + + pd_table = self._table.to_pandas() + + found_embeddings = [] + found_sample_ids = [] + found_label_ids = [] + missing_ids = [] + + if sample_ids is not None and self.config.patches_field is not None: + sample_ids = ( + sample_ids if isinstance(sample_ids, list) else [sample_ids] + ) + df = pd_table.set_index("sample_id") + for sample_id in sample_ids: + if sample_id in df.index: + found_embeddings.append(df.loc[sample_id]["vector"]) + found_sample_ids.append(sample_id) + found_label_ids.append(df.loc[sample_id]["id"]) + else: + missing_ids.append(sample_id) + + elif self.config.patches_field is not None: + df = pd_table.set_index("id") + if label_ids is None: + label_ids = list(df.index) + label_ids = ( + label_ids if isinstance(label_ids, list) else [label_ids] + ) + for label_id in label_ids: + if label_id in df.index: + found_embeddings.append(df.loc[label_id]["vector"]) + found_sample_ids.append(df.loc[label_id]["sample_id"]) + found_label_ids.append(label_id) + else: + missing_ids.append(label_id) + else: + df = pd_table.set_index("sample_id") + + if sample_ids is None: + sample_ids = list(df.index) + else: + sample_ids = ( + sample_ids + if isinstance(sample_ids, list) + else [sample_ids] + ) + for sample_id in sample_ids: + if sample_id in df.index: + found_embeddings.append(df.loc[sample_id]["vector"]) + found_sample_ids.append(sample_id) + else: + missing_ids.append(sample_id) + + num_missing_ids = len(missing_ids) + if num_missing_ids > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that do not exist in the index" + % (num_missing_ids, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Skipping %d IDs that do not exist in the index", + num_missing_ids, + ) + + embeddings = np.array(found_embeddings) + sample_ids = np.array(found_sample_ids) + if label_ids is not None: + label_ids = np.array(found_label_ids) + + return embeddings, sample_ids, label_ids + + def cleanup(self): + if self._db is not None: + for tbl in [ + self.config.table_name, + self.config.table_name + "_filter", + ]: + if tbl in self._db.table_names(): + self._db.drop_table(tbl) + self._table = None + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + if query is None: + raise ValueError("LanceDB does not support full index neighbors") + + if aggregation not in (None, "mean"): + raise ValueError( + f"LanceDB does not support {aggregation} aggregation" + ) + + if k is None: + k 
= len(self._table.to_arrow()) + + query = self._parse_neighbors_query(query) + if aggregation == "mean" and query.ndim == 2: + query = query.mean(axis=0) + + single_query = query.ndim == 1 + if single_query: + query = [query] + + if self.config.patches_field is not None: + index_ids = list(self.current_label_ids) + else: + index_ids = list(self.current_sample_ids) + + ids = [] + dists = [] + df = self._table.to_pandas().set_index("id") + df = df.loc[index_ids] + tbl_filtered = self._db.create_table( + self.config.table_name + "_filter", df, mode="overwrite" + ) + + for q in query: + results = tbl_filtered.search(q) + if self.config.metric is not None: + results = results.metric( + _SUPPORTED_METRICS[self.config.metric] + ) + + results = results.limit(k).to_df() + if reverse: + results = results.iloc[::-1] + + ids.append(results.id.tolist()) + if return_dists: + dists.append(results.score.tolist()) + + if single_query: + ids = ids[0] + if return_dists: + dists = dists[0] + + if return_dists: + return ids, dists + + return ids + + def _parse_neighbors_query(self, query): + if etau.is_str(query): + query_ids = [query] + single_query = True + else: + query = np.asarray(query) + + # Query by vector(s) + if np.issubdtype(query.dtype, np.number): + return query + + query_ids = list(query) + single_query = False + + # Query by ID(s) + embeddings = ( + self._table.to_pandas().set_index("id").loc[query_ids]["vector"] + ) + query = np.array([emb for emb in embeddings]) + + if single_query: + query = query[0, :] + + return query + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + return cls(samples, config, brain_key) diff --git a/fiftyone/brain/internal/core/milvus.py b/fiftyone/brain/internal/core/milvus.py new file mode 100644 index 00000000..ede6e677 --- /dev/null +++ b/fiftyone/brain/internal/core/milvus.py @@ -0,0 +1,642 @@ +""" +Milvus similarity backend. + +| Copyright 2017-2023, Voxel51, Inc. +| `voxel51.com <https://voxel51.com/>`_ +| +""" +import logging + +import numpy as np +from uuid import uuid4 + +import eta.core.utils as etau + +import fiftyone.core.utils as fou +from fiftyone.brain.similarity import ( + SimilarityConfig, + Similarity, + SimilarityIndex, +) +import fiftyone.brain.internal.core.utils as fbu + +pymilvus = fou.lazy_import("pymilvus") + + +logger = logging.getLogger(__name__) + +_SUPPORTED_METRICS = { + "dotproduct": "IP", + "euclidean": "L2", +} + + +class MilvusSimilarityConfig(SimilarityConfig): + """Configuration for the Milvus similarity backend. + + Args: + embeddings_field (None): the sample field containing the embeddings, + if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known + patches_field (None): the sample field defining the patches being + analyzed, if any + supports_prompts (None): whether this run supports prompt queries + collection_name (None): the name of a Milvus collection to use or + create. If none is provided, a new collection will be created + metric ("dotproduct"): the embedding distance metric to use when + creating a new index. Supported values are + ``("dotproduct", "euclidean")`` + consistency_level ("Session"): the consistency level to use. 
Supported + values are ``("Session", "Strong", "Bounded", "Eventually")`` + uri (None): a full Milvus server address to use + user (None): a username to use + password (None): a password to use + """ + + def __init__( + self, + embeddings_field=None, + model=None, + patches_field=None, + supports_prompts=None, + collection_name=None, + metric="dotproduct", + consistency_level="Session", + uri=None, + user=None, + password=None, + **kwargs, + ): + if metric is not None and metric not in _SUPPORTED_METRICS: + raise ValueError( + "Unsupported metric '%s'. Supported values are %s" + % (metric, tuple(_SUPPORTED_METRICS.keys())) + ) + + super().__init__( + embeddings_field=embeddings_field, + model=model, + patches_field=patches_field, + supports_prompts=supports_prompts, + **kwargs, + ) + + self.collection_name = collection_name + self.metric = metric + self.consistency_level = consistency_level + + # store privately so these aren't serialized + self._uri = uri + self._user = user + self._password = password + + @property + def method(self): + return "milvus" + + @property + def uri(self): + return self._uri + + @uri.setter + def uri(self, value): + self._uri = value + + @property + def user(self): + return self._user + + @user.setter + def user(self, value): + self._user = value + + @property + def password(self): + return self._password + + @password.setter + def password(self, value): + self._password = value + + @property + def max_k(self): + return 16384 + + @property + def supports_least_similarity(self): + return False + + @property + def supported_aggregations(self): + return ("mean",) + + @property + def index_params(self): + return { + "metric_type": _SUPPORTED_METRICS[self.metric], + "index_type": "HNSW", + "params": {"M": 8, "efConstruction": 64}, + } + + @property + def search_params(self): + return { + "HNSW": { + "metric_type": _SUPPORTED_METRICS[self.metric], + "params": {"ef": 10}, + }, + } + + def load_credentials(self, uri=None, user=None, password=None): + self._load_parameters(uri=uri, user=user, password=password) + + +class MilvusSimilarity(Similarity): + """Milvus similarity factory. + + Args: + config: a :class:`MilvusSimilarityConfig` + """ + + def ensure_requirements(self): + fou.ensure_package("pymilvus") + + def ensure_usage_requirements(self): + fou.ensure_package("pymilvus") + + def initialize(self, samples, brain_key): + return MilvusSimilarityIndex( + samples, self.config, brain_key, backend=self + ) + + +class MilvusSimilarityIndex(SimilarityIndex): + """Class for interacting with Milvus similarity indexes. + + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`MilvusSimilarityConfig` used + brain_key: the brain key + backend (None): a :class:`MilvusSimilarity` instance + """ + + def __init__(self, samples, config, brain_key, backend=None): + super().__init__(samples, config, brain_key, backend=backend) + self._alias = None + self._collection = None + self._initialize() + + def _initialize(self): + kwargs = {} + if self.config.uri: + kwargs["uri"] = self.config.uri + + if self.config.user: + kwargs["user"] = self.config.user + + if self.config.password: + kwargs["password"] = self.config.password + + alias = uuid4().hex if kwargs else "default" + + try: + pymilvus.connections.connect(alias=alias, **kwargs) + except pymilvus.MilvusException as e: + raise ValueError( + "Failed to connect to Milvus backend at URI '%s'. 
Refer to " + "https://docs.voxel51.com/integrations/milvus.html for more " + "information" % self.config.uri + ) from e + + collection_names = pymilvus.utility.list_collections(using=alias) + + if self.config.collection_name is None: + # Milvus only supports numbers, letters and underscores + root = "fiftyone-" + fou.to_slug(self.samples._root_dataset.name) + root = root.replace("-", "_") + collection_name = fbu.get_unique_name(root, collection_names) + collection_name = collection_name.replace("-", "_") + + self.config.collection_name = collection_name + self.save_config() + + if self.config.collection_name in collection_names: + collection = pymilvus.Collection( + self.config.collection_name, using=alias + ) + collection.load() + else: + collection = None + + self._alias = alias + self._collection = collection + + def _create_collection(self, dimension): + schema = pymilvus.CollectionSchema( + [ + pymilvus.FieldSchema( + "pk", + pymilvus.DataType.VARCHAR, + is_primary=True, + auto_id=False, + max_length=64000, + ), + pymilvus.FieldSchema( + "vector", pymilvus.DataType.FLOAT_VECTOR, dim=dimension + ), + pymilvus.FieldSchema( + "sample_id", pymilvus.DataType.VARCHAR, max_length=64000 + ), + ] + ) + + collection = pymilvus.Collection( + self.config.collection_name, + schema, + consistency_level=self.config.consistency_level, + using=self._alias, + ) + collection.create_index( + "vector", index_params=self.config.index_params + ) + collection.load() + + self._collection = collection + + @property + def collection(self): + """The ``pymilvus.Collection`` instance for this index.""" + return self._collection + + @property + def total_index_size(self): + if self._collection is None: + return None + + return self._collection.num_entities + + def add_to_index( + self, + embeddings, + sample_ids, + label_ids=None, + overwrite=True, + allow_existing=True, + warn_existing=False, + reload=True, + batch_size=100, + ): + if self._collection is None: + self._create_collection(embeddings.shape[1]) + + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if warn_existing or not allow_existing or not overwrite: + existing_ids = self._get_existing_ids(ids) + num_existing = len(existing_ids) + + if num_existing > 0: + if not allow_existing: + raise ValueError( + "Found %d IDs (eg %s) that already exist in the index" + % (num_existing, next(iter(existing_ids))) + ) + + if warn_existing: + if overwrite: + logger.warning( + "Overwriting %d IDs that already exist in the " + "index", + num_existing, + ) + else: + logger.warning( + "Skipping %d IDs that already exist in the index", + num_existing, + ) + else: + existing_ids = set() + + if existing_ids and not overwrite: + del_inds = [i for i, _id in enumerate(ids) if _id in existing_ids] + embeddings = np.delete(embeddings, del_inds) + sample_ids = np.delete(sample_ids, del_inds) + if label_ids is not None: + label_ids = np.delete(label_ids, del_inds) + + elif existing_ids and overwrite: + self._delete_ids(existing_ids) + + embeddings = [e.tolist() for e in embeddings] + sample_ids = list(sample_ids) + ids = list(ids) + + for _embeddings, _ids, _sample_ids in zip( + fou.iter_batches(embeddings, batch_size), + fou.iter_batches(ids, batch_size), + fou.iter_batches(sample_ids, batch_size), + ): + insert_data = [ + list(_ids), + list(_embeddings), + list(_sample_ids), + ] + self._collection.insert(insert_data) + + self._collection.flush() + + if reload: + self.reload() + + def _get_existing_ids(self, ids): + ids = ['"' + str(entry) + '"' for entry 
in ids] + expr = f"""pk in [{','.join(ids)}]""" + return self._collection.query(expr) + + def _delete_ids(self, ids): + ids = ['"' + str(entry) + '"' for entry in ids] + expr = f"""pk in [{','.join(ids)}]""" + self._collection.delete(expr) + self._collection.flush() + + def _get_embeddings(self, ids): + ids = ['"' + str(entry) + '"' for entry in ids] + expr = f"""pk in [{','.join(ids)}]""" + return self._collection.query( + expr, output_fields=["pk", "sample_id", "vector"] + ) + + def remove_from_index( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + reload=True, + ): + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if not allow_missing or warn_missing: + existing_ids = self._get_existing_ids(ids) + missing_ids = set(existing_ids) - set(ids) + num_missing = len(missing_ids) + + if num_missing > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that are not present in the " + "index" % (num_missing, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Ignoring %d IDs that are not present in the index", + num_missing, + ) + + self._delete_ids(ids=ids) + + if reload: + self.reload() + + def get_embeddings( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + ): + if label_ids is not None: + if self.config.patches_field is None: + raise ValueError("This index does not support label IDs") + + if sample_ids is not None: + logger.warning( + "Ignoring sample IDs when label IDs are provided" + ) + + if sample_ids is not None and self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_sample_ids(sample_ids) + elif self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_label_ids(label_ids) + else: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_sample_embeddings(sample_ids) + + num_missing_ids = len(missing_ids) + if num_missing_ids > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that do not exist in the index" + % (num_missing_ids, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Skipping %d IDs that do not exist in the index", + num_missing_ids, + ) + + embeddings = np.array(embeddings) + sample_ids = np.array(sample_ids) + if label_ids is not None: + label_ids = np.array(label_ids) + + return embeddings, sample_ids, label_ids + + def cleanup(self): + pymilvus.utility.drop_collection( + self.config.collection_name, using=self._alias + ) + self._collection = None + + def _get_sample_embeddings(self, sample_ids, batch_size=1000): + found_embeddings = [] + found_sample_ids = [] + + if sample_ids is None: + raise ValueError( + "Milvus does not support retrieving all vectors in an index" + ) + + for batch_ids in fou.iter_batches(sample_ids, batch_size): + response = self._get_embeddings(list(batch_ids)) + + for r in response: + found_embeddings.append(r["vector"]) + found_sample_ids.append(r["sample_id"]) + + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, None, missing_ids + + def _get_patch_embeddings_from_label_ids(self, label_ids, batch_size=1000): + found_embeddings = [] + found_sample_ids = [] + found_label_ids = [] + + if label_ids is None: + raise ValueError( + "Milvus does not support retrieving all vectors in an index" + ) + + for batch_ids in fou.iter_batches(label_ids, 
batch_size): + response = self._get_embeddings(list(batch_ids)) + + for r in response: + found_embeddings.append(r["vector"]) + found_sample_ids.append(r["sample_id"]) + found_label_ids.append(r["pk"]) + + missing_ids = list(set(label_ids) - set(found_label_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _get_patch_embeddings_from_sample_ids( + self, sample_ids, batch_size=100 + ): + found_embeddings = [] + found_sample_ids = [] + found_label_ids = [] + + query_vector = [0.0] * self._get_dimension() + top_k = min(batch_size, self.config.max_k) + + for batch_ids in fou.iter_batches(sample_ids, batch_size): + ids = ['"' + str(entry) + '"' for entry in batch_ids] + expr = f"""pk in [{','.join(ids)}]""" + response = self._collection.search( + data=[query_vector], + anns_field="vector", + param=self.config.search_params, + expr=expr, + limit=top_k, + ) + ids = [x.id for x in response[0]] + response = self._get_embeddings(ids) + for r in response: + found_embeddings.append(r["vector"]) + found_sample_ids.append(r["sample_id"]) + found_label_ids.append(r["pk"]) + + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + if query is None: + raise ValueError("Milvus does not support full index neighbors") + + if reverse is True: + raise ValueError( + "Milvus does not support least similarity queries" + ) + + if k is None or k > self.config.max_k: + raise ValueError("Milvus requires k<=%s" % self.config.max_k) + + if aggregation not in (None, "mean"): + raise ValueError("Unsupported aggregation '%s'" % aggregation) + + query = self._parse_neighbors_query(query) + if aggregation == "mean" and query.ndim == 2: + query = query.mean(axis=0) + + single_query = query.ndim == 1 + if single_query: + query = [query] + + if self.config.patches_field is not None: + index_ids = self.current_label_ids + else: + index_ids = self.current_sample_ids + + expr = ['"' + str(entry) + '"' for entry in index_ids] + expr = f"""pk in [{','.join(expr)}]""" + + ids = [] + dists = [] + for q in query: + response = self._collection.search( + data=[q.tolist()], + anns_field="vector", + limit=k, + expr=expr, + param=self.config.search_params, + ) + ids.append([r.id for r in response[0]]) + if return_dists: + dists.append([r.score for r in response[0]]) + + if single_query: + ids = ids[0] + if return_dists: + dists = dists[0] + + if return_dists: + return ids, dists + + return ids + + def _parse_neighbors_query(self, query): + if etau.is_str(query): + query_ids = [query] + single_query = True + else: + query = np.asarray(query) + + # Query by vector(s) + if np.issubdtype(query.dtype, np.number): + return query + + query_ids = list(query) + single_query = False + + # Query by ID(s) + response = self._get_embeddings(query_ids) + query = np.array([x["vector"] for x in response]) + + if single_query: + query = query[0, :] + + return query + + def _get_dimension(self): + if self._collection is None: + return None + + for field in self._collection.describe()["fields"]: + if field["name"] == "vector": + return field["params"]["dim"] + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + return cls(samples, config, brain_key) diff --git a/fiftyone/brain/internal/core/pinecone.py b/fiftyone/brain/internal/core/pinecone.py index f5c3f4b1..782d6876 100644 --- 
a/fiftyone/brain/internal/core/pinecone.py +++ b/fiftyone/brain/internal/core/pinecone.py @@ -483,7 +483,7 @@ def _get_patch_embeddings_from_sample_ids( found_sample_ids = [] found_label_ids = [] - query_vector = [0.0] * self._index.describe_index_stats().dimension + query_vector = [0.0] * self._get_dimension() top_k = min(batch_size, self.config.max_k) for batch_ids in fou.iter_batches(sample_ids, batch_size): @@ -584,6 +584,12 @@ def _parse_neighbors_query(self, query): return query + def _get_dimension(self): + if self._index is None: + return None + + return self._index.describe_index_stats().dimension + @classmethod def _from_dict(cls, d, samples, config, brain_key): return cls(samples, config, brain_key) diff --git a/fiftyone/brain/internal/core/qdrant.py b/fiftyone/brain/internal/core/qdrant.py index ecf58eb3..e56ac324 100644 --- a/fiftyone/brain/internal/core/qdrant.py +++ b/fiftyone/brain/internal/core/qdrant.py @@ -60,6 +60,8 @@ class QdrantSimilarityConfig(SimilarityConfig): use when creating a new index wal_config (None): an optional dict of WAL config parameters to use when creating a new index + url (None): a Qdrant server URL to use + api_key (None): a Qdrant API key to use """ def __init__( diff --git a/fiftyone/brain/internal/core/visualization.py b/fiftyone/brain/internal/core/visualization.py index e26a8b2d..3701da9a 100644 --- a/fiftyone/brain/internal/core/visualization.py +++ b/fiftyone/brain/internal/core/visualization.py @@ -68,7 +68,7 @@ def compute_visualization( model = None embeddings = None embeddings_field = None - num_dims = points.shape[1] + num_dims = _get_dimension(points) elif model is None and embeddings is None: model = _DEFAULT_MODEL if batch_size is None: @@ -303,3 +303,16 @@ def _parse_config( num_dims=num_dims, **kwargs, ) + + +def _get_dimension(points): + if isinstance(points, dict): + points = next(iter(points.values()), None) + + if isinstance(points, list): + points = next(iter(points), None) + + if points is None: + return 2 + + return points.shape[-1] diff --git a/setup.py b/setup.py index b05596a8..4c27920e 100644 --- a/setup.py +++ b/setup.py @@ -17,7 +17,7 @@ long_description += "\n## License\n\n" + fh.read() -VERSION = "0.12.0" +VERSION = "0.13.0" def get_version():
Handling dict points in `compute_visualization()`. Resolves https://github.com/voxel51/fiftyone/issues/3268
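The diff above also registers `milvus` and `lancedb` as similarity backends and forwards backend-specific parameters (such as the LanceDB `uri`) through `compute_similarity()`'s `**kwargs`. A hedged usage sketch under those assumptions; the dataset and brain key names are placeholders:

```python
import fiftyone.brain as fob
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# Route compute_similarity() through the newly registered LanceDB backend;
# `uri` is passed through **kwargs to LanceDBSimilarityConfig
index = fob.compute_similarity(
    dataset,
    brain_key="lancedb_index",
    backend="lancedb",
    uri="/tmp/lancedb",
)

# Sort the dataset by similarity to its first sample using the new index
view = dataset.sort_by_similarity(
    dataset.first().id, brain_key="lancedb_index", k=15
)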
2023-07-11T16:11:50
0.0
[]
[]
voxel51/fiftyone-brain
voxel51__fiftyone-brain-140
80cc85b4ac77c6173089b93e597c98290ebbc5aa
diff --git a/fiftyone/brain/internal/core/visualization.py b/fiftyone/brain/internal/core/visualization.py index e26a8b2d..3701da9a 100644 --- a/fiftyone/brain/internal/core/visualization.py +++ b/fiftyone/brain/internal/core/visualization.py @@ -68,7 +68,7 @@ def compute_visualization( model = None embeddings = None embeddings_field = None - num_dims = points.shape[1] + num_dims = _get_dimension(points) elif model is None and embeddings is None: model = _DEFAULT_MODEL if batch_size is None: @@ -303,3 +303,16 @@ def _parse_config( num_dims=num_dims, **kwargs, ) + + +def _get_dimension(points): + if isinstance(points, dict): + points = next(iter(points.values()), None) + + if isinstance(points, list): + points = next(iter(points), None) + + if points is None: + return 2 + + return points.shape[-1] diff --git a/setup.py b/setup.py index b05596a8..4c27920e 100644 --- a/setup.py +++ b/setup.py @@ -17,7 +17,7 @@ long_description += "\n## License\n\n" + fh.read() -VERSION = "0.12.0" +VERSION = "0.13.0" def get_version():
Handling dict points in `compute_visualization()`. Resolves https://github.com/voxel51/fiftyone/issues/3268
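Per the patch above, `compute_visualization()` now derives `num_dims` via a `_get_dimension()` helper instead of `points.shape[1]`, so pre-computed points may be passed as a dict keyed by sample ID. A minimal sketch of that call path, with the dataset and brain key as placeholders:

```python
import numpy as np

import fiftyone.brain as fob
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# Pre-computed 2D points keyed by sample ID -- the dict form that
# _get_dimension() now handles (points.shape[1] previously failed here)
points = {sample.id: np.random.rand(2) for sample in dataset}

results = fob.compute_visualization(
    dataset, points=points, brain_key="dict_points_viz"
)
```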
2023-07-06T19:07:58
0.0
[]
[]
voxel51/fiftyone-brain
voxel51__fiftyone-brain-135
740d4b016b64263c65bf57b90bf015f418df3c7f
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 64dab306..3dd0e53e 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -56,8 +56,8 @@ jobs: fail-fast: false matrix: python: - - 3.7 - - 3.9 + - 3.8 + - "3.10" steps: - name: Clone fiftyone-brain uses: actions/checkout@v1 @@ -124,13 +124,8 @@ jobs: env: RELEASE_TAG: ${{ github.ref }} run: | - if [[ $RELEASE_TAG =~ ^refs\/tags\/v.*-rc\..*$ ]]; then - echo "TWINE_PASSWORD=${{ secrets.FIFTYONE_TEST_PYPI_TOKEN }}" >> $GITHUB_ENV - echo "TWINE_REPOSITORY=testpypi" >> $GITHUB_ENV - else - echo "TWINE_PASSWORD=${{ secrets.FIFTYONE_PYPI_TOKEN }}" >> $GITHUB_ENV - echo "TWINE_REPOSITORY=pypi" >> $GITHUB_ENV - fi + echo "TWINE_PASSWORD=${{ secrets.FIFTYONE_PYPI_TOKEN }}" >> $GITHUB_ENV + echo "TWINE_REPOSITORY=pypi" >> $GITHUB_ENV - name: Upload to pypi env: TWINE_USERNAME: __token__ diff --git a/docs/dev_guide.md b/docs/dev_guide.md index 2f17eb78..f73154d2 100644 --- a/docs/dev_guide.md +++ b/docs/dev_guide.md @@ -71,4 +71,4 @@ from .hardness import compute_hardness ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/experiments/labelerror/README.md b/experiments/labelerror/README.md index 2b035d10..a36fea61 100644 --- a/experiments/labelerror/README.md +++ b/experiments/labelerror/README.md @@ -12,5 +12,5 @@ This code does not explicitly use eta or theta because it is intended as somethi ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/experiments/labelerror/plot_run_scalar.py b/experiments/labelerror/plot_run_scalar.py index e906d923..31ba8eba 100644 --- a/experiments/labelerror/plot_run_scalar.py +++ b/experiments/labelerror/plot_run_scalar.py @@ -6,7 +6,7 @@ It can also plot data-files that have multiple fields of results and select one of them to plot per file. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/experiments/labelerror/resnet_cifar10/labelerror.py b/experiments/labelerror/resnet_cifar10/labelerror.py index d4b33c32..3126c765 100644 --- a/experiments/labelerror/resnet_cifar10/labelerror.py +++ b/experiments/labelerror/resnet_cifar10/labelerror.py @@ -19,7 +19,7 @@ In addition, simple reporting on the accuracy of the identification of errors in the annotations -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/experiments/labelerror/resnet_cifar10/simple_resnet.py b/experiments/labelerror/resnet_cifar10/simple_resnet.py index 75acfab4..af605571 100644 --- a/experiments/labelerror/resnet_cifar10/simple_resnet.py +++ b/experiments/labelerror/resnet_cifar10/simple_resnet.py @@ -4,7 +4,7 @@ Original Implementation of this is from David Page's work on fast model training with resnets. <https://github.com/davidcpage/cifar10-fast/> -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. 
| `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/experiments/uniqueness/README.md b/experiments/uniqueness/README.md index de83ab36..34516a0f 100644 --- a/experiments/uniqueness/README.md +++ b/experiments/uniqueness/README.md @@ -8,4 +8,4 @@ Problem: Unlabeled ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/fiftyone/brain/__init__.py b/fiftyone/brain/__init__.py index 19a9ad55..154848df 100644 --- a/fiftyone/brain/__init__.py +++ b/fiftyone/brain/__init__.py @@ -4,24 +4,30 @@ See https://github.com/voxel51/fiftyone for more information. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ +import fiftyone.brain.config as _foc + from .similarity import ( + Similarity, SimilarityConfig, - SimilarityResults, + SimilarityIndex, ) from .visualization import ( VisualizationConfig, + VisualizationResults, UMAPVisualizationConfig, TSNEVisualizationConfig, PCAVisualizationConfig, ManualVisualizationConfig, - VisualizationResults, ) +brain_config = _foc.load_brain_config() + + def compute_hardness(samples, label_field, hardness_field="hardness"): """Adds a hardness field to each sample scoring the difficulty that the specified label field observed in classifying the sample. @@ -56,29 +62,26 @@ def compute_hardness(samples, label_field, hardness_field="hardness"): def compute_mistakenness( samples, pred_field, - label_field="ground_truth", + label_field, mistakenness_field="mistakenness", missing_field="possible_missing", spurious_field="possible_spurious", use_logits=False, copy_missing=False, ): - """Computes the mistakenness of the labels in the specified - ``label_field``, scoring the chance that the labels are incorrect. - - Mistakenness is computed based on the predictions in the ``pred_field``, - through either their ``confidence`` or ``logits`` attributes. This measure - can be used to detect things like annotation errors and unusually hard - samples. + """Computes the mistakenness (likelihood of being incorrect) of the labels + in ``label_field`` based on the predcted labels in ``pred_field``. - This method supports both classifications and detections/polylines. + Mistakenness is measured based on either the ``confidence`` or ``logits`` + of the predictions in ``pred_field``. This measure can be used to detect + things like annotation errors and unusually hard samples. For classifications, a ``mistakenness_field`` field is populated on each sample that quantifies the likelihood that the label in the ``label_field`` of that sample is incorrect. - For detections/polylines, the mistakenness of each object in - ``label_field`` is computed, using + For objects (detections, polylines, keypoints, etc), the mistakenness of + each object in ``label_field`` is computed, using :meth:`fiftyone.core.collections.SampleCollection.evaluate_detections` to locate corresponding objects in ``pred_field``. Three types of mistakes are identified: @@ -86,9 +89,9 @@ def compute_mistakenness( - **(Mistakes)** Objects in ``label_field`` with a match in ``pred_field`` are assigned a mistakenness value in their ``mistakenness_field`` that captures the likelihood that the class - label of the detection in ``label_field`` is a mistake. A + label of the object in ``label_field`` is a mistake. 
A ``mistakenness_field + "_loc"`` field is also populated that captures - the likelihood that the detection in ``label_field`` is a mistake due + the likelihood that the object in ``label_field`` is a mistake due to its localization (bounding box). - **(Missing)** Objects in ``pred_field`` with no matches in @@ -101,8 +104,7 @@ def compute_mistakenness( ``pred_field`` but which are likely to be incorrect will have their ``spurious_field`` attribute set to True. - In addition, for detections/polylines, the following sample-level fields - are populated: + In addition, for objects, the following sample-level fields are populated: - **(Mistakes)** The ``mistakenness_field`` of each sample is populated with the maximum mistakenness of the objects in ``label_field`` @@ -125,23 +127,25 @@ def compute_mistakenness( pred_field: the name of the predicted label field to use from each sample. Can be of type :class:`fiftyone.core.labels.Classification`, - :class:`fiftyone.core.labels.Classifications`, or - :class:`fiftyone.core.labels.Detections` - label_field ("ground_truth"): the name of the "ground truth" label - field that you want to test for mistakes with respect to the - predictions in ``pred_field``. Must have the same type as - ``pred_field`` + :class:`fiftyone.core.labels.Classifications`, + :class:`fiftyone.core.labels.Detections`, + :class:`fiftyone.core.labels.Polylines`, + :class:`fiftyone.core.labels.Keypoints`, or + :class:`fiftyone.core.labels.TemporalDetections` + label_field: the name of the "ground truth" label field that you want + to test for mistakes with respect to the predictions in + ``pred_field``. Must have the same type as ``pred_field`` mistakenness_field ("mistakenness"): the field name to use to store the mistakenness value for each sample missing_field ("possible_missing): the field in which to store - per-sample counts of potential missing detections/polylines + per-sample counts of potential missing objects spurious_field ("possible_spurious): the field in which to store - per-sample counts of potential spurious detections/polylines + per-sample counts of potential spurious objects use_logits (False): whether to use logits (True) or confidence (False) to compute mistakenness. Logits typically yield better results, when they are available - copy_missing (False): whether to copy predicted detections/polylines - that were deemed to be missing into ``label_field`` + copy_missing (False): whether to copy predicted objects that were + deemed to be missing into ``label_field`` """ import fiftyone.brain.internal.core.mistakenness as fbm @@ -192,17 +196,22 @@ def compute_uniqueness( :class:`fiftyone.core.labels.Polyline`, or :class:`fiftyone.core.labels.Polylines` field defining a region of interest within each image to use to compute uniqueness - embeddings (None): pre-computed embeddings to use. Can be any of the - following: + embeddings (None): if no ``model`` is provided, this argument specifies + pre-computed embeddings to use, which can be any of the following: - a ``num_samples x num_dims`` array of embeddings - if ``roi_field`` is specified, a dict mapping sample IDs to ``num_patches x num_dims`` arrays of patch embeddings - the name of a dataset field containing the embeddings to use + If a ``model`` is provided, this argument specifies the name of a + field in which to store the computed embeddings. 
In either case, + when working with patch embeddings, you can provide either the + fully-qualified path to the patch embeddings or just the name of + the label attribute in ``roi_field`` model (None): a :class:`fiftyone.core.models.Model` or the name of a model from the - `FiftyOne Model Zoo <https://voxel51.com/docs/fiftyone/user_guide/model_zoo/models.html>`_ + `FiftyOne Model Zoo <https://docs.voxel51.com/user_guide/model_zoo/models.html>`_ to use to generate embeddings. The model must expose embeddings (``model.has_embeddings = True``) force_square (False): whether to minimally manipulate the patch @@ -259,12 +268,16 @@ def compute_visualization( patches that can be interactively visualized. The representation can be visualized by calling the - :meth:`sort_by_similarity() <fiftyone.brain.similarity.SimilarityResults.sort_by_similarity>` + :meth:`visualize() <fiftyone.brain.visualization.VisualizationResults.visualize>` method of the returned :class:`fiftyone.brain.visualization.VisualizationResults` object. - If no ``embeddings`` or ``model`` is provided, a default model is used to - generate embeddings. + If no ``embeddings`` or ``model`` is provided, the following default model + is used to generate embeddings:: + + import fiftyone.zoo as foz + + model = foz.load_zoo_model("mobilenet-v2-imagenet-torch") You can use the ``method`` parameter to select the dimensionality-reduction method to use, and you can optionally customize the method by passing @@ -287,19 +300,35 @@ def compute_visualization( :class:`fiftyone.core.labels.Detections`, :class:`fiftyone.core.labels.Polyline`, or :class:`fiftyone.core.labels.Polylines` - embeddings (None): pre-computed embeddings to use. Can be any of the - following: + embeddings (None): if no ``model`` is provided, this argument specifies + pre-computed embeddings to use, which can be any of the following: + - a dict mapping sample IDs to embedding vectors - a ``num_samples x num_embedding_dims`` array of embeddings + corresponding to the samples in ``samples`` + - if ``patches_field`` is specified, a dict mapping label IDs to + to embedding vectors - if ``patches_field`` is specified, a dict mapping sample IDs to ``num_patches x num_embedding_dims`` arrays of patch embeddings - the name of a dataset field containing the embeddings to use + - a :class:`fiftyone.brain.similarity.SimilarityIndex` from which + to retrieve embeddings for all samples/patches in ``samples`` + + If a ``model`` is provided, this argument specifies the name of a + field in which to store the computed embeddings. In either case, + when working with patch embeddings, you can provide either the + fully-qualified path to the patch embeddings or just the name of + the label attribute in ``patches_field`` points (None): a pre-computed low-dimensional representation to use. If - provided, no embeddings will be computed. Can be any of the + provided, no embeddings will be used/computed. 
Can be any of the following: - - a ``num_samples x num_dims`` array of points + - a dict mapping sample IDs to points vectors + - a ``num_samples x num_dims`` array of points corresponding to + the samples in ``samples`` + - if ``patches_field`` is specified, a dict mapping label IDs to + points vectors - if ``patches_field`` is specified, a ``num_patches x num_dims`` array of points whose rows correspond to the flattened list of patches whose IDs are shown below:: @@ -315,7 +344,7 @@ def compute_visualization( values are ``("umap", "tsne", "pca", "manual")`` model (None): a :class:`fiftyone.core.models.Model` or the name of a model from the - `FiftyOne Model Zoo <https://voxel51.com/docs/fiftyone/user_guide/model_zoo/index.html>`_ + `FiftyOne Model Zoo <https://docs.voxel51.com/user_guide/model_zoo/index.html>`_ to use to generate embeddings. The model must expose embeddings (``model.has_embeddings = True``) force_square (False): whether to minimally manipulate the patch @@ -367,36 +396,43 @@ def compute_similarity( patches_field=None, embeddings=None, brain_key=None, - metric="euclidean", model=None, force_square=False, alpha=None, batch_size=None, num_workers=None, skip_failures=True, + backend=None, + **kwargs, ): """Uses embeddings to index the samples or their patches so that you can - query/sort by visual similarity. + query/sort by similarity. + + Calling this method only creates the index. You can then call the methods + exposed on the retuned :class:`fiftyone.brain.similarity.SimilarityIndex` + object to perform the following operations: - Calling this method (or loading existing results) only generates the index. - You can then call the methods exposed on the retuned - :class:`fiftyone.brain.similarity.SimilarityResults` object to perform the - following operations: + - :meth:`sort_by_similarity() <fiftyone.brain.similarity.SimilarityIndex.sort_by_similarity>`: + Sort the samples in the collection by similarity to a specific example + or example(s) - - :meth:`sort_by_similarity() <fiftyone.brain.similarity.SimilarityResults.sort_by_similarity>`: - Sort the samples in the collection by visual similarity to a specific - example or example(s) + In addition, if the backend supports it, you can call the following + duplicate detection methods: - - :meth:`find_duplicates() <fiftyone.brain.similarity.SimilarityResults.find_duplicates>`: + - :meth:`find_duplicates() <fiftyone.brain.similarity.DuplicatesMixin.find_duplicates>`: Query the index to find all examples with near-duplicates in the collection - - :meth:`find_unique() <fiftyone.brain.similarity.SimilarityResults.find_unique>`: + - :meth:`find_unique() <fiftyone.brain.similarity.DuplicatesMixin.find_unique>`: Query the index to select a subset of examples of a specified size that are maximally unique with respect to each other - If no ``embeddings`` or ``model`` is provided, a default model is used to - generate embeddings. + If no ``embeddings`` or ``model`` is provided, the following default model + is used to generate embeddings:: + + import fiftyone.zoo as foz + + model = foz.load_zoo_model("mobilenet-v2-imagenet-torch") Args: samples: a :class:`fiftyone.core.collections.SampleCollection` @@ -406,21 +442,34 @@ def compute_similarity( :class:`fiftyone.core.labels.Detections`, :class:`fiftyone.core.labels.Polyline`, or :class:`fiftyone.core.labels.Polylines` - embeddings (None): pre-computed embeddings to use. Can be any of the - following: + embeddings (None): embeddings to feed the index. 
This argument's + behavior depends on whether a ``model`` is provided, as described + below. + + If no ``model`` is provided, this argument specifies precomputed + embeddings to use: - a ``num_samples x num_dims`` array of embeddings - if ``patches_field`` is specified, a dict mapping sample IDs to ``num_patches x num_dims`` arrays of patch embeddings - - the name of a dataset field containing the embeddings to use + - the name of a dataset field from which to load embeddings + - ``None``: use the default model to compute embeddings + - ``False``: **do not** compute embeddings right now + + If a ``model`` is provided, this argument specifies where to store + the model's embeddings: + - the name of a field in which to store the computed embeddings + - ``False``: **do not** compute or store embeddings right now + + In either case, when working with patch embeddings, you can provide + either the fully-qualified path to the patch embeddings or just the + name of the label attribute in ``patches_field`` brain_key (None): a brain key under which to store the results of this method - metric ("euclidean"): the embedding distance metric to use. See - ``sklearn.metrics.pairwise_distance`` for supported values model (None): a :class:`fiftyone.core.models.Model` or the name of a model from the - `FiftyOne Model Zoo <https://voxel51.com/docs/fiftyone/user_guide/model_zoo/index.html>`_ + `FiftyOne Model Zoo <https://docs.voxel51.com/user_guide/model_zoo/index.html>`_ to use to generate embeddings. The model must expose embeddings (``model.has_embeddings = True``) force_square (False): whether to minimally manipulate the patch @@ -440,24 +489,32 @@ def compute_similarity( embeddings skip_failures (True): whether to gracefully continue without raising an error if embeddings cannot be generated for a sample + backend (None): the similarity backend to use. The supported values are + ``fiftyone.brain.brain_config.similarity_backends.keys()`` and the + default is + ``fiftyone.brain.brain_config.default_similarity_backend`` + **kwargs: keyword arguments for the + :class:`fiftyone.brian.SimilarityConfig` subclass of the backend + being used Returns: - a :class:`fiftyone.brain.similarity.SimilarityResults` + a :class:`fiftyone.brain.similarity.SimilarityIndex` """ - import fiftyone.brain.internal.core.similarity as fbs + import fiftyone.brain.similarity as fbs return fbs.compute_similarity( samples, patches_field, embeddings, brain_key, - metric, model, force_square, alpha, batch_size, num_workers, skip_failures, + backend, + **kwargs, ) diff --git a/fiftyone/brain/config.py b/fiftyone/brain/config.py new file mode 100644 index 00000000..a64ab5a6 --- /dev/null +++ b/fiftyone/brain/config.py @@ -0,0 +1,151 @@ +""" +Brain config. + +| Copyright 2017-2023, Voxel51, Inc. 
+| `voxel51.com <https://voxel51.com/>`_ +| +""" +import os + +from fiftyone.core.config import EnvConfig + + +class BrainConfig(EnvConfig): + """FiftyOne brain configuration settings.""" + + _BUILTIN_SIMILARITY_BACKENDS = { + "sklearn": { + "config_cls": "fiftyone.brain.internal.core.sklearn.SklearnSimilarityConfig", + }, + "pinecone": { + "config_cls": "fiftyone.brain.internal.core.pinecone.PineconeSimilarityConfig", + }, + "qdrant": { + "config_cls": "fiftyone.brain.internal.core.qdrant.QdrantSimilarityConfig", + }, + } + + def __init__(self, d=None): + if d is None: + d = {} + + self.default_similarity_backend = self.parse_string( + d, + "default_similarity_backend", + env_var="FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND", + default="sklearn", + ) + + self.similarity_backends = self._parse_similarity_backends(d) + + def _parse_similarity_backends(self, d): + d = d.get("similarity_backends", {}) + env_vars = dict(os.environ) + + # + # `FIFTYONE_BRAIN_SIMILARITY_BACKENDS` can be used to declare which + # backends are exposed. This may exclude builtin backends and/or + # declare new backends + # + + if "FIFTYONE_BRAIN_SIMILARITY_BACKENDS" in env_vars: + backends = env_vars["FIFTYONE_BRAIN_SIMILARITY_BACKENDS"].split( + "," + ) + + # Declare new backends and omit any others not in `backends` + d = {backend: d.get(backend, {}) for backend in backends} + else: + backends = sorted(self._BUILTIN_SIMILARITY_BACKENDS.keys()) + + # Declare builtin backends if necessary + for backend in backends: + if backend not in d: + d[backend] = {} + + # + # Extract parameters from any environment variables of the form + # `FIFTYONE_BRAIN_SIMILARITY_<BACKEND>_<PARAMETER>` + # + + for backend, parameters in d.items(): + prefix = "FIFTYONE_BRAIN_SIMILARITY_%s_" % backend.upper() + for env_name, env_value in env_vars.items(): + if env_name.startswith(prefix): + name = env_name[len(prefix) :].lower() + value = _parse_env_value(env_value) + parameters[name] = value + + # + # Set default parameters for builtin similarity backends + # + + for backend, defaults in self._BUILTIN_SIMILARITY_BACKENDS.items(): + if backend not in d: + continue + + d_backend = d[backend] + for name, value in defaults.items(): + if name not in d_backend: + d_backend[name] = value + + return d + + +def locate_brain_config(): + """Returns the path to the :class:`BrainConfig` on disk. + + The default location is ``~/.fiftyone/brain_config.json``, but you can + override this path by setting the ``FIFTYONE_BRAIN_CONFIG_PATH`` + environment variable. + + Note that a config file may not actually exist on disk. + + Returns: + the path to the :class:`BrainConfig` on disk + """ + if "FIFTYONE_BRAIN_CONFIG_PATH" not in os.environ: + return os.path.join( + os.path.expanduser("~"), ".fiftyone", "brain_config.json" + ) + + return os.environ["FIFTYONE_BRAIN_CONFIG_PATH"] + + +def load_brain_config(): + """Loads the FiftyOne brain config. 
+ + Returns: + a :class:`BrainConfig` instance + """ + brain_config_path = locate_brain_config() + if os.path.isfile(brain_config_path): + return BrainConfig.from_json(brain_config_path) + + return BrainConfig() + + +def _parse_env_value(value): + try: + return int(value) + except: + pass + + try: + return float(value) + except: + pass + + if value in ("True", "true"): + return True + + if value in ("False", "false"): + return False + + if value in ("None", ""): + return None + + if "," in value: + return [_parse_env_value(v) for v in value.split(",")] + + return value diff --git a/fiftyone/brain/internal/__init__.py b/fiftyone/brain/internal/__init__.py index fff6c747..9ba90b68 100644 --- a/fiftyone/brain/internal/__init__.py +++ b/fiftyone/brain/internal/__init__.py @@ -3,7 +3,7 @@ Contains all non-public code powering the ``fiftyone.brain`` public namespace. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/internal/core/__init__.py b/fiftyone/brain/internal/core/__init__.py index c8dc42a4..8d756b31 100644 --- a/fiftyone/brain/internal/core/__init__.py +++ b/fiftyone/brain/internal/core/__init__.py @@ -1,5 +1,5 @@ """ -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/internal/core/duplicates.py b/fiftyone/brain/internal/core/duplicates.py index a356c490..ffed4005 100644 --- a/fiftyone/brain/internal/core/duplicates.py +++ b/fiftyone/brain/internal/core/duplicates.py @@ -1,7 +1,7 @@ """ Duplicates methods. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/internal/core/hardness.py b/fiftyone/brain/internal/core/hardness.py index cde3609d..e4d5312a 100644 --- a/fiftyone/brain/internal/core/hardness.py +++ b/fiftyone/brain/internal/core/hardness.py @@ -1,7 +1,7 @@ """ Hardness methods. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ @@ -13,6 +13,7 @@ import fiftyone.core.brain as fob import fiftyone.core.labels as fol +import fiftyone.core.media as fom import fiftyone.core.utils as fou import fiftyone.core.validation as fov @@ -35,6 +36,9 @@ def compute_hardness(samples, label_field, hardness_field): fov.validate_collection(samples) fov.validate_collection_label_fields(samples, label_field, _ALLOWED_TYPES) + if samples.media_type == fom.VIDEO: + hardness_field, _ = samples._handle_frame_field(hardness_field) + config = HardnessConfig(label_field, hardness_field) brain_key = hardness_field brain_method = config.build() diff --git a/fiftyone/brain/internal/core/mistakenness.py b/fiftyone/brain/internal/core/mistakenness.py index 84a4ed69..fb2535fe 100644 --- a/fiftyone/brain/internal/core/mistakenness.py +++ b/fiftyone/brain/internal/core/mistakenness.py @@ -1,7 +1,7 @@ """ Mistakenness methods. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. 
| `voxel51.com <https://voxel51.com/>`_ | """ @@ -15,6 +15,7 @@ from fiftyone import ViewField as F import fiftyone.core.brain as fob import fiftyone.core.labels as fol +import fiftyone.core.media as fom import fiftyone.core.utils as fou import fiftyone.core.validation as fov @@ -27,6 +28,8 @@ fol.Classifications, fol.Detections, fol.Polylines, + fol.Keypoints, + fol.TemporalDetections, ) _MISSED_CONFIDENCE_THRESHOLD = 0.95 _DETECTION_IOU = 0.5 @@ -63,16 +66,19 @@ def compute_mistakenness( # mistakenness, and low confidence predictions result in middling # mistakenness. # - # See the docstring above for additional handling of missing and spurious - # detections/polylines. - # fov.validate_collection_label_fields( samples, (pred_field, label_field), _ALLOWED_TYPES, same_type=True ) + if samples.media_type == fom.VIDEO: + mistakenness_field, _ = samples._handle_frame_field(mistakenness_field) + missing_field, _ = samples._handle_frame_field(missing_field) + spurious_field, _ = samples._handle_frame_field(spurious_field) + is_objects = samples._is_label_field( - pred_field, (fol.Detections, fol.Polylines) + pred_field, + (fol.Detections, fol.Polylines, fol.Keypoints, fol.TemporalDetections), ) if is_objects: eval_key = _make_eval_key(samples, mistakenness_field) diff --git a/fiftyone/brain/internal/core/pinecone.py b/fiftyone/brain/internal/core/pinecone.py new file mode 100644 index 00000000..f5c3f4b1 --- /dev/null +++ b/fiftyone/brain/internal/core/pinecone.py @@ -0,0 +1,589 @@ +""" +Piencone similarity backend. + +| Copyright 2017-2023, Voxel51, Inc. +| `voxel51.com <https://voxel51.com/>`_ +| +""" +import logging + +import numpy as np + +import eta.core.utils as etau + +import fiftyone.core.utils as fou +from fiftyone.brain.similarity import ( + SimilarityConfig, + Similarity, + SimilarityIndex, +) +import fiftyone.brain.internal.core.utils as fbu + +pinecone = fou.lazy_import("pinecone") + + +logger = logging.getLogger(__name__) + +_SUPPORTED_METRICS = ("cosine", "dotproduct", "euclidean") + + +class PineconeSimilarityConfig(SimilarityConfig): + """Configuration for the Pinecone similarity backend. + + Args: + embeddings_field (None): the sample field containing the embeddings, + if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known + patches_field (None): the sample field defining the patches being + analyzed, if any + supports_prompts (None): whether this run supports prompt queries + index_name (None): the name of a Pinecone index to use or create. If + none is provided, a new index will be created + index_type (None): the index type to use when creating a new index + namespace (None): a namespace under which to store vectors added to the + index + metric (None): the embedding distance metric to use when creating a + new index. 
Supported values are + ``("cosine", "dotproduct", "euclidean")`` + replicas (None): an optional number of replicas when creating a new + index + shards (None): an optional number of shards when creating a new index + pods (None): an optional number of pods when creating a new index + pod_type (None): an optional pod type when creating a new index + api_key (None): a Pinecone API key to use + environment (None): a Pinecone environment to use + project_name (None): a Pinecone project to use + """ + + def __init__( + self, + embeddings_field=None, + model=None, + patches_field=None, + supports_prompts=None, + index_name=None, + index_type=None, + namespace=None, + metric=None, + replicas=None, + shards=None, + pods=None, + pod_type=None, + api_key=None, + environment=None, + project_name=None, + **kwargs, + ): + if metric is not None and metric not in _SUPPORTED_METRICS: + raise ValueError( + "Unsupported metric '%s'. Supported values are %s" + % (metric, _SUPPORTED_METRICS) + ) + + super().__init__( + embeddings_field=embeddings_field, + model=model, + patches_field=patches_field, + supports_prompts=supports_prompts, + **kwargs, + ) + + self.index_name = index_name + self.index_type = index_type + self.namespace = namespace + self.metric = metric + self.replicas = replicas + self.shards = shards + self.pods = pods + self.pod_type = pod_type + + # store privately so these aren't serialized + self._api_key = api_key + self._environment = environment + self._project_name = project_name + + @property + def method(self): + return "pinecone" + + @property + def api_key(self): + return self._api_key + + @api_key.setter + def api_key(self, value): + self._api_key = value + + @property + def environment(self): + return self._environment + + @environment.setter + def environment(self, value): + self._environment = value + + @property + def project_name(self): + return self._project_name + + @project_name.setter + def project_name(self, value): + self._project_name = value + + @property + def max_k(self): + return 10000 # Pinecone limit + + @property + def supports_least_similarity(self): + return False + + @property + def supported_aggregations(self): + return ("mean",) + + def load_credentials( + self, api_key=None, environment=None, project_name=None + ): + self._load_parameters( + api_key=api_key, environment=environment, project_name=project_name + ) + + +class PineconeSimilarity(Similarity): + """Pinecone similarity factory. + + Args: + config: a :class:`PineconeSimilarityConfig` + """ + + def ensure_requirements(self): + fou.ensure_package("pinecone-client") + + def ensure_usage_requirements(self): + fou.ensure_package("pinecone-client") + + def initialize(self, samples, brain_key): + return PineconeSimilarityIndex( + samples, self.config, brain_key, backend=self + ) + + +class PineconeSimilarityIndex(SimilarityIndex): + """Class for interacting with Pinecone similarity indexes. 
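+
+    Example usage (a sketch, not part of this patch; assumes that Pinecone
+    credentials have been configured for this backend)::
+
+        import fiftyone.brain as fob
+        import fiftyone.zoo as foz
+
+        dataset = foz.load_zoo_dataset("quickstart")
+
+        # Index the dataset by image similarity
+        fob.compute_similarity(
+            dataset, backend="pinecone", brain_key="pinecone_index"
+        )
+
+        # Sort by similarity to a query sample
+        query_id = dataset.first().id
+        view = dataset.sort_by_similarity(
+            query_id, k=15, brain_key="pinecone_index"
+        )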
+ + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`PineconeSimilarityConfig` used + brain_key: the brain key + backend (None): a :class:`PineconeSimilarity` instance + """ + + def __init__(self, samples, config, brain_key, backend=None): + super().__init__(samples, config, brain_key, backend=backend) + self._index = None + self._initialize() + + def _initialize(self): + pinecone.init( + api_key=self.config.api_key, + environment=self.config.environment, + project_name=self.config.project_name, + ) + + try: + index_names = pinecone.list_indexes() + except Exception as e: + raise ValueError( + "Failed to connect to Pinecone backend at environment '%s'. " + "Refer to https://docs.voxel51.com/integrations/pinecone.html " + "for more information" % self.config.environment + ) from e + + if self.config.index_name is None: + root = "fiftyone-" + fou.to_slug(self.samples._root_dataset.name) + index_name = fbu.get_unique_name(root, index_names) + + self.config.index_name = index_name + self.save_config() + + if self.config.index_name in index_names: + index = pinecone.Index(self.config.index_name) + else: + index = None + + self._index = index + + def _create_index(self, dimension): + kwargs = dict( + index_type=self.config.index_type, + metric=self.config.metric, + replicas=self.config.replicas, + shards=self.config.shards, + pods=self.config.pods, + pod_type=self.config.pod_type, + ) + kwargs = {k: v for k, v in kwargs.items() if v is not None} + + pinecone.create_index( + self.config.index_name, + dimension=dimension, + **kwargs, + ) + + self._index = pinecone.Index(self.config.index_name) + + @property + def index(self): + """The ``pinecone.Index`` instance for this index.""" + return self._index + + @property + def total_index_size(self): + if self._index is None: + return None + + return self._index.describe_index_stats()["total_vector_count"] + + def add_to_index( + self, + embeddings, + sample_ids, + label_ids=None, + overwrite=True, + allow_existing=True, + warn_existing=False, + reload=True, + batch_size=100, + namespace=None, + ): + if namespace is None: + namespace = self.config.namespace + + if self._index is None: + self._create_index(embeddings.shape[1]) + + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if warn_existing or not allow_existing or not overwrite: + existing_ids = self._get_existing_ids(ids) + num_existing = len(existing_ids) + + if num_existing > 0: + if not allow_existing: + raise ValueError( + "Found %d IDs (eg %s) that already exist in the index" + % (num_existing, next(iter(existing_ids))) + ) + + if warn_existing: + if overwrite: + logger.warning( + "Overwriting %d IDs that already exist in the " + "index", + num_existing, + ) + else: + logger.warning( + "Skipping %d IDs that already exist in the index", + num_existing, + ) + else: + existing_ids = set() + + if existing_ids and not overwrite: + del_inds = [i for i, _id in enumerate(ids) if _id in existing_ids] + + embeddings = np.delete(embeddings, del_inds) + sample_ids = np.delete(sample_ids, del_inds) + if label_ids is not None: + label_ids = np.delete(label_ids, del_inds) + + embeddings = [e.tolist() for e in embeddings] + sample_ids = list(sample_ids) + if label_ids is not None: + ids = list(label_ids) + else: + ids = list(sample_ids) + + for _embeddings, _ids, _sample_ids in zip( + fou.iter_batches(embeddings, batch_size), + fou.iter_batches(ids, batch_size), + fou.iter_batches(sample_ids, batch_size), + ): + _id_dicts = [ + 
{"id": _id, "sample_id": _sid} + for _id, _sid in zip(_ids, _sample_ids) + ] + self._index.upsert( + list(zip(_ids, _embeddings, _id_dicts)), + namespace=namespace, + ) + + if reload: + self.reload() + + def remove_from_index( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + reload=True, + ): + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if not allow_missing or warn_missing: + existing_ids = self._index.fetch(ids).vectors.keys() + missing_ids = set(existing_ids) - set(ids) + num_missing = len(missing_ids) + + if num_missing > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that are not present in the " + "index" % (num_missing, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Ignoring %d IDs that are not present in the index", + num_missing, + ) + + self._index.delete(ids=ids) + + if reload: + self.reload() + + def get_embeddings( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + ): + if label_ids is not None: + if self.config.patches_field is None: + raise ValueError("This index does not support label IDs") + + if sample_ids is not None: + logger.warning( + "Ignoring sample IDs when label IDs are provided" + ) + + if sample_ids is not None and self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_sample_ids(sample_ids) + elif self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_label_ids(label_ids) + else: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_sample_embeddings(sample_ids) + + num_missing_ids = len(missing_ids) + if num_missing_ids > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that do not exist in the index" + % (num_missing_ids, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Skipping %d IDs that do not exist in the index", + num_missing_ids, + ) + + embeddings = np.array(embeddings) + sample_ids = np.array(sample_ids) + if label_ids is not None: + label_ids = np.array(label_ids) + + return embeddings, sample_ids, label_ids + + def cleanup(self): + pinecone.delete_index(self.config.index_name) + self._index = None + + def _get_existing_ids(self, ids, batch_size=1000): + existing_ids = set() + for batch_ids in fou.iter_batches(ids, batch_size): + response = self._index.fetch(ids=list(batch_ids))["vectors"] + existing_ids.update(response.keys()) + + return existing_ids + + def _get_sample_embeddings(self, sample_ids, batch_size=1000): + found_embeddings = [] + found_sample_ids = [] + + if sample_ids is None: + raise ValueError( + "Pinecone does not support retrieving all vectors in an index" + ) + + for batch_ids in fou.iter_batches(sample_ids, batch_size): + response = self._index.fetch(ids=list(batch_ids))["vectors"] + + for r in response.values(): + found_embeddings.append(r["values"]) + found_sample_ids.append(r["id"]) + + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, None, missing_ids + + def _get_patch_embeddings_from_label_ids(self, label_ids, batch_size=1000): + found_embeddings = [] + found_sample_ids = [] + found_label_ids = [] + + if label_ids is None: + raise ValueError( + "Pinecone does not support retrieving all vectors in an index" + ) + + for batch_ids in fou.iter_batches(label_ids, batch_size): + response = 
self._index.fetch(ids=list(batch_ids))["vectors"] + + for r in response.values(): + found_embeddings.append(r["values"]) + found_sample_ids.append(r["metadata"]["sample_id"]) + found_label_ids.append(r["id"]) + + missing_ids = list(set(label_ids) - set(found_label_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _get_patch_embeddings_from_sample_ids( + self, sample_ids, batch_size=100 + ): + found_embeddings = [] + found_sample_ids = [] + found_label_ids = [] + + query_vector = [0.0] * self._index.describe_index_stats().dimension + top_k = min(batch_size, self.config.max_k) + + for batch_ids in fou.iter_batches(sample_ids, batch_size): + response = self._index.query( + vector=query_vector, + filter={"sample_id": {"$in": list(batch_ids)}}, + top_k=top_k, + include_values=True, + include_metadata=True, + ) + + for r in response["matches"]: + found_embeddings.append(r["values"]) + found_sample_ids.append(r["metadata"]["sample_id"]) + found_label_ids.append(r["id"]) + + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + if query is None: + raise ValueError("Pinecone does not support full index neighbors") + + if reverse is True: + raise ValueError( + "Pinecone does not support least similarity queries" + ) + + if k is None or k > self.config.max_k: + raise ValueError("Pincone requires k<=%s" % self.config.max_k) + + if aggregation not in (None, "mean"): + raise ValueError("Unsupported aggregation '%s'" % aggregation) + + query = self._parse_neighbors_query(query) + if aggregation == "mean" and query.ndim == 2: + query = query.mean(axis=0) + + single_query = query.ndim == 1 + if single_query: + query = [query] + + if self.config.patches_field is not None: + index_ids = self.current_label_ids + else: + index_ids = self.current_sample_ids + + _filter = {"id": {"$in": list(index_ids)}} + + ids = [] + dists = [] + for q in query: + response = self._index.query( + vector=q.tolist(), top_k=k, filter=_filter + ) + ids.append([r["id"] for r in response["matches"]]) + if return_dists: + dists.append([r["score"] for r in response["matches"]]) + + if single_query: + ids = ids[0] + if return_dists: + dists = dists[0] + + if return_dists: + return ids, dists + + return ids + + def _parse_neighbors_query(self, query): + if etau.is_str(query): + query_ids = [query] + single_query = True + else: + query = np.asarray(query) + + # Query by vector(s) + if np.issubdtype(query.dtype, np.number): + return query + + query_ids = list(query) + single_query = False + + # Query by ID(s) + response = self._index.fetch(query_ids)["vectors"] + query = np.array([response[_id]["values"] for _id in query_ids]) + + if single_query: + query = query[0, :] + + return query + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + return cls(samples, config, brain_key) diff --git a/fiftyone/brain/internal/core/qdrant.py b/fiftyone/brain/internal/core/qdrant.py new file mode 100644 index 00000000..ecf58eb3 --- /dev/null +++ b/fiftyone/brain/internal/core/qdrant.py @@ -0,0 +1,620 @@ +""" +Qdrant similarity backend. + +| Copyright 2017-2023, Voxel51, Inc. 
+| `voxel51.com <https://voxel51.com/>`_
+|
+"""
+import logging
+
+import numpy as np
+
+import eta.core.utils as etau
+
+import fiftyone.core.utils as fou
+from fiftyone.brain.similarity import (
+    SimilarityConfig,
+    Similarity,
+    SimilarityIndex,
+)
+import fiftyone.brain.internal.core.utils as fbu
+
+qdrant = fou.lazy_import("qdrant_client")
+qmodels = fou.lazy_import("qdrant_client.http.models")
+
+
+logger = logging.getLogger(__name__)
+
+_SUPPORTED_METRICS = {
+    "cosine": qmodels.Distance.COSINE,
+    "dotproduct": qmodels.Distance.DOT,
+    "euclidean": qmodels.Distance.EUCLID,
+}
+
+
+class QdrantSimilarityConfig(SimilarityConfig):
+    """Configuration for the Qdrant similarity backend.
+
+    Args:
+        embeddings_field (None): the sample field containing the embeddings,
+            if one was provided
+        model (None): the :class:`fiftyone.core.models.Model` or name of the
+            zoo model that was used to compute embeddings, if known
+        patches_field (None): the sample field defining the patches being
+            analyzed, if any
+        supports_prompts (None): whether this run supports prompt queries
+        collection_name (None): the name of a Qdrant collection to use or
+            create. If none is provided, a new collection will be created
+        metric (None): the embedding distance metric to use when creating a
+            new index. Supported values are
+            ``("cosine", "dotproduct", "euclidean")``
+        replication_factor (None): an optional replication factor to use when
+            creating a new index
+        shard_number (None): an optional number of shards to use when creating
+            a new index
+        write_consistency_factor (None): an optional write consistency factor
+            to use when creating a new index
+        hnsw_config (None): an optional dict of HNSW config parameters to use
+            when creating a new index
+        optimizers_config (None): an optional dict of optimizer parameters to
+            use when creating a new index
+        wal_config (None): an optional dict of WAL config parameters to use
+            when creating a new index
+        url (None): a Qdrant server URL to use
+        api_key (None): a Qdrant API key to use
+    """
+
+    def __init__(
+        self,
+        embeddings_field=None,
+        model=None,
+        patches_field=None,
+        supports_prompts=None,
+        collection_name=None,
+        metric=None,
+        replication_factor=None,
+        shard_number=None,
+        write_consistency_factor=None,
+        hnsw_config=None,
+        optimizers_config=None,
+        wal_config=None,
+        url=None,
+        api_key=None,
+        **kwargs,
+    ):
+        if metric is not None and metric not in _SUPPORTED_METRICS:
+            raise ValueError(
+                "Unsupported metric '%s'. 
Supported values are %s" + % (metric, tuple(_SUPPORTED_METRICS.keys())) + ) + + super().__init__( + embeddings_field=embeddings_field, + model=model, + patches_field=patches_field, + supports_prompts=supports_prompts, + **kwargs, + ) + + self.collection_name = collection_name + self.metric = metric + self.replication_factor = replication_factor + self.shard_number = shard_number + self.write_consistency_factor = write_consistency_factor + self.hnsw_config = hnsw_config + self.optimizers_config = optimizers_config + self.wal_config = wal_config + + # store privately so these aren't serialized + self._url = url + self._api_key = api_key + + @property + def method(self): + return "qdrant" + + @property + def url(self): + return self._url + + @url.setter + def url(self, value): + self._url = value + + @property + def api_key(self): + return self._api_key + + @api_key.setter + def api_key(self, value): + self._api_key = value + + @property + def max_k(self): + return None + + @property + def supports_least_similarity(self): + return False + + @property + def supported_aggregations(self): + return ("mean",) + + def load_credentials(self, url=None, api_key=None): + self._load_parameters(url=url, api_key=api_key) + + +class QdrantSimilarity(Similarity): + """Qdrant similarity factory. + + Args: + config: a :class:`QdrantSimilarityConfig` + """ + + def ensure_requirements(self): + fou.ensure_package("qdrant-client") + + def ensure_usage_requirements(self): + fou.ensure_package("qdrant-client") + + def initialize(self, samples, brain_key): + return QdrantSimilarityIndex( + samples, self.config, brain_key, backend=self + ) + + +class QdrantSimilarityIndex(SimilarityIndex): + """Class for interacting with Qdrant similarity indexes. + + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`QdrantSimilarityConfig` used + brain_key: the brain key + backend (None): a :class:`QdrantSimilarity` instance + """ + + def __init__(self, samples, config, brain_key, backend=None): + super().__init__(samples, config, brain_key, backend=backend) + self._client = None + self._initialize() + + def _initialize(self): + self._client = qdrant.QdrantClient( + url=self.config.url, api_key=self.config.api_key + ) + + try: + collection_names = self._get_collection_names() + except Exception as e: + raise ValueError( + "Failed to connect to Qdrant backend at URL '%s'. 
Refer to " + "https://docs.voxel51.com/integrations/qdrant.html for more " + "information" % self.config.url + ) from e + + if self.config.collection_name is None: + root = "fiftyone-" + fou.to_slug(self.samples._root_dataset.name) + collection_name = fbu.get_unique_name(root, collection_names) + + self.config.collection_name = collection_name + self.save_config() + + def _get_collection_names(self): + return [c.name for c in self._client.get_collections().collections] + + def _create_collection(self, dimension): + if self.config.metric: + metric = self.config.metric + else: + metric = "cosine" + + vectors_config = qmodels.VectorParams( + size=dimension, + distance=_SUPPORTED_METRICS[metric], + ) + + if self.config.hnsw_config: + hnsw_config = qmodels.HnswConfig(**self.config.hnsw_config) + else: + hnsw_config = None + + if self.config.optimizers_config: + optimizers_config = qmodels.OptimizersConfig( + **self.config.optimizers_config + ) + else: + optimizers_config = None + + if self.config.wal_config: + wal_config = qmodels.WalConfig(**self.config.wal_config) + else: + wal_config = None + + self._client.recreate_collection( + collection_name=self.config.collection_name, + vectors_config=vectors_config, + shard_number=self.config.shard_number, + replication_factor=self.config.replication_factor, + hnsw_config=hnsw_config, + optimizers_config=optimizers_config, + wal_config=wal_config, + ) + + def _get_index_ids(self, batch_size=1000): + ids = [] + + offset = 0 + while offset is not None: + response = self._client.scroll( + collection_name=self.config.collection_name, + offset=offset, + limit=batch_size, + with_payload=True, + with_vectors=False, + ) + ids.extend([self._to_fiftyone_id(r.id) for r in response[0]]) + offset = response[-1] + + return ids + + @property + def total_index_size(self): + return self._client.count(self.config.collection_name).count + + @property + def client(self): + """The ``qdrant.QdrantClient`` instance for this index.""" + return self._client + + def add_to_index( + self, + embeddings, + sample_ids, + label_ids=None, + overwrite=True, + allow_existing=True, + warn_existing=False, + reload=True, + batch_size=1000, + ): + if self.config.collection_name not in self._get_collection_names(): + self._create_collection(embeddings.shape[1]) + + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + if warn_existing or not allow_existing or not overwrite: + index_ids = self._get_index_ids() + + existing_ids = set(ids) & set(index_ids) + num_existing = len(existing_ids) + + if num_existing > 0: + if not allow_existing: + raise ValueError( + "Found %d IDs (eg %s) that already exist in the index" + % (num_existing, next(iter(existing_ids))) + ) + + if warn_existing: + if overwrite: + logger.warning( + "Overwriting %d IDs that already exist in the " + "index", + num_existing, + ) + else: + logger.warning( + "Skipping %d IDs that already exist in the index", + num_existing, + ) + else: + existing_ids = set() + + if existing_ids and not overwrite: + del_inds = [i for i, _id in enumerate(ids) if _id in existing_ids] + + embeddings = np.delete(embeddings, del_inds) + sample_ids = np.delete(sample_ids, del_inds) + if label_ids is not None: + label_ids = np.delete(label_ids, del_inds) + + embeddings = [e.tolist() for e in embeddings] + sample_ids = list(sample_ids) + if label_ids is not None: + ids = list(label_ids) + else: + ids = list(sample_ids) + + for _embeddings, _ids, _sample_ids in zip( + fou.iter_batches(embeddings, batch_size), + 
fou.iter_batches(ids, batch_size), + fou.iter_batches(sample_ids, batch_size), + ): + self._client.upsert( + collection_name=self.config.collection_name, + points=qmodels.Batch( + ids=self._to_qdrant_ids(_ids), + payloads=[{"sample_id": _id} for _id in _sample_ids], + vectors=_embeddings, + ), + ) + + if reload: + self.reload() + + def remove_from_index( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + reload=True, + ): + if label_ids is not None: + ids = label_ids + else: + ids = sample_ids + + qids = self._to_qdrant_ids(ids) + + if warn_missing or not allow_missing: + response = self._retrieve_points(qids, with_vectors=False) + + existing_ids = self._to_fiftyone_ids([r.id for r in response]) + missing_ids = list(set(ids) - set(existing_ids)) + num_missing_ids = len(missing_ids) + + if num_missing_ids > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that do not exist in the index" + % (num_missing_ids, missing_ids[0]) + ) + if warn_missing and not allow_missing: + logger.warning( + "Skipping %d IDs that do not exist in the index", + num_missing_ids, + ) + + self._client.delete( + collection_name=self.config.collection_name, + points_selector=qmodels.PointIdsList(points=qids), + ) + + if reload: + self.reload() + + def get_embeddings( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + ): + if label_ids is not None: + if self.config.patches_field is None: + raise ValueError("This index does not support label IDs") + + if sample_ids is not None: + logger.warning( + "Ignoring sample IDs when label IDs are provided" + ) + + if sample_ids is not None and self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_sample_ids(sample_ids) + elif self.config.patches_field is not None: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_patch_embeddings_from_label_ids(label_ids) + else: + ( + embeddings, + sample_ids, + label_ids, + missing_ids, + ) = self._get_sample_embeddings(sample_ids) + + num_missing_ids = len(missing_ids) + if num_missing_ids > 0: + if not allow_missing: + raise ValueError( + "Found %d IDs (eg %s) that do not exist in the index" + % (num_missing_ids, missing_ids[0]) + ) + + if warn_missing: + logger.warning( + "Skipping %d IDs that do not exist in the index", + num_missing_ids, + ) + + embeddings = np.array(embeddings) + sample_ids = np.array(sample_ids) + if label_ids is not None: + label_ids = np.array(label_ids) + + return embeddings, sample_ids, label_ids + + def cleanup(self): + self._client.delete_collection(self.config.collection_name) + + def _retrieve_points(self, qids, with_vectors=True, with_payload=True): + # @todo add batching? 
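+        # One possible batched implementation (a sketch, not part of this
+        # patch; the batch size of 1000 is illustrative):
+        #
+        #   results = []
+        #   for _qids in fou.iter_batches(qids, 1000):
+        #       results.extend(
+        #           self._client.retrieve(
+        #               collection_name=self.config.collection_name,
+        #               ids=list(_qids),
+        #               with_vectors=with_vectors,
+        #               with_payload=with_payload,
+        #           )
+        #       )
+        #
+        #   return results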
+ return self._client.retrieve( + collection_name=self.config.collection_name, + ids=qids, + with_vectors=with_vectors, + with_payload=with_payload, + ) + + def _get_sample_embeddings(self, sample_ids): + if sample_ids is None: + sample_ids = self._get_index_ids() + + response = self._retrieve_points( + self._to_qdrant_ids(sample_ids), + with_vectors=True, + ) + + found_embeddings = [r.vector for r in response] + found_sample_ids = self._to_fiftyone_ids([r.id for r in response]) + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, None, missing_ids + + def _get_patch_embeddings_from_label_ids(self, label_ids): + if label_ids is None: + label_ids = self._get_index_ids() + + response = self._retrieve_points( + self._to_qdrant_ids(label_ids), + with_vectors=True, + with_payload=True, + ) + + found_embeddings = [r.vector for r in response] + found_sample_ids = [r.payload["sample_id"] for r in response] + found_label_ids = self._to_fiftyone_ids([r.id for r in response]) + missing_ids = list(set(label_ids) - set(found_label_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _get_patch_embeddings_from_sample_ids(self, sample_ids): + _filter = qmodels.Filter( + should=[ + qmodels.FieldCondition( + key="sample_id", match=qmodels.MatchValue(value=sid) + ) + for sid in sample_ids + ] + ) + + response = self._client.scroll( + collection_name=self.config.collection_name, + scroll_filter=_filter, + with_vectors=True, + with_payload=True, + )[0] + + found_embeddings = [r.vector for r in response] + found_sample_ids = [r.payload["sample_id"] for r in response] + found_label_ids = [r.id for r in response] + missing_ids = list(set(sample_ids) - set(found_sample_ids)) + + return found_embeddings, found_sample_ids, found_label_ids, missing_ids + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + if query is None: + raise ValueError("Qdrant does not support full index neighbors") + + if reverse is True: + raise ValueError( + "Qdrant does not support least similarity queries" + ) + + if aggregation not in (None, "mean"): + raise ValueError("Unsupported aggregation '%s'" % aggregation) + + if k is None: + k = self.index_size + + query = self._parse_neighbors_query(query) + if aggregation == "mean" and query.ndim == 2: + query = query.mean(axis=0) + + single_query = query.ndim == 1 + if single_query: + query = [query] + + if self.config.patches_field is not None: + index_ids = self.current_label_ids + else: + index_ids = self.current_sample_ids + + _filter = qmodels.Filter( + must=[ + qmodels.HasIdCondition(has_id=self._to_qdrant_ids(index_ids)) + ] + ) + + ids = [] + dists = [] + for q in query: + results = self._client.search( + collection_name=self.config.collection_name, + query_vector=q, + query_filter=_filter, + with_payload=False, + limit=k, + ) + + ids.append(self._to_fiftyone_ids([r.id for r in results])) + if return_dists: + dists.append([r.score for r in results]) + + if single_query: + ids = ids[0] + if return_dists: + dists = dists[0] + + if return_dists: + return ids, dists + + return ids + + def _parse_neighbors_query(self, query): + if etau.is_str(query): + query_ids = [query] + single_query = True + else: + query = np.asarray(query) + + # Query by vector(s) + if np.issubdtype(query.dtype, np.number): + return query + + query_ids = list(query) + single_query = False + + # Query by ID(s) + qids = self._to_qdrant_ids(query_ids) + response = 
self._retrieve_points(qids, with_vectors=True) + query = np.array([r.vector for r in response]) + + if single_query: + query = query[0, :] + + return query + + def _to_qdrant_id(self, _id): + return _id + "00000000" + + def _to_qdrant_ids(self, ids): + return [self._to_qdrant_id(_id) for _id in ids] + + def _to_fiftyone_id(self, qid): + return qid.replace("-", "")[:-8] + + def _to_fiftyone_ids(self, qids): + return [self._to_fiftyone_id(qid) for qid in qids] + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + return cls(samples, config, brain_key) diff --git a/fiftyone/brain/internal/core/similarity.py b/fiftyone/brain/internal/core/similarity.py deleted file mode 100644 index 8e353cff..00000000 --- a/fiftyone/brain/internal/core/similarity.py +++ /dev/null @@ -1,911 +0,0 @@ -""" -Similarity methods. - -| Copyright 2017-2022, Voxel51, Inc. -| `voxel51.com <https://voxel51.com/>`_ -| -""" -from collections import defaultdict -import logging - -from bson import ObjectId -import numpy as np -import sklearn.metrics as skm -import sklearn.neighbors as skn -import sklearn.preprocessing as skp - -import eta.core.utils as etau - -from fiftyone import ViewField as F -import fiftyone.core.brain as fob -import fiftyone.core.context as foc -import fiftyone.core.fields as fof -import fiftyone.core.labels as fol -import fiftyone.core.patches as fop -import fiftyone.core.stages as fos -import fiftyone.core.validation as fov -import fiftyone.zoo as foz - -from fiftyone.brain.similarity import SimilarityConfig, SimilarityResults -import fiftyone.brain.internal.core.utils as fbu - - -logger = logging.getLogger(__name__) - -_AGGREGATIONS = {"mean": np.mean, "min": np.min, "max": np.max} - -_DEFAULT_MODEL = "mobilenet-v2-imagenet-torch" -_DEFAULT_BATCH_SIZE = None - -_MAX_PRECOMPUTE_DISTS = 15000 # ~1.7GB to store distance matrix in-memory -_COSINE_HACK_ATTR = "_cosine_hack" - - -def compute_similarity( - samples, - patches_field, - embeddings, - brain_key, - metric, - model, - force_square, - alpha, - batch_size, - num_workers, - skip_failures, -): - """See ``fiftyone/brain/__init__.py``.""" - - fov.validate_collection(samples) - - if model is None and embeddings is None: - model = foz.load_zoo_model(_DEFAULT_MODEL) - if batch_size is None: - batch_size = _DEFAULT_BATCH_SIZE - - if etau.is_str(embeddings): - embeddings_field = embeddings - embeddings = None - else: - embeddings_field = None - - config = SimilarityConfig( - embeddings_field=embeddings_field, - model=model, - patches_field=patches_field, - metric=metric, - ) - brain_method = config.build() - brain_method.ensure_requirements() - - if brain_key is not None: - brain_method.register_run(samples, brain_key) - - embeddings = fbu.get_embeddings( - samples, - model=model, - patches_field=patches_field, - embeddings_field=embeddings_field, - embeddings=embeddings, - force_square=force_square, - alpha=alpha, - batch_size=batch_size, - num_workers=num_workers, - skip_failures=skip_failures, - ) - - results = SimilarityResults(samples, config, embeddings) - brain_method.save_run_results(samples, brain_key, results) - - return results - - -class Similarity(fob.BrainMethod): - def ensure_requirements(self): - pass - - def get_fields(self, samples, brain_key): - fields = [] - if self.config.patches_field is not None: - fields.append(self.config.patches_field) - - return fields - - def cleanup(self, samples, brain_key): - pass - - -class NeighborsHelper(object): - def __init__(self, embeddings, metric): - self.embeddings = embeddings - 
self.metric = metric - - self._initialized = False - self._full_dists = None - self._curr_keep_inds = None - self._curr_dists = None - self._curr_neighbors = None - - def get_distances(self, keep_inds=None): - self._init() - - if self._same_keep_inds(keep_inds): - return self._curr_dists - - if keep_inds is not None: - dists, _ = self._build(keep_inds=keep_inds, build_neighbors=False) - else: - dists = self._full_dists - - self._curr_keep_inds = keep_inds - self._curr_dists = dists - self._curr_neighbors = None - - return dists - - def get_neighbors(self, keep_inds=None): - self._init() - - if self._curr_neighbors is not None and self._same_keep_inds( - keep_inds - ): - return self._curr_neighbors, self._curr_dists - - dists, neighbors = self._build(keep_inds=keep_inds) - - self._curr_keep_inds = keep_inds - self._curr_dists = dists - self._curr_neighbors = neighbors - - return neighbors, dists - - def _same_keep_inds(self, keep_inds): - if keep_inds is None and self._curr_keep_inds is None: - return True - - if ( - isinstance(keep_inds, np.ndarray) - and isinstance(self._curr_keep_inds, np.ndarray) - and keep_inds.size == self._curr_keep_inds.size - and (keep_inds == self._curr_keep_inds).all() - ): - return True - - return False - - def _init(self): - if self._initialized: - return - - # Pre-compute all pairwise distances if number of embeddings is small - if len(self.embeddings) <= _MAX_PRECOMPUTE_DISTS: - dists, _ = self._build_precomputed( - self.embeddings, build_neighbors=False - ) - else: - dists = None - - self._initialized = True - self._full_dists = dists - self._curr_keep_inds = None - self._curr_dists = dists - self._curr_neighbors = None - - def _build(self, keep_inds=None, build_neighbors=True): - # Use full distance matrix if available - if self._full_dists is not None: - if keep_inds is not None: - dists = self._full_dists[keep_inds, :][:, keep_inds] - else: - dists = self._full_dists - - if build_neighbors: - neighbors = skn.NearestNeighbors(metric="precomputed") - neighbors.fit(dists) - else: - neighbors = None - - return dists, neighbors - - # Must build index - embeddings = self.embeddings - - if keep_inds is not None: - embeddings = embeddings[keep_inds] - - if len(embeddings) <= _MAX_PRECOMPUTE_DISTS: - dists, neighbors = self._build_precomputed( - embeddings, build_neighbors=build_neighbors - ) - else: - dists = None - neighbors = self._build_graph(embeddings) - - return dists, neighbors - - def _build_precomputed(self, embeddings, build_neighbors=True): - logger.info("Generating index...") - - # Center embeddings - embeddings = np.asarray(embeddings) - embeddings -= embeddings.mean(axis=0, keepdims=True) - - dists = skm.pairwise_distances(embeddings, metric=self.metric) - - if build_neighbors: - neighbors = skn.NearestNeighbors(metric="precomputed") - neighbors.fit(dists) - else: - neighbors = None - - logger.info("Index complete") - - return dists, neighbors - - def _build_graph(self, embeddings): - logger.info( - "Generating neighbors graph for %d embeddings; this may take " - "awhile...", - len(embeddings), - ) - - # Center embeddings - embeddings = np.asarray(embeddings) - embeddings -= embeddings.mean(axis=0, keepdims=True) - - metric = self.metric - - if metric == "cosine": - # Nearest neighbors does not directly support cosine distance, so - # we approximate via euclidean distance on unit-norm embeddings - cosine_hack = True - embeddings = skp.normalize(embeddings, axis=1) - metric = "euclidean" - else: - cosine_hack = False - - neighbors = 
skn.NearestNeighbors(metric=metric) - neighbors.fit(embeddings) - - setattr(neighbors, _COSINE_HACK_ATTR, cosine_hack) - - logger.info("Index complete") - - return neighbors - - -def plot_distances(results, bins, log, backend, **kwargs): - _ensure_neighbors(results) - - keep_inds = results._curr_keep_inds - neighbors, _ = results._neighbors_helper.get_neighbors(keep_inds=keep_inds) - metric = results.config.metric - thresh = results.thresh - - dists, _ = neighbors.kneighbors(n_neighbors=1) - - if backend == "matplotlib": - return _plot_distances_mpl(dists, metric, thresh, bins, log, **kwargs) - - return _plot_distances_plotly(dists, metric, thresh, bins, log, **kwargs) - - -def sort_by_similarity( - results, query_ids, k, reverse, aggregation, dist_field, mongo -): - _ensure_neighbors(results) - - samples = results.view - sample_ids = results._curr_sample_ids - label_ids = results._curr_label_ids - keep_inds = results._curr_keep_inds - patches_field = results.config.patches_field - metric = results.config.metric - - selecting_samples = patches_field is None or isinstance( - samples, fop.PatchesView - ) - - if etau.is_str(query_ids): - query_ids = [query_ids] - - if not query_ids: - raise ValueError("At least one query ID must be provided") - - if aggregation not in _AGGREGATIONS: - raise ValueError( - "Unsupported aggregation method '%s'. Supported values are %s" - % (aggregation, tuple(_AGGREGATIONS.keys())) - ) - - # - # Parse query (always using full index) - # - - if patches_field is None: - ids = results._sample_ids - else: - ids = results._label_ids - - bad_ids = [] - query_inds = [] - for query_id in query_ids: - _inds = np.where(ids == query_id)[0] - if _inds.size == 0: - bad_ids.append(query_id) - else: - query_inds.append(_inds[0]) - - if bad_ids: - raise ValueError( - "Query IDs %s were not included in this index" % bad_ids - ) - - # - # Perform sorting - # - - dists = results._neighbors_helper.get_distances() - - if dists is not None: - if keep_inds is not None: - dists = dists[keep_inds, :] - - dists = dists[:, query_inds] - else: - index_embeddings = results.embeddings - if keep_inds is not None: - index_embeddings = index_embeddings[keep_inds] - - query_embeddings = results.embeddings[query_inds] - dists = skm.pairwise_distances( - index_embeddings, query_embeddings, metric=metric - ) - - agg_fcn = _AGGREGATIONS[aggregation] - dists = agg_fcn(dists, axis=1) - - inds = np.argsort(dists) - if reverse: - inds = np.flip(inds) - - if k is not None: - inds = inds[:k] - - # - # Store query distances - # - - if dist_field is not None: - if selecting_samples: - values = {sample_ids[ind]: dists[ind] for ind in inds} - samples.set_values(dist_field, values, key_field="id") - else: - label_type, path = samples._get_label_field_path( - patches_field, dist_field - ) - if issubclass(label_type, fol._LABEL_LIST_FIELDS): - samples._set_list_values_by_id( - path, - sample_ids[inds], - label_ids[inds], - dists[inds], - path.rsplit(".", 1)[0], - ) - else: - values = {sample_ids[ind]: dists[ind] for ind in inds} - samples.set_values(path, values, key_field="id") - - # - # Construct sorted view - # - - stages = [] - - if selecting_samples: - stage = fos.Select(sample_ids[inds], ordered=True) - stages.append(stage) - else: - # We're sorting by object similarity but this is not a patches view, so - # arrange the samples in order of their first occuring label - result_sample_ids = _unique_no_sort(sample_ids[inds]) - stage = fos.Select(result_sample_ids, ordered=True) - stages.append(stage) - - if k 
is not None: - _ids = [ObjectId(_id) for _id in label_ids[inds]] - stage = fos.FilterLabels(patches_field, F("_id").is_in(_ids)) - stages.append(stage) - - if mongo: - pipeline = [] - for stage in stages: - stage.validate(samples) - pipeline.extend(stage.to_mongo(samples)) - - return pipeline - - view = samples - for stage in stages: - view = view.add_stage(stage) - - return view - - -def find_duplicates(results, thresh, fraction): - _ensure_neighbors(results) - - keep_inds = results._curr_keep_inds - embeddings = results.embeddings - metric = results.config.metric - patches_field = results.config.patches_field - - neighbors, dists = results._neighbors_helper.get_neighbors( - keep_inds=keep_inds - ) - - if keep_inds is not None: - embeddings = embeddings[keep_inds] - - if patches_field is not None: - ids = results._curr_label_ids - logger.info("Computing duplicate patches...") - else: - ids = results._curr_sample_ids - logger.info("Computing duplicate samples...") - - num_embeddings = len(embeddings) - - # - # Detect duplicates - # - - if fraction is not None: - num_keep = int(round(min(max(0, 1.0 - fraction), 1) * num_embeddings)) - keep, thresh = _remove_duplicates_count( - neighbors, num_keep, num_embeddings, init_thresh=thresh, - ) - else: - keep = _remove_duplicates_thresh(neighbors, thresh, num_embeddings) - - unique_ids = [_id for idx, _id in enumerate(ids) if idx in keep] - duplicate_ids = [_id for idx, _id in enumerate(ids) if idx not in keep] - - # - # Locate nearest non-duplicate for each duplicate - # - - if unique_ids and duplicate_ids: - unique_inds = np.array(sorted(keep)) - dup_inds = np.array( - [idx for idx in range(num_embeddings) if idx not in keep] - ) - - if dists is not None: - # Use pre-computed distances - _dists = dists[unique_inds, :][:, dup_inds] - min_inds = np.argmin(_dists, axis=0) - min_dists = _dists[min_inds, range(len(dup_inds))] - else: - neighbors = skn.NearestNeighbors(metric=metric) - neighbors.fit(embeddings[unique_inds, :]) - min_dists, min_inds = neighbors.kneighbors( - embeddings[dup_inds, :], n_neighbors=1 - ) - min_dists = min_dists.ravel() - min_inds = min_inds.ravel() - - neighbors_map = defaultdict(list) - for dup_id, min_ind, min_dist in zip( - duplicate_ids, min_inds, min_dists - ): - nearest_id = ids[unique_inds[min_ind]] - neighbors_map[nearest_id].append((dup_id, min_dist)) - - neighbors_map = { - k: sorted(v, key=lambda t: t[1]) for k, v in neighbors_map.items() - } - else: - neighbors_map = {} - - results._thresh = thresh - results._unique_ids = unique_ids - results._duplicate_ids = duplicate_ids - results._neighbors_map = neighbors_map - - logger.info("Duplicates computation complete") - - -def find_unique(results, count): - _ensure_neighbors(results) - - keep_inds = results._curr_keep_inds - neighbors, _ = results._neighbors_helper.get_neighbors(keep_inds=keep_inds) - patches_field = results.config.patches_field - num_embeddings = results.index_size - - if patches_field is not None: - ids = results._curr_label_ids - logger.info("Computing unique patches...") - else: - ids = results._curr_sample_ids - logger.info("Computing unique samples...") - - # Find uniques - keep, thresh = _remove_duplicates_count(neighbors, count, num_embeddings) - - unique_ids = [_id for idx, _id in enumerate(ids) if idx in keep] - duplicate_ids = [_id for idx, _id in enumerate(ids) if idx not in keep] - - results._thresh = thresh - results._unique_ids = unique_ids - results._duplicate_ids = duplicate_ids - results._neighbors_map = None - - 
logger.info("Uniqueness computation complete") - - -def duplicates_view( - results, type_field, id_field, dist_field, sort_by, reverse -): - samples = results.view - patches_field = results.config.patches_field - neighbors_map = results.neighbors_map - - if patches_field is not None and not isinstance(samples, fop.PatchesView): - samples = samples.to_patches(patches_field) - - if sort_by == "distance": - key = lambda kv: min(e[1] for e in kv[1]) - elif sort_by == "count": - key = lambda kv: len(kv[1]) - else: - raise ValueError( - "Invalid sort_by='%s'; supported values are %s" - % (sort_by, ("distance", "count")) - ) - - existing_ids = set(samples.values("id")) - neighbors = [(k, v) for k, v in neighbors_map.items() if k in existing_ids] - - ids = [] - types = [] - nearest_ids = [] - dists = [] - for _id, duplicates in sorted(neighbors, key=key, reverse=reverse): - ids.append(_id) - types.append("nearest") - nearest_ids.append(_id) - dists.append(0) - - for dup_id, dist in duplicates: - ids.append(dup_id) - types.append("duplicate") - nearest_ids.append(_id) - dists.append(dist) - - dups_view = samples.select(ids, ordered=True) - - if type_field is not None: - dups_view._dataset.add_sample_field(type_field, fof.StringField) - dups_view.set_values(type_field, types) - - if id_field is not None: - dups_view._dataset.add_sample_field(id_field, fof.StringField) - dups_view.set_values(id_field, nearest_ids) - - if dist_field is not None: - dups_view._dataset.add_sample_field(dist_field, fof.FloatField) - dups_view.set_values(dist_field, dists) - - return dups_view - - -def unique_view(results): - samples = results.view - patches_field = results.config.patches_field - unique_ids = results.unique_ids - - if patches_field is not None and not isinstance(samples, fop.PatchesView): - samples = samples.to_patches(patches_field) - - return samples.select(unique_ids) - - -def visualize_duplicates(results, visualization, backend, **kwargs): - visualization = _ensure_visualization(results, visualization) - - samples = results.view - duplicate_ids = results.duplicate_ids - neighbors_map = results.neighbors_map - patches_field = results.config.patches_field - - if patches_field is not None: - _, id_path = samples._get_label_field_path(patches_field, "id") - ids = samples.values(id_path, unwind=True) - else: - ids = samples.values("id") - - dup_ids = set(duplicate_ids) - nearest_ids = set(neighbors_map.keys()) - - labels = [] - for _id in ids: - if _id in dup_ids: - label = "duplicate" - elif _id in nearest_ids: - label = "nearest" - else: - label = "unique" - - labels.append(label) - - if backend == "plotly": - kwargs["edges"] = _build_edges(ids, neighbors_map) - kwargs["edges_title"] = "neighbors" - kwargs["labels_title"] = "type" - - with visualization.use_view(samples): - return visualization.visualize( - labels=labels, - classes=["unique", "nearest", "duplicate"], - backend=backend, - **kwargs, - ) - - -def visualize_unique(results, visualization, backend, **kwargs): - visualization = _ensure_visualization(results, visualization) - - samples = results.view - unique_ids = results.unique_ids - patches_field = results.config.patches_field - - if patches_field is not None: - _, id_path = samples._get_label_field_path(patches_field, "id") - ids = samples.values(id_path, unwind=True) - else: - ids = samples.values("id") - - _unique_ids = set(unique_ids) - - labels = [] - for _id in ids: - if _id in _unique_ids: - label = "unique" - else: - label = "other" - - labels.append(label) - - with 
visualization.use_view(samples): - return visualization.visualize( - labels=labels, - classes=["other", "unique"], - backend=backend, - **kwargs, - ) - - -def _unique_no_sort(values): - seen = set() - return [v for v in values if v not in seen and not seen.add(v)] - - -def _ensure_neighbors(results): - if results._neighbors_helper is not None: - return - - embeddings = results.embeddings - metric = results.config.metric - results._neighbors_helper = NeighborsHelper(embeddings, metric) - - -def _ensure_visualization(results, visualization): - if visualization is not None: - return visualization - - import fiftyone.brain as fb - - samples = results._samples - embeddings = results.embeddings - patches_field = results.config.patches_field - - if embeddings.shape[1] in {2, 3}: - config = fb.VisualizationConfig( - patches_field=patches_field, num_dims=2 - ) - return fb.VisualizationResults(samples, config, embeddings) - - return fb.compute_visualization( - samples, - patches_field=patches_field, - embeddings=embeddings, - num_dims=2, - seed=51, - verbose=True, - ) - - -def _build_edges(ids, neighbors_map): - inds_map = {_id: idx for idx, _id in enumerate(ids)} - - edges = [] - for nearest_id, duplicates in neighbors_map.items(): - nearest_ind = inds_map[nearest_id] - for dup_id, _ in duplicates: - dup_ind = inds_map[dup_id] - edges.append((dup_ind, nearest_ind)) - - return np.array(edges) - - -def _remove_duplicates_thresh(neighbors, thresh, num_embeddings): - # When not using brute force, we approximate cosine distance by computing - # Euclidean distance on unit-norm embeddings. ED = sqrt(2 * CD), so we need - # to scale the threshold appropriately - if getattr(neighbors, _COSINE_HACK_ATTR, False): - thresh = np.sqrt(2.0 * thresh) - - inds = neighbors.radius_neighbors(radius=thresh, return_distance=False) - - keep = set(range(num_embeddings)) - for ind in range(num_embeddings): - if ind in keep: - keep -= {i for i in inds[ind] if i > ind} - - return keep - - -def _remove_duplicates_count( - neighbors, num_keep, num_embeddings, init_thresh=None -): - if init_thresh is not None: - thresh = init_thresh - else: - thresh = 1 - - if num_keep <= 0: - logger.info("threshold: -, kept: %d, target: %d", num_keep, num_keep) - return set(), None - - if num_keep >= num_embeddings: - logger.info("threshold: -, kept: %d, target: %d", num_keep, num_keep) - return set(range(num_embeddings)), None - - thresh_lims = [0, None] - num_target = num_keep - num_keep = -1 - - while True: - keep = _remove_duplicates_thresh(neighbors, thresh, num_embeddings) - num_keep_last = num_keep - num_keep = len(keep) - - logger.info( - "threshold: %f, kept: %d, target: %d", - thresh, - num_keep, - num_target, - ) - - if num_keep == num_target or ( - num_keep == num_keep_last - and thresh_lims[1] is not None - and thresh_lims[1] - thresh_lims[0] < 1e-6 - ): - break - - if num_keep < num_target: - # Need to decrease threshold - thresh_lims[1] = thresh - thresh = 0.5 * (thresh_lims[0] + thresh) - else: - # Need to increase threshold - thresh_lims[0] = thresh - if thresh_lims[1] is not None: - thresh = 0.5 * (thresh + thresh_lims[1]) - else: - thresh *= 2 - - return keep, thresh - - -def _plot_distances_plotly(dists, metric, thresh, bins, log, **kwargs): - import plotly.graph_objects as go - import fiftyone.core.plots.plotly as fopl - - counts, edges = np.histogram(dists, bins=bins) - left_edges = edges[:-1] - widths = edges[1:] - edges[:-1] - customdata = np.stack((edges[:-1], edges[1:]), axis=1) - - hover_lines = [ - "<b>count: 
%{y}</b>", - "distance: [%{customdata[0]:.2f}, %{customdata[1]:.2f}]", - ] - hovertemplate = "<br>".join(hover_lines) + "<extra></extra>" - - bar = go.Bar( - x=left_edges, - y=counts, - width=widths, - customdata=customdata, - offset=0, - marker_color="#FF6D04", - hovertemplate=hovertemplate, - showlegend=False, - ) - - traces = [bar] - - if thresh is not None: - line = go.Scatter( - x=[thresh, thresh], - y=[0, max(counts)], - mode="lines", - line=dict(color="#17191C", width=3), - hovertemplate="<b>thresh: %{x}</b><extra></extra>", - showlegend=False, - ) - traces.append(line) - - figure = go.Figure(traces) - - figure.update_layout( - xaxis_title="nearest neighbor distance (%s)" % metric, - yaxis_title="count", - hovermode="x", - yaxis_rangemode="tozero", - ) - - if log: - figure.update_layout(yaxis_type="log") - - figure.update_layout(**fopl._DEFAULT_LAYOUT) - figure.update_layout(**kwargs) - - if foc.is_jupyter_context(): - figure = fopl.PlotlyNotebookPlot(figure) - - return figure - - -def _plot_distances_mpl( - dists, metric, thresh, bins, log, ax=None, figsize=None, **kwargs -): - import matplotlib.pyplot as plt - - if ax is None: - fig, ax = plt.subplots() - else: - fig = ax.figure - - counts, edges = np.histogram(dists, bins=bins) - left_edges = edges[:-1] - widths = edges[1:] - edges[:-1] - - ax.bar( - left_edges, - counts, - width=widths, - align="edge", - color="#FF6D04", - **kwargs, - ) - - if thresh is not None: - ax.vlines(thresh, 0, max(counts), color="#17191C", linewidth=3) - - if log: - ax.set_yscale("log") - - ax.set_xlabel("nearest neighbor distance (%s)" % metric) - ax.set_ylabel("count") - - if figsize is not None: - fig.set_size_inches(*figsize) - - plt.tight_layout() - - return fig diff --git a/fiftyone/brain/internal/core/sklearn.py b/fiftyone/brain/internal/core/sklearn.py new file mode 100644 index 00000000..6fc529a3 --- /dev/null +++ b/fiftyone/brain/internal/core/sklearn.py @@ -0,0 +1,847 @@ +""" +Sklearn similarity backend. + +| Copyright 2017-2023, Voxel51, Inc. +| `voxel51.com <https://voxel51.com/>`_ +| +""" +import logging + +import numpy as np +import sklearn.metrics as skm +import sklearn.neighbors as skn +import sklearn.preprocessing as skp + +import eta.core.utils as etau + +from fiftyone.brain.similarity import ( + DuplicatesMixin, + SimilarityConfig, + Similarity, + SimilarityIndex, +) +import fiftyone.brain.internal.core.utils as fbu + + +logger = logging.getLogger(__name__) + +_AGGREGATIONS = { + "mean": np.mean, + "post-mean": np.mean, + "post-min": np.min, + "post-max": np.max, +} + +_MAX_PRECOMPUTE_DISTS = 15000 # ~1.7GB to store distance matrix in-memory +_COSINE_HACK_ATTR = "_cosine_hack" + + +class SklearnSimilarityConfig(SimilarityConfig): + """Configuration for the sklearn similarity backend. + + Args: + embeddings_field (None): the sample field containing the embeddings, + if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known + patches_field (None): the sample field defining the patches being + analyzed, if any + supports_prompts (None): whether this run supports prompt queries + metric ("cosine"): the embedding distance metric to use. 
See + ``sklearn.metrics.pairwise_distance`` for supported values + """ + + def __init__( + self, + embeddings_field=None, + model=None, + patches_field=None, + supports_prompts=None, + metric="cosine", + **kwargs, + ): + super().__init__( + embeddings_field=embeddings_field, + model=model, + patches_field=patches_field, + supports_prompts=supports_prompts, + **kwargs, + ) + self.metric = metric + + @property + def method(self): + return "sklearn" + + @property + def max_k(self): + return None + + @property + def supports_least_similarity(self): + return True + + @property + def supported_aggregations(self): + return tuple(_AGGREGATIONS.keys()) + + +class SklearnSimilarity(Similarity): + """Sklearn similarity factory. + + Args: + config: an :class:`SklearnSimilarityConfig` + """ + + def initialize(self, samples, brain_key): + return SklearnSimilarityIndex( + samples, self.config, brain_key, backend=self + ) + + +class SklearnSimilarityIndex(SimilarityIndex, DuplicatesMixin): + """Class for interacting with sklearn similarity indexes. + + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`SklearnSimilarityConfig` used + brain_key: the brain key + embeddings (None): a ``num_embeddings x num_dims`` array of embeddings + sample_ids (None): a ``num_embeddings`` array of sample IDs + label_ids (None): a ``num_embeddings`` array of label IDs, if + applicable + backend (None): a :class:`SklearnSimilarity` instance + """ + + def __init__( + self, + samples, + config, + brain_key, + embeddings=None, + sample_ids=None, + label_ids=None, + backend=None, + ): + embeddings, sample_ids, label_ids = self._parse_data( + samples, + config, + embeddings=embeddings, + sample_ids=sample_ids, + label_ids=label_ids, + ) + + self._embeddings = embeddings + self._sample_ids = sample_ids + self._label_ids = label_ids + + self._ids_to_inds = None + self._curr_ids_to_inds = None + self._neighbors_helper = None + + SimilarityIndex.__init__( + self, samples, config, brain_key, backend=backend + ) + DuplicatesMixin.__init__(self) + + @property + def embeddings(self): + return self._embeddings + + @property + def sample_ids(self): + return self._sample_ids + + @property + def label_ids(self): + return self._label_ids + + @property + def total_index_size(self): + return len(self._sample_ids) + + def add_to_index( + self, + embeddings, + sample_ids, + label_ids=None, + overwrite=True, + allow_existing=True, + warn_existing=False, + reload=True, + ): + _sample_ids, _label_ids, ii, jj = fbu.add_ids( + sample_ids, + label_ids, + self._sample_ids, + self._label_ids, + patches_field=self.config.patches_field, + overwrite=overwrite, + allow_existing=allow_existing, + warn_existing=warn_existing, + ) + + if ii.size == 0: + return + + _embeddings = embeddings[ii, :] + + if self.config.embeddings_field is not None: + fbu.add_embeddings( + self._samples, + _embeddings, + _sample_ids, + _label_ids, + self.config.embeddings_field, + patches_field=self.config.patches_field, + ) + + _e = self._embeddings + + n = _e.shape[0] + if n == 0: + _e = np.empty((0, embeddings.shape[1]), dtype=embeddings.dtype) + + d = _e.shape[1] + m = jj[-1] - n + 1 + + if m > 0: + if _e.size > 0: + _e = np.concatenate((_e, np.empty((m, d), dtype=_e.dtype))) + else: + _e = np.empty_like(_embeddings) + + _e[jj, :] = _embeddings + + self._embeddings = _e + self._sample_ids = _sample_ids + self._label_ids = _label_ids + + if reload: + self.reload() + + def remove_from_index( + self, + sample_ids=None, + 
label_ids=None, + allow_missing=True, + warn_missing=False, + reload=True, + ): + _sample_ids, _label_ids, rm_inds = fbu.remove_ids( + sample_ids, + label_ids, + self._sample_ids, + self._label_ids, + patches_field=self.config.patches_field, + allow_missing=allow_missing, + warn_missing=warn_missing, + ) + + if rm_inds.size == 0: + return + + if self.config.embeddings_field is not None: + fbu.remove_embeddings( + self._samples, + self.config.embeddings_field, + sample_ids=_sample_ids, + label_ids=_label_ids, + patches_field=self.config.patches_field, + ) + + _embeddings = np.delete(self._embeddings, rm_inds) + + self._embeddings = _embeddings + self._sample_ids = _sample_ids + self._label_ids = _label_ids + + if reload: + self.reload() + + def use_view(self, *args, **kwargs): + self._curr_ids_to_inds = None + return super().use_view(*args, **kwargs) + + def get_embeddings( + self, + sample_ids=None, + label_ids=None, + allow_missing=True, + warn_missing=False, + ): + if label_ids is not None: + if self.config.patches_field is None: + raise ValueError("This index does not support label IDs") + + if sample_ids is not None: + logger.warning( + "Ignoring sample IDs when label IDs are provided" + ) + + inds = _get_inds( + label_ids, + self.label_ids, + "label", + allow_missing, + warn_missing, + ) + + embeddings = self._embeddings[inds, :] + sample_ids = self.sample_ids[inds] + label_ids = np.asarray(label_ids) + elif sample_ids is not None: + if etau.is_str(sample_ids): + sample_ids = [sample_ids] + + if self.config.patches_field is not None: + sample_ids = set(sample_ids) + bools = [_id in sample_ids for _id in self.sample_ids] + inds = np.nonzero(bools)[0] + else: + inds = _get_inds( + sample_ids, + self.sample_ids, + "sample", + allow_missing, + warn_missing, + ) + + embeddings = self._embeddings[inds, :] + sample_ids = self.sample_ids[inds] + if self.config.patches_field is not None: + label_ids = self.label_ids[inds] + else: + label_ids = None + else: + embeddings = self._embeddings.copy() + sample_ids = self.sample_ids.copy() + if self.config.patches_field is not None: + label_ids = self.label_ids.copy() + else: + label_ids = None + + return embeddings, sample_ids, label_ids + + def reload(self): + if self.config.embeddings_field is not None: + embeddings, sample_ids, label_ids = self._parse_data( + self._samples, self.config + ) + + self._embeddings = embeddings + self._sample_ids = sample_ids + self._label_ids = label_ids + + self._ids_to_inds = None + self._curr_ids_to_inds = None + self._neighbors_helper = None + + super().reload() + + def cleanup(self): + pass + + def attributes(self): + attrs = super().attributes() + + if self.config.embeddings_field is None: + attrs.extend(["embeddings", "sample_ids", "label_ids"]) + + return attrs + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + if aggregation is not None: + return self._kneighbors_aggregate( + query, k, reverse, aggregation, return_dists + ) + + ( + query, + query_inds, + full_index, + single_query, + ) = self._parse_neighbors_query(query) + + can_use_dists = full_index or query_inds is not None + neighbors, dists = self._get_neighbors(can_use_dists=can_use_dists) + + if dists is not None: + # Use pre-computed distances + if query_inds is not None: + _dists = dists[query_inds, :] + _rows = range(len(query_inds)) + else: + _dists = dists + _rows = range(dists.shape[0]) + + inds = np.argmin(_dists, axis=1) + if return_dists: + dists = _dists[_rows, inds] + else: 
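+                # Distances weren't requested, so don't materialize them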
+ dists = None + else: + if return_dists: + dists, inds = neighbors.kneighbors( + X=query, n_neighbors=k, return_distance=True + ) + else: + inds = neighbors.kneighbors( + X=query, n_neighbors=k, return_distance=False + ) + dists = None + + return self._format_output( + inds, dists, full_index, single_query, return_dists + ) + + def _radius_neighbors(self, query=None, thresh=None, return_dists=False): + ( + query, + query_inds, + full_index, + single_query, + ) = self._parse_neighbors_query(query) + + can_use_dists = full_index or query_inds is not None + neighbors, dists = self._get_neighbors(can_use_dists=can_use_dists) + + # When not using brute force, we approximate cosine distance by + # computing Euclidean distance on unit-norm embeddings. + # ED = sqrt(2 * CD), so we need to scale the threshold appropriately + if getattr(neighbors, _COSINE_HACK_ATTR, False): + thresh = np.sqrt(2.0 * thresh) + + if dists is not None: + # Use pre-computed distances + if query_inds is not None: + _dists = dists[query_inds, :] + else: + _dists = dists + + inds = [np.nonzero(d <= thresh)[0] for d in _dists] + if return_dists: + dists = [d[i] for i, d in zip(inds, _dists)] + else: + dists = None + else: + if return_dists: + dists, inds = neighbors.radius_neighbors( + X=query, radius=thresh, return_distance=True + ) + else: + dists = None + inds = neighbors.radius_neighbors( + X=query, radius=thresh, return_distance=False + ) + + return self._format_output( + inds, dists, full_index, single_query, return_dists + ) + + def _kneighbors_aggregate( + self, query, k, reverse, aggregation, return_dists + ): + if query is None: + raise ValueError("Full index queries do not support aggregation") + + if aggregation not in _AGGREGATIONS: + raise ValueError( + "Unsupported aggregation method '%s'. 
Supported values are %s" + % (aggregation, tuple(_AGGREGATIONS.keys())) + ) + + query, query_inds, _, _ = self._parse_neighbors_query(query) + + # Pre-aggregation + if aggregation == "mean": + if query.shape[0] > 1: + query = query.mean(axis=0, keepdims=True) + query_inds = None + + aggregation = None + + can_use_dists = query_inds is not None + _, dists = self._get_neighbors( + can_use_neighbors=False, can_use_dists=can_use_dists + ) + + if dists is not None: + # Use pre-computed distances + dists = dists[query_inds, :] + else: + keep_inds = self._current_inds + index_embeddings = self._embeddings + if keep_inds is not None: + index_embeddings = index_embeddings[keep_inds] + + dists = skm.pairwise_distances( + query, index_embeddings, metric=self.config.metric + ) + + # Post-aggregation + if aggregation is not None: + agg_fcn = _AGGREGATIONS[aggregation] + dists = agg_fcn(dists, axis=0) + else: + dists = dists[0, :] + + inds = np.argsort(dists) + if reverse: + inds = np.flip(inds) + + if k is not None: + inds = inds[:k] + + if self.config.patches_field is not None: + ids = self.current_label_ids + else: + ids = self.current_sample_ids + + if return_dists: + return ids[inds], dists[inds] + + return ids[inds] + + def _parse_neighbors_query(self, query): + # Full index + if query is None: + return None, None, True, False + + if etau.is_str(query): + query_ids = [query] + single_query = True + else: + query = np.asarray(query) + + # Query vector(s) + if np.issubdtype(query.dtype, np.number): + single_query = query.ndim == 1 + if single_query: + query = query[np.newaxis, :] + + return query, None, False, single_query + + query_ids = list(query) + single_query = False + + # Retrieve indices into active `dists` matrix, if possible + ids_to_inds = self._get_ids_to_inds(full=False) + query_inds = [] + for _id in query_ids: + _ind = ids_to_inds.get(_id, None) + if _ind is not None: + query_inds.append(_ind) + else: + # At least one query ID is not in the active index + query_inds = None + break + + # Retrieve embeddings + ids_to_inds = self._get_ids_to_inds(full=True) + inds = [] + bad_ids = [] + for _id in query_ids: + _ind = ids_to_inds.get(_id, None) + if _ind is not None: + inds.append(_ind) + else: + bad_ids.append(_id) + + inds = np.array(inds) + + if bad_ids: + raise ValueError( + "Query IDs %s do not exist in this index" % bad_ids + ) + + query = self._embeddings[inds, :] + + if query_inds is not None: + query_inds = np.array(query_inds) + + return query, query_inds, False, single_query + + def _get_ids_to_inds(self, full=False): + if full: + if self._ids_to_inds is None: + if self.config.patches_field is not None: + ids = self.label_ids + else: + ids = self.sample_ids + + self._ids_to_inds = {_id: i for i, _id in enumerate(ids)} + + return self._ids_to_inds + + if self._curr_ids_to_inds is None: + if self.config.patches_field is not None: + ids = self.current_label_ids + else: + ids = self.current_sample_ids + + self._curr_ids_to_inds = {_id: i for i, _id in enumerate(ids)} + + return self._curr_ids_to_inds + + def _get_neighbors(self, can_use_neighbors=True, can_use_dists=True): + if self._neighbors_helper is None: + self._neighbors_helper = NeighborsHelper( + self._embeddings, self.config.metric + ) + + return self._neighbors_helper.get_neighbors( + keep_inds=self._current_inds, + can_use_neighbors=can_use_neighbors, + can_use_dists=can_use_dists, + ) + + def _format_output( + self, inds, dists, full_index, single_query, return_dists + ): + if full_index: + return (inds, dists) if 
return_dists else inds + + if self.config.patches_field is not None: + index_ids = self.current_label_ids + else: + index_ids = self.current_sample_ids + + ids = [[index_ids[i] for i in _inds] for _inds in inds] + if return_dists: + dists = [list(d) for d in dists] + + if single_query: + ids = ids[0] + if return_dists: + dists = dists[0] + + return (ids, dists) if return_dists else ids + + @staticmethod + def _parse_data( + samples, + config, + embeddings=None, + sample_ids=None, + label_ids=None, + ): + if embeddings is None: + embeddings, sample_ids, label_ids = fbu.get_embeddings( + samples._dataset, + patches_field=config.patches_field, + embeddings_field=config.embeddings_field, + ) + elif sample_ids is None: + sample_ids, label_ids = fbu.get_ids( + samples, + patches_field=config.patches_field, + data=embeddings, + data_type="embeddings", + ) + + return embeddings, sample_ids, label_ids + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + embeddings = d.get("embeddings", None) + if embeddings is not None: + embeddings = np.array(embeddings) + + sample_ids = d.get("sample_ids", None) + if sample_ids is not None: + sample_ids = np.array(sample_ids) + + label_ids = d.get("label_ids", None) + if label_ids is not None: + label_ids = np.array(label_ids) + + return cls( + samples, + config, + brain_key, + embeddings=embeddings, + sample_ids=sample_ids, + label_ids=label_ids, + ) + + +class NeighborsHelper(object): + + _UNAVAILABLE = "UNAVAILABLE" + + def __init__(self, embeddings, metric): + self.embeddings = embeddings + self.metric = metric + + self._initialized = False + self._full_dists = None + + self._curr_keep_inds = None + self._curr_neighbors = None + self._curr_dists = None + + def get_neighbors( + self, + keep_inds=None, + can_use_neighbors=True, + can_use_dists=True, + ): + iokay = self._same_keep_inds(keep_inds) + nokay = not can_use_neighbors or self._curr_neighbors is not None + dokay = not can_use_dists or self._curr_dists is not None + + if iokay and nokay and dokay: + neighbors = self._curr_neighbors + dists = self._curr_dists + else: + neighbors, dists = self._build( + keep_inds=keep_inds, + can_use_neighbors=can_use_neighbors, + can_use_dists=can_use_dists, + ) + + if not iokay: + self._curr_keep_inds = keep_inds + + if self._curr_neighbors is None or not iokay: + self._curr_neighbors = neighbors + + if self._curr_dists is None or not iokay: + self._curr_dists = dists + + if not can_use_neighbors or neighbors is self._UNAVAILABLE: + neighbors = None + + if not can_use_dists or dists is self._UNAVAILABLE: + dists = None + + return neighbors, dists + + def _same_keep_inds(self, keep_inds): + # This handles either argument being None + return np.array_equal(keep_inds, self._curr_keep_inds) + + def _build( + self, keep_inds=None, can_use_neighbors=True, can_use_dists=True + ): + if can_use_dists: + if ( + self._full_dists is None + and len(self.embeddings) <= _MAX_PRECOMPUTE_DISTS + ): + self._full_dists = self._build_dists(self.embeddings) + + if self._full_dists is not None: + if keep_inds is not None: + dists = self._full_dists[keep_inds, :][:, keep_inds] + else: + dists = self._full_dists + elif ( + keep_inds is not None + and len(keep_inds) <= _MAX_PRECOMPUTE_DISTS + ): + dists = self._build_dists(self.embeddings[keep_inds]) + else: + dists = self._UNAVAILABLE + else: + dists = None + + if can_use_neighbors: + if not isinstance(dists, np.ndarray): + embeddings = self.embeddings + if keep_inds is not None: + embeddings = embeddings[keep_inds] + + 
neighbors = self._build_neighbors(embeddings) + else: + neighbors = self._UNAVAILABLE + else: + neighbors = None + + return neighbors, dists + + def _build_dists(self, embeddings): + logger.info("Generating index for %d embeddings...", len(embeddings)) + + # Center embeddings + embeddings = np.asarray(embeddings) + embeddings -= embeddings.mean(axis=0, keepdims=True) + + dists = skm.pairwise_distances(embeddings, metric=self.metric) + + logger.info("Index complete") + + return dists + + def _build_neighbors(self, embeddings): + logger.info( + "Generating neighbors graph for %d embeddings...", + len(embeddings), + ) + + # Center embeddings + embeddings = np.asarray(embeddings) + embeddings -= embeddings.mean(axis=0, keepdims=True) + + metric = self.metric + + if metric == "cosine": + # Nearest neighbors does not directly support cosine distance, so + # we approximate via euclidean distance on unit-norm embeddings + cosine_hack = True + embeddings = skp.normalize(embeddings, axis=1) + metric = "euclidean" + else: + cosine_hack = False + + neighbors = skn.NearestNeighbors(metric=metric) + neighbors.fit(embeddings) + + setattr(neighbors, _COSINE_HACK_ATTR, cosine_hack) + + logger.info("Index complete") + + return neighbors + + +def _get_inds(ids, index_ids, ftype, allow_missing, warn_missing): + if etau.is_str(ids): + ids = [ids] + + ids_map = {_id: i for i, _id in enumerate(index_ids)} + + inds = [] + bad_ids = [] + + for _id in ids: + idx = ids_map.get(_id, None) + if idx is not None: + inds.append(idx) + else: + bad_ids.append(_id) + + num_missing = len(bad_ids) + + if num_missing > 0: + if not allow_missing: + raise ValueError( + "Found %d %s IDs (eg '%s') that are not present in the index" + % (num_missing, ftype, bad_ids[0]) + ) + + if warn_missing: + logger.warning( + "Ignoring %d %s IDs that are not present in the index", + num_missing, + ftype, + ) + + return np.array(inds) diff --git a/fiftyone/brain/internal/core/uniqueness.py b/fiftyone/brain/internal/core/uniqueness.py index 6d85632a..864cbe5a 100644 --- a/fiftyone/brain/internal/core/uniqueness.py +++ b/fiftyone/brain/internal/core/uniqueness.py @@ -1,7 +1,7 @@ """ Uniqueness methods. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ @@ -25,7 +25,6 @@ logger = logging.getLogger(__name__) - _ALLOWED_ROI_FIELD_TYPES = ( fol.Detection, fol.Detections, @@ -65,23 +64,25 @@ def compute_uniqueness( # to dense clusters of related samples. 
# - fov.validate_collection(samples) + fov.validate_image_collection(samples) if roi_field is not None: fov.validate_collection_label_fields( samples, roi_field, _ALLOWED_ROI_FIELD_TYPES ) - if samples.media_type == fom.VIDEO: - raise ValueError("Uniqueness does not yet support video collections") - if model is None and embeddings is None: model = fbm.load_model(_DEFAULT_MODEL) if batch_size is None: batch_size = _DEFAULT_BATCH_SIZE if etau.is_str(embeddings): - embeddings_field = embeddings + embeddings_field = fbu.parse_embeddings_field( + samples, + embeddings, + patches_field=roi_field, + allow_embedded=model is None, + ) embeddings = None else: embeddings_field = None @@ -103,7 +104,7 @@ def compute_uniqueness( else: agg_fcn = None - embeddings = fbu.get_embeddings( + embeddings, sample_ids, _ = fbu.get_embeddings( samples, model=model, patches_field=roi_field, @@ -121,8 +122,12 @@ def compute_uniqueness( logger.info("Computing uniqueness...") uniqueness = _compute_uniqueness(embeddings) + # Ensure field exists, even if `uniqueness` is empty samples._dataset.add_sample_field(uniqueness_field, fof.FloatField) - samples.set_values(uniqueness_field, uniqueness) + + uniqueness = {_id: u for _id, u in zip(sample_ids, uniqueness)} + if uniqueness: + samples.set_values(uniqueness_field, uniqueness, key_field="id") brain_method.save_run_results(samples, brain_key, None) @@ -130,11 +135,12 @@ def compute_uniqueness( def _compute_uniqueness(embeddings, metric="euclidean"): - # @todo convert to a parameter with a default, for tuning K = 3 num_embeddings = len(embeddings) - if num_embeddings <= _MAX_PRECOMPUTE_DISTS: + if num_embeddings <= K: + return [1] * num_embeddings + elif num_embeddings <= _MAX_PRECOMPUTE_DISTS: embeddings = skm.pairwise_distances(embeddings, metric=metric) metric = "precomputed" else: diff --git a/fiftyone/brain/internal/core/utils.py b/fiftyone/brain/internal/core/utils.py index af5a68f1..486aedef 100644 --- a/fiftyone/brain/internal/core/utils.py +++ b/fiftyone/brain/internal/core/utils.py @@ -1,16 +1,24 @@ """ Utilities. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. 
| `voxel51.com <https://voxel51.com/>`_
 |
 """
+import itertools
 import logging
+import random
+import string
 
 import numpy as np
 
 import eta.core.utils as etau
 
+import fiftyone.brain as fob
+import fiftyone.core.fields as fof
+import fiftyone.core.labels as fol
+import fiftyone.core.models as fom
+import fiftyone.core.media as fomm
 import fiftyone.core.patches as fop
 import fiftyone.zoo as foz
 from fiftyone import ViewField as F
@@ -19,88 +27,682 @@
 
 logger = logging.getLogger(__name__)
 
 
-def get_ids(samples, patches_field=None):
+def parse_data(
+    samples,
+    patches_field=None,
+    data=None,
+    data_type="embeddings",
+    allow_missing=True,
+    warn_missing=True,
+):
+    if isinstance(data, fob.SimilarityIndex):
+        # Pass the caller's arguments through rather than hardcoding them
+        return get_embeddings_from_index(
+            samples,
+            data,
+            patches_field=patches_field,
+            allow_missing=allow_missing,
+            warn_missing=warn_missing,
+        )
+
+    _validate_args(samples, patches_field=patches_field)
+
     if patches_field is None:
-        sample_ids = np.array(samples.values("id"))
-        return sample_ids, None
+        if isinstance(data, dict):
+            sample_ids, data = zip(*data.items())
+            return np.array(data), np.array(sample_ids), None
+
+        sample_ids, _ = get_ids(samples, data=data, data_type=data_type)
+        return data, sample_ids, None
+
+    if isinstance(data, dict):
+        value = next(iter(data.values()), None)
+        if isinstance(value, np.ndarray) and value.ndim == 1:
+            label_ids, data = zip(*data.items())
+            return _parse_label_data(
+                samples,
+                patches_field,
+                label_ids,
+                data,
+                data_type,
+                allow_missing,
+                warn_missing,
+            )
 
-    sample_ids = []
-    label_ids = []
-    for l in samples._get_selected_labels(fields=patches_field):
-        sample_ids.append(l["sample_id"])
-        label_ids.append(l["label_id"])
+    sample_ids, label_ids = get_ids(
+        samples,
+        patches_field=patches_field,
+        data=data,
+        data_type=data_type,
+    )
 
-    return np.array(sample_ids), np.array(label_ids)
+    return data, sample_ids, label_ids
+
+
+def _parse_label_data(
+    samples,
+    patches_field,
+    label_ids,
+    data,
+    data_type,
+    allow_missing,
+    warn_missing,
+):
+    if samples._is_patches:
+        sample_id_path = "sample_id"
+    else:
+        sample_id_path = "id"
+
+    label_type, label_id_path = samples._get_label_field_path(
+        patches_field, "id"
+    )
+    is_list_field = issubclass(label_type, fol._LABEL_LIST_FIELDS)
+
+    ref_sample_ids, ref_label_ids = samples._dataset.values(
+        [sample_id_path, label_id_path]
+    )
+
+    if is_list_field:
+        ids_map = {}
+        for _sample_id, _lids in zip(ref_sample_ids, ref_label_ids):
+            if _lids:
+                for _label_id in _lids:
+                    ids_map[_label_id] = _sample_id
+    else:
+        ids_map = dict(zip(ref_label_ids, ref_sample_ids))
+
+    _data = []
+    _sample_ids = []
+    _label_ids = []
+    _missing_ids = []
+    for _lid, _d in zip(label_ids, data):
+        _sid = ids_map.get(_lid, None)
+        if _sid is not None:
+            _data.append(_d)
+            _sample_ids.append(_sid)
+            _label_ids.append(_lid)
+        else:
+            _missing_ids.append(_lid)
+
+    num_missing = len(_missing_ids)
+    if num_missing > 0:
+        if not allow_missing:
+            raise ValueError(
+                "Unable to retrieve sample IDs for %d label IDs (eg %s)"
+                % (num_missing, _missing_ids[0])
+            )
+        if warn_missing:
+            logger.warning(
+                "Ignoring %s for %d label IDs (eg %s) for which sample IDs "
+                "could not be retrieved",
+                data_type,
+                num_missing,
+                _missing_ids[0],
+            )
+
+    return np.array(_data), np.array(_sample_ids), np.array(_label_ids)
+
+
+def get_embeddings_from_index(
+    samples,
+ similarity_index, + patches_field=None, + allow_missing=True, + warn_missing=True, +): if patches_field is None: - _sample_ids = view.values("id") - keep_inds = _get_keep_inds(_sample_ids, sample_ids) - return view, np.array(_sample_ids), None, keep_inds + if samples._is_patches: + sample_id_path = "sample_id" + else: + sample_id_path = "id" - # Filter labels in patches view + sample_ids = samples.values(sample_id_path) + label_ids = None + else: + if samples._is_patches: + label_id_path = "id" + else: + _, label_id_path = samples._get_label_field_path( + patches_field, "id" + ) - if ( - isinstance(view, fop.PatchesView) - and patches_field != view.patches_field - ): + sample_ids = None + label_ids = samples.values(label_id_path, unwind=True) + + return similarity_index.get_embeddings( + sample_ids=sample_ids, + label_ids=label_ids, + allow_missing=allow_missing, + warn_missing=warn_missing, + ) + + +def get_ids( + samples, + patches_field=None, + data=None, + data_type="embeddings", + handle_missing="skip", + ref_sample_ids=None, +): + _validate_args(samples, patches_field=patches_field) + + if patches_field is None: + if ref_sample_ids is not None: + sample_ids = ref_sample_ids + else: + sample_ids = samples.values("id") + + if data is not None and len(sample_ids) != len(data): + raise ValueError( + "The number of %s (%d) in these results no longer matches the " + "number of samples (%d) in the collection. You must " + "regenerate the results" + % (data_type, len(data), len(sample_ids)) + ) + + return np.array(sample_ids), None + + sample_ids, label_ids = _get_patch_ids( + samples, + patches_field, + handle_missing=handle_missing, + ref_sample_ids=ref_sample_ids, + ) + + if data is not None and len(sample_ids) != len(data): raise ValueError( - "This patches view contains labels from field '%s', not " - "'%s'" % (view.patches_field, patches_field) + "The number of %s (%d) in these results no longer matches the " + "number of labels (%d) in the '%s' field of the collection. 
You " + "must regenerate the results" + % (data_type, len(data), len(sample_ids), patches_field) ) - if isinstance(view, fop.EvaluationPatchesView) and patches_field not in ( - view.gt_field, - view.pred_field, - ): - raise ValueError( - "This evaluation patches view contains patches from " - "fields '%s' and '%s', not '%s'" - % (view.gt_field, view.pred_field, patches_field) + return np.array(sample_ids), np.array(label_ids) + + +def filter_ids( + samples, + index_sample_ids, + index_label_ids, + patches_field=None, + allow_missing=True, + warn_missing=False, +): + _validate_args(samples, patches_field=patches_field) + + if patches_field is None: + if samples._is_patches: + sample_ids = np.array(samples.values("sample_id")) + else: + sample_ids = np.array(samples.values("id")) + + if index_sample_ids is None: + return sample_ids, None, None, None + + keep_inds, good_inds, bad_ids = _parse_ids( + sample_ids, + index_sample_ids, + "samples", + allow_missing, + warn_missing, + ) + + if bad_ids is not None: + sample_ids = sample_ids[good_inds] + + return sample_ids, None, keep_inds, good_inds + + sample_ids, label_ids = _get_patch_ids(samples, patches_field) + + if index_label_ids is None: + return sample_ids, label_ids, None, None + + keep_inds, good_inds, bad_ids = _parse_ids( + label_ids, + index_label_ids, + "labels", + allow_missing, + warn_missing, + ) + + if bad_ids is not None: + sample_ids = sample_ids[good_inds] + label_ids = label_ids[good_inds] + + return sample_ids, label_ids, keep_inds, good_inds + + +def _get_patch_ids( + samples, patches_field, handle_missing="skip", ref_sample_ids=None +): + if samples._is_patches: + sample_id_path = "sample_id" + else: + sample_id_path = "id" + + label_type, label_id_path = samples._get_label_field_path( + patches_field, "id" + ) + is_list_field = issubclass(label_type, fol._LABEL_LIST_FIELDS) + + sample_ids, label_ids = samples.values([sample_id_path, label_id_path]) + + if ref_sample_ids is not None: + sample_ids, label_ids = _apply_ref_sample_ids( + sample_ids, label_ids, ref_sample_ids ) - labels = view._get_selected_labels(fields=patches_field) - _sample_ids = [l["sample_id"] for l in labels] - _label_ids = [l["label_id"] for l in labels] - keep_inds = _get_keep_inds(_label_ids, label_ids) - return view, np.array(_sample_ids), np.array(_label_ids), keep_inds + if is_list_field: + sample_ids, label_ids = _flatten_list_ids( + sample_ids, label_ids, handle_missing + ) + + return np.array(sample_ids), np.array(label_ids) + + +def _apply_ref_sample_ids(sample_ids, label_ids, ref_sample_ids): + ref_label_ids = [None] * len(ref_sample_ids) + inds_map = {_id: i for i, _id in enumerate(ref_sample_ids)} + for _id, _lid in zip(sample_ids, label_ids): + idx = inds_map.get(_id, None) + if idx is not None: + ref_label_ids[idx] = _lid + + return ref_sample_ids, ref_label_ids + + +def _flatten_list_ids(sample_ids, label_ids, handle_missing): + _sample_ids = [] + _label_ids = [] + _add_missing = handle_missing == "image" + + for _id, _lids in zip(sample_ids, label_ids): + if _lids: + for _lid in _lids: + _sample_ids.append(_id) + _label_ids.append(_lid) + elif _add_missing: + _sample_ids.append(_id) + _label_ids.append(None) + return _sample_ids, _label_ids -def _get_keep_inds(ids, ref_ids): - if len(ids) == len(ref_ids) and list(ids) == list(ref_ids): - return None - inds_map = {_id: idx for idx, _id in enumerate(ref_ids)} +def _parse_ids(ids, index_ids, ftype, allow_missing, warn_missing): + if np.array_equal(ids, index_ids): + return None, None, None 
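+
+    # Map each index ID to its position so that collection IDs can be
+    # resolved against the index in O(1) per lookup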
+    inds_map = {_id: idx for idx, _id in enumerate(index_ids)}
 
     keep_inds = []
+    bad_inds = []
     bad_ids = []
 
-    for _id in ids:
+    for _idx, _id in enumerate(ids):
         ind = inds_map.get(_id, None)
         if ind is not None:
             keep_inds.append(ind)
         else:
+            bad_inds.append(_idx)
             bad_ids.append(_id)
 
-    num_bad = len(bad_ids)
+    num_missing_index = len(index_ids) - len(keep_inds)
+    if num_missing_index > 0:
+        if not allow_missing:
+            raise ValueError(
+                "The index contains %d %s that are not present in the "
+                "provided collection" % (num_missing_index, ftype)
+            )
+
+        if warn_missing:
+            logger.warning(
+                "Ignoring %d %s from the index that are not present in the "
+                "provided collection",
+                num_missing_index,
+                ftype,
+            )
+
+    num_missing_collection = len(bad_ids)
+    if num_missing_collection > 0:
+        if not allow_missing:
+            raise ValueError(
+                "The provided collection contains %d %s not present in the "
+                "index" % (num_missing_collection, ftype)
+            )
+
+        if warn_missing:
+            logger.warning(
+                "Ignoring %d %s from the provided collection that are not "
+                "present in the index",
+                num_missing_collection,
+                ftype,
+            )
+
+        bad_inds = np.array(bad_inds, dtype=np.int64)
+
+        good_inds = np.full(ids.shape, True)
+        good_inds[bad_inds] = False
+    else:
+        good_inds = None
+        bad_ids = None
+
+    keep_inds = np.array(keep_inds, dtype=np.int64)
+
+    return keep_inds, good_inds, bad_ids
+
+
+def skip_ids(samples, ids, patches_field=None, warn_existing=False):
+    sample_ids, label_ids = get_ids(samples, patches_field=patches_field)
+
+    if patches_field is not None:
+        # The IDs to skip are those present in both the collection and `ids`
+        exclude_ids = list(set(label_ids) & set(ids))
+        num_existing = len(exclude_ids)
+
+        if num_existing > 0:
+            if warn_existing:
+                logger.warning("Skipping %d existing label IDs", num_existing)
+
+            samples = samples.exclude_labels(
+                ids=exclude_ids, fields=patches_field
+            )
+    else:
+        exclude_ids = list(set(sample_ids) & set(ids))
+        num_existing = len(exclude_ids)
+
+        if num_existing > 0:
+            if warn_existing:
+                logger.warning("Skipping %d existing sample IDs", num_existing)
+
+            samples = samples.exclude(exclude_ids)
+
+    return samples
+
+
+def add_ids(
+    sample_ids,
+    label_ids,
+    index_sample_ids,
+    index_label_ids,
+    patches_field=None,
+    overwrite=True,
+    allow_existing=True,
+    warn_existing=False,
+):
+    if patches_field is not None:
+        ids = label_ids
+        index_ids = index_label_ids
+    else:
+        ids = sample_ids
+        index_ids = index_sample_ids
+
+    ii = []
+    jj = []
+
+    ids_map = {_id: _i for _i, _id in enumerate(index_ids)}
+    new_idx = len(index_ids)
+    for _i, _id in enumerate(ids):
+        _idx = ids_map.get(_id, None)
+        if _idx is None:
+            ii.append(_i)
+            jj.append(new_idx)
+            new_idx += 1
+        elif overwrite:
+            ii.append(_i)
+            jj.append(_idx)
+
+    ii = np.array(ii)
+    jj = np.array(jj)
+
+    n = len(index_sample_ids)
+
+    if not allow_existing:
+        existing_inds = np.nonzero(jj < n)[0]
+        num_existing = existing_inds.size
+
+        if num_existing > 0:
+            if warn_existing:
+                logger.warning(
+                    "Ignoring %d IDs (eg '%s') that are already present in "
+                    "the index",
+                    num_existing,
+                    ids[ii[0]],
+                )
+
+                ii = np.delete(ii, existing_inds)
+                jj = np.delete(jj, existing_inds)
+            else:
+                raise ValueError(
+                    "Found %d IDs (eg '%s') that are already present in the "
+                    "index" % (num_existing, ids[ii[0]])
+                )
+
+    if ii.size > 0:
+        sample_ids = np.array(sample_ids)
+        if patches_field is not None:
+            label_ids = np.array(label_ids)
+
+        m = jj[-1] - n + 1
+
+        if n == 0:
+            index_sample_ids = np.array([], dtype=sample_ids.dtype)
+            if patches_field is not None:
+                index_label_ids = np.array([], dtype=label_ids.dtype)
+
+        if m > 0:
+            index_sample_ids = np.concatenate(
+                (index_sample_ids, np.empty(m, dtype=index_sample_ids.dtype))
+            )
+            if patches_field is not None:
+                index_label_ids = np.concatenate(
+                    (index_label_ids, np.empty(m, dtype=index_label_ids.dtype))
+                )
+
+    index_sample_ids[jj] = sample_ids[ii]
+    if patches_field is not None:
+        index_label_ids[jj] = label_ids[ii]
+
+    return index_sample_ids, index_label_ids, ii, jj
+
+
+def add_embeddings(
+    samples,
+    embeddings,
+    sample_ids,
+    label_ids,
+    embeddings_field,
+    patches_field=None,
+):
+    dataset = samples._dataset
+
+    if patches_field is not None:
+        _, embeddings_path = dataset._get_label_field_path(
+            patches_field, embeddings_field
+        )
+
+        values = dict(zip(label_ids, embeddings))
+        dataset.set_label_values(embeddings_path, values)
+    else:
+        values = dict(zip(sample_ids, embeddings))
+        dataset.set_values(embeddings_field, values, key_field="id")
+
 
-    if num_bad == 1:
+def remove_ids(
+    sample_ids,
+    label_ids,
+    index_sample_ids,
+    index_label_ids,
+    patches_field=None,
+    allow_missing=True,
+    warn_missing=False,
+):
+    rm_inds = []
+
+    if sample_ids is not None:
+        rm_inds.extend(
+            _find_ids(
+                sample_ids,
+                index_sample_ids,
+                allow_missing,
+                warn_missing,
+                "sample",
+            )
+        )
+
+    if label_ids is not None:
+        rm_inds.extend(
+            _find_ids(
+                label_ids,
+                index_label_ids,
+                allow_missing,
+                warn_missing,
+                "label",
+            )
+        )
+
+    rm_inds = np.array(rm_inds)
+
+    if rm_inds.size > 0:
+        index_sample_ids = np.delete(index_sample_ids, rm_inds)
+        if patches_field is not None:
+            index_label_ids = np.delete(index_label_ids, rm_inds)
+
+    return index_sample_ids, index_label_ids, rm_inds
+
+
+def _find_ids(ids, index_ids, allow_missing, warn_missing, ftype):
+    found_inds = []
+    missing_ids = []
+
+    ids_map = {_id: _i for _i, _id in enumerate(index_ids)}
+    for _id in ids:
+        ind = ids_map.get(_id, None)
+        if ind is not None:
+            found_inds.append(ind)
+        else:
+            missing_ids.append(_id)
+
+    num_missing = len(missing_ids)
+
+    if num_missing > 0:
+        if not allow_missing:
+            raise ValueError(
+                "Found %d %s IDs (eg '%s') that are not present in the index"
+                % (num_missing, ftype, missing_ids[0])
+            )
+
+        if warn_missing:
+            logger.warning(
+                "Ignoring %d %s IDs (eg '%s') that are not present in the "
+                "index",
+                num_missing,
+                ftype,
+                missing_ids[0],
+            )
+
+    return found_inds
+
+
+def remove_embeddings(
+    samples,
+    embeddings_field,
+    sample_ids=None,
+    label_ids=None,
+    patches_field=None,
+):
+    dataset = samples._dataset
+
+    if patches_field is not None:
+        _, embeddings_path = dataset._get_label_field_path(
+            patches_field, embeddings_field
+        )
+
+        if sample_ids is not None and label_ids is None:
+            _, id_path = dataset._get_label_field_path(patches_field, "id")
+            label_ids = dataset.select(sample_ids).values(id_path, unwind=True)
+
+        if label_ids is not None:
+            values = dict(zip(label_ids, itertools.repeat(None)))
+            dataset.set_label_values(embeddings_path, values)
+    elif sample_ids is not None:
+        values = dict(zip(sample_ids, itertools.repeat(None)))
+        dataset.set_values(embeddings_field, values, key_field="id")
+
+
+def filter_values(values, keep_inds, patches_field=None):
+    if patches_field:
+        _values = list(itertools.chain.from_iterable(values))
+    else:
+        _values = values
+
+    _values = np.asarray(_values)
+
+    if _values.size == keep_inds.size:
+        _values = _values[keep_inds]
+    else:
+        num_expected = np.count_nonzero(keep_inds)
+        if _values.size != num_expected:
+            raise ValueError(
+                "Expected %d raw values or %d pre-filtered values; found %d "
"values" % (keep_inds.size, num_expected, values.size) + ) + + # @todo we might need to re-ravel patch values here in the future + # We currently do not do this because all downstream users of this data + # will gracefully handle either flat or nested list data + + return _values + + +def get_values(samples, path_or_expr, ids, patches_field=None): + _validate_args( + samples, patches_field=patches_field, path_or_expr=path_or_expr + ) + return samples._get_values_by_id( + path_or_expr, ids, link_field=patches_field + ) + + +def parse_embeddings_field( + samples, embeddings_field, patches_field=None, allow_embedded=True +): + if not etau.is_str(embeddings_field): raise ValueError( - "The provided view contains ID '%s' not present in the index" - % bad_ids[0] + "Invalid embeddings_field=%s; expected a string field name" + % embeddings_field ) - if num_bad > 1: + if patches_field is None: + _embeddings_field, is_frame_field = samples._handle_frame_field( + embeddings_field + ) + + if not allow_embedded and "." in _embeddings_field: + ftype = "frame" if is_frame_field else "sample" + raise ValueError( + "Invalid embeddings_field=%s; expected a top-level %s field " + "name that contains no '.'" % (_embeddings_field, ftype) + ) + + return embeddings_field + + if embeddings_field.startswith(patches_field + "."): + _, root = samples._get_label_field_path(patches_field) + "." + if not embeddings_field.startswith(root): + raise ValueError( + "Invalid embeddings_field=%s for patches_field=%s" + % (embeddings_field, patches_field) + ) + + embeddings_field = embeddings_field[len(root) + 1] + + if not allow_embedded and "." in embeddings_field: raise ValueError( - "The provided view contains %d IDs (eg '%s') not present in the " - "index" % (num_bad, bad_ids[0]) + "Invalid embeddings_field=%s for patches_field=%s; expected a " + "label attribute name that contains no '.'" + % (embeddings_field, patches_field) ) - return np.array(keep_inds, dtype=np.int64) + return embeddings_field def get_embeddings( @@ -117,10 +719,30 @@ def get_embeddings( num_workers=None, skip_failures=True, ): - if model is not None: + _validate_args(samples, patches_field=patches_field) + + if model is None and embeddings_field is None and embeddings is None: + return _empty_embeddings(patches_field) + + if isinstance(embeddings, fob.SimilarityIndex): + allow_missing = handle_missing == "skip" + return get_embeddings_from_index( + samples, + embeddings, + patches_field=patches_field, + allow_missing=allow_missing, + warn_missing=True, + ) + + if embeddings is None and model is not None: if etau.is_str(model): model = foz.load_zoo_model(model) + if not isinstance(model, fom.Model): + raise ValueError( + "Model must be a %s; found %s" % (fom.Model, type(model)) + ) + if patches_field is not None: logger.info("Computing patch embeddings...") embeddings = samples.compute_patch_embeddings( @@ -135,6 +757,16 @@ def get_embeddings( skip_failures=skip_failures, ) else: + if ( + samples.media_type == fomm.VIDEO + and model.media_type == fomm.IMAGE + ): + raise ValueError( + "This method cannot use image models to compute video " + "embeddings. 
Try providing precomputed video embeddings " + "or converting to a frames view via `to_frames()` first" + ) + logger.info("Computing embeddings...") embeddings = samples.compute_embeddings( model, @@ -143,85 +775,181 @@ def get_embeddings( num_workers=num_workers, skip_failures=skip_failures, ) - elif embeddings_field is not None: - embeddings = samples.values(embeddings_field) - if embeddings is None: - raise ValueError( - "One of `model`, `embeddings_field`, or `embeddings` must be " - "provided" + if embeddings is None and embeddings_field is not None: + embeddings, samples = _load_embeddings( + samples, embeddings_field, patches_field=patches_field + ) + ref_sample_ids = None + else: + if isinstance(embeddings, dict): + embeddings = [ + embeddings.get(_id, None) for _id in samples.values("id") + ] + + embeddings, ref_sample_ids = _handle_missing_embeddings( + embeddings, samples ) - if isinstance(embeddings, dict): - embeddings = [ - embeddings.get(_id, None) for _id in samples.values("id") - ] + if not isinstance(embeddings, np.ndarray) and not embeddings: + return _empty_embeddings(patches_field) if patches_field is not None: - _handle_missing_patch_embeddings(embeddings, samples, patches_field) - if agg_fcn is not None: - embeddings = [agg_fcn(e) for e in embeddings] - embeddings = np.stack(embeddings) + embeddings = np.stack([agg_fcn(e) for e in embeddings]) else: embeddings = np.concatenate(embeddings, axis=0) + elif not isinstance(embeddings, np.ndarray): + embeddings = np.stack(embeddings) + + if agg_fcn is not None: + patches_field = None + + sample_ids, label_ids = get_ids( + samples, + patches_field=patches_field, + data=embeddings, + data_type="embeddings", + handle_missing=handle_missing, + ref_sample_ids=ref_sample_ids, + ) + + return embeddings, sample_ids, label_ids + + +def get_unique_name(name, ref_names): + ref_names = set(ref_names) + + if name in ref_names: + name += "-" + _get_random_characters(6) + + while name in ref_names: + name += _get_random_characters(1) + + return name + + +def _get_random_characters(n): + return "".join( + random.choice(string.ascii_lowercase + string.digits) for _ in range(n) + ) + + +def _empty_embeddings(patches_field): + embeddings = np.empty((0, 0), dtype=float) + sample_ids = np.array([], dtype="<U24") + + if patches_field is not None: + label_ids = np.array([], dtype="<U24") else: - _handle_missing_embeddings(embeddings) + label_ids = None - if agg_fcn is not None: - embeddings = [agg_fcn(e) for e in embeddings] + return embeddings, sample_ids, label_ids - embeddings = np.stack(embeddings) - return embeddings +def _load_embeddings(samples, embeddings_field, patches_field=None): + if patches_field is not None: + label_type, embeddings_path = samples._get_label_field_path( + patches_field, embeddings_field + ) + is_list_field = issubclass(label_type, fol._LABEL_LIST_FIELDS) + else: + embeddings_path = embeddings_field + is_list_field = False + if is_list_field: + samples = samples.filter_labels( + patches_field, F(embeddings_field) != None + ) + else: + samples = samples.match(F(embeddings_path) != None) -def _handle_missing_embeddings(embeddings): - if isinstance(embeddings, np.ndarray): - return + if samples.has_field(embeddings_path): + _field = None + else: + _field = fof.VectorField() - missing_inds = [] - num_dims = None - for idx, embedding in enumerate(embeddings): - if embedding is None: - missing_inds.append(idx) - elif num_dims is None: - num_dims = embedding.size + embeddings = samples.values(embeddings_path, 
_field=_field) - if not missing_inds: + if is_list_field: + embeddings = [np.stack(e) for e in embeddings if e] + + return embeddings, samples + + +def _validate_args(samples, patches_field=None, path_or_expr=None): + if patches_field is not None: + _validate_patches_args( + samples, patches_field, path_or_expr=path_or_expr + ) + else: + _validate_samples_args(samples, path_or_expr=path_or_expr) + + +def _validate_samples_args(samples, path_or_expr=None): + if not etau.is_str(path_or_expr): return - missing_embedding = np.zeros(num_dims or 16) - for idx in missing_inds: - embeddings[idx] = missing_embedding.copy() + path, _, list_fields, _, _ = samples._parse_field_name(path_or_expr) + + if list_fields: + raise ValueError( + "Values path '%s' contains invalid list field '%s'" + % (path, list_fields[0]) + ) - logger.warning("Using zeros for %d missing embeddings", len(missing_inds)) +def _validate_patches_args(samples, patches_field, path_or_expr=None): + if samples.media_type == fomm.VIDEO: + raise ValueError( + "This method does not directly support frame patches for video " + "collections. Try converting to a frames view via `to_frames()` " + "first" + ) + + if etau.is_str(path_or_expr) and not path_or_expr.startswith( + patches_field + "." + ): + raise ValueError( + "Values path '%s' must start with patches field '%s'" + % (path_or_expr, patches_field) + ) + + if ( + isinstance(samples, fop.PatchesView) + and patches_field != samples.patches_field + ): + raise ValueError( + "This patches view contains labels from field '%s', not " + "'%s'" % (samples.patches_field, patches_field) + ) + + if isinstance( + samples, fop.EvaluationPatchesView + ) and patches_field not in ( + samples.gt_field, + samples.pred_field, + ): + raise ValueError( + "This evaluation patches view contains patches from " + "fields '%s' and '%s', not '%s'" + % (samples.gt_field, samples.pred_field, patches_field) + ) + + +def _handle_missing_embeddings(embeddings, samples): + if isinstance(embeddings, np.ndarray): + return embeddings, None -def _handle_missing_patch_embeddings(embeddings, samples, patches_field): missing_inds = [] - num_dims = None for idx, embedding in enumerate(embeddings): if embedding is None: missing_inds.append(idx) - elif num_dims is None: - num_dims = embedding.shape[1] if not missing_inds: - return + return embeddings, None - missing_embedding = np.zeros(num_dims or 16) + embeddings = [e for e in embeddings if e is not None] + ref_sample_ids = list(np.delete(samples.values("id"), missing_inds)) - _, labels_path = samples._get_label_field_path(patches_field) - patch_counts = samples.values(F(labels_path).length()) - - num_missing = 0 - for idx in missing_inds: - count = patch_counts[idx] - embeddings[idx] = np.tile(missing_embedding, (count, 1)) - num_missing += count - - if num_missing > 0: - logger.warning( - "Using zeros for %d missing patch embeddings", num_missing - ) + return embeddings, ref_sample_ids diff --git a/fiftyone/brain/internal/core/visualization.py b/fiftyone/brain/internal/core/visualization.py index d972caf8..e26a8b2d 100644 --- a/fiftyone/brain/internal/core/visualization.py +++ b/fiftyone/brain/internal/core/visualization.py @@ -1,7 +1,7 @@ """ Visualization methods. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. 
| `voxel51.com <https://voxel51.com/>`_ | """ @@ -14,6 +14,8 @@ import eta.core.utils as etau import fiftyone.core.brain as fob +import fiftyone.core.expressions as foe +import fiftyone.core.plots as fop import fiftyone.core.utils as fou import fiftyone.core.validation as fov import fiftyone.zoo as foz @@ -32,7 +34,6 @@ logger = logging.getLogger(__name__) - _DEFAULT_MODEL = "mobilenet-v2-imagenet-torch" _DEFAULT_BATCH_SIZE = None @@ -69,12 +70,17 @@ def compute_visualization( embeddings_field = None num_dims = points.shape[1] elif model is None and embeddings is None: - model = foz.load_zoo_model(_DEFAULT_MODEL) + model = _DEFAULT_MODEL if batch_size is None: batch_size = _DEFAULT_BATCH_SIZE if etau.is_str(embeddings): - embeddings_field = embeddings + embeddings_field = fbu.parse_embeddings_field( + samples, + embeddings, + patches_field=patches_field, + allow_embedded=model is None, + ) embeddings = None else: embeddings_field = None @@ -90,7 +96,7 @@ def compute_visualization( brain_method.register_run(samples, brain_key) if points is None: - embeddings = fbu.get_embeddings( + embeddings, sample_ids, label_ids = fbu.get_embeddings( samples, model=model, patches_field=patches_field, @@ -105,17 +111,97 @@ def compute_visualization( logger.info("Generating visualization...") points = brain_method.fit(embeddings) + else: + points, sample_ids, label_ids = fbu.parse_data( + samples, + patches_field=patches_field, + data=points, + data_type="points", + ) + + results = VisualizationResults( + samples, + config, + brain_key, + points, + sample_ids=sample_ids, + label_ids=label_ids, + ) - results = VisualizationResults(samples, config, points) brain_method.save_run_results(samples, brain_key, results) return results -class Visualization(fob.BrainMethod): - def ensure_requirements(self): - pass +def values(results, path_or_expr): + samples = results.view + patches_field = results.config.patches_field + if patches_field is not None: + ids = results.current_label_ids + else: + ids = results.current_sample_ids + + return fbu.get_values( + samples, path_or_expr, ids, patches_field=patches_field + ) + + +def visualize( + results, + labels=None, + sizes=None, + classes=None, + backend="plotly", + **kwargs, +): + points = results.current_points + samples = results.view + patches_field = results.config.patches_field + good_inds = results._curr_good_inds + if patches_field is not None: + ids = results.current_label_ids + else: + ids = results.current_sample_ids + + if good_inds is not None: + if etau.is_container(labels) and not _is_expr(labels): + labels = fbu.filter_values( + labels, good_inds, patches_field=patches_field + ) + + if etau.is_container(sizes) and not _is_expr(sizes): + sizes = fbu.filter_values( + sizes, good_inds, patches_field=patches_field + ) + + if labels is not None and _is_expr(labels): + labels = fbu.get_values( + samples, labels, ids, patches_field=patches_field + ) + + if sizes is not None and _is_expr(sizes): + sizes = fbu.get_values( + samples, sizes, ids, patches_field=patches_field + ) + + return fop.scatterplot( + points, + samples=samples, + ids=ids, + link_field=patches_field, + labels=labels, + sizes=sizes, + classes=classes, + backend=backend, + **kwargs, + ) + +def _is_expr(arg): + return isinstance(arg, (foe.ViewExpression, dict)) + + +class Visualization(fob.BrainMethod): def fit(self, embeddings): raise NotImplementedError("subclass must implement fit()") @@ -126,9 +212,6 @@ def get_fields(self, samples, brain_key): return fields - def cleanup(self, samples, 
brain_key): - pass - class UMAPVisualization(Visualization): def ensure_requirements(self): diff --git a/fiftyone/brain/internal/models/__init__.py b/fiftyone/brain/internal/models/__init__.py index 4b9f39e5..93b5040f 100644 --- a/fiftyone/brain/internal/models/__init__.py +++ b/fiftyone/brain/internal/models/__init__.py @@ -1,7 +1,7 @@ """ Brain models. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/internal/models/simple_resnet.py b/fiftyone/brain/internal/models/simple_resnet.py index 5317c06a..8ed489f5 100644 --- a/fiftyone/brain/internal/models/simple_resnet.py +++ b/fiftyone/brain/internal/models/simple_resnet.py @@ -4,7 +4,7 @@ The original implementation of this is from David Page's work on fast model training with resnets at https://github.com/davidcpage/cifar10-fast. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/internal/models/torch.py b/fiftyone/brain/internal/models/torch.py index 049d7610..2b7c7abe 100644 --- a/fiftyone/brain/internal/models/torch.py +++ b/fiftyone/brain/internal/models/torch.py @@ -1,7 +1,7 @@ """ PyTorch utilities. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/fiftyone/brain/similarity.py b/fiftyone/brain/similarity.py index be4b2705..dbe50a63 100644 --- a/fiftyone/brain/similarity.py +++ b/fiftyone/brain/similarity.py @@ -1,125 +1,468 @@ """ Similarity interface. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ +from collections import defaultdict +from copy import deepcopy +import logging + +from bson import ObjectId import numpy as np import eta.core.utils as etau +import fiftyone.brain as fb import fiftyone.core.brain as fob +import fiftyone.core.context as foc +import fiftyone.core.fields as fof +import fiftyone.core.labels as fol +import fiftyone.core.patches as fop +import fiftyone.core.stages as fos import fiftyone.core.utils as fou +import fiftyone.core.validation as fov +import fiftyone.zoo as foz +from fiftyone import ViewField as F -fbs = fou.lazy_import("fiftyone.brain.internal.core.similarity") fbu = fou.lazy_import("fiftyone.brain.internal.core.utils") -class SimilarityResults(fob.BrainResults): - """Class storing the results of :meth:`fiftyone.brain.compute_similarity`. 
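+# A minimal sketch of the intended workflow (illustrative only; the public
+# entry point is ``fiftyone.brain.compute_similarity()``, and the query ID
+# below is an assumed placeholder):
+#
+#   import fiftyone.brain as fob
+#
+#   index = fob.compute_similarity(dataset, brain_key="img_sim")
+#   view = index.sort_by_similarity(query_id, k=10)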
+logger = logging.getLogger(__name__) + +_DEFAULT_MODEL = "mobilenet-v2-imagenet-torch" +_DEFAULT_BATCH_SIZE = None + + +def compute_similarity( + samples, + patches_field, + embeddings, + brain_key, + model, + force_square, + alpha, + batch_size, + num_workers, + skip_failures, + backend, + **kwargs, +): + """See ``fiftyone/brain/__init__.py``.""" + + fov.validate_collection(samples) + + if model is None and embeddings is None: + model = _DEFAULT_MODEL + if batch_size is None: + batch_size = _DEFAULT_BATCH_SIZE + + if etau.is_str(embeddings): + embeddings_field = fbu.parse_embeddings_field( + samples, + embeddings, + patches_field=patches_field, + allow_embedded=model is None, + ) + embeddings = None + else: + embeddings_field = None + + if etau.is_str(model): + _model = foz.load_zoo_model(model) + try: + supports_prompts = _model.can_embed_prompts + except: + supports_prompts = None + else: + _model = model + supports_prompts = None + + config = _parse_config( + backend, + embeddings_field=embeddings_field, + patches_field=patches_field, + model=model, + supports_prompts=supports_prompts, + **kwargs, + ) + brain_method = config.build() + brain_method.ensure_requirements() + + if brain_key is not None: + # Don't allow overwriting an existing run with same key, since we + # need the existing run in order to perform workflows like + # automatically cleaning up the backend's index + brain_method.register_run(samples, brain_key, overwrite=False) + + results = brain_method.initialize(samples, brain_key) + + if embeddings is not False: + embeddings, sample_ids, label_ids = fbu.get_embeddings( + samples, + model=_model, + patches_field=patches_field, + embeddings=embeddings, + embeddings_field=embeddings_field, + force_square=force_square, + alpha=alpha, + batch_size=batch_size, + num_workers=num_workers, + skip_failures=skip_failures, + ) + else: + # Special syntax to allow embeddings to be added later + embeddings = None + + if embeddings is not None: + results.add_to_index(embeddings, sample_ids, label_ids=label_ids) + + brain_method.save_run_results(samples, brain_key, results) + + return results + + +def _parse_config(name, **kwargs): + if name is None: + name = fb.brain_config.default_similarity_backend + + backends = fb.brain_config.similarity_backends + + if name not in backends: + raise ValueError( + "Unsupported backend '%s'. The available backends are %s" + % (name, sorted(backends.keys())) + ) + + params = deepcopy(backends[name]) + + config_cls = kwargs.pop("config_cls", None) + + if config_cls is None: + config_cls = params.pop("config_cls", None) + + if config_cls is None: + raise ValueError("Similarity backend '%s' has no `config_cls`" % name) + + if etau.is_str(config_cls): + config_cls = etau.get_class(config_cls) + + params.update(**kwargs) + return config_cls(**params) + + +class SimilarityConfig(fob.BrainMethodConfig): + """Similarity configuration. 
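+
+        A backend-specific subclass of this config is constructed by
+        ``_parse_config()`` above. A minimal sketch (illustrative only)::
+
+            config = _parse_config("sklearn", patches_field=None)
+            brain_method = config.build()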
Args: - samples: the :class:`fiftyone.core.collections.SampleCollection` used - config: the :class:`SimilarityConfig` used - embeddings: a ``num_embeddings x num_dims`` array of embeddings + embeddings_field (None): the sample field containing the embeddings, + if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known + patches_field (None): the sample field defining the patches being + analyzed, if any + supports_prompts (False): whether this run supports prompt queries """ - def __init__(self, samples, config, embeddings): - sample_ids, label_ids = fbu.get_ids( - samples, patches_field=config.patches_field + def __init__( + self, + embeddings_field=None, + model=None, + patches_field=None, + supports_prompts=None, + **kwargs, + ): + if model is not None and not etau.is_str(model): + model = None + + self.embeddings_field = embeddings_field + self.model = model + self.patches_field = patches_field + self.supports_prompts = supports_prompts + super().__init__(**kwargs) + + @property + def method(self): + """The name of the similarity backend.""" + raise NotImplementedError("subclass must implement method") + + @property + def max_k(self): + """A maximum k value for nearest neighbor queries, or None if there is + no limit. + """ + raise NotImplementedError("subclass must implement max_k") + + @property + def supports_least_similarity(self): + """Whether this backend supports least similarity queries.""" + raise NotImplementedError( + "subclass must implement supports_least_similarity" ) - if len(sample_ids) != len(embeddings): - ptype = "label" if config.patches_field is not None else "sample" - raise ValueError( - "Number of %s IDs (%d) does not match number of embeddings " - "(%d). You may have missing data/labels that you need to omit " - "from your view" % (ptype, len(sample_ids), len(embeddings)) - ) + @property + def supported_aggregations(self): + """A tuple of supported values for the ``aggregation`` parameter of the + backend's + :meth:`sort_by_similarity() <SimilarityIndex.sort_by_similarity>` and + :meth:`_kneighbors() <SimilarityIndex._kneighbors>` methods. + """ + raise NotImplementedError( + "subclass must implement supported_aggregations" + ) - self.embeddings = embeddings + def load_credentials(self, **kwargs): + self._load_parameters(**kwargs) - self._samples = samples - self._config = config - self._sample_ids = sample_ids - self._label_ids = label_ids - self._last_view = None + def _load_parameters(self, **kwargs): + name = self.method + parameters = fb.brain_config.similarity_backends.get(name, {}) + + for name, value in kwargs.items(): + if value is None: + value = parameters.get(name, None) + + if value is not None: + setattr(self, name, value) + + +class Similarity(fob.BrainMethod): + """Base class for similarity factories. + + Args: + config: a :class:`SimilarityConfig` + """ + + def initialize(self, samples, brain_key): + """Initializes a similarity index. 
+ + Args: + samples: a :class:`fiftyone.core.collections.SampleColllection` + brain_key: the brain key + + Returns: + a :class:`SimilarityIndex` + """ + raise NotImplementedError("subclass must implement initialize()") + + def get_fields(self, samples, brain_key): + fields = [] + if self.config.patches_field is not None: + fields.append(self.config.patches_field) + + if self.config.embeddings_field is not None: + fields.append(self.config.embeddings_field) + + return fields + + +class SimilarityIndex(fob.BrainResults): + """Base class for similarity indexes. + + Args: + samples: the :class:`fiftyone.core.collections.SampleCollection` used + config: the :class:`SimilarityConfig` used + brain_key: the brain key + backend (None): a :class:`Similarity` backend + """ + + def __init__(self, samples, config, brain_key, backend=None): + super().__init__(samples, config, brain_key, backend=backend) + + self._model = None self._curr_view = None self._curr_sample_ids = None self._curr_label_ids = None self._curr_keep_inds = None - self._neighbors_helper = None - self._thresh = None - self._unique_ids = None - self._duplicate_ids = None - self._neighbors_map = None + self._curr_missing_size = None + self._last_view = None + self._last_views = [] self.use_view(samples) def __enter__(self): - self._last_view = self.view + self._last_views.append(self._last_view) return self def __exit__(self, *args): - self.use_view(self._last_view) - self._last_view = None + try: + last_view = self._last_views.pop() + except: + last_view = self._samples + + self.use_view(last_view) @property def config(self): - """The :class:`SimilarityConfig` for the results.""" + """The :class:`SimilarityConfig` for these results.""" return self._config @property - def index_size(self): - """The number of examples in the index. + def sample_ids(self): + """The sample IDs of the full index, or ``None`` if not supported.""" + return None - If :meth:`use_view` has been called to restrict the index, this - property will reflect the size of the active index. + @property + def label_ids(self): + """The label IDs of the full index, or ``None`` if not applicable or + not supported. """ - return len(self._curr_sample_ids) + return None + + @property + def total_index_size(self): + """The total number of data points in the index. + + If :meth:`use_view` has been called to restrict the index, this value + may be larger than the current :meth:`index_size`. + """ + raise NotImplementedError("subclass must implement total_index_size") @property def view(self): """The :class:`fiftyone.core.collections.SampleCollection` against which results are currently being generated. - If :meth:`use_view` has been called, this view may be a subset of the - collection on which the full index was generated. + If :meth:`use_view` has been called, this view may be different than + the collection on which the full index was generated. """ return self._curr_view @property - def thresh(self): - """The threshold used by the last call to :meth:`find_duplicates` or - :meth:`find_unique`. + def current_sample_ids(self): + """The sample IDs of the currently active data points in the index. + + If :meth:`use_view` has been called, this may be a subset of the full + index. """ - return self._thresh + return self._curr_sample_ids @property - def unique_ids(self): - """A list of unique IDs from the last call to :meth:`find_duplicates` - or :meth:`find_unique`. 
+
+    def current_label_ids(self):
+        """The label IDs of the currently active data points in the index, or
+        ``None`` if not applicable.
+
+        If :meth:`use_view` has been called, this may be a subset of the full
+        index.
         """
-        return self._unique_ids
+        return self._curr_label_ids
 
     @property
-    def duplicate_ids(self):
-        """A list of duplicate IDs from the last call to
-        :meth:`find_duplicates` or :meth:`find_unique`.
+    def _current_inds(self):
+        """The indices of :meth:`current_sample_ids` in :meth:`sample_ids`, or
+        ``None`` if not supported or if the full index is currently being used.
         """
-        return self._duplicate_ids
+        return self._curr_keep_inds
 
     @property
-    def neighbors_map(self):
-        """A dictionary mapping IDs to lists of ``(dup_id, dist)`` tuples from
-        the last call to :meth:`find_duplicates`.
+    def index_size(self):
+        """The number of active data points in the index.
+
+        If :meth:`use_view` has been called to restrict the index, this
+        property will reflect the size of the active index.
         """
-        return self._neighbors_map
+        return len(self._curr_sample_ids)
+
+    @property
+    def missing_size(self):
+        """The total number of data points in :meth:`view` that are missing
+        from this index, or ``None`` if unknown.
+
+        This property is only applicable when :meth:`use_view` has been called,
+        and it will be ``None`` if no data points are missing or when the
+        backend does not support it.
+        """
+        return self._curr_missing_size
 
-    def use_view(self, sample_collection):
-        """Restricts the index to the provided view, which must be a subset of
-        the full index's collection.
+    def add_to_index(
+        self,
+        embeddings,
+        sample_ids,
+        label_ids=None,
+        overwrite=True,
+        allow_existing=True,
+        warn_existing=False,
+        reload=True,
+    ):
+        """Adds the given embeddings to the index.
+
+        Args:
+            embeddings: a ``num_embeddings x num_dims`` array of embeddings
+            sample_ids: a ``num_embeddings`` array of sample IDs
+            label_ids (None): a ``num_embeddings`` array of label IDs, if
+                applicable
+            overwrite (True): whether to replace (True) or ignore (False)
+                existing embeddings with the same sample/label IDs
+            allow_existing (True): whether to ignore (True) or raise an error
+                (False) when ``overwrite`` is False and a provided ID already
+                exists in the index
+            warn_existing (False): whether to log a warning if an embedding is
+                not added to the index because its ID already exists
+            reload (True): whether to call :meth:`reload` to refresh the
+                current view after the update
+        """
+        raise NotImplementedError("subclass must implement add_to_index()")
+
+    def remove_from_index(
+        self,
+        sample_ids=None,
+        label_ids=None,
+        allow_missing=True,
+        warn_missing=False,
+        reload=True,
+    ):
+        """Removes the specified embeddings from the index.
+
+        Args:
+            sample_ids (None): an array of sample IDs
+            label_ids (None): an array of label IDs, if applicable
+            allow_missing (True): whether to allow the index to not contain IDs
+                that you provide (True) or whether to raise an error in this
+                case (False)
+            warn_missing (False): whether to log a warning if the index does
+                not contain IDs that you provide
+            reload (True): whether to call :meth:`reload` to refresh the
+                current view after the update
+        """
+        raise NotImplementedError(
+            "subclass must implement remove_from_index()"
+        )
+
+    def get_embeddings(
+        self,
+        sample_ids=None,
+        label_ids=None,
+        allow_missing=True,
+        warn_missing=False,
+    ):
+        """Retrieves the embeddings for the given IDs from the index.
+
+        If no IDs are provided, the entire index is returned.
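+
+        Example (a minimal sketch; the ``index`` variable is an assumed
+        handle on a loaded index)::
+
+            ids = index.view.take(5).values("id")
+            embeddings, sample_ids, label_ids = index.get_embeddings(
+                sample_ids=ids
+            )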
+ + Args: + sample_ids (None): a sample ID or list of sample IDs for which to + retrieve embeddings + label_ids (None): a label ID or list of label IDs for which to + retrieve embeddings + allow_missing (True): whether to allow the index to not contain IDs + that you provide (True) or whether to raise an error in this + case (False) + warn_missing (False): whether to log a warning if the index does + not contain IDs that you provide + + Returns: + a tuple of: + + - a ``num_embeddings x num_dims`` array of embeddings + - a ``num_embeddings`` array of sample IDs + - a ``num_embeddings`` array of label IDs, if applicable, or else + ``None`` + """ + raise NotImplementedError("subclass must implement get_embeddings()") + + def use_view(self, samples, allow_missing=True, warn_missing=False): + """Restricts the index to the provided view. Subsequent calls to methods on this instance will only contain results from the specified view rather than the full index. @@ -151,25 +494,37 @@ def use_view(self, sample_collection): plot.show() Args: - sample_collection: a - :class:`fiftyone.core.collections.SampleCollection` defining a - subset of this index to use + samples: a :class:`fiftyone.core.collections.SampleCollection` + allow_missing (True): whether to allow the provided collection to + contain data points that this index does not contain (True) or + whether to raise an error in this case (False) + warn_missing (False): whether to log a warning if the provided + collection contains data points that this index does not + contain Returns: self """ - view, sample_ids, label_ids, keep_inds = fbu.filter_ids( - sample_collection, - self._samples, - self._sample_ids, - self._label_ids, - patches_field=self._config.patches_field, + sample_ids, label_ids, keep_inds, good_inds = fbu.filter_ids( + samples, + self.sample_ids, + self.label_ids, + patches_field=self.config.patches_field, + allow_missing=allow_missing, + warn_missing=warn_missing, ) - self._curr_view = view + if good_inds is not None: + missing_size = good_inds.size - np.count_nonzero(good_inds) + else: + missing_size = None + + self._last_view = self._curr_view + self._curr_view = samples self._curr_sample_ids = sample_ids self._curr_label_ids = label_ids self._curr_keep_inds = keep_inds + self._curr_missing_size = missing_size return self @@ -180,33 +535,54 @@ def clear_view(self): """ self.use_view(self._samples) - def plot_distances(self, bins=100, log=False, backend="plotly", **kwargs): - """Plots a histogram of the distance between each example and its - nearest neighbor. + def reload(self): + """Reloads the index for the current view. - If `:meth:`find_duplicates` or :meth:`find_unique` has been executed, - the threshold used is also indicated on the plot. + Subclasses may override this method, but by default this method simply + passes the current :meth:`view` back into :meth:`use_view`, which + updates the index's current ID set based on any changes to the view + since the index was last loaded. + """ + self.use_view(self._curr_view) + + def cleanup(self): + """Deletes the similarity index from the backend.""" + raise NotImplementedError("subclass must implement cleanup()") + + def values(self, path_or_expr): + """Extracts a flat list of values from the given field or expression + corresponding to the current :meth:`view`. + + This method always returns values in the same order as + :meth:`current_sample_ids` and :meth:`current_label_ids`. 
Args: - bins (100): the number of bins to use - log (False): whether to use a log scale y-axis - backend ("plotly"): the plotting backend to use. Supported values - are ``("plotly", "matplotlib")`` - **kwargs: keyword arguments for the backend plotting method + path_or_expr: the values to extract, which can be: - Returns: - one of the following: + - the name of a sample field or ``embedded.field.name`` from + which to extract numeric or string values + - a :class:`fiftyone.core.expressions.ViewExpression` + defining numeric or string values to compute via + :meth:`fiftyone.core.collections.SampleCollection.values` - - a :class:`fiftyone.core.plots.plotly.PlotlyNotebookPlot`, if - you are working in a notebook context and the plotly backend is - used - - a plotly or matplotlib figure, otherwise + Returns: + a list of values """ - return fbs.plot_distances(self, bins, log, backend, **kwargs) + samples = self.view + patches_field = self.config.patches_field + + if patches_field is not None: + ids = self.current_label_ids + else: + ids = self.current_sample_ids + + return fbu.get_values( + samples, path_or_expr, ids, patches_field=patches_field + ) def sort_by_similarity( self, - query_ids, + query, k=None, reverse=False, aggregation="mean", @@ -214,20 +590,29 @@ def sort_by_similarity( _mongo=False, ): """Returns a view that sorts the samples/labels in :meth:`view` by - visual similarity to the specified query. + similarity to the specified query. - The query IDs can be any IDs in the full index of this instance, even - if the current :meth:`view` contains a subset of the full index. + When querying by IDs, the query can be any ID(s) in the full index of + this instance, even if the current :meth:`view` contains a subset of + the full index. Args: - query_ids: an ID or iterable of query IDs - k (None): the number of matches to return. By default, all - samples/labels are included - reverse (False): whether to sort by least similarity - aggregation ("mean"): the aggregation method to use to compute - composite similarities. Only applicable when ``query_ids`` - contains multiple IDs. Supported values are - ``("mean", "min", "max")`` + query: the query, which can be any of the following: + + - an ID or iterable of IDs + - a ``num_dims`` vector or ``num_queries x num_dims`` array + of vectors + - a prompt or iterable of prompts (if supported by the index) + + k (None): the number of matches to return. Some backends may + support ``None``, in which case all samples will be sorted + reverse (False): whether to sort by least similarity (True) or + greatest similarity (False). Some backends may not support + least similarity + aggregation ("mean"): the aggregation method to use when multiple + queries are provided. The default is ``"mean"``, which means + that the query vectors are averaged prior to searching. Some + backends may support additional options dist_field (None): the name of a float field in which to store the distance of each example to the specified query. 
The field is created if necessary @@ -235,8 +620,383 @@ def sort_by_similarity( Returns: a :class:`fiftyone.core.view.DatasetView` """ - return fbs.sort_by_similarity( - self, query_ids, k, reverse, aggregation, dist_field, _mongo + samples = self.view + patches_field = self.config.patches_field + + selecting_samples = patches_field is None or isinstance( + samples, fop.PatchesView + ) + + kwargs = dict( + query=self._parse_query(query), + k=k, + reverse=reverse, + aggregation=aggregation, + return_dists=dist_field is not None, + ) + + if dist_field is not None: + ids, dists = self._kneighbors(**kwargs) + else: + ids = self._kneighbors(**kwargs) + + if not selecting_samples: + label_ids = ids + + _ids = set(ids) + bools = np.array([_id in _ids for _id in self.current_label_ids]) + sample_ids = self.current_sample_ids[bools] + + # Store query distances + if dist_field is not None: + if selecting_samples: + values = dict(zip(ids, dists)) + samples.set_values(dist_field, values, key_field="id") + else: + label_type, path = samples._get_label_field_path( + patches_field, dist_field + ) + if issubclass(label_type, fol._LABEL_LIST_FIELDS): + samples._set_list_values_by_id( + path, + sample_ids, + label_ids, + dists, + path.rsplit(".", 1)[0], + ) + else: + values = dict(zip(sample_ids, dists)) + samples.set_values(path, values, key_field="id") + + # Construct sorted view + stages = [] + + if selecting_samples: + stage = fos.Select(ids, ordered=True) + stages.append(stage) + else: + # Sorting by object similarity but this is not a patches view, so + # arrange the samples in order of their first occuring label + result_sample_ids = _unique_no_sort(sample_ids) + stage = fos.Select(result_sample_ids, ordered=True) + stages.append(stage) + + if k is not None: + _ids = [ObjectId(_id) for _id in ids] + stage = fos.FilterLabels(patches_field, F("_id").is_in(_ids)) + stages.append(stage) + + if _mongo: + pipeline = [] + for stage in stages: + stage.validate(samples) + pipeline.extend(stage.to_mongo(samples)) + + return pipeline + + view = samples + for stage in stages: + view = view.add_stage(stage) + + return view + + def _parse_query(self, query): + if query is None: + raise ValueError("At least one query must be provided") + + if isinstance(query, np.ndarray): + # Query by vector(s) + if query.size == 0: + raise ValueError("At least one query vector must be provided") + + return query + + if etau.is_str(query): + query = [query] + else: + query = list(query) + + if not query: + raise ValueError("At least one query must be provided") + + if etau.is_numeric(query[0]): + return np.asarray(query) + + try: + ObjectId(query[0]) + is_prompts = False + except: + is_prompts = True + + if is_prompts: + if not self.config.supports_prompts: + raise ValueError( + "Invalid query '%s'; this model does not support prompts" + % query[0] + ) + + model = self.get_model() + with model: + return model.embed_prompts(query) + + return query + + def _kneighbors( + self, + query=None, + k=None, + reverse=False, + aggregation=None, + return_dists=False, + ): + """Returns the k-nearest neighbors for the given query. + + This method should only return results from the current :meth:`view`. 
+ + Args: + query (None): the query, which can be any of the following: + + - an ID or list of IDs for which to return neighbors + - an embedding or ``num_queries x num_dim`` array of + embeddings for which to return neighbors + - Some backends may also support ``None``, in which case the + neighbors for all points in the current :meth:`view are + returned + + k (None): the number of neighbors to return. Some backends may + enforce upper bounds on this parameter + reverse (False): whether to sort by least similarity (True) or + greatest similarity (False). Some backends may not support + least similarity + aggregation (None): an optional aggregation method to use when + multiple queries are provided. All backends must support + ``"mean"``, which averages query vectors prior to searching. + Backends may support additional options as well + return_dists (False): whether to return query-neighbor distances + + Returns: + the query result, in one of the following formats: + + - an ``(ids, dists)`` tuple, when ``return_dists`` is True + - ``ids``, when ``return_dists`` is False + + In the above, ``ids`` contains the IDs of the nearest neighbors, in + one of the following formats: + + - a list of nearest neighbor IDs, when a single query ID or + vector is provided, **or** when an ``aggregation`` is + provided + - a list of lists of nearest neighbor IDs, when multiple + query IDs/vectors and no ``aggregation`` is provided + - a list of arrays of the **integer indexes** (not IDs) of + nearest neighbor points for every vector in the index, when + no query is provided + + and ``dists`` contains the corresponding query-neighbor distances + for each result in ``ids`` + """ + raise NotImplementedError("subclass must implement _kneighbors()") + + def get_model(self): + """Returns the stored model for this index. + + Returns: + a :class:`fiftyone.core.models.Model` + """ + if self._model is None: + model = self.config.model + if model is None: + raise ValueError("These results don't have a stored model") + + if etau.is_str(model): + model = foz.load_zoo_model(model) + + self._model = model + + return self._model + + def compute_embeddings( + self, + samples, + model=None, + batch_size=None, + num_workers=None, + skip_failures=True, + skip_existing=False, + warn_existing=False, + force_square=False, + alpha=None, + ): + """Computes embeddings for the given samples using this backend's + model. + + Args: + samples: a :class:`fiftyone.core.collections.SampleCollection` + model (None): a :class:`fiftyone.core.models.Model` to apply. If + not provided, these results must have been created with a + stored model, which will be used by default + batch_size (None): an optional batch size to use when computing + embeddings. Only applicable when a ``model`` is provided + num_workers (None): the number of workers to use when loading + images. Only applicable when a Torch-based model is being used + to compute embeddings + skip_failures (True): whether to gracefully continue without + raising an error if embeddings cannot be generated for a sample + skip_existing (False): whether to skip generating embeddings for + sample/label IDs that are already in the index + warn_existing (False): whether to log a warning if any IDs already + exist in the index + force_square (False): whether to minimally manipulate the patch + bounding boxes into squares prior to extraction. 
Only + applicable when a ``model`` and ``patches_field`` are specified + alpha (None): an optional expansion/contraction to apply to the + patches before extracting them, in ``[-1, inf)``. If provided, + the length and width of the box are expanded (or contracted, + when ``alpha < 0``) by ``(100 * alpha)%``. For example, set + ``alpha = 1.1`` to expand the boxes by 10%, and set + ``alpha = 0.9`` to contract the boxes by 10%. Only applicable + when a ``model`` and ``patches_field`` are specified + + Returns: + a tuple of: + + - a ``num_embeddings x num_dims`` array of embeddings + - a ``num_embeddings`` array of sample IDs + - a ``num_embeddings`` array of label IDs, if applicable, or else + ``None`` + """ + if model is None: + model = self.get_model() + + if skip_existing: + if self.config.patches_field is not None: + index_ids = self.label_ids + else: + index_ids = self.sample_ids + + if index_ids is not None: + samples = fbu.skip_ids( + samples, + index_ids, + patches_field=self.config.patches_field, + warn_existing=warn_existing, + ) + else: + logger.warning( + "This index does not support skipping existing IDs" + ) + + return fbu.get_embeddings( + samples, + model=model, + patches_field=self.config.patches_field, + embeddings_field=self.config.embeddings_field, + force_square=force_square, + alpha=alpha, + batch_size=batch_size, + num_workers=num_workers, + skip_failures=skip_failures, + ) + + @classmethod + def _from_dict(cls, d, samples, config, brain_key): + """Builds a :class:`SimilarityIndex` from a JSON representation of it. + + Args: + d: a JSON dict + samples: the :class:`fiftyone.core.collections.SampleCollection` + for the run + config: the :class:`SimilarityConfig` for the run + brain_key: the brain key + + Returns: + a :class:`SimilarityIndex` + """ + raise NotImplementedError("subclass must implement _from_dict()") + + +class DuplicatesMixin(object): + """Mixin for :class:`SimilarityIndex` instances that support duplicate + detection operations. + + Similarity backends can expose this mixin simply by implementing + :meth:`_radius_neighbors`. + """ + + def __init__(self): + self._thresh = None + self._unique_ids = None + self._duplicate_ids = None + self._neighbors_map = None + + @property + def thresh(self): + """The threshold used by the last call to :meth:`find_duplicates` or + :meth:`find_unique`. + """ + return self._thresh + + @property + def unique_ids(self): + """A list of unique IDs from the last call to :meth:`find_duplicates` + or :meth:`find_unique`. + """ + return self._unique_ids + + @property + def duplicate_ids(self): + """A list of duplicate IDs from the last call to + :meth:`find_duplicates` or :meth:`find_unique`. + """ + return self._duplicate_ids + + @property + def neighbors_map(self): + """A dictionary mapping IDs to lists of ``(dup_id, dist)`` tuples from + the last call to :meth:`find_duplicates`. + """ + return self._neighbors_map + + def _radius_neighbors(self, query=None, thresh=None, return_dists=False): + """Returns the neighbors within the given distance threshold for the + given query. + + This method should only return results from the current :meth:`view`. 
+ + Args: + query (None): the query, which can be any of the following: + + - an ID or list of IDs for which to return neighbors + - an embedding or ``num_queries x num_dim`` array of + embeddings for which to return neighbors + - ``None``, in which case the neighbors for all points in the + current :meth:`view are returned + + thresh (None): the distance threshold to use + return_dists (False): whether to return query-neighbor distances + + Returns: + the query result, in one of the following formats: + + - an ``(ids, dists)`` tuple, when ``return_dists`` is True + - ``ids``, when ``return_dists`` is False + + In the above, ``ids`` contains the IDs of the nearest neighbors, in + one of the following formats: + + - a list of nearest neighbor IDs, when a single query ID or + vector is provided + - a list of lists of nearest neighbor IDs, when multiple + query IDs/vectors is provided + - a list of arrays of the **integer indexes** (not IDs) of + nearest neighbor points for every vector in the index, when + no query is provided + + and ``dists`` contains the corresponding query-neighbor distances + for each result in ``ids`` + """ + raise NotImplementedError( + "subclass must implement _radius_neighbors()" ) def find_duplicates(self, thresh=None, fraction=None): @@ -260,7 +1020,56 @@ def find_duplicates(self, thresh=None, fraction=None): automatically tuned to achieve the desired fraction of duplicates """ - return fbs.find_duplicates(self, thresh, fraction) + if self.config.patches_field is not None: + logger.info("Computing duplicate patches...") + ids = self.current_label_ids + else: + logger.info("Computing duplicate samples...") + ids = self.current_sample_ids + + # Detect duplicates + if fraction is not None: + num_keep = int(round(min(max(0, 1.0 - fraction), 1) * len(ids))) + unique_ids, thresh = self._remove_duplicates_count( + num_keep, ids, init_thresh=thresh + ) + else: + unique_ids = self._remove_duplicates_thresh(thresh, ids) + + _unique_ids = set(unique_ids) + duplicate_ids = [_id for _id in ids if _id not in _unique_ids] + + # Locate nearest non-duplicate for each duplicate + if unique_ids and duplicate_ids: + if self.config.patches_field is not None: + unique_view = self._samples.select_labels( + ids=unique_ids, fields=self.config.patches_field + ) + else: + unique_view = self._samples.select(unique_ids) + + with self.use_view(unique_view): + nearest_ids, dists = self._kneighbors( + query=duplicate_ids, k=1, return_dists=True + ) + + neighbors_map = defaultdict(list) + for dup_id, _ids, _dists in zip(duplicate_ids, nearest_ids, dists): + neighbors_map[_ids[0]].append((dup_id, _dists[0])) + + neighbors_map = { + k: sorted(v, key=lambda t: t[1]) + for k, v in neighbors_map.items() + } + else: + neighbors_map = {} + + logger.info("Duplicates computation complete") + + self._thresh = thresh + self._unique_ids = unique_ids + self._duplicate_ids = duplicate_ids + self._neighbors_map = neighbors_map def find_unique(self, count): """Queries the index to select a subset of examples of the specified @@ -276,7 +1085,127 @@ def find_unique(self, count): Args: count: the desired number of unique examples """ - return fbs.find_unique(self, count) + if self.config.patches_field is not None: + logger.info("Computing unique patches...") + ids = self.current_label_ids + else: + logger.info("Computing unique samples...") + ids = self.current_sample_ids + + unique_ids, thresh = self._remove_duplicates_count(count, ids) + + _unique_ids = set(unique_ids) + duplicate_ids = [_id for _id in ids if _id 
not in _unique_ids] + + logger.info("Uniqueness computation complete") + + self._thresh = thresh + self._unique_ids = unique_ids + self._duplicate_ids = duplicate_ids + self._neighbors_map = None + + def _remove_duplicates_count(self, num_keep, ids, init_thresh=None): + if init_thresh is not None: + thresh = init_thresh + else: + thresh = 1 + + if num_keep <= 0: + logger.info( + "threshold: -, kept: %d, target: %d", num_keep, num_keep + ) + return set(), None + + if num_keep >= len(ids): + logger.info( + "threshold: -, kept: %d, target: %d", num_keep, num_keep + ) + return set(ids), None + + thresh_lims = [0, None] + num_target = num_keep + num_keep = -1 + + while True: + keep_ids = self._remove_duplicates_thresh(thresh, ids) + num_keep_last = num_keep + num_keep = len(keep_ids) + + logger.info( + "threshold: %f, kept: %d, target: %d", + thresh, + num_keep, + num_target, + ) + + if num_keep == num_target or ( + num_keep == num_keep_last + and thresh_lims[1] is not None + and thresh_lims[1] - thresh_lims[0] < 1e-6 + ): + break + + if num_keep < num_target: + # Need to decrease threshold + thresh_lims[1] = thresh + thresh = 0.5 * (thresh_lims[0] + thresh) + else: + # Need to increase threshold + thresh_lims[0] = thresh + if thresh_lims[1] is not None: + thresh = 0.5 * (thresh + thresh_lims[1]) + else: + thresh *= 2 + + return keep_ids, thresh + + def _remove_duplicates_thresh(self, thresh, ids): + nearest_inds = self._radius_neighbors(thresh=thresh) + + n = len(ids) + keep = set(range(n)) + for ind in range(n): + if ind in keep: + keep -= {i for i in nearest_inds[ind] if i > ind} + + return [ids[i] for i in keep] + + def plot_distances(self, bins=100, log=False, backend="plotly", **kwargs): + """Plots a histogram of the distance between each example and its + nearest neighbor. + + If `:meth:`find_duplicates` or :meth:`find_unique` has been executed, + the threshold used is also indicated on the plot. + + Args: + bins (100): the number of bins to use + log (False): whether to use a log scale y-axis + backend ("plotly"): the plotting backend to use. 
Supported values + are ``("plotly", "matplotlib")`` + **kwargs: keyword arguments for the backend plotting method + + Returns: + one of the following: + + - a :class:`fiftyone.core.plots.plotly.PlotlyNotebookPlot`, if + you are working in a notebook context and the plotly backend is + used + - a plotly or matplotlib figure, otherwise + """ + metric = self.config.metric + thresh = self.thresh + + _, dists = self._kneighbors(k=1, return_dists=True) + dists = np.array([d[0] for d in dists.values()]) + + if backend == "matplotlib": + return _plot_distances_mpl( + dists, metric, thresh, bins, log, **kwargs + ) + + return _plot_distances_plotly( + dists, metric, thresh, bins, log, **kwargs + ) def duplicates_view( self, @@ -324,9 +1253,56 @@ def duplicates_view( "You must first call `find_duplicates()` to generate results" ) - return fbs.duplicates_view( - self, type_field, id_field, dist_field, sort_by, reverse - ) + samples = self.view + patches_field = self.config.patches_field + neighbors_map = self.neighbors_map + + if patches_field is not None and not isinstance( + samples, fop.PatchesView + ): + samples = samples.to_patches(patches_field) + + if sort_by == "distance": + key = lambda kv: min(e[1] for e in kv[1]) + elif sort_by == "count": + key = lambda kv: len(kv[1]) + else: + raise ValueError( + "Invalid sort_by='%s'; supported values are %s" + % (sort_by, ("distance", "count")) + ) + + existing_ids = set(samples.values("id")) + neighbors = [ + (k, v) for k, v in neighbors_map.items() if k in existing_ids + ] + + ids = [] + types = {} + nearest_ids = {} + dists = {} + for _id, duplicates in sorted(neighbors, key=key, reverse=reverse): + ids.append(_id) + types[_id] = "nearest" + nearest_ids[_id] = _id + dists[_id] = 0.0 + + for dup_id, dist in duplicates: + ids.append(dup_id) + types[dup_id] = "duplicate" + nearest_ids[dup_id] = _id + dists[dup_id] = dist + + if type_field is not None: + samples.set_values(type_field, types, key_field="id") + + if id_field is not None: + samples.set_values(id_field, nearest_ids, key_field="id") + + if dist_field is not None: + samples.set_values(dist_field, dists, key_field="id") + + return samples.select(ids, ordered=True) def unique_view(self): """Returns a view that contains only the unique examples generated by @@ -344,20 +1320,24 @@ def unique_view(self): "to generate results" ) - return fbs.unique_view(self) + samples = self.view + patches_field = self.config.patches_field + unique_ids = self.unique_ids - def visualize_duplicates( - self, visualization=None, backend="plotly", **kwargs - ): + if patches_field is not None and not isinstance( + samples, fop.PatchesView + ): + samples = samples.to_patches(patches_field) + + return samples.select(unique_ids) + + def visualize_duplicates(self, visualization, backend="plotly", **kwargs): """Generates an interactive scatterplot of the results generated by the last call to :meth:`find_duplicates`. - If provided, the ``visualization`` argument can be any visualization - computed on the same dataset (or subset of it) as long as it contains - every sample/object in the view whose results you are visualizing. If - no ``visualization`` argument is provided and the embeddings - have more than 3 dimensions, a 2D representation of the embeddings is - computed via :meth:`fiftyone.brain.compute_visualization`. + The ``visualization`` argument can be any visualization computed on the + same dataset (or subset of it) as long as it contains every + sample/object in the view whose results you are visualizing. 
The points are colored based on the following partition: @@ -374,7 +1354,7 @@ def visualize_duplicates( points in the plot. Args: - visualization (None): a + visualization: a :class:`fiftyone.brain.visualization.VisualizationResults` instance to use to visualize the results backend ("plotly"): the plotting backend to use. Supported values @@ -392,18 +1372,50 @@ def visualize_duplicates( "You must first call `find_duplicates()` to generate results" ) - return fbs.visualize_duplicates(self, visualization, backend, **kwargs) + samples = self.view + duplicate_ids = self.duplicate_ids + neighbors_map = self.neighbors_map + patches_field = self.config.patches_field + + dup_ids = set(duplicate_ids) + nearest_ids = set(neighbors_map.keys()) + + with visualization.use_view(samples, allow_missing=True): + if patches_field is not None: + ids = visualization.current_label_ids + else: + ids = visualization.current_sample_ids + + labels = [] + for _id in ids: + if _id in dup_ids: + label = "duplicate" + elif _id in nearest_ids: + label = "nearest" + else: + label = "unique" + + labels.append(label) + + if backend == "plotly": + kwargs["edges"] = _build_edges(ids, neighbors_map) + kwargs["edges_title"] = "neighbors" + kwargs["labels_title"] = "type" + + return visualization.visualize( + labels=labels, + classes=["unique", "nearest", "duplicate"], + backend=backend, + **kwargs, + ) - def visualize_unique(self, visualization=None, backend="plotly", **kwargs): + def visualize_unique(self, visualization, backend="plotly", **kwargs): """Generates an interactive scatterplot of the results generated by the last call to :meth:`find_unique`. - If provided, the ``visualization`` argument can be any visualization - computed on the same dataset (or subset of it) as long as it contains - every sample/object in the view whose results you are visualizing. If - no ``visualization`` argument is provided and the embeddings - have more than 3 dimensions, a 2D representation of the embeddings is - computed via :meth:`fiftyone.brain.compute_visualization`. + The ``visualization`` argument can be any visualization computed on the + same dataset (or subset of it) as long as it contains every + sample/object in the view whose results you are visualizing. The points are colored based on the following partition: @@ -416,7 +1428,7 @@ def visualize_unique(self, visualization=None, backend="plotly", **kwargs): points in the plot. Args: - visualization (None): a + visualization: a :class:`fiftyone.brain.visualization.VisualizationResults` instance to use to visualize the results backend ("plotly"): the plotting backend to use. Supported values @@ -434,49 +1446,148 @@ def visualize_unique(self, visualization=None, backend="plotly", **kwargs): "You must first call `find_unique()` to generate results" ) - return fbs.visualize_unique(self, visualization, backend, **kwargs) + samples = self.view + unique_ids = self.unique_ids + patches_field = self.config.patches_field - @classmethod - def _from_dict(cls, d, samples, config): - embeddings = np.array(d["embeddings"]) - return cls(samples, config, embeddings) + unique_ids = set(unique_ids) + with visualization.use_view(samples, allow_missing=True): + if patches_field is not None: + ids = visualization.current_label_ids + else: + ids = visualization.current_sample_ids -class SimilarityConfig(fob.BrainMethodConfig): - """Similarity configuration. 
+ labels = [] + for _id in ids: + if _id in unique_ids: + label = "unique" + else: + label = "other" - Args: - embeddings_field (None): the sample field containing the embeddings, - if one was provided - model (None): the :class:`fiftyone.core.models.Model` or class name of - the model that was used to compute embeddings, if one was provided - patches_field (None): the sample field defining the patches being - analyzed, if any - metric (None): the embedding distance metric used - """ + labels.append(label) - def __init__( - self, - embeddings_field=None, - model=None, - patches_field=None, - metric=None, + return visualization.visualize( + labels=labels, + classes=["other", "unique"], + backend=backend, + **kwargs, + ) + + +def _unique_no_sort(values): + seen = set() + return [v for v in values if v not in seen and not seen.add(v)] + + +def _build_edges(ids, neighbors_map): + inds_map = {_id: idx for idx, _id in enumerate(ids)} + + edges = [] + for nearest_id, duplicates in neighbors_map.items(): + nearest_ind = inds_map[nearest_id] + for dup_id, _ in duplicates: + dup_ind = inds_map[dup_id] + edges.append((dup_ind, nearest_ind)) + + return np.array(edges) + + +def _plot_distances_plotly(dists, metric, thresh, bins, log, **kwargs): + import plotly.graph_objects as go + import fiftyone.core.plots.plotly as fopl + + counts, edges = np.histogram(dists, bins=bins) + left_edges = edges[:-1] + widths = edges[1:] - edges[:-1] + customdata = np.stack((edges[:-1], edges[1:]), axis=1) + + hover_lines = [ + "<b>count: %{y}</b>", + "distance: [%{customdata[0]:.2f}, %{customdata[1]:.2f}]", + ] + hovertemplate = "<br>".join(hover_lines) + "<extra></extra>" + + bar = go.Bar( + x=left_edges, + y=counts, + width=widths, + customdata=customdata, + offset=0, + marker_color="#FF6D04", + hovertemplate=hovertemplate, + showlegend=False, + ) + + traces = [bar] + + if thresh is not None: + line = go.Scatter( + x=[thresh, thresh], + y=[0, max(counts)], + mode="lines", + line=dict(color="#17191C", width=3), + hovertemplate="<b>thresh: %{x}</b><extra></extra>", + showlegend=False, + ) + traces.append(line) + + figure = go.Figure(traces) + + figure.update_layout( + xaxis_title="nearest neighbor distance (%s)" % metric, + yaxis_title="count", + hovermode="x", + yaxis_rangemode="tozero", + ) + + if log: + figure.update_layout(yaxis_type="log") + + figure.update_layout(**fopl._DEFAULT_LAYOUT) + figure.update_layout(**kwargs) + + if foc.is_jupyter_context(): + figure = fopl.PlotlyNotebookPlot(figure) + + return figure + + +def _plot_distances_mpl( + dists, metric, thresh, bins, log, ax=None, figsize=None, **kwargs +): + import matplotlib.pyplot as plt + + if ax is None: + fig, ax = plt.subplots() + else: + fig = ax.figure + + counts, edges = np.histogram(dists, bins=bins) + left_edges = edges[:-1] + widths = edges[1:] - edges[:-1] + + ax.bar( + left_edges, + counts, + width=widths, + align="edge", + color="#FF6D04", **kwargs, - ): - if model is not None and not etau.is_str(model): - model = etau.get_class_name(model) + ) - self.embeddings_field = embeddings_field - self.model = model - self.patches_field = patches_field - self.metric = metric - super().__init__(**kwargs) + if thresh is not None: + ax.vlines(thresh, 0, max(counts), color="#17191C", linewidth=3) - @property - def method(self): - return "similarity" + if log: + ax.set_yscale("log") - @property - def run_cls(self): - run_cls_name = self.__class__.__name__[: -len("Config")] - return getattr(fbs, run_cls_name) + ax.set_xlabel("nearest neighbor distance (%s)" % 
metric) + ax.set_ylabel("count") + + if figsize is not None: + fig.set_size_inches(*figsize) + + plt.tight_layout() + + return fig diff --git a/fiftyone/brain/visualization.py b/fiftyone/brain/visualization.py index 1da6c476..cce0a8dc 100644 --- a/fiftyone/brain/visualization.py +++ b/fiftyone/brain/visualization.py @@ -1,7 +1,7 @@ """ Visualization interface. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ @@ -10,7 +10,6 @@ import eta.core.utils as etau import fiftyone.core.brain as fob -import fiftyone.core.plots as fop import fiftyone.core.utils as fou fbu = fou.lazy_import("fiftyone.brain.internal.core.utils") @@ -24,34 +23,44 @@ class VisualizationResults(fob.BrainResults): Args: samples: the :class:`fiftyone.core.collections.SampleCollection` used config: the :class:`VisualizationConfig` used + brain_key: the brain key points: a ``num_points x num_dims`` array of visualization points + sample_ids (None): a ``num_points`` array of sample IDs + label_ids (None): a ``num_points`` array of label IDs, if applicable + backend (None): a :class:`Visualization` backend """ - def __init__(self, samples, config, points): - sample_ids, label_ids = fbu.get_ids( - samples, patches_field=config.patches_field - ) - - if len(sample_ids) != len(points): - ptype = "label" if config.patches_field is not None else "sample" - raise ValueError( - "Number of %s IDs (%d) does not match number of points (%d). " - "You may have missing data/labels that you need to omit from " - "your view" % (ptype, len(sample_ids), len(points)) + def __init__( + self, + samples, + config, + brain_key, + points, + sample_ids=None, + label_ids=None, + backend=None, + ): + super().__init__(samples, config, brain_key, backend=backend) + + if sample_ids is None: + sample_ids, label_ids = fbu.get_ids( + samples, + patches_field=config.patches_field, + data=points, + data_type="points", ) self.points = points + self.sample_ids = sample_ids + self.label_ids = label_ids - self._samples = samples - self._config = config - self._sample_ids = sample_ids - self._label_ids = label_ids self._last_view = None self._curr_view = None + self._curr_points = None self._curr_sample_ids = None self._curr_label_ids = None self._curr_keep_inds = None - self._curr_points = None + self._curr_good_inds = None self.use_view(samples) @@ -70,26 +79,79 @@ def config(self): @property def index_size(self): - """The number of examples in the index. + """The number of active points in the index. If :meth:`use_view` has been called to restrict the index, this property will reflect the size of the active index. """ return len(self._curr_sample_ids) + @property + def total_index_size(self): + """The total number of data points in the index. + + If :meth:`use_view` has been called to restrict the index, this value + may be larger than the current :meth:`index_size`. + """ + return len(self.points) + + @property + def missing_size(self): + """The total number of data points in :meth:`view` that are missing + from this index. + + This property is only applicable when :meth:`use_view` has been called, + and it will be ``None`` if no data points are missing. + """ + good = self._curr_good_inds + + if good is None: + return None + + return good.size - np.count_nonzero(good) + + @property + def current_points(self): + """The currently active points in the index. + + If :meth:`use_view` has been called, this may be a subset of the full + index. 
+ """ + return self._curr_points + + @property + def current_sample_ids(self): + """The sample IDs of the currently active points in the index. + + If :meth:`use_view` has been called, this may be a subset of the full + index. + """ + return self._curr_sample_ids + + @property + def current_label_ids(self): + """The label IDs of the currently active points in the index, or + ``None`` if not applicable. + + If :meth:`use_view` has been called, this may be a subset of the full + index. + """ + return self._curr_label_ids + @property def view(self): """The :class:`fiftyone.core.collections.SampleCollection` against which results are currently being generated. - If :meth:`use_view` has been called, this view may be a subset of the - collection on which the full index was generated. + If :meth:`use_view` has been called, this view may be different than + the collection on which the full index was generated. """ return self._curr_view - def use_view(self, sample_collection): - """Restricts the index to the provided view, which must be a subset of - the full index's collection. + def use_view( + self, sample_collection, allow_missing=True, warn_missing=False + ): + """Restricts the index to the provided view. Subsequent calls to methods on this instance will only contain results from the specified view rather than the full index. @@ -119,18 +181,24 @@ def use_view(self, sample_collection): Args: sample_collection: a - :class:`fiftyone.core.collections.SampleCollection` defining a - subset of this index to use + :class:`fiftyone.core.collections.SampleCollection` + allow_missing (True): whether to allow the provided collection to + contain data points that this index does not contain (True) or + whether to raise an error in this case (False) + warn_missing (False): whether to log a warning if the provided + collection contains data points that this index does not + contain Returns: self """ - view, sample_ids, label_ids, keep_inds = fbu.filter_ids( + sample_ids, label_ids, keep_inds, good_inds = fbu.filter_ids( sample_collection, - self._samples, - self._sample_ids, - self._label_ids, + self.sample_ids, + self.label_ids, patches_field=self._config.patches_field, + allow_missing=allow_missing, + warn_missing=warn_missing, ) if keep_inds is not None: @@ -138,11 +206,12 @@ def use_view(self, sample_collection): else: points = self.points - self._curr_view = view + self._curr_view = sample_collection + self._curr_points = points self._curr_sample_ids = sample_ids self._curr_label_ids = label_ids self._curr_keep_inds = keep_inds - self._curr_points = points + self._curr_good_inds = good_inds return self @@ -153,6 +222,28 @@ def clear_view(self): """ self.use_view(self._samples) + def values(self, path_or_expr): + """Extracts a flat list of values from the given field or expression + corresponding to the current :meth:`view`. + + This method always returns values in the same order as + :meth:`current_points`, :meth:`current_sample_ids`, and + :meth:`current_label_ids`. 
+ + Args: + path_or_expr: the values to extract, which can be: + + - the name of a sample field or ``embedded.field.name`` from + which to extract numeric or string values + - a :class:`fiftyone.core.expressions.ViewExpression` + defining numeric or string values to compute via + :meth:`fiftyone.core.collections.SampleCollection.values` + + Returns: + a list of values + """ + return fbv.values(self, path_or_expr) + def visualize( self, labels=None, @@ -161,7 +252,8 @@ def visualize( backend="plotly", **kwargs, ): - """Generates an interactive scatterplot of the visualization results. + """Generates an interactive scatterplot of the visualization results + for the current :meth:`view`. This method supports 2D or 3D visualizations, but interactive point selection is only available in 2D. @@ -214,10 +306,8 @@ def visualize( Returns: an :class:`fiftyone.core.plots.base.InteractivePlot` """ - return fop.scatterplot( - self._curr_points, - samples=self._curr_view, - link_field=self._config.patches_field, + return fbv.visualize( + self, labels=labels, sizes=sizes, classes=classes, @@ -226,9 +316,25 @@ def visualize( ) @classmethod - def _from_dict(cls, d, samples, config): + def _from_dict(cls, d, samples, config, brain_key): points = np.array(d["points"]) - return cls(samples, config, points) + + sample_ids = d.get("sample_ids", None) + if sample_ids is not None: + sample_ids = np.array(sample_ids) + + label_ids = d.get("label_ids", None) + if label_ids is not None: + label_ids = np.array(label_ids) + + return cls( + samples, + config, + brain_key, + points, + sample_ids=sample_ids, + label_ids=label_ids, + ) class VisualizationConfig(fob.BrainMethodConfig): @@ -237,8 +343,8 @@ class VisualizationConfig(fob.BrainMethodConfig): Args: embeddings_field (None): the sample field containing the embeddings, if one was provided - model (None): the :class:`fiftyone.core.models.Model` or class name of - the model that was used to compute embeddings, if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known patches_field (None): the sample field defining the patches being analyzed, if any num_dims (2): the dimension of the visualization space @@ -253,7 +359,7 @@ def __init__( **kwargs, ): if model is not None and not etau.is_str(model): - model = etau.get_class_name(model) + model = None self.embeddings_field = embeddings_field self.model = model @@ -277,8 +383,8 @@ class UMAPVisualizationConfig(VisualizationConfig): Args: embeddings_field (None): the sample field containing the embeddings, if one was provided - model (None): the :class:`fiftyone.core.models.Model` or class name of - the model that was used to compute embeddings, if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known patches_field (None): the sample field defining the patches being analyzed, if any num_dims (2): the dimension of the visualization space @@ -339,8 +445,8 @@ class TSNEVisualizationConfig(VisualizationConfig): Args: embeddings_field (None): the sample field containing the embeddings, if one was provided - model (None): the :class:`fiftyone.core.models.Model` or class name of - the model that was used to compute embeddings, if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known patches_field (None): the sample field defining the patches being analyzed, 
if any num_dims (2): the dimension of the visualization space @@ -418,8 +524,8 @@ class PCAVisualizationConfig(VisualizationConfig): Args: embeddings_field (None): the sample field containing the embeddings, if one was provided - model (None): the :class:`fiftyone.core.models.Model` or class name of - the model that was used to compute embeddings, if one was provided + model (None): the :class:`fiftyone.core.models.Model` or name of the + zoo model that was used to compute embeddings, if known patches_field (None): the sample field defining the patches being analyzed, if any num_dims (2): the dimension of the visualization space diff --git a/install.bash b/install.bash index 924cd28f..b0d47194 100644 --- a/install.bash +++ b/install.bash @@ -4,7 +4,7 @@ # Usage: # bash install.bash # -# Copyright 2017-2022, Voxel51, Inc. +# Copyright 2017-2023, Voxel51, Inc. # voxel51.com # diff --git a/production/README.md b/production/README.md index 47c01def..f3df7cc1 100644 --- a/production/README.md +++ b/production/README.md @@ -40,4 +40,4 @@ To add a model to the Brain, following these steps: ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/production/marketing/README.md b/production/marketing/README.md index 958b5ada..e1759db9 100644 --- a/production/marketing/README.md +++ b/production/marketing/README.md @@ -18,4 +18,4 @@ experiment details with proprietary information that only exists in the brain. ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/production/marketing/shot_bdd_detections.py b/production/marketing/shot_bdd_detections.py index e562fb28..52f1dd10 100644 --- a/production/marketing/shot_bdd_detections.py +++ b/production/marketing/shot_bdd_detections.py @@ -12,7 +12,7 @@ # From inside IPython run shot_bdd_detections.py -Copyright 2017-2022, Voxel51, Inc. +Copyright 2017-2023, Voxel51, Inc. voxel51.com """ import os diff --git a/production/marketing/shot_neardups.py b/production/marketing/shot_neardups.py index fe3fc9de..2560880a 100644 --- a/production/marketing/shot_neardups.py +++ b/production/marketing/shot_neardups.py @@ -13,7 +13,7 @@ # From inside IPython run shot_neardups.py -Copyright 2017-2022, Voxel51, Inc. +Copyright 2017-2023, Voxel51, Inc. voxel51.com """ import os diff --git a/production/marketing/use_cases_code.md b/production/marketing/use_cases_code.md index 076e5e22..fabb960d 100644 --- a/production/marketing/use_cases_code.md +++ b/production/marketing/use_cases_code.md @@ -115,4 +115,4 @@ fo.launch_dashboard(view=mistakes_view) ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/production/models/simple_resnet/README.md b/production/models/simple_resnet/README.md index e3d9240b..5a2638f0 100644 --- a/production/models/simple_resnet/README.md +++ b/production/models/simple_resnet/README.md @@ -27,4 +27,4 @@ Steps to train and deploy the model: ## Copyright -Copyright 2017-2022, Voxel51, Inc.<br> voxel51.com +Copyright 2017-2023, Voxel51, Inc.<br> voxel51.com diff --git a/production/models/simple_resnet/config.py b/production/models/simple_resnet/config.py index 0c9614d3..d31abf25 100644 --- a/production/models/simple_resnet/config.py +++ b/production/models/simple_resnet/config.py @@ -1,7 +1,7 @@ """ Simple configuration setup for these experiments -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. 
| `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/production/models/simple_resnet/datasets.py b/production/models/simple_resnet/datasets.py index d42f40d4..854ced54 100644 --- a/production/models/simple_resnet/datasets.py +++ b/production/models/simple_resnet/datasets.py @@ -1,7 +1,7 @@ """ Implementation of datasets for the experiments -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/production/models/simple_resnet/preprocess.py b/production/models/simple_resnet/preprocess.py index 64a7441b..1ba697fb 100644 --- a/production/models/simple_resnet/preprocess.py +++ b/production/models/simple_resnet/preprocess.py @@ -1,7 +1,7 @@ """ Preprocessing functions -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/production/models/simple_resnet/train_classifier.py b/production/models/simple_resnet/train_classifier.py index 23009364..23c3a260 100644 --- a/production/models/simple_resnet/train_classifier.py +++ b/production/models/simple_resnet/train_classifier.py @@ -18,7 +18,7 @@ run train_classifier.py -t 2000 -e 12 -b 64 --n_rounds 1 --p_initial 1.0 -m /tmp/foo.pth -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/production/models/simple_resnet/training.py b/production/models/simple_resnet/training.py index 5da83525..75ad5fcb 100644 --- a/production/models/simple_resnet/training.py +++ b/production/models/simple_resnet/training.py @@ -1,7 +1,7 @@ """ Training functions -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/production/models/simple_resnet/utils.py b/production/models/simple_resnet/utils.py index 537fd10f..fef716b9 100644 --- a/production/models/simple_resnet/utils.py +++ b/production/models/simple_resnet/utils.py @@ -1,7 +1,7 @@ """ Utilities for experiments. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ diff --git a/requirements/dev.txt b/requirements/dev.txt index 7b44b253..d73bf272 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -6,5 +6,5 @@ ipython>=7.16.1 pandas==1.1.5 pre-commit==2.0.1 pylint==2.3.1 -pytest==5.4.3 +pytest==7.3.1 twine>=3 diff --git a/setup.py b/setup.py index 38549f7f..b05596a8 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ """ Installs `fiftyone-brain`. -| Copyright 2017-2022, Voxel51, Inc. +| Copyright 2017-2023, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ @@ -17,7 +17,7 @@ long_description += "\n## License\n\n" + fh.read() -VERSION = "0.9.1" +VERSION = "0.12.0" def get_version():
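The duplicate-detection logic added in the patch above boils down to two steps: a single pass at a fixed distance threshold that keeps only the first member of each neighbor group, and a binary search over that threshold to hit a target count. Here is a minimal standalone sketch of that technique; the `radius_neighbors` callable is a hypothetical stand-in for a backend's `_radius_neighbors()` method and must return, for each index point, the integer indexes of its neighbors within the threshold:

```python
def remove_duplicates_thresh(radius_neighbors, ids, thresh):
    """Keeps the first member of each group of neighbors within ``thresh``."""
    nearest_inds = radius_neighbors(thresh=thresh)
    keep = set(range(len(ids)))
    for ind in range(len(ids)):
        if ind in keep:
            # Drop every later point that lies within ``thresh`` of this one
            keep -= {i for i in nearest_inds[ind] if i > ind}

    return [ids[i] for i in keep]


def remove_duplicates_count(radius_neighbors, ids, num_keep, init_thresh=1.0):
    """Binary-searches for a threshold that keeps roughly ``num_keep`` IDs."""
    thresh = init_thresh
    lo, hi = 0.0, None
    while True:
        keep_ids = remove_duplicates_thresh(radius_neighbors, ids, thresh)
        if len(keep_ids) == num_keep or (hi is not None and hi - lo < 1e-6):
            return keep_ids, thresh

        if len(keep_ids) < num_keep:
            hi = thresh  # threshold too aggressive; search lower
            thresh = 0.5 * (lo + thresh)
        else:
            lo = thresh  # threshold too permissive; search higher
            thresh = 0.5 * (thresh + hi) if hi is not None else 2.0 * thresh
```

Note that the real implementation above also terminates when successive iterations stop changing the kept count; this sketch only checks the interval width.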
Video brain runs touch up Ensures that all brain runs either support video datasets or raise an informative error if they don't. ```py import random import numpy as np import fiftyone as fo import fiftyone.brain as fob import fiftyone.zoo as foz from fiftyone import ViewField as F dataset = foz.load_zoo_dataset("quickstart-video", max_samples=2) view = dataset.match_frames(F("frame_number") <= 2) for sample in view.iter_samples(autosave=True): for frame in sample.frames.values(): frame["pred_class"] = fo.Classification( label=random.choice(["foo", "bar", "spam", "eggs"]), confidence=random.random(), logits=np.random.random(4), ) frame["pred_dets"] = frame["detections"].copy() for detection in frame["pred_dets"].detections: detection.confidence = random.random() # # These support video datasets # fob.compute_exact_duplicates(view) fob.compute_hardness(view, "frames.pred_class") fob.compute_mistakenness(view, "frames.pred_dets", "frames.detections") # # These methods do not directly support video datasets, but they do support # frames views # fob.compute_similarity(view, brain_key="vid_sim") # ValueError: This method cannot use image models to compute video embeddings. Try providing precomputed video embeddings or converting to a frames view via `to_frames()` first fob.compute_similarity(view, patches_field="frames.detections", brain_key="det_sim") # ValueError: This method does not directly support frame patches for video collections. Try converting to a frames view via `to_frames()` first fob.compute_visualization(view, brain_key="vid_viz") # ValueError: This method cannot use image models to compute video embeddings. Try providing precomputed video embeddings or converting to a frames view via `to_frames()` first results7 = fob.compute_visualization(view, patches_field="frames.detections", brain_key="det_viz") # ValueError: This method does not directly support frame patches for video collections. Try converting to a frames view via `to_frames()` first view2 = view.to_frames(sample_frames=True, sparse=True) fob.compute_similarity(view2, patches_field="detections", brain_key="frames_det_sim") fob.compute_visualization(view2, patches_field="detections", brain_key="frames_det_viz") ```
2023-05-18T11:40:59
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-523
1b238ba3ffd25538cbef9041336a1a65a26e16b3
diff --git a/Makefile b/Makefile index 691ba6fc..d4554cc7 100644 --- a/Makefile +++ b/Makefile @@ -50,12 +50,11 @@ envfile: @mkdir -p $(shell dirname ${ENV_FILE}) && touch ${ENV_FILE} @echo SECRET_KEY=\"${GET_SECRET_KEY}\" > ${ENV_FILE} -envfile_testing: - @echo "-> Create the .env file and generate a secret key" - @if test -f ${ENV_FILE}; then echo ".env file exists already"; exit 1; fi - @mkdir -p $(shell dirname ${ENV_FILE}) && touch ${ENV_FILE} - @echo SECRET_KEY=\"${GET_SECRET_KEY}\" >> ${ENV_FILE} - @echo SCANCODEIO_DB_PORT=\"5433\" >> ${ENV_FILE} +envfile_testing: envfile + @echo PACKAGEDB_DB_USER=\"postgres\" >> ${ENV_FILE} + @echo PACKAGEDB_DB_PASSWORD=\"postgres\" >> ${ENV_FILE} + @echo SCANCODEIO_DB_USER=\"postgres\" >> ${ENV_FILE} + @echo SCANCODEIO_DB_PASSWORD=\"postgres\" >> ${ENV_FILE} isort: @echo "-> Apply isort changes to ensure proper imports ordering" diff --git a/azure-pipelines.yml b/azure-pipelines.yml index d91cce40..23faf951 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -5,60 +5,30 @@ # These jobs are using VMs with Azure-provided Python builds ################################################################################ +resources: + containers: + - container: postgres + image: postgres:13 + env: + POSTGRES_USER: postgres + POSTGRES_PASSWORD: postgres + ports: + - 5432:5432 + jobs: - template: etc/ci/azure-posix.yml parameters: job_name: ubuntu20_cpython image_name: ubuntu-20.04 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] + python_versions: ['3.10', '3.11', '3.12'] test_suites: - all: venv/bin/pytest -n 2 -vvs + all: make test - template: etc/ci/azure-posix.yml parameters: job_name: ubuntu22_cpython image_name: ubuntu-22.04 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] - test_suites: - all: venv/bin/pytest -n 2 -vvs - - - template: etc/ci/azure-posix.yml - parameters: - job_name: macos11_cpython - image_name: macOS-11 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] - test_suites: - all: venv/bin/pytest -n 2 -vvs - - - template: etc/ci/azure-posix.yml - parameters: - job_name: macos12_cpython - image_name: macOS-12 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] - test_suites: - all: venv/bin/pytest -n 2 -vvs - - - template: etc/ci/azure-posix.yml - parameters: - job_name: macos13_cpython - image_name: macOS-13 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] - test_suites: - all: venv/bin/pytest -n 2 -vvs - - - template: etc/ci/azure-win.yml - parameters: - job_name: win2019_cpython - image_name: windows-2019 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] - test_suites: - all: venv\Scripts\pytest -n 2 -vvs - - - template: etc/ci/azure-win.yml - parameters: - job_name: win2022_cpython - image_name: windows-2022 - python_versions: ['3.8', '3.9', '3.10', '3.11', '3.12'] + python_versions: ['3.10', '3.11', '3.12'] test_suites: - all: venv\Scripts\pytest -n 2 -vvs + all: make test diff --git a/etc/ci/azure-posix.yml b/etc/ci/azure-posix.yml index 9fdc7f15..b139a66c 100644 --- a/etc/ci/azure-posix.yml +++ b/etc/ci/azure-posix.yml @@ -18,6 +18,9 @@ jobs: test_suite_label: ${{ tsuite.key }} test_suite: ${{ tsuite.value }} + services: + postgres: postgres + steps: - checkout: self fetchDepth: 10 @@ -30,9 +33,12 @@ jobs: displayName: '${{ pyver }} - Install Python' - script: | - python${{ pyver }} --version - echo "python${{ pyver }}" > PYTHON_EXECUTABLE - ./configure --clean && ./configure --dev + make dev + make envfile_testing + sudo mkdir /etc/purldb + sudo cp .env /etc/purldb + sudo 
mkdir /etc/scancodeio + sudo cp .env /etc/scancodeio displayName: '${{ pyver }} - Configure' - script: $(test_suite) diff --git a/purldb_project/settings.py b/purldb_project/settings.py index 7651b0a0..a9592932 100644 --- a/purldb_project/settings.py +++ b/purldb_project/settings.py @@ -107,7 +107,6 @@ ) # Database - DATABASES = { 'default': { 'ENGINE': env.str('PACKAGEDB_DB_ENGINE', 'django.db.backends.postgresql'),
Determine a corresponding source repo(s) (such as Git) for a binary that does not have an obvious one

Sometimes we have a package that does not have sources or does not have a corresponding source repo (such as a Git repo). We should be able to infer a source repo, and possibly a tag or release in that repo, based on other available metadata:

- the homepage may point to a GitHub or GitLab repo, or some other metadata field may contain such a link
- for instance, a description (seen in NuGet packages) or a README may have a GitHub link

We could also infer the tag based on the version, with or without a "v" prefix.

Use azure pipelines for CI
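As a rough illustration of the request, here is a minimal sketch of such an inference; the regex, field names, and helper functions below are illustrative assumptions, not existing purldb APIs:

```python
import re

# Hypothetical helpers: scan likely metadata fields for a GitHub/GitLab repo
# URL and derive candidate tag names from the package version
REPO_URL_RE = re.compile(
    r"https?://(?:www\.)?(?:github|gitlab)\.com/[\w.-]+/[\w.-]+",
    re.IGNORECASE,
)


def infer_source_repo(package_data):
    """Returns the first GitHub/GitLab repo URL found in likely metadata fields."""
    for field in ("homepage_url", "description", "readme"):
        match = REPO_URL_RE.search(package_data.get(field) or "")
        if match:
            url = match.group(0)
            return url[:-4] if url.endswith(".git") else url
    return None


def candidate_tags(version):
    """Returns likely tag names for a version, with and without a "v" prefix."""
    bare = version[1:] if version[:1] in ("v", "V") else version
    return [bare, f"v{bare}"]
```

Candidate tags could then be checked against the inferred repo's actual tag list (e.g., via the forge API) before recording a source repo reference for the package.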
@TG1999 ping FYI. We may need this for Maven, npm and NuGet.

Done in https://github.com/nexB/purldb/pull/137
2024-08-08T00:26:33
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-506
14115c0a1b350178326182937f47b950a85c5d7c
diff --git a/minecode/visitors/maven.py b/minecode/visitors/maven.py
index d5d8799c..1fabe5a0 100644
--- a/minecode/visitors/maven.py
+++ b/minecode/visitors/maven.py
@@ -125,6 +125,8 @@ def get_pom_text(namespace, name, version, qualifiers={}, base_url=MAVEN_BASE_UR
         qualifiers=qualifiers,
         base_url=base_url,
     )
+    if not urls:
+        return
     # Get and parse POM info
     pom_url = urls['api_data_url']
     # TODO: manage different types of errors (404, etc.)
diff --git a/packagedb/api.py b/packagedb/api.py
index 2f91a9c9..305e1539 100644
--- a/packagedb/api.py
+++ b/packagedb/api.py
@@ -250,13 +250,13 @@ def filter_by_checksums(self, request, *args, **kwargs):
             response_data = {
                 'status': f'Unsupported field(s) given: {unsupported_fields_str}'
             }
-            return Response(response_data)
+            return Response(response_data, status=status.HTTP_400_BAD_REQUEST)
 
         if not data:
             response_data = {
                 'status': 'No values provided'
             }
-            return Response(response_data)
+            return Response(response_data, status=status.HTTP_400_BAD_REQUEST)
 
         lookups = Q()
         for field, value in data.items():
@@ -467,14 +467,14 @@ def filter_by_checksums(self, request, *args, **kwargs):
             response_data = {
                 'status': f'Unsupported field(s) given: {unsupported_fields_str}'
             }
-            return Response(response_data)
+            return Response(response_data, status=status.HTTP_400_BAD_REQUEST)
 
         enhance_package_data = data.pop('enhance_package_data', False)
         if not data:
             response_data = {
                 'status': 'No values provided'
             }
-            return Response(response_data)
+            return Response(response_data, status=status.HTTP_400_BAD_REQUEST)
 
         lookups = Q()
         for field, value in data.items():
@@ -546,7 +546,7 @@ def create(self, request):
         serializer = UpdatePackagesSerializer(data=request.data)
 
         if not serializer.is_valid():
-            return Response({'errors': serializer.errors}, status=400)
+            return Response({'errors': serializer.errors}, status=status.HTTP_400_BAD_REQUEST)
 
         validated_data = serializer.validated_data
         packages = validated_data.get('purls', [])
@@ -812,7 +812,7 @@ def list(self, request, format=None):
             return Response(
                 {'errors': serializer.errors},
                 status=status.HTTP_400_BAD_REQUEST,
-                )
+            )
 
         validated_data = serializer.validated_data
         purl = validated_data.get('purl')
@@ -847,7 +847,7 @@ def list(self, request, format=None):
             message = {
                 'status': f'error(s) occurred when fetching metadata for {purl}: {errors}'
             }
-            return Response(message)
+            return Response(message, status=status.HTTP_400_BAD_REQUEST)
 
         for package in packages:
             get_source_package_and_add_to_package_set(package)
@@ -960,7 +960,7 @@ def _reindex_package(package, reindexed_packages, **kwargs):
         serializer = self.serializer_class(data=request.data)
 
         if not serializer.is_valid():
-            return Response({'errors': serializer.errors}, status=400)
+            return Response({'errors': serializer.errors}, status=status.HTTP_400_BAD_REQUEST)
 
         validated_data = serializer.validated_data
         packages = validated_data.get('packages', [])
@@ -1065,7 +1065,7 @@ def reindex_metadata(self, request, *args, **kwargs):
             return Response(
                 {'errors': serializer.errors},
                 status=status.HTTP_400_BAD_REQUEST,
-                )
+            )
 
         validated_data = serializer.validated_data
         purl = validated_data.get('purl')
@@ -1097,7 +1097,7 @@ def reindex_metadata(self, request, *args, **kwargs):
             message = {
                 'status': f'error(s) occurred when fetching metadata for {purl}: {errors}'
             }
-            return Response(message)
+            return Response(message, status=status.HTTP_400_BAD_REQUEST)
 
         serializer = PackageAPISerializer(packages, many=True, context={'request': request})
         return Response(serializer.data)
@@ -1169,7 +1169,7 @@ def list(self, request):
             package_url = PackageURL.from_string(purl)
         except ValueError:
             serializer = PurlValidateResponseSerializer(response, context={'request': request})
-            return Response(serializer.data)
+            return Response(serializer.data, status=status.HTTP_400_BAD_REQUEST)
 
         response['valid'] = True
         response["message"] = message_valid
Collect API endpoint response status code
This endpoint returns a 200 status code on error. For example: `/api/collect/?purl=pkg:npm/[email protected]`

```
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "status": "error(s) occurred when fetching metadata for pkg:npm/[email protected]: Package does not exist on npmjs: pkg:npm/[email protected]"
}
```

We should have an appropriate status code to make things easier on the consumer side.

By the way, calling `/api/collect/` without a purl returns a proper 400 status code:

```
HTTP 400 Bad Request
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "errors": {
        "purl": [
            "This field is required."
        ]
    }
}
```
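For illustration, the fix pattern is the standard Django REST Framework one: pass an explicit `status` to `Response` so the error payload no longer ships with the default 200. The sketch below is hypothetical -- the view class and the `fetch_metadata_errors` helper are stand-ins, not the actual purldb code (the real change is in the patch above).

```python
# Hypothetical sketch of returning an explicit 400 instead of the
# DRF default 200 when metadata fetching fails.
from rest_framework import status
from rest_framework.response import Response
from rest_framework.viewsets import ViewSet


def fetch_metadata_errors(purl):
    # Stand-in for the real metadata collection; returns an error string
    # when the package cannot be fetched, None otherwise.
    return f"Package does not exist on npmjs: {purl}"


class CollectViewSet(ViewSet):  # class name assumed for illustration
    def list(self, request):
        purl = request.query_params.get("purl")
        errors = fetch_metadata_errors(purl)
        if errors:
            message = {
                "status": f"error(s) occurred when fetching metadata for {purl}: {errors}"
            }
            # Response(message) alone would return 200 OK; the explicit
            # status gives API consumers a machine-checkable failure signal.
            return Response(message, status=status.HTTP_400_BAD_REQUEST)
        return Response({"status": "ok"})
```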
2024-07-22T18:56:22
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-450
cc97e7032e12b55a4db7347517da0a6a20f8693f
diff --git a/README.rst b/README.rst index 90eef783..9db5c8bd 100644 --- a/README.rst +++ b/README.rst @@ -1,6 +1,6 @@ The purldb ========== -This repo consists of four main tools: +This repo consists of these main tools: - PackageDB that is the reference model (based on ScanCode toolkit) that contains package data with purl (Package URLs) being a first @@ -10,6 +10,8 @@ This repo consists of four main tools: matching - MatchCode.io that provides package matching functionalities for codebases - ClearCode that contains utilities to mine Clearlydefined for package data +- purldb-toolkit CLI utility and library to use the PurlDB, its API and various + related libraries. These are designed to be used first for reference such that one can query for packages by purl and validate purl existence. @@ -191,37 +193,3 @@ To run PurlDB and Matchcode.io with Docker: docker compose -f docker-compose.yml up -d docker compose -f docker-compose.matchcodeio.yml up -d - -Funding -------- - -This project was funded through the NGI Assure Fund https://nlnet.nl/assure, a -fund established by NLnet https://nlnet.nl/ with financial support from the -European Commission's Next Generation Internet programme, under the aegis of DG -Communications Networks, Content and Technology under grant agreement No 957073. - -This project is also funded through grants from the Google Summer of Code -program, continuing support and sponsoring from nexB Inc. and generous -donations from multiple sponsors. - - -License -------- - -Copyright (c) nexB Inc. and others. All rights reserved. - -purldb is a trademark of nexB Inc. - -SPDX-License-Identifier: Apache-2.0 AND CC-BY-SA-4.0 - -purldb software is licensed under the Apache License version 2.0. - -purldb data is licensed collectively under CC-BY-SA-4.0. - -See https://www.apache.org/licenses/LICENSE-2.0 for the license text. - -See https://creativecommons.org/licenses/by-sa/4.0/legalcode for the license text. - -See https://github.com/nexB/purldb for support or download. - -See https://aboutcode.org for more information about nexB OSS projects. diff --git a/docs/scripts/sphinx_build_link_check.sh b/docs/scripts/sphinx_build_link_check.sh old mode 100644 new mode 100755 diff --git a/docs/source/getting-started/contribute.rst b/docs/source/getting-started/contribute.rst index 1eedd60a..8f79b6e6 100644 --- a/docs/source/getting-started/contribute.rst +++ b/docs/source/getting-started/contribute.rst @@ -1,5 +1,5 @@ -Contirbute -=========== +Contribute +========== Documentation to support code and documentation contributions to purldb. diff --git a/docs/source/how-to-guides/index.rst b/docs/source/how-to-guides/index.rst index 47d33636..96414eb6 100644 --- a/docs/source/how-to-guides/index.rst +++ b/docs/source/how-to-guides/index.rst @@ -2,10 +2,7 @@ How-To-Guides ============= Here are the various how-to guides across various purldb projects to guide you -thourgh specifica use cases: - -- Code matching with Matchcode -- Getting symbols from a package (or a PURL) +through specific use cases: .. toctree:: :maxdepth: 2 diff --git a/docs/source/index.rst b/docs/source/index.rst index 910f1208..927c1b06 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -65,6 +65,16 @@ How-To documents explain how to accomplish specific tasks. ---- +Miscellaneous +------------- + +.. 
toctree:: + :maxdepth: 2 + + miscellaneous/index + +---- + Indices and tables ================== diff --git a/docs/source/miscellaneous/funding.rst b/docs/source/miscellaneous/funding.rst new file mode 100644 index 00000000..2b269bb5 --- /dev/null +++ b/docs/source/miscellaneous/funding.rst @@ -0,0 +1,11 @@ +Funding +======= + +This project was funded through the NGI Assure Fund https://nlnet.nl/assure, a +fund established by NLnet https://nlnet.nl/ with financial support from the +European Commission's Next Generation Internet programme, under the aegis of DG +Communications Networks, Content and Technology under grant agreement No 957073. + +This project is also funded through grants from the Google Summer of Code +program, continuing support and sponsoring from nexB Inc. and generous +donations from multiple sponsors. diff --git a/docs/source/miscellaneous/index.rst b/docs/source/miscellaneous/index.rst new file mode 100644 index 00000000..b5a22d52 --- /dev/null +++ b/docs/source/miscellaneous/index.rst @@ -0,0 +1,9 @@ +Miscellaneous +============= + +.. toctree:: + :maxdepth: 2 + + testing + funding + license diff --git a/docs/source/miscellaneous/license.rst b/docs/source/miscellaneous/license.rst new file mode 100644 index 00000000..62dd2645 --- /dev/null +++ b/docs/source/miscellaneous/license.rst @@ -0,0 +1,20 @@ +License +======= + +Copyright (c) nexB Inc. and others. All rights reserved. + +purldb is a trademark of nexB Inc. + +SPDX-License-Identifier: Apache-2.0 AND CC-BY-SA-4.0 + +purldb software is licensed under the Apache License version 2.0. + +purldb data is licensed collectively under CC-BY-SA-4.0. + +See https://www.apache.org/licenses/LICENSE-2.0 for the license text. + +See https://creativecommons.org/licenses/by-sa/4.0/legalcode for the license text. + +See https://github.com/nexB/purldb for support or download. + +See https://aboutcode.org for more information about nexB OSS projects. diff --git a/purldb-toolkit/README.rst b/purldb-toolkit/README.rst index 4ac94a98..07ed9bf0 100644 --- a/purldb-toolkit/README.rst +++ b/purldb-toolkit/README.rst @@ -1,135 +1,543 @@ purldb-toolkit ============== -purldb-toolkit is command line utility and library to use the PurlDB, its API and various related libraries. +.. contents:: :local: + :depth: 3 + +purldb-toolkit is a command line utility and library to use the PurlDB, its API and various related libraries. The ``purlcli`` command acts as a client to the PurlDB REST API end point(s) to expose PURL services. -It serves both as a tool, as a library and as an example on how to use the services programmatically. +It serves as a tool, a library and an example of how to use the services programmatically. + - Installation ------------ +.. code-block:: console + pip install purldb-toolkit Usage ----- -Use this command to get basic help:: +Use this command to get basic help: + +.. code-block:: console $ purlcli --help Usage: purlcli [OPTIONS] COMMAND [ARGS]... - - Return information from a PURL. - + + Return information from a PURL. + Options: - --help Show this message and exit. - + --help Show this message and exit. + Commands: - metadata Given one or more PURLs, for each PURL, return a mapping of... - urls Given one or more PURLs, for each PURL, return a list of all... - validate Check the syntax of one or more PURLs. - versions Given one or more PURLs, return a list of all known versions... + metadata Given one or more PURLs, for each PURL, return a mapping of... 
+ urls Given one or more PURLs, for each PURL, return a list of all... + validate Check the syntax and upstream repo status of one or more PURLs. + versions Given one or more PURLs, return a list of all known versions... And the following subcommands: -- Validate a PURL:: +``validate``: validate a PURL +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: console $ purlcli validate --help Usage: purlcli validate [OPTIONS] - - Check the syntax of one or more PURLs. - + + Check the syntax and upstream repo status of one or more PURLs. + Options: --purl TEXT PackageURL or PURL. --output FILENAME Write validation output as JSON to FILE. [required] --file FILENAME Read a list of PURLs from a FILE, one per line. --help Show this message and exit. +Examples +######## + +**Submit multiple PURLs using the command line:** + +.. code-block:: console + + purlcli validate --purl pkg:npm/[email protected] --purl pkg:nginx/[email protected] --output <path/to/output.json> + +*Sample output:* + +.. code-block:: json + + { + "headers": [ + { + "tool_name": "purlcli", + "tool_version": "0.2.0", + "options": { + "command": "validate", + "--purl": [ + "pkg:npm/[email protected]", + "pkg:nginx/[email protected]" + ], + "--file": null, + "--output": "<path/to/output.json>" + }, + "errors": [], + "warnings": [ + "'check_existence' is not supported for 'pkg:nginx/[email protected]'" + ] + } + ], + "packages": [ + { + "purl": "pkg:npm/[email protected]", + "valid": true, + "exists": true, + "message": "The provided Package URL is valid, and the package exists in the upstream repo." + }, + { + "purl": "pkg:nginx/[email protected]", + "valid": true, + "exists": null, + "message": "The provided PackageURL is valid, but `check_existence` is not supported for this package type." + } + ] + } + + +**Submit multiple PURLs using a .txt file:** + +.. code-block:: console + + purlcli validate --file <path/to/output.txt> --output <path/to/output.json> + +*Sample input.txt:* + +.. code-block:: text + + pkg:npm/[email protected] + pkg:nginx/[email protected] + + +Details +####### + +``validate`` calls the ``validate/`` endpoint of the `purldb API <https://public.purldb.io/api/>`_. + +See also https://public.purldb.io/api/docs/#/validate. + + +---- + + +``versions``: collect package versions for a PURL +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: console -- Collect package versions for a PURL:: - $ purlcli versions --help Usage: purlcli versions [OPTIONS] - + Given one or more PURLs, return a list of all known versions for each PURL. - - Version information is not needed in submitted PURLs and if included will be - removed before processing. - + Options: --purl TEXT PackageURL or PURL. --output FILENAME Write versions output as JSON to FILE. [required] --file FILENAME Read a list of PURLs from a FILE, one per line. --help Show this message and exit. +Examples +######## -- Collect package metadata for a PURL:: +**Submit multiple PURLs using the command line:** + +.. code-block:: console + + purlcli versions --purl pkg:npm/canonical-path --purl pkg:nginx/nginx --output <path/to/output.json> + +*Sample output:* + +.. 
code-block:: json + + { + "headers": [ + { + "tool_name": "purlcli", + "tool_version": "0.2.0", + "options": { + "command": "versions", + "--purl": [ + "pkg:npm/canonical-path", + "pkg:nginx/nginx" + ], + "--file": null, + "--output": "<path/to/output.json>" + }, + "errors": [], + "warnings": [ + "'pkg:nginx/nginx' not supported with `versions` command" + ] + } + ], + "packages": [ + { + "purl": "pkg:npm/[email protected]", + "version": "0.0.1", + "release_date": "2013-12-19" + }, + { + "purl": "pkg:npm/[email protected]", + "version": "0.0.2", + "release_date": "2013-12-19" + }, + { + "purl": "pkg:npm/[email protected]", + "version": "1.0.0", + "release_date": "2018-10-24" + } + ] + } + + +Details +####### + +``versions`` calls ``versions()`` from `fetchcode/package_versions.py`. + +Version information is not needed in submitted PURLs and, if included, will be removed before processing. + + +---- + + +``metadata``: collect package metadata for a PURL +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: console $ purlcli metadata --help Usage: purlcli metadata [OPTIONS] - + Given one or more PURLs, for each PURL, return a mapping of metadata fetched from the fetchcode package.py info() function. - + Options: --purl TEXT PackageURL or PURL. --output FILENAME Write meta output as JSON to FILE. [required] --file FILENAME Read a list of PURLs from a FILE, one per line. - --unique Return data only for unique PURLs. --help Show this message and exit. - -- Collect package URLs for a PURL:: +Examples +######## + +**Submit multiple PURLs using the command line:** + +.. code-block:: console + + purlcli metadata --purl pkg:openssl/[email protected] --purl pkg:nginx/[email protected] --purl pkg:gnu/[email protected] --output <path/to/output.json> + +*Sample output:* + +.. 
code-block:: json + + { + "headers": [ + { + "tool_name": "purlcli", + "tool_version": "0.2.0", + "options": { + "command": "metadata", + "--purl": [ + "pkg:openssl/[email protected]", + "pkg:nginx/[email protected]", + "pkg:gnu/[email protected]" + ], + "--file": null, + "--output": "<path/to/output.json>" + }, + "errors": [], + "warnings": [ + "'check_existence' is not supported for 'pkg:openssl/[email protected]'", + "'pkg:nginx/[email protected]' not supported with `metadata` command", + "'check_existence' is not supported for 'pkg:gnu/[email protected]'" + ] + } + ], + "packages": [ + { + "purl": "pkg:openssl/[email protected]", + "type": "openssl", + "namespace": null, + "name": "openssl", + "version": "3.0.6", + "qualifiers": {}, + "subpath": null, + "primary_language": "C", + "description": null, + "release_date": "2022-10-11T12:39:09", + "parties": [], + "keywords": [], + "homepage_url": "https://www.openssl.org", + "download_url": "https://github.com/openssl/openssl/archive/refs/tags/openssl-3.0.6.tar.gz", + "api_url": "https://api.github.com/repos/openssl/openssl", + "size": null, + "sha1": null, + "md5": null, + "sha256": null, + "sha512": null, + "bug_tracking_url": "https://github.com/openssl/openssl/issues", + "code_view_url": "https://github.com/openssl/openssl", + "vcs_url": "git://github.com/openssl/openssl.git", + "copyright": null, + "license_expression": null, + "declared_license": "Apache-2.0", + "notice_text": null, + "root_path": null, + "dependencies": [], + "contains_source_code": null, + "source_packages": [], + "repository_homepage_url": null, + "repository_download_url": null, + "api_data_url": null + }, + { + "purl": "pkg:gnu/[email protected]", + "type": "gnu", + "namespace": null, + "name": "glibc", + "version": "2.38", + "qualifiers": {}, + "subpath": null, + "primary_language": null, + "description": null, + "release_date": "2023-07-31T17:34:00", + "parties": [], + "keywords": [], + "homepage_url": "https://ftp.gnu.org/pub/gnu/glibc/", + "download_url": "https://ftp.gnu.org/pub/gnu/glibc/glibc-2.38.tar.gz", + "api_url": null, + "size": null, + "sha1": null, + "md5": null, + "sha256": null, + "sha512": null, + "bug_tracking_url": null, + "code_view_url": null, + "vcs_url": null, + "copyright": null, + "license_expression": null, + "declared_license": null, + "notice_text": null, + "root_path": null, + "dependencies": [], + "contains_source_code": null, + "source_packages": [], + "repository_homepage_url": null, + "repository_download_url": null, + "api_data_url": null + } + ] + } + + +Details +####### + +``metadata`` calls ``info()`` from `fetchcode/package.py`. + +The intended output for each PURL type supported by the ``metadata`` command is + +- an input PURL with a version: output the metadata for the input version +- an input PURL without a version: output a list of the metadata for all versions + +The output of the various PURL types currently supported in `fetchcode/package.py` varies from type to type at the moment -- the underlying functions will be updated as needed so that all produce the intended output for input PURLs with and without a version. + + +---- + + +``urls``: collect package URLs for a PURL +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: console $ purlcli urls --help Usage: purlcli urls [OPTIONS] - + Given one or more PURLs, for each PURL, return a list of all known URLs fetched from the packageurl-python purl2url.py code. - + Options: --purl TEXT PackageURL or PURL. --output FILENAME Write urls output as JSON to FILE. 
[required] --file FILENAME Read a list of PURLs from a FILE, one per line. - --unique Return data only for unique PURLs. --head Validate each URL's existence with a head request. --help Show this message and exit. - -Funding -------- - -This project was funded through the NGI Assure Fund https://nlnet.nl/assure, a -fund established by NLnet https://nlnet.nl/ with financial support from the -European Commission's Next Generation Internet programme, under the aegis of DG -Communications Networks, Content and Technology under grant agreement No 957073. - -This project is also funded through grants from the Google Summer of Code -program, continuing support and sponsoring from nexB Inc. and generous -donations from multiple sponsors. - - -License -------- - -Copyright (c) nexB Inc. and others. All rights reserved. - -purldb is a trademark of nexB Inc. - -SPDX-License-Identifier: Apache-2.0 AND CC-BY-SA-4.0 - -purldb software is licensed under the Apache License version 2.0. - -purldb data is licensed collectively under CC-BY-SA-4.0. - -See https://www.apache.org/licenses/LICENSE-2.0 for the license text. - -See https://creativecommons.org/licenses/by-sa/4.0/legalcode for the license text. - -See https://github.com/nexB/purldb for support or download. - -See https://aboutcode.org for more information about nexB OSS projects. - +Examples +######## + +**Submit multiple PURLs using the command line:** + +.. code-block:: console + + purlcli urls --purl pkg:npm/[email protected] --purl pkg:nginx/[email protected] --purl pkg:rubygems/[email protected] --output <path/to/output.json> + +*Sample output:* + +.. code-block:: json + + { + "headers": [ + { + "tool_name": "purlcli", + "tool_version": "0.2.0", + "options": { + "command": "urls", + "--purl": [ + "pkg:npm/[email protected]", + "pkg:nginx/[email protected]", + "pkg:rubygems/[email protected]" + ], + "--file": null, + "--output": "<path/to/output.json>" + }, + "errors": [], + "warnings": [ + "'pkg:nginx/[email protected]' not supported with `urls` command", + "'check_existence' is not supported for 'pkg:rubygems/[email protected]'" + ] + } + ], + "packages": [ + { + "purl": "pkg:npm/[email protected]", + "download_url": "http://registry.npmjs.org/canonical-path/-/canonical-path-1.0.0.tgz", + "inferred_urls": [ + "https://www.npmjs.com/package/canonical-path/v/1.0.0", + "http://registry.npmjs.org/canonical-path/-/canonical-path-1.0.0.tgz" + ], + "repository_download_url": null, + "repository_homepage_url": "https://www.npmjs.com/package/canonical-path/v/1.0.0" + }, + { + "purl": "pkg:rubygems/[email protected]", + "download_url": "https://rubygems.org/downloads/rails-7.0.0.gem", + "inferred_urls": [ + "https://rubygems.org/gems/rails/versions/7.0.0", + "https://rubygems.org/downloads/rails-7.0.0.gem" + ], + "repository_download_url": null, + "repository_homepage_url": "https://rubygems.org/gems/rails/versions/7.0.0" + } + ] + } + + +**Include head and get requests:** + +``--head`` + +.. code-block:: console + + purlcli urls --purl pkg:npm/[email protected] --purl pkg:nginx/[email protected] --purl pkg:rubygems/[email protected] --output <path/to/output.json> --head + +*Sample output:* + +.. 
code-block:: json + + { + "headers": [ + { + "tool_name": "purlcli", + "tool_version": "0.2.0", + "options": { + "command": "urls", + "--purl": [ + "pkg:npm/[email protected]", + "pkg:nginx/[email protected]", + "pkg:rubygems/[email protected]" + ], + "--file": null, + "--head": true, + "--output": "<stdout>" + }, + "errors": [], + "warnings": [ + "'pkg:nginx/[email protected]' not supported with `urls` command", + "'check_existence' is not supported for 'pkg:rubygems/[email protected]'" + ] + } + ], + "packages": [ + { + "purl": "pkg:npm/[email protected]", + "download_url": { + "url": "http://registry.npmjs.org/canonical-path/-/canonical-path-1.0.0.tgz", + "get_request_status_code": 200, + "head_request_status_code": 301 + }, + "inferred_urls": [ + { + "url": "https://www.npmjs.com/package/canonical-path/v/1.0.0", + "get_request_status_code": 200, + "head_request_status_code": 200 + }, + { + "url": "http://registry.npmjs.org/canonical-path/-/canonical-path-1.0.0.tgz", + "get_request_status_code": 200, + "head_request_status_code": 301 + } + ], + "repository_download_url": { + "url": null, + "get_request_status_code": "N/A", + "head_request_status_code": "N/A" + }, + "repository_homepage_url": { + "url": "https://www.npmjs.com/package/canonical-path/v/1.0.0", + "get_request_status_code": 200, + "head_request_status_code": 200 + } + }, + { + "purl": "pkg:rubygems/[email protected]", + "download_url": { + "url": "https://rubygems.org/downloads/rails-7.0.0.gem", + "get_request_status_code": 200, + "head_request_status_code": 200 + }, + "inferred_urls": [ + { + "url": "https://rubygems.org/gems/rails/versions/7.0.0", + "get_request_status_code": 200, + "head_request_status_code": 200 + }, + { + "url": "https://rubygems.org/downloads/rails-7.0.0.gem", + "get_request_status_code": 200, + "head_request_status_code": 200 + } + ], + "repository_download_url": { + "url": null, + "get_request_status_code": "N/A", + "head_request_status_code": "N/A" + }, + "repository_homepage_url": { + "url": "https://rubygems.org/gems/rails/versions/7.0.0", + "get_request_status_code": 200, + "head_request_status_code": 200 + } + } + ] + } + + +Details +####### + +None atm.
Create PURL services documentation
We should create and publish on RTD a comprehensive installation and usage documentation for the various PURL-based services of PurlDB. This should be backed by a publicly accessible demo system

- [x] https://github.com/nexB/purldb/issues/445 with PR in https://github.com/nexB/purldb/pull/456
- [x] https://github.com/nexB/purldb/issues/446 merged with https://github.com/nexB/purldb/pull/450
- [x] https://github.com/nexB/purldb/issues/448
- [x] https://github.com/nexB/purldb/issues/457
We would need some doc for the purlcli and some doc for each of the endpoints @johnmhoran @AyanSinhaMahapatra is this something you guys could get started? Overall we would need some doc in the same style and structure as ScanCode.io?

@pombredanne I just saw this. Yes, I'd be happy to work with @AyanSinhaMahapatra on this -- we can start by finding a time to discuss general structure and content and see how the ScanCode.io RTD informs our work. I'll also need to learn what the various endpoints do and how we use them. Unless you suggest otherwise, I would like to get a bit more of my PURL CLI work out of the way first since I'm still wrestling with the `urls` command, have not yet fully changed the four existing commands to the SCTK-like data structure, and have more commands to add (`scan` and `git` -- and perhaps others?).

Some status update:

- We created and published the API definitions documentation see code https://github.com/nexB/purldb/commit/cfc1c9676f9bb6a893bbb3ff573c726033e0e404
- This is deployed on public.purldb.io at `/api/docs` for the docs and the schema is at `/api/`. See the live API doc https://private.purldb.io/api/docs/ and schema https://private.purldb.io/api/schema
- We also updated the built-in documentation of end points accordingly

@pombredanne @AyanSinhaMahapatra As noted a bit earlier in [purldb PR 436](https://github.com/nexB/purldb/pull/436), I've added details to the existing purldb-toolkit RTD page re the use of the four current PURLCLI tools, with a mix of command and output examples and several related notes. The .rst is here: https://github.com/nexB/purldb/blob/f49eff87ae9a4226584096498836de7d5015f6a2/purldb-toolkit/README.rst

The initial description above for this issue also says `This should be backed by a publicly accessible demo system`. What is this? Any examples or articles or a design doc?
2024-05-30T19:55:15
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-429
b992c3c0e164b18c0af76dae984c4e58cfc037f2
diff --git a/purldb-toolkit/src/purldb_toolkit/purlcli.py b/purldb-toolkit/src/purldb_toolkit/purlcli.py index 0269ce97..a90e2c02 100644 --- a/purldb-toolkit/src/purldb_toolkit/purlcli.py +++ b/purldb-toolkit/src/purldb_toolkit/purlcli.py @@ -7,14 +7,21 @@ # See https://aboutcode.org for more information about nexB OSS projects. # +from enum import Enum import json import logging import os import re +import sys from importlib.metadata import version from pathlib import Path +import time +from itertools import groupby +from typing import NamedTuple +from urllib.parse import urljoin import click +from dataclasses import dataclass import requests from fetchcode.package import info from fetchcode.package_versions import SUPPORTED_ECOSYSTEMS, versions @@ -885,5 +892,286 @@ def clear_log_file(): os.remove(log_file) +class D2DPackage(NamedTuple): + purl: str + package_content: str + download_url: str + + +def get_packages_by_set(purl, purldb_api_url): + """ + Yield list of D2DPackages for each package_set of a purl. + """ + package_api_url = get_package(purl, purldb_api_url) + package_api_url = package_api_url.get("results")[0] + if not package_api_url: + return + for package_set in package_api_url.get("package_sets") or []: + packages = [] + for package_api_url in package_set.get("packages") or []: + package_response = requests.get(package_api_url) + package_data = package_response.json() + p = D2DPackage( + purl=package_data.get("purl"), + package_content=package_data.get("package_content"), + download_url=package_data.get("download_url"), + ) + packages.append(p) + yield packages + + +class PackagePair(NamedTuple): + from_package: D2DPackage + to_package: D2DPackage + + +# TODO: Keep in sync with the packagedb.models.PackageContentType +class PackageContentType(Enum): + SOURCE_REPO = 3, 'source_repo' + SOURCE_ARCHIVE = 4, 'source_archive' + BINARY = 5, 'binary' + +def generate_d2d_package_pairs(from_packages, to_packages): + """ + Yield PackagePair objects based on all the combinations of packages in the + from_packages and to_packages lists. 
+ """ + + for from_package in from_packages: + for to_package in to_packages: + yield PackagePair(from_package=from_package, to_package=to_package) + + +def get_package_pairs_for_d2d(packages): + packages = sorted(packages, key=lambda p: p.package_content) + + packages_by_content = {} + + for content, content_packages in groupby(packages, key=lambda p: p.package_content): + packages_by_content[content] = list(content_packages) + + source_repo_packages = packages_by_content.get(PackageContentType.SOURCE_REPO.name.lower(), []) + source_archive_packages = packages_by_content.get(PackageContentType.SOURCE_ARCHIVE.name.lower(), []) + binary_packages = packages_by_content.get(PackageContentType.BINARY.name.lower(), []) + + yield from generate_d2d_package_pairs(from_packages=source_repo_packages, to_packages=binary_packages) + yield from generate_d2d_package_pairs(from_packages=source_archive_packages, to_packages=binary_packages) + yield from generate_d2d_package_pairs(from_packages=source_repo_packages, to_packages=source_archive_packages) + + + [email protected](name="d2d") [email protected]( + "--from-purl", + required=True, + help="PURL for the source or `from` package.", +) [email protected]( + "--to-purl", + required=True, + help="PURL for the destination or `to` package.", +) [email protected]( + "--output", + type=click.File(mode="w", encoding="utf-8"), + required=True, + help="Write results as JSON to FILE.", +) [email protected]( + "--purldb-api-url", + required=True, + default="https://public.purldb.io/api", + help="", +) [email protected]( + "--matchcode-api-url", + required=True, + default="https://matchcode.io/api/d2d", + help="PackageURL or PURL.", +) +def d2d(from_purl, to_purl, output, purldb_api_url, matchcode_api_url): + """ + Run a deploy-to-devel analysis using the "from" PURL and "to" PURL. + Wait for the analysis to complete and save results to the ``output`` FILE. + """ + run_id, project_url = map_deploy_to_devel(from_purl, to_purl, purldb_api_url, matchcode_api_url) + while True: + # TODO: Use a better progress indicator. + sys.stderr.write(".") + data = get_run_data(matchcode_api_url=matchcode_api_url,run_id=run_id) + if data.get("status") != "running": + break + time.sleep(5) + json.dump(get_project_data(project_url=project_url), output, indent=4) + + +def get_run_data(matchcode_api_url, run_id): + url = urljoin(matchcode_api_url, f"runs/{run_id}/") + response = requests.get(url) + data = response.json() + return data + + +def get_project_data(project_url): + response = requests.get(project_url) + data = response.json() + return data + + +@dataclass +class D2DProject: + project_url: str + done: bool + package_pair: PackagePair + result: dict + run_id: str + + [email protected](name="d2d-purl-set") [email protected]( + "--purl", + required=True, + help="Perform a deploy-to-devel on all the PURLs in the set of this PURL.", +) [email protected]( + "--output", + type=click.File(mode="w", encoding="utf-8"), + required=True, + help="Write results as JSON to FILE.", +) [email protected]( + "--purldb-api-url", + required=True, + default="https://public.purldb.io/api/", + help="", +) [email protected]( + "--matchcode-api-url", + required=True, + default="https://matchcode.io/api/", + help="", +) +def d2d_purl_set(purl, output, purldb_api_url, matchcode_api_url): + """ + Run a deploy-to-devel analysis using all the PURLs in the set of this PURL. . + Wait for the analysis to complete and save results to the ``output`` FILE. 
+ """ + projects: list[D2DProject] = [] + for d2d_packages in get_packages_by_set(purl, purldb_api_url): + package_pairs = get_package_pairs_for_d2d(d2d_packages) + for package_pair in package_pairs: + print(f"Running D2D for {package_pair.from_package.purl} -> {package_pair.to_package.purl}") + run_id, project_url = map_deploy_to_devel( + from_purl=package_pair.from_package.purl, + to_purl=package_pair.to_package.purl, + purldb_api_url=purldb_api_url, + matchcode_api_url=matchcode_api_url, + ) + projects.append(D2DProject(project_url=project_url, done=False, run_id=run_id, package_pair=package_pair, result={})) + + while True: + for project in projects: + if project.done: + continue + # TODO: Use a better progress indicator. + sys.stderr.write(".") + data = get_run_data(matchcode_api_url=matchcode_api_url,run_id=run_id) + if data.get("status") != "running": + project.done = True + project.result = get_project_data(project_url=project.project_url) + time.sleep(1) + time.sleep(5) + if all(project.done for project in projects): + break + + d2d_results = [] + for project in projects: + d2d_results.append({ + "results" : { + "from": { + "purl": project.package_pair.from_package.purl, + "package_content": project.package_pair.from_package.package_content, + "download_url": project.package_pair.from_package.download_url + }, + "to": { + "purl": project.package_pair.to_package.purl, + "package_content": project.package_pair.to_package.package_content, + "download_url": project.package_pair.to_package.download_url + }, + "d2d_result": project.result + } + }) + json.dump(d2d_results, output, indent=4) + + +def map_deploy_to_devel(from_purl, to_purl, purldb_api_url, matchcode_api_url): + """ + Return the matchcode.io d2d run ID, project URL for a given pair of PURLs. + Raise an exception if we can not find download URLs for the PURLs. + """ + from_url, to_url = get_download_urls(from_purl=from_purl, to_purl=to_purl, purldb_api_url=purldb_api_url) + + if not from_url: + raise Exception(f"Could not find download URLs for the `from` PURL {from_purl}.") + + if not to_url: + raise Exception(f"Could not find download URLs for the `to` PURL {to_url}.") + + from_url = f"{from_url}#from" + to_url = f"{to_url}#to" + + input_urls = [from_url, to_url] + + d2d_results = get_d2d_results(matchcode_api_url, input_urls) + project_url = d2d_results.get("url") or None + run_url = d2d_results.get("runs") or [] + + if not run_url: + raise Exception(f"Could not find a run URL for the input URLs {input_urls}.") + + return run_url[0], project_url + + +def get_d2d_results(matchcode_api_url, input_urls): + url = urljoin(matchcode_api_url, "d2d/") + headers = {'Content-Type': 'application/json'} + d2d_results = requests.post(url=url, data=json.dumps({"input_urls": input_urls, "runs": []}), headers=headers).json() + return d2d_results + + +def get_download_urls(from_purl, to_purl, purldb_api_url): + """ + Return a tuple of download URLs for a given "from" and "to" PURL. + """ + from_url = get_download_url(from_purl, purldb_api_url) + to_url = get_download_url(to_purl, purldb_api_url) + + return from_url, to_url + + +def get_download_url(purl, purldb_api_url): + """ + Return the download URL for a given PURL or None. + """ + package = get_package(purl, purldb_api_url) + package = package.get("results") or [] + if package: + package = package[0] + else: + return None + return package.get("download_url") or None + + +def get_package(purl, purldb_api_url): + """ + Return a package mapping for a given PURL or empty dict. 
+ """ + url = urljoin(purldb_api_url, f"packages/?purl={purl}") + response = requests.get(url=url) + data = response.json() + return data or {} + + + if __name__ == "__main__": purlcli()
Call purl2vcs in collect API #418
closes: #418
2024-05-09T11:18:27
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-348
b2c8013001dc02adb2dab4271392d56654aea09a
diff --git a/purldb-toolkit/src/purldb_toolkit/purlcli.py b/purldb-toolkit/src/purldb_toolkit/purlcli.py index e1809f1d..0269ce97 100644 --- a/purldb-toolkit/src/purldb_toolkit/purlcli.py +++ b/purldb-toolkit/src/purldb_toolkit/purlcli.py @@ -96,16 +96,18 @@ def get_metadata_details(purls, output, file, unique, command_name): if not purl: continue + purl_data = {} + purl_data["purl"] = purl + metadata_purl = check_metadata_purl(purl) if command_name == "metadata" and metadata_purl: metadata_warnings[purl] = metadata_purl continue - for release in list(info(purl)): - release_detail = release.to_dict() - release_detail.move_to_end("purl", last=False) - metadata_details["packages"].append(release_detail) + metadata_collection = collect_metadata(purl) + purl_data["metadata"] = metadata_collection + metadata_details["packages"].append(purl_data) metadata_details["headers"] = construct_headers( purls=purls, @@ -120,6 +122,19 @@ def get_metadata_details(purls, output, file, unique, command_name): return metadata_details +def collect_metadata(purl): + """ + Return a list of release-based metadata collections from fetchcode/package.py. + """ + collected_metadata = [] + for release in list(info(purl)): + release_detail = release.to_dict() + release_detail.move_to_end("purl", last=False) + collected_metadata.append(release_detail) + + return collected_metadata + + def check_metadata_purl(purl): """ Return a variable identifying the message for printing to the console by @@ -156,6 +171,9 @@ def check_metadata_purl(purl): if results["exists"] == False: return "not_in_upstream_repo" + if results["exists"] == None: + return "check_existence_not_supported" + def normalize_purls(purls, unique): """ @@ -246,14 +264,15 @@ def construct_headers( "valid_but_not_supported": f"'{purl}' not supported with `{command_name}` command", "valid_but_not_fully_supported": f"'{purl}' not fully supported with `urls` command", "not_in_upstream_repo": f"'{purl}' does not exist in the upstream repo", + "check_existence_not_supported": f"'check_existence' is not supported for '{purl}'", } if command_name in ["metadata", "urls", "validate", "versions"]: purl_warning = purl_warnings.get(purl, None) + if purl_warning: warning = warning_text[purl_warning] warnings.append(warning) - print(warning) continue log_file = Path(LOG_FILE_LOCATION) @@ -580,14 +599,14 @@ def get_validate_details(purls, output, file, unique, command_name): validated_purl = check_validate_purl(purl) - if command_name == "urls" and validated_purl in [ + if command_name == "validate" and validated_purl in [ "validation_error", "not_valid", "valid_but_not_supported", "not_in_upstream_repo", + "check_existence_not_supported", ]: validate_warnings[purl] = validated_purl - continue if validated_purl: validate_details["packages"].append(validate_purl(purl)) @@ -624,6 +643,9 @@ def check_validate_purl(purl): if results["exists"] == True: return check_validation + if results["exists"] == None: + return "check_existence_not_supported" + def validate_purl(purl): """ @@ -742,7 +764,6 @@ def get_versions_details(purls, output, file, unique, command_name): purl_data = {} purl_data["purl"] = purl - purl_data["versions"] = [] versions_purl = check_versions_purl(purl) @@ -750,24 +771,9 @@ def get_versions_details(purls, output, file, unique, command_name): versions_warnings[purl] = versions_purl continue - for package_version in list(versions(purl)): - purl_version_data = {} - purl_version = package_version.value - - # We use `versions()` from fetchcode/package_versions.py, 
which - # keeps the version (if any) of the input PURL in its output, so - # "pkg:pypi/[email protected]" is returned as - # "pkg:pypi/[email protected]@0.1.0", "pkg:pypi/[email protected]@0.2.0" - # etc. Thus, we remove any string starting with `@` first. - raw_purl = purl = re.split("[@,]+", purl)[0] - nested_purl = raw_purl + "@" + f"{purl_version}" - - purl_version_data["purl"] = nested_purl - purl_version_data["version"] = f"{purl_version}" - purl_version_data["release_date"] = f"{package_version.release_date}" - - purl_data["versions"].append(purl_version_data) + version_collection = collect_versions(purl) + purl_data["versions"] = version_collection versions_details["packages"].append(purl_data) versions_details["headers"] = construct_headers( @@ -783,6 +789,32 @@ def get_versions_details(purls, output, file, unique, command_name): return versions_details +def collect_versions(purl): + """ + Return a list of version objects collected from fetchcode/package_versions.py. + """ + collected_versions = [] + for package_version in list(versions(purl)): + purl_version_data = {} + purl_version = package_version.value + + # We use `versions()` from fetchcode/package_versions.py, which + # keeps the version (if any) of the input PURL in its output, so + # "pkg:pypi/[email protected]" is returned as + # "pkg:pypi/[email protected]@0.1.0", "pkg:pypi/[email protected]@0.2.0" + # etc. Thus, we remove any string starting with `@` first. + raw_purl = purl = re.split("[@,]+", purl)[0] + nested_purl = raw_purl + "@" + f"{purl_version}" + + purl_version_data["purl"] = nested_purl + purl_version_data["version"] = f"{purl_version}" + purl_version_data["release_date"] = f"{package_version.release_date}" + + collected_versions.append(purl_version_data) + + return collected_versions + + def check_versions_purl(purl): """ Return a variable identifying the message for printing to the console by @@ -828,6 +860,9 @@ def check_versions_purl(purl): if results["exists"] == False: return "not_in_upstream_repo" + if results["exists"] == None: + return "check_existence_not_supported" + # This handles the conflict between the `validate`` endpoint (treats # both "pkg:deb/debian/2ping" and "pkg:deb/2ping" as valid) and # fetchcode.package_versions versions() (returns None for "pkg:deb/2ping").
Mock the live fetching in purldb-toolkit tests
As upstream data changes, tests using live fetching can fail -- we need to replace these with mocks of the responses we want to test.
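A minimal sketch of that approach using Python's `unittest.mock`; the patch target path and the canned payload are illustrative assumptions modeled on the `validate_purl()` function shown in the patches above, not actual purldb-toolkit test fixtures:

```python
# Sketch of mocking the live HTTP call in a purlcli test. The patch target
# and payload are assumptions, not actual purldb-toolkit fixtures.
from unittest import mock

from purldb_toolkit import purlcli


@mock.patch("purldb_toolkit.purlcli.requests.get")
def test_validate_purl_uses_mocked_response(mock_get):
    # Canned payload standing in for the live /api/validate/ response.
    mock_get.return_value.json.return_value = {
        "valid": True,
        "exists": True,
        "message": "The provided Package URL is valid, and the package exists in the upstream repo.",
        "purl": "pkg:pypi/[email protected]",
    }

    result = purlcli.validate_purl("pkg:pypi/[email protected]")

    # The test never hits the network; it exercises only our handling logic.
    assert result["valid"] is True
    mock_get.assert_called_once()
```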
2024-03-20T00:38:40
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-305
ae6a810b103b96825e0fda919ac47f798d565d56
diff --git a/purldb-toolkit/src/purldb_toolkit/purlcli.py b/purldb-toolkit/src/purldb_toolkit/purlcli.py index 290f2dd0..e1809f1d 100644 --- a/purldb-toolkit/src/purldb_toolkit/purlcli.py +++ b/purldb-toolkit/src/purldb_toolkit/purlcli.py @@ -8,8 +8,11 @@ # import json +import logging +import os import re from importlib.metadata import version +from pathlib import Path import click import requests @@ -18,7 +21,7 @@ from packageurl import PackageURL from packageurl.contrib import purl2url -from packagedb.package_managers import VERSION_API_CLASSES_BY_PACKAGE_TYPE +LOG_FILE_LOCATION = os.path.join(os.path.expanduser("~"), "purlcli.log") @click.group() @@ -82,33 +85,21 @@ def get_metadata_details(purls, output, file, unique, command_name): metadata_details["headers"] = [] metadata_details["packages"] = [] - normalized_purls = [] - input_purls = [] - if unique: - for purl in purls: - purl, normalized_purl = normalize_purl(purl) - normalized_purls.append((purl, normalized_purl)) - if normalized_purl not in input_purls: - input_purls.append(normalized_purl) - else: - input_purls = purls + metadata_warnings = {} + + input_purls, normalized_purls = normalize_purls(purls, unique) + + clear_log_file() for purl in input_purls: + purl = purl.strip() if not purl: continue metadata_purl = check_metadata_purl(purl) - if command_name == "metadata" and metadata_purl == "not_valid": - print(f"'{purl}' not valid") - continue - - if command_name == "metadata" and metadata_purl == "valid_but_not_supported": - print(f"'{purl}' not supported with `metadata` command") - continue - - if command_name == "metadata" and metadata_purl == "not_in_upstream_repo": - print(f"'{purl}' does not exist in the upstream repo") + if command_name == "metadata" and metadata_purl: + metadata_warnings[purl] = metadata_purl continue for release in list(info(purl)): @@ -123,21 +114,72 @@ def get_metadata_details(purls, output, file, unique, command_name): command_name=command_name, normalized_purls=normalized_purls, unique=unique, + purl_warnings=metadata_warnings, ) return metadata_details -def normalize_purl(purl): +def check_metadata_purl(purl): """ - Remove substrings that start with the '@', '?' or '#' separators. + Return a variable identifying the message for printing to the console by + get_metadata_details() if (1) the input PURL is invalid, (2) its type is not + supported by `metadata` or (3) its existence was not validated (e.g., + "does not exist in the upstream repo"). + + This message will also be reported by construct_headers() in the + `warnings` field of the `header` section of the JSON object returned by + the `metadata` command. """ - input_purl = purl - purl = purl.strip() - purl = re.split("[@,?,#,]+", purl)[0] - normalized_purl = purl + check_validation = validate_purl(purl) + if check_validation is None: + return "validation_error" + results = check_validation + + if results["valid"] == False: + return "not_valid" + + # This is manually constructed from a visual inspection of fetchcode/package.py. 
+ metadata_supported_ecosystems = [ + "bitbucket", + "cargo", + "github", + "npm", + "pypi", + "rubygems", + ] + metadata_purl = PackageURL.from_string(purl) + + if metadata_purl.type not in metadata_supported_ecosystems: + return "valid_but_not_supported" + + if results["exists"] == False: + return "not_in_upstream_repo" + + +def normalize_purls(purls, unique): + """ + If the command includes the `--unique` flag, take the list of input PURLs, + remove the portion of the PURL that starts with a PURL separator (`@`, `?` + or `#`), and return a deduplicated list of the resulting PURLs (in + `input_purls`) and a list of tuples of each pair of the original input PURL + and the normalized PURL (in `normalized_purls`). + """ + input_purls = [] + normalized_purls = [] + if unique: + for purl in purls: + input_purl = purl + purl = purl.strip() + purl = re.split("[@,?,#,]+", purl)[0] + normalized_purl = purl + normalized_purls.append((input_purl, normalized_purl)) + if normalized_purl not in input_purls: + input_purls.append(normalized_purl) + else: + input_purls = purls - return input_purl, normalized_purl + return input_purls, normalized_purls def construct_headers( @@ -148,6 +190,7 @@ def construct_headers( head=None, normalized_purls=None, unique=None, + purl_warnings=None, ): """ Return a list comprising the `headers` content of the dictionary output. @@ -185,47 +228,39 @@ def construct_headers( headers_content["options"] = options headers_content["purls"] = purls - if command_name == "metadata" and unique: - for purl in normalized_purls: - if purl[0] != purl[1]: - warnings.append(f"input PURL: '{purl[0]}' normalized to '{purl[1]}'") + if (command_name in ["metadata", "urls", "validate", "versions"]) and unique: + for input_purl, normalized_purl in normalized_purls: + if input_purl != normalized_purl: + warnings.append( + f"input PURL: '{input_purl}' normalized to '{normalized_purl}'" + ) for purl in purls: if not purl: continue - # `metadata` warnings: - metadata_purl = check_metadata_purl(purl) - - if command_name == "metadata" and metadata_purl == "not_valid": - warnings.append(f"'{purl}' not valid") - continue - - if command_name == "metadata" and metadata_purl == "valid_but_not_supported": - warnings.append(f"'{purl}' not supported with `metadata` command") - continue - - if command_name == "metadata" and metadata_purl == "not_in_upstream_repo": - warnings.append(f"'{purl}' does not exist in the upstream repo") - continue - - # `urls` warnings: - urls_purl = check_urls_purl(purl) - - if command_name == "urls" and urls_purl == "not_valid": - warnings.append(f"'{purl}' not valid") - continue - - if command_name == "urls" and urls_purl == "valid_but_not_supported": - warnings.append(f"'{purl}' not supported with `urls` command") - continue + warning_text = { + "error_fetching_purl": f"'error fetching {purl}'", + "validation_error": f"'{purl}' encountered a validation error", + "not_valid": f"'{purl}' not valid", + "valid_but_not_supported": f"'{purl}' not supported with `{command_name}` command", + "valid_but_not_fully_supported": f"'{purl}' not fully supported with `urls` command", + "not_in_upstream_repo": f"'{purl}' does not exist in the upstream repo", + } - if command_name == "urls" and urls_purl == "valid_but_not_fully_supported": - warnings.append(f"'{purl}' not fully supported with `urls` command") + if command_name in ["metadata", "urls", "validate", "versions"]: + purl_warning = purl_warnings.get(purl, None) + if purl_warning: + warning = warning_text[purl_warning] + 
warnings.append(warning) + print(warning) + continue - if command_name == "urls" and urls_purl == "not_in_upstream_repo": - warnings.append(f"'{purl}' does not exist in the upstream repo") - continue + log_file = Path(LOG_FILE_LOCATION) + if log_file.is_file(): + with open(log_file, "r") as f: + for line in f: + errors.append(line) headers_content["errors"] = errors headers_content["warnings"] = warnings @@ -234,40 +269,6 @@ def construct_headers( return headers -def check_metadata_purl(purl): - """ - Return a variable identifying the message for printing to the console by - get_metadata_details() if (1) the input PURL is invalid, (2) its type is not - supported by `metadata` or (3) its existence was not validated (e.g., - "does not exist in the upstream repo"). - - This message will also be reported by construct_headers() in the - `warnings` field of the `header` section of the JSON object returned by - the `metadata` command. - """ - results = check_existence(purl) - - if results["valid"] == False: - return "not_valid" - - # This is manually constructed from a visual inspection of fetchcode/package.py. - metadata_supported_ecosystems = [ - "bitbucket", - "cargo", - "github", - "npm", - "pypi", - "rubygems", - ] - metadata_purl = PackageURL.from_string(purl) - - if metadata_purl.type not in metadata_supported_ecosystems: - return "valid_but_not_supported" - - if results["exists"] == False: - return "not_in_upstream_repo" - - @purlcli.command(name="urls") @click.option( "--purl", @@ -301,7 +302,6 @@ def check_metadata_purl(purl): required=False, help="Validate each URL's existence with a head request.", ) -# We're passing `unique` but it's not yet fully implemented here or in the `urls` tests. def get_urls(purls, output, file, unique, head): """ Given one or more PURLs, for each PURL, return a list of all known URLs @@ -315,28 +315,27 @@ def get_urls(purls, output, file, unique, head): context = click.get_current_context() command_name = context.command.name - urls_info = get_urls_details(purls, output, file, head, command_name) + urls_info = get_urls_details(purls, output, file, unique, head, command_name) json.dump(urls_info, output, indent=4) -def get_urls_details(purls, output, file, head, command_name): +def get_urls_details(purls, output, file, unique, head, command_name): """ Return a dictionary containing URLs for each PURL in the `purls` input list. `check_urls_purl()` will print an error message to the console (also displayed in the JSON output) when necessary. """ urls_details = {} - urls_details["headers"] = construct_headers( - purls=purls, - output=output, - file=file, - head=head, - command_name=command_name, - ) - + urls_details["headers"] = [] urls_details["packages"] = [] - for purl in purls: + urls_warnings = {} + + input_purls, normalized_purls = normalize_purls(purls, unique) + + clear_log_file() + + for purl in input_purls: url_detail = {} url_detail["purl"] = purl @@ -344,23 +343,19 @@ def get_urls_details(purls, output, file, head, command_name): if not purl: continue - urls_purl = check_urls_purl(purl) - - # Print warnings to terminal. 
- if command_name == "urls" and urls_purl == "not_valid": - print(f"'{purl}' not valid") - continue + purl_status = check_urls_purl(purl) - if command_name == "urls" and urls_purl == "valid_but_not_supported": - print(f"'{purl}' not supported with `urls` command") + if command_name == "urls" and purl_status in [ + "validation_error", + "not_valid", + "valid_but_not_supported", + "not_in_upstream_repo", + ]: + urls_warnings[purl] = purl_status continue - if command_name == "urls" and urls_purl == "valid_but_not_fully_supported": - print(f"'{purl}' not fully supported with `urls` command") - - if command_name == "urls" and urls_purl == "not_in_upstream_repo": - print(f"'{purl}' does not exist in the upstream repo") - continue + if command_name == "urls" and purl_status in ["valid_but_not_fully_supported"]: + urls_warnings[purl] = purl_status # Add the URLs. url_purl = PackageURL.from_string(purl) @@ -411,6 +406,17 @@ def get_urls_details(purls, output, file, head, command_name): urls_details["packages"].append(url_detail) + urls_details["headers"] = construct_headers( + purls=purls, + output=output, + file=file, + head=head, + command_name=command_name, + normalized_purls=normalized_purls, + unique=unique, + purl_warnings=urls_warnings, + ) + return urls_details @@ -447,7 +453,10 @@ def check_urls_purl(purl): or its type is not supported (or not fully supported) by `urls`, or it does not exist in the upstream repo. """ - results = check_existence(purl) + check_validation = validate_purl(purl) + if check_validation is None: + return "validation_error" + results = check_validation if results["valid"] == False: return "not_valid" @@ -505,7 +514,6 @@ def check_urls_purl(purl): return "valid_but_not_fully_supported" -# Not yet converted to a SCTK-like data structure. @purlcli.command(name="validate") @click.option( "--purl", @@ -527,66 +535,147 @@ def check_urls_purl(purl): required=False, help="Read a list of PURLs from a FILE, one per line.", ) -def validate(purls, output, file): [email protected]( + "--unique", + is_flag=True, + required=False, + help="Return data only for unique PURLs.", +) +def validate(purls, output, file, unique): """ - Check the syntax of one or more PURLs. + Check the syntax and upstream repo status of one or more PURLs. """ check_for_duplicate_input_sources(purls, file) if file: purls = file.read().splitlines(False) - validated_purls = validate_purls(purls) + context = click.get_current_context() + command_name = context.command.name + + validated_purls = get_validate_details(purls, output, file, unique, command_name) json.dump(validated_purls, output, indent=4) -def validate_purls(purls): - api_query = "https://public.purldb.io/api/validate/" - validated_purls = [] - for purl in purls: +def get_validate_details(purls, output, file, unique, command_name): + """ + Return a dictionary containing validation data for each PURL in the `purls` + input list. 
+ """ + validate_details = {} + validate_details["headers"] = [] + + validate_warnings = {} + + input_purls, normalized_purls = normalize_purls(purls, unique) + + validate_details["packages"] = [] + + clear_log_file() + + for purl in input_purls: purl = purl.strip() if not purl: continue - request_body = {"purl": purl, "check_existence": True} - response = requests.get(api_query, params=request_body) - results = response.json() - validated_purls.append(results) - return validated_purls + validated_purl = check_validate_purl(purl) + if command_name == "urls" and validated_purl in [ + "validation_error", + "not_valid", + "valid_but_not_supported", + "not_in_upstream_repo", + ]: + validate_warnings[purl] = validated_purl + continue + + if validated_purl: + validate_details["packages"].append(validate_purl(purl)) -def check_existence(purl): + validate_details["headers"] = construct_headers( + purls=purls, + output=output, + file=file, + command_name=command_name, + normalized_purls=normalized_purls, + unique=unique, + purl_warnings=validate_warnings, + ) + + return validate_details + + +def check_validate_purl(purl): """ - Return a JSON object containing data regarding the validity of the input PURL. + As applicable, return a variable indicating that the input PURL is + valid/invalid or does not exist in the upstream repo. """ + check_validation = validate_purl(purl) + if check_validation is None: + return "validation_error" + results = check_validation - # Based on packagedb.package_managers VERSION_API_CLASSES_BY_PACKAGE_TYPE - # -- and supported by testing the command -- it appears that the `validate` - # command `check_existence` check supports the following PURL types: + if results["valid"] == False: + return "not_valid" - # validate_supported_ecosystems = [ - # "cargo", - # "composer", - # "deb", - # "gem", - # "golang", - # "hex", - # "maven", - # "npm", - # "nuget", - # "pypi", - # ] + if results["exists"] == False: + return "not_in_upstream_repo" + + if results["exists"] == True: + return check_validation + + +def validate_purl(purl): + """ + Return a JSON object containing data from the PurlDB `validate` endpoint + regarding the validity of the input PURL. 
+ + Based on packagedb.package_managers VERSION_API_CLASSES_BY_PACKAGE_TYPE + and packagedb/api.py class PurlValidateViewSet(viewsets.ViewSet) + -- and supported by testing the command -- it appears that the `validate` + command `check_existence` parameter supports the following PURL types: + + "cargo", + "composer", + "deb", + "gem", + "golang", + "hex", + "maven", + "npm", + "nuget", + "pypi", + """ + logger = logging.getLogger(__name__) api_query = "https://public.purldb.io/api/validate/" - purl = purl.strip() request_body = {"purl": purl, "check_existence": True} - response = requests.get(api_query, params=request_body) - results = response.json() - return results + try: + response = requests.get(api_query, params=request_body).json() + + except json.decoder.JSONDecodeError as e: + + print(f"validate_purl(): json.decoder.JSONDecodeError for '{purl}': {e}") + + logging.basicConfig( + filename=LOG_FILE_LOCATION, + level=logging.ERROR, + format="%(levelname)s - %(message)s", + filemode="w", + ) + + logger.error(f"validate_purl(): json.decoder.JSONDecodeError for '{purl}': {e}") + + except Exception as e: + print(f"'validate' endpoint error for '{purl}': {e}") + + else: + if response is None: + print(f"'{purl}' -- response.status_code for None = {response.status_code}") + return response -# Not yet converted to a SCTK-like data structure. @purlcli.command(name="versions") @click.option( "--purl", @@ -608,12 +697,15 @@ def check_existence(purl): required=False, help="Read a list of PURLs from a FILE, one per line.", ) -def get_versions(purls, output, file): [email protected]( + "--unique", + is_flag=True, + required=False, + help="Return data only for unique PURLs.", +) +def get_versions(purls, output, file, unique): """ Given one or more PURLs, return a list of all known versions for each PURL. - - Version information is not needed in submitted PURLs and if included will - be removed before processing. """ check_for_duplicate_input_sources(purls, file) @@ -623,64 +715,84 @@ def get_versions(purls, output, file): context = click.get_current_context() command_name = context.command.name - purl_versions = list_versions(purls, output, file, command_name) + purl_versions = get_versions_details(purls, output, file, unique, command_name) json.dump(purl_versions, output, indent=4) -# construct_headers() has not yet been implemented for this `versions` command -# -- or for the `validate` command. -def list_versions(purls, output, file, command_name): +def get_versions_details(purls, output, file, unique, command_name): """ Return a list of dictionaries containing version-related data for each PURL in the `purls` input list. `check_versions_purl()` will print an error message to the console (also displayed in the JSON output) when necessary. 
""" - purl_versions = [] - for purl in purls: - purl_data = {} - purl_data["purl"] = purl - purl_data["versions"] = [] + versions_details = {} + versions_details["headers"] = [] + versions_details["packages"] = [] + + versions_warnings = {} + input_purls, normalized_purls = normalize_purls(purls, unique) + + clear_log_file() + + for purl in input_purls: purl = purl.strip() if not purl: continue - versions_purl = check_versions_purl(purl) - - if command_name == "versions" and versions_purl == "not_valid": - print(f"'{purl}' not valid") - continue + purl_data = {} + purl_data["purl"] = purl + purl_data["versions"] = [] - if command_name == "versions" and versions_purl == "valid_but_not_supported": - print(f"'{purl}' not supported with `versions` command") - continue + versions_purl = check_versions_purl(purl) - if command_name == "versions" and versions_purl == "not_in_upstream_repo": - print(f"'{purl}' does not exist in the upstream repo") + if command_name == "versions" and versions_purl: + versions_warnings[purl] = versions_purl continue - for package_version_object in list(versions(purl)): + for package_version in list(versions(purl)): purl_version_data = {} - purl_version = package_version_object.to_dict()["value"] - nested_purl = purl + "@" + f"{purl_version}" + purl_version = package_version.value + + # We use `versions()` from fetchcode/package_versions.py, which + # keeps the version (if any) of the input PURL in its output, so + # "pkg:pypi/[email protected]" is returned as + # "pkg:pypi/[email protected]@0.1.0", "pkg:pypi/[email protected]@0.2.0" + # etc. Thus, we remove any string starting with `@` first. + raw_purl = purl = re.split("[@,]+", purl)[0] + nested_purl = raw_purl + "@" + f"{purl_version}" purl_version_data["purl"] = nested_purl purl_version_data["version"] = f"{purl_version}" - purl_version_data["release_date"] = ( - f'{package_version_object.to_dict()["release_date"]}' - ) + purl_version_data["release_date"] = f"{package_version.release_date}" purl_data["versions"].append(purl_version_data) - purl_versions.append(purl_data) + versions_details["packages"].append(purl_data) - return purl_versions + versions_details["headers"] = construct_headers( + purls=purls, + output=output, + file=file, + command_name=command_name, + normalized_purls=normalized_purls, + unique=unique, + purl_warnings=versions_warnings, + ) + + return versions_details def check_versions_purl(purl): """ - Return a message for printing to the console if the input PURL is invalid, - its type is not supported by `versions` or its existence was not validated. + Return a variable identifying the message for printing to the console by + get_versions_details() if (1) the input PURL is invalid, (2) its type is not + supported by `versions` or (3) its existence was not validated (e.g., + "does not exist in the upstream repo"). + + This message will also be reported by construct_headers() in the + `warnings` field of the `header` section of the JSON object returned by + the `versions` command. 
Note for dev purposes: SUPPORTED_ECOSYSTEMS (imported from fetchcode.package_versions) comprises the following types: @@ -699,13 +811,15 @@ def check_versions_purl(purl): "pypi", ] """ - results = check_existence(purl) + check_validation = validate_purl(purl) + if check_validation is None: + return "validation_error" + results = check_validation if results["valid"] == False: return "not_valid" supported = SUPPORTED_ECOSYSTEMS - versions_purl = PackageURL.from_string(purl) if versions_purl.type not in supported: @@ -729,5 +843,12 @@ def check_for_duplicate_input_sources(purls, file): raise click.UsageError("Use either purls or file.") +def clear_log_file(): + log_file = Path(LOG_FILE_LOCATION) + + if log_file.is_file(): + os.remove(log_file) + + if __name__ == "__main__": purlcli()
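For orientation, the refactor above routes every command's validity check through one helper, `validate_purl()`, which queries the public PurlDB `validate` endpoint and returns `None` on malformed JSON so callers can map it to the `"validation_error"` status. Here is a minimal sketch of that call in isolation, with the logging setup omitted; `validate_purl_sketch` is a hypothetical name, the endpoint URL and parameters are copied from the patch, and running it requires network access to public.purldb.io.

```python
import json

import requests


def validate_purl_sketch(purl):
    """Return the PurlDB validation payload for `purl`, or None on bad JSON."""
    api_query = "https://public.purldb.io/api/validate/"
    request_body = {"purl": purl, "check_existence": True}
    try:
        return requests.get(api_query, params=request_body).json()
    except json.decoder.JSONDecodeError:
        # The patched validate_purl() also logs this case to LOG_FILE_LOCATION;
        # callers translate None into the "validation_error" status string.
        return None


if __name__ == "__main__":
    # Expected shape (per the patch's usage): {"valid": ..., "exists": ...}
    print(validate_purl_sketch("pkg:pypi/[email protected]"))
```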
Create PURL services CLI tool and library

To best support using various PURL-based services, I would like to have a command client tool and library as a client API that can expose these services for integration elsewhere.
@pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.)

In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint.

- Is that an example of a service you want this CLI tool/library to handle?
- Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlsx` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook?

@pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services?

@pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include.
2024-02-24T03:04:49
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-289
c919b7c4e70f5ab29ea996eb175d2f40119d129f
diff --git a/minecode/visitors/conan.py b/minecode/visitors/conan.py new file mode 100644 index 00000000..15ec2678 --- /dev/null +++ b/minecode/visitors/conan.py @@ -0,0 +1,159 @@ +# +# Copyright (c) nexB Inc. and others. All rights reserved. +# purldb is a trademark of nexB Inc. +# SPDX-License-Identifier: Apache-2.0 +# See http://www.apache.org/licenses/LICENSE-2.0 for the license text. +# See https://github.com/nexB/purldb for support or download. +# See https://aboutcode.org for more information about nexB OSS projects. +# + + +import logging + +import requests +import saneyaml +from packagedcode.conan import ConanFileHandler +from packageurl import PackageURL + +from minecode import priority_router +from packagedb.models import PackageContentType + +""" +Collect Conan packages from Conan Central. +""" + +logger = logging.getLogger(__name__) +handler = logging.StreamHandler() +logger.addHandler(handler) +logger.setLevel(logging.INFO) + + +def get_yaml_response(url): + """ + Fetch YAML content from the url and return it as a dictionary. + """ + try: + response = requests.get(url) + response.raise_for_status() + content = response.content.decode("utf-8") + return saneyaml.load(content) + except requests.exceptions.HTTPError as err: + logger.error(f"HTTP error occurred: {err}") + + +def get_conan_recipe(name, version): + """ + Return the contents of the `conanfile.py` and `conandata.yml` file for + the conan package described by name and version string. + """ + base_index_url = ( + "https://raw.githubusercontent.com/conan-io/" + "conan-center-index/master/recipes/" + ) + + conan_central_config_url = f"{base_index_url}/{name}/config.yml" + config = get_yaml_response(conan_central_config_url) + if not config: + return None, None + + versions = config.get("versions", {}) + recipe_location = versions.get(version, {}) + folder = recipe_location.get("folder") + + folder = recipe_location.get("folder") + if not folder: + logger.error(f"No folder found for version {version} of package {name}") + return None, None + + conanfile_py_url = f"{base_index_url}/{name}/{folder}/conanfile.py" + conandata_yml_url = f"{base_index_url}/{name}/{folder}/conandata.yml" + + conandata = get_yaml_response(conandata_yml_url) + + try: + response = requests.get(conanfile_py_url) + response.raise_for_status() + conanfile = response.text + except requests.exceptions.HTTPError as err: + logger.error( + f"HTTP error occurred while fetching conanfile.py for {name} {version}: {err}" + ) + conanfile = None + + return conanfile, conandata + + +def get_download_info(conandata, version): + """ + Return download_url and SHA256 hash from `conandata.yml`. + """ + sources = conandata.get("sources", {}) + pkg_data = sources.get(version, {}) + + download_url = pkg_data.get("url") + sha256 = pkg_data.get("sha256") + + if isinstance(download_url, list): + download_url = download_url[0] + + return download_url, sha256 + + +def map_conan_package(package_url): + """ + Add a conan `package_url` to the PackageDB. 
+ + Return an error string if any errors are encountered during the process + """ + from minecode.model_utils import add_package_to_scan_queue + from minecode.model_utils import merge_or_create_package + + conanfile, conandata = get_conan_recipe( + name=package_url.name, + version=package_url.version, + ) + + download_url, sha256 = get_download_info(conandata, package_url.version) + + if not conanfile: + error = f"Package does not exist on conan central: {package_url}" + logger.error(error) + return error + if not download_url: + error = f"Package download_url does not exist on conan central: {package_url}" + logger.error(error) + return error + + package = ConanFileHandler._parse(conan_recipe=conanfile) + package.extra_data["package_content"] = PackageContentType.SOURCE_ARCHIVE + package.version = package_url.version + package.download_url = download_url + package.sha256 = sha256 + + db_package, _, _, error = merge_or_create_package(package, visit_level=0) + + # Submit package for scanning + if db_package: + add_package_to_scan_queue(db_package) + + return error + + +@priority_router.route("pkg:conan/.*") +def process_request(purl_str): + """ + Process `priority_resource_uri` containing a conan Package URL (PURL) as a + URI. + + This involves obtaining Package information for the PURL from + https://github.com/conan-io/conan-center-index and using it to create a new + PackageDB entry. The package is then added to the scan queue afterwards. + """ + package_url = PackageURL.from_string(purl_str) + if not package_url.version: + return + + error_msg = map_conan_package(package_url) + + if error_msg: + return error_msg
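The core of this visitor is a two-step lookup against the conan-center-index repository: `config.yml` maps a version to its recipe folder, and that folder's `conandata.yml` maps the version to a source archive URL and sha256. The standalone sketch below retraces `get_conan_recipe()` plus `get_download_info()` under those assumptions; `demo_lookup` is a hypothetical name, and the zlib/1.3.1 example is illustrative since versions rotate in and out of the index.

```python
import requests
import saneyaml

BASE = ("https://raw.githubusercontent.com/conan-io/"
        "conan-center-index/master/recipes")


def demo_lookup(name, version):
    # Step 1: config.yml maps each version to the folder holding its recipe.
    config = saneyaml.load(requests.get(f"{BASE}/{name}/config.yml").text)
    folder = config.get("versions", {}).get(version, {}).get("folder")
    if not folder:
        return None, None
    # Step 2: conandata.yml maps each version to its source URL and checksum.
    conandata = saneyaml.load(
        requests.get(f"{BASE}/{name}/{folder}/conandata.yml").text)
    source = conandata.get("sources", {}).get(version, {})
    return source.get("url"), source.get("sha256")


if __name__ == "__main__":
    print(demo_lookup("zlib", "1.3.1"))
```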
Index conan packages

Conan is a good source of C/C++ packages. See https://github.com/conan-io/conan-center-index/
2024-02-13T17:25:44
0.0
[]
[]
aboutcode-org/purldb
aboutcode-org__purldb-281
21c24f4a47a03c2f47a8661c23f5330a0ecf10ab
diff --git a/purldb-toolkit/src/purldb_toolkit/purlcli.py b/purldb-toolkit/src/purldb_toolkit/purlcli.py index ac6a0772..8c50ec18 100644 --- a/purldb-toolkit/src/purldb_toolkit/purlcli.py +++ b/purldb-toolkit/src/purldb_toolkit/purlcli.py @@ -1,10 +1,15 @@ import json +import re +from importlib.metadata import version import click import requests from fetchcode.package import info from fetchcode.package_versions import SUPPORTED_ECOSYSTEMS, versions from packageurl import PackageURL +from packageurl.contrib import purl2url + +from packagedb.package_managers import VERSION_API_CLASSES_BY_PACKAGE_TYPE @click.group() @@ -14,7 +19,7 @@ def purlcli(): """ [email protected](name="validate") [email protected](name="metadata") @click.option( "--purl", "purls", @@ -27,7 +32,7 @@ def purlcli(): type=click.File(mode="w", encoding="utf-8"), required=True, default="-", - help="Write validation output as JSON to FILE.", + help="Write meta output as JSON to FILE.", ) @click.option( "--file", @@ -35,39 +40,226 @@ def purlcli(): required=False, help="Read a list of PURLs from a FILE, one per line.", ) -def validate(purls, output, file): [email protected]( + "--unique", + is_flag=True, + required=False, + help="Return data only for unique PURLs.", +) +def get_metadata(purls, output, file, unique): """ - Check the syntax of one or more PURLs. + Given one or more PURLs, for each PURL, return a mapping of metadata + fetched from the fetchcode package.py info() function. """ - if purls and file: - raise click.UsageError("Use either purls or file but not both.") - - if not (purls or file): - raise click.UsageError("Use either purls or file.") + check_for_duplicate_input_sources(purls, file) if file: purls = file.read().splitlines(False) - validated_purls = validate_purls(purls) - json.dump(validated_purls, output, indent=4) + context = click.get_current_context() + command_name = context.command.name + metadata_info = get_metadata_details(purls, output, file, unique, command_name) + json.dump(metadata_info, output, indent=4) + + +def get_metadata_details(purls, output, file, unique, command_name): + """ + Return a dictionary containing metadata for each PURL in the `purls` input + list. `check_metadata_purl()` will print an error message to the console + (also displayed in the JSON output) when necessary. 
+ """ + metadata_details = {} + metadata_details["headers"] = [] + metadata_details["packages"] = [] + + normalized_purls = [] + input_purls = [] + if unique: + for purl in purls: + purl, normalized_purl = normalize_purl(purl) + normalized_purls.append((purl, normalized_purl)) + if normalized_purl not in input_purls: + input_purls.append(normalized_purl) + else: + input_purls = purls + + for purl in input_purls: + if not purl: + continue + + metadata_purl = check_metadata_purl(purl) + + if command_name == "metadata" and metadata_purl == "not_valid": + print(f"'{purl}' not valid") + continue + + if command_name == "metadata" and metadata_purl == "valid_but_not_supported": + print(f"'{purl}' not supported with `metadata` command") + continue + + if command_name == "metadata" and metadata_purl == "not_in_upstream_repo": + print(f"'{purl}' does not exist in the upstream repo") + continue + + for release in list(info(purl)): + release_detail = release.to_dict() + release_detail.move_to_end("purl", last=False) + metadata_details["packages"].append(release_detail) + + metadata_details["headers"] = construct_headers( + purls=purls, + output=output, + file=file, + command_name=command_name, + normalized_purls=normalized_purls, + unique=unique, + ) + + return metadata_details + + +def normalize_purl(purl): + """ + Remove substrings that start with the '@', '?' or '#' separators. + """ + input_purl = purl + purl = purl.strip() + purl = re.split("[@,?,#,]+", purl)[0] + normalized_purl = purl + + return input_purl, normalized_purl + + +def construct_headers( + purls=None, + output=None, + file=None, + command_name=None, + head=None, + normalized_purls=None, + unique=None, +): + """ + Return a list comprising the `headers` content of the dictionary output. + """ + headers = [] + headers_content = {} + options = {} + errors = [] + warnings = [] + + context_purls = [p for p in purls] + context_file = file + context_file_name = None + if context_file: + context_file_name = context_file.name + + headers_content["tool_name"] = "purlcli" + headers_content["tool_version"] = version("purldb_toolkit") + + options["command"] = command_name + options["--purl"] = context_purls + options["--file"] = context_file_name + + if head: + options["--head"] = True + + if unique: + options["--unique"] = True + + if isinstance(output, str): + options["--output"] = output + else: + options["--output"] = output.name + + headers_content["options"] = options + headers_content["purls"] = purls + + if command_name == "metadata" and unique: + for purl in normalized_purls: + if purl[0] != purl[1]: + warnings.append(f"input PURL: '{purl[0]}' normalized to '{purl[1]}'") -def validate_purls(purls): - api_query = "https://public.purldb.io/api/validate/" - validated_purls = [] for purl in purls: - purl = purl.strip() if not purl: continue - request_body = {"purl": purl, "check_existence": True} - response = requests.get(api_query, params=request_body) - results = response.json() - validated_purls.append(results) - return validated_purls + # `metadata` warnings: + metadata_purl = check_metadata_purl(purl) + if command_name == "metadata" and metadata_purl == "not_valid": + warnings.append(f"'{purl}' not valid") + continue [email protected](name="versions") + if command_name == "metadata" and metadata_purl == "valid_but_not_supported": + warnings.append(f"'{purl}' not supported with `metadata` command") + continue + + if command_name == "metadata" and metadata_purl == "not_in_upstream_repo": + warnings.append(f"'{purl}' does not exist in the 
upstream repo") + continue + + # `urls` warnings: + urls_purl = check_urls_purl(purl) + + if command_name == "urls" and urls_purl == "not_valid": + warnings.append(f"'{purl}' not valid") + continue + + if command_name == "urls" and urls_purl == "valid_but_not_supported": + warnings.append(f"'{purl}' not supported with `urls` command") + continue + + if command_name == "urls" and urls_purl == "valid_but_not_fully_supported": + warnings.append(f"'{purl}' not fully supported with `urls` command") + + if command_name == "urls" and urls_purl == "not_in_upstream_repo": + warnings.append(f"'{purl}' does not exist in the upstream repo") + continue + + headers_content["errors"] = errors + headers_content["warnings"] = warnings + headers.append(headers_content) + + return headers + + +def check_metadata_purl(purl): + """ + Return a variable identifying the message for printing to the console by + get_metadata_details() if (1) the input PURL is invalid, (2) its type is not + supported by `metadata` or (3) its existence was not validated (e.g., + "does not exist in the upstream repo"). + + This message will also be reported by construct_headers() in the + `warnings` field of the `header` section of the JSON object returned by + the `metadata` command. + """ + results = check_existence(purl) + + if results["valid"] == False: + return "not_valid" + + # This is manually constructed from a visual inspection of fetchcode/package.py. + metadata_supported_ecosystems = [ + "bitbucket", + "cargo", + "github", + "npm", + "pypi", + "rubygems", + ] + metadata_purl = PackageURL.from_string(purl) + + if metadata_purl.type not in metadata_supported_ecosystems: + return "valid_but_not_supported" + + if results["exists"] == False: + return "not_in_upstream_repo" + + [email protected](name="urls") @click.option( "--purl", "purls", @@ -80,7 +272,7 @@ def validate_purls(purls): type=click.File(mode="w", encoding="utf-8"), required=True, default="-", - help="Write versions output as JSON to FILE.", + help="Write urls output as JSON to FILE.", ) @click.option( "--file", @@ -88,62 +280,224 @@ def validate_purls(purls): required=False, help="Read a list of PURLs from a FILE, one per line.", ) -def get_versions(purls, output, file): [email protected]( + "--unique", + is_flag=True, + required=False, + help="Return data only for unique PURLs.", +) [email protected]( + "--head", + is_flag=True, + required=False, + help="Validate each URL's existence with a head request.", +) +# We're passing `unique` but it's not yet fully implemented here or in the `urls` tests. +def get_urls(purls, output, file, unique, head): """ - Given one or more PURLs, return a list of all known versions for each PURL. + Given one or more PURLs, for each PURL, return a list of all known URLs + fetched from the packageurl-python purl2url.py code. 
""" - if purls and file: - raise click.UsageError("Use either purls or file but not both.") - - if not (purls or file): - raise click.UsageError("Use either purls or file.") + check_for_duplicate_input_sources(purls, file) if file: purls = file.read().splitlines(False) - purl_versions = list_versions(purls) - json.dump(purl_versions, output, indent=4) + context = click.get_current_context() + command_name = context.command.name + urls_info = get_urls_details(purls, output, file, head, command_name) + json.dump(urls_info, output, indent=4) -def list_versions(purls): + +def get_urls_details(purls, output, file, head, command_name): """ - Return a list of dictionaries containing version-related data for each PURL - in the `purls` input list. `check_versions_purl()` will print an error - message to the console when necessary. + Return a dictionary containing URLs for each PURL in the `purls` input + list. `check_urls_purl()` will print an error message to the console + (also displayed in the JSON output) when necessary. """ - purl_versions = [] + urls_details = {} + urls_details["headers"] = construct_headers( + purls=purls, + output=output, + file=file, + head=head, + command_name=command_name, + ) + + urls_details["packages"] = [] + for purl in purls: - purl_data = {} - purl_data["purl"] = purl - purl_data["versions"] = [] + url_detail = {} + url_detail["purl"] = purl purl = purl.strip() if not purl: continue - if check_versions_purl(purl): - print(check_versions_purl(purl)) + urls_purl = check_urls_purl(purl) + + # Print warnings to terminal. + if command_name == "urls" and urls_purl == "not_valid": + print(f"'{purl}' not valid") continue - for package_version_object in list(versions(purl)): - purl_version_data = {} - purl_version = package_version_object.to_dict()["value"] - nested_purl = purl + "@" + f"{purl_version}" + if command_name == "urls" and urls_purl == "valid_but_not_supported": + print(f"'{purl}' not supported with `urls` command") + continue - purl_version_data["purl"] = nested_purl - purl_version_data["version"] = f"{purl_version}" - purl_version_data[ - "release_date" - ] = f'{package_version_object.to_dict()["release_date"]}' + if command_name == "urls" and urls_purl == "valid_but_not_fully_supported": + print(f"'{purl}' not fully supported with `urls` command") - purl_data["versions"].append(purl_version_data) + if command_name == "urls" and urls_purl == "not_in_upstream_repo": + print(f"'{purl}' does not exist in the upstream repo") + continue - purl_versions.append(purl_data) + # Add the URLs. + url_purl = PackageURL.from_string(purl) - return purl_versions + url_detail["download_url"] = {"url": purl2url.get_download_url(purl)} + + url_detail["inferred_urls"] = [ + {"url": inferred} for inferred in purl2url.get_inferred_urls(purl) + ] + + url_detail["repo_download_url"] = {"url": purl2url.get_repo_download_url(purl)} + + url_detail["repo_download_url_by_package_type"] = { + "url": purl2url.get_repo_download_url_by_package_type( + url_purl.type, url_purl.namespace, url_purl.name, url_purl.version + ) + } + + url_detail["repo_url"] = {"url": purl2url.get_repo_url(purl)} + + url_detail["url"] = {"url": purl2url.get_url(purl)} + + # Add the http status code data. 
+ url_list = [ + "download_url", + # "inferred_urls" has to be handled separately because it has a nested list + "repo_download_url", + "repo_download_url_by_package_type", + "repo_url", + "url", + ] + if head: + for purlcli_url in url_list: + url_detail[purlcli_url]["get_request_status_code"] = make_head_request( + url_detail[purlcli_url]["url"] + ).get("get_request") + url_detail[purlcli_url]["head_request_status_code"] = make_head_request( + url_detail[purlcli_url]["url"] + ).get("head_request") + + for inferred_url in url_detail["inferred_urls"]: + inferred_url["get_request_status_code"] = make_head_request( + inferred_url["url"] + ).get("get_request") + inferred_url["head_request_status_code"] = make_head_request( + inferred_url["url"] + ).get("head_request") + + urls_details["packages"].append(url_detail) + + return urls_details + + +def make_head_request(url_detail): + """ + Make a head request (and as noted below, a get request as well, at least + for now) and return a dictionary containing status code data for the + incoming PURL URL. + + For now, this returns both get and head request status code data so the + user can evaluate -- requests.get() and requests.head() sometimes return + different status codes and sometimes return inaccurate codes, e.g., a + 404 when the URL actually exists. + """ + if url_detail is None: + return {"get_request": "N/A", "head_request": "N/A"} + + get_response = requests.get(url_detail) + get_request_status_code = get_response.status_code + + head_response = requests.head(url_detail) + head_request_status_code = head_response.status_code + + # Return a dictionary for readability. + return { + "get_request": get_request_status_code, + "head_request": head_request_status_code, + } + + +def check_urls_purl(purl): + """ + If applicable, return a variable indicating that the input PURL is invalid, + or its type is not supported (or not fully supported) by `urls`, or it + does not exist in the upstream repo. + """ + results = check_existence(purl) + + if results["valid"] == False: + return "not_valid" + + # Both of these lists are manually constructed from a visual inspection of + # packageurl-python/src/packageurl/contrib/purl2url.py. + + # This list applies to the purl2url.py `repo_url` section: + urls_supported_ecosystems_repo_url = [ + "bitbucket", + "cargo", + "gem", + "github", + "gitlab", + "golang", + "hackage", + "npm", + "nuget", + "pypi", + "rubygems", + ] + + # This list applies to the purl2url.py `download_url` section: + urls_supported_ecosystems_download_url = [ + "bitbucket", + "cargo", + "gem", + "github", + "gitlab", + "hackage", + "npm", + "nuget", + "rubygems", + ] + urls_purl = PackageURL.from_string(purl) [email protected](name="meta") + if ( + urls_purl.type not in urls_supported_ecosystems_repo_url + and urls_purl.type not in urls_supported_ecosystems_download_url + ): + return "valid_but_not_supported" + + if results["exists"] == False: + return "not_in_upstream_repo" + + if ( + urls_purl.type in urls_supported_ecosystems_repo_url + and urls_purl.type not in urls_supported_ecosystems_download_url + ) or ( + urls_purl.type not in urls_supported_ecosystems_repo_url + and urls_purl.type in urls_supported_ecosystems_download_url + ): + + return "valid_but_not_fully_supported" + + +# Not yet converted to a SCTK-like data structure. 
[email protected](name="validate") @click.option( "--purl", "purls", @@ -156,7 +510,7 @@ def list_versions(purls): type=click.File(mode="w", encoding="utf-8"), required=True, default="-", - help="Write versions output as JSON to FILE.", + help="Write validation output as JSON to FILE.", ) @click.option( "--file", @@ -164,57 +518,56 @@ def list_versions(purls): required=False, help="Read a list of PURLs from a FILE, one per line.", ) -def get_meta(purls, output, file): +def validate(purls, output, file): """ - Given one or more PURLs, return a mapping of metadata fetched from the API for each PURL. + Check the syntax of one or more PURLs. """ - if purls and file: - raise click.UsageError("Use either purls or file but not both.") - - if not (purls or file): - raise click.UsageError("Use either purls or file.") + check_for_duplicate_input_sources(purls, file) if file: purls = file.read().splitlines(False) - meta_info = get_meta_details(purls) - json.dump(meta_info, output, indent=4) + validated_purls = validate_purls(purls) + json.dump(validated_purls, output, indent=4) -def get_meta_details(purls): - """ - Return a list of dictionaries containing metadata for each PURL - in the `purls` input list. `check_meta_purl()` will print an error - message to the console when necessary. - """ - meta_details = [] - +def validate_purls(purls): + api_query = "https://public.purldb.io/api/validate/" + validated_purls = [] for purl in purls: - meta_detail = {} - meta_detail["purl"] = purl - meta_detail["metadata"] = [] - purl = purl.strip() if not purl: continue + request_body = {"purl": purl, "check_existence": True} + response = requests.get(api_query, params=request_body) + results = response.json() + validated_purls.append(results) - if check_meta_purl(purl): - print(check_meta_purl(purl)) - continue - - releases = [] - for release in list(info(purl)): - meta_detail["metadata"].append(release.to_dict()) - - meta_details.append(meta_detail) - - return meta_details + return validated_purls def check_existence(purl): """ Return a JSON object containing data regarding the validity of the input PURL. """ + + # Based on packagedb.package_managers VERSION_API_CLASSES_BY_PACKAGE_TYPE + # -- and supported by testing the command -- it appears that the `validate` + # command `check_existence` check supports the following PURL types: + + # validate_supported_ecosystems = [ + # "cargo", + # "composer", + # "deb", + # "gem", + # "golang", + # "hex", + # "maven", + # "npm", + # "nuget", + # "pypi", + # ] + api_query = "https://public.purldb.io/api/validate/" purl = purl.strip() request_body = {"purl": purl, "check_existence": True} @@ -224,53 +577,147 @@ def check_existence(purl): return results +# Not yet converted to a SCTK-like data structure. [email protected](name="versions") [email protected]( + "--purl", + "purls", + multiple=True, + required=False, + help="PackageURL or PURL.", +) [email protected]( + "--output", + type=click.File(mode="w", encoding="utf-8"), + required=True, + default="-", + help="Write versions output as JSON to FILE.", +) [email protected]( + "--file", + type=click.File(mode="r", encoding="utf-8"), + required=False, + help="Read a list of PURLs from a FILE, one per line.", +) +def get_versions(purls, output, file): + """ + Given one or more PURLs, return a list of all known versions for each PURL. + + Version information is not needed in submitted PURLs and if included will + be removed before processing. 
+ """ + check_for_duplicate_input_sources(purls, file) + + if file: + purls = file.read().splitlines(False) + + context = click.get_current_context() + command_name = context.command.name + + purl_versions = list_versions(purls, output, file, command_name) + json.dump(purl_versions, output, indent=4) + + +# construct_headers() has not yet been implemented for this `versions` command +# -- or for the `validate` command. +def list_versions(purls, output, file, command_name): + """ + Return a list of dictionaries containing version-related data for each PURL + in the `purls` input list. `check_versions_purl()` will print an error + message to the console (also displayed in the JSON output) when necessary. + """ + purl_versions = [] + for purl in purls: + purl_data = {} + purl_data["purl"] = purl + purl_data["versions"] = [] + + purl = purl.strip() + if not purl: + continue + + versions_purl = check_versions_purl(purl) + + if command_name == "versions" and versions_purl == "not_valid": + print(f"'{purl}' not valid") + continue + + if command_name == "versions" and versions_purl == "valid_but_not_supported": + print(f"'{purl}' not supported with `versions` command") + continue + + if command_name == "versions" and versions_purl == "not_in_upstream_repo": + print(f"'{purl}' does not exist in the upstream repo") + continue + + for package_version_object in list(versions(purl)): + purl_version_data = {} + purl_version = package_version_object.to_dict()["value"] + nested_purl = purl + "@" + f"{purl_version}" + + purl_version_data["purl"] = nested_purl + purl_version_data["version"] = f"{purl_version}" + purl_version_data["release_date"] = ( + f'{package_version_object.to_dict()["release_date"]}' + ) + + purl_data["versions"].append(purl_version_data) + + purl_versions.append(purl_data) + + return purl_versions + + def check_versions_purl(purl): """ Return a message for printing to the console if the input PURL is invalid, its type is not supported by `versions` or its existence was not validated. + + Note for dev purposes: SUPPORTED_ECOSYSTEMS (imported from + fetchcode.package_versions) comprises the following types: + [ + "cargo", + "composer", + "conan", + "deb", + "gem", + "github", + "golang", + "hex", + "maven", + "npm", + "nuget", + "pypi", + ] """ results = check_existence(purl) if results["valid"] == False: - return f"There was an error with your '{purl}' query -- the Package URL you provided is not valid." + return "not_valid" supported = SUPPORTED_ECOSYSTEMS versions_purl = PackageURL.from_string(purl) if versions_purl.type not in supported: - return f"The provided PackageURL '{purl}' is valid, but `versions` is not supported for this package type." + return "valid_but_not_supported" - if results["exists"] != True: - return f"There was an error with your '{purl}' query. Make sure that '{purl}' actually exists in the relevant repository." + if results["exists"] == False: + return "not_in_upstream_repo" + # This handles the conflict between the `validate`` endpoint (treats + # both "pkg:deb/debian/2ping" and "pkg:deb/2ping" as valid) and + # fetchcode.package_versions versions() (returns None for "pkg:deb/2ping"). + if versions(purl) is None: + return "valid_but_not_supported" -def check_meta_purl(purl): - """ - Return a message for printing to the console if the input PURL is invalid, - its type is not supported by `meta` or its existence was not validated. 
- """ - results = check_existence(purl) - - if results["valid"] == False: - return f"There was an error with your '{purl}' query -- the Package URL you provided is not valid." - - # This is manually constructed from a visual inspection of fetchcode/package.py: - SUPPORTED_ECOSYSTEMS = [ - "cargo", - "npm", - "pypi", - "github", - "bitbucket", - "rubygems", - ] - meta_purl = PackageURL.from_string(purl) - if meta_purl.type not in SUPPORTED_ECOSYSTEMS: - return f"The provided PackageURL '{purl}' is valid, but `meta` is not supported for this package type." +def check_for_duplicate_input_sources(purls, file): + if purls and file: + raise click.UsageError("Use either purls or file but not both.") - if results["exists"] != True: - return f"There was an error with your '{purl}' query. Make sure that '{purl}' actually exists in the relevant repository." + if not (purls or file): + raise click.UsageError("Use either purls or file.") if __name__ == "__main__": diff --git a/requirements.txt b/requirements.txt index 9f58e271..39c6b4d2 100644 --- a/requirements.txt +++ b/requirements.txt @@ -74,7 +74,7 @@ natsort==8.2.0 normality==2.5.0 openpyxl==3.1.2 packagedcode-msitools==0.101.210706 -packageurl-python==0.11.2 +packageurl-python==0.13.4 packaging==23.2 packvers==21.5 parameter-expansion-patched==0.3.1 diff --git a/setup.cfg b/setup.cfg index 30566b83..fb55cee1 100644 --- a/setup.cfg +++ b/setup.cfg @@ -51,7 +51,7 @@ install_requires = jawa == 2.2.0 markdown == 3.5.1 natsort == 8.2.0 - packageurl-python == 0.11.2 + packageurl-python == 0.13.4 psycopg[binary]==3.1.17 PyGithub == 1.56 reppy2 == 0.3.6
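Beyond the shell, the new commands can be exercised programmatically with click's test runner, which is a convenient way to see the `headers`/`packages` output structure this patch introduces. This is a hedged sketch: it assumes `purldb_toolkit` (and the `packagedb` module it now imports) is installed, that there is network access to public.purldb.io and the upstream package APIs, and an illustrative PURL.

```python
import json

from click.testing import CliRunner

from purldb_toolkit.purlcli import purlcli

runner = CliRunner()
# --output defaults to "-" (stdout), which CliRunner captures in result.output.
result = runner.invoke(purlcli, ["metadata", "--purl", "pkg:pypi/fetchcode"])

data = json.loads(result.output)
print(data["headers"][0]["tool_name"])           # "purlcli"
print(data["headers"][0]["options"]["command"])  # "metadata"
print(len(data["packages"]))                     # metadata mappings fetched
```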
Create PURL services CLI tool and library

To best support using various PURL-based services, I would like to have a command client tool and library as a client API that can expose these services for integration elsewhere.
@pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. @pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. 
@pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. @pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. 
@pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. @pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlxs` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include. 
@pombredanne @AyanSinhaMahapatra I've looked at the SCTK [fetch_thirdparty.py](https://github.com/nexB/scancode-toolkit/blob/develop/etc/scripts/fetch_thirdparty.py) example, but I have to admit that I don't understand what a complete command for that utility would look like or how it might apply to the current issue. Examples of how to run the `fetch_thirdparty` example would be helpful for me to explore how that works. (I've looked but found no documentation/examples for that utility.) In addition, the description above of the current issue seems rather vague. What does it mean to create a client API tool to access PURL services? Examples of PURL services we want to handle, and some descriptions of user input and output, would be particularly helpful. The only exposure I've had so far with the PurlDB is the experimentation I've done since last Friday evening with the new `validate` endpoint. - Is that an example of a service you want this CLI tool/library to handle? - Would we want, for example, a command-line function that enables a user to input -- through the command line or perhaps a `.txt` or `.xlsx` file -- the 1+ PURLs that he or she wants to vet with the new `validate` endpoint? Maybe with options of terminal output and a `.xlsx` workbook? @pombredanne Now that we've (initially) addressed the `validate` endpoint with our new CLI, what additional "services" do you want me to focus on, and how can I identify them and begin to understand how users use those services? @pombredanne As noted last week, I'm blocked for now from additional CLI work until we can add the missing details to your initial description of this issue, i.e., ID the additional services, commands and use cases we want to include.
2024-01-31T01:36:09
0.0
[]
[]
pysal/legendgram
pysal__legendgram-10
3680c63f6f599fd697211c4e7a4c54718f746637
diff --git a/legendgram/legendgram.py b/legendgram/legendgram.py index 163d9a5..42b2546 100644 --- a/legendgram/legendgram.py +++ b/legendgram/legendgram.py @@ -1,13 +1,17 @@ from .util import make_location as _make_location import numpy as np +from matplotlib.colors import Colormap +from palettable.palette import Palette + + def legendgram(f, ax, y, breaks, pal, bins=50, clip=None, loc = 'lower left', legend_size=(.27,.2), frameon=False, tick_params = None): ''' Add a histogram in a choropleth with colors aligned with map ... - + Arguments --------- f : Figure @@ -17,7 +21,7 @@ def legendgram(f, ax, y, breaks, pal, bins=50, clip=None, breaks : list Sequence with breaks for each class (i.e. boundary values for colors) - pal : palettable colormap + pal : palettable colormap or matplotlib colormap clip : tuple [Optional. Default=None] If a tuple, clips the X axis of the histogram to the bounds provided. @@ -33,16 +37,20 @@ def legendgram(f, ax, y, breaks, pal, bins=50, clip=None, Returns ------- - axis containing the legendgram. + axis containing the legendgram. ''' k = len(breaks) - assert k == pal.number, "provided number of classes does not match number of colors in palette." histpos = _make_location(ax, loc, legend_size=legend_size) - histax = f.add_axes(histpos) N, bins, patches = histax.hist(y, bins=bins, color='0.1') #--- - pl = pal.get_mpl_colormap() + if isinstance(pal, Palette): + assert k == pal.number, "provided number of classes does not match number of colors in palette." + pl = pal.get_mpl_colormap() + elif isinstance(pal, Colormap): + pl = pal + else: + raise ValueError("pal needs to be either palettable colormap or matplotlib colormap, got {}".format(type(pal))) bucket_breaks = [0]+[np.searchsorted(bins, i) for i in breaks] for c in range(k): for b in range(bucket_breaks[c], bucket_breaks[c+1]):
matplotlib cmap alongside palettable Hi, is there a reason why only palettable color maps are allowed? I have made a tiny change to allow direct use of matplotlib cmaps to give it a bit more flexibility. For example, I am using `seaborn.light_palette` to get the color palette for a choropleth, and it works nicely with the changes in https://github.com/martinfleis/legendgram/tree/mpl_cmap. Did I miss anything? Can I prepare a PR implementing this in a more robust way?
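For context, a minimal usage sketch of what the proposed change enables. The `legendgram` signature is the one shown in the diff above; the `seaborn` call is the one the reporter mentions, and the sample data and breaks are made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from legendgram import legendgram

# seaborn.light_palette(..., as_cmap=True) returns a matplotlib Colormap,
# which the patched legendgram accepts directly instead of requiring a
# palettable Palette.
cmap = sns.light_palette("seagreen", as_cmap=True)

f, ax = plt.subplots()
y = np.random.normal(size=1000)        # values shown in the choropleth
breaks = [-2.0, -1.0, 0.0, 1.0, 2.0]   # class boundaries for the colors
legendgram(f, ax, y, breaks, pal=cmap, loc="lower left")
```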
2020-04-09T15:35:19
0.0
[]
[]
hfg-gmuend/zoomaker
hfg-gmuend__zoomaker-5
81e1a3a4d226254584b1ee0f919aa38af4506092
diff --git a/README.md b/README.md index a7a1bfa..9aafec0 100644 --- a/README.md +++ b/README.md @@ -8,14 +8,13 @@ Zoomaker is a command-line tool that helps install AI models, git repositories a - **single source of truth**: all resources are neatly defined in the `zoo.yaml` file - **freeze versions**: know exactly which revision of a resources is installed at any time - **only download once**: optimize bandwidth and cache your models locally -- **optimize disk usage**: downloaded models are symlinked to the installation folder (small files <5MB are duplicate) +- **optimize disk usage**: downloaded models are cached ## 😻 TL;DR 1. Install Zoomaker `pip install zoomaker` 2. Define your resources in the `zoo.yaml` file 3. Run `zoomaker install` to install them -(on Windows: `zoomaker install --no-symlinks`, see [hints](https://github.com/hfg-gmuend/zoomaker#%EF%B8%8F-limitations-on-windows) below) ## 📦 Installation @@ -121,7 +120,7 @@ scripts: start_webui: | conda activate automatic1111 cd /home/$(whoami)/stable-diffusion-webui/ - ./webui.sh --theme dark --xformers --no-half + ./webui.sh --xformers --no-half ``` </details> @@ -138,7 +137,7 @@ resources: rename_to: analog-diffusion-v1.safetensors ``` Please note: -The resource `type: download` can be seen as the last resort. Currently there is no caching or symlinking of web downloads. Recommended to avoid it :) +The resource `type: download` can be seen as the last resort. Existing web downloads are skipped, but no other caching. It is recommended to avoid web downloads :) </details> ## 🧮 zoo.yaml Structure @@ -171,18 +170,9 @@ All commands are run from the root of the project, where also your `zoo.yaml` fi | `zoomaker run <script_name>` | Run CLI scripts as defined in `zoo.yaml` | | `zoomaker --help` | Get help using the Zoomaker CLI | | `zoomaker --version` | Show current Zoomaker version | -| `zoomaker --no-symlinks` | Do not use symlinks for installing resources | -## ⚠️ Limitations on Windows -Symlinks are not widely supported on Windows, which limits the caching mechanism used by Zoomaker. To work around this limitation, you can disable symlinks by using the `--no-symlinks` flag with the install command: -```bash -zoomaker install --no-symlinks -``` - -This will still use the cache directory for checking if files are already cached, but if not, they will be downloaded and duplicated directly to the installation directory, saving bandwidth but increasing disk usage. Alternatively, you can use the [Windows Subsystem for Linux "WSL"](https://docs.microsoft.com/en-us/windows/wsl/install-win10) (don't forget to [enable developer mode](https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development)) or run Zoomaker as an administrator to enable symlink support on Windows. - -## 🤗 Hugging Face Access Token +## 🤗 Hugging Face Access Token and Custom Cache Location You might be asked for a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) during `zoomaker install`. Some resources on Hugging Face require accepting the terms of use of the model. You can set your access token by running this command in a terminal. The command `huggingface-cli` is automatically shipped alongside zoomaker. @@ -190,6 +180,13 @@ You might be asked for a [Hugging Face Access Token](https://huggingface.co/docs huggingface-cli login ``` +You can specify a custom cache location by setting the HF_HOME environment variable. The default cache location is `~/.cache/huggingface/`. 
+ +```bash +export HF_HOME=/path/to/your/cache +zoomaker install +``` + ## 🙏 Acknowledgements - Most of the internal heavy lifting is done be the [huggingface_hub library](https://huggingface.co/docs/huggingface_hub/guides/download) by Hugging Face. Thanks! - "Zoomaker Safari Hacker Cat" cover image by Alia Tasler, based on this [OpenMoji](https://openmoji.org/library/emoji-1F431-200D-1F4BB/). Thanks! diff --git a/pyproject.toml b/pyproject.toml index 13803c9..b993aa0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "zoomaker" -version = "0.7.0" +version = "0.8.0" description = "Zoomaker - Friendly house keeping for your AI model zoo and related resources." authors = ["Benedikt Groß"] readme = "README.md" @@ -10,8 +10,8 @@ repository = "https://github.com/hfg-gmuend/zoomaker" [tool.poetry.dependencies] python = "^3.9" -huggingface-hub = "^0.14.0" -GitPython = "^3.1.31" +huggingface-hub = "^0.23.0" +GitPython = "^3.1.43" [tool.poetry.scripts] zoomaker = 'zoomaker:main' diff --git a/zoomaker.py b/zoomaker.py index 129bd05..aa1cf61 100644 --- a/zoomaker.py +++ b/zoomaker.py @@ -35,12 +35,10 @@ def _check_yaml(self): if type not in ["huggingface", "git", "download"]: raise Exception(f"❌ Unknown resource type: {type}") - def install(self, no_symlinks: bool = False): + def install(self): print(f"👋 ===> {self.yaml_file} <===") print(f"name: {self.data.get('name', 'N/A')}") print(f"version: {self.data.get('version', 'N/A')}\n") - if no_symlinks: - print(f"⛔️ installing resources without symlinks ...") print(f"👇 installing resources ...") counter = 0; for group, resources in self.data["resources"].items(): @@ -61,7 +59,8 @@ def install(self, no_symlinks: bool = False): if type == "huggingface": repo_id = "/".join(src.split("/")[0:2]) repo_filepath = "/".join(src.split("/")[2:]) - downloaded = hf_hub_download(repo_id=repo_id, filename=repo_filepath, local_dir=install_to, revision=revision, local_dir_use_symlinks=False if no_symlinks else "auto") + downloaded = hf_hub_download(repo_id=repo_id, filename=repo_filepath, local_dir=install_to, revision=revision) + print(f"\t size: {self._get_file_size(downloaded)}") if rename_to: self._rename_file(downloaded, os.path.join(install_to, rename_to)) # Git @@ -90,11 +89,17 @@ def install(self, no_symlinks: bool = False): # Download else: filename = self._slugify(os.path.basename(src)) - downloaded = self._download_file(src, os.path.join(install_to, filename)) - if rename_to: - self._rename_file(downloaded, os.path.join(install_to, rename_to)) - if revision: - print(f"\trevision is not supported for download. Ignoring revision: {revision}") + destination = os.path.join(install_to, filename) + destinationRenamed = os.path.join(install_to, rename_to) + if os.path.exists(destination) or os.path.exists(destinationRenamed): + print(f"\t ℹ️ Skipping download: '{filename}' already exists") + else: + downloaded = self._download_file(src, destination) + print(f"\t size: {self._get_file_size(downloaded)}") + if rename_to: + self._rename_file(downloaded, destinationRenamed) + if revision: + print(f"\trevision is not supported for download. 
Ignoring revision: {revision}") print(f"\n✅ {counter} resources installed.") @@ -123,6 +128,17 @@ def _rename_file(self, src, dest): else: os.rename(src, dest) + def _get_file_size(self, path): + size = os.stat(path).st_size + if size < 1024: + return f"{size} bytes" + elif size < pow(1024, 2): + return f"{round(size/1024, 2)} KB" + elif size < pow(1024, 3): + return f"{round(size/(pow(1024,2)), 2)} MB" + elif size < pow(1024, 4): + return f"{round(size/(pow(1024,3)), 2)} GB" + def _download_file(self, url, filename): response = requests.get(url, stream=True) total_size_in_bytes = int(response.headers.get('content-length', 0)) @@ -160,12 +176,11 @@ def main(): parser = argparse.ArgumentParser(description="Install models, git repos and run scripts defined in the zoo.yaml file.") parser.add_argument("command", nargs="?", choices=["install", "run"], help="The command to execute.") parser.add_argument("script", nargs="?", help="The script name to execute.") - parser.add_argument("--no-symlinks", action='store_true', help="Do not create symlinks for the installed resources.") - parser.add_argument("-v", "--version", action='version', help="The current version of the zoomaker.", version="0.7.0") + parser.add_argument("-v", "--version", action="version", help="The current version of the zoomaker.", version="0.8.0") args = parser.parse_args() if args.command == "install": - Zoomaker("zoo.yaml").install(args.no_symlinks) + Zoomaker("zoo.yaml").install() elif args.command == "run": Zoomaker("zoo.yaml").run(args.script) else:
error running tests on windows for RC 0.8 when running the tests on windows I get this error ``` python tests.py . downloading: 100%|███████████████████████████████████████████| 1.78k/1.78k [00:00<00:00, 1.78MiB/s] . downloading: 164kiB [00:00, 43.0MiB/s] learned_embeds.bin: 100%|█████████████████████████████████████████████████████████████████| 3.82k/3.82k [00:00<?, ?B/s] .E.. ====================================================================== ERROR: test_install_huggingface_cached (__main__.ZoomakerTestCase.test_install_huggingface_cached) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\user\Documents\projects\zoomaker\test\tests.py", line 88, in test_install_huggingface_cached self.assertTrue(os.path.exists(filepath)) ^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: exists: path should be string, bytes, os.PathLike or integer, not NoneType ``` It looks like [try_to_load_from_cache](https://github.com/hfg-gmuend/zoomaker/blob/6d8615274e43bda8c14213a24b8e1ff29c3044a6/test/tests.py#L83) returns None which should mean that no cache is found according to the docs: > Returns: `Optional[str]` or `_CACHED_NO_EXIST`: Will return `None` if the file was not cached. Otherwise: - The exact path to the cached file if it's found in the cache - A special value `_CACHED_NO_EXIST` if the file does not exist at the given commit hash and this fact was cached.
2024-05-27T11:47:56
0.0
[]
[]
libAtoms/QUIP
libAtoms__QUIP-649
694dc9c0d53243e43b7e1af2a2800c3bdf5fd847
diff --git a/.github/workflows/build-wheels.yml b/.github/workflows/build-wheels.yml index b9a2bc7fa..f4bc089d7 100644 --- a/.github/workflows/build-wheels.yml +++ b/.github/workflows/build-wheels.yml @@ -24,7 +24,7 @@ jobs: with: python-version: '3.x' - name: Install cibuildwheel - run: python -m pip install cibuildwheel==2.9.0 + run: python -m pip install cibuildwheel==2.19.1 - name: Build wheels run: | diff --git a/quippy/Makefile b/quippy/Makefile index 1be91e152..41968c538 100644 --- a/quippy/Makefile +++ b/quippy/Makefile @@ -86,7 +86,7 @@ F2PY_LINK_ARGS = $(shell ${PYTHON} -c 'import sys; print(" ".join([arg for arg i all: build f90wrap: - ${PIP} install ${QUIPPY_INSTALL_OPTS} "f90wrap>=0.2.6" + ${PIP} install ${QUIPPY_INSTALL_OPTS} "f90wrap>=0.2.6,<0.2.14" clean: rm -f _quippy${EXT_SUFFIX} ${F90WRAP_FILES} ${WRAP_FPP_FILES} diff --git a/quippy/setup.py b/quippy/setup.py index 1df87ec9c..26da8441c 100644 --- a/quippy/setup.py +++ b/quippy/setup.py @@ -90,7 +90,7 @@ def build_extension(self, ext): 'Programming Language :: Python :: 3.9', ], url='https://github.com/libAtoms/QUIP', - install_requires=['numpy>=1.13', 'f90wrap>=0.2.6', 'ase>=3.17.0'], + install_requires=['numpy>=1.13,<2', 'f90wrap>=0.2.6', 'ase>=3.17.0'], python_requires=">=3.6", packages=['quippy'], package_data={'quippy': package_data_files},
Python 3.12 support I am wondering if there is a chance for a Python 3.12 version.
Yup, we'd be happy to support this. Would you be willing to make a PR changing the wheel building script? https://github.com/libAtoms/QUIP/blob/public/.github/workflows/build-wheels.yml Once this works I'll be happy to tag a new release to PyPI. Most likely just needs a version bump of cibuildwheel [here](https://github.com/libAtoms/QUIP/blob/public/.github/workflows/build-wheels.yml#L27)
2024-07-01T20:29:52
0.0
[]
[]
libAtoms/QUIP
libAtoms__QUIP-553
bebe7183b7c2b8c89aaabdbcab68abdb27f9c2f1
diff --git a/quippy/quippy/descriptors.py b/quippy/quippy/descriptors.py index 5251996808..75e5d49286 100644 --- a/quippy/quippy/descriptors.py +++ b/quippy/quippy/descriptors.py @@ -5,7 +5,7 @@ # HQ X Portions of this code were written by # HQ X Tamas K. Stenczel, James Kermode # HQ X -# HQ X Copyright 2019 +# HQ X Copyright 2019-2023 # HQ X # HQ X These portions of the source code are released under the GNU General # HQ X Public License, version 2, http://www.gnu.org/copyleft/gpl.html @@ -214,6 +214,9 @@ def calc(self, at, grad=False, args_str=None, cutoff=None, **calc_kwargs): descriptor_out[key] = np.concatenate([x.T for x in val]) elif key == "grad_data": descriptor_out[key] = np.transpose(np.concatenate(val, axis=2), axes=(2, 1, 0)) + elif key == "ii": + # copy cross-index into Numpy arrays (f90wrap exposes pointers) + descriptor_out[key] = [np.copy(x) for x in val] if "ii" in descriptor_out.keys(): grad_index_0based = []
quippy-ase conversion problems in gradients Gradient indices ['ii'] and gradient mask ['has_data'] returned in quippy are incorrect. The minimum working example attached compares quippy output to quip output. This was tested on both Linux and MacOS. [Archive.zip](https://github.com/libAtoms/QUIP/files/9249900/Archive.zip)
Hi Ioan, quippy uses a different convention (when returning the gradient data) to the Fortran descriptor container. The key fields are `grad_data`, `grad_index_0based` and, if you need it, `pos`. In your example: - `data["grad_data"].shape` is `(2655, 3, 1501)` which means that in total there are 2655 derivatives of descriptors w.r.t. neighbours, in 3 Cartesian dimensions, and the descriptor size is 1501. - `data["grad_index_0based"].shape` is `(2655, 2)` and contains information of all of the gradients in the rows: column 0 is the descriptor index (in case of SOAP, this coincides with the atom index, but it doesn't have to), column 1 is the index of the neighbouring atom the descriptor was differentiated with respect to. - `data["pos"].shape` is `(2655, 3)`, again, mapping onto all gradients and containing the difference vectors, which is useful if you want to assemble virials. I'd like to understand why `"ii"` is corrupted, but I think you should, for the time being, regard it as superfluous, since you have all the info you need. Hope this helps. Also note that the header of the `quip` output is not consistent with the lines below - sorry for that: the `derivative w.r.t. Cartesian dimension` column is missing, although it is in the right order. I'll fix that as well. @jameskermode having said that, it is unclear why and how the `"ii"` field gets corrupted - since the `"grad_index_0based"` field is filled from the very same `"ii"` field in `quippy/quippy/descriptors.py`. Unless `descriptor_data_mono_to_dict` only copies the pointer address and somehow the target (probably at the Fortran level?) gets deallocated. The other fields are all passed through some `np.` operation which means that they get their own storage freshly allocated and data is explicitly copied. @imagdau let me know what you need exactly. If you don't need `"ii"` or the other stuff, then we should remove the keys in `descriptors.py`, otherwise I am happy to keep them in some form, but we need to make sure they stay. > Unless descriptor_data_mono_to_dict only copies the pointer address and somehow the target (probably at the Fortran level?) gets deallocated. The other fields are all passed through some np. operation which means that they get their own storage freshly allocated and data is explicitly copied. Yes, this makes sense - Fortran arrays are exposed to Python directly, not copied.
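To make the convention above concrete, a minimal sketch of consuming the three fields. The shapes are the ones quoted above; `weights` is a hypothetical per-descriptor-component weight vector, as one would have in a linear model:

```python
import numpy as np

def contract_gradients(data, weights):
    """Contract each descriptor derivative with a weight vector of length
    1501 and accumulate the result on the neighbour atom it refers to."""
    index = data["grad_index_0based"]   # shape (2655, 2): (descriptor, neighbour)
    grads = data["grad_data"]           # shape (2655, 3, 1501)
    n_atoms = index[:, 1].max() + 1
    out = np.zeros((n_atoms, 3))
    for (i_desc, i_neigh), dxyz in zip(index, grads):
        # dxyz is the (3, 1501) derivative of descriptor i_desc w.r.t.
        # the three Cartesian coordinates of atom i_neigh
        out[i_neigh] += dxyz @ weights
    return out
```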
2023-02-16T08:46:10
0.0
[]
[]
libAtoms/QUIP
libAtoms__QUIP-436
3d26ddf19b18e1110f12cca72c40eb13647ec916
diff --git a/src/GAP b/src/GAP index 17468dca62..ade605f83d 160000 --- a/src/GAP +++ b/src/GAP @@ -1,1 +1,1 @@ -Subproject commit 17468dca621ff08a01cceb95bf66650e877331ff +Subproject commit ade605f83dbea5a4a2c192a7daa4147b488cb17b diff --git a/src/libAtoms/ScaLAPACK.f95 b/src/libAtoms/ScaLAPACK.f95 index 3d18f670e2..ee82f34187 100644 --- a/src/libAtoms/ScaLAPACK.f95 +++ b/src/libAtoms/ScaLAPACK.f95 @@ -48,7 +48,7 @@ module ScaLAPACK_module private #ifdef SCALAPACK -integer, external :: indxl2g, numroc +integer, external :: ilcm, indxg2p, indxl2g, numroc #endif integer, parameter :: dlen_ = 50 @@ -154,6 +154,20 @@ module ScaLAPACK_module module procedure ScaLAPACK_diag_spinorZ, ScaLAPACK_diag_spinorD end interface diag_spinor +public :: get_lwork_pdgeqrf +interface get_lwork_pdgeqrf + module procedure get_lwork_pdgeqrf_i32o64 + module procedure ScaLAPACK_get_lwork_pdgeqrf + module procedure ScaLAPACK_matrix_get_lwork_pdgeqrf +end interface + +public :: get_lwork_pdormqr +interface get_lwork_pdormqr + module procedure get_lwork_pdormqr_i32o64 + module procedure ScaLAPACK_get_lwork_pdormqr + module procedure ScaLAPACK_matrix_get_lwork_pdormqr +end interface + public :: ScaLAPACK_pdgeqrf_wrapper, ScaLAPACK_pdtrtrs_wrapper, ScaLAPACK_pdormqr_wrapper public :: ScaLAPACK_matrix_QR_solve, ScaLAPACK_to_array1d, ScaLAPACK_to_array2d @@ -1373,7 +1387,6 @@ subroutine ScaLAPACK_pdormqr_wrapper(A_info, A_data, C_info, C_data, tau, work) n = C_info%N_C k = size(tau) - call reallocate(tau, k) ! @fixme circular logic call reallocate(work, 1) call pdormqr('L', 'T', m, n, k, A_data, 1, 1, A_info%desc, & tau, C_data, 1, 1, C_info%desc, work, -1, info) @@ -1461,4 +1474,134 @@ subroutine ScaLAPACK_to_array2d(A_info, A_data, array) #endif end subroutine ScaLAPACK_to_array2d +! returns 64bit minimal work array length for pdgeqrf from 32bit sources +! adapted from documentation of pdgeqrf +function get_lwork_pdgeqrf_i32o64(m, n, ia, ja, mb_a, nb_a, & + myrow, mycol, rsrc_a, csrc_a, nprow, npcol) result(lwork) + integer, intent(in) :: m, n, ia, ja, mb_a, nb_a + integer, intent(in) :: myrow, mycol, rsrc_a, csrc_a, nprow, npcol + integer(idp) :: lwork + + integer :: iarow, iacol, iroff, icoff + integer(idp) :: mp0, nq0, nb64 + + lwork = 0 + +#ifdef SCALAPACK + iroff = mod(ia-1, mb_a) + icoff = mod(ja-1, nb_a) + iarow = indxg2p(ia, mb_a, myrow, rsrc_a, nprow) + iacol = indxg2p(ja, nb_a, mycol, csrc_a, npcol) + mp0 = numroc(m+iroff, mb_a, myrow, iarow, nprow) + nq0 = numroc(n+icoff, nb_a, mycol, iacol, npcol) + + nb64 = int(nb_a, idp) + lwork = nb64 * (mp0 + nq0 + nb64) +#endif +end function get_lwork_pdgeqrf_i32o64 + +function ScaLAPACK_get_lwork_pdgeqrf(this, m, n, mb_a, nb_a) result(lwork) + type(ScaLAPACK), intent(in) :: this + integer, intent(in) :: m, n, mb_a, nb_a + integer(idp) :: lwork + + integer, parameter :: ia = 1, ja = 1, rsrc_a = 0, csrc_a = 0 + + lwork = 0 + +#ifdef SCALAPACK + lwork = get_lwork_pdgeqrf(m, n, ia, ja, mb_a, nb_a, & + this%my_proc_row, this%my_proc_col, rsrc_a, csrc_a, & + this%n_proc_rows, this%n_proc_cols) +#endif +end function ScaLAPACK_get_lwork_pdgeqrf + +function ScaLAPACK_matrix_get_lwork_pdgeqrf(this) result(lwork) + type(Matrix_ScaLAPACK_Info), intent(in) :: this + integer(idp) :: lwork + + lwork = 0 + +#ifdef SCALAPACK + lwork = get_lwork_pdgeqrf(this%ScaLAPACK_obj, this%N_R, this%N_C, this%NB_R, this%NB_C) +#endif +end function ScaLAPACK_matrix_get_lwork_pdgeqrf + +! returns 64bit minimal work array length for pdormqr from 32bit sources +! 
adapted from documentation of pdormqr +function get_lwork_pdormqr_i32o64(side, m, n, ia, ja, mb_a, nb_a, ic, jc, & + mb_c, nb_c, myrow, mycol, rsrc_a, csrc_a, rsrc_c, csrc_c, nprow, npcol) result(lwork) + character :: side + integer, intent(in) :: m, n, ia, ja, mb_a, nb_a + integer, intent(in) :: ic, jc, mb_c, nb_c + integer, intent(in) :: rsrc_a, csrc_a, rsrc_c, csrc_c + integer, intent(in) :: myrow, mycol, nprow, npcol + integer(idp) :: lwork + + integer :: lcm, lcmq + integer :: iarow, iroffa, icoffa, iroffc, icoffc, icrow, iccol + integer(idp) :: npa0, mpc0, nqc0, nr, nb64, lwork1, lwork2 + + lwork = 0 + +#ifdef SCALAPACK + iroffc = mod(ic-1, mb_c) + icoffc = mod(jc-1, nb_c) + icrow = indxg2p(ic, mb_c, myrow, rsrc_c, nprow) + iccol = indxg2p(jc, nb_c, mycol, csrc_c, npcol) + mpc0 = numroc(m+iroffc, mb_c, myrow, icrow, nprow) + nqc0 = numroc(n+icoffc, nb_c, mycol, iccol, npcol) + + nb64 = int(nb_a, idp) + lwork1 = (nb64 * (nb64 - 1)) / 2 + if (side == 'L') then + lwork2 = (nqc0 + mpc0) * nb64 + else if (side == 'R') then + iroffa = mod(ia-1, mb_a) + icoffa = mod(ja-1, nb_a) + iarow = indxg2p(ia, mb_a, myrow, rsrc_a, nprow) + npa0 = numroc(n+iroffa, mb_a, myrow, iarow, nprow) + + lcm = ilcm(nprow, npcol) + lcmq = lcm / npcol + nr = numroc(n+icoffc, nb_a, 0, 0, npcol) + nr = numroc(int(nr, isp), nb_a, 0, 0, lcmq) + nr = max(npa0 + nr, mpc0) + lwork2 = (nqc0 + nr) * nb64 + end if + lwork = max(lwork1, lwork2) + nb64 * nb64 +#endif +end function get_lwork_pdormqr_i32o64 + +function ScaLAPACK_get_lwork_pdormqr(this, side, m, n, mb_a, nb_a, mb_c, nb_c) result(lwork) + type(ScaLAPACK), intent(in) :: this + character :: side + integer, intent(in) :: m, n, mb_a, nb_a, mb_c, nb_c + integer(idp) :: lwork + + integer, parameter :: ia = 1, ja = 1, rsrc_a = 0, csrc_a = 0 + integer, parameter :: ic = 1, jc = 1, rsrc_c = 0, csrc_c = 0 + + lwork = 0 + +#ifdef SCALAPACK + lwork = get_lwork_pdormqr(side, m, n, ia, ja, mb_a, nb_a, ic, jc, mb_c, nb_c, & + this%my_proc_row, this%my_proc_col, rsrc_a, csrc_a, rsrc_c, csrc_c, & + this%n_proc_rows, this%n_proc_cols) +#endif +end function ScaLAPACK_get_lwork_pdormqr + +function ScaLAPACK_matrix_get_lwork_pdormqr(A_info, C_info, side) result(lwork) + type(Matrix_ScaLAPACK_Info), intent(in) :: A_info, C_info + character :: side + integer(idp) :: lwork + + lwork = 0 + +#ifdef SCALAPACK + lwork = get_lwork_pdormqr(A_info%ScaLAPACK_obj, side, C_info%N_R, & + C_info%N_C, A_info%NB_R, A_info%NB_C, A_info%NB_R, A_info%NB_C) +#endif +end function ScaLAPACK_matrix_get_lwork_pdormqr + end module ScaLAPACK_module
Feat: decouple blocksizes Cheat `pdtrtrs` into thinking the blocksizes of A are identical by asserting that there is only a single process column. Change `mpi_blocksize` to `mpi_blocksize_rows` and `mpi_blocksize_cols`. :warning: This is a breaking change wrt. `mpi_blocksize`. Should not be much of a problem as it's an optional argument, only relevant for people who have tried out MPI and set it manually. Depends on https://github.com/libAtoms/GAP/pull/42
2022-05-19T17:48:08
0.0
[]
[]
kootenpv/access_points
kootenpv__access_points-22
7ca2aaed39b966249a30580fd8b6f13612b4ac04
diff --git a/access_points/__init__.py b/access_points/__init__.py index b0040cf..ab7ecdc 100644 --- a/access_points/__init__.py +++ b/access_points/__init__.py @@ -94,22 +94,28 @@ def get_cmd(self): cmd = "airport -s" return path + cmd + # OSX Monterey doesn't output the BSSID unless you `sudo` which means the + # old method using a regexp to match those lines fails. Since the output + # is column-formatted, we can use that instead and it works on both + # Monterey-without-BSSID and pre-Monterey-with-BSSID. def parse_output(self, output): results = [] - # 5 times 2 "letters and/or digits" followed by ":" - # Then one time only 2 "letters and/or digits" - # Use non-capturing groups (?:...) to use {} for amount - # One wrapping group (...) to capture the whole thing - bbsid_re = re.compile("((?:[0-9a-zA-Z]{2}:){5}(?:[0-9a-zA-Z]){2})") security_start_index = False + # First line looks like this (multiple whitespace truncated to fit.) + # `\w+SSID BSSID\w+ RSSI CHANNEL HT CC SECURITY (auth/unicast/group)` + # ` ^ ssid_end_index` + # ` ^ rssi_start_index` + # ` ^ ^ bssid` for line in output.split("\n"): if line.strip().startswith("SSID BSSID"): security_start_index = line.index("SECURITY") + ssid_end_index = line.index("SSID") + 4 + rssi_start_index = line.index("RSSI") elif line and security_start_index and 'IBSS' not in line: try: - ssid = bbsid_re.split(line)[0].strip() - bssid = bbsid_re.findall(line)[0] - rssi = bbsid_re.split(line)[-1].strip().split()[0] + ssid = line[0:ssid_end_index].strip() + bssid = line[ssid_end_index+1:rssi_start_index-1].strip() + rssi = line[rssi_start_index:rssi_start_index+4].strip() security = line[security_start_index:] ap = AccessPoint(ssid, bssid, rssi_to_quality(int(rssi)), security) results.append(ap)
"list index out of range" ``` list index out of range Line: CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) CFInternet19 -90 36 Y -- WPA2(PSK/AES/AES) BTWi-fi -87 11 Y -- NONE BTHub6-C3NH -87 11 Y -- WPA2(PSK/AES/AES) BTWifi-X -84 1 Y -- WPA2(802.1x/AES/AES) SKYDAAPC -82 1 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -61 5,+1 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -62 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) ``` On macOS Monterey, `12.0 Beta (21A5522h)`. `Python 3.6.5 |Anaconda, Inc.| (default, Apr 26 2018, 08:42:37)` (It output one of those chunks for every SSID in the list but I don't think it's useful to provide all of them?)
What exactly was the full traceback of the error? ``` 0%| | 0/1 [00:00<?, ?it/s] 100%|██████████| 1/1 [00:02<00:00, 2.55s/it] 100%|██████████| 1/1 [00:02<00:00, 2.55s/it] Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F 
-76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Please provide the output of the error below this line at github.com/kootenpv/access_points/issues list index out of range Line: CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Output: SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group) CFInternet19 -91 100 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -86 48 Y -- WPA2(PSK/AES/AES) CFInternet19 -81 8 Y -- WPA2(PSK/AES/AES) TALKTALK28850F -76 1 Y -- WPA2(PSK/AES/AES) SKY2E1D5 -70 11 Y -- WPA2(PSK/AES/AES) CFL-6BDJ-2.4G -53 5 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) CFL-6BDJ -64 64 Y -- WPA(PSK/AES,TKIP/TKIP) WPA2(PSK/AES,TKIP/TKIP) Traceback (most recent call last): File "/usr/local/bin/whereami", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.9/site-packages/whereami/__main__.py", line 80, in main learn(args.location, args.num_samples, args.device) File "/usr/local/lib/python3.9/site-packages/whereami/learn.py", line 31, in learn train_model() File "/usr/local/lib/python3.9/site-packages/whereami/pipeline.py", line 22, in train_model raise ValueError("No wifi access points have been found during training") ValueError: No wifi access points have been found during training ```
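For what it's worth, the failure mode is easy to reproduce in isolation: the pre-patch parser located the BSSID with a regexp, and on Monterey without `sudo` that column is simply absent, so the match list is empty. A sketch using the regexp from the old parser and one of the output lines above:

```python
import re

# The regexp the pre-patch parser used to find the BSSID column.
bssid_re = re.compile(r"((?:[0-9a-zA-Z]{2}:){5}(?:[0-9a-zA-Z]){2})")

# On Monterey, `airport -s` omits the BSSID unless run with sudo:
line = "CFInternet19  -91  100  Y  --  WPA2(PSK/AES/AES)"

print(bssid_re.findall(line))       # [] -- nothing MAC-shaped to match
bssid = bssid_re.findall(line)[0]   # IndexError: list index out of range
```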
2022-01-11T17:17:16
0.0
[]
[]
vertica/vertica-python
vertica__vertica-python-538
2dca694918144c83ba044f9c65c9aebe08564687
diff --git a/vertica_python/vertica/cursor.py b/vertica_python/vertica/cursor.py index b5e358e5..a451ccdb 100644 --- a/vertica_python/vertica/cursor.py +++ b/vertica_python/vertica/cursor.py @@ -409,6 +409,10 @@ def nextset(self): # result of a DDL/transaction self.rowcount = -1 return True + elif isinstance(self._message, messages.CopyInResponse): + raise errors.MessageError( + 'Unexpected nextset() state after END_OF_RESULT_RESPONSES: {self._message}\n' + 'HINT: Do you pass multiple COPY statements into Cursor.copy()?') elif isinstance(self._message, messages.ErrorResponse): raise errors.QueryError.from_error_response(self._message, self.operation) else: @@ -458,6 +462,7 @@ def copy(self, sql, data, **kwargs): """ sql = as_text(sql) + self.operation = sql if self.closed(): raise errors.InterfaceError('Cursor is closed') @@ -473,13 +478,11 @@ def copy(self, sql, data, **kwargs): else: raise TypeError("Not valid type of data {0}".format(type(data))) - # TODO: check sql is a valid `COPY FROM STDIN` SQL statement - self._logger.info(u'Execute COPY statement: [{}]'.format(sql)) # Execute a `COPY FROM STDIN` SQL statement self.connection.write(messages.Query(sql)) - buffer_size = kwargs['buffer_size'] if 'buffer_size' in kwargs else DEFAULT_BUFFER_SIZE + self.buffer_size = kwargs.get('buffer_size', DEFAULT_BUFFER_SIZE) while True: message = self.connection.read_message() @@ -490,10 +493,10 @@ def copy(self, sql, data, **kwargs): elif isinstance(message, messages.ReadyForQuery): break elif isinstance(message, messages.CommandComplete): - pass + break elif isinstance(message, messages.CopyInResponse): try: - self._send_copy_data(stream, buffer_size) + self._send_copy_data(stream, self.buffer_size) except Exception as e: # COPY termination: report the cause of failure to the backend self.connection.write(messages.CopyFail(str(e))) @@ -503,8 +506,13 @@ def copy(self, sql, data, **kwargs): # Successful termination for COPY self.connection.write(messages.CopyDone()) + elif isinstance(message, messages.RowDescription): + raise errors.MessageError(f'Unexpected message: {message}\n' + f'HINT: Query for Cursor.copy() should be a `COPY FROM STDIN` SQL statement.' + ' `COPY FROM LOCAL` should be executed with Cursor.execute().\n' + f'SQL: {sql}') else: - raise errors.MessageError('Unexpected message: {0}'.format(message)) + raise errors.MessageError(f'Unexpected message: {message}') if self.error is not None: raise self.error
Unexpected message: RowDescription Anyone saw this before? ``` sql_con.execute(insert_stmt, records) File "/usr/local/lib/python3.11/site-packages/sqlalchemy/future/engine.py", line 280, in execute return self._execute_20( ^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context self._handle_dbapi_exception( File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception util.raise_( File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/compat.py", line 211, in raise_ raise exception File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1890, in _execute_context self.dialect.do_executemany( File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 733, in do_executemany cursor.executemany(statement, parameters) File "/usr/local/lib/python3.11/site-packages/vertica_python/vertica/cursor.py", line 185, in wrap return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/vertica_python/vertica/cursor.py", line 319, in executemany self.copy(copy_statement, data) File "/usr/local/lib/python3.11/site-packages/vertica_python/vertica/cursor.py", line 507, in copy raise errors.MessageError('Unexpected message: {0}'.format(message)) sqlalchemy.exc.InternalError: (vertica_python.errors.MessageError) Unexpected message: RowDescription: [ Column(name=OUTPUT, data_type_oid=6, data_type_name=Integer, schema_name=None, table_name=None, table_oid=0, attribute_number=0, precision=None, scale=None, null_ok=True, is_identity=False, format_code=1, internal_size=8, display_size=20)] ```
@nicolaerosia Would you please provide the Vertica server version, vertica-python version and a reproducer program? @sitingren hello Unfortunately I cannot reproduce, it only happens from time to time. Vertica version: `Vertica Analytic Database v11.1.1-8` vertica-python: `1.3.5` <- I can try `1.3.6` if you think that will help I'm using sqlalchemy on top, ``` insert_stmt = sqlalchemy.insert(MyTable).inline() records = df.to_dict(orient="records") sql_con.execute(insert_stmt, records) ``` Maybe it's related to `.inline()` ? @nicolaerosia I cannot reproduce with just 3 lines of code. Can you provide a full sqlalchemy program with the definition of `MyTable` and sample data in `df`?
2023-11-21T07:57:58
0.0
[]
[]
vertica/vertica-python
vertica__vertica-python-527
be720039f620569bc76da91265bb558495eeb3ba
diff --git a/vertica_python/vertica/cursor.py b/vertica_python/vertica/cursor.py index 06e33057..25dc7c7d 100644 --- a/vertica_python/vertica/cursor.py +++ b/vertica_python/vertica/cursor.py @@ -562,7 +562,8 @@ def flush_to_end_of_result(self): while True: message = self.connection.read_message() - if isinstance(message, END_OF_RESULT_RESPONSES): + if (isinstance(message, messages.ReadyForQuery) or + isinstance(message, END_OF_RESULT_RESPONSES)): self._message = message break
calling cursor.nextset() after cursor.fetchone() can cause query hang hello, while trying to implement a workaround for [this issue](https://github.com/vertica/vertica-python/issues/255) on apache airflow [(cf. this issue](https://github.com/apache/airflow/issues/32993). I've found that the following code: ```python cursor.execute(sql) cursor.fetchall() while cursor.nextset(): cursor.fetchone() ``` causes execution to hang when sql is two insert queries and the second one causes an error, or when the second query is a select whose error is not triggered by the first rows it returns (like when you select col1 / col2 and col2 is 0 for some rows). It seems that in this case the error message is processed neither in fetchone nor in the flush_to_end_of_result function, making the latter wait for an END_OF_RESULT_RESPONSES message that will never come.
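A concrete instance of the scenario described above (hypothetical table name; the division error only surfaces once the offending rows are streamed, i.e. after the early rows have been consumed):

```python
# cursor: an open vertica_python cursor, as in the snippet above.
# Two statements in one call; the second fails part-way through its rows.
sql = """
    SELECT 1;
    SELECT col1 / col2 FROM t;  -- col2 is 0 for some later rows
"""
cursor.execute(sql)
cursor.fetchall()          # first result set completes normally
while cursor.nextset():    # pre-patch: flush_to_end_of_result() only stopped
    cursor.fetchone()      # on END_OF_RESULT_RESPONSES, so it kept reading
                           # past the ErrorResponse/ReadyForQuery and hung
```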
2023-08-07T10:32:56
0.0
[]
[]
Gatewatcher/pytest-vulture
Gatewatcher__pytest-vulture-16
8bba9e6bf9b0a327e1a4ec0e4d4bd559752fb834
diff --git a/README.rst b/README.rst index 7d50e49..eb32cec 100644 --- a/README.rst +++ b/README.rst @@ -1,79 +1,125 @@ pytest vulture -------------- -Run vulture (https://pypi.org/project/vulture/) with pytest to find dead code. +This plugin enables you to run `vulture` (https://pypi.org/project/vulture/) alongside `pytest`, +allowing for dead code detection during your testing process. Sample Usage ============ -.. code-block:: shell - py.test --vulture +To integrate `vulture` with `pytest` and find dead code, use the following commands: -would be the most simple usage and would run vulture for all error messages. +1. **Basic Usage** + Run `vulture` with `pytest` to check for dead code: -.. code-block:: shell + .. code-block:: shell - py.test --vulture --vulture-cfg-file=/test/vulture.ini + pytest --vulture -This would use the vulture with the /test/vulture.ini config path +2. **Custom Configuration** + Specify a custom configuration file path: -Ignoring vulture messages in source code -======================================== + .. code-block:: shell -- ignoring lines : + pytest --vulture --vulture-cfg-file=/path/to/vulture.ini -.. code-block:: python + **Note:** By default, the tool looks for configuration files in the following order: - def test(): - a = 2 # vulture: ignore + - ``pyproject.toml`` + - ``tox.ini`` + - ``vulture.ini`` -- ignoring methods : +Ignoring Vulture Messages +========================= -.. code-block:: python +You can ignore specific warnings from `vulture` directly in the source code. Here’s how: - def test(): # vulture: ignore - pass +- **Ignore Specific Lines:** -- ignoring classes : + .. code-block:: python -.. code-block:: python + def test_function(): + unused_variable = 42 # vulture: ignore - class Test: # vulture: ignore - pass +- **Ignore Entire Methods:** + .. code-block:: python -Config file -============ + def ignored_function(): # vulture: ignore + pass + +- **Ignore Classes:** + + .. code-block:: python + + class IgnoredClass: # vulture: ignore + pass + + + +Configuring with ``pyproject.toml`` +=================================== + +Here’s an example of how to configure `vulture` using ``pyproject.toml``: + +.. code-block:: toml + + [tool.vulture] + # Exclude specific paths (e.g., test directories) + exclude = [ + "*/test/*", + ] + + # Ignore specific files in the `pytest` output (but they are still checked by `vulture`) + ignore = [ + "src/some_ignored_file.py", + ] -The config file (the path can be defined by the --vulture-cfg-file option) can look like this :: + # Ignore specific function or variable names + ignore-names = [ + "deprecated_function", + ] + + # Ignore decorators + ignore-decorators = [ + "@app.route", + "@celery.task", + ] + + # Ignore specific types of messages (e.g., imports) + ignore-types = [ + "import", + ] + + # Define the source path + source-path = "src" + +Configuring with ``.ini`` Config Files +====================================== + +Here’s an example of how to configure `vulture` using an ``.ini`` file: + +.. 
code-block:: ini [vulture] - # completely exclude files for vulture exclude = - */test/* # We usualy exclude tests because tests can cover dead code + */test/* # Usually exclude tests as they may cover dead code - # those file are ignored by pytest, but still computed by vulture ignore = - src/toto.py + src/some_ignored_file.py - # ignoring names in code ignore-names = - delimiter + deprecated_function - # ignoring decorators ignore-decorators = - @application.errorhandler - @application.route - @celery_app.task - @application.app.errorhandler + @app.route + @celery.task - # ignore vulture type of messages ignore-types = attribute variable - Acknowledgements ================ @@ -93,6 +139,11 @@ If you encounter any problems, please file an issue along with a detailed descri Releases ======== +2.2.0 +~~~~~~ + +- Add pyproject.toml support for parameters + 2.0.2 ~~~~~~ diff --git a/coverage.ini b/coverage.ini index 6091f9c..0188d48 100644 --- a/coverage.ini +++ b/coverage.ini @@ -15,3 +15,4 @@ exclude_lines = raise AssertionError raise NotImplementedError if __name__ == (?:'|")__main__(?:'|"): + if TYPE_CHECKING: diff --git a/pylint.ini b/pylint.ini deleted file mode 100644 index cd18a26..0000000 --- a/pylint.ini +++ /dev/null @@ -1,175 +0,0 @@ -[MASTER] -jobs=0 -limit-inference-results=100 -persistent=yes -suggestion-mode=yes -unsafe-load-any-extension=no - - -[MESSAGES CONTROL] -disable= - duplicate-code, - too-few-public-methods, - - -[REPORTS] -evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) -output-format=text -reports=yes -score=yes - - -[REFACTORING] -max-nested-blocks=5 -never-returning-functions=sys.exit - - -[BASIC] - -argument-naming-style=snake_case -argument-rgx=[a-z_][a-z0-9_]{2,30}$ -attr-naming-style=snake_case -attr-rgx=[a-z_][a-z0-9_]{2,50}$ -bad-names=foo, - bar, - foobar, - baz, - toto, - tutu, - tata, - plop, - lol, - ploplol, - pwet -class-attribute-naming-style=any -class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,50}|(__.*__))$ -class-naming-style=PascalCase -class-rgx=[A-Z_][a-zA-Z0-9]+$ -const-naming-style=UPPER_CASE -const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__)|(_logger)|(app(lication)?))$ -docstring-min-length=20 -function-naming-style=snake_case -function-rgx=[a-z_][a-z0-9_]{2,50}$ -good-names=i, - j, - k, - mo, - np, - pd, - fd, - fo, - db, - _ - -include-naming-hint=no -inlinevar-naming-style=any -inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ -method-naming-style=snake_case -method-rgx=[a-z_][a-z0-9_]{2,50}$ -module-naming-style=snake_case -module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ -no-docstring-rgx=^_ -property-classes=abc.abstractproperty -variable-naming-style=snake_case -variable-rgx=[a-z_][a-z0-9_]{2,30}$ - - -[FORMAT] -ignore-long-lines=^\s*(# )?<?https?://\S+>?$ -indent-after-paren=4 -indent-string=' ' -max-line-length=120 -max-module-lines=1000 -no-space-check=trailing-comma, - dict-separator -single-line-class-stmt=no -single-line-if-stmt=no - -[SPELLING] - -max-spelling-suggestions=4 -spelling-store-unknown-words=no - - -[TYPECHECK] -contextmanager-decorators=contextlib.contextmanager -ignore-mixin-members=yes -ignore-none=yes -ignore-on-opaque-inference=yes -ignored-classes=optparse.Values,thread._local,_thread._local -missing-member-hint=yes -missing-member-hint-distance=1 -missing-member-max-choices=1 - -[LOGGING] -logging-format-style=old -logging-modules=logging - - -[VARIABLES] -allow-global-unused-variables=yes -callbacks=cb_, - _cb 
-dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_ -ignored-argument-names=_.*|^ignored_|^unused_ -init-import=no -redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io - - -[STRING] -check-str-concat-over-line-jumps=no - - -[MISCELLANEOUS] -notes=FAIL,TODO,FIXME - - -[SIMILARITIES] -ignore-comments=yes -ignore-docstrings=yes -ignore-imports=no -min-similarity-lines=4 - - -[DEPRECATED_BUILTINS] -bad-functions=map, - filter - - -[DESIGN] - -max-args=5 -max-attributes=7 -max-bool-expr=5 -max-branches=12 -max-locals=15 -max-parents=7 -max-public-methods=20 -max-returns=6 -max-statements=50 -min-public-methods=2 - - -[IMPORTS] -allow-wildcard-with-all=no -analyse-fallback-blocks=no -deprecated-modules=optparse,tkinter.tix -known-third-party=enchant - -[CLASSES] -defining-attr-methods=__init__, - __new__, - setUp -exclude-protected=_asdict, - _fields, - _replace, - _source, - _make - -valid-classmethod-first-arg=cls -valid-metaclass-classmethod-first-arg=cls - - -[EXCEPTIONS] -overgeneral-exceptions=BaseException, - Exception diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000..6460758 --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,37 @@ +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" + +[project] +name = "pytest-vulture" +version = "2.2.0" +description = "A pytest plugin to checks dead code with vulture" +readme = "README.rst" +requires-python = ">=3.7" +license = {text = "MIT"} +authors = [ + {name = "Abadie Moran", email = "[email protected]"}, +] +maintainers = [ + {name = "Abadie Moran", email = "[email protected]"}, +] +classifiers = [ + "Framework :: Pytest", +] +dependencies = [ + "vulture>=2.0,<3.0", + "pytest>=7.0.0", +] + +[project.entry-points.pytest11] +vulture = "pytest_vulture.plugin" + +[tool.mypy] +warn_unused_ignores = true +strict_optional = true +incremental = true +ignore_missing_imports = true +check_untyped_defs = true +warn_no_return = true +warn_return_any = true +no_implicit_optional = true diff --git a/ruff.toml b/ruff.toml new file mode 100644 index 0000000..df1872b --- /dev/null +++ b/ruff.toml @@ -0,0 +1,53 @@ +lint.select = ["ALL"] +lint.unfixable = [ + "T20", # flake8-print, would remove prints + "RUF001", "RUF002", "RUF003", # ambiguous-unicode-character-string, would replace characters unexpectedly +] +target-version = "py37" + +lint.ignore = [ + "D301", + "D400", + "D105", + "D102", + "ANN002", + "ANN003", + "FA100", + "S603", # check for execution of untrusted input + "ANN101", # Missing type annotation for self in method + "ANN102", # Missing type annotation for cls in classmethod + "D205", # 1 blank line required between summary line and description + "FBT002", # boolean default value in function definition + "TRY300", # Consider moving this statement to an `else` block + "COM812", # format conflits + "ISC001", # format conflits + "D107", # docstring in init +] +extend-exclude = [ + "**/.tox/", + "**/.idea/", + "test/examples" +] +line-length = 120 + +[lint.per-file-ignores] +"**/test/*" = [ + "S101", # Use of assert detected. + "S106", # Possible hardcoded password. + "B011", # Do not call assert False since python -O removes these calls. 
+ "ARG001", # Unused function argument (mostly fixtures) + "PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable + "ANN", # flake8-annotations + "PLR0913", # Too many arguments to function call + "SLF001", # Private member accessed + "D103", # docstring +] + +[lint.pydocstyle] +convention = "pep257" + +[lint.mccabe] +max-complexity = 10 + +[lint.isort] +lines-after-imports = 2 \ No newline at end of file diff --git a/setup.cfg b/setup.cfg deleted file mode 100644 index 43ab2dd..0000000 --- a/setup.cfg +++ /dev/null @@ -1,40 +0,0 @@ -[aliases] -test = pytest - -[egg_info] -tag_build = -tag_date = 0 -tag_svn_revision = 0 - -[sdist] -owner = root -group = root -formats = bztar, xztar - -[isort] -known_first_party = -not_skip = - __init__.py, -multi_line_output = 3 -force_grid_wrap = 2 -combine_as_imports = true -combine_star = true -include_trailing_comma = true -lines_after_imports = 2 -lines_between_types = 1 - -[pycodestyle] -; E501 refers to line too long errors, which is already handled by pylint -ignore = E501 - -[mypy] -warn_unused_ignores = true -strict_optional = true -cache_dir = .cache/mypy -incremental = true -ignore_missing_imports = true -check_untyped_defs = true -show_none_errors = true -warn_no_return = true -warn_return_any = true -no_implicit_optional = true diff --git a/setup.py b/setup.py deleted file mode 100644 index 0b167ba..0000000 --- a/setup.py +++ /dev/null @@ -1,73 +0,0 @@ -# -*- coding: utf-8 -*- -"""pytest-vulture -============= -Plugin for py.test for doing vulture tests -""" -import os - -from pathlib import Path - -from setuptools import ( - find_packages, - setup, -) - - -IS_TEST = bool(os.environ.get("IS_UNIT_TEST", 0)) - -install_requires = [ - 'vulture <3.0, >2.0 ', -] -if not IS_TEST: - install_requires.append("pytest >= 7.0.0") - -test_requires = [ - 'pylint==2.14.5', - 'pytest==7.1.2', - 'pytest-runner==5.2', - 'pytest-cov==2.10.1', - 'pytest-pycodestyle==2.3.0', - 'pytest-pylint==0.18.0', - 'pytest-isort==3.0.0', - 'pytest-mccabe==2.0', - 'pytest-mypy==0.9.1', -] - -dev_requires = [ -] - -# to prevent coverage bugs -ENTRY_POINTS = { - "pytest11": [ - "vulture = pytest_vulture.plugin", - ] -} if not IS_TEST else {} - -setup( - name='pytest-vulture', - version='2.1.1', - include_package_data=True, - author='Abadie Moran', - author_email='[email protected]', - maintainer='Abadie Moran', - maintainer_email='[email protected]', - license='MIT', - url='https://github.com/Gatewatcher/pytest-vulture', - description='A pytest plugin to checks dead code with vulture', - long_description=(Path(__file__).parent / "README.rst").read_text(encoding="utf-8"), - long_description_content_type="text/x-rst", - package_dir={ - '': 'src', - }, - packages=find_packages( - 'src' - ), - install_requires=install_requires, - tests_require=test_requires, - extras_require={ - 'test': test_requires, - 'dev': test_requires + dev_requires, - }, - entry_points=ENTRY_POINTS, - classifiers=["Framework :: Pytest"], -) diff --git a/tox.ini b/tox.ini index 1000d13..3e76477 100644 --- a/tox.ini +++ b/tox.ini @@ -1,6 +1,7 @@ [tox] -envlist = py3{7,8,9,10} +envlist = py3{7,8,9,10,11,12} distdir = {toxinidir}/dist +skipsdist = True [pytest] testpaths = test/tests/ src/ @@ -20,15 +21,17 @@ addopts = --cov src/ --cov test/tests/ --doctest-modules - --isort - --mccabe --mypy - --pylint --pylint-rcfile pylint.ini --disable-pytest-warnings --verbose [testenv] -deps = .[test] -setenv = IS_UNIT_TEST = 1 -commands = {envpython} setup.py test --addopts 
"--basetemp {envtmpdir} --confcutdir .. --cache-clear {posargs}" +deps = + vulture>=2.0,<3.0 + -r test-requirements.txt +commands = pytest {posargs} + +[package] +setup_path = pyproject.toml +source_path = src
Support pyproject.toml settings It seems only settings in `vulture.ini` are supported. Please consider supporting vulture settings located in `pyproject.toml` of a project. E.g. ``` [tool.vulture] exclude = ["build/", "docs/source/conf_correct.py"] make_whitelist = true ```
2024-11-21T16:43:28
0.0
[]
[]
Gatewatcher/pytest-vulture
Gatewatcher__pytest-vulture-8
85fdb64ea17799b23f57c827d05e912a21bac0f3
diff --git a/README.rst b/README.rst index 118a896..7d50e49 100644 --- a/README.rst +++ b/README.rst @@ -93,7 +93,7 @@ If you encounter any problems, please file an issue along with a detailed descri Releases ======== -2.0.1 +2.0.2 ~~~~~~ - Uses vulture with pytest (tested with python 3.7 3.8 and 3.9, with vulture==2.3 and pytest 7.x) diff --git a/setup.py b/setup.py index aa49839..786033f 100644 --- a/setup.py +++ b/setup.py @@ -44,7 +44,7 @@ setup( name='pytest-vulture', - version='2.0.1', + version='2.0.2', include_package_data=True, author='Abadie Moran', author_email='[email protected]',
the file is not shown in tox
2022-10-12T14:09:27
0.0
[]
[]
fumitoh/modelx
fumitoh__modelx-160
1dd92c7ed69f14aa7d739a073c129c6f9ddc986c
diff --git a/modelx/serialize/serializer_6.py b/modelx/serialize/serializer_6.py index e11ffe6..d41e2d3 100644 --- a/modelx/serialize/serializer_6.py +++ b/modelx/serialize/serializer_6.py @@ -1493,7 +1493,10 @@ def condition(cls, node): return True def decode(self): - valstr = self.node.first_token.string.strip() + if isinstance(self.node, ast.UnaryOp): # such as -1 + valstr = self.node.first_token.string.strip() + self.node.last_token.string.strip() + else: + valstr = self.node.first_token.string.strip() if valstr in ["True", "False", "None"]: return ast.literal_eval(self.node) else:
Can't save "-1" value of the reference variable

```
import modelx as mx

m, s = mx.new_model(), mx.new_space()
s.a = -1
mx.write_model(m, 'm')
mx.read_model('m')
```
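The failure is easy to see at the AST level: `-1` parses as a `UnaryOp` wrapping a `Constant`, so a decoder that looks only at the node's first token sees just the minus sign. A quick illustration (the `first_token`/`last_token` attributes in the patch come from token-annotated trees, e.g. via `asttokens`):

```python
import ast

node = ast.parse("-1", mode="eval").body
print(type(node).__name__)          # UnaryOp
print(type(node.op).__name__)       # USub  (the leading "-")
print(type(node.operand).__name__)  # Constant  (the "1")
# Hence the patch: for UnaryOp nodes, concatenate the first and last
# token strings ("-" + "1") to recover the full literal "-1".
```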
2024-10-05T12:05:07
0.0
[]
[]
fumitoh/modelx
fumitoh__modelx-136
0e5c7ce78f52b0502d65cbe2a758a4617125449d
diff --git a/modelx/core/space.py b/modelx/core/space.py index 05448a2..3a3c2df 100644 --- a/modelx/core/space.py +++ b/modelx/core/space.py @@ -1146,6 +1146,10 @@ def parameters(self, parameters): src = "lambda " + ", ".join(parameters) + ": None" self._impl.set_formula(src) + @parameters.deleter + def parameters(self): + self._impl.del_formula() + def del_formula(self): """Delete formula""" self._impl.del_formula()
Implement del UserSpace.parameters

Implement `del UserSpace.parameters`, which works the same as `del UserSpace.formula`.
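A hedged usage sketch of what the deleter enables, based on the setter shown in the diff (the setter compiles the parameter names into a `lambda ...: None` formula, and `del` now clears it):

```python
import modelx as mx

space = mx.new_space()
space.parameters = ("x", "y")   # per the diff: sets formula "lambda x, y: None"

del space.parameters            # new: delegates to _impl.del_formula(),
                                # same effect as `del space.formula`
```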
2024-07-15T03:32:19
0.0
[]
[]
imankulov/django-plausible-proxy
imankulov__django-plausible-proxy-8
ce95c38ec9dd2ce7e2aa1d4f449c763a1557e3ab
diff --git a/HISTORY.md b/HISTORY.md index a61a4d1..67910b9 100644 --- a/HISTORY.md +++ b/HISTORY.md @@ -4,6 +4,8 @@ - Drop support for Python 3.7. - Make it possible to call send_custom_event() with explicit remote_addr. +- Make it possible to render the {% plausible %} templatetag without the request object, when PLAUSIBLE_DOMAIN is set + in settings. Thanks @hendi for the report. Ref #7. ## 0.4.0 (2023-03-30) diff --git a/plausible_proxy/templatetags/plausible.py b/plausible_proxy/templatetags/plausible.py index 0038e99..3804ac0 100644 --- a/plausible_proxy/templatetags/plausible.py +++ b/plausible_proxy/templatetags/plausible.py @@ -1,10 +1,9 @@ from django import template +from django.conf import settings from django.forms.utils import flatatt from django.urls import reverse from django.utils.safestring import mark_safe -from plausible_proxy.services import get_default_domain - register = template.Library() @@ -25,7 +24,14 @@ def plausible(context, domain=None, script="script.js"): `<script data-domain="example.com" src="/js/script.js" defer></script>` """ if domain is None: - domain = get_default_domain(context["request"]) + domain = getattr(settings, "PLAUSIBLE_DOMAIN", None) + if domain is None: + request = context.get("request") + if request is None: + raise ValueError( + "PLAUSIBLE_DOMAIN is not defined and request is not set in context." + ) + domain = request.get_host() attrs = { "defer": True, "data-domain": domain,
KeyError: 'request' during CSRF error

* Django Plausible Proxy version: 0.4.0
* Python version: 3.11.3
* Operating System: Debian stable

### Description

When a CSRF error is triggered, there is no `request` in the template's `context`. This causes django-plausible-proxy's templatetag to fail [at this line](https://github.com/imankulov/django-plausible-proxy/blob/612d92603fb134fdab80908464a9259b911bd101/plausible_proxy/templatetags/plausible.py#L28).

### What I Did

Create a form, alter the CSRF token via browser dev tools, and submit the form.
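The patch resolves the domain in a fixed order, which can be sketched in isolation like so (a simplification of the real templatetag, not a drop-in replacement):

```python
# Simplified resolution order from the patch (illustrative only):
def resolve_domain(context, domain=None, plausible_domain_setting=None):
    if domain is not None:
        return domain                       # explicit tag argument wins
    if plausible_domain_setting is not None:
        return plausible_domain_setting     # settings.PLAUSIBLE_DOMAIN
    request = context.get("request")        # .get() avoids the KeyError
    if request is None:
        raise ValueError(
            "PLAUSIBLE_DOMAIN is not defined and request is not set in context."
        )
    return request.get_host()
```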
2023-05-25T15:29:32
0.0
[]
[]
victordibia/llmx
victordibia__llmx-7
f2d467a7c0d3fc5c444c37e5a8d834a60801e0d5
diff --git a/.gitignore b/.gitignore index da2b505..1ada4b3 100644 --- a/.gitignore +++ b/.gitignore @@ -9,6 +9,7 @@ notebooks/.env __pycache__/ *.py[cod] *$py.class +configs/config.yml .DS_Store n diff --git a/MANIFEST.in b/MANIFEST.in index 35c5c37..fa14852 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -1,1 +1,3 @@ -recursive-exclude notebooks \ No newline at end of file +recursive-exclude notebooks +recursive-exclude configs +recursive-exclude tests \ No newline at end of file diff --git a/README.md b/README.md index 779816e..1a4d58a 100644 --- a/README.md +++ b/README.md @@ -85,6 +85,8 @@ export PALM_PROJECT_ID=<your gcp project id> export PALM_PROJECT_LOCATION=<your project location> ``` +You can also set the default provider via a config file. Use the yaml format in this [sample `config.default.yml` file](configs/config.default.yml) and set the `LLMX_CONFIG_PATH` to the path of the config file. + ```python from llmx import llm from llmx.datamodel import TextGenerationConfig diff --git a/configs/config.default.yml b/configs/config.default.yml new file mode 100644 index 0000000..2566a5c --- /dev/null +++ b/configs/config.default.yml @@ -0,0 +1,40 @@ +# Sets the the default model to use for llm() when no provider parameter is set. + +model: + provider: openai + parameters: + api_key: null +# example with azureopenai model +# model: +# provider: azureopenai +# parameters: +# api_key: <your-api-key> +# api_type: azure +# api_base: <your-api-base> +# api_version: <your-api-version> +# organization: <your-organization> # or null +# model: <your-model> + +# example with huggingface model +# model: +# provider: huggingface +# parameters: +# model: uukuguy/speechless-llama2-hermes-orca-platypus-13b +# device_map: auto +# trust_remote_code: true + +# # example palm via vertex ai +# model: +# provider: palm +# parameters: +# model: chat-bison@001 +# project_id: <your-project-id> +# project_location: <your-project-location> +# palm_key_file: <path-to-your-palm-key-file> + +# # example palm via makersuite +# model: +# provider: palm +# parameters: +# model: chat-bison-001 +# api_key: <your-makersuite-api-key> diff --git a/llmx/generators/text/cohere_textgen.py b/llmx/generators/text/cohere_textgen.py index d1ba3c2..7b36db1 100644 --- a/llmx/generators/text/cohere_textgen.py +++ b/llmx/generators/text/cohere_textgen.py @@ -14,6 +14,7 @@ def __init__( self, api_key: str = None, provider: str = "cohere", + model: str = None, ): super().__init__(provider=provider) api_key = api_key or os.environ.get("COHERE_API_KEY", None) @@ -23,6 +24,7 @@ def __init__( ) self.client = cohere.Client(api_key) self.model_list = providers[provider]["models"] + self.model_name = model or "command" def format_messages(self, messages): prompt = "" @@ -42,14 +44,14 @@ def generate( ) -> TextGenerationResponse: use_cache = config.use_cache messages = self.format_messages(messages) - self.model_name = config.model + self.model_name = config.model or self.model_name max_tokens = ( self.model_list[config.model] if config.model in self.model_list else 1024 ) cohere_config = { - "model": config.model or "command", + "model": self.model_name, "prompt": messages, "max_tokens": config.max_tokens or max_tokens, "temperature": config.temperature, diff --git a/llmx/generators/text/hf_textgen.py b/llmx/generators/text/hf_textgen.py index 1643957..ec8794d 100644 --- a/llmx/generators/text/hf_textgen.py +++ b/llmx/generators/text/hf_textgen.py @@ -95,7 +95,7 @@ def __init__(self, provider: str = "huggingface", device_map=None, **kwargs): 
self.dialogue_type = kwargs.get("dialogue_type", "alpaca") - self.model_name = kwargs.get("model", "TheBloke/gpt4-x-vicuna-13B-HF") + self.model_name = kwargs.get("model", "uukuguy/speechless-llama2-hermes-orca-platypus-13b") self.load_in_8bit = kwargs.get("load_in_8bit", False) self.trust_remote_code = kwargs.get("trust_remote_code", False) self.device = kwargs.get("device", self.get_default_device()) diff --git a/llmx/generators/text/openai_textgen.py b/llmx/generators/text/openai_textgen.py index fe221ec..277fd0c 100644 --- a/llmx/generators/text/openai_textgen.py +++ b/llmx/generators/text/openai_textgen.py @@ -29,6 +29,7 @@ def __init__( api_type: str = None, api_base: str = None, api_version: str = None, + model: str = None, ): super().__init__(provider=provider) api_key = api_key or os.environ.get("OPENAI_API_KEY", None) @@ -47,6 +48,8 @@ def __init__( if api_type: openai.api_type = api_type + self.model_name = model or "gpt-3.5-turbo" + # print content of class fields # print(vars(openai)) @@ -57,7 +60,7 @@ def generate( **kwargs, ) -> TextGenerationResponse: use_cache = config.use_cache - model = config.model or "gpt-3.5-turbo-0301" + model = config.model or self.model_name prompt_tokens = num_tokens_from_messages(messages) max_tokens = max(context_lengths.get(model, 4096) - prompt_tokens - 10, 200) diff --git a/llmx/generators/text/palm_textgen.py b/llmx/generators/text/palm_textgen.py index 9c199f4..21731a4 100644 --- a/llmx/generators/text/palm_textgen.py +++ b/llmx/generators/text/palm_textgen.py @@ -23,6 +23,7 @@ def __init__( project_id: str = os.environ.get("PALM_PROJECT_ID", None), project_location=os.environ.get("PALM_PROJECT_LOCATION", "us-central1"), provider: str = "palm", + model: str = None, ): super().__init__(provider=provider) @@ -42,6 +43,7 @@ def __init__( self.credentials = get_gcp_credentials(palm_key_file) if palm_key_file else None self.model_list = providers[provider]["models"] if provider in providers else {} + self.model_name = model or "chat-bison" def format_messages(self, messages): palm_messages = [] @@ -78,7 +80,7 @@ def generate( **kwargs, ) -> TextGenerationResponse: use_cache = config.use_cache - model = config.model or "chat-bison" + model = config.model or self.model_name system_messages, messages = self.format_messages(messages) self.model_name = model diff --git a/llmx/generators/text/textgen.py b/llmx/generators/text/textgen.py index 3bd914b..9a3b0ba 100644 --- a/llmx/generators/text/textgen.py +++ b/llmx/generators/text/textgen.py @@ -1,10 +1,26 @@ +from ...utils import load_config from .openai_textgen import OpenAITextGenerator from .palm_textgen import PalmTextGenerator from .cohere_textgen import CohereTextGenerator +import logging +logger = logging.getLogger(__name__) -def llm(provider: str = "openai", **kwargs): - if provider.lower() == "openai" or provider.lower() == "default": + +def llm(provider: str = None, **kwargs): + + # load config + if provider is None: + # attempt to load config from environment variable LLMX_CONFIG_PATH + config = load_config() + if config: + provider = config["model"]["provider"] + kwargs = config["model"]["parameters"] + if provider is None: + logger.info("No provider specified. 
Defaulting to 'openai'.") + provider = "openai" + if provider.lower() == "openai" or provider.lower() == "default" or provider.lower( + ) == "azureopenai" or provider.lower() == "azureoai": return OpenAITextGenerator(**kwargs) elif provider.lower() == "palm" or provider.lower() == "google": return PalmTextGenerator(provider=provider, **kwargs) @@ -13,11 +29,6 @@ def llm(provider: str = "openai", **kwargs): elif provider.lower() == "hf" or provider.lower() == "huggingface": try: import transformers - from transformers import ( - AutoTokenizer, - AutoModelForCausalLM, - GenerationConfig, - ) except ImportError: raise ImportError( "Please install the `transformers` package to use the HFTextGenerator class. pip install llmx[transformers]" diff --git a/llmx/utils.py b/llmx/utils.py index 26dcaf6..71d24b5 100644 --- a/llmx/utils.py +++ b/llmx/utils.py @@ -12,6 +12,7 @@ import google.auth.transport.requests from google.oauth2 import service_account import requests +import yaml logger = logging.getLogger(__name__) @@ -128,3 +129,36 @@ def gcp_request( ) return response.json() + + +def load_config(config_path: str = "LLMX_CONFIG_PATH"): + try: + config_path = os.environ.get(config_path, None) + if config_path is not None: + try: + with open(config_path, "r", encoding="utf-8") as f: + config = yaml.safe_load(f) + logger.info( + f"Loaded config {config['model']['provider']} from '%s'.", + config_path) + return config + except FileNotFoundError as file_not_found: + logger.info( + "Error: Config file not found at '%s'. Please check the LLMX_CONFIG_PATH environment variable. %s", + config_path, + str(file_not_found)) + except IOError as io_error: + logger.info( + "Error: Could not read the config file at '%s'. %s", + config_path, str(io_error)) + except yaml.YAMLError as yaml_error: + logger.info( + "Error: Malformed YAML in config file at '%s'. %s", + config_path, str(yaml_error)) + else: + logger.info( + "Info:LLMX_CONFIG_PATH environment variable is not set. 
Please set it to the path of your config file to setup your default model.") + except Exception as error: + logger.info("Error: An unexpected error occurred: %s", str(error)) + + return None diff --git a/llmx/version.py b/llmx/version.py index 9d43133..e733f91 100644 --- a/llmx/version.py +++ b/llmx/version.py @@ -1,2 +1,2 @@ -VERSION = "0.0.12a" +VERSION = "0.0.13a" APP_NAME = "llmx" diff --git a/notebooks/tutorial.ipynb b/notebooks/tutorial.ipynb index 6a4a05d..239b250 100644 --- a/notebooks/tutorial.ipynb +++ b/notebooks/tutorial.ipynb @@ -114,7 +114,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 5, "metadata": {}, "outputs": [ { @@ -157,7 +157,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 6, "metadata": {}, "outputs": [ { @@ -192,7 +192,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 7, "metadata": {}, "outputs": [ { @@ -219,16 +219,18 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "hf_generator = llm(provider=\"hf\", model=\"uukuguy/speechless-llama2-hermes-orca-platypus-13b\", device_map=\"auto\")" + ] + }, + { + "cell_type": "code", + "execution_count": 9, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Loading checkpoint shards: 100%|██████████| 3/3 [00:09<00:00, 3.25s/it]\n" - ] - }, { "name": "stdout", "output_type": "stream", @@ -238,18 +240,10 @@ } ], "source": [ - "hf_generator = llm(provider=\"hf\", model=\"uukuguy/speechless-llama2-hermes-orca-platypus-13b\", device_map=\"auto\")\n", "hf_config = TextGenerationConfig(temperature=0, max_tokens=650, use_cache=False)\n", "hf_response = hf_generator.generate(messages, config=hf_config)\n", "print(hf_response.text[0].content)" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { diff --git a/pyproject.toml b/pyproject.toml index 01bfa4f..ab8adf7 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -25,7 +25,8 @@ dependencies = [ "diskcache", "cohere", "google.auth", - "typer" + "typer", + "pyyaml", ] optional-dependencies = {web = ["fastapi", "uvicorn"], transformers = ["transformers[torch]>=4.26"]}
Support Setting Default LLM from Env Variable

## What

For certain use cases, e.g. CLI scripts and web apps, it is important to provide some mechanism to externally set the default LLM provider and configuration.

## Work Items

- Set an LLMX_CONFIG environment variable, which should be a path to the configuration for a model (see the sketch after this list).
- In `llmx.generators.text.textgen.py`, check for an available config and use it to instantiate `llm()`. Merge provided kwargs with the content of the config file, with kwargs taking precedence.
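A hedged sketch of the resulting workflow, matching the `LLMX_CONFIG_PATH` variable name and YAML shape actually added in the patch (the concrete file path is hypothetical):

```python
import os

# Point llmx at a YAML config of the shape added in configs/config.default.yml:
#   model:
#     provider: openai
#     parameters:
#       api_key: null
os.environ["LLMX_CONFIG_PATH"] = "/path/to/config.yml"  # hypothetical path

from llmx import llm

generator = llm()  # no provider given: load_config() supplies provider + kwargs
```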
2023-09-23T23:00:52
0.0
[]
[]
arangodb/python-arango
arangodb__python-arango-331
3e43b368b63292ed243ff5f7aabb3f6bfac859f1
diff --git a/arango/database.py b/arango/database.py index 1c1713e0..ca0895b9 100644 --- a/arango/database.py +++ b/arango/database.py @@ -2948,6 +2948,14 @@ def begin_batch_execution( """ return BatchDatabase(self._conn, return_result, max_workers) + def fetch_transaction(self, transaction_id: str) -> "TransactionDatabase": + """Fetch an existing transaction. + + :param transaction_id: The ID of the existing transaction. + :type transaction_id: str + """ + return TransactionDatabase(connection=self._conn, transaction_id=transaction_id) + def begin_transaction( self, read: Union[str, Sequence[str], None] = None, @@ -3125,6 +3133,9 @@ class TransactionDatabase(Database): :type lock_timeout: int | None :param max_size: Max transaction size in bytes. :type max_size: int | None + :param transaction_id: Initialize using an existing transaction instead of creating + a new transaction. + :type transaction_id: str | None """ def __init__( @@ -3137,6 +3148,7 @@ def __init__( allow_implicit: Optional[bool] = None, lock_timeout: Optional[int] = None, max_size: Optional[int] = None, + transaction_id: Optional[str] = None, ) -> None: self._executor: TransactionApiExecutor super().__init__( @@ -3150,6 +3162,7 @@ def __init__( allow_implicit=allow_implicit, lock_timeout=lock_timeout, max_size=max_size, + transaction_id=transaction_id, ), ) diff --git a/arango/exceptions.py b/arango/exceptions.py index 000a0f8f..52ad8ffd 100644 --- a/arango/exceptions.py +++ b/arango/exceptions.py @@ -772,6 +772,10 @@ class TransactionAbortError(ArangoServerError): """Failed to abort transaction.""" +class TransactionFetchError(ArangoServerError): + """Failed to fetch existing transaction.""" + + class TransactionListError(ArangoServerError): """Failed to retrieve transactions.""" diff --git a/arango/executor.py b/arango/executor.py index 47ac4a19..b854c671 100644 --- a/arango/executor.py +++ b/arango/executor.py @@ -19,6 +19,7 @@ OverloadControlExecutorError, TransactionAbortError, TransactionCommitError, + TransactionFetchError, TransactionInitError, TransactionStatusError, ) @@ -241,6 +242,9 @@ class TransactionApiExecutor: :type max_size: int :param allow_dirty_read: Allow reads from followers in a cluster. :type allow_dirty_read: bool | None + :param transaction_id: Initialize using an existing transaction instead of starting + a new transaction. 
+ :type transaction_id: str | None """ def __init__( @@ -254,6 +258,7 @@ def __init__( lock_timeout: Optional[int] = None, max_size: Optional[int] = None, allow_dirty_read: bool = False, + transaction_id: Optional[str] = None, ) -> None: self._conn = connection @@ -275,19 +280,29 @@ def __init__( if max_size is not None: data["maxTransactionSize"] = max_size - request = Request( - method="post", - endpoint="/_api/transaction/begin", - data=data, - headers={"x-arango-allow-dirty-read": "true"} if allow_dirty_read else None, - ) - resp = self._conn.send_request(request) + if transaction_id is None: + request = Request( + method="post", + endpoint="/_api/transaction/begin", + data=data, + headers=( + {"x-arango-allow-dirty-read": "true"} if allow_dirty_read else None + ), + ) + resp = self._conn.send_request(request) - if not resp.is_success: - raise TransactionInitError(resp, request) + if not resp.is_success: + raise TransactionInitError(resp, request) + + result = resp.body["result"] + self._id: str = result["id"] + else: + self._id = transaction_id - result: Json = resp.body["result"] - self._id: str = result["id"] + try: + self.status() + except TransactionStatusError as err: + raise TransactionFetchError(err.response, err.request) @property def context(self) -> str: diff --git a/docs/transaction.rst b/docs/transaction.rst index 18d60a68..66fb50c8 100644 --- a/docs/transaction.rst +++ b/docs/transaction.rst @@ -68,6 +68,15 @@ logical unit of work (ACID compliant). assert '_rev' in txn_col.insert({'_key': 'Lily'}) assert len(txn_col) == 6 + # Fetch an existing transaction. Useful if you have received a Transaction ID + # from some other part of your system or an external system. + original_txn = db.begin_transaction(write='students') + txn_col = original_txn.collection('students') + assert '_rev' in txn_col.insert({'_key': 'Chip'}) + txn_db = db.fetch_transaction(original_txn.transaction_id) + txn_col = txn_db.collection('students') + assert '_rev' in txn_col.insert({'_key': 'Alya'}) + # Abort the transaction txn_db.abort_transaction() assert 'Kate' not in col
feature request: Continue an existing transaction

I've come across a case where a transaction needs to be shared across multiple systems. If we wrap the REST API we can easily achieve this by setting the `x-arango-trx-id` header. However, we would like to be able to receive transaction IDs on both ends and continue the transaction seamlessly using the python-arango interface, instead of crudely performing raw queries against `/_cursor`.

I've come up with the following hack, which _does_ work, but given that `_executor` is private, and `_executor.id` specifically doesn't have a setter, I'm guessing there may be a reason it's discouraged:

```python
from copy import deepcopy

from arango.client import ArangoClient
from arango.database import StandardDatabase, TransactionDatabase

def continue_transaction(db: StandardDatabase, transaction_id):
    trx = TransactionDatabase(connection=deepcopy(db.conn))
    trx._executor._id = transaction_id
    return trx

db = ArangoClient(...).db(...)
trx = continue_transaction(db=db, transaction_id="1234")
trx.collection("vertex").insert({"_key": "test"})
trx.commit_transaction()
# Alternatively, don't commit here and let the client who provided the
# transaction commit it themselves.
```

Would it make sense to support something like this directly? It seems to me like a reasonable use-case. If so, I'm happy to take a stab at developing a PR for this myself.
Hi @Moortiii, I understand your proposal, and I think it is quite sensible. Updating the same transaction concurrently can cause some uncertainty due to timing issues, but when done carefully, I can imagine some valid use-cases.

As you pointed out, the `_executor.id` is indeed private. While adding a setter would be the easy way out of this, it would potentially allow users to write code like this:

```python
trx = db.begin_transaction()
col1 = trx.collection("col1")
trx._executor._id = another_transaction
col2 = trx.collection("col2")
```

Not only can the transaction ID easily get lost, thus preventing one from ever accessing the initial transaction again, but the problem can be easily overlooked, as it is hidden in just one line of code. Frankly, I believe even the `x-arango-trx-id` setting trick is way better - it may look weird, but it's "loud and clear"; there will be no problem figuring out what (and why) you wrote there.

**Following up on what I would consider a reasonable solution**

- Modify the `TransactionApiExecutor` constructor such that it contains a new field, `transaction_id`, which can be `None` or `str` (basically an `Optional[str]`). In case it is a `str`, the constructor should no longer send a request to */_api/transaction/begin*, but set the `_id` property directly and check the `status()` of the transaction in order to validate that it really exists.
- The same parameter should be added to `TransactionDatabase`, which would forward it to the executor. This is straightforward.
- `StandardDatabase` should get a `fetch_transaction` method, which takes the transaction ID and returns a `TransactionDatabase`. I'm suggesting `fetch_transaction` because it implies that a transaction may (or may not) be there, rather than continuing one (which is not necessarily "paused").

**Testing**

Introduce a test case in `test_transaction.py`, something simple, just to check that we're able to use both the initial transaction and the "continued" object.

```python
def test_transaction_fetch(db, col, docs):
    txn_db = db.begin_transaction(write=col.name)
    txn_col = txn_db.collection(col.name)
    txn_db2 = db.fetch_transaction(txn_db.transaction_id)
    # insert some documents using both txn's
    # ...
```

**Docs**

A small edit in _transaction.rst_ would be great to showcase how `fetch_transaction` is supposed to be used.

I'm ready to implement the above. Or, if you want to give it a go, I'm perfectly fine with that, but don't feel pressured; I'm just mentioning it since you offered. Let me know how you want to proceed.

These seem like sensible changes that should be straightforward enough to implement. I'll give it a shot later today and report back.

Thanks! I agree with your comment about `continue`, which could imply the ability to "pause" a transaction. I hadn't thought about it that way, but you're probably right that it would cause some confusion, especially for new users of ArangoDB.
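Putting the agreed design together, end-to-end usage would look roughly like this (credentials and collection names are placeholders; `fetch_transaction` and `transaction_id` match the merged diff):

```python
from arango import ArangoClient

db = ArangoClient().db("mydb", username="root", password="passwd")  # placeholders

txn_db = db.begin_transaction(write="students")
txn_db.collection("students").insert({"_key": "Kate"})

# Elsewhere (possibly another process that only received the transaction ID):
txn_db2 = db.fetch_transaction(txn_db.transaction_id)
txn_db2.collection("students").insert({"_key": "Lily"})
txn_db2.commit_transaction()
```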
2024-03-11T11:06:12
0.0
[]
[]
arangodb/python-arango
arangodb__python-arango-296
879434ffc880a7f47665c45278f4c60b25c1be6d
diff --git a/arango/utils.py b/arango/utils.py index 8ad925c5..359b1e37 100644 --- a/arango/utils.py +++ b/arango/utils.py @@ -120,5 +120,5 @@ def build_filter_conditions(filters: Json) -> str: if not filters: return "" - conditions = [f"doc.{k} == {json.dumps(v)}" for k, v in filters.items()] + conditions = [f"doc.`{k}` == {json.dumps(v)}" for k, v in filters.items()] return "FILTER " + " AND ".join(conditions)
broken build_filter_conditions for keys with spaces // backtick escaping missing?

I recently ran into a crash (while my code did not change) when using collection.find() to check whether the following dict data already exists:

    {'article': 'bottle-white', 'amount': 50000, 'order by date': '2023-09-20', 'delivery date': '2023-10-20', 'type': 'bottle capacity'}

I traced it down to utils.py / build_filter_conditions, which is

    conditions = [f"doc.{k} == {json.dumps(v)}" for k, v in filters.items()]

but should be

    conditions = [f"doc.`{k}` == {json.dumps(v)}" for k, v in filters.items()]

to allow keys with spaces, as is possible in ArangoDB. I would have bet that it worked on a previous version of python-arango, but the git history seems to say otherwise. Thanks for considering a fix.

Edit: It appears version 7.6.0 was the last one working: it does not have the build_filter_conditions function but relies on another method that transmits the JSON data directly to a URL, instead of composing an AQL query that now breaks.
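The one-character fix is easy to verify in isolation; the snippet below mirrors the patched function:

```python
import json

def build_filter_conditions(filters):
    # Patched version: backticks quote attribute names, so keys with
    # spaces become valid AQL attribute accesses.
    if not filters:
        return ""
    conditions = [f"doc.`{k}` == {json.dumps(v)}" for k, v in filters.items()]
    return "FILTER " + " AND ".join(conditions)

print(build_filter_conditions({"order by date": "2023-09-20"}))
# FILTER doc.`order by date` == "2023-09-20"
```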
2023-11-13T18:35:51
0.0
[]
[]
aiortc/aiortc
aiortc__aiortc-1090
55b29590db07a5d98fb4b7fbee969893c1c1ce92
diff --git a/src/aiortc/rtcpeerconnection.py b/src/aiortc/rtcpeerconnection.py index 17e06cb0a..70b5ee9d2 100644 --- a/src/aiortc/rtcpeerconnection.py +++ b/src/aiortc/rtcpeerconnection.py @@ -928,6 +928,8 @@ async def setRemoteDescription( iceTransport._role_set = True # set DTLS role + if description.type == "offer" and media.dtls.role == "client": + dtlsTransport._set_role(role="server") if description.type == "answer": dtlsTransport._set_role( role="server" if media.dtls.role == "client" else "client" @@ -1273,8 +1275,6 @@ def __validate_description( raise ValueError("ICE username fragment or password is missing") # check DTLS role is allowed - if description.type == "offer" and media.dtls.role != "auto": - raise ValueError("DTLS setup attribute must be 'actpass' for an offer") if description.type in ["answer", "pranswer"] and media.dtls.role not in [ "client", "server",
aiortc rejecting offer with setup=passive against RFC guidance

Repro steps

1. Have an SDP that has `a=setup:passive` in it
2. Create `RTCSessionDescription` with this SDP and `type="offer"`
3. Call `pc.setRemoteDescription` with the above session description

Expected

1. This succeeds. See the [RFC spec](https://datatracker.ietf.org/doc/html/rfc8842#section-5.3-6), which states that the receiver should be prepared for both `active` and `passive` as well.

> Even though an offerer is required to insert an "SDP" setup attribute with an "actpass" attribute value in initial offers ([Section 5.2](https://datatracker.ietf.org/doc/html/rfc8842#sec-oa-offer)) and subsequent offers ([Section 5.5](https://datatracker.ietf.org/doc/html/rfc8842#sec-oa-mod)), the **_answerer MUST be able to receive initial and subsequent offers with other attribute values_**, in order to be backward compatible with older implementations that might insert other attribute values in initial and subsequent offers.

Actual

1. An exception is raised with the message "DTLS setup attribute must be 'actpass' for an offer" from the [__validate_description method](https://github.com/aiortc/aiortc/blob/e9c13eab915ddc27f365356ed9ded0585b9a1bb7/src/aiortc/rtcpeerconnection.py#L1277)

PS: I see this got discussed a bit in https://github.com/aiortc/aiortc/issues/267
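The merged diff boils down to a small role-selection rule; a standalone paraphrase (illustrative only, not aiortc's actual code path) looks like this:

```python
# Paraphrase of the patched DTLS role selection (not aiortc's real code):
def dtls_role_for(description_type: str, remote_setup_role: str):
    if description_type == "offer" and remote_setup_role == "client":
        return "server"  # remote offerer insists on being the DTLS client
    if description_type == "answer":
        return "server" if remote_setup_role == "client" else "client"
    return None  # otherwise keep automatic negotiation ("auto")
```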
2024-05-04T00:48:16
0.0
[]
[]
jupyterlab-contrib/jupyter-archive
jupyterlab-contrib__jupyter-archive-66
3a66ae589b5fa82af3c8b5f93dab9f4bcc2b009d
diff --git a/jupyter_archive/handlers.py b/jupyter_archive/handlers.py index 35e3064..4401959 100644 --- a/jupyter_archive/handlers.py +++ b/jupyter_archive/handlers.py @@ -72,15 +72,15 @@ def make_writer(handler, archive_format="zip"): def make_reader(archive_path): - archive_format = "".join(archive_path.suffixes)[1:] + archive_format = "".join(archive_path.suffixes) - if archive_format.endswith("zip"): + if archive_format.endswith(".zip"): archive_file = zipfile.ZipFile(archive_path, mode="r") - elif any([archive_format.endswith(ext) for ext in ["tgz", "tar.gz"]]): + elif any([archive_format.endswith(ext) for ext in [".tgz", ".tar.gz"]]): archive_file = tarfile.open(archive_path, mode="r|gz") - elif any([archive_format.endswith(ext) for ext in ["tbz", "tbz2", "tar.bz", "tar.bz2"]]): + elif any([archive_format.endswith(ext) for ext in [".tbz", ".tbz2", ".tar.bz", ".tar.bz2"]]): archive_file = tarfile.open(archive_path, mode="r|bz2") - elif any([archive_format.endswith(ext) for ext in ["txz", "tar.xz"]]): + elif any([archive_format.endswith(ext) for ext in [".txz", ".tar.xz"]]): archive_file = tarfile.open(archive_path, mode="r|xz") else: raise ValueError("'{}' is not a valid archive format.".format(archive_format)) @@ -141,8 +141,7 @@ async def get(self, archive_path, include_body=False): raise web.HTTPError(400) archive_path = pathlib.Path(cm.root_dir) / url2path(archive_path) - archive_name = archive_path.name - archive_filename = archive_path.with_suffix(".{}".format(archive_format)).name + archive_filename = f"{archive_path.name}.{archive_format}" self.log.info("Prepare {} for archiving and downloading.".format(archive_filename)) self.set_header("content-type", "application/octet-stream")
Check the extension of extracted file for backward matching

Fix #63

I have modified it to check the extension of extracted file for backward matching.
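The bug is a classic suffix-matching pitfall: with the leading dot stripped, `endswith("zip")` also matches extensions that merely end in the same letters. A minimal demonstration:

```python
import pathlib

# Patched code keeps the leading dot when joining the suffixes:
archive_format = "".join(pathlib.Path("data.notzip").suffixes)  # ".notzip"

print(archive_format.endswith("zip"))   # True  -- old check: false positive
print(archive_format.endswith(".zip"))  # False -- fixed check: correctly rejected
```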
Thank you for very quick review and merging!!
2021-09-10T08:18:26
0.0
[]
[]
unioslo/harborapi
unioslo__harborapi-96
a44ba8dc41238025ab6c6122d7483b5357849fd7
diff --git a/CHANGELOG.md b/CHANGELOG.md index 60ac6307..6549f7e2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,7 +12,22 @@ While the project is still on major version 0, breaking changes may be introduce <!-- changelog follows --> -<!-- ## Unreleased --> +## Unreleased + +### Changed + +- Models updated to API schema from [c97253f](https://github.com/goharbor/harbor/blob/4a12623459a754ff4d07fbd1cddb4df436e8524c/api/v2.0/swagger.yaml) +- All root models now share a common `harborapi.models.base.RootModel` base class. + +### Fixed + +- Root models failing to build on Pydantic 2.10. + +### Removed + +- Generic aliased root models: + - `harborapi.models.base.StrDictRootModel` + - `harborapi.models.base.StrRootModel` ## [0.25.3](https://github.com/unioslo/harborapi/tree/harborapi-v0.25.3) - 2024-08-26 diff --git a/codegen/ast/fragments/main/registryproviders.py b/codegen/ast/fragments/main/registryproviders.py index 5a9fb291..6e6d2b05 100644 --- a/codegen/ast/fragments/main/registryproviders.py +++ b/codegen/ast/fragments/main/registryproviders.py @@ -8,7 +8,7 @@ from pydantic import Field -class RegistryProviders(RootModel): +class RegistryProviders(RootModel[Dict[str, RegistryProviderInfo]]): root: Dict[str, RegistryProviderInfo] = Field( default={}, description="The registry providers. Each key is the name of the registry provider.", diff --git a/codegen/ast/parser.py b/codegen/ast/parser.py index c4ed2dee..4693add6 100644 --- a/codegen/ast/parser.py +++ b/codegen/ast/parser.py @@ -310,7 +310,7 @@ def modify_module(tree: ast.Module, fragment_dir: FragmentDir) -> ast.Module: # Imports that should be added to every file ADD_IMPORTS = { - "harborapi.models.base": ["StrDictRootModel", "StrRootModel"], + "harborapi.models.base": [], } # type: dict[str, list[str]] # module: list[import_name] @@ -321,6 +321,8 @@ def add_imports(tree: ast.Module) -> ast.Module: if isinstance(node, ast.ImportFrom): if node.module in ADD_IMPORTS: node_names = [node.name for node in node.names] + if not node.module: + continue for name in ADD_IMPORTS[node.module]: if name not in node_names: node.names.append(ast.alias(name=name, asname=None)) @@ -328,9 +330,12 @@ def add_imports(tree: ast.Module) -> ast.Module: # Remaining imports that were not appended to existing imports for name in set(ADD_IMPORTS) - added: names = [ast.alias(name=name, asname=None) for name in ADD_IMPORTS[name]] + if not names: + continue # no imports to add from module + + # Inserts `from module import name1, name2, ...` tree.body.insert(1, ast.ImportFrom(module=name, names=names, level=0)) # Assume from __future__ is at the top of the file - # Regardless, we automatically sort imports afterwards anyway return tree @@ -365,107 +370,6 @@ class Foo(RootModel): # lacks parametrized base raise ValueError(f"Class definition '{classdef.name}' does not have a root field.") -def fix_rootmodel_base(classdef: ast.ClassDef) -> None: - """Adds the appropriate subclass as the base of a RootModel type. - - Depending on the root value annotation, the function will assign one of two - bases: - - - `StrDictRootModel` if the root value annotation is `Optional[Dict[str, T]]` - - `StrRootModel` if the root value annotation is `str` - - As of goharbor/harbor@5c02fd8, there are no models encapsulating dicts - whose root value type is `Dict[str, T]`; they are always `Optional[Dict[str, T]]`. 
- - Examples - -------- - - ``` - class Foo(RootModel): - root: Optional[Dict[str, str]] - # -> - class Foo(StrDictRootModel[str]): - root: Optional[Dict[str, str]] - ``` - - Also works for str root models: - ``` - class Bar(RootModel): - root: str - # -> - class Bar(StrRootModel): - root: str - ``` - - See also - -------- - `harborapi.models.base.StrRootModel` - `harborapi.models.base.StrDictRootModel` - """ - # Determine what sort of root model we are dealing with - root_type = get_rootmodel_type(classdef) - base = "RootModel" - vt = "Any" - # Root type is a string annotation - # e.g. root: "Dict[str, str]" - if isinstance(root_type, ast.Name): - # HACK: this will break for root models with more complicated signatures, - # but we are not dealing with that right now - if "Dict[str" in root_type.id: - base = "StrDictRootModel" - # HACK: create Python statement with the type annotation - # and then parse it to get the AST - # Say our annotation is `Dict[str, str]`, we want to pass - # `str` as the type parameter to `StrDictRootModel`. - annotation = ast.parse(f"var: {root_type.id}").body[0].annotation - # If the annotation is Optional[Dict[str, str]], then we need - # to go through one more slice to get the value type - # i.e. Optional[Dict[str, str]] -> Dict[str, str] -> str - if "Optional" in root_type.id: - slc = annotation.slice.slice - else: - slc = annotation.slice - vt = slc.elts[1].id # (KT, VT) - elif root_type.id == "str": - base = "StrRootModel" - # Root type is an annotation with a subscript, e.g. Dict[str, T] - # or Optional[Dict[str, T]] - elif isinstance(root_type, ast.Subscript): - # Inspect the AST to determine the type of root model - # If annotation is wrapped in Optional[], we need to get the inner slice - if getattr(root_type.value, "id", None) == "Optional": - inner_root_type = getattr(root_type, "slice") - else: - inner_root_type = root_type - if getattr(inner_root_type.value, "id", None) == "Dict": - base = "StrDictRootModel" - vt = inner_root_type.slice.elts[1].id # (KT, VT) - # TODO: handle list root types - else: - raise ValueError(f"Invalid root type: {root_type}") - - # Construct the node for the class's new base - if base == "StrDictRootModel": - classdef.bases = [ - ast.Subscript( - value=ast.Name(id="StrDictRootModel"), - slice=ast.Index(ast.Name(id=vt)), - ) - ] - else: - # Otherwise, we use the base we determined earlier - classdef.bases = [ast.Name(id=base)] - - -def fix_rootmodels(tree: ast.Module, classdefs: dict[str, ast.ClassDef]) -> ast.Module: - for node in ast.walk(tree): - if not isinstance(node, ast.ClassDef): - continue - if _get_class_base_name(node) == "RootModel": - fix_rootmodel_base(node) - return tree - - def insert_or_update_classdefs( tree: ast.Module, classdefs: dict[str, ast.ClassDef] ) -> ast.Module: @@ -554,7 +458,6 @@ def add_fragments(tree: ast.Module, directory: Path) -> ast.Module: statements["stmts"].extend(stmts["stmts"]) new_tree = insert_or_update_classdefs(tree, classdefs) new_tree = insert_statements(new_tree, statements) - new_tree = fix_rootmodels(new_tree, classdefs) return new_tree diff --git a/harborapi/models/base.py b/harborapi/models/base.py index 796b7680..068bbe84 100644 --- a/harborapi/models/base.py +++ b/harborapi/models/base.py @@ -9,10 +9,13 @@ from __future__ import annotations from typing import Any -from typing import Dict +from typing import Generator from typing import Iterable +from typing import Mapping from typing import Optional +from typing import Sequence from typing import Set +from typing import 
Tuple from typing import Type from typing import TypeVar @@ -54,40 +57,33 @@ class RootModel(PydanticRootModel[T]): model_config = ConfigDict(validate_assignment=True) + root: T + def __bool__(self) -> bool: return bool(self.root) - -class StrDictRootModel(RootModel[Optional[Dict[str, T]]]): - # All JSON keys are string, so the key type does need to be - # parameterized with a generic type. - - def __iter__(self) -> Any: + def __iter__(self) -> Generator[Tuple[Any, Any], None, None]: # TODO: fix API spec so root types can never be none, only # the empty container. That way we can always iterate and access # without checking for None. - if self.root is not None: - return iter(self.root) - return iter([]) + if isinstance(self.root, Iterable): + yield from iter(self.root) # pyright: ignore[reportUnknownArgumentType, reportUnknownMemberType] + else: + yield from iter([]) - def __getitem__(self, item: str) -> Optional[T]: - if self.root is not None: - return self.root[item] + def __getitem__(self, key: Any) -> Any: + if isinstance(self.root, (Mapping, Sequence)): + return self.root[key] # pyright: ignore[reportUnknownVariableType, reportUnknownMemberType] return None # Enables dot access to dict keys for backwards compatibility def __getattr__(self, attr: str) -> T: try: - return self.root[attr] # type: ignore # forego None check and let KeyError raise - except (KeyError, TypeError): + return self.root[attr] # pyright: ignore[reportUnknownVariableType, reportIndexIssue] + except (KeyError, TypeError, IndexError): raise AttributeError(f"{self.__class__.__name__} has no attribute {attr}") -class StrRootModel(RootModel[str]): - def __str__(self) -> str: - return str(self.root) - - class BaseModel(PydanticBaseModel): model_config = ConfigDict(extra="allow", validate_assignment=True, strict=False) diff --git a/harborapi/models/models.py b/harborapi/models/models.py index dd0bba62..769db0b7 100644 --- a/harborapi/models/models.py +++ b/harborapi/models/models.py @@ -1,5 +1,6 @@ from __future__ import annotations +from datetime import datetime from enum import Enum from typing import Any from typing import Dict @@ -10,15 +11,12 @@ from typing import Union from pydantic import AnyUrl -from pydantic import AwareDatetime from pydantic import Field +from pydantic import RootModel from pydantic import ValidationInfo from pydantic import field_validator from pydantic import model_validator -from harborapi.models.base import StrDictRootModel -from harborapi.models.base import StrRootModel - from ..log import logger from .base import BaseModel from .scanner import Severity @@ -72,10 +70,10 @@ class Repository(BaseModel): default=None, description="The count that the artifact inside the repository pulled", ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the repository" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the repository" ) @@ -131,10 +129,10 @@ class Tag(BaseModel): default=None, description="The ID of the artifact that the tag attached to" ) name: Optional[str] = Field(default=None, description="The name of the tag") - push_time: Optional[AwareDatetime] = Field( + push_time: Optional[datetime] = Field( default=None, description="The push time of the tag" ) - pull_time: Optional[AwareDatetime] = Field( + pull_time: Optional[datetime] = Field( default=None, description="The latest pull time of the tag" ) immutable: 
Optional[bool] = Field( @@ -142,11 +140,11 @@ class Tag(BaseModel): ) -class ExtraAttrs(StrDictRootModel[Any]): +class ExtraAttrs(RootModel[Optional[Dict[str, Dict[str, Any]]]]): root: Optional[Dict[str, Any]] = None -class Annotations(StrDictRootModel[str]): +class Annotations(RootModel[Optional[Dict[str, str]]]): root: Optional[Dict[str, str]] = None @@ -188,10 +186,10 @@ class Label(BaseModel): project_id: Optional[int] = Field( default=None, description="The ID of project that the label belongs to" ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time the label" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the label" ) @@ -215,12 +213,12 @@ class SBOMOverview(BaseModel): The generate SBOM overview information """ - start_time: Optional[AwareDatetime] = Field( + start_time: Optional[datetime] = Field( default=None, description="The start time of the generating sbom report task", examples=["2006-01-02T14:04:05Z"], ) - end_time: Optional[AwareDatetime] = Field( + end_time: Optional[datetime] = Field( default=None, description="The end time of the generating sbom report task", examples=["2006-01-02T15:04:05Z"], @@ -314,7 +312,7 @@ class AuditLog(BaseModel): default=None, description="The operation against the repository in this log entry.", ) - op_time: Optional[AwareDatetime] = Field( + op_time: Optional[datetime] = Field( default=None, description="The time when this operation is triggered.", examples=["2006-01-02T15:04:05Z"], @@ -387,10 +385,13 @@ class PreheatPolicy(BaseModel): enabled: Optional[bool] = Field( default=None, description="Whether the preheat policy enabled" ) - creation_time: Optional[AwareDatetime] = Field( + scope: Optional[str] = Field( + default=None, description="The scope of preheat policy" + ) + creation_time: Optional[datetime] = Field( default=None, description="The Create Time of preheat policy" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The Update Time of preheat policy" ) @@ -617,10 +618,10 @@ class Registry(BaseModel): status: Optional[str] = Field( default=None, description="Health status of the registry." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The create time of the policy." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the policy." 
) @@ -699,7 +700,7 @@ class FilterStyle(BaseModel): values: Optional[List[str]] = Field(default=None, description="The filter values") -class ResourceList(StrDictRootModel[int]): +class ResourceList(RootModel[Optional[Dict[str, int]]]): root: Optional[Dict[str, int]] = None @@ -714,10 +715,8 @@ class ReplicationExecution(BaseModel): default=None, description="The status of the execution" ) trigger: Optional[str] = Field(default=None, description="The trigger mode") - start_time: Optional[AwareDatetime] = Field( - default=None, description="The start time" - ) - end_time: Optional[AwareDatetime] = Field(default=None, description="The end time") + start_time: Optional[datetime] = Field(default=None, description="The start time") + end_time: Optional[datetime] = Field(default=None, description="The end time") status_text: Optional[str] = Field(default=None, description="The status text") total: Optional[int] = Field( default=None, description="The total count of all executions" @@ -766,10 +765,10 @@ class ReplicationTask(BaseModel): dst_resource: Optional[str] = Field( default=None, description="The destination resource that the task operates" ) - start_time: Optional[AwareDatetime] = Field( + start_time: Optional[datetime] = Field( default=None, description="The start time of the task" ) - end_time: Optional[AwareDatetime] = Field( + end_time: Optional[datetime] = Field( default=None, description="The end time of the task" ) @@ -780,7 +779,7 @@ class RobotCreated(BaseModel): id: Optional[int] = Field(default=None, description="The ID of the robot") name: Optional[str] = Field(default=None, description="The name of the robot") secret: Optional[str] = Field(default=None, description="The secret of the robot") - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the robot." ) expires_at: Optional[int] = Field( @@ -882,7 +881,7 @@ class ScheduleObj(BaseModel): cron: Optional[str] = Field( default=None, description="A cron expression, a time-based job scheduler." ) - next_scheduled_time: Optional[AwareDatetime] = Field( + next_scheduled_time: Optional[datetime] = Field( default=None, description="The next time to schedule to run the job." 
) @@ -983,7 +982,7 @@ class QuotaUpdateReq(BaseModel): hard: Optional[ResourceList] = None -class QuotaRefObject(StrDictRootModel[Any]): +class QuotaRefObject(RootModel[Optional[Dict[str, Dict[str, Any]]]]): root: Optional[Dict[str, Any]] = None @@ -994,10 +993,10 @@ class Quota(BaseModel): ref: Optional[QuotaRefObject] = None hard: Optional[ResourceList] = None used: Optional[ResourceList] = None - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="the creation time of the quota" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="the update time of the quota" ) @@ -1049,10 +1048,10 @@ class ScannerRegistration(BaseModel): default=False, description="Indicate whether use internal registry addr for the scanner to pull content or not", ) - create_time: Optional[AwareDatetime] = Field( + create_time: Optional[datetime] = Field( default=None, description="The creation time of this registration" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of this registration" ) adapter: Optional[str] = Field( @@ -1266,19 +1265,19 @@ class UserGroupSearchItem(BaseModel): ) -class EventType(StrRootModel): +class EventType(RootModel[str]): root: str = Field( ..., description="Webhook supported event type.", examples=["PULL_ARTIFACT"] ) -class NotifyType(StrRootModel): +class NotifyType(RootModel[str]): root: str = Field( ..., description="Webhook supported notify type.", examples=["http"] ) -class PayloadFormatType(StrRootModel): +class PayloadFormatType(RootModel[str]): root: str = Field( ..., description="The type of webhook paylod format.", examples=["CloudEvents"] ) @@ -1327,10 +1326,10 @@ class WebhookPolicy(BaseModel): creator: Optional[str] = Field( default=None, description="The creator of the webhook policy." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The create time of the webhook policy." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the webhook policy." ) enabled: Optional[bool] = Field( @@ -1350,10 +1349,10 @@ class WebhookLastTrigger(BaseModel): enabled: Optional[bool] = Field( default=None, description="Whether or not the webhook policy enabled." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of webhook policy." ) - last_trigger_time: Optional[AwareDatetime] = Field( + last_trigger_time: Optional[datetime] = Field( default=None, description="The last trigger time of webhook policy." ) @@ -1373,10 +1372,10 @@ class WebhookJob(BaseModel): job_detail: Optional[str] = Field( default=None, description="The webhook job notify detailed data." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The webhook job creation time." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The webhook job update time." 
) @@ -1673,10 +1672,10 @@ class OIDCUserInfo(BaseModel): default=None, description="the secret of the OIDC user that can be used for CLI to push/pull artifacts", ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the OIDC user info record." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the OIDC user info record." ) @@ -1693,10 +1692,10 @@ class UserResp(BaseModel): description="indicate the admin privilege is grant by authenticator (LDAP), is always false unless it is the current login user", ) oidc_user_meta: Optional[OIDCUserInfo] = None - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the user." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the user." ) @@ -1813,7 +1812,7 @@ class Accessory(BaseModel): default=None, description="The artifact size of the accessory" ) icon: Optional[str] = Field(default=None, description="The icon of the accessory") - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the accessory" ) @@ -1866,10 +1865,8 @@ class ScanDataExportExecution(BaseModel): default=None, description="The status of the execution" ) trigger: Optional[str] = Field(default=None, description="The trigger mode") - start_time: Optional[AwareDatetime] = Field( - default=None, description="The start time" - ) - end_time: Optional[AwareDatetime] = Field(default=None, description="The end time") + start_time: Optional[datetime] = Field(default=None, description="The start time") + end_time: Optional[datetime] = Field(default=None, description="The end time") status_text: Optional[str] = Field(default=None, description="The status text") user_name: Optional[str] = Field( default=None, description="The name of the user triggering the job" @@ -1895,10 +1892,10 @@ class WorkerPool(BaseModel): worker_pool_id: Optional[str] = Field( default=None, description="the id of the worker pool" ) - start_at: Optional[AwareDatetime] = Field( + start_at: Optional[datetime] = Field( default=None, description="The start time of the work pool" ) - heartbeat_at: Optional[AwareDatetime] = Field( + heartbeat_at: Optional[datetime] = Field( default=None, description="The heartbeat time of the work pool" ) concurrency: Optional[int] = Field( @@ -1920,13 +1917,13 @@ class Worker(BaseModel): job_id: Optional[str] = Field( default=None, description="the id of the running job in the worker" ) - start_at: Optional[AwareDatetime] = Field( + start_at: Optional[datetime] = Field( default=None, description="The start time of the worker" ) check_in: Optional[str] = Field( default=None, description="the checkin of the running job in the worker" ) - checkin_at: Optional[AwareDatetime] = Field( + checkin_at: Optional[datetime] = Field( default=None, description="The checkin time of the worker" ) @@ -1978,7 +1975,7 @@ class ScheduleTask(BaseModel): cron: Optional[str] = Field( default=None, description="the cron of the current schedule task" ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="the update time of the schedule task" ) @@ -2092,7 +2089,7 @@ class Errors(BaseModel): errors: Optional[List[Error]] 
= None -class AdditionLinks(StrDictRootModel[AdditionLink]): +class AdditionLinks(RootModel[Optional[Dict[str, AdditionLink]]]): root: Optional[Dict[str, AdditionLink]] = None @@ -2133,12 +2130,12 @@ class NativeReportSummary(BaseModel): examples=[300], ) summary: Optional[VulnerabilitySummary] = None - start_time: Optional[AwareDatetime] = Field( + start_time: Optional[datetime] = Field( default=None, description="The start time of the scan process that generating report", examples=["2006-01-02T14:04:05Z"], ) - end_time: Optional[AwareDatetime] = Field( + end_time: Optional[datetime] = Field( default=None, description="The end time of the scan process that generating report", examples=["2006-01-02T15:04:05Z"], @@ -2182,10 +2179,10 @@ class CVEAllowlist(BaseModel): description="the time for expiration of the allowlist, in the form of seconds since epoch. This is an optional attribute, if it's not set the CVE allowlist does not expire.", ) items: Optional[List[CVEAllowlistItem]] = None - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the allowlist." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the allowlist." ) @@ -2239,7 +2236,7 @@ class GeneralInfo(BaseModel): '{"closable":true,"message":"your banner message content","type":"warning","fromDate":"06/19/2023","toDate":"06/21/2023"}' ], ) - current_time: Optional[AwareDatetime] = Field( + current_time: Optional[datetime] = Field( default=None, description="The current time of the server." ) registry_url: Optional[str] = Field( @@ -2303,10 +2300,10 @@ class GCHistory(BaseModel): schedule: Optional[ScheduleObj] = None job_status: Optional[str] = Field(default=None, description="the status of gc job.") deleted: Optional[bool] = Field(default=None, description="if gc job was deleted.") - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="the creation time of gc job." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="the update time of gc job." ) @@ -2329,10 +2326,10 @@ class ExecHistory(BaseModel): deleted: Optional[bool] = Field( default=None, description="if purge job was deleted." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="the creation time of purge job." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="the update time of purge job." ) @@ -2342,10 +2339,10 @@ class Schedule(BaseModel): status: Optional[str] = Field( default=None, description="The status of the schedule." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="the creation time of the schedule." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="the update time of the schedule." 
) schedule: Optional[ScheduleObj] = None @@ -2395,7 +2392,9 @@ class SupportedWebhookEventTypes(BaseModel): payload_formats: Optional[List[PayloadFormat]] = None -class InternalConfigurationsResponse(StrDictRootModel[InternalConfigurationValue]): +class InternalConfigurationsResponse( + RootModel[Optional[Dict[str, InternalConfigurationValue]]] +): root: Optional[Dict[str, InternalConfigurationValue]] = None @@ -2516,7 +2515,7 @@ class SecuritySummary(BaseModel): ) -class ScanOverview(StrDictRootModel[NativeReportSummary]): +class ScanOverview(RootModel[Optional[Dict[str, NativeReportSummary]]]): """Overview of scan results.""" root: Optional[Dict[str, NativeReportSummary]] = None @@ -2552,10 +2551,10 @@ class Project(BaseModel): default=None, description="The ID of referenced registry when the project is a proxy cache project.", ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the project." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the project." ) deleted: Optional[bool] = Field( @@ -2639,10 +2638,10 @@ class ReplicationPolicy(BaseModel): enabled: Optional[bool] = Field( default=None, description="Whether the policy is enabled or not." ) - creation_time: Optional[AwareDatetime] = Field( + creation_time: Optional[datetime] = Field( default=None, description="The create time of the policy." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the policy." ) speed: Optional[int] = Field(default=None, description="speed limit for each task") @@ -2682,11 +2681,18 @@ class Robot(BaseModel): default=None, description="The expiration date of the robot" ) permissions: Optional[List[RobotPermission]] = None - creator: Optional[str] = Field(default=None, description="The creator of the robot") - creation_time: Optional[AwareDatetime] = Field( + creator_type: Optional[str] = Field( + default=None, + description="The type of the robot creator, like local(harbor_user) or robot.", + ) + creator_ref: Optional[int] = Field( + default=None, + description="The reference of the robot creator, like the id of harbor user.", + ) + creation_time: Optional[datetime] = Field( default=None, description="The creation time of the robot." ) - update_time: Optional[AwareDatetime] = Field( + update_time: Optional[datetime] = Field( default=None, description="The update time of the robot." ) @@ -2777,10 +2783,10 @@ class Artifact(BaseModel): ) size: Optional[int] = Field(default=None, description="The size of the artifact") icon: Optional[str] = Field(default=None, description="The digest of the icon") - push_time: Optional[AwareDatetime] = Field( + push_time: Optional[datetime] = Field( default=None, description="The push time of the artifact" ) - pull_time: Optional[AwareDatetime] = Field( + pull_time: Optional[datetime] = Field( default=None, description="The latest pull time of the artifact" ) extra_attrs: Optional[ExtraAttrs] = None @@ -2808,7 +2814,7 @@ def scan(self) -> Optional[NativeReportSummary]: return None -class RegistryProviders(StrDictRootModel[RegistryProviderInfo]): +class RegistryProviders(RootModel[Dict[str, RegistryProviderInfo]]): root: Dict[str, RegistryProviderInfo] = Field( default={}, description="The registry providers. 
Each key is the name of the registry provider.", diff --git a/harborapi/models/scanner.py b/harborapi/models/scanner.py index 2330cc0e..00c86ae6 100644 --- a/harborapi/models/scanner.py +++ b/harborapi/models/scanner.py @@ -17,12 +17,10 @@ from pydantic import AwareDatetime from pydantic import ConfigDict from pydantic import Field +from pydantic import RootModel from pydantic import ValidationInfo from pydantic import field_validator -from harborapi.models.base import StrDictRootModel -from harborapi.models.base import StrRootModel - from ..log import logger from ..version import SemVer from ..version import get_semver @@ -52,7 +50,7 @@ def semver(self) -> SemVer: return get_semver(self.version) -class ScannerProperties(StrDictRootModel[str]): +class ScannerProperties(RootModel[Optional[Dict[str, str]]]): """ A set of custom properties that can further describe capabilities of a given scanner. @@ -93,7 +91,7 @@ class ScannerCapability(BaseModel): ) -class ScanRequestId(StrRootModel): +class ScanRequestId(RootModel[str]): root: str = Field( ..., description="A unique identifier returned by the [/scan](#/operation/AcceptScanRequest] operations. The format of the\nidentifier is not imposed but it should be unique enough to prevent collisons when polling for scan reports.\n", diff --git a/pyproject.toml b/pyproject.toml index a1c2a2d5..89a7cdaa 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -182,6 +182,7 @@ asyncio_mode = "auto" [tool.datamodel-codegen] base-class = ".base.BaseModel" +class-name = ".base.RootModel" field-constraints = true snake-case-field = true strip-default-none = false
Error with latest Pydantic Pydantic 2.10.1 seems to break Harborapi. ```console >>> from harborapi import HarborClient Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/harborapi/__init__.py", line 4, in <module> from . import auth File "/usr/local/lib/python3.10/dist-packages/harborapi/auth.py", line 10, in <module> from harborapi.models.models import Robot File "/usr/local/lib/python3.10/dist-packages/harborapi/models/__init__.py", line 3, in <module> from . import scanner File "/usr/local/lib/python3.10/dist-packages/harborapi/models/scanner.py", line 55, in <module> class ScannerProperties(StrDictRootModel[str]): File "/usr/local/lib/python3.10/dist-packages/pydantic/main.py", line 811, in __class_getitem__ submodel = _generics.create_generic_submodel(model_name, origin, args, params) File "/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generics.py", line 137, in create_generic_submodel created_model = meta( File "/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_model_construction.py", line 137, in __new__ cls = cast('type[BaseModel]', super().__new__(mcs, cls_name, bases, namespace, **kwargs)) File "/usr/lib/python3.10/abc.py", line 106, in __new__ cls = super().__new__(mcls, name, bases, namespace, **kwargs) TypeError: mro() returned a non-class ('PydanticRecursiveRef') ``` Pydantic 2.9.2 works though! ```console >>> from harborapi import HarborClient >>> ```
I'll look into it ASAP.
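A minimal sketch, assuming pydantic v2, of the pattern the patch above moves to: each model parameterizes pydantic's `RootModel` directly instead of going through a custom generic base like `StrDictRootModel[T]`, which sidesteps the generic-submodel construction that raises the MRO error on Pydantic 2.10.1. The property key used here is illustrative.

```python
from typing import Dict, Optional

from pydantic import RootModel


# Subclass a concrete RootModel parameterization directly (as the patch does),
# rather than a project-defined generic base that gets parameterized at class
# creation time.
class ScannerProperties(RootModel[Optional[Dict[str, str]]]):
    root: Optional[Dict[str, str]] = None


props = ScannerProperties.model_validate({"harbor.scanner-adapter/scanner-type": "os"})
print(props.root)  # {'harbor.scanner-adapter/scanner-type': 'os'}
```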
2024-11-25T13:33:20
0.0
[]
[]
unioslo/harborapi
unioslo__harborapi-38
13f0e762be4d8fa43864d3252fa4383e6e666ea1
diff --git a/harborapi/utils.py b/harborapi/utils.py index 389e2b5d..1710f5c8 100644 --- a/harborapi/utils.py +++ b/harborapi/utils.py @@ -184,7 +184,7 @@ def get_basicauth(username: str, secret: str) -> SecretStr: # Finds the next url in a pagination header (e.g. Link: </api/v2.0/endpoint?page=X&page_size=Y>; rel="next") # Ripped from: https://docs.github.com/en/rest/guides/using-pagination-in-the-rest-api?apiVersion=2022-11-28#example-creating-a-pagination-method -PAGINATION_NEXT_PATTERN = re.compile(r"(?<=<)([\S]*)(?=>; rel=\"next\")") +PAGINATION_NEXT_PATTERN = re.compile('<([^>]+)>; rel="next"') # Finds the API path in a URL (e.g. /api/v2.0/) API_PATH_PATTERN = re.compile(r"\/api\/v[0-9]\.[0-9]{1,2}") @@ -210,7 +210,7 @@ def parse_pagination_url(url: str, strip: bool = True) -> Optional[str]: if not match: return None - m = match.group(0) + m = match.group(1) # exclude rel="next" from the match if not strip: return m
Fails to parse next pagination link when query contains spaces A `"Link"` header with the value ``` '</api/v2.0/audit-logs?page=2&page_size=10&q=operation={push pull},resource_type=artifact>; rel="next"' ``` fails due to the presence of a space in the `operation` query param value. The regex that was ripped from GitHub's own API documentation does not account for the presence of a space in the query itself. The regex must not split on whitespace inside the `<>` delimiters.
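A minimal repro of the report, using the header value from the issue and the two regexes from the patch above; no Harbor instance is needed.

```python
import re

link = '</api/v2.0/audit-logs?page=2&page_size=10&q=operation={push pull},resource_type=artifact>; rel="next"'

# Old pattern: [\S]* cannot cross the space inside the query, and the lookahead
# for '>; rel="next"' never matches before that space, so search() returns None.
old = re.compile(r"(?<=<)([\S]*)(?=>; rel=\"next\")")
# New pattern: capture everything up to the closing ">" instead.
new = re.compile('<([^>]+)>; rel="next"')

print(old.search(link))           # None -> pagination silently stops
print(new.search(link).group(1))  # /api/v2.0/audit-logs?page=2&page_size=10&q=...
```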
2023-04-03T09:53:25
0.0
[]
[]
ewels/rich-click
ewels__rich-click-176
e0add165aec74edda70cfd395802f4e3d7ba2a2e
diff --git a/foo.py b/foo.py new file mode 100644 index 00000000..e69de29b diff --git a/src/rich_click/cli.py b/src/rich_click/cli.py index 1aba11d2..586ef619 100644 --- a/src/rich_click/cli.py +++ b/src/rich_click/cli.py @@ -110,6 +110,14 @@ def convert( type=click.Choice(["html", "svg"], case_sensitive=False), help="Optionally render help text as HTML or SVG. By default, help text is rendered normally.", ) [email protected]( + "--suppress-warnings/--do-not-suppress-warnings", + is_flag=True, + default=False, + hidden=True, + help="Suppress warnings when there are conflicting entry_points." + " (This option is hidden because this situation is extremely rare).", +) @click.option( # The rich-click CLI uses a special implementation of --help, # which is aware of the --rich-config object. @@ -132,6 +140,7 @@ def main( ctx: RichContext, script_and_args: List[str], output: Literal[None, "html", "svg"], + suppress_warnings: bool, rich_config: Optional[RichHelpConfiguration], show_help: bool, ) -> None: @@ -164,20 +173,50 @@ def main( click.echo(ctx.get_help(), color=ctx.color) ctx.exit() - sys.path.append(".") - script, *args = script_and_args - _from_entry_points = False - - scripts = {script.name: script for script in entry_points(group="console_scripts")} - if script in scripts: - module_path, function_name = scripts[script].value.split(":", 1) - _from_entry_points = True - elif ":" in script: + _selected: List[str] = [] + module_path = "" + function_name = "" + + for s in entry_points(group="console_scripts"): + if script == s.name: + if not _selected: + module_path, function_name = s.value.split(":", 1) + if suppress_warnings: + break + if s.value not in _selected: + _selected.append(s.value) + + if len(_selected) > 1 and not suppress_warnings: + # This is an extremely rare edge case that comes up when the user sets the PYTHONPATH themselves. + if script in sys.argv: + _args = sys.argv.copy() + _args[_args.index(script)] = f"{module_path}:{function_name}" + else: + _args = ["rich-click", f"{module_path}:{function_name}"] + + click.echo( + click.style( + f"WARNING: Multiple entry_points correspond with script '{script}': {_selected!r}." + "\nThis can happen when an 'egg-info' directory exists, you're using a virtualenv," + " and you have set a custom PYTHONPATH." + f"\n\nThe selected script is '{module_path}:{function_name}', which is being executed now." + "\n\nIt is safer and recommended that you specify the MODULE:CLICK_COMMAND" + f" ('{module_path}:{function_name}') instead of the script ('{script}'), like this:" + f"\n\n>>> rich-click {' '.join(_args)}" + "\n\nAlternatively, you can pass --suppress-warnings to the rich-click CLI," + " which will disable this message.", + fg="red", + ), + file=sys.stderr, + ) + + if ":" in script and not module_path: # the path to a function was passed module_path, function_name = script.split(":", 1) - else: + + if not module_path: raise click.ClickException(f"No such script: {script_and_args[0]}") prog = module_path.split(".", 1)[0]
`rich-click` CLI for version `1.8.0dev1` breaks backwards compatibility in edge case Long story, not worth describing in detail since nobody reads this other than me. But I have a weird setup where I alias `dbt` to run `cli dbt`, and `cli dbt` is a Click CLI that runs `separate_venv/bin/rich-click dbt`, except when it does this it tries to call `cli dbt` (newly introduced behavior, and incorrect) instead of `separate_venv/bin/dbt` (old behavior, and correct).
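A hedged sketch of the script-name resolution that the patch above makes duplicate-aware; `resolve` is an illustrative helper, not rich-click's actual API, and `entry_points(group=...)` assumes Python >= 3.10 (or the `importlib_metadata` backport).

```python
from importlib.metadata import entry_points
from typing import List, Optional


def resolve(script: str) -> Optional[str]:
    # Collect every console_scripts entry point whose name matches the script.
    matches: List[str] = [
        ep.value for ep in entry_points(group="console_scripts") if ep.name == script
    ]
    if len(matches) > 1:
        # The rare PYTHONPATH/egg-info case the patch warns about.
        print(f"WARNING: multiple entry_points correspond with script {script!r}: {matches}")
    return matches[0] if matches else None  # "module.path:click_command"


print(resolve("pip"))  # e.g. "pip._internal.cli.main:main" in a typical venv
```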
2024-04-10T01:42:18
0.0
[]
[]
ewels/rich-click
ewels__rich-click-152
8aba9a75f0321c53f039831029bcf931360fc8f7
diff --git a/CHANGELOG.md b/CHANGELOG.md index afa7bbe1..057f0b3c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,8 +1,16 @@ # Changelog: rich-click +## Version 1.7.3 + +- Fix false deprecation warning. + +## Version 1.7.2 + +- Add support for rich formatting in epilog text [[#146](https://github.com/ewels/rich-click/pull/146)] + ## Version 1.7.1 -- Fix bug with `rich-click` CLI not working with Python 3.12. [#141](https://github.com/ewels/rich-click/issues/141) +- Fix bug with `rich-click` CLI not working with Python 3.12. [[#141](https://github.com/ewels/rich-click/issues/141)] - Fix compatibility issue with `dbt-core` CLI. [[#140](https://github.com/ewels/rich-click/issues/140)] ## Version 1.7.0 diff --git a/src/rich_click/__init__.py b/src/rich_click/__init__.py index 437ecdfb..62a23a69 100644 --- a/src/rich_click/__init__.py +++ b/src/rich_click/__init__.py @@ -6,7 +6,7 @@ customisation required. """ -__version__ = "1.7.1" +__version__ = "1.7.3" # Import the entire click API here. # We need to manually import these instead of `from click import *` to force mypy to recognize a few type annotation overrides for the rich_click decorators. diff --git a/src/rich_click/cli.py b/src/rich_click/cli.py index 963fc593..a7041a99 100644 --- a/src/rich_click/cli.py +++ b/src/rich_click/cli.py @@ -9,7 +9,7 @@ from importlib import metadata # type: ignore[import,unused-ignore] except ImportError: # Python < 3.8 - import importlib_metadata as metadata # type: ignore[no-redef,import-not-found] + import importlib_metadata as metadata # type: ignore[no-redef,import-not-found,unused-ignore] import click from rich.console import Console @@ -17,9 +17,8 @@ from rich.panel import Panel from rich.text import Text -from rich_click import command as rich_command -from rich_click import group as rich_group -from rich_click import RichCommand, RichCommandCollection, RichGroup, RichMultiCommand +from rich_click.decorators import command as rich_command +from rich_click.decorators import group as rich_group from rich_click.rich_click import ( ALIGN_ERRORS_PANEL, ERRORS_PANEL_TITLE, @@ -29,6 +28,7 @@ STYLE_USAGE, STYLE_USAGE_COMMAND, ) +from rich_click.rich_command import RichCommand, RichCommandCollection, RichGroup, RichMultiCommand console = Console() @@ -70,7 +70,7 @@ def patch() -> None: click.Command = RichCommand # type: ignore[misc] click.CommandCollection = RichCommandCollection # type: ignore[misc] if "MultiCommand" in dir(click): - click.MultiCommand = RichMultiCommand # type: ignore[assignment,misc] + click.MultiCommand = RichMultiCommand # type: ignore[assignment,misc,unused-ignore] def entry_points(*, group: str) -> "metadata.EntryPoints": # type: ignore[name-defined]
Swap `from rich_click import ...` in `rich_click/cli.py` to specify modules. Replace this: ```python from rich_click import command as rich_command from rich_click import group as rich_group from rich_click import RichCommand, RichCommandCollection, RichGroup, RichMultiCommand ``` with this: ```python from rich_click.decorators import command as rich_command from rich_click.decorators import group as rich_group from rich_click.rich_command import RichCommand, RichCommandCollection, RichGroup, RichMultiCommand ``` The former is causing an annoying deprecation error. This needs to be implemented in the 1.7.x branch, and then released+patched. I will get to this later today when I am at home and can access my personal laptop.
2024-01-05T15:50:56
0.0
[]
[]
ewels/rich-click
ewels__rich-click-123
6bd4eac22522e2adf8bb5604910a0483bc71e54c
diff --git a/CHANGELOG.md b/CHANGELOG.md index 5c1eabac..7bc956dc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -20,9 +20,11 @@ This release comes after merging a huge pull-request from [@BrutalSimplicity](ht This PR closes a number of issues: - [#25](https://github.com/ewels/rich-click/issues/25): Add tests! +- [#38](https://github.com/ewels/rich-click/issues/38): Support `click.MultiCommand` - [#90](https://github.com/ewels/rich-click/issues/90): `click.ClickException` should output to `stderr` - [#88](https://github.com/ewels/rich-click/issues/88): Rich Click breaks contract of Click's `format_help` and its callers - [#18](https://github.com/ewels/rich-click/issues/18): Options inherited from context settings aren't applied +- [#114](https://github.com/ewels/rich-click/issues/114): `ctx.exit(exit_code)` not showing nonzero exit codes. In addition: diff --git a/src/rich_click/__init__.py b/src/rich_click/__init__.py index 05bafb6f..147c2bc3 100644 --- a/src/rich_click/__init__.py +++ b/src/rich_click/__init__.py @@ -19,16 +19,15 @@ from . import rich_click # noqa: F401 from rich_click._compat_click import CLICK_IS_BEFORE_VERSION_8X as _CLICK_IS_BEFORE_VERSION_8X -from rich_click.rich_command import RichCommand +from rich_click.rich_command import RichBaseCommand, RichCommand, RichGroup, RichMultiCommand # noqa: F401 from rich_click.rich_context import RichContext -from rich_click.rich_group import RichGroup from rich_click.rich_help_configuration import RichHelpConfiguration # MyPy does not like star imports. Therefore when we are type checking, we import each individual module # from click here. This way MyPy will recognize the import and not throw any errors. Furthermore, because of # the TYPE_CHECKING check, it does not influence the start routine at all. 
if TYPE_CHECKING: - from click import argument, Choice, option, Path, version_option # noqa: F401 + from click import argument, Choice, option, pass_context, Path, version_option # noqa: F401 __all__ = [ "argument", @@ -41,6 +40,7 @@ "rich_config", "RichContext", "RichHelpConfiguration", + "pass_context", ] diff --git a/src/rich_click/cli.py b/src/rich_click/cli.py index 2673725f..c40b3d24 100644 --- a/src/rich_click/cli.py +++ b/src/rich_click/cli.py @@ -19,7 +19,7 @@ from rich_click import command as rich_command from rich_click import group as rich_group -from rich_click import RichCommand, RichGroup +from rich_click import RichBaseCommand, RichCommand, RichGroup, RichMultiCommand from rich_click.rich_click import ( ALIGN_ERRORS_PANEL, ERRORS_PANEL_TITLE, @@ -68,6 +68,8 @@ def patch() -> None: click.command = rich_command click.Group = RichGroup click.Command = RichCommand + click.BaseCommand = RichBaseCommand + click.MultiCommand = RichMultiCommand def main(args: Optional[List[str]] = None) -> Any: diff --git a/src/rich_click/rich_command.py b/src/rich_click/rich_command.py index 8cf441dd..c3c85fac 100644 --- a/src/rich_click/rich_command.py +++ b/src/rich_click/rich_command.py @@ -1,32 +1,29 @@ +import errno +import os import sys -from typing import ClassVar, Optional, Type +from functools import wraps +from typing import Any, Callable, cast, ClassVar, Optional, overload, Sequence, TextIO, Type, Union import click +from click.utils import make_str, PacifyFlushWrapper +from rich_click._compat_click import CLICK_IS_BEFORE_VERSION_8X from rich_click.rich_click import rich_abort_error, rich_format_error, rich_format_help from rich_click.rich_context import RichContext from rich_click.rich_help_formatter import RichHelpFormatter -class RichCommand(click.Command): - """Richly formatted click Command. +class RichBaseCommand(click.BaseCommand): + """Richly formatted click BaseCommand. - Inherits click.Command and overrides help and error methods + Inherits click.BaseCommand and overrides help and error methods to print richly formatted output. + + This class can be used as a mixin for other click command objects. """ - standalone_mode = False context_class: ClassVar[Type[RichContext]] = RichContext - - def __init__(self, *args, **kwargs): - """Create Rich Command. - - Accepts same arguments as Click Command. - - Docs reference: https://click.palletsprojects.com/ - """ - super().__init__(*args, **kwargs) - self._formatter: Optional[RichHelpFormatter] = None # type: ignore[annotation-unchecked] + _formatter: Optional[RichHelpFormatter] = None @property def console(self): @@ -45,33 +42,143 @@ def help_config(self): return self.context_settings.get("rich_help_config") @property - def formatter(self): + def formatter(self) -> RichHelpFormatter: """Rich Help Formatter. This is separate instance from the formatter used to display help, but is created from the same `RichHelpConfiguration`. Currently only used for error reporting. 
""" + if self._formatter is None: + self._formatter = RichHelpFormatter(config=self.help_config) return self._formatter - def main(self, *args, standalone_mode: bool = True, **kwargs): - formatter = self._formatter = RichHelpFormatter(config=self.help_config) + @wraps(click.BaseCommand.main) + def main( + self, + args: Optional[Sequence[str]] = None, + prog_name: Optional[str] = None, + complete_var: Optional[str] = None, + standalone_mode: bool = True, + windows_expand_args: bool = True, + **extra: Any, + ) -> Any: + # It's not feasible to use super().main() in this context and retain exact parity in behavior. + # The reason why is explained in a comment in click's source code in the "except Exit as e" block. + + if args is None: + if CLICK_IS_BEFORE_VERSION_8X: + from click.utils import get_os_args + + args = get_os_args() + else: + args = sys.argv[1:] + + if os.name == "nt" and windows_expand_args: + from click.utils import _expand_args + + args = _expand_args(args) + else: + args = list(args) + + if prog_name is None: + if CLICK_IS_BEFORE_VERSION_8X: + prog_name = make_str(os.path.basename(sys.argv[0] if sys.argv else __file__)) + else: + from click.utils import _detect_program_name + + prog_name = _detect_program_name() + + # Process shell completion requests and exit early. + if CLICK_IS_BEFORE_VERSION_8X: + from click.core import _bashcomplete + + _bashcomplete(self, prog_name, complete_var) + else: + self._main_shell_completion(extra, prog_name, complete_var) + try: - rv = super().main(*args, standalone_mode=False, **kwargs) - if not standalone_mode: - return rv - except click.ClickException as e: - rich_format_error(e, formatter) - if not standalone_mode: - raise - sys.stderr.write(formatter.getvalue()) - sys.exit(e.exit_code) + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! + # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt): + click.echo(file=sys.stderr) + raise click.exceptions.Abort() from None + except click.exceptions.ClickException as e: + rich_format_error(e, self.formatter) + if not standalone_mode: + raise + sys.stderr.write(self.formatter.getvalue()) + sys.exit(e.exit_code) + except OSError as e: + if e.errno == errno.EPIPE: + sys.stdout = cast(TextIO, PacifyFlushWrapper(sys.stdout)) + sys.stderr = cast(TextIO, PacifyFlushWrapper(sys.stderr)) + sys.exit(1) + else: + raise + except click.exceptions.Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + return e.exit_code except click.exceptions.Abort: - rich_abort_error(formatter) + rich_abort_error(self.formatter) if not standalone_mode: raise - sys.stderr.write(formatter.getvalue()) + sys.stderr.write(self.formatter.getvalue()) sys.exit(1) def format_help(self, ctx: click.Context, formatter: click.HelpFormatter): rich_format_help(self, ctx, formatter) + + +class RichCommand(RichBaseCommand, click.Command): + """Richly formatted click Command. + + Inherits click.Command and overrides help and error methods + to print richly formatted output. + """ + + +class RichMultiCommand(RichBaseCommand, click.MultiCommand): + """Richly formatted click MultiCommand. 
+ + Inherits click.MultiCommand and overrides help and error methods + to print richly formatted output. + """ + + +class RichGroup(RichBaseCommand, click.Group): + """Richly formatted click Group. + + Inherits click.Group and overrides help and error methods + to print richly formatted output. + """ + + command_class: Type[RichCommand] = RichCommand + group_class = type + + @overload + def command(self, __func: Callable[..., Any]) -> click.Command: + ... + + @overload + def command(self, *args: Any, **kwargs: Any) -> Callable[[Callable[..., Any]], click.Command]: + ... + + def command(self, *args: Any, **kwargs: Any) -> Union[Callable[[Callable[..., Any]], click.Command], click.Command]: + # This method override is required for Click 7.x compatibility. + # (The command_class ClassVar was not added until 8.0.) + kwargs.setdefault("cls", self.command_class) + return super().command(*args, **kwargs) diff --git a/src/rich_click/rich_group.py b/src/rich_click/rich_group.py index 8aac41c3..659f2f28 100644 --- a/src/rich_click/rich_group.py +++ b/src/rich_click/rich_group.py @@ -1,93 +1,11 @@ -import sys -from typing import Any, Callable, Optional, overload, Type, Union +import warnings -import click +from rich_click.rich_command import RichGroup -from rich_click.rich_click import rich_abort_error, rich_format_error, rich_format_help -from rich_click.rich_command import RichCommand -from rich_click.rich_context import RichContext -from rich_click.rich_help_formatter import RichHelpFormatter +warnings.warn( + "RichCommand is moving from rich_click.rich_group to rich_click.rich_command in a future version.", + DeprecationWarning, +) -class RichGroup(click.Group): - """Richly formatted click Group. - - Inherits click.Group and overrides help and error methods - to print richly formatted output. - """ - - context_class: Type[RichContext] = RichContext - command_class: Type[RichCommand] = RichCommand - group_class = type - - def __init__(self, *args, **kwargs): - """Create Rich Group. - - Accepts same arguments as Click Command - - Docs reference: https://click.palletsprojects.com/ - """ - super().__init__(*args, **kwargs) - self._formatter: Optional[RichHelpFormatter] = None # type: ignore[annotation-unchecked] - - @property - def console(self): - """Rich Console. - - This is a separate instance from the help formatter that allows full control of the - console configuration. - - See `rich_config` decorator for how to apply the settings. - """ - return self.context_settings.get("rich_console") - - @property - def help_config(self): - """Rich Help Configuration.""" - return self.context_settings.get("rich_help_config") - - @property - def formatter(self): - """Rich Help Formatter. - - This is separate instance from the formatter used to display help, - but is created from the same `RichHelpConfiguration`. Currently only used - for error reporting. 
- """ - return self._formatter - - def main(self, *args, standalone_mode: bool = True, **kwargs): - formatter = self._formatter = RichHelpFormatter(config=self.help_config) - try: - rv = super().main(*args, standalone_mode=False, **kwargs) - if not standalone_mode: - return rv - except click.ClickException as e: - rich_format_error(e, formatter) - if not standalone_mode: - raise - sys.stderr.write(formatter.getvalue()) - sys.exit(e.exit_code) - except click.exceptions.Abort: - rich_abort_error(formatter) - if not standalone_mode: - raise - sys.stderr.write(formatter.getvalue()) - sys.exit(1) - - def format_help(self, ctx: click.Context, formatter: click.HelpFormatter): - rich_format_help(self, ctx, formatter) - - @overload - def command(self, __func: Callable[..., Any]) -> click.Command: - ... - - @overload - def command(self, *args: Any, **kwargs: Any) -> Callable[[Callable[..., Any]], click.Command]: - ... - - def command(self, *args: Any, **kwargs: Any) -> Union[Callable[[Callable[..., Any]], click.Command], click.Command]: - # This method override is required for Click 7.x compatibility. - # (The command_class ClassVar was not added until 8.0.) - kwargs.setdefault("cls", self.command_class) - return super().command(*args, **kwargs) +__all__ = ["RichGroup"]
Is `click.MultiCommand` supported? Here's a minimal example using `click.MultiCommand` for lazy loading of subcommands. Is this already supported and I just haven't figured out how to use it, or is this not supported yet? ```py import rich_click as click class Cli(click.MultiCommand): def list_commands(self, ctx): return ["foo"] def get_command(self, ctx, name): if name == "foo": from foo import bar return bar raise NotImplementedError(f"The command '{name}' is not implemented.") @click.group(cls=Cli) def cli(): pass ```
Rich-click works by overwriting the default `cls` used in the `@group` decorator, so if you define your own like in this example it will overwrite rich-click and use default click. You can explicitly use the rich-click classes to get around this (see [example](https://github.com/ewels/rich-click/blob/main/examples/02_declarative.py)) but yeah - we don't have a `rich_click.MultiCommand` yet. I guess it should be possible to add support for this though. Need to do something similar to the [`Command` one](https://github.com/ewels/rich-click/blob/main/src/rich_click/rich_command.py).
Any ETA on this? Would love to use rich-click but can't use it without `.MultiCommand`...
No ETA sorry, no. Happy to accept PRs to tackle this though.
According to the [click docs](https://click.palletsprojects.com/en/8.1.x/api/#click.MultiCommand), the click.Command.format_help() method calls .format_usage(), .format_help_text(), .format_options() and .format_epilog() for the actual output. In contrast, [rich_format_help()](https://github.com/ewels/rich-click/blob/8e7523c891e0d3132f3c49e98154945e371d5e9b/src/rich_click/rich_click.py#L392), which is used in [RichCommand](https://github.com/ewels/rich-click/blob/8e7523c891e0d3132f3c49e98154945e371d5e9b/src/rich_click/rich_command.py#L11), appears to be a single block of code. The class click.MultiCommand(Command) overrides .format_options(), which calls the parent's .format_options() and then its own [.format_commands()](https://github.com/pallets/click/blob/d0af32d8e7d78f4fdaf9c3d1559e1ffaac1b2569/src/click/core.py#L1599), which does three things when outputting the subcommands:

1. ignores empty commands (loading function returned None)
2. ignores commands with the .hidden property
3. adds "extra space" between the command name and the short help string

Perhaps 1. and 2. can/should be added to rich_format_help() (unless the logic already exists), but it seems that it works fine on MultiCommands as it is. A simple solution to @nilsvu's problem seems to be overriding the .format_help() of MultiCommand with the same code used in [rich_click.Command](https://github.com/ewels/rich-click/blob/8e7523c891e0d3132f3c49e98154945e371d5e9b/src/rich_click/rich_command.py#L76) (as suggested by @ewels). This solution works fine for me.

```python
class Cli(click.MultiCommand):
    ...

    def format_help(self, ctx: click.Context, formatter: click.HelpFormatter):
        rich_format_help(self, ctx, formatter)
```

Thanks @hsunner, that's very helpful and works well! Sounds like simply subclassing `click.MultiCommand` as `rich_click.MultiCommand` and overriding `format_help` and `main` like for the standard command would do it.
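A condensed sketch of the design the thread converged on and the patch implements: the rich overrides live on a mixin base class, which is then combined with each click command type via multiple inheritance. Bodies are elided here, and this assumes a click version (< 9) where `MultiCommand` still exists.

```python
import click


class RichBaseCommand(click.BaseCommand):
    # Overrides main()/format_help() in the real patch; elided in this sketch.
    def format_help(self, ctx, formatter):
        ...  # rich_format_help(self, ctx, formatter)


class RichCommand(RichBaseCommand, click.Command):
    """Richly formatted click Command."""


class RichMultiCommand(RichBaseCommand, click.MultiCommand):
    """Richly formatted click MultiCommand."""


class RichGroup(RichBaseCommand, click.Group):
    """Richly formatted click Group."""
```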
2023-10-04T16:53:04
0.0
[]
[]
ewels/rich-click
ewels__rich-click-115
43bd6865250a2f0b2abeeb09bda0f98e7286e4bd
diff --git a/setup.py b/setup.py index ce8ac488..59e7c76c 100644 --- a/setup.py +++ b/setup.py @@ -1,14 +1,8 @@ from setuptools import setup setup( - install_requires=[ - "click>=7", - "rich>=10.7.0", - "importlib-metadata; python_version < '3.8'", - "typing_extensions", - "packaging", - ], + install_requires=["click>=7", "rich>=10.7.0", "importlib-metadata; python_version < '3.8'", "typing_extensions"], extras_require={ - "dev": ["pre-commit", "pytest", "flake8", "flake8-docstrings", "pytest-cov"], + "dev": ["pre-commit", "pytest", "flake8", "flake8-docstrings", "pytest-cov", "packaging"], }, ) diff --git a/src/rich_click/_compat_click.py b/src/rich_click/_compat_click.py index 720c72c9..1fa4c1c2 100644 --- a/src/rich_click/_compat_click.py +++ b/src/rich_click/_compat_click.py @@ -1,7 +1,17 @@ -import click -from packaging import version +try: + from importlib import metadata # type: ignore +except ImportError: + # Python < 3.8 + import importlib_metadata as metadata # type: ignore -CLICK_IS_BEFORE_VERSION_8X = version.parse(click.__version__) < version.parse("8.0.0") + +click_version = metadata.version("click") +_major = int(click_version.split(".")[0]) +_minor = int(click_version.split(".")[1]) + + +CLICK_IS_BEFORE_VERSION_8X = _major < 8 +CLICK_IS_VERSION_80 = _major == 8 and _minor == 0 if CLICK_IS_BEFORE_VERSION_8X: diff --git a/src/rich_click/rich_click.py b/src/rich_click/rich_click.py index 22ee6df8..da788fc6 100644 --- a/src/rich_click/rich_click.py +++ b/src/rich_click/rich_click.py @@ -20,6 +20,7 @@ from rich.text import Text from typing_extensions import Literal +from rich_click._compat_click import CLICK_IS_BEFORE_VERSION_8X, CLICK_IS_VERSION_80 from rich_click.rich_help_configuration import OptionHighlighter, RichHelpConfiguration from rich_click.rich_help_formatter import RichHelpFormatter @@ -301,13 +302,25 @@ def _get_parameter_help( items.append(Text(config.envvar_string.format(envvar), style=config.style_option_envvar)) # Default value - if getattr(param, "show_default", None): - # param.default is the value, but click is a bit clever in choosing what to show here - # eg. --debug/--no-debug, default=False will show up as [default: no-debug] instead of [default: False] - # To avoid duplicating loads of code, let's just pull out the string from click with a regex - # Example outputs from param.get_help_record(ctx)[-1] are: - # [default: foo] - # [env var: EMAIL, EMAIL_ADDRESS; default: foo] + # Click 7.x, 8.0, and 8.1 all behave slightly differently when handling the default value help text. + if CLICK_IS_BEFORE_VERSION_8X: + parse_default = param.default is not None and (param.show_default or getattr(ctx, "show_default", None)) + elif CLICK_IS_VERSION_80: + show_default_is_str = isinstance(getattr(param, "show_default", None), str) + parse_default = show_default_is_str or (param.default is not None and (param.show_default or ctx.show_default)) + else: + show_default = False + show_default_is_str = False + if getattr(param, "show_default", None) is not None: + if isinstance(param.show_default, str): + show_default_is_str = show_default = True + else: + show_default = param.show_default + else: + show_default = getattr(ctx, "show_default", False) + parse_default = bool(show_default_is_str or (show_default and (param.default is not None))) + + if parse_default: default_str_match = re.search(r"\[(?:.+; )?default: (.*)\]", param.get_help_record(ctx)[-1]) if default_str_match: # Don't show the required string, as we show that afterwards anyway
options inherited from context settings aren't applied Thanks for the library, love that it's a drop-in replacement. Just creating this to track this here / let you know. Defaults set in [context settings](https://click.palletsprojects.com/en/8.0.x/api/#click.Command) aren't detected by commands/groups. Minimal example:

```python
import os

USE_RICH = "USE_RICH" in os.environ

if USE_RICH:
    import rich_click as click  # type: ignore[import]
else:
    import click  # type: ignore[no-redef]

DEFAULT_CONTEXT_SETTINGS = {"show_default": True}


@click.command(context_settings=DEFAULT_CONTEXT_SETTINGS)
@click.option("--use-context-settings", default="should work", help="help text shows")
@click.option("--overwrite-to-succeed", default="does work", show_default=True, help="shows here too")
def main(use_context_settings, overwrite_to_succeed) -> None:
    click.echo(use_context_settings)
    click.echo(overwrite_to_succeed)


if __name__ == "__main__":
    main()
```

![image](https://user-images.githubusercontent.com/7804791/155363615-be3b11fb-0268-4ecf-b00a-6ac7c6a9c26b.png)

Would be willing to create a PR for this at some point in the future. I would imagine one just has to look these up in the parent context instead of just checking the parameters passed, but it may be a bit more complicated than that...
Ah well spotted! Many thanks for the minimal example too. A PR would be great if you get a chance (I'm away on holiday currently), otherwise I'll try to look into it when I get a chance 👍🏻
Tried [creating a basic helper](https://github.com/seanbreckenridge/rich-click/blob/6aa3d0b2ccbb5535827c60ac4a9f6de02aa027e6/src/rich_click/rich_click.py#L152-L173), but hit the wall I was expecting; not sure how click handles this. In particular [this stuff](https://github.com/seanbreckenridge/rich-click/blob/6aa3d0b2ccbb5535827c60ac4a9f6de02aa027e6/src/rich_click/rich_click.py#L214-L217):

```python
show_default_val = _get_param_value(param, ctx, "show_default")
## hmm -- show_default returns False even when ctx has a value set, since
# show_default=False is the default for the parameter? Not sure how click
# properly resolves this
```

Are there any plans to fix this bug?
Yes - it was hopefully fixed in #89 but that PR went quiet. I recently ported it to #103 where I'm trying to resolve merge conflicts (#104). Once those are both merged, hopefully this issue should be resolved. I've been slow to merge because #89 is a massive PR that affects a lot of the package.
Hopefully resolved by merging #89 - please let me know if this is not the case.
The defaults are still not being shown. Any updates on that?
I've had the browser tab for this open for over a month but no, not had a chance to look at it yet sorry. PRs welcome.
Hey, I have a fix for this! 😄 But before I implement it, I think we'll need to expand the test coverage to different versions of Click. Long story short, Click 7, 8.0, and 8.1 all behave slightly differently. I will work on doing that first, then we can merge in the fix.
The whole inheritance of Defaults / Environment variables / Configuration file / CLI parameters and their precedence is quite complex in Click. It doesn't indeed work out of the box and that's not rich-click's fault! 😅 That being said, I think I also [solved this issue in Click Extra](https://kdeldycke.github.io/click-extra/config.html#configuration-loader), so feel free to have a look if you're looking for inspiration or ideas: https://github.com/kdeldycke/click-extra
Any news?
I need the test suite changes in to solve this since there are different behaviors in different Click versions. But once that's in I can merge a fix.
Apologies all, I'm struggling to find time to dedicate to the maintenance of this project - it's become a lot more popular than I anticipated and passed the "does what I need it to do" threshold a long time ago. I was kind of hoping to have the functionality wrapped into another library by now 😅 (as it has been for Typer, in fairness). I'd be very happy to expand the team of maintainers (add collaborators to the repo who can review + merge) if anyone is interested, or any other solutions. Anyway, I'll merge the testing PR now @dwreeves (thanks for this work!). But please note that the merges are getting more and more blind on my part.
Thanks for your work with this library @ewels, and totally understood that maintaining things for a long time can be quite the commitment. Part of why I added the test suite changes was to help provide some insight into what code changes were doing across all possible users, which can reduce maintenance burdens. I'll get working on addressing this issue today; I'm going to be doing a little café personal work sesh in an hour.
2023-07-16T22:46:04
0.0
[]
[]
ewels/rich-click
ewels__rich-click-110
c8797a7616af9c042e540ecdec8f35a4e5162b24
diff --git a/README.md b/README.md index ce9a70c0..07a44ef3 100644 --- a/README.md +++ b/README.md @@ -224,12 +224,20 @@ click.rich_click.ERRORS_EPILOGUE = "To find out more, visit [link=https://mytool The default behaviour of rich-click is to use the full width of the terminal for output. However, if you've carefully crafted your help texts for the default narrow click output, you may find that you now have a lot of whitespace at the side of the panels. -To limit the maximum width of the help output, set `MAX_WIDTH` in characters, as follows: +To limit the maximum width of the help output, regardless of the terminal size, set `WIDTH` in characters as follows: ```python -click.rich_click.MAX_WIDTH = 100 +click.rich_click.WIDTH = 128 ``` +To still use the full width of the terminal up to a certain limit, set `MAX_WIDTH` in characters as follows: + +```python +click.rich_click.MAX_WIDTH = 96 +``` + +Setting `MAX_WIDTH` overrides the effect of `WIDTH` + ### Styling Most aspects of rich-click formatting can be customised, from colours to alignment. @@ -411,7 +419,8 @@ STYLE_ERRORS_PANEL_BORDER = "red" ALIGN_ERRORS_PANEL = "left" STYLE_ERRORS_SUGGESTION = "dim" STYLE_ABORTED = "red" -MAX_WIDTH = None # Set to an int to limit to that many characters +WIDTH = None # Set to int for a fixed character limit regardless of the terminal width +MAX_WIDTH = None # Set to int for a max character limit that is less than the terminal width. Overrides WIDTH limit COLOR_SYSTEM = "auto" # Set to None to disable colors # Fixed strings diff --git a/src/rich_click/rich_click.py b/src/rich_click/rich_click.py index da788fc6..73ebf2fe 100644 --- a/src/rich_click/rich_click.py +++ b/src/rich_click/rich_click.py @@ -73,7 +73,8 @@ ALIGN_ERRORS_PANEL = "left" STYLE_ERRORS_SUGGESTION = "dim" STYLE_ABORTED = "red" -MAX_WIDTH = int(getenv("TERMINAL_WIDTH")) if getenv("TERMINAL_WIDTH") else None # type: ignore +WIDTH = int(getenv("TERMINAL_WIDTH")) if getenv("TERMINAL_WIDTH") else None # type: ignore +MAX_WIDTH = int(getenv("TERMINAL_WIDTH")) if getenv("TERMINAL_WIDTH") else WIDTH # type: ignore COLOR_SYSTEM: Optional[ Literal["auto", "standard", "256", "truecolor", "windows"] ] = "auto" # Set to None to disable colors @@ -798,6 +799,7 @@ def get_module_help_configuration() -> RichHelpConfiguration: ALIGN_ERRORS_PANEL, STYLE_ERRORS_SUGGESTION, STYLE_ABORTED, + WIDTH, MAX_WIDTH, COLOR_SYSTEM, FORCE_TERMINAL, diff --git a/src/rich_click/rich_help_configuration.py b/src/rich_click/rich_help_configuration.py index 95388cf1..3b027a81 100644 --- a/src/rich_click/rich_help_configuration.py +++ b/src/rich_click/rich_help_configuration.py @@ -72,6 +72,9 @@ class RichHelpConfiguration: align_errors_panel: rich.align.AlignMethod = field(default="left") style_errors_suggestion: rich.style.StyleType = field(default="dim") style_aborted: rich.style.StyleType = field(default="red") + width: Optional[int] = field( + default_factory=lambda: (int(getenv("TERMINAL_WIDTH")) if getenv("TERMINAL_WIDTH") else None) # type: ignore + ) max_width: Optional[int] = field( default_factory=lambda: (int(getenv("TERMINAL_WIDTH")) if getenv("TERMINAL_WIDTH") else None) # type: ignore ) diff --git a/src/rich_click/rich_help_formatter.py b/src/rich_click/rich_help_formatter.py index b3afade7..8238da7b 100644 --- a/src/rich_click/rich_help_formatter.py +++ b/src/rich_click/rich_help_formatter.py @@ -36,7 +36,7 @@ def create_console(config: RichHelpConfiguration, file: Optional[IO[str]] = None file: Optional IO stream to write Rich Console output Defaults to 
None. """ - return Console( + console = Console( theme=rich.theme.Theme( { "option": config.style_option, @@ -51,10 +51,13 @@ def create_console(config: RichHelpConfiguration, file: Optional[IO[str]] = None highlighter=config.highlighter, color_system=config.color_system, force_terminal=config.force_terminal, - width=config.max_width, file=file, + width=config.width, legacy_windows=config.legacy_windows, ) + if isinstance(config.max_width, int): + console.width = min(config.max_width, console.size.width) + return console def get_module_config() -> RichHelpConfiguration:
MAX_WIDTH setting acts only as fixed width
### Expected behavior
Use the full width of the terminal for output if it is less than the MAX_WIDTH setting
### Actual behavior
The rendered help is always displayed in MAX_WIDTH
### Screenshots
With MAX_WIDTH = 128
![image](https://github.com/ewels/rich-click/assets/15620712/8aac9eea-f8ed-411f-91de-bbc2e49109cc)
With MAX_WIDTH = None
![image](https://github.com/ewels/rich-click/assets/15620712/a617b19f-285b-47f6-9aae-10122ceaff8d)
Yeah, the name is a little misleading. It should be called `width` instead of `max_width` really - that's what it's setting: https://github.com/ewels/rich-click/blob/8e7523c891e0d3132f3c49e98154945e371d5e9b/src/rich_click/rich_help_formatter.py#L47 I guess it shouldn't be _too_ difficult to override the setting if the terminal width is known and non-zero though 🤔
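A sketch of the cap behavior the patch above introduces, assuming rich is installed: `width` pins the console to a fixed width, while `max_width` only caps it, so narrower terminals keep their natural size.

```python
from rich.console import Console

WIDTH = None    # fixed width regardless of terminal size (old MAX_WIDTH behavior)
MAX_WIDTH = 96  # upper limit that still respects smaller terminals

console = Console(width=WIDTH)
if isinstance(MAX_WIDTH, int):
    # Cap at MAX_WIDTH, but never exceed the real terminal width.
    console.width = min(MAX_WIDTH, console.size.width)
print(console.width)
```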
2023-06-12T13:09:32
0.0
[]
[]
SBU-BMI/wsinfer
SBU-BMI__wsinfer-177
58ffaa832e36b184c0cb50db38835f9ab81338a9
diff --git a/wsinfer/errors.py b/wsinfer/errors.py index 7cdd402..5a2a4b5 100644 --- a/wsinfer/errors.py +++ b/wsinfer/errors.py @@ -41,3 +41,7 @@ class CannotReadSpacing(WsinferException): class NoBackendException(WsinferException): ... + + +class BackendNotAvailable(WsinferException): + """The requested backend is not available.""" diff --git a/wsinfer/wsi.py b/wsinfer/wsi.py index 74e571b..6284178 100644 --- a/wsinfer/wsi.py +++ b/wsinfer/wsi.py @@ -10,6 +10,7 @@ import tifffile from PIL import Image +from .errors import BackendNotAvailable from .errors import CannotReadSpacing from .errors import DuplicateFilePrefixesFound from .errors import NoBackendException @@ -35,6 +36,11 @@ HAS_TIFFSLIDE = False logger.debug(f"Unable to import tiffslide due to error: {err}") +if not HAS_TIFFSLIDE and not HAS_OPENSLIDE: + raise NoBackendException( + "No backend is available. Please install openslide or tiffslide." + ) + @overload def set_backend(name: Literal["openslide"]) -> type[openslide.OpenSlide]: @@ -54,8 +60,18 @@ def set_backend( raise ValueError(f"Unknown backend: {name}") logger.info(f"Setting backend to {name}") if name == "openslide": + if not HAS_OPENSLIDE: + raise BackendNotAvailable( + "OpenSlide is not available. Please install the OpenSlide compiled" + " library and the Python package 'openslide-python'." + " See https://openslide.org/ for more information." + ) WSI = openslide.OpenSlide elif name == "tiffslide": + if not HAS_TIFFSLIDE: + raise BackendNotAvailable( + "TiffSlide is not available. Please install 'tiffslide'." + ) WSI = tiffslide.TiffSlide else: raise ValueError(f"Unknown backend: {name}") @@ -73,8 +89,6 @@ def set_backend( # For typing an object that has a method `read_region`. - - class CanReadRegion(Protocol): def read_region( self, location: tuple[int, int], level: int, size: tuple[int, int] @@ -100,6 +114,11 @@ def _get_mpp_openslide(slide_path: str | Path) -> tuple[float, float]: CannotReadSpacing if spacing cannot be read from the whole slide iamge. """ logger.debug("Attempting to read MPP using OpenSlide") + if not HAS_OPENSLIDE: + logger.critical( + "Cannot read MPP with OpenSlide because OpenSlide is not available" + ) + raise CannotReadSpacing() slide = openslide.OpenSlide(slide_path) mppx: float | None = None mppy: float | None = None @@ -170,6 +189,12 @@ def _get_mpp_tiffslide( slide_path: str | Path, ) -> tuple[float, float]: """Read MPP using TiffSlide.""" + if not HAS_TIFFSLIDE: + logger.critical( + "Cannot read MPP with TiffSlide because TiffSlide is not available" + ) + raise CannotReadSpacing() + slide = tiffslide.TiffSlide(slide_path) mppx: float | None = None mppy: float | None = None
make model output names align with QuPath: names would have to be capitalized ![image](https://github.com/SBU-BMI/wsinfer/assets/17690870/09210f10-6685-45bc-87fa-06f54136b366)
2023-08-03T17:18:36
0.0
[]
[]
SBU-BMI/wsinfer
SBU-BMI__wsinfer-143
bb0316fa02661ff532b176596d18e46aa0a893f6
diff --git a/wsinfer/cli/infer.py b/wsinfer/cli/infer.py index 7aea76a..f7af44b 100644 --- a/wsinfer/cli/infer.py +++ b/wsinfer/cli/infer.py @@ -164,7 +164,7 @@ def get_stdout(args) -> str: "model": { "config": dataclasses.asdict(model_obj.config), "huggingface_location": hf_info, - "path": model_obj.model_path, + "path": str(model_obj.model_path), }, "runtime": { "version": __version__, diff --git a/wsinfer/modellib/data.py b/wsinfer/modellib/data.py index 00c6b97..ac1e45d 100644 --- a/wsinfer/modellib/data.py +++ b/wsinfer/modellib/data.py @@ -165,13 +165,8 @@ def __getitem__( location=(minx, miny), level=0, size=(width, height) ) patch_im = patch_im.convert("RGB") - # Resize to the expected patch size (and spacing). - patch_im = patch_im.resize((self.patch_size, self.patch_size)) if self.transform is not None: patch_im = self.transform(patch_im) - if not isinstance(patch_im, (Image.Image, torch.Tensor)): - raise TypeError( - f"patch image must be an Image of Tensor, but got {type(patch_im)}" - ) + return patch_im, torch.as_tensor([minx, miny, width, height]) diff --git a/wsinfer/modellib/run_inference.py b/wsinfer/modellib/run_inference.py index 10ac2c9..9c6abb0 100644 --- a/wsinfer/modellib/run_inference.py +++ b/wsinfer/modellib/run_inference.py @@ -194,8 +194,6 @@ def run_inference( slide_coords_arr = np.concatenate(slide_coords, axis=0) slide_df = pd.DataFrame( dict( - # FIXME: should we include the slide path in the CSV? Probably not. - slide=wsi_path, minx=slide_coords_arr[:, 0], miny=slide_coords_arr[:, 1], width=slide_coords_arr[:, 2],
[BUG] saving JSON with weights file of type pathlib.Path raises error The `run_metadata*.json` file is saved in a corrupted state if one of the values is not JSON serializable. The type `pathlib.Path` is one such type that leads to errors...
Change near this line: https://github.com/SBU-BMI/wsinfer/blob/45dfeeaef1853612dcebecd4d362e1c9cdb55e41/wsinfer/cli/infer.py#L151 Perhaps add a test too...

```python
else:
    weights_files = str(weights_file)
```
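A minimal repro of the failure plus the fix from the hint above (cast the `Path` to `str` before serialization); the `weights.pt` filename is illustrative.

```python
import json
from pathlib import Path

metadata = {"model": {"path": Path("weights.pt")}}
try:
    json.dumps(metadata)
except TypeError as err:
    print(err)  # Object of type PosixPath is not JSON serializable

# The suggested fix: stringify the Path before building the JSON payload.
metadata["model"]["path"] = str(metadata["model"]["path"])
print(json.dumps(metadata))  # {"model": {"path": "weights.pt"}}
```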
2023-07-12T19:28:57
0.0
[]
[]
duo-labs/parliament
duo-labs__parliament-185
1ea9f0b3890223fa9a47503be9f1b739610ac73e
diff --git a/parliament/__init__.py b/parliament/__init__.py index 91dd4b3..1596023 100644 --- a/parliament/__init__.py +++ b/parliament/__init__.py @@ -115,7 +115,8 @@ def is_arn_match(resource_type, arn_format, resource): if "bucket" in resource_type: # We have to do a special case here for S3 buckets - if "/" in resource: + # and since resources can use variables which contain / need to replace them + if "/" in strip_var_from_arn(resource, "theVar"): return False # The ARN has at least 6 parts, separated by a colon. Ensure these exist. @@ -144,7 +145,6 @@ def is_arn_match(resource_type, arn_format, resource): # Some of the arn_id's contain regexes of the form "[key]" so replace those with "*" resource_id = re.sub(r"\[.+?\]", "*", resource_id) - return is_glob_match(arn_id, resource_id) def is_arn_strictly_valid(resource_type, arn_format, resource): @@ -166,7 +166,6 @@ def is_arn_strictly_valid(resource_type, arn_format, resource): - resource: ARN regex from IAM policy """ - if is_arn_match(resource_type, arn_format, resource): # this would have already raised exception arn_parts = arn_format.split(":") @@ -187,13 +186,16 @@ def is_arn_strictly_valid(resource_type, arn_format, resource): return False # replace aws variable and check for other colons - resource_id_no_vars = re.sub(r"\$\{aws.\w+\}", "", resource_id) + resource_id_no_vars = strip_var_from_arn(resource_id) if ":" in resource_id_no_vars and not ":" in arn_id: return False return True return False +def strip_var_from_arn(arn, replace_with=""): + return re.sub(r"\$\{aws.[\w\/]+\}", replace_with, arn) + def is_glob_match(s1, s2): # This comes from https://github.com/duo-labs/parliament/issues/36#issuecomment-574001764
Policy variables in the resource element not supported Hi, example policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetBucketVersioning",
                "s3:GetEncryptionConfiguration",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Resource": "arn:aws:s3::bucket-for-client-${aws:PrincipalTag/Namespace}-*"
        }
    ]
}
```

The policy variable ${aws:PrincipalTag/Namespace} causes:

```json
{
  "description": "",
  "detail": [
    {
      "action": "s3:GetBucketLocation",
      "required_format": "arn:*:s3:::*"
    },
    {
      "action": "s3:GetBucketVersioning",
      "required_format": "arn:*:s3:::*"
    },
    {
      "action": "s3:GetEncryptionConfiguration",
      "required_format": "arn:*:s3:::*"
    },
    {
      "action": "s3:ListBucket",
      "required_format": "arn:*:s3:::*"
    },
    {
      "action": "s3:ListBucketMultipartUploads",
      "required_format": "arn:*:s3:::*"
    },
    {
      "action": "s3:ListBucketVersions",
      "required_format": "arn:*:s3:::*"
    }
  ],
  "issue": "RESOURCE_MISMATCH",
  "location": {
    "column": 3,
    "filepath": "main.json",
    "line": 4
  },
  "match": true,
  "severity": "MEDIUM",
  "title": "No resources match for the given action"
}
```

Maybe extending this condition https://github.com/duo-labs/parliament/blob/main/parliament/__init__.py#L118 with some regexp would be good enough for a start? ;) (e.g. `(?P<Variable>\${aws:.*})`) Take care!
Looks like you are missing a colon. S3 bucket resources are of the form `arn:*:s3:::bucket`, but you have `arn:*:s3::bucket`. After fixing that though, looks like there is an issue with Parliament. For the documentation's ARN, I do a regex to clear out the variables: https://github.com/duo-labs/parliament/blob/afb6930992e7c22b128b377ce0fd46fef9e1d1a8/parliament/__init__.py#L248 I guess something similar could be done for this variable (I'm not entirely sure what impact that might have on any other checks). I'm marking this as a bug to be fixed, but unfortunately I don't expect to be able to dig into this for a while.
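The variable-stripping helper from the patch above, shown standalone: policy variables like `${aws:PrincipalTag/Namespace}` are replaced before the bucket check looks for `/`, so they no longer trigger a false `RESOURCE_MISMATCH`.

```python
import re


def strip_var_from_arn(arn, replace_with=""):
    # Replace ${aws:...} policy variables (which may contain "/") with a placeholder.
    return re.sub(r"\$\{aws.[\w\/]+\}", replace_with, arn)


resource = "arn:aws:s3:::bucket-for-client-${aws:PrincipalTag/Namespace}-*"
print(strip_var_from_arn(resource, "theVar"))
# arn:aws:s3:::bucket-for-client-theVar-*  -> no "/" left to fail the bucket check
```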
2021-03-18T17:56:44
0.0
[]
[]
duo-labs/parliament
duo-labs__parliament-177
da69dfd77981d8ef87923261980caa6eea9ae9a5
diff --git a/parliament/__init__.py b/parliament/__init__.py index f9b5d48..fd7d7c0 100644 --- a/parliament/__init__.py +++ b/parliament/__init__.py @@ -147,6 +147,52 @@ def is_arn_match(resource_type, arn_format, resource): return is_glob_match(arn_id, resource_id) +def is_arn_strictly_valid(resource_type, arn_format, resource): + """ + Strictly validate the arn_format specified in the docs, with the resource + given in the IAM policy. These can each be strings with globbing. For example, we + want to match the following two strings: + - arn:*:s3:::*/* + - arn:aws:s3:::*personalize* + + That should return true because you could have "arn:aws:s3:::personalize/" which matches both. + + However when not using *, must include the resource type in the resource arn and wildcards + are not valid for the resource type portion (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namesspaces) + + Input: + - resource_type: Example "bucket", this is only used to identify special cases. + - arn_format: ARN regex from the docs + - resource: ARN regex from IAM policy + + """ + + if is_arn_match(resource_type, arn_format, resource): + # this would have already raised exception + arn_parts = arn_format.split(":") + resource_parts = resource.split(":") + arn_id = ":".join(arn_parts[5:]) + resource_id = ":".join(resource_parts[5:]) + + # Does the resource contain a resource type component + # regex looks for a resource type word like "user" or "cluster-endpoint" followed by a + # : or / and then anything else excluding the resource type string starting with a * + arn_id_resource_type = re.match(r"(^[^\*][\w-]+)[\/\:].+", arn_id) + + if arn_id_resource_type != None and resource_id != "*": + + # https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namesspaces + # The following is not allowed: arn:aws:iam::123456789012:u* + if not (resource_id.startswith(arn_id_resource_type[1])): + return False + + # replace aws variable and check for other colons + resource_id_no_vars = re.sub(r"\$\{aws.\w+\}", "", resource_id) + if ":" in resource_id_no_vars and not ":" in arn_id: + return False + + return True + return False def is_glob_match(s1, s2): # This comes from https://github.com/duo-labs/parliament/issues/36#issuecomment-574001764 diff --git a/parliament/statement.py b/parliament/statement.py index 805c27c..43aaea3 100644 --- a/parliament/statement.py +++ b/parliament/statement.py @@ -4,6 +4,7 @@ from . import ( iam_definition, is_arn_match, + is_arn_strictly_valid, expand_action, UnknownActionException, UnknownPrefixException, @@ -921,7 +922,7 @@ def analyze_statement(self): self.resource_star[action_key] += 1 match_found = True continue - if is_arn_match(resource_type, arn_format, resource.value): + if is_arn_strictly_valid(resource_type, arn_format, resource.value): match_found = True continue
parliament should catch invalid s3 bucket object resource arns I expected `parliament` to catch the following error: ```json { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action":["s3:GetObject"], "Resource": ["arn:aws:s3:::bucket1:*"] } } ``` Which should have been `arn:aws:s3:::bucket1/*` instead of `:*` at the end of the ARN. Execution of `parliament` against that policy revealed no errors, warnings, or non-zero return code. Thanks!
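A hand-run of the stricter check from the patch, applied to the issue's ARN: once the loose glob match passes, a `:` in the resource ID where the documented format has none marks the resource as invalid.

```python
arn_format = "arn:*:s3:::*/*"        # documented format for s3 object ARNs
resource = "arn:aws:s3:::bucket1:*"  # the policy's mistaken resource

# Everything after the fifth colon is the resource ID.
arn_id = ":".join(arn_format.split(":")[5:])     # "*/*"
resource_id = ":".join(resource.split(":")[5:])  # "bucket1:*"

print(":" in resource_id and ":" not in arn_id)  # True -> strictly invalid
```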
2021-03-04T23:15:49
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-221
3f4b34308356049fa15a4acf621c69e8f20a7ff6
diff --git a/schwifty/registry.py b/schwifty/registry.py index 5689f5f..06bd528 100644 --- a/schwifty/registry.py +++ b/schwifty/registry.py @@ -72,7 +72,7 @@ def parse_v2(data: dict[str, Any]) -> list[dict[str, Any]]: def expand(entry: dict[str, Any], src: str, dst: str) -> list[dict[str, Any]]: values = entry.pop(src) entry.setdefault("primary", False) - return [entry | {dst: value} for value in values] + return [{**entry, dst: value} for value in values] return list( itertools.chain.from_iterable(
Missing declaration of dropping Python 3.8 support schwifty 2024.8.1 [uses](https://github.com/mdomke/schwifty/blob/3f4b34308356049fa15a4acf621c69e8f20a7ff6/schwifty/registry.py#L75) the Python 3.9-only `{} | {}` dict merge syntax but is still describing itself as [>=3.8 compatible](https://github.com/mdomke/schwifty/blob/3f4b34308356049fa15a4acf621c69e8f20a7ff6/pyproject.toml#L24), giving a:

```
unsupported operand type(s) for |: 'dict' and 'dict'
```

Either that syntax needs replacing or the minimum supported Python version should be bumped, so that running an unpinned `pip install schwifty` with Python 3.8 doesn't lead to a broken installation (although you will need to yank the current release to meaningfully apply that fix).
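The incompatible construct and its backwards-compatible spelling, as applied in the patch above; `entry`, `dst`, and `value` are illustrative stand-ins.

```python
entry = {"primary": False}
dst, value = "bic", "KOMBCZPP"

# merged = entry | {dst: value}  # TypeError on Python 3.8: dict | dict needs 3.9+
merged = {**entry, dst: value}   # unpacking merge works on 3.8 and later
print(merged)                    # {'primary': False, 'bic': 'KOMBCZPP'}
```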
Ran into the same issue, discovered schwifty is broken on 2024.8.0 as well - latest version that works on 3.8 is 2024.6.1. Is it possible to perhaps fix the library for Python 3.8? It still has a few months left until EOL and the vast majority of Python libraries still support it.
2024-08-28T20:17:04
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-218
a5b1595965797c847cc0cfc22028033496b97326
diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 535eb7c..6d439e2 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -5,6 +5,13 @@ Changelog Versions follow `CalVer <http://www.calver.org/>`_ with the scheme ``YY.0M.Micro``. +Unreleased +---------- +Added +~~~~~ +* Allow ``BIC`` and ``IBAN`` objects to be deepcopied (thanks to `@binaryDiv <https://github.com/binaryDiv>`_ + for pointing this out). + `2024.08.0`_ - 2024/08/13 ------------------------- Added diff --git a/schwifty/common.py b/schwifty/common.py index 99a5f8b..ab3b81a 100644 --- a/schwifty/common.py +++ b/schwifty/common.py @@ -31,6 +31,9 @@ def __eq__(self, other: Any) -> bool: def __lt__(self, other: Any) -> bool: return str(self) < str(other) + def __deepcopy__(self, memo: dict[str, Any] | None = None) -> Self: + return self.__class__(str(self)) + @property def compact(self) -> str: """str: Compact representation of the code. It's preferable to call ``str(obj)``"""
IBAN object cannot be deepcopied Currently, when trying a `copy.deepcopy()` on an `IBAN` object, an exception is raised. This can happen for example when deepcopying a dictionary that contains an `IBAN` object. A simple `copy()` works perfectly fine, it even returns a new instance: ```pycon >>> from schwifty import IBAN >>> import copy >>> >>> iban = IBAN('DE91100000000123456789') >>> iban <IBAN=DE91100000000123456789> >>> >>> iban_copy = copy.copy(iban) >>> id(iban), id(iban_copy) (125298528552432, 125298507391216) ``` However, a `deepcopy()` results in the following exception: ```pycon >>> iban_deepcopy = copy.deepcopy(iban) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.12/copy.py", line 162, in deepcopy y = _reconstruct(x, memo, *rv) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/copy.py", line 259, in _reconstruct state = deepcopy(state, memo) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/copy.py", line 136, in deepcopy y = copier(x, memo) ^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/copy.py", line 221, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/copy.py", line 162, in deepcopy y = _reconstruct(x, memo, *rv) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/copy.py", line 253, in _reconstruct y = func(*args) ^^^^^^^^^^^ File "/usr/lib/python3.12/copyreg.py", line 99, in __newobj__ return cls.__new__(cls, *args) ^^^^^^^^^^^^^^^^^^^^^^^ TypeError: BBAN.__new__() missing 1 required positional argument: 'value' ``` I've tested this on the latest version of schwifty (2024.6.1) and Python 3.12.4.
Adding the following implementation of `__deepcopy__` to the `BBAN` class seems to fix the issue. However, I'm not sure if this is the best solution or if that implementation should be done in the `common.Base` class.

```python
def __deepcopy__(self, memo):
    cls = type(self)
    return cls(str(self.country_code), str(self))
```
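The patch above takes the second option and puts the hook on the shared base class rather than on `BBAN` alone. A self-contained sketch of that approach, with the class hierarchy simplified for illustration:

```python
import copy


class Base:
    """Simplified stand-in for schwifty.common.Base."""

    def __init__(self, value: str) -> None:
        self._value = value

    def __str__(self) -> str:
        return self._value

    def __deepcopy__(self, memo=None):
        # Rebuild the object from its canonical string form instead of
        # letting copy.deepcopy() call cls.__new__() without arguments,
        # which is what raised the TypeError in the report above.
        return self.__class__(str(self))


class IBAN(Base):
    pass


iban = IBAN("DE91100000000123456789")
assert str(copy.deepcopy(iban)) == str(iban)
```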
2024-08-13T09:11:17
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-193
1e6234b231587c48c47e9e57b73279b35c5f6dc2
diff --git a/schwifty/bank_registry/generated_cz.json b/schwifty/bank_registry/generated_cz.json index a4b2147..d7b8b9e 100644 --- a/schwifty/bank_registry/generated_cz.json +++ b/schwifty/bank_registry/generated_cz.json @@ -4,16 +4,16 @@ "primary": true, "bic": "KOMBCZPP", "bank_code": "0100", - "name": "Komer\u00c4\u008dn\u00c3\u00ad banka, a.s.", - "short_name": "Komer\u00c4\u008dn\u00c3\u00ad banka, a.s." + "name": "Komer\u010dn\u00ed banka, a.s.", + "short_name": "Komer\u010dn\u00ed banka, a.s." }, { "country_code": "CZ", "primary": true, "bic": "CEKOCZPP", "bank_code": "0300", - "name": "\u00c4\u008ceskoslovensk\u00c3\u00a1 obchodn\u00c3\u00ad banka, a. s.", - "short_name": "\u00c4\u008ceskoslovensk\u00c3\u00a1 obchodn\u00c3\u00ad banka, a. s." + "name": "\u010ceskoslovensk\u00e1 obchodn\u00ed banka, a. s.", + "short_name": "\u010ceskoslovensk\u00e1 obchodn\u00ed banka, a. s." }, { "country_code": "CZ", @@ -28,16 +28,16 @@ "primary": true, "bic": "CNBACZPP", "bank_code": "0710", - "name": "\u00c4\u008cESK\u00c3\u0081 N\u00c3\u0081RODN\u00c3\u008d BANKA", - "short_name": "\u00c4\u008cESK\u00c3\u0081 N\u00c3\u0081RODN\u00c3\u008d BANKA" + "name": "\u010cESK\u00c1 N\u00c1RODN\u00cd BANKA", + "short_name": "\u010cESK\u00c1 N\u00c1RODN\u00cd BANKA" }, { "country_code": "CZ", "primary": true, "bic": "GIBACZPX", "bank_code": "0800", - "name": "\u00c4\u008cesk\u00c3\u00a1 spo\u00c5\u0099itelna, a.s.", - "short_name": "\u00c4\u008cesk\u00c3\u00a1 spo\u00c5\u0099itelna, a.s." + "name": "\u010cesk\u00e1 spo\u0159itelna, a.s.", + "short_name": "\u010cesk\u00e1 spo\u0159itelna, a.s." }, { "country_code": "CZ", @@ -52,8 +52,8 @@ "primary": true, "bic": "CITFCZPP", "bank_code": "2060", - "name": "Citfin, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo", - "short_name": "Citfin, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo" + "name": "Citfin, spo\u0159iteln\u00ed dru\u017estvo", + "short_name": "Citfin, spo\u0159iteln\u00ed dru\u017estvo" }, { "country_code": "CZ", @@ -68,24 +68,24 @@ "primary": true, "bic": "", "bank_code": "2100", - "name": "Hypote\u00c4\u008dn\u00c3\u00ad banka, a.s.", - "short_name": "Hypote\u00c4\u008dn\u00c3\u00ad banka, a.s." + "name": "\u010cSOB Hypote\u010dn\u00ed banka, a.s.", + "short_name": "\u010cSOB Hypote\u010dn\u00ed banka, a.s." 
}, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "2200", - "name": "Pen\u00c4\u009b\u00c5\u00ben\u00c3\u00ad d\u00c5\u00afm, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo", - "short_name": "Pen\u00c4\u009b\u00c5\u00ben\u00c3\u00ad d\u00c5\u00afm, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo" + "name": "Pen\u011b\u017en\u00ed d\u016fm, spo\u0159iteln\u00ed dru\u017estvo", + "short_name": "Pen\u011b\u017en\u00ed d\u016fm, spo\u0159iteln\u00ed dru\u017estvo" }, { "country_code": "CZ", "primary": true, "bic": "ARTTCZPP", "bank_code": "2220", - "name": "Artesa, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo", - "short_name": "Artesa, spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo" + "name": "Artesa, spo\u0159iteln\u00ed dru\u017estvo", + "short_name": "Artesa, spo\u0159iteln\u00ed dru\u017estvo" }, { "country_code": "CZ", @@ -100,24 +100,24 @@ "primary": true, "bic": "", "bank_code": "2260", - "name": "NEY spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo", - "short_name": "NEY spo\u00c5\u0099iteln\u00c3\u00ad dru\u00c5\u00bestvo" + "name": "NEY spo\u0159iteln\u00ed dru\u017estvo", + "short_name": "NEY spo\u0159iteln\u00ed dru\u017estvo" }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "2275", - "name": "Podnikatelsk\u00c3\u00a1 dru\u00c5\u00bestevn\u00c3\u00ad z\u00c3\u00a1lo\u00c5\u00bena", - "short_name": "Podnikatelsk\u00c3\u00a1 dru\u00c5\u00bestevn\u00c3\u00ad z\u00c3\u00a1lo\u00c5\u00bena" + "name": "Podnikatelsk\u00e1 dru\u017estevn\u00ed z\u00e1lo\u017ena", + "short_name": "Podnikatelsk\u00e1 dru\u017estevn\u00ed z\u00e1lo\u017ena" }, { "country_code": "CZ", "primary": true, "bic": "CITICZPX", "bank_code": "2600", - "name": "Citibank Europe plc, organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka", - "short_name": "Citibank Europe plc, organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka" + "name": "Citibank Europe plc, organiza\u010dn\u00ed slo\u017eka", + "short_name": "Citibank Europe plc, organiza\u010dn\u00ed slo\u017eka" }, { "country_code": "CZ", @@ -140,8 +140,8 @@ "primary": true, "bic": "BPPFCZP1", "bank_code": "3050", - "name": "BNP Paribas Personal Finance SA, od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod", - "short_name": "BNP Paribas Personal Finance SA, od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod" + "name": "BNP Paribas Personal Finance SA, od\u0161t\u011bpn\u00fd z\u00e1vod", + "short_name": "BNP Paribas Personal Finance SA, od\u0161t\u011bpn\u00fd z\u00e1vod" }, { "country_code": "CZ", @@ -172,8 +172,8 @@ "primary": true, "bic": "NROZCZPP", "bank_code": "4300", - "name": "N\u00c3\u00a1rodn\u00c3\u00ad rozvojov\u00c3\u00a1 banka, a.s.", - "short_name": "N\u00c3\u00a1rodn\u00c3\u00ad rozvojov\u00c3\u00a1 banka, a.s." + "name": "N\u00e1rodn\u00ed rozvojov\u00e1 banka, a.s.", + "short_name": "N\u00e1rodn\u00ed rozvojov\u00e1 banka, a.s." 
}, { "country_code": "CZ", @@ -204,24 +204,24 @@ "primary": true, "bic": "COBACZPX", "bank_code": "6200", - "name": "COMMERZBANK Aktiengesellschaft, pobo\u00c4\u008dka Praha", - "short_name": "COMMERZBANK Aktiengesellschaft, pobo\u00c4\u008dka Praha" + "name": "COMMERZBANK Aktiengesellschaft, pobo\u010dka Praha", + "short_name": "COMMERZBANK Aktiengesellschaft, pobo\u010dka Praha" }, { "country_code": "CZ", "primary": true, "bic": "BREXCZPP", "bank_code": "6210", - "name": "mBank S.A., organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka", - "short_name": "mBank S.A., organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka" + "name": "mBank S.A., organiza\u010dn\u00ed slo\u017eka", + "short_name": "mBank S.A., organiza\u010dn\u00ed slo\u017eka" }, { "country_code": "CZ", "primary": true, "bic": "GEBACZPP", "bank_code": "6300", - "name": "BNP Paribas S.A., pobo\u00c4\u008dka \u00c4\u008cesk\u00c3\u00a1 republika", - "short_name": "BNP Paribas S.A., pobo\u00c4\u008dka \u00c4\u008cesk\u00c3\u00a1 republika" + "name": "BNP Paribas S.A., pobo\u010dka \u010cesk\u00e1 republika", + "short_name": "BNP Paribas S.A., pobo\u010dka \u010cesk\u00e1 republika" }, { "country_code": "CZ", @@ -236,8 +236,8 @@ "primary": true, "bic": "SUBACZPP", "bank_code": "6700", - "name": "V\u00c5\u00a1eobecn\u00c3\u00a1 \u00c3\u00baverov\u00c3\u00a1 banka a.s., pobo\u00c4\u008dka Praha", - "short_name": "V\u00c5\u00a1eobecn\u00c3\u00a1 \u00c3\u00baverov\u00c3\u00a1 banka a.s., pobo\u00c4\u008dka Praha" + "name": "V\u0161eobecn\u00e1 \u00faverov\u00e1 banka a.s., pobo\u010dka Praha", + "short_name": "V\u0161eobecn\u00e1 \u00faverov\u00e1 banka a.s., pobo\u010dka Praha" }, { "country_code": "CZ", @@ -252,72 +252,72 @@ "primary": true, "bic": "DEUTCZPX", "bank_code": "7910", - "name": "Deutsche Bank Aktiengesellschaft Filiale Prag, organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka", - "short_name": "Deutsche Bank Aktiengesellschaft Filiale Prag, organiza\u00c4\u008dn\u00c3\u00ad slo\u00c5\u00beka" + "name": "Deutsche Bank Aktiengesellschaft Filiale Prag, organiza\u010dn\u00ed slo\u017eka", + "short_name": "Deutsche Bank Aktiengesellschaft Filiale Prag, organiza\u010dn\u00ed slo\u017eka" }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "7950", - "name": "Raiffeisen stavebn\u00c3\u00ad spo\u00c5\u0099itelna a.s.", - "short_name": "Raiffeisen stavebn\u00c3\u00ad spo\u00c5\u0099itelna a.s." + "name": "Raiffeisen stavebn\u00ed spo\u0159itelna a.s.", + "short_name": "Raiffeisen stavebn\u00ed spo\u0159itelna a.s." }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "7960", - "name": "\u00c4\u008cSOB Stavebn\u00c3\u00ad spo\u00c5\u0099itelna, a.s.", - "short_name": "\u00c4\u008cSOB Stavebn\u00c3\u00ad spo\u00c5\u0099itelna, a.s." + "name": "\u010cSOB Stavebn\u00ed spo\u0159itelna, a.s.", + "short_name": "\u010cSOB Stavebn\u00ed spo\u0159itelna, a.s." }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "7970", - "name": "MONETA Stavebn\u00c3\u00ad Spo\u00c5\u0099itelna, a.s.", - "short_name": "MONETA Stavebn\u00c3\u00ad Spo\u00c5\u0099itelna, a.s." + "name": "MONETA Stavebn\u00ed Spo\u0159itelna, a.s.", + "short_name": "MONETA Stavebn\u00ed Spo\u0159itelna, a.s." }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "7990", - "name": "Modr\u00c3\u00a1 pyramida stavebn\u00c3\u00ad spo\u00c5\u0099itelna, a.s.", - "short_name": "Modr\u00c3\u00a1 pyramida stavebn\u00c3\u00ad spo\u00c5\u0099itelna, a.s." 
+ "name": "Modr\u00e1 pyramida stavebn\u00ed spo\u0159itelna, a.s.", + "short_name": "Modr\u00e1 pyramida stavebn\u00ed spo\u0159itelna, a.s." }, { "country_code": "CZ", "primary": true, "bic": "GENOCZ21", "bank_code": "8030", - "name": "Volksbank Raiffeisenbank Nordoberpfalz eG pobo\u00c4\u008dka Cheb", - "short_name": "Volksbank Raiffeisenbank Nordoberpfalz eG pobo\u00c4\u008dka Cheb" + "name": "Volksbank Raiffeisenbank Nordoberpfalz eG pobo\u010dka Cheb", + "short_name": "Volksbank Raiffeisenbank Nordoberpfalz eG pobo\u010dka Cheb" }, { "country_code": "CZ", "primary": true, "bic": "OBKLCZ2X", "bank_code": "8040", - "name": "Oberbank AG pobo\u00c4\u008dka \u00c4\u008cesk\u00c3\u00a1 republika", - "short_name": "Oberbank AG pobo\u00c4\u008dka \u00c4\u008cesk\u00c3\u00a1 republika" + "name": "Oberbank AG pobo\u010dka \u010cesk\u00e1 republika", + "short_name": "Oberbank AG pobo\u010dka \u010cesk\u00e1 republika" }, { "country_code": "CZ", "primary": true, "bic": "", "bank_code": "8060", - "name": "Stavebn\u00c3\u00ad spo\u00c5\u0099itelna \u00c4\u008cesk\u00c3\u00a9 spo\u00c5\u0099itelny, a.s.", - "short_name": "Stavebn\u00c3\u00ad spo\u00c5\u0099itelna \u00c4\u008cesk\u00c3\u00a9 spo\u00c5\u0099itelny, a.s." + "name": "Stavebn\u00ed spo\u0159itelna \u010cesk\u00e9 spo\u0159itelny, a.s.", + "short_name": "Stavebn\u00ed spo\u0159itelna \u010cesk\u00e9 spo\u0159itelny, a.s." }, { "country_code": "CZ", "primary": true, "bic": "CZEECZPP", "bank_code": "8090", - "name": "\u00c4\u008cesk\u00c3\u00a1 exportn\u00c3\u00ad banka, a.s.", - "short_name": "\u00c4\u008cesk\u00c3\u00a1 exportn\u00c3\u00ad banka, a.s." + "name": "\u010cesk\u00e1 exportn\u00ed banka, a.s.", + "short_name": "\u010cesk\u00e1 exportn\u00ed banka, a.s." }, { "country_code": "CZ", @@ -364,16 +364,16 @@ "primary": true, "bic": "COMMCZPP", "bank_code": "8255", - "name": "Bank of Communications Co., Ltd., Prague Branch od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod", - "short_name": "Bank of Communications Co., Ltd., Prague Branch od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod" + "name": "Bank of Communications Co., Ltd., Prague Branch od\u0161t\u011bpn\u00fd z\u00e1vod", + "short_name": "Bank of Communications Co., Ltd., Prague Branch od\u0161t\u011bpn\u00fd z\u00e1vod" }, { "country_code": "CZ", "primary": true, "bic": "ICBKCZPP", "bank_code": "8265", - "name": "Industrial and Commercial Bank of China Limited, Prague Branch, od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod", - "short_name": "Industrial and Commercial Bank of China Limited, Prague Branch, od\u00c5\u00a1t\u00c4\u009bpn\u00c3\u00bd z\u00c3\u00a1vod" + "name": "Industrial and Commercial Bank of China Limited, Prague Branch, od\u0161t\u011bpn\u00fd z\u00e1vod", + "short_name": "Industrial and Commercial Bank of China Limited, Prague Branch, od\u0161t\u011bpn\u00fd z\u00e1vod" }, { "country_code": "CZ", diff --git a/scripts/get_bank_registry_cz.py b/scripts/get_bank_registry_cz.py index 28fdb2d..6e28793 100644 --- a/scripts/get_bank_registry_cz.py +++ b/scripts/get_bank_registry_cz.py @@ -8,7 +8,7 @@ def process(): - datas = pandas.read_csv(URL, encoding="latin1", delimiter=";", dtype="str") + datas = pandas.read_csv(URL, encoding="utf-8", delimiter=";", dtype="str") datas = datas.dropna(how="all") datas.fillna("", inplace=True)
maintenance

Please change the maintainer of the package to figo GmbH instead of @mdomke, and make sure the PyPI account is owned by figo. Thanks.
Has been fixed in d10f81302f326e523417d2d3c43ed96c3519711a
2024-04-16T11:43:11
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-169
443067202fc1b97c63c965b3f8879f33a9e5b965
diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 39503a6..54f634e 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -5,6 +5,13 @@ Changelog Versions follow `CalVer <http://www.calver.org/>`_ with the scheme ``YY.0M.Micro``. +`2023.11.1`_ - tbd +------------------------- +Changed +~~~~~~~ +* The Swiss bank registry is now generated from the SIX Group. + + `2023.11.0`_ - 2023/11/17 ------------------------- Changed @@ -435,6 +442,7 @@ Added * Added :attr:`.BIC.country` and :attr:`.IBAN.country`. +.. _2023.11.1: https://github.com/mdomke/schwifty/compare/2023.11.0...2023.11.1 .. _2023.11.0: https://github.com/mdomke/schwifty/compare/2023.10.0...2023.11.0 .. _2023.10.0: https://github.com/mdomke/schwifty/compare/2023.09.0...2023.10.0 .. _2023.09.0: https://github.com/mdomke/schwifty/compare/2023.06.0...2023.09.0 diff --git a/schwifty/bank_registry/generated_ch.json b/schwifty/bank_registry/generated_ch.json new file mode 100644 index 0000000..3f1b4ae --- /dev/null +++ b/schwifty/bank_registry/generated_ch.json @@ -0,0 +1,8714 @@ +[ + { + "name": "Schweizerische Nationalbank", + "short_name": "Schweizerische Nationalbank", + "bank_code": "00100", + "bic": "SNBZCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Schweizerische Nationalbank", + "short_name": "Schweizerische Nationalbank", + "bank_code": "00110", + "bic": "SNBZCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schweizerische Nationalbank", + "short_name": "Schweizerische Nationalbank", + "bank_code": "00115", + "bic": "SNBZCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Uster 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00202", + "bic": "UBSWCHZH86N", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Arbon", + "short_name": "UBS Switzerland AG", + "bank_code": "00203", + "bic": "UBSWCHZH93A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Arosa", + "short_name": "UBS Switzerland AG", + "bank_code": "00204", + "bic": "UBSWCHZH70E", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00206", + "bic": "UBSWCHZH80V", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - B\u00fclach", + "short_name": "UBS Switzerland AG", + "bank_code": "00207", + "bic": "UBSWCHZH81M", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Chur", + "short_name": "UBS Switzerland AG", + "bank_code": "00208", + "bic": "UBSWCHZH70A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Davos Platz 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00209", + "bic": "UBSWCHZH72D", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Brugg AG", + "short_name": "UBS Switzerland AG", + "bank_code": "00210", + "bic": "UBSWCHZH52A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - D\u00fcdingen", + "short_name": "UBS Switzerland AG", + "bank_code": "00211", + "bic": "UBSWCHZH31A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Altst\u00e4tten SG", + "short_name": "UBS Switzerland AG", + "bank_code": "00213", + "bic": "UBSWCHZH94N", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Horgen 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00214", + "bic": 
"UBSWCHZH88A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Gen\u00e8ve 2", + "short_name": "UBS Switzerland AG", + "bank_code": "00215", + "bic": "UBSWCHZH12C", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Lachen SZ", + "short_name": "UBS Switzerland AG", + "bank_code": "00216", + "bic": "UBSWCHZH88B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Kreuzlingen 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00217", + "bic": "UBSWCHZH82P", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Weinfelden", + "short_name": "UBS Switzerland AG", + "bank_code": "00219", + "bic": "UBSWCHZH85B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Buchs SG 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00220", + "bic": "UBSWCHZH94P", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - St. Moritz 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00221", + "bic": "UBSWCHZH75A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Wallisellen", + "short_name": "UBS Switzerland AG", + "bank_code": "00222", + "bic": "UBSWCHZH83B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Dietikon 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00223", + "bic": "UBSWCHZH89D", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00225", + "bic": "UBSWCHZH80G", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Porrentruy 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00226", + "bic": "UBSWCHZH29A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Thun", + "short_name": "UBS Switzerland AG", + "bank_code": "00227", + "bic": "UBSWCHZH36A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Nyon 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00228", + "bic": "UBSWCHZH12T", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00230", + "bic": "UBSWCHZH80A", + "country_code": "CH", + "primary": true + }, + { + "name": "UBS Switzerland AG - Aarau", + "short_name": "UBS Switzerland AG", + "bank_code": "00231", + "bic": "UBSWCHZH50A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Baden", + "short_name": "UBS Switzerland AG", + "bank_code": "00232", + "bic": "UBSWCHZH54A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Basel", + "short_name": "UBS Switzerland AG", + "bank_code": "00233", + "bic": "UBSWCHZH40A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Bellinzona", + "short_name": "UBS Switzerland AG", + "bank_code": "00234", + "bic": "UBSWCHZH65A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Bern 94", + "short_name": "UBS Switzerland AG", + "bank_code": "00235", + "bic": "UBSWCHZH30A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Chiasso 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00236", + "bic": "UBSWCHZH68B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland 
AG - Meilen", + "short_name": "UBS Switzerland AG", + "bank_code": "00238", + "bic": "UBSWCHZH87C", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Frauenfeld", + "short_name": "UBS Switzerland AG", + "bank_code": "00239", + "bic": "UBSWCHZH85A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Gen\u00e8ve 2", + "short_name": "UBS Switzerland AG", + "bank_code": "00240", + "bic": "UBSWCHZH12A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Interlaken", + "short_name": "UBS Switzerland AG", + "bank_code": "00241", + "bic": "UBSWCHZH38A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Chaux-de-Fonds, La", + "short_name": "UBS Switzerland AG", + "bank_code": "00242", + "bic": "UBSWCHZH23A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Lausanne", + "short_name": "UBS Switzerland AG", + "bank_code": "00243", + "bic": "UBSWCHZH10A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Lichtensteig", + "short_name": "UBS Switzerland AG", + "bank_code": "00244", + "bic": "UBSWCHZH96C", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Liestal", + "short_name": "UBS Switzerland AG", + "bank_code": "00245", + "bic": "UBSWCHZH44A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Locarno 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00246", + "bic": "UBSWCHZH66A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Lugano", + "short_name": "UBS Switzerland AG", + "bank_code": "00247", + "bic": "UBSWCHZH69A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Luzern", + "short_name": "UBS Switzerland AG", + "bank_code": "00248", + "bic": "UBSWCHZH60A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Montreux 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00249", + "bic": "UBSWCHZH18D", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Rapperswil SG", + "short_name": "UBS Switzerland AG", + "bank_code": "00250", + "bic": "UBSWCHZH86M", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00251", + "bic": "UBSWCHZH80H", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Rorschach", + "short_name": "UBS Switzerland AG", + "bank_code": "00252", + "bic": "UBSWCHZH94A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - R\u00fcti ZH", + "short_name": "UBS Switzerland AG", + "bank_code": "00253", + "bic": "UBSWCHZH86P", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - St. 
Gallen", + "short_name": "UBS Switzerland AG", + "bank_code": "00254", + "bic": "UBSWCHZH90A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Vevey 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00255", + "bic": "UBSWCHZH18A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Wil SG 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00256", + "bic": "UBSWCHZH95A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Winterthur", + "short_name": "UBS Switzerland AG", + "bank_code": "00257", + "bic": "UBSWCHZH84A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Wohlen AG 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00258", + "bic": "UBSWCHZH56B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Zollikon", + "short_name": "UBS Switzerland AG", + "bank_code": "00259", + "bic": "UBSWCHZH87B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Fribourg", + "short_name": "UBS Switzerland AG", + "bank_code": "00260", + "bic": "UBSWCHZH17A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Olten", + "short_name": "UBS Switzerland AG", + "bank_code": "00261", + "bic": "UBSWCHZH46A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Solothurn", + "short_name": "UBS Switzerland AG", + "bank_code": "00262", + "bic": "UBSWCHZH45A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Brig/Brigue", + "short_name": "UBS Switzerland AG", + "bank_code": "00263", + "bic": "UBSWCHZH39A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Martigny 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00264", + "bic": "UBSWCHZH19B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Sion/Sitten", + "short_name": "UBS Switzerland AG", + "bank_code": "00265", + "bic": "UBSWCHZH19E", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Bulle", + "short_name": "UBS Switzerland AG", + "bank_code": "00266", + "bic": "UBSWCHZH16C", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00267", + "bic": "UBSWCHZH80K", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Sierre/Siders", + "short_name": "UBS Switzerland AG", + "bank_code": "00268", + "bic": "UBSWCHZH39L", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00269", + "bic": "UBSWCHZH80M", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00270", + "bic": "UBSWCHZH80N", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00271", + "bic": "UBSWCHZH80A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Biel/Bienne", + "short_name": "UBS Switzerland AG", + "bank_code": "00272", + "bic": "UBSWCHZH25A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Zug", + "short_name": "UBS Switzerland AG", + "bank_code": "00273", + "bic": 
"UBSWCHZH63A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00274", + "bic": "UBSWCHZH80Q", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00275", + "bic": "UBSWCHZH80R", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Affoltern am Albis", + "short_name": "UBS Switzerland AG", + "bank_code": "00276", + "bic": "UBSWCHZH89E", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Monthey 2", + "short_name": "UBS Switzerland AG", + "bank_code": "00277", + "bic": "UBSWCHZH18L", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Kloten", + "short_name": "UBS Switzerland AG", + "bank_code": "00278", + "bic": "UBSWCHZH83A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Gen\u00e8ve 2", + "short_name": "UBS Switzerland AG", + "bank_code": "00279", + "bic": "UBSWCHZH12B", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Glattbrugg", + "short_name": "UBS Switzerland AG", + "bank_code": "00283", + "bic": "UBSWCHZH81H", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Glarus", + "short_name": "UBS Switzerland AG", + "bank_code": "00284", + "bic": "UBSWCHZH87D", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00285", + "bic": "UBSWCHZH80P", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00286", + "bic": "UBSWCHZH80A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Schaffhausen", + "short_name": "UBS Switzerland AG", + "bank_code": "00287", + "bic": "UBSWCHZH82A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Emmenbr\u00fccke 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00288", + "bic": "UBSWCHZH60G", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Neuch\u00e2tel", + "short_name": "UBS Switzerland AG", + "bank_code": "00290", + "bic": "UBSWCHZH20A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Regensdorf 1", + "short_name": "UBS Switzerland AG", + "bank_code": "00291", + "bic": "UBSWCHZH81A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Basel", + "short_name": "UBS Switzerland AG", + "bank_code": "00292", + "bic": "UBSWCHZH40M", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - R\u00fcschlikon", + "short_name": "UBS Switzerland AG", + "bank_code": "00293", + "bic": "UBSWCHZH88C", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Visp/Vi\u00e8ge", + "short_name": "UBS Switzerland AG", + "bank_code": "00294", + "bic": "UBSWCHZH39G", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - St. 
Margrethen SG", + "short_name": "UBS Switzerland AG", + "bank_code": "00295", + "bic": "UBSWCHZH94F", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Wetzikon ZH 2", + "short_name": "UBS Switzerland AG", + "bank_code": "00296", + "bic": "UBSWCHZH86Q", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Yverdon-les-Bains", + "short_name": "UBS Switzerland AG", + "bank_code": "00297", + "bic": "UBSWCHZH14A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "00298", + "bic": "UBSWCHZH80A", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS AG", + "short_name": "UBS AG", + "bank_code": "00315", + "bic": "UBSBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00700", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00702", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00703", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00704", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00705", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00706", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00708", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00709", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00710", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00711", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00712", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00713", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00714", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00715", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00716", + "bic": 
"ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00717", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00718", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00719", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00720", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00721", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00722", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00723", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00724", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00725", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00726", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00727", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00728", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00729", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00730", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00731", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00732", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00733", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00734", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00735", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false 
+ }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00736", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00737", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00738", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00739", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00740", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00741", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00742", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00743", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00744", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00745", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00746", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00747", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00748", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00749", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00750", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00751", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00752", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00753", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00754", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + 
"short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00755", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00756", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00757", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00758", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "00759", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Aargauische Kantonalbank", + "short_name": "Aargauische Kantonalbank", + "bank_code": "00761", + "bic": "KBAGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Appenzeller Kantonalbank", + "short_name": "Appenzeller Kantonalbank", + "bank_code": "00763", + "bic": "AIKACH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "00764", + "bic": "BSCTCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale du Valais", + "short_name": "Banque Cantonale du Valais", + "bank_code": "00765", + "bic": "BCVSCH2LXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale Neuch\u00e2teloise", + "short_name": "Banque Cantonale Neuch\u00e2teloise", + "bank_code": "00766", + "bic": "BCNNCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "00767", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "00768", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Basellandschaftliche Kantonalbank", + "short_name": "Basellandschaftliche Kantonalbank", + "bank_code": "00769", + "bic": "BLKBCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Basler Kantonalbank", + "short_name": "Basler Kantonalbank", + "bank_code": "00770", + "bic": "BKBBCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Glarner Kantonalbank", + "short_name": "Glarner Kantonalbank", + "bank_code": "00773", + "bic": "GLKBCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Graub\u00fcndner Kantonalbank", + "short_name": "Graub\u00fcndner Kantonalbank", + "bank_code": "00774", + "bic": "GRKBCH2270A", + "country_code": "CH", + "primary": true + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "00777", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "00778", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": true + }, + { + "name": "Nidwaldner Kantonalbank", + "short_name": "Nidwaldner Kantonalbank", + "bank_code": "00779", + "bic": "NIKACH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Obwaldner Kantonalbank", + "short_name": "Obwaldner Kantonalbank", + 
"bank_code": "00780", + "bic": "OBWKCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "00781", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Schaffhauser Kantonalbank", + "short_name": "Schaffhauser Kantonalbank", + "bank_code": "00782", + "bic": "SHKBCH2SXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Thurgauer Kantonalbank", + "short_name": "Thurgauer Kantonalbank", + "bank_code": "00784", + "bic": "KBTGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Urner Kantonalbank", + "short_name": "Urner Kantonalbank", + "bank_code": "00785", + "bic": "URKNCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Zuger Kantonalbank", + "short_name": "Zuger Kantonalbank", + "bank_code": "00787", + "bic": "KBZGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale de Gen\u00e8ve", + "short_name": "Banque Cantonale de Gen\u00e8ve", + "bank_code": "00788", + "bic": "BCGECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cantonale du Jura SA", + "short_name": "Banque Cantonale du Jura SA", + "bank_code": "00789", + "bic": "BCJUCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Berner Kantonalbank AG", + "short_name": "Berner Kantonalbank AG", + "bank_code": "00790", + "bic": "KBBECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Credit Suisse (Schweiz) AG", + "short_name": "Credit Suisse (Schweiz) AG", + "bank_code": "04835", + "bic": "CRESCHZZ80A", + "country_code": "CH", + "primary": true + }, + { + "name": "Credit Suisse AG", + "short_name": "Credit Suisse AG", + "bank_code": "04866", + "bic": "CRESCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Cr\u00e9dit Mutuel de la Vall\u00e9e SA", + "short_name": "Cr\u00e9dit Mutuel de la Vall\u00e9e SA", + "bank_code": "06180", + "bic": "RBABCH22180", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne de Cossonay soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne de Cossonay soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "06182", + "bic": "RBABCH22182", + "country_code": "CH", + "primary": true + }, + { + "name": "Sparkasse Sense", + "short_name": "Sparkasse Sense", + "bank_code": "06186", + "bic": "RBABCH22186", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne Courtelary SA", + "short_name": "Caisse d'Epargne Courtelary SA", + "bank_code": "06240", + "bic": "RBABCH22240", + "country_code": "CH", + "primary": true + }, + { + "name": "Valiant Bank AG", + "short_name": "Valiant Bank AG", + "bank_code": "06300", + "bic": "VABECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bernerland Bank AG", + "short_name": "Bernerland Bank AG", + "bank_code": "06313", + "bic": "RBABCH22313", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis AG", + "short_name": "Clientis AG", + "bank_code": "06336", + "bic": "RBABCH22CLI", + "country_code": "CH", + "primary": true + }, + { + "name": "SB Saanen Bank AG", + "short_name": "SB Saanen Bank AG", + "bank_code": "06342", + "bic": "RBABCH22342", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank SLM AG", + "short_name": "Bank SLM AG", + "bank_code": "06363", + "bic": "RBABCH22363", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar+Leihkasse 
Riggisberg AG", + "short_name": "Spar+Leihkasse Riggisberg AG", + "bank_code": "06374", + "bic": "RBABCH22374", + "country_code": "CH", + "primary": true + }, + { + "name": "Burgerliche Ersparniskasse Bern Genossenschaft", + "short_name": "Burgerliche Ersparniskasse Bern Genossenschaft", + "bank_code": "06382", + "bic": "RBABCH22382", + "country_code": "CH", + "primary": true + }, + { + "name": "Ersparniskasse Affoltern i. E. AG", + "short_name": "Ersparniskasse Affoltern i. E. AG", + "bank_code": "06387", + "bic": "RBABCH22387", + "country_code": "CH", + "primary": true + }, + { + "name": "Entris Banking AG", + "short_name": "Entris Banking AG", + "bank_code": "06395", + "bic": "RBABCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Sparkasse Oftringen Genossenschaft", + "short_name": "Clientis Sparkasse Oftringen Genossenschaft", + "bank_code": "06428", + "bic": "RBABCH22428", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Bank im Thal AG", + "short_name": "Clientis Bank im Thal AG", + "bank_code": "06434", + "bic": "RBABCH22434", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Oberaargau AG", + "short_name": "Bank Oberaargau AG", + "bank_code": "06450", + "bic": "RBABCH22450", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Bank Aareland AG", + "short_name": "Clientis Bank Aareland AG", + "bank_code": "06575", + "bic": "RBABCH22575", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Leerau Genossenschaft", + "short_name": "Bank Leerau Genossenschaft", + "bank_code": "06588", + "bic": "RBABCH22588", + "country_code": "CH", + "primary": true + }, + { + "name": "Sparkasse Schwyz AG", + "short_name": "Sparkasse Schwyz AG", + "bank_code": "06633", + "bic": "RBABCH22633", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis EB Entlebucher Bank AG", + "short_name": "Clientis EB Entlebucher Bank AG", + "bank_code": "06670", + "bic": "RBABCH22670", + "country_code": "CH", + "primary": true + }, + { + "name": "GRB Glarner Regionalbank Genossenschaft", + "short_name": "GRB Glarner Regionalbank Genossenschaft", + "bank_code": "06807", + "bic": "RBABCH22807", + "country_code": "CH", + "primary": true + }, + { + "name": "Sparhafen Bank AG", + "short_name": "Sparhafen Bank AG", + "bank_code": "06808", + "bic": "BSZHCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "06814", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": true + }, + { + "name": "BANK ZIMMERBERG AG", + "short_name": "BANK ZIMMERBERG AG", + "bank_code": "06824", + "bic": "RBABCH22824", + "country_code": "CH", + "primary": true + }, + { + "name": "Regiobank M\u00e4nnedorf AG", + "short_name": "Regiobank M\u00e4nnedorf AG", + "bank_code": "06828", + "bic": "RBABCH22828", + "country_code": "CH", + "primary": true + }, + { + "name": "Lienhardt & Partner Privatbank Z\u00fcrich AG", + "short_name": "Lienhardt & Partner Privatbank Z\u00fcrich AG", + "bank_code": "06830", + "bic": "RBABCH22830", + "country_code": "CH", + "primary": true + }, + { + "name": "Ersparniskasse Schaffhausen AG", + "short_name": "Ersparniskasse Schaffhausen AG", + "bank_code": "06835", + "bic": "RBABCH22835", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Avera Genossenschaft", + "short_name": "Bank Avera Genossenschaft", + "bank_code": "06850", + "bic": "RBABCH22850", + "country_code": "CH", 
+ "primary": true + }, + { + "name": "BS Bank Schaffhausen AG", + "short_name": "BS Bank Schaffhausen AG", + "bank_code": "06858", + "bic": "RBABCH22858", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar- und Leihkasse Thayngen AG", + "short_name": "Spar- und Leihkasse Thayngen AG", + "bank_code": "06866", + "bic": "RBABCH22866", + "country_code": "CH", + "primary": true + }, + { + "name": "Leihkasse Stammheim AG", + "short_name": "Leihkasse Stammheim AG", + "bank_code": "06875", + "bic": "RBABCH22875", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrcher Landbank AG", + "short_name": "Z\u00fcrcher Landbank AG", + "bank_code": "06877", + "bic": "RBABCH22877", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank BSU Genossenschaft", + "short_name": "Bank BSU Genossenschaft", + "bank_code": "06888", + "bic": "RBABCH22888", + "country_code": "CH", + "primary": true + }, + { + "name": "acrevis Bank AG", + "short_name": "acrevis Bank AG", + "bank_code": "06900", + "bic": "ACRGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Alpha RHEINTAL Bank AG", + "short_name": "Alpha RHEINTAL Bank AG", + "bank_code": "06920", + "bic": "ARBHCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Bank Oberuzwil AG", + "short_name": "Clientis Bank Oberuzwil AG", + "bank_code": "06935", + "bic": "RBABCH22935", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Bank Toggenburg AG", + "short_name": "Clientis Bank Toggenburg AG", + "bank_code": "06955", + "bic": "RBABCH22955", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank in Zuzwil", + "short_name": "Bank in Zuzwil", + "bank_code": "06964", + "bic": "RBABCH22964", + "country_code": "CH", + "primary": true + }, + { + "name": "Clientis Bank Thur Genossenschaft", + "short_name": "Clientis Bank Thur Genossenschaft", + "bank_code": "06977", + "bic": "RBABCH22977", + "country_code": "CH", + "primary": true + }, + { + "name": "Biene Bank im Rheintal Genossenschaft", + "short_name": "Biene Bank im Rheintal Genossenschaft", + "bank_code": "06980", + "bic": "RBABCH22980", + "country_code": "CH", + "primary": true + }, + { + "name": "Banco Santander International SA", + "short_name": "Banco Santander International SA", + "bank_code": "08235", + "bic": "BSCHCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Credit Europe Bank (Suisse) SA", + "short_name": "Credit Europe Bank (Suisse) SA", + "bank_code": "08236", + "bic": "FSUICHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Investec Bank (Switzerland) AG", + "short_name": "Investec Bank (Switzerland) AG", + "bank_code": "08238", + "bic": "IVESCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Cr\u00e9dit Agricole next Bank (Suisse) SA", + "short_name": "Cr\u00e9dit Agricole next Bank (Suisse) SA", + "bank_code": "08243", + "bic": "AGRICHGXXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Kaleido Privatbank AG", + "short_name": "Kaleido Privatbank AG", + "bank_code": "08245", + "bic": "ANPRCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "One Swiss Bank SA", + "short_name": "One Swiss Bank SA", + "bank_code": "08246", + "bic": "BQBHCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Van Lanschot Kempen (Schweiz) AG", + "short_name": "Van Lanschot Kempen (Schweiz) AG", + "bank_code": "08248", + "bic": "FVLBCHZZXXX", + "country_code": "CH", + "primary": true + }, + 
{ + "name": "Banca Popolare di Sondrio (Suisse) SA", + "short_name": "Banca Popolare di Sondrio (Suisse) SA", + "bank_code": "08252", + "bic": "POSOCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Internationale \u00e0 Luxembourg (Suisse) SA", + "short_name": "Banque Internationale \u00e0 Luxembourg (Suisse) SA", + "bank_code": "08268", + "bic": "BILSCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BBVA SA", + "short_name": "BBVA SA", + "bank_code": "08270", + "bic": "BBVACHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Goldman Sachs Bank AG", + "short_name": "Goldman Sachs Bank AG", + "bank_code": "08278", + "bic": "GOLDCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Frankfurter Bankgesellschaft (Schweiz) AG", + "short_name": "Frankfurter Bankgesellschaft (Schweiz) AG", + "bank_code": "08288", + "bic": "FBGSCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "ODDO BHF (Schweiz) AG", + "short_name": "ODDO BHF (Schweiz) AG", + "bank_code": "08289", + "bic": "BHFBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "08296", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar- und Leihkasse Wynigen AG", + "short_name": "Spar- und Leihkasse Wynigen AG", + "bank_code": "08300", + "bic": "SLWYCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Hypothekarbank Lenzburg AG", + "short_name": "Hypothekarbank Lenzburg AG", + "bank_code": "08307", + "bic": "HYPLCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque SYZ SA", + "short_name": "Banque SYZ SA", + "bank_code": "08309", + "bic": "SYCOCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne de Nyon soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne de Nyon soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "08326", + "bic": "CAGYCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne d'Aubonne Soci\u00e9te coop\u00e9rative", + "short_name": "Caisse d'Epargne d'Aubonne Soci\u00e9te coop\u00e9rative", + "bank_code": "08327", + "bic": "RBABCH22CEA", + "country_code": "CH", + "primary": true + }, + { + "name": "Ersparniskasse Speicher", + "short_name": "Ersparniskasse Speicher", + "bank_code": "08329", + "bic": "ERSCCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Baloise Bank AG", + "short_name": "Baloise Bank AG", + "bank_code": "08334", + "bic": "KBSOCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "LGT Bank (Schweiz) AG", + "short_name": "LGT Bank (Schweiz) AG", + "bank_code": "08335", + "bic": "BLFLCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Alg\u00e9rienne du Commerce Ext\u00e9rieur SA", + "short_name": "Banque Alg\u00e9rienne du Commerce Ext\u00e9rieur SA", + "bank_code": "08346", + "bic": "AEXTCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne et de Cr\u00e9dit Mutuel de Chermignon", + "short_name": "Caisse d'Epargne et de Cr\u00e9dit Mutuel de Chermignon", + "bank_code": "08348", + "bic": "CDEMCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Caisse d'Epargne Riviera, soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne Riviera, soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "08349", + "bic": 
"CDDVCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Loomis Schweiz AG", + "short_name": "Loomis Schweiz AG", + "bank_code": "08350", + "bic": "SEPOCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SOS Cash & Value SA", + "short_name": "SOS Cash & Value SA", + "bank_code": "08351", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Loomis Schweiz SA", + "short_name": "Loomis Schweiz SA", + "bank_code": "08352", + "bic": "VIMMCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "China Construction Bank Corporation, Beijing, Swiss Branch", + "short_name": "China Construction Bank Corporation, Beijing, Swiss Branch", + "bank_code": "08373", + "bic": "PCBCCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "CACEIS Bank, Montrouge, succursale de Nyon / Suisse", + "short_name": "CACEIS Bank, Montrouge, succursale de Nyon / Suisse", + "bank_code": "08374", + "bic": "ISAECH2NXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Vontobel Swiss Financial Advisers AG", + "short_name": "Vontobel Swiss Financial Advisers AG", + "bank_code": "08377", + "bic": "VSFACHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "NPB Neue Privat Bank AG", + "short_name": "NPB Neue Privat Bank AG", + "bank_code": "08378", + "bic": "NEPICHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "QNB (Suisse) SA", + "short_name": "QNB (Suisse) SA", + "bank_code": "08379", + "bic": "QNBACHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "J.P. Morgan (Suisse) SA", + "short_name": "J.P. Morgan (Suisse) SA", + "bank_code": "08380", + "bic": "MGTCCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Reyl & Cie S.A.", + "short_name": "Reyl & Cie S.A.", + "bank_code": "08384", + "bic": "REYLCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "NBK Banque Priv\u00e9e (Suisse) SA", + "short_name": "NBK Banque Priv\u00e9e (Suisse) SA", + "bank_code": "08385", + "bic": "NBOKCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "ING Bank N.V., Amsterdam, succursale de Lancy/Gen\u00e8ve", + "short_name": "ING Bank N.V., Amsterdam, succursale de Lancy/Gen\u00e8ve", + "bank_code": "08387", + "bic": "BBRUCHGTXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Eric Sturdza SA", + "short_name": "Banque Eric Sturdza SA", + "bank_code": "08388", + "bic": "BABRCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bezirks-Sparkasse Dielsdorf Genossenschaft", + "short_name": "Bezirks-Sparkasse Dielsdorf Genossenschaft", + "bank_code": "08389", + "bic": "BZSDCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Alternative Bank Schweiz AG", + "short_name": "Alternative Bank Schweiz AG", + "bank_code": "08390", + "bic": "ABSOCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "WIR Bank Genossenschaft", + "short_name": "WIR Bank Genossenschaft", + "bank_code": "08391", + "bic": "WIRBCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Freie Gemeinschaftsbank Genossenschaft", + "short_name": "Freie Gemeinschaftsbank Genossenschaft", + "bank_code": "08392", + "bic": "FRGGCHB1XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank EKI Genossenschaft", + "short_name": "Bank EKI Genossenschaft", + "bank_code": "08393", + "bic": "EKIICH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank EEK 
AG", + "short_name": "Bank EEK AG", + "bank_code": "08394", + "bic": "EEKBCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar- und Leihkasse Bucheggberg", + "short_name": "Spar- und Leihkasse Bucheggberg", + "bank_code": "08395", + "bic": "SLBUCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar- und Leihkasse G\u00fcrbetal AG", + "short_name": "Spar- und Leihkasse G\u00fcrbetal AG", + "bank_code": "08396", + "bic": "SLGUCH2MXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "DC Bank Deposito-Cassa der Stadt Bern", + "short_name": "DC Bank Deposito-Cassa der Stadt Bern", + "bank_code": "08397", + "bic": "RBABCH22DCB", + "country_code": "CH", + "primary": true + }, + { + "name": "VZ Depotbank AG", + "short_name": "VZ Depotbank AG", + "bank_code": "08398", + "bic": "VZDBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Migros Bank AG", + "short_name": "Migros Bank AG", + "bank_code": "08401", + "bic": "MIGRCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08440", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08441", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08442", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08443", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08444", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08445", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08446", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08447", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08448", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08449", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08450", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08451", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08452", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08453", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08454", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08455", + "bic": "BCLRCHBBXXX", + 
"country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08456", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "08457", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Corn\u00e8r Banca SA", + "short_name": "Corn\u00e8r Banca SA", + "bank_code": "08490", + "bic": "CBLUCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banca Credinvest SA", + "short_name": "Banca Credinvest SA", + "bank_code": "08495", + "bic": "BCRECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banca Aletti & C. (Suisse) SA", + "short_name": "Banca Aletti & C. (Suisse) SA", + "bank_code": "08496", + "bic": "VRBPCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SAXO BANK (SCHWEIZ) AG", + "short_name": "SAXO BANK (SCHWEIZ) AG", + "bank_code": "08497", + "bic": "SAXOCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Axion SWISS Bank SA", + "short_name": "Axion SWISS Bank SA", + "bank_code": "08498", + "bic": "UNCECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "FAB Private Bank (Suisse) SA", + "short_name": "FAB Private Bank (Suisse) SA", + "bank_code": "08499", + "bic": "NBPSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Mercantil Bank (Schweiz) AG", + "short_name": "Mercantil Bank (Schweiz) AG", + "bank_code": "08509", + "bic": "BAMRCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Julius B\u00e4r & Co AG", + "short_name": "Bank Julius B\u00e4r & Co AG", + "bank_code": "08515", + "bic": "BAERCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Gantrisch Genossenschaft", + "short_name": "Bank Gantrisch Genossenschaft", + "bank_code": "08518", + "bic": "BGAGCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Ersparniskasse R\u00fceggisberg Genossenschaft", + "short_name": "Ersparniskasse R\u00fceggisberg Genossenschaft", + "bank_code": "08519", + "bic": "EKRUCH21XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BBO Bank Brienz Oberhasli AG", + "short_name": "BBO Bank Brienz Oberhasli AG", + "bank_code": "08521", + "bic": "BBOBCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Societ\u00e0 Bancaria Ticinese", + "short_name": "Societ\u00e0 Bancaria Ticinese", + "bank_code": "08522", + "bic": "SBTICH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank f\u00fcr Tirol und Vorarlberg Aktiengesellschaft, Innsbruck", + "short_name": "Bank f\u00fcr Tirol und Vorarlberg Aktiengesellschaft, Innsbruck", + "bank_code": "08525", + "bic": "BTVACH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Privatbank IHAG Z\u00fcrich AG", + "short_name": "Privatbank IHAG Z\u00fcrich AG", + "bank_code": "08528", + "bic": "IHZUCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SELVI & Cie SA", + "short_name": "SELVI & Cie SA", + "bank_code": "08533", + "bic": "SELVCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "VP Bank (Schweiz) AG", + "short_name": "VP Bank (Schweiz) AG", + "bank_code": "08534", + "bic": "VPBVCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Internationale de Commerce-BRED (Suisse) SA", + "short_name": "Banque Internationale de Commerce-BRED (Suisse) SA", + "bank_code": 
"08537", + "bic": "BICFCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bergos AG", + "short_name": "Bergos AG", + "bank_code": "08539", + "bic": "BEGOCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BANQUE BANORIENT (SUISSE) SA", + "short_name": "BANQUE BANORIENT (SUISSE) SA", + "bank_code": "08540", + "bic": "BLOMCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Scobag Privatbank AG", + "short_name": "Scobag Privatbank AG", + "bank_code": "08543", + "bic": "SCOPCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Hyposwiss Private Bank Gen\u00e8ve SA", + "short_name": "Hyposwiss Private Bank Gen\u00e8ve SA", + "bank_code": "08548", + "bic": "CCIECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BZ Bank Aktiengesellschaft", + "short_name": "BZ Bank Aktiengesellschaft", + "bank_code": "08553", + "bic": "BZBKCH2WXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Privatbank Von Graffenried AG", + "short_name": "Privatbank Von Graffenried AG", + "bank_code": "08564", + "bic": "GRAFCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Dreyfus S\u00f6hne & Cie AG, Banquiers", + "short_name": "Dreyfus S\u00f6hne & Cie AG, Banquiers", + "bank_code": "08565", + "bic": "DREYCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Bonh\u00f4te & Cie SA", + "short_name": "Banque Bonh\u00f4te & Cie SA", + "bank_code": "08570", + "bic": "BONHCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Edmond de Rothschild (Suisse) S.A", + "short_name": "Edmond de Rothschild (Suisse) S.A", + "bank_code": "08571", + "bic": "PRIBCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Trafina Privatbank AG", + "short_name": "Trafina Privatbank AG", + "bank_code": "08572", + "bic": "TRAPCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Maerki, Baumann & Co. AG", + "short_name": "Maerki, Baumann & Co. AG", + "bank_code": "08573", + "bic": "MAEBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BANKMED SUISSE SA", + "short_name": "BANKMED SUISSE SA", + "bank_code": "08574", + "bic": "MEDSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale Private Banking (Suisse) SA", + "short_name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale Private Banking (Suisse) SA", + "bank_code": "08582", + "bic": "RUEGCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banca del Ceresio SA", + "short_name": "Banca del Ceresio SA", + "bank_code": "08584", + "bic": "BACECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banca Zarattini & Co. SA", + "short_name": "Banca Zarattini & Co. 
SA", + "bank_code": "08609", + "bic": "EUBACH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "DZ PRIVATBANK (Schweiz) AG", + "short_name": "DZ PRIVATBANK (Schweiz) AG", + "bank_code": "08615", + "bic": "GENOCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "EFG Bank European Financial Group SA", + "short_name": "EFG Bank European Financial Group SA", + "bank_code": "08616", + "bic": "EFGBCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Nomura Bank (Switzerland) Ltd", + "short_name": "Nomura Bank (Switzerland) Ltd", + "bank_code": "08619", + "bic": "NBSZCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Havilland (Liechtenstein) AG, Vaduz, Zweig. Z\u00fcrich", + "short_name": "Banque Havilland (Liechtenstein) AG, Vaduz, Zweig. Z\u00fcrich", + "bank_code": "08623", + "bic": "BPGECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Audi (Suisse) SA", + "short_name": "Banque Audi (Suisse) SA", + "bank_code": "08624", + "bic": "AUDSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque de Commerce et de Placements SA", + "short_name": "Banque de Commerce et de Placements SA", + "bank_code": "08629", + "bic": "BPCPCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Schroder & Co Bank AG", + "short_name": "Schroder & Co Bank AG", + "bank_code": "08634", + "bic": "BJHSCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "JPMorgan Chase Bank, National Association, Columbus", + "short_name": "JPMorgan Chase Bank, National Association, Columbus", + "bank_code": "08635", + "bic": "CHASCHGXXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Citibank (Switzerland) AG", + "short_name": "Citibank (Switzerland) AG", + "bank_code": "08638", + "bic": "CBSWCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Union Bancaire Priv\u00e9e, UBP SA", + "short_name": "Union Bancaire Priv\u00e9e, UBP SA", + "bank_code": "08657", + "bic": "UBPGCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Deutsche Bank (Suisse) S.A.", + "short_name": "Deutsche Bank (Suisse) S.A.", + "bank_code": "08659", + "bic": "DEUTCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "GPB (Schweiz) AG", + "short_name": "GPB (Schweiz) AG", + "bank_code": "08660", + "bic": "RKBZCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Rothschild & Co Bank AG", + "short_name": "Rothschild & Co Bank AG", + "bank_code": "08661", + "bic": "ROTACHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "PKB PRIVATE BANK SA", + "short_name": "PKB PRIVATE BANK SA", + "bank_code": "08663", + "bic": "PKBSCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "PKB PRIVATBANK AG", + "short_name": "PKB PRIVATBANK AG", + "bank_code": "08664", + "bic": "PKBSCH2269A", + "country_code": "CH", + "primary": false + }, + { + "name": "PKB PRIVATBANK SA", + "short_name": "PKB PRIVATBANK SA", + "bank_code": "08665", + "bic": "PKBSCH2212A", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca del Sempione SA", + "short_name": "Banca del Sempione SA", + "bank_code": "08666", + "bic": "BASECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "EFG Bank AG", + "short_name": "EFG Bank AG", + "bank_code": "08667", + "bic": "EFGBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BNP Paribas (Suisse) SA", + "short_name": "BNP 
Paribas (Suisse) SA", + "bank_code": "08686", + "bic": "BPPBCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "HSBC Private Bank (Suisse) SA", + "short_name": "HSBC Private Bank (Suisse) SA", + "bank_code": "08689", + "bic": "BLICCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Citibank N.A., Sioux Falls", + "short_name": "Citibank N.A., Sioux Falls", + "bank_code": "08700", + "bic": "CITICHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "HSBC Bank plc, London, Zweigniederlassung Z\u00fcrich", + "short_name": "HSBC Bank plc, London, Zweigniederlassung Z\u00fcrich", + "bank_code": "08701", + "bic": "HSBCCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Leonteq Securities AG", + "short_name": "Leonteq Securities AG", + "bank_code": "08702", + "bic": "EFGFCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "bank zweiplus ag", + "short_name": "bank zweiplus ag", + "bank_code": "08703", + "bic": "BZPLCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "AEK BANK 1826 Genossenschaft", + "short_name": "AEK BANK 1826 Genossenschaft", + "bank_code": "08704", + "bic": "AEKTCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale", + "short_name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale", + "bank_code": "08705", + "bic": "SGABCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank CIC (SCHWEIZ) AG", + "short_name": "Bank CIC (SCHWEIZ) AG", + "bank_code": "08710", + "bic": "CIALCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Arab Bank (Switzerland) Ltd.", + "short_name": "Arab Bank (Switzerland) Ltd.", + "bank_code": "08719", + "bic": "ARBSCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Gonet & Cie SA", + "short_name": "Gonet & Cie SA", + "bank_code": "08721", + "bic": "GONECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank of America Europe DAC, Dublin, Zurich Branch", + "short_name": "Bank of America Europe DAC, Dublin, Zurich Branch", + "bank_code": "08726", + "bic": "BOFACH2XXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Cramer & Cie SA", + "short_name": "Banque Cramer & Cie SA", + "bank_code": "08727", + "bic": "CRAMCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "LLB (Schweiz) AG", + "short_name": "LLB (Schweiz) AG", + "bank_code": "08731", + "bic": "LINSCH23XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Thalwil Genossenschaft", + "short_name": "Bank Thalwil Genossenschaft", + "bank_code": "08733", + "bic": "BKTHCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Barclays Bank (Suisse) SA", + "short_name": "Barclays Bank (Suisse) SA", + "bank_code": "08735", + "bic": "BARCCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Thaler SA", + "short_name": "Banque Thaler SA", + "bank_code": "08737", + "bic": "THALCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "CA Indosuez (Switzerland) SA", + "short_name": "CA Indosuez (Switzerland) SA", + "bank_code": "08740", + "bic": "AGRICHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CA Indosuez (Switzerland) SA", + "short_name": "CA Indosuez (Switzerland) SA", + "bank_code": "08741", + "bic": "AGRICHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "CA Indosuez (Switzerland) SA", + "short_name": "CA Indosuez 
(Switzerland) SA", + "bank_code": "08742", + "bic": "AGRICHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CA Indosuez (Switzerland) SA", + "short_name": "CA Indosuez (Switzerland) SA", + "bank_code": "08743", + "bic": "AGRICHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank J. Safra Sarasin AG", + "short_name": "Bank J. Safra Sarasin AG", + "bank_code": "08750", + "bic": "SARACHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank J. Safra Sarasin AG", + "short_name": "Bank J. Safra Sarasin AG", + "bank_code": "08751", + "bic": "SARACHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank J. Safra Sarasin AG", + "short_name": "Bank J. Safra Sarasin AG", + "bank_code": "08752", + "bic": "SARACHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Pictet & Cie SA", + "short_name": "Banque Pictet & Cie SA", + "bank_code": "08755", + "bic": "PICTCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Baumann & Cie KmG", + "short_name": "Baumann & Cie KmG", + "bank_code": "08756", + "bic": "BAUMCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank Vontobel AG", + "short_name": "Bank Vontobel AG", + "bank_code": "08757", + "bic": "VONTCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Lombard Odier & Cie SA", + "short_name": "Banque Lombard Odier & Cie SA", + "bank_code": "08760", + "bic": "LOCYCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Reichmuth & Co.", + "short_name": "Reichmuth & Co.", + "bank_code": "08761", + "bic": "REICCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "CBH - Compagnie Bancaire Helv\u00e9tique SA", + "short_name": "CBH - Compagnie Bancaire Helv\u00e9tique SA", + "bank_code": "08762", + "bic": "BSSACHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bordier & Cie SCmA", + "short_name": "Bordier & Cie SCmA", + "bank_code": "08767", + "bic": "BORDCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Mirabaud & Cie SA", + "short_name": "Mirabaud & Cie SA", + "bank_code": "08770", + "bic": "MIRACHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "E. Gutzwiller & Cie Banquiers", + "short_name": "E. 
Gutzwiller & Cie Banquiers", + "bank_code": "08775", + "bic": "GUTZCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Piguet Galland & Cie SA", + "short_name": "Piguet Galland & Cie SA", + "bank_code": "08777", + "bic": "PIGUCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Rahn+Bodmer Co.", + "short_name": "Rahn+Bodmer Co.", + "bank_code": "08779", + "bic": "RAHNCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Deutsche Bank AG Z\u00fcrich Branch", + "short_name": "Deutsche Bank AG Z\u00fcrich Branch", + "bank_code": "08780", + "bic": "DEUTCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Swissquote Bank SA", + "short_name": "Swissquote Bank SA", + "bank_code": "08781", + "bic": "SWQBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Spar- und Leihkasse Frutigen AG", + "short_name": "Spar- und Leihkasse Frutigen AG", + "bank_code": "08784", + "bic": "SLFFCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Regiobank Solothurn AG", + "short_name": "Regiobank Solothurn AG", + "bank_code": "08785", + "bic": "RSOSCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Pfandbriefbank schweizerischer Hypothekarinstitute AG", + "short_name": "Pfandbriefbank schweizerischer Hypothekarinstitute AG", + "bank_code": "08787", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "BANQUE HERITAGE SA", + "short_name": "BANQUE HERITAGE SA", + "bank_code": "08788", + "bic": "HFTCCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Habib Bank AG Z\u00fcrich", + "short_name": "Habib Bank AG Z\u00fcrich", + "bank_code": "08789", + "bic": "HBZUCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BNP PARIBAS, Paris, succursale de Zurich", + "short_name": "BNP PARIBAS, Paris, succursale de Zurich", + "bank_code": "08792", + "bic": "PARBCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "IBKR Financial Services AG", + "short_name": "IBKR Financial Services AG", + "bank_code": "08797", + "bic": "TMBECH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "InCore Bank AG", + "short_name": "InCore Bank AG", + "bank_code": "08799", + "bic": "INCOCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Tellco AG", + "short_name": "Tellco AG", + "bank_code": "08820", + "bic": "DOMICHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Hypo Vorarlberg Bank AG, Bregenz, Zweigniederl., St. Gallen", + "short_name": "Hypo Vorarlberg Bank AG, Bregenz, Zweigniederl., St. Gallen", + "bank_code": "08821", + "bic": "RBABCH22VLH", + "country_code": "CH", + "primary": true + }, + { + "name": "CIM Banque SA", + "short_name": "CIM Banque SA", + "bank_code": "08822", + "bic": "CIMMCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "TradeXBank AG", + "short_name": "TradeXBank AG", + "bank_code": "08825", + "bic": "SLBZCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "S.P. Hinduja Banque Priv\u00e9e SA", + "short_name": "S.P. 
Hinduja Banque Priv\u00e9e SA", + "bank_code": "08827", + "bic": "ABSGCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "ABANCA CORPORACION BANCARIA S.A., BETANZOS", + "short_name": "ABANCA CORPORACION BANCARIA S.A., BETANZOS", + "bank_code": "08831", + "bic": "CAGLCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "COMMERZBANK AG FF, Zweigniederlassung Z\u00fcrich", + "short_name": "COMMERZBANK AG FF, Zweigniederlassung Z\u00fcrich", + "bank_code": "08836", + "bic": "COBACHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque du L\u00e9man SA", + "short_name": "Banque du L\u00e9man SA", + "bank_code": "08838", + "bic": "BLEMCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Quilvest (Switzerland) Ltd.", + "short_name": "Quilvest (Switzerland) Ltd.", + "bank_code": "08839", + "bic": "QVCHCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banco Ita\u00fa (Suisse) SA", + "short_name": "Banco Ita\u00fa (Suisse) SA", + "bank_code": "08841", + "bic": "ITAUCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Globalance Bank AG", + "short_name": "Globalance Bank AG", + "bank_code": "08842", + "bic": "GLBNCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Dukascopy Bank SA", + "short_name": "Dukascopy Bank SA", + "bank_code": "08843", + "bic": "DUBACHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Helvetische Bank AG", + "short_name": "Helvetische Bank AG", + "bank_code": "08845", + "bic": "SFBFCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Aquila AG", + "short_name": "Aquila AG", + "bank_code": "08846", + "bic": "AQULCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "IG Bank S.A.", + "short_name": "IG Bank S.A.", + "bank_code": "08848", + "bic": "IGBKCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "compenswiss (Fonds de compensation AVS/AI/APG)", + "short_name": "compenswiss (Fonds de compensation AVS/AI/APG)", + "bank_code": "08850", + "bic": "FAVSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SUVA", + "short_name": "SUVA", + "bank_code": "08851", + "bic": "SUAACH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Swiss Life AG (KV, geb. Verm\u00f6gen)", + "short_name": "Swiss Life AG (KV, geb. 
Verm\u00f6gen)", + "bank_code": "08852", + "bic": "SLAMCHZZKV0", + "country_code": "CH", + "primary": true + }, + { + "name": "Schweizerische R\u00fcckversicherungs-Gesellschaft AG", + "short_name": "Schweizerische R\u00fcckversicherungs-Gesellschaft AG", + "bank_code": "08853", + "bic": "SWRECHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "AXA Leben AG", + "short_name": "AXA Leben AG", + "bank_code": "08854", + "bic": "AXIPCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Zurich Insurance Company Ltd", + "short_name": "Zurich Insurance Company Ltd", + "bank_code": "08855", + "bic": "ZURICHZFXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Baloise Leben AG", + "short_name": "Baloise Leben AG", + "bank_code": "08856", + "bic": "BAAMCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Baloise Versicherung AG", + "short_name": "Baloise Versicherung AG", + "bank_code": "08858", + "bic": "BAAMCHBBBVG", + "country_code": "CH", + "primary": true + }, + { + "name": "Schweizerische Mobiliar Versicherungsgesellschaft AG", + "short_name": "Schweizerische Mobiliar Versicherungsgesellschaft AG", + "bank_code": "08859", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "CACEIS Investor Services Bank S.A.", + "short_name": "CACEIS Investor Services Bank S.A.", + "bank_code": "08863", + "bic": "FETACHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Remaco Asset Management AG", + "short_name": "Remaco Asset Management AG", + "bank_code": "08865", + "bic": "RMCOCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "LF Finance (Suisse) SA", + "short_name": "LF Finance (Suisse) SA", + "bank_code": "08866", + "bic": "LFFSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "State Street Bank International GmbH, M\u00fcnchen, ZN Z\u00fcrich", + "short_name": "State Street Bank International GmbH, M\u00fcnchen, ZN Z\u00fcrich", + "bank_code": "08867", + "bic": "SSBECHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Delen (Suisse) SA", + "short_name": "Delen (Suisse) SA", + "bank_code": "08868", + "bic": "DELECHG2XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00e4hringer Privatbank AG", + "short_name": "Z\u00e4hringer Privatbank AG", + "bank_code": "08871", + "bic": "ZAPRCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SIX SIS AG", + "short_name": "SIX SIS AG", + "bank_code": "08880", + "bic": "INSECHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SIX SIS AG", + "short_name": "SIX SIS AG", + "bank_code": "08881", + "bic": "INSECHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "SIX SIS AG", + "short_name": "SIX SIS AG", + "bank_code": "08887", + "bic": "INSECHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparkasse Bundespersonal", + "short_name": "Sparkasse Bundespersonal", + "bank_code": "08890", + "bic": "SKBPCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "BIZ Bank f\u00fcr Internationalen Zahlungsausgleich", + "short_name": "BIZ Bank f\u00fcr Internationalen Zahlungsausgleich", + "bank_code": "08899", + "bic": "BISBCHBBXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "PostFinance AG", + "short_name": "PostFinance AG", + "bank_code": "09000", + "bic": "POFICHBEXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "PostFinance AG", + "short_name": "PostFinance 
AG", + "bank_code": "30000", + "bic": "POFICHBEXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "ODDO BHF (Schweiz) AG", + "short_name": "ODDO BHF (Schweiz) AG", + "bank_code": "30002", + "bic": "BHFBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrich Lebensvers.-Ges. AG Individual Life geb. Verm\u00f6gen", + "short_name": "Z\u00fcrich Lebensvers.-Ges. AG Individual Life geb. Verm\u00f6gen", + "bank_code": "30003", + "bic": "ZURICHZFZLT", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrich Lebensvers.-Ges. AG Group Life geb. Verm\u00f6gen", + "short_name": "Z\u00fcrich Lebensvers.-Ges. AG Group Life geb. Verm\u00f6gen", + "bank_code": "30004", + "bic": "ZURICHZFZKT", + "country_code": "CH", + "primary": false + }, + { + "name": "UBS Switzerland AG - Z\u00fcrich", + "short_name": "UBS Switzerland AG", + "bank_code": "30005", + "bic": "UBSWCHZH80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrich Versicherungs.-Ges. AG gebundenes Verm\u00f6gen", + "short_name": "Z\u00fcrich Versicherungs.-Ges. AG gebundenes Verm\u00f6gen", + "bank_code": "30006", + "bic": "ZURICHZFZNT", + "country_code": "CH", + "primary": false + }, + { + "name": "SWISS4.0 SA", + "short_name": "SWISS4.0 SA", + "bank_code": "30015", + "bic": "IWSSCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cr\u00e9dit Mutuel de la Vall\u00e9e SA", + "short_name": "Cr\u00e9dit Mutuel de la Vall\u00e9e SA", + "bank_code": "30020", + "bic": "RBABCH22180", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne de Cossonay soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne de Cossonay soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "30021", + "bic": "RBABCH22182", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparkasse Sense", + "short_name": "Sparkasse Sense", + "bank_code": "30022", + "bic": "RBABCH22186", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne Courtelary SA", + "short_name": "Caisse d'Epargne Courtelary SA", + "bank_code": "30023", + "bic": "RBABCH22240", + "country_code": "CH", + "primary": false + }, + { + "name": "Valiant Bank AG", + "short_name": "Valiant Bank AG", + "bank_code": "30024", + "bic": "VABECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bernerland Bank AG", + "short_name": "Bernerland Bank AG", + "bank_code": "30026", + "bic": "RBABCH22313", + "country_code": "CH", + "primary": false + }, + { + "name": "SB Saanen Bank AG", + "short_name": "SB Saanen Bank AG", + "bank_code": "30027", + "bic": "RBABCH22342", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank SLM AG", + "short_name": "Bank SLM AG", + "bank_code": "30028", + "bic": "RBABCH22363", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar+Leihkasse Riggisberg AG", + "short_name": "Spar+Leihkasse Riggisberg AG", + "bank_code": "30029", + "bic": "RBABCH22374", + "country_code": "CH", + "primary": false + }, + { + "name": "Burgerliche Ersparniskasse Bern Genossenschaft", + "short_name": "Burgerliche Ersparniskasse Bern Genossenschaft", + "bank_code": "30030", + "bic": "RBABCH22382", + "country_code": "CH", + "primary": false + }, + { + "name": "Ersparniskasse Affoltern i. E. AG", + "short_name": "Ersparniskasse Affoltern i. E. 
AG", + "bank_code": "30031", + "bic": "RBABCH22387", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Sparkasse Oftringen Genossenschaft", + "short_name": "Clientis Sparkasse Oftringen Genossenschaft", + "bank_code": "30033", + "bic": "RBABCH22428", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Bank im Thal AG", + "short_name": "Clientis Bank im Thal AG", + "bank_code": "30034", + "bic": "RBABCH22434", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Oberaargau AG", + "short_name": "Bank Oberaargau AG", + "bank_code": "30035", + "bic": "RBABCH22450", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Bank Aareland AG", + "short_name": "Clientis Bank Aareland AG", + "bank_code": "30036", + "bic": "RBABCH22575", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Leerau Genossenschaft", + "short_name": "Bank Leerau Genossenschaft", + "bank_code": "30037", + "bic": "RBABCH22588", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparkasse Schwyz AG", + "short_name": "Sparkasse Schwyz AG", + "bank_code": "30038", + "bic": "RBABCH22633", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis EB Entlebucher Bank AG", + "short_name": "Clientis EB Entlebucher Bank AG", + "bank_code": "30040", + "bic": "RBABCH22670", + "country_code": "CH", + "primary": false + }, + { + "name": "GRB Glarner Regionalbank Genossenschaft", + "short_name": "GRB Glarner Regionalbank Genossenschaft", + "bank_code": "30042", + "bic": "RBABCH22807", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparhafen Bank AG", + "short_name": "Sparhafen Bank AG", + "bank_code": "30043", + "bic": "BSZHCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "30044", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": false + }, + { + "name": "BANK ZIMMERBERG AG", + "short_name": "BANK ZIMMERBERG AG", + "bank_code": "30045", + "bic": "RBABCH22824", + "country_code": "CH", + "primary": false + }, + { + "name": "Regiobank M\u00e4nnedorf AG", + "short_name": "Regiobank M\u00e4nnedorf AG", + "bank_code": "30046", + "bic": "RBABCH22828", + "country_code": "CH", + "primary": false + }, + { + "name": "Lienhardt & Partner Privatbank Z\u00fcrich AG", + "short_name": "Lienhardt & Partner Privatbank Z\u00fcrich AG", + "bank_code": "30047", + "bic": "RBABCH22830", + "country_code": "CH", + "primary": false + }, + { + "name": "Ersparniskasse Schaffhausen AG", + "short_name": "Ersparniskasse Schaffhausen AG", + "bank_code": "30048", + "bic": "RBABCH22835", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Avera Genossenschaft", + "short_name": "Bank Avera Genossenschaft", + "bank_code": "30049", + "bic": "RBABCH22850", + "country_code": "CH", + "primary": false + }, + { + "name": "BS Bank Schaffhausen AG", + "short_name": "BS Bank Schaffhausen AG", + "bank_code": "30050", + "bic": "RBABCH22858", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar- und Leihkasse Thayngen AG", + "short_name": "Spar- und Leihkasse Thayngen AG", + "bank_code": "30051", + "bic": "RBABCH22866", + "country_code": "CH", + "primary": false + }, + { + "name": "Leihkasse Stammheim AG", + "short_name": "Leihkasse Stammheim AG", + "bank_code": "30053", + "bic": "RBABCH22875", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Landbank AG", + "short_name": 
"Z\u00fcrcher Landbank AG", + "bank_code": "30054", + "bic": "RBABCH22877", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank BSU Genossenschaft", + "short_name": "Bank BSU Genossenschaft", + "bank_code": "30055", + "bic": "RBABCH22888", + "country_code": "CH", + "primary": false + }, + { + "name": "acrevis Bank AG", + "short_name": "acrevis Bank AG", + "bank_code": "30056", + "bic": "ACRGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Alpha RHEINTAL Bank AG", + "short_name": "Alpha RHEINTAL Bank AG", + "bank_code": "30057", + "bic": "ARBHCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Bank Oberuzwil AG", + "short_name": "Clientis Bank Oberuzwil AG", + "bank_code": "30058", + "bic": "RBABCH22935", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Bank Toggenburg AG", + "short_name": "Clientis Bank Toggenburg AG", + "bank_code": "30060", + "bic": "RBABCH22955", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank in Zuzwil", + "short_name": "Bank in Zuzwil", + "bank_code": "30061", + "bic": "RBABCH22964", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis Bank Thur Genossenschaft", + "short_name": "Clientis Bank Thur Genossenschaft", + "bank_code": "30062", + "bic": "RBABCH22977", + "country_code": "CH", + "primary": false + }, + { + "name": "Biene Bank im Rheintal Genossenschaft", + "short_name": "Biene Bank im Rheintal Genossenschaft", + "bank_code": "30063", + "bic": "RBABCH22980", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Popolare di Sondrio (Suisse) SA", + "short_name": "Banca Popolare di Sondrio (Suisse) SA", + "bank_code": "30110", + "bic": "POSOCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cr\u00e9dit Agricole next Bank (Suisse) SA", + "short_name": "Cr\u00e9dit Agricole next Bank (Suisse) SA", + "bank_code": "30113", + "bic": "AGRICHGXXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "30114", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar- und Leihkasse Wynigen AG", + "short_name": "Spar- und Leihkasse Wynigen AG", + "bank_code": "30115", + "bic": "SLWYCH21XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Hypothekarbank Lenzburg AG", + "short_name": "Hypothekarbank Lenzburg AG", + "bank_code": "30116", + "bic": "HYPLCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Ersparniskasse Speicher", + "short_name": "Ersparniskasse Speicher", + "bank_code": "30117", + "bic": "ERSCCH21XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Baloise Bank AG", + "short_name": "Baloise Bank AG", + "bank_code": "30118", + "bic": "KBSOCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "LGT Bank (Schweiz) AG", + "short_name": "LGT Bank (Schweiz) AG", + "bank_code": "30119", + "bic": "BLFLCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne Riviera, soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne Riviera, soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "30121", + "bic": "CDDVCH21XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bezirks-Sparkasse Dielsdorf Genossenschaft", + "short_name": "Bezirks-Sparkasse Dielsdorf Genossenschaft", + "bank_code": "30122", + "bic": "BZSDCH22XXX", + "country_code": "CH", + "primary": 
false + }, + { + "name": "Alternative Bank Schweiz AG", + "short_name": "Alternative Bank Schweiz AG", + "bank_code": "30123", + "bic": "ABSOCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "WIR Bank Genossenschaft", + "short_name": "WIR Bank Genossenschaft", + "bank_code": "30124", + "bic": "WIRBCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freie Gemeinschaftsbank Genossenschaft", + "short_name": "Freie Gemeinschaftsbank Genossenschaft", + "bank_code": "30125", + "bic": "FRGGCHB1XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank EKI Genossenschaft", + "short_name": "Bank EKI Genossenschaft", + "bank_code": "30126", + "bic": "EKIICH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank EEK AG", + "short_name": "Bank EEK AG", + "bank_code": "30127", + "bic": "EEKBCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar- und Leihkasse Bucheggberg", + "short_name": "Spar- und Leihkasse Bucheggberg", + "bank_code": "30128", + "bic": "SLBUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar- und Leihkasse G\u00fcrbetal AG", + "short_name": "Spar- und Leihkasse G\u00fcrbetal AG", + "bank_code": "30129", + "bic": "SLGUCH2MXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "DC Bank Deposito-Cassa der Stadt Bern", + "short_name": "DC Bank Deposito-Cassa der Stadt Bern", + "bank_code": "30130", + "bic": "RBABCH22DCB", + "country_code": "CH", + "primary": false + }, + { + "name": "VZ Depotbank AG", + "short_name": "VZ Depotbank AG", + "bank_code": "30131", + "bic": "VZDBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Migros Bank AG", + "short_name": "Migros Bank AG", + "bank_code": "30132", + "bic": "MIGRCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Cler AG", + "short_name": "Bank Cler AG", + "bank_code": "30133", + "bic": "BCLRCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Corn\u00e8r Banca SA", + "short_name": "Corn\u00e8r Banca SA", + "bank_code": "30141", + "bic": "CBLUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Gantrisch Genossenschaft", + "short_name": "Bank Gantrisch Genossenschaft", + "bank_code": "30143", + "bic": "BGAGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Ersparniskasse R\u00fceggisberg Genossenschaft", + "short_name": "Ersparniskasse R\u00fceggisberg Genossenschaft", + "bank_code": "30144", + "bic": "EKRUCH21XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BBO Bank Brienz Oberhasli AG", + "short_name": "BBO Bank Brienz Oberhasli AG", + "bank_code": "30145", + "bic": "BBOBCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Societ\u00e0 Bancaria Ticinese", + "short_name": "Societ\u00e0 Bancaria Ticinese", + "bank_code": "30146", + "bic": "SBTICH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank f\u00fcr Tirol und Vorarlberg Aktiengesellschaft, Innsbruck", + "short_name": "Bank f\u00fcr Tirol und Vorarlberg Aktiengesellschaft, Innsbruck", + "bank_code": "30147", + "bic": "BTVACH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Privatbank IHAG Z\u00fcrich AG", + "short_name": "Privatbank IHAG Z\u00fcrich AG", + "bank_code": "30148", + "bic": "IHZUCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BZ Bank Aktiengesellschaft", + "short_name": "BZ Bank Aktiengesellschaft", + 
"bank_code": "30151", + "bic": "BZBKCH2WXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "JPMorgan Chase Bank, National Association, Columbus", + "short_name": "JPMorgan Chase Bank, National Association, Columbus", + "bank_code": "30152", + "bic": "CHASCHGXXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca del Sempione SA", + "short_name": "Banca del Sempione SA", + "bank_code": "30153", + "bic": "BASECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BNP Paribas (Suisse) SA", + "short_name": "BNP Paribas (Suisse) SA", + "bank_code": "30154", + "bic": "BPPBCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "HSBC Bank plc, London, Zweigniederlassung Z\u00fcrich", + "short_name": "HSBC Bank plc, London, Zweigniederlassung Z\u00fcrich", + "bank_code": "30156", + "bic": "HSBCCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "AEK BANK 1826 Genossenschaft", + "short_name": "AEK BANK 1826 Genossenschaft", + "bank_code": "30157", + "bic": "AEKTCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank CIC (SCHWEIZ) AG", + "short_name": "Bank CIC (SCHWEIZ) AG", + "bank_code": "30159", + "bic": "CIALCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "LLB (Schweiz) AG", + "short_name": "LLB (Schweiz) AG", + "bank_code": "30162", + "bic": "LINSCH23XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Thalwil Genossenschaft", + "short_name": "Bank Thalwil Genossenschaft", + "bank_code": "30163", + "bic": "BKTHCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CA Indosuez (Switzerland) SA", + "short_name": "CA Indosuez (Switzerland) SA", + "bank_code": "30164", + "bic": "AGRICHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank J. Safra Sarasin AG", + "short_name": "Bank J. Safra Sarasin AG", + "bank_code": "30165", + "bic": "SARACHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CBH - Compagnie Bancaire Helv\u00e9tique SA", + "short_name": "CBH - Compagnie Bancaire Helv\u00e9tique SA", + "bank_code": "30166", + "bic": "BSSACHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Deutsche Bank AG Z\u00fcrich Branch", + "short_name": "Deutsche Bank AG Z\u00fcrich Branch", + "bank_code": "30167", + "bic": "DEUTCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Swissquote Bank SA", + "short_name": "Swissquote Bank SA", + "bank_code": "30168", + "bic": "SWQBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Spar- und Leihkasse Frutigen AG", + "short_name": "Spar- und Leihkasse Frutigen AG", + "bank_code": "30169", + "bic": "SLFFCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Regiobank Solothurn AG", + "short_name": "Regiobank Solothurn AG", + "bank_code": "30170", + "bic": "RSOSCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Hypo Vorarlberg Bank AG, Bregenz, Zweigniederl., St. Gallen", + "short_name": "Hypo Vorarlberg Bank AG, Bregenz, Zweigniederl., St. 
Gallen", + "bank_code": "30176", + "bic": "RBABCH22VLH", + "country_code": "CH", + "primary": false + }, + { + "name": "COMMERZBANK AG FF, Zweigniederlassung Z\u00fcrich", + "short_name": "COMMERZBANK AG FF, Zweigniederlassung Z\u00fcrich", + "bank_code": "30177", + "bic": "COBACHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparkasse Bundespersonal", + "short_name": "Sparkasse Bundespersonal", + "bank_code": "30178", + "bic": "SKBPCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banco Santander International SA", + "short_name": "Banco Santander International SA", + "bank_code": "30180", + "bic": "BSCHCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Credit Europe Bank (Suisse) SA", + "short_name": "Credit Europe Bank (Suisse) SA", + "bank_code": "30182", + "bic": "FSUICHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Investec Bank (Switzerland) AG", + "short_name": "Investec Bank (Switzerland) AG", + "bank_code": "30184", + "bic": "IVESCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Kaleido Privatbank AG", + "short_name": "Kaleido Privatbank AG", + "bank_code": "30186", + "bic": "ANPRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "One Swiss Bank SA", + "short_name": "One Swiss Bank SA", + "bank_code": "30187", + "bic": "BQBHCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Van Lanschot Kempen (Schweiz) AG", + "short_name": "Van Lanschot Kempen (Schweiz) AG", + "bank_code": "30188", + "bic": "FVLBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Internationale \u00e0 Luxembourg (Suisse) SA", + "short_name": "Banque Internationale \u00e0 Luxembourg (Suisse) SA", + "bank_code": "30189", + "bic": "BILSCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BBVA SA", + "short_name": "BBVA SA", + "bank_code": "30190", + "bic": "BBVACHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Frankfurter Bankgesellschaft (Schweiz) AG", + "short_name": "Frankfurter Bankgesellschaft (Schweiz) AG", + "bank_code": "30194", + "bic": "FBGSCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank von Roll AG", + "short_name": "Bank von Roll AG", + "bank_code": "30196", + "bic": "VRLLCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque SYZ SA", + "short_name": "Banque SYZ SA", + "bank_code": "30197", + "bic": "SYCOCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne de Nyon soci\u00e9t\u00e9 coop\u00e9rative", + "short_name": "Caisse d'Epargne de Nyon soci\u00e9t\u00e9 coop\u00e9rative", + "bank_code": "30198", + "bic": "CAGYCH21XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne d'Aubonne Soci\u00e9te coop\u00e9rative", + "short_name": "Caisse d'Epargne d'Aubonne Soci\u00e9te coop\u00e9rative", + "bank_code": "30199", + "bic": "RBABCH22CEA", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Alg\u00e9rienne du Commerce Ext\u00e9rieur SA", + "short_name": "Banque Alg\u00e9rienne du Commerce Ext\u00e9rieur SA", + "bank_code": "30201", + "bic": "AEXTCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne et de Cr\u00e9dit Mutuel de Chermignon", + "short_name": "Caisse d'Epargne et de Cr\u00e9dit Mutuel de Chermignon", + "bank_code": "30202", + "bic": "CDEMCH21XXX", + "country_code": "CH", + "primary": false + }, + 
{ + "name": "CACEIS Bank, Montrouge, succursale de Nyon / Suisse", + "short_name": "CACEIS Bank, Montrouge, succursale de Nyon / Suisse", + "bank_code": "30203", + "bic": "ISAECH2NXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "NPB Neue Privat Bank AG", + "short_name": "NPB Neue Privat Bank AG", + "bank_code": "30206", + "bic": "NEPICHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "QNB (Suisse) SA", + "short_name": "QNB (Suisse) SA", + "bank_code": "30207", + "bic": "QNBACHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "J.P. Morgan (Suisse) SA", + "short_name": "J.P. Morgan (Suisse) SA", + "bank_code": "30208", + "bic": "MGTCCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Reyl & Cie S.A.", + "short_name": "Reyl & Cie S.A.", + "bank_code": "30209", + "bic": "REYLCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "NBK Banque Priv\u00e9e (Suisse) SA", + "short_name": "NBK Banque Priv\u00e9e (Suisse) SA", + "bank_code": "30210", + "bic": "NBOKCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "ING Bank N.V., Amsterdam, succursale de Lancy/Gen\u00e8ve", + "short_name": "ING Bank N.V., Amsterdam, succursale de Lancy/Gen\u00e8ve", + "bank_code": "30211", + "bic": "BBRUCHGTXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Eric Sturdza SA", + "short_name": "Banque Eric Sturdza SA", + "bank_code": "30212", + "bic": "BABRCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Credinvest SA", + "short_name": "Banca Credinvest SA", + "bank_code": "30213", + "bic": "BCRECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Aletti & C. (Suisse) SA", + "short_name": "Banca Aletti & C. 
(Suisse) SA", + "bank_code": "30214", + "bic": "VRBPCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Axion SWISS Bank SA", + "short_name": "Axion SWISS Bank SA", + "bank_code": "30216", + "bic": "UNCECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "FAB Private Bank (Suisse) SA", + "short_name": "FAB Private Bank (Suisse) SA", + "bank_code": "30217", + "bic": "NBPSCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Mercantil Bank (Schweiz) AG", + "short_name": "Mercantil Bank (Schweiz) AG", + "bank_code": "30218", + "bic": "BAMRCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "VP Bank (Schweiz) AG", + "short_name": "VP Bank (Schweiz) AG", + "bank_code": "30220", + "bic": "VPBVCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "SELVI & Cie SA", + "short_name": "SELVI & Cie SA", + "bank_code": "30221", + "bic": "SELVCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bergos AG", + "short_name": "Bergos AG", + "bank_code": "30223", + "bic": "BEGOCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BANQUE BANORIENT (SUISSE) SA", + "short_name": "BANQUE BANORIENT (SUISSE) SA", + "bank_code": "30224", + "bic": "BLOMCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Scobag Privatbank AG", + "short_name": "Scobag Privatbank AG", + "bank_code": "30225", + "bic": "SCOPCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Hyposwiss Private Bank Gen\u00e8ve SA", + "short_name": "Hyposwiss Private Bank Gen\u00e8ve SA", + "bank_code": "30226", + "bic": "CCIECHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Privatbank Von Graffenried AG", + "short_name": "Privatbank Von Graffenried AG", + "bank_code": "30228", + "bic": "GRAFCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Dreyfus S\u00f6hne & Cie AG, Banquiers", + "short_name": "Dreyfus S\u00f6hne & Cie AG, Banquiers", + "bank_code": "30229", + "bic": "DREYCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Bonh\u00f4te & Cie SA", + "short_name": "Banque Bonh\u00f4te & Cie SA", + "bank_code": "30230", + "bic": "BONHCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Edmond de Rothschild (Suisse) S.A", + "short_name": "Edmond de Rothschild (Suisse) S.A", + "bank_code": "30231", + "bic": "PRIBCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Trafina Privatbank AG", + "short_name": "Trafina Privatbank AG", + "bank_code": "30232", + "bic": "TRAPCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Maerki, Baumann & Co. AG", + "short_name": "Maerki, Baumann & Co. AG", + "bank_code": "30233", + "bic": "MAEBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BANKMED SUISSE SA", + "short_name": "BANKMED SUISSE SA", + "bank_code": "30234", + "bic": "MEDSCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale Private Banking (Suisse) SA", + "short_name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale Private Banking (Suisse) SA", + "bank_code": "30236", + "bic": "RUEGCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca del Ceresio SA", + "short_name": "Banca del Ceresio SA", + "bank_code": "30237", + "bic": "BACECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Zarattini & Co. SA", + "short_name": "Banca Zarattini & Co. 
SA", + "bank_code": "30238", + "bic": "EUBACH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "DZ PRIVATBANK (Schweiz) AG", + "short_name": "DZ PRIVATBANK (Schweiz) AG", + "bank_code": "30239", + "bic": "GENOCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "EFG Bank European Financial Group SA", + "short_name": "EFG Bank European Financial Group SA", + "bank_code": "30240", + "bic": "EFGBCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Nomura Bank (Switzerland) Ltd", + "short_name": "Nomura Bank (Switzerland) Ltd", + "bank_code": "30241", + "bic": "NBSZCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Havilland (Liechtenstein) AG, Vaduz, Zweig. Z\u00fcrich", + "short_name": "Banque Havilland (Liechtenstein) AG, Vaduz, Zweig. Z\u00fcrich", + "bank_code": "30242", + "bic": "BPGECHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Audi (Suisse) SA", + "short_name": "Banque Audi (Suisse) SA", + "bank_code": "30243", + "bic": "AUDSCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque de Commerce et de Placements SA", + "short_name": "Banque de Commerce et de Placements SA", + "bank_code": "30244", + "bic": "BPCPCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schroder & Co Bank AG", + "short_name": "Schroder & Co Bank AG", + "bank_code": "30245", + "bic": "BJHSCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Citibank (Switzerland) AG", + "short_name": "Citibank (Switzerland) AG", + "bank_code": "30246", + "bic": "CBSWCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "UNION BANCAIRE PRIVEE, UBP SA", + "short_name": "UNION BANCAIRE PRIVEE, UBP SA", + "bank_code": "30248", + "bic": "UBPGCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Deutsche Bank (Suisse) S.A.", + "short_name": "Deutsche Bank (Suisse) S.A.", + "bank_code": "30249", + "bic": "DEUTCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "GPB (Schweiz) AG", + "short_name": "GPB (Schweiz) AG", + "bank_code": "30250", + "bic": "RKBZCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Rothschild & Co Bank AG", + "short_name": "Rothschild & Co Bank AG", + "bank_code": "30251", + "bic": "ROTACHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "PKB PRIVATE BANK SA", + "short_name": "PKB PRIVATE BANK SA", + "bank_code": "30252", + "bic": "PKBSCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "EFG Bank AG", + "short_name": "EFG Bank AG", + "bank_code": "30253", + "bic": "EFGBCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "HSBC Private Bank (Suisse) SA", + "short_name": "HSBC Private Bank (Suisse) SA", + "bank_code": "30254", + "bic": "BLICCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Citibank N.A., Sioux Falls", + "short_name": "Citibank N.A., Sioux Falls", + "bank_code": "30256", + "bic": "CITICHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale", + "short_name": "Soci\u00e9t\u00e9 G\u00e9n\u00e9rale", + "bank_code": "30259", + "bic": "SGABCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Arab Bank (Switzerland) Ltd.", + "short_name": "Arab Bank (Switzerland) Ltd.", + "bank_code": "30260", + "bic": "ARBSCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Gonet & Cie SA", + 
"short_name": "Gonet & Cie SA", + "bank_code": "30261", + "bic": "GONECHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank of America Europe DAC, Dublin, Zurich Branch", + "short_name": "Bank of America Europe DAC, Dublin, Zurich Branch", + "bank_code": "30262", + "bic": "BOFACH2XXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cramer & Cie SA", + "short_name": "Banque Cramer & Cie SA", + "bank_code": "30263", + "bic": "CRAMCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Barclays Bank (Suisse) SA", + "short_name": "Barclays Bank (Suisse) SA", + "bank_code": "30264", + "bic": "BARCCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Baumann & Cie KmG", + "short_name": "Baumann & Cie KmG", + "bank_code": "30269", + "bic": "BAUMCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank Vontobel AG", + "short_name": "Bank Vontobel AG", + "bank_code": "30270", + "bic": "VONTCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Lombard Odier & Cie SA", + "short_name": "Banque Lombard Odier & Cie SA", + "bank_code": "30271", + "bic": "LOCYCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Reichmuth & Co.", + "short_name": "Reichmuth & Co.", + "bank_code": "30272", + "bic": "REICCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bordier & Cie SCmA", + "short_name": "Bordier & Cie SCmA", + "bank_code": "30273", + "bic": "BORDCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Mirabaud & Cie SA", + "short_name": "Mirabaud & Cie SA", + "bank_code": "30275", + "bic": "MIRACHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Piguet Galland & Cie SA", + "short_name": "Piguet Galland & Cie SA", + "bank_code": "30277", + "bic": "PIGUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Pfandbriefbank schweizerischer Hypothekarinstitute AG", + "short_name": "Pfandbriefbank schweizerischer Hypothekarinstitute AG", + "bank_code": "30281", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Habib Bank AG Z\u00fcrich", + "short_name": "Habib Bank AG Z\u00fcrich", + "bank_code": "30283", + "bic": "HBZUCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "IBKR Financial Services AG", + "short_name": "IBKR Financial Services AG", + "bank_code": "30285", + "bic": "TMBECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "InCore Bank AG", + "short_name": "InCore Bank AG", + "bank_code": "30286", + "bic": "INCOCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Tellco AG", + "short_name": "Tellco AG", + "bank_code": "30295", + "bic": "DOMICHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CIM Banque SA", + "short_name": "CIM Banque SA", + "bank_code": "30296", + "bic": "CIMMCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "TradeXBank AG", + "short_name": "TradeXBank AG", + "bank_code": "30297", + "bic": "SLBZCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "S.P. Hinduja Banque Priv\u00e9e SA", + "short_name": "S.P. 
Hinduja Banque Priv\u00e9e SA", + "bank_code": "30299", + "bic": "ABSGCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "ABANCA CORPORACION BANCARIA S.A., BETANZOS", + "short_name": "ABANCA CORPORACION BANCARIA S.A., BETANZOS", + "bank_code": "30311", + "bic": "CAGLCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque du L\u00e9man SA", + "short_name": "Banque du L\u00e9man SA", + "bank_code": "30313", + "bic": "BLEMCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Quilvest (Switzerland) Ltd.", + "short_name": "Quilvest (Switzerland) Ltd.", + "bank_code": "30314", + "bic": "QVCHCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banco Ita\u00fa (Suisse) SA", + "short_name": "Banco Ita\u00fa (Suisse) SA", + "bank_code": "30315", + "bic": "ITAUCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Globalance Bank AG", + "short_name": "Globalance Bank AG", + "bank_code": "30316", + "bic": "GLBNCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Dukascopy Bank SA", + "short_name": "Dukascopy Bank SA", + "bank_code": "30317", + "bic": "DUBACHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Helvetische Bank AG", + "short_name": "Helvetische Bank AG", + "bank_code": "30318", + "bic": "SFBFCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Aquila AG", + "short_name": "Aquila AG", + "bank_code": "30319", + "bic": "AQULCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Sygnum Bank AG", + "short_name": "Sygnum Bank AG", + "bank_code": "30321", + "bic": "SYGNCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "SEBA Bank AG", + "short_name": "SEBA Bank AG", + "bank_code": "30322", + "bic": "SCRYCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "CACEIS Investor Services Bank S.A.", + "short_name": "CACEIS Investor Services Bank S.A.", + "bank_code": "30323", + "bic": "FETACHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Remaco Asset Management AG", + "short_name": "Remaco Asset Management AG", + "bank_code": "30324", + "bic": "RMCOCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "LF Finance (Suisse) SA", + "short_name": "LF Finance (Suisse) SA", + "bank_code": "30325", + "bic": "LFFSCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "State Street Bank International GmbH, M\u00fcnchen, ZN Z\u00fcrich", + "short_name": "State Street Bank International GmbH, M\u00fcnchen, ZN Z\u00fcrich", + "bank_code": "30326", + "bic": "SSBECHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00e4hringer Privatbank AG", + "short_name": "Z\u00e4hringer Privatbank AG", + "bank_code": "30327", + "bic": "ZAPRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "BIZ Bank f\u00fcr Internationalen Zahlungsausgleich", + "short_name": "BIZ Bank f\u00fcr Internationalen Zahlungsausgleich", + "bank_code": "30328", + "bic": "BISBCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Industrial and Commercial Bank of China Limited", + "short_name": "Industrial and Commercial Bank of China Limited", + "bank_code": "30329", + "bic": "ICBKCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "JL Securities SA", + "short_name": "JL Securities SA", + "bank_code": "30330", + "bic": "JLSECHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": 
"Fidurh\u00f4ne SA", + "short_name": "Fidurh\u00f4ne SA", + "bank_code": "30331", + "bic": "FIDHCHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Privatbank Bellerive AG", + "short_name": "Privatbank Bellerive AG", + "bank_code": "30333", + "bic": "PRBECHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Yapeal AG", + "short_name": "Yapeal AG", + "bank_code": "30334", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Z\u00fcrcher Kantonalbank", + "short_name": "Z\u00fcrcher Kantonalbank", + "bank_code": "30700", + "bic": "ZKBKCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "Aargauische Kantonalbank", + "short_name": "Aargauische Kantonalbank", + "bank_code": "30761", + "bic": "KBAGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Appenzeller Kantonalbank", + "short_name": "Appenzeller Kantonalbank", + "bank_code": "30763", + "bic": "AIKACH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "30764", + "bic": "BSCTCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale du Valais", + "short_name": "Banque Cantonale du Valais", + "bank_code": "30765", + "bic": "BCVSCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Neuch\u00e2teloise", + "short_name": "Banque Cantonale Neuch\u00e2teloise", + "bank_code": "30766", + "bic": "BCNNCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "30767", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "30768", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Basellandschaftliche Kantonalbank", + "short_name": "Basellandschaftliche Kantonalbank", + "bank_code": "30769", + "bic": "BLKBCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Basler Kantonalbank", + "short_name": "Basler Kantonalbank", + "bank_code": "30770", + "bic": "BKBBCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Glarner Kantonalbank", + "short_name": "Glarner Kantonalbank", + "bank_code": "30773", + "bic": "GLKBCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Graub\u00fcndner Kantonalbank", + "short_name": "Graub\u00fcndner Kantonalbank", + "bank_code": "30774", + "bic": "GRKBCH2270A", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "30777", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "30778", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Nidwaldner Kantonalbank", + "short_name": "Nidwaldner Kantonalbank", + "bank_code": "30779", + "bic": "NIKACH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Obwaldner Kantonalbank", + "short_name": "Obwaldner Kantonalbank", + "bank_code": "30780", + "bic": "OBWKCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. 
Galler Kantonalbank AG", + "bank_code": "30781", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schaffhauser Kantonalbank", + "short_name": "Schaffhauser Kantonalbank", + "bank_code": "30782", + "bic": "SHKBCH2SXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Thurgauer Kantonalbank", + "short_name": "Thurgauer Kantonalbank", + "bank_code": "30784", + "bic": "KBTGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Urner Kantonalbank", + "short_name": "Urner Kantonalbank", + "bank_code": "30785", + "bic": "URKNCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Zuger Kantonalbank", + "short_name": "Zuger Kantonalbank", + "bank_code": "30787", + "bic": "KBZGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Gen\u00e8ve", + "short_name": "Banque Cantonale de Gen\u00e8ve", + "bank_code": "30788", + "bic": "BCGECHGGXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale du Jura SA", + "short_name": "Banque Cantonale du Jura SA", + "bank_code": "30789", + "bic": "BCJUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Berner Kantonalbank AG", + "short_name": "Berner Kantonalbank AG", + "bank_code": "30790", + "bic": "KBBECH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisen", + "short_name": "Raiffeisen", + "bank_code": "30808", + "bic": "RAIFCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Credit Suisse (Schweiz) AG", + "short_name": "Credit Suisse (Schweiz) AG", + "bank_code": "31000", + "bic": "CRESCHZZ80A", + "country_code": "CH", + "primary": false + }, + { + "name": "radicant bank ag", + "short_name": "radicant bank ag", + "bank_code": "31100", + "bic": "RDCTCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Klarpay AG", + "short_name": "Klarpay AG", + "bank_code": "31101", + "bic": "KLARCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Credit Suisse AG", + "short_name": "Credit Suisse AG", + "bank_code": "31866", + "bic": "CRESCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparkasse Sense", + "short_name": "Sparkasse Sense", + "bank_code": "61861", + "bic": "RBABCH22186", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne Courtelary SA", + "short_name": "Caisse d'Epargne Courtelary SA", + "bank_code": "62401", + "bic": "RBABCH22240", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne Courtelary SA", + "short_name": "Caisse d'Epargne Courtelary SA", + "bank_code": "62402", + "bic": "RBABCH22240", + "country_code": "CH", + "primary": false + }, + { + "name": "Caisse d'Epargne Courtelary SA", + "short_name": "Caisse d'Epargne Courtelary SA", + "bank_code": "62403", + "bic": "RBABCH22240", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis EB Entlebucher Bank AG", + "short_name": "Clientis EB Entlebucher Bank AG", + "bank_code": "66701", + "bic": "RBABCH22670", + "country_code": "CH", + "primary": false + }, + { + "name": "Clientis EB Entlebucher Bank AG", + "short_name": "Clientis EB Entlebucher Bank AG", + "bank_code": "66702", + "bic": "RBABCH22670", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "68141", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": false + }, + { + "name": 
"Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "68142", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "68143", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": false + }, + { + "name": "Sparcassa 1816 Genossenschaft", + "short_name": "Sparcassa 1816 Genossenschaft", + "bank_code": "68144", + "bic": "RBABCH22814", + "country_code": "CH", + "primary": false + }, + { + "name": "BANK ZIMMERBERG AG", + "short_name": "BANK ZIMMERBERG AG", + "bank_code": "68241", + "bic": "RBABCH22824", + "country_code": "CH", + "primary": false + }, + { + "name": "BANK ZIMMERBERG AG", + "short_name": "BANK ZIMMERBERG AG", + "bank_code": "68242", + "bic": "RBABCH22824", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76410", + "bic": "BSCTCH22LUG", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76411", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76412", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76420", + "bic": "BSCTCH22LOC", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76430", + "bic": "BSCTCH22CHI", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76440", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76450", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76460", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banca dello Stato del Cantone Ticino", + "short_name": "Banca dello Stato del Cantone Ticino", + "bank_code": "76470", + "bic": null, + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76711", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76712", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76713", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76714", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76715", + 
"bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76716", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76717", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76718", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76719", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76720", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76721", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76722", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76723", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76724", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76725", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76726", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76727", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76728", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76729", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76730", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76731", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76732", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76733", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76734", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": 
false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76735", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76736", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76737", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76738", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76739", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76740", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76741", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76742", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76743", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76744", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76745", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76746", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76747", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76748", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76749", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76750", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76751", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76752", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76753", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + 
"short_name": "Banque Cantonale Vaudoise", + "bank_code": "76754", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76755", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76756", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76758", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76761", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76762", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76763", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76764", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76765", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale Vaudoise", + "short_name": "Banque Cantonale Vaudoise", + "bank_code": "76766", + "bic": "BCVLCH2LXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76822", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76823", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76824", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freiburger Kantonalbank", + "short_name": "Freiburger Kantonalbank", + "bank_code": "76825", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76826", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76827", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76828", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freiburger Kantonalbank", + "short_name": "Freiburger Kantonalbank", + "bank_code": "76829", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76830", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freiburger Kantonalbank", + "short_name": "Freiburger 
Kantonalbank", + "bank_code": "76831", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freiburger Kantonalbank", + "short_name": "Freiburger Kantonalbank", + "bank_code": "76832", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76833", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76834", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76835", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Freiburger Kantonalbank", + "short_name": "Freiburger Kantonalbank", + "bank_code": "76836", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76837", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76838", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76839", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76840", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76841", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76842", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale de Fribourg", + "short_name": "Banque Cantonale de Fribourg", + "bank_code": "76843", + "bic": "BEFRCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77711", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77712", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77713", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77714", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77715", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77716", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77717", + "bic": "KBSZCH22XXX", 
+ "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77718", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77719", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77720", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77721", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77722", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77723", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77724", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77725", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77726", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77727", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77728", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77729", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77730", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77731", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77732", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77733", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schwyzer Kantonalbank", + "short_name": "Schwyzer Kantonalbank", + "bank_code": "77734", + "bic": "KBSZCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77811", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77812", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77813", + "bic": "LUKBCH2260A", + 
"country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77814", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77815", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77816", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77817", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77818", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77819", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77820", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77821", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77823", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77824", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77825", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77828", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77829", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "Luzerner Kantonalbank AG", + "short_name": "Luzerner Kantonalbank AG", + "bank_code": "77860", + "bic": "LUKBCH2260A", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78102", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78103", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78111", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78112", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78113", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. 
Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78114", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78115", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78116", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78117", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78118", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78119", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78120", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78121", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78122", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78123", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78124", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78125", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78126", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78127", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78128", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78129", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78130", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78131", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78132", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "St. 
Galler Kantonalbank AG", + "short_name": "St. Galler Kantonalbank AG", + "bank_code": "78158", + "bic": "KBSGCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schaffhauser Kantonalbank", + "short_name": "Schaffhauser Kantonalbank", + "bank_code": "78212", + "bic": "SHKBCH2SXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Schaffhauser Kantonalbank", + "short_name": "Schaffhauser Kantonalbank", + "bank_code": "78213", + "bic": "SHKBCH2SXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale du Jura SA", + "short_name": "Banque Cantonale du Jura SA", + "bank_code": "78910", + "bic": "BCJUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Cantonale du Jura SA", + "short_name": "Banque Cantonale du Jura SA", + "bank_code": "78920", + "bic": "BCJUCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisen Schweiz", + "short_name": "Raiffeisen Schweiz", + "bank_code": "80000", + "bic": "RAIFCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Banque Raiffeisen R\u00e9gion Del\u00e9mont", + "short_name": "Banque Raiffeisen R\u00e9gion Del\u00e9mont", + "bank_code": "80002", + "bic": "RAIFCH22002", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank St.Gallen", + "short_name": "Raiffeisenbank St.Gallen", + "bank_code": "80005", + "bic": "RAIFCH22005", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Ajoie", + "short_name": "Banque Raiffeisen Ajoie", + "bank_code": "80027", + "bic": "RAIFCH22027", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Clos du Doubs et Haute-Ajoie", + "short_name": "Banque Raiffeisen Clos du Doubs et Haute-Ajoie", + "bank_code": "80037", + "bic": "RAIFCH22037", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Pierre Pertuis", + "short_name": "Banque Raiffeisen Pierre Pertuis", + "bank_code": "80051", + "bic": "RAIFCH22051", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Franches-Montagnes", + "short_name": "Banque Raiffeisen Franches-Montagnes", + "bank_code": "80059", + "bic": "RAIFCH22059", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen du Val-Terbi", + "short_name": "Banque Raiffeisen du Val-Terbi", + "bank_code": "80073", + "bic": "RAIFCH22073", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Worblen-Emmental", + "short_name": "Raiffeisenbank Worblen-Emmental", + "bank_code": "80094", + "bic": "RAIFCH22094", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Laufental-Thierstein", + "short_name": "Raiffeisenbank Laufental-Thierstein", + "bank_code": "80097", + "bic": "RAIFCH22097", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank G\u00fcrbe", + "short_name": "Raiffeisenbank G\u00fcrbe", + "bank_code": "80098", + "bic": "RAIFCH22098", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Sarine-Ouest", + "short_name": "Banque Raiffeisen Sarine-Ouest", + "bank_code": "80102", + "bic": "RAIFCH22102", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen R\u00e9gion Marly-Cousimbert", + "short_name": "Banque Raiffeisen R\u00e9gion Marly-Cousimbert", + "bank_code": "80105", + "bic": "RAIFCH22105", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Mol\u00e9son", + "short_name": 
"Banque Raiffeisen Mol\u00e9son", + "bank_code": "80129", + "bic": "RAIFCH22129", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Broye Vully Lac", + "short_name": "Banque Raiffeisen Broye Vully Lac", + "bank_code": "80139", + "bic": "RAIFCH22139", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Fribourg-Ouest", + "short_name": "Banque Raiffeisen Fribourg-Ouest", + "bank_code": "80159", + "bic": "RAIFCH22159", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen R\u00e9gion Gen\u00e8ve Rh\u00f4ne", + "short_name": "Banque Raiffeisen R\u00e9gion Gen\u00e8ve Rh\u00f4ne", + "bank_code": "80181", + "bic": "RAIFCH22181", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Gen\u00e8ve Rive Gauche", + "short_name": "Banque Raiffeisen Gen\u00e8ve Rive Gauche", + "bank_code": "80187", + "bic": "RAIFCH22187", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Villes et Vignobles genevois", + "short_name": "Banque Raiffeisen Villes et Vignobles genevois", + "bank_code": "80210", + "bic": "RAIFCH22210", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de la Versoix", + "short_name": "Banque Raiffeisen de la Versoix", + "bank_code": "80215", + "bic": "RAIFCH22215", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen des Montagnes Neuch\u00e2teloises", + "short_name": "Banque Raiffeisen des Montagnes Neuch\u00e2teloises", + "bank_code": "80237", + "bic": "RAIFCH22237", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Neuch\u00e2tel et Vall\u00e9es", + "short_name": "Banque Raiffeisen Neuch\u00e2tel et Vall\u00e9es", + "bank_code": "80241", + "bic": "RAIFCH22241", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del Basso Mendrisiotto", + "short_name": "Banca Raiffeisen del Basso Mendrisiotto", + "bank_code": "80272", + "bic": "RAIFCH22272", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Piano di Magadino", + "short_name": "Banca Raiffeisen Piano di Magadino", + "bank_code": "80280", + "bic": "RAIFCH22280", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del Vedeggio", + "short_name": "Banca Raiffeisen del Vedeggio", + "bank_code": "80283", + "bic": "RAIFCH22283", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Basso Ceresio", + "short_name": "Banca Raiffeisen Basso Ceresio", + "bank_code": "80287", + "bic": "RAIFCH22287", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Morbio-Vacallo", + "short_name": "Banca Raiffeisen Morbio-Vacallo", + "bank_code": "80290", + "bic": "RAIFCH22290", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del Malcantone", + "short_name": "Banca Raiffeisen del Malcantone", + "bank_code": "80317", + "bic": "RAIFCH22317", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Losone Pedemonte Vallemaggia", + "short_name": "Banca Raiffeisen Losone Pedemonte Vallemaggia", + "bank_code": "80333", + "bic": "RAIFCH22333", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Mendrisio e Valle di Muggio", + "short_name": "Banca Raiffeisen Mendrisio e Valle di Muggio", + "bank_code": "80340", + "bic": "RAIFCH22340", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del 
Camogh\u00e8", + "short_name": "Banca Raiffeisen del Camogh\u00e8", + "bank_code": "80344", + "bic": "RAIFCH22344", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Tre Valli", + "short_name": "Banca Raiffeisen Tre Valli", + "bank_code": "80350", + "bic": "RAIFCH22350", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Colline del Ceresio", + "short_name": "Banca Raiffeisen Colline del Ceresio", + "bank_code": "80362", + "bic": "RAIFCH22362", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen della Campagnadorna", + "short_name": "Banca Raiffeisen della Campagnadorna", + "bank_code": "80365", + "bic": "RAIFCH22365", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del Cassarate", + "short_name": "Banca Raiffeisen del Cassarate", + "bank_code": "80366", + "bic": "RAIFCH22366", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Lugano", + "short_name": "Banca Raiffeisen Lugano", + "bank_code": "80375", + "bic": "RAIFCH22375", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Locarno", + "short_name": "Banca Raiffeisen Locarno", + "bank_code": "80379", + "bic": "RAIFCH22379", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Bellinzonese e Visagno", + "short_name": "Banca Raiffeisen Bellinzonese e Visagno", + "bank_code": "80387", + "bic": "RAIFCH22387", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Mont-Aubert Orbe", + "short_name": "Banque Raiffeisen Mont-Aubert Orbe", + "bank_code": "80401", + "bic": "RAIFCH22401", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen d'Assens-Talent", + "short_name": "Banque Raiffeisen d'Assens-Talent", + "bank_code": "80414", + "bic": "RAIFCH22414", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Alpes Riviera Chablais Vaudois", + "short_name": "Banque Raiffeisen Alpes Riviera Chablais Vaudois", + "bank_code": "80430", + "bic": "RAIFCH22430", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen du Gros-de-Vaud", + "short_name": "Banque Raiffeisen du Gros-de-Vaud", + "bank_code": "80434", + "bic": "RAIFCH22434", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Nyon-La Vall\u00e9e", + "short_name": "Banque Raiffeisen Nyon-La Vall\u00e9e", + "bank_code": "80442", + "bic": "RAIFCH22442", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen du Mont-Tendre", + "short_name": "Banque Raiffeisen du Mont-Tendre", + "bank_code": "80445", + "bic": "RAIFCH22445", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Lausanne Haute-Broye-Jorat", + "short_name": "Banque Raiffeisen Lausanne Haute-Broye-Jorat", + "bank_code": "80451", + "bic": "RAIFCH22451", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Lavaux", + "short_name": "Banque Raiffeisen de Lavaux", + "bank_code": "80454", + "bic": "RAIFCH22454", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Morges Venoge", + "short_name": "Banque Raiffeisen Morges Venoge", + "bank_code": "80460", + "bic": "RAIFCH22460", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen d'Yverdon-les-Bains", + "short_name": "Banque Raiffeisen d'Yverdon-les-Bains", + "bank_code": "80472", + "bic": "RAIFCH22472", + "country_code": 
"CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de la Broye", + "short_name": "Banque Raiffeisen de la Broye", + "bank_code": "80479", + "bic": "RAIFCH22479", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Gimel", + "short_name": "Banque Raiffeisen de Gimel", + "bank_code": "80485", + "bic": "RAIFCH22485", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mischabel-Matterhorn", + "short_name": "Raiffeisenbank Mischabel-Matterhorn", + "bank_code": "80496", + "bic": "RAIFCH22496", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Gampel-Raron", + "short_name": "Raiffeisenbank Gampel-Raron", + "bank_code": "80521", + "bic": "RAIFCH22521", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Leuk", + "short_name": "Raiffeisenbank Region Leuk", + "bank_code": "80527", + "bic": "RAIFCH22527", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Belalp-Simplon", + "short_name": "Raiffeisenbank Belalp-Simplon", + "bank_code": "80532", + "bic": "RAIFCH22532", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aletsch-Goms", + "short_name": "Raiffeisenbank Aletsch-Goms", + "bank_code": "80539", + "bic": "RAIFCH22539", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Visp", + "short_name": "Raiffeisenbank Region Visp", + "bank_code": "80553", + "bic": "RAIFCH22553", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Sion et R\u00e9gion", + "short_name": "Banque Raiffeisen Sion et R\u00e9gion", + "bank_code": "80572", + "bic": "RAIFCH22572", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Entremont", + "short_name": "Banque Raiffeisen Entremont", + "bank_code": "80581", + "bic": "RAIFCH22581", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen du Haut-L\u00e9man", + "short_name": "Banque Raiffeisen du Haut-L\u00e9man", + "bank_code": "80588", + "bic": "RAIFCH22588", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen Martigny et R\u00e9gion", + "short_name": "Banque Raiffeisen Martigny et R\u00e9gion", + "bank_code": "80595", + "bic": "RAIFCH22595", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Sierre & R\u00e9gion", + "short_name": "Banque Raiffeisen de Sierre & R\u00e9gion", + "bank_code": "80598", + "bic": "RAIFCH22598", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Massongex-St-Maurice-V\u00e9rossaz", + "short_name": "Banque Raiffeisen de Massongex-St-Maurice-V\u00e9rossaz", + "bank_code": "80606", + "bic": "RAIFCH22606", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Monthey", + "short_name": "Banque Raiffeisen de Monthey", + "bank_code": "80611", + "bic": "RAIFCH22611", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen des Communes du Haut Plateau", + "short_name": "Banque Raiffeisen des Communes du Haut Plateau", + "bank_code": "80615", + "bic": "RAIFCH22615", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Troistorrents-Morgins", + "short_name": "Banque Raiffeisen de Troistorrents-Morgins", + "bank_code": "80626", + "bic": "RAIFCH22626", + "country_code": "CH", + "primary": false + }, + { + "name": "Banque Raiffeisen de Val-d'Illiez-Champ\u00e9ry", + "short_name": "Banque 
Raiffeisen de Val-d'Illiez-Champ\u00e9ry", + "bank_code": "80627", + "bic": "RAIFCH22627", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank B\u00f6ttstein", + "short_name": "Raiffeisenbank B\u00f6ttstein", + "bank_code": "80652", + "bic": "RAIFCH22652", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank an der Limmat", + "short_name": "Raiffeisenbank an der Limmat", + "bank_code": "80666", + "bic": "RAIFCH22666", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mutschellen-Reppischtal", + "short_name": "Raiffeisenbank Mutschellen-Reppischtal", + "bank_code": "80673", + "bic": "RAIFCH22673", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wasserschloss", + "short_name": "Raiffeisenbank Wasserschloss", + "bank_code": "80690", + "bic": "RAIFCH22690", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Frick-Mettauertal", + "short_name": "Raiffeisenbank Regio Frick-Mettauertal", + "bank_code": "80691", + "bic": "RAIFCH22691", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Laufenburg", + "short_name": "Raiffeisenbank Regio Laufenburg", + "bank_code": "80696", + "bic": "RAIFCH22696", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aarau-Lenzburg", + "short_name": "Raiffeisenbank Aarau-Lenzburg", + "bank_code": "80698", + "bic": "RAIFCH22698", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Surbtal-Wehntal", + "short_name": "Raiffeisenbank Surbtal-Wehntal", + "bank_code": "80700", + "bic": "RAIFCH22700", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aare-Rhein", + "short_name": "Raiffeisenbank Aare-Rhein", + "bank_code": "80701", + "bic": "RAIFCH22701", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Kelleramt-Albis", + "short_name": "Raiffeisenbank Kelleramt-Albis", + "bank_code": "80702", + "bic": "RAIFCH22702", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aare-Reuss", + "short_name": "Raiffeisenbank Aare-Reuss", + "bank_code": "80704", + "bic": "RAIFCH22704", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Reuss-Lindenberg", + "short_name": "Raiffeisenbank Reuss-Lindenberg", + "bank_code": "80705", + "bic": "RAIFCH22705", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank M\u00f6hlin", + "short_name": "Raiffeisenbank M\u00f6hlin", + "bank_code": "80706", + "bic": "RAIFCH22706", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Rohrdorferberg-Fislisbach", + "short_name": "Raiffeisenbank Rohrdorferberg-Fislisbach", + "bank_code": "80719", + "bic": "RAIFCH22719", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Zofingen", + "short_name": "Raiffeisenbank Region Zofingen", + "bank_code": "80721", + "bic": "RAIFCH22721", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Reitnau-Rued", + "short_name": "Raiffeisenbank Reitnau-Rued", + "bank_code": "80723", + "bic": "RAIFCH22723", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Oberfreiamt", + "short_name": "Raiffeisenbank Oberfreiamt", + "bank_code": "80728", + "bic": "RAIFCH22728", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Villmergen", + "short_name": "Raiffeisenbank Villmergen", + "bank_code": "80736", + 
"bic": "RAIFCH22736", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank L\u00e4gern-Baregg", + "short_name": "Raiffeisenbank L\u00e4gern-Baregg", + "bank_code": "80740", + "bic": "RAIFCH22740", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wohlen", + "short_name": "Raiffeisenbank Wohlen", + "bank_code": "80744", + "bic": "RAIFCH22744", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Siggenthal-W\u00fcrenlingen", + "short_name": "Raiffeisenbank Siggenthal-W\u00fcrenlingen", + "bank_code": "80746", + "bic": "RAIFCH22746", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank W\u00fcrenlos", + "short_name": "Raiffeisenbank W\u00fcrenlos", + "bank_code": "80747", + "bic": "RAIFCH22747", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wegenstettertal", + "short_name": "Raiffeisenbank Wegenstettertal", + "bank_code": "80748", + "bic": "RAIFCH22748", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Zufikon", + "short_name": "Raiffeisenbank Zufikon", + "bank_code": "80749", + "bic": "RAIFCH22749", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Liestal-Oberbaselbiet", + "short_name": "Raiffeisenbank Liestal-Oberbaselbiet", + "bank_code": "80773", + "bic": "RAIFCH22773", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Leimental", + "short_name": "Raiffeisenbank Leimental", + "bank_code": "80774", + "bic": "RAIFCH22774", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Allschwil-Sch\u00f6nenbuch", + "short_name": "Raiffeisenbank Allschwil-Sch\u00f6nenbuch", + "bank_code": "80775", + "bic": "RAIFCH22775", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Muttenz", + "short_name": "Raiffeisenbank Regio Muttenz", + "bank_code": "80776", + "bic": "RAIFCH22776", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aesch-Pfeffingen", + "short_name": "Raiffeisenbank Aesch-Pfeffingen", + "bank_code": "80779", + "bic": "RAIFCH22779", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Reinach BL", + "short_name": "Raiffeisenbank Reinach BL", + "bank_code": "80780", + "bic": "RAIFCH22780", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisen", + "short_name": "Raiffeisen", + "bank_code": "80808", + "bic": "RAIFCH22XXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Kiesental", + "short_name": "Raiffeisenbank Kiesental", + "bank_code": "80811", + "bic": "RAIFCH22811", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Niedersimmental", + "short_name": "Raiffeisenbank Niedersimmental", + "bank_code": "80816", + "bic": "RAIFCH22816", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Steffisburg", + "short_name": "Raiffeisenbank Steffisburg", + "bank_code": "80817", + "bic": "RAIFCH22817", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Grauholz", + "short_name": "Raiffeisenbank Grauholz", + "bank_code": "80819", + "bic": "RAIFCH22819", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Frutigland", + "short_name": "Raiffeisenbank Frutigland", + "bank_code": "80820", + "bic": "RAIFCH22820", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Jungfrau", + "short_name": "Raiffeisenbank 
Jungfrau", + "bank_code": "80842", + "bic": "RAIFCH22842", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Haslital-Brienz", + "short_name": "Raiffeisenbank Region Haslital-Brienz", + "bank_code": "80843", + "bic": "RAIFCH22843", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Obersimmental-Saanenland", + "short_name": "Raiffeisenbank Obersimmental-Saanenland", + "bank_code": "80856", + "bic": "RAIFCH22856", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Schwarzwasser", + "short_name": "Raiffeisenbank Schwarzwasser", + "bank_code": "80860", + "bic": "RAIFCH22860", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Seeland", + "short_name": "Raiffeisenbank Seeland", + "bank_code": "80862", + "bic": "RAIFCH22862", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Thunersee", + "short_name": "Raiffeisenbank Thunersee", + "bank_code": "80867", + "bic": "RAIFCH22867", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Unteremmental", + "short_name": "Raiffeisenbank Unteremmental", + "bank_code": "80875", + "bic": "RAIFCH22875", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Oberes Emmental", + "short_name": "Raiffeisenbank Oberes Emmental", + "bank_code": "80882", + "bic": "RAIFCH22882", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Burgdorf", + "short_name": "Raiffeisenbank Region Burgdorf", + "bank_code": "80888", + "bic": "RAIFCH22888", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sense-Oberland", + "short_name": "Raiffeisenbank Sense-Oberland", + "bank_code": "80895", + "bic": "RAIFCH22895", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank See-Lac", + "short_name": "Raiffeisenbank See-Lac", + "bank_code": "80896", + "bic": "RAIFCH22896", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Freiburg Ost / Fribourg Est", + "short_name": "Raiffeisenbank Freiburg Ost / Fribourg Est", + "bank_code": "80901", + "bic": "RAIFCH22901", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sensetal", + "short_name": "Raiffeisenbank Sensetal", + "bank_code": "80905", + "bic": "RAIFCH22905", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Dulliken-Starrkirch", + "short_name": "Raiffeisenbank Dulliken-Starrkirch", + "bank_code": "80911", + "bic": "RAIFCH22911", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank G\u00e4u-Bipperamt", + "short_name": "Raiffeisenbank G\u00e4u-Bipperamt", + "bank_code": "80912", + "bic": "RAIFCH22912", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aare-Langete", + "short_name": "Raiffeisenbank Aare-Langete", + "bank_code": "80914", + "bic": "RAIFCH22914", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mittelg\u00f6sgen-Staffelegg", + "short_name": "Raiffeisenbank Mittelg\u00f6sgen-Staffelegg", + "bank_code": "80918", + "bic": "RAIFCH22918", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Balsthal-Laupersdorf", + "short_name": "Raiffeisenbank Balsthal-Laupersdorf", + "bank_code": "80930", + "bic": "RAIFCH22930", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wasseramt-Buchsi", + "short_name": "Raiffeisenbank Wasseramt-Buchsi", + "bank_code": 
"80938", + "bic": "RAIFCH22938", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Dornach", + "short_name": "Raiffeisenbank Dornach", + "bank_code": "80939", + "bic": "RAIFCH22939", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Unterg\u00e4u", + "short_name": "Raiffeisenbank Unterg\u00e4u", + "bank_code": "80947", + "bic": "RAIFCH22947", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank D\u00fcnnerntal-Guldental", + "short_name": "Raiffeisenbank D\u00fcnnerntal-Guldental", + "bank_code": "80962", + "bic": "RAIFCH22962", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank am Eppenberg", + "short_name": "Raiffeisenbank am Eppenberg", + "bank_code": "80965", + "bic": "RAIFCH22965", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Olten", + "short_name": "Raiffeisenbank Olten", + "bank_code": "80970", + "bic": "RAIFCH22970", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Untere Emme", + "short_name": "Raiffeisenbank Untere Emme", + "bank_code": "80971", + "bic": "RAIFCH22971", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Weissenstein", + "short_name": "Raiffeisenbank Weissenstein", + "bank_code": "80976", + "bic": "RAIFCH22976", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Appenzeller Hinterland", + "short_name": "Raiffeisenbank Appenzeller Hinterland", + "bank_code": "81011", + "bic": "RAIFCH22A11", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Heiden", + "short_name": "Raiffeisenbank Heiden", + "bank_code": "81012", + "bic": "RAIFCH22A12", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Appenzell", + "short_name": "Raiffeisenbank Appenzell", + "bank_code": "81023", + "bic": "RAIFCH22A23", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Glarnerland", + "short_name": "Raiffeisenbank Glarnerland", + "bank_code": "81031", + "bic": "RAIFCH22A31", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank B\u00fcndner Rheintal", + "short_name": "Raiffeisenbank B\u00fcndner Rheintal", + "bank_code": "81045", + "bic": "RAIFCH22A45", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen del Moesano", + "short_name": "Banca Raiffeisen del Moesano", + "bank_code": "81054", + "bic": "RAIFCH22A54", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mittelb\u00fcnden", + "short_name": "Raiffeisenbank Mittelb\u00fcnden", + "bank_code": "81063", + "bic": "RAIFCH22A63", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Cadi", + "short_name": "Raiffeisenbank Cadi", + "bank_code": "81072", + "bic": "RAIFCH22A72", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Surselva", + "short_name": "Raiffeisenbank Surselva", + "bank_code": "81073", + "bic": "RAIFCH22A73", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Pr\u00e4ttigau-Davos", + "short_name": "Raiffeisenbank Pr\u00e4ttigau-Davos", + "bank_code": "81084", + "bic": "RAIFCH22A84", + "country_code": "CH", + "primary": false + }, + { + "name": "Banca Raiffeisen Valposchiavo", + "short_name": "Banca Raiffeisen Valposchiavo", + "bank_code": "81103", + "bic": "RAIFCH22B03", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Engiadina Val M\u00fcstair", + 
"short_name": "Raiffeisenbank Engiadina Val M\u00fcstair", + "bank_code": "81144", + "bic": "RAIFCH22B44", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Pilatus", + "short_name": "Raiffeisenbank Pilatus", + "bank_code": "81165", + "bic": "RAIFCH22B65", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Adligenswil-Udligenswil-Meggen", + "short_name": "Raiffeisenbank Adligenswil-Udligenswil-Meggen", + "bank_code": "81168", + "bic": "RAIFCH22B68", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Hitzkirchertal", + "short_name": "Raiffeisenbank Hitzkirchertal", + "bank_code": "81169", + "bic": "RAIFCH22B69", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Berom\u00fcnster", + "short_name": "Raiffeisenbank Berom\u00fcnster", + "bank_code": "81170", + "bic": "RAIFCH22B70", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Emmen", + "short_name": "Raiffeisenbank Emmen", + "bank_code": "81177", + "bic": "RAIFCH22B77", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank im Entlebuch", + "short_name": "Raiffeisenbank im Entlebuch", + "bank_code": "81179", + "bic": "RAIFCH22B79", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Horw", + "short_name": "Raiffeisenbank Horw", + "bank_code": "81186", + "bic": "RAIFCH22B86", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Oberseetal", + "short_name": "Raiffeisenbank Oberseetal", + "bank_code": "81187", + "bic": "RAIFCH22B87", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Menznau-Wolhusen", + "short_name": "Raiffeisenbank Menznau-Wolhusen", + "bank_code": "81194", + "bic": "RAIFCH22B94", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sempachersee-Rottal S\u00fcd", + "short_name": "Raiffeisenbank Sempachersee-Rottal S\u00fcd", + "bank_code": "81196", + "bic": "RAIFCH22B96", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Luzern", + "short_name": "Raiffeisenbank Luzern", + "bank_code": "81203", + "bic": "RAIFCH22C03", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Rothenburg", + "short_name": "Raiffeisenbank Rothenburg", + "bank_code": "81204", + "bic": "RAIFCH22C04", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Luzerner Hinterland", + "short_name": "Raiffeisenbank Luzerner Hinterland", + "bank_code": "81211", + "bic": "RAIFCH22C11", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Ettiswil", + "short_name": "Raiffeisenbank Ettiswil", + "bank_code": "81212", + "bic": "RAIFCH22C12", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Luzerner Landschaft Nordwest", + "short_name": "Raiffeisenbank Luzerner Landschaft Nordwest", + "bank_code": "81214", + "bic": "RAIFCH22C14", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Nidwalden", + "short_name": "Raiffeisenbank Nidwalden", + "bank_code": "81223", + "bic": "RAIFCH22C23", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Obwalden", + "short_name": "Raiffeisenbank Obwalden", + "bank_code": "81232", + "bic": "RAIFCH22C32", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio St. Gallen West", + "short_name": "Raiffeisenbank Regio St. 
Gallen West", + "bank_code": "81241", + "bic": "RAIFCH22C41", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Werdenberg", + "short_name": "Raiffeisenbank Werdenberg", + "bank_code": "81251", + "bic": "RAIFCH22C51", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Benken", + "short_name": "Raiffeisenbank Benken", + "bank_code": "81256", + "bic": "RAIFCH22C56", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Bernhardzell", + "short_name": "Raiffeisenbank Bernhardzell", + "bank_code": "81259", + "bic": "RAIFCH22C59", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Unteres Toggenburg & Neckertal", + "short_name": "Raiffeisenbank Regio Unteres Toggenburg & Neckertal", + "bank_code": "81261", + "bic": "RAIFCH22C61", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Diepoldsau-Schmitter", + "short_name": "Raiffeisenbank Diepoldsau-Schmitter", + "bank_code": "81262", + "bic": "RAIFCH22C62", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Gossau-Andwil-Niederwil", + "short_name": "Raiffeisenbank Gossau-Andwil-Niederwil", + "bank_code": "81271", + "bic": "RAIFCH22C71", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Rapperswil-Jona", + "short_name": "Raiffeisenbank Rapperswil-Jona", + "bank_code": "81274", + "bic": "RAIFCH22C74", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sarganserland", + "short_name": "Raiffeisenbank Sarganserland", + "bank_code": "81281", + "bic": "RAIFCH22C81", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank M\u00f6rschwil", + "short_name": "Raiffeisenbank M\u00f6rschwil", + "bank_code": "81284", + "bic": "RAIFCH22C84", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Obertoggenburg", + "short_name": "Raiffeisenbank Obertoggenburg", + "bank_code": "81287", + "bic": "RAIFCH22C87", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Niederhelfenschwil", + "short_name": "Raiffeisenbank Niederhelfenschwil", + "bank_code": "81289", + "bic": "RAIFCH22C89", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Uzwil", + "short_name": "Raiffeisenbank Regio Uzwil", + "bank_code": "81291", + "bic": "RAIFCH22C91", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Unteres Rheintal", + "short_name": "Raiffeisenbank Unteres Rheintal", + "bank_code": "81295", + "bic": "RAIFCH22C95", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Rorschach", + "short_name": "Raiffeisenbank Region Rorschach", + "bank_code": "81296", + "bic": "RAIFCH22C96", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Oberes Rheintal", + "short_name": "Raiffeisenbank Oberes Rheintal", + "bank_code": "81297", + "bic": "RAIFCH22C97", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank am Ricken", + "short_name": "Raiffeisenbank am Ricken", + "bank_code": "81298", + "bic": "RAIFCH22C98", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sch\u00e4nis-Amden", + "short_name": "Raiffeisenbank Sch\u00e4nis-Amden", + "bank_code": "81302", + "bic": "RAIFCH22D02", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sennwald", + "short_name": "Raiffeisenbank Sennwald", + "bank_code": "81304", + "bic": 
"RAIFCH22D04", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Arbon", + "short_name": "Raiffeisenbank Regio Arbon", + "bank_code": "81307", + "bic": "RAIFCH22D07", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Waldkirch", + "short_name": "Raiffeisenbank Waldkirch", + "bank_code": "81313", + "bic": "RAIFCH22D13", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Walenstadt", + "short_name": "Raiffeisenbank Walenstadt", + "bank_code": "81314", + "bic": "RAIFCH22D14", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mittleres Toggenburg", + "short_name": "Raiffeisenbank Mittleres Toggenburg", + "bank_code": "81317", + "bic": "RAIFCH22D17", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mittelrheintal", + "short_name": "Raiffeisenbank Mittelrheintal", + "bank_code": "81319", + "bic": "RAIFCH22D19", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wil und Umgebung", + "short_name": "Raiffeisenbank Wil und Umgebung", + "bank_code": "81320", + "bic": "RAIFCH22D20", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Wittenbach-H\u00e4ggenschwil", + "short_name": "Raiffeisenbank Wittenbach-H\u00e4ggenschwil", + "bank_code": "81323", + "bic": "RAIFCH22D23", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Marbach-Rebstein", + "short_name": "Raiffeisenbank Marbach-Rebstein", + "bank_code": "81324", + "bic": "RAIFCH22D24", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Flawil-Degersheim-Mogelsberg-Oberuzwil", + "short_name": "Raiffeisenbank Flawil-Degersheim-Mogelsberg-Oberuzwil", + "bank_code": "81325", + "bic": "RAIFCH22D25", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Schaffhausen", + "short_name": "Raiffeisenbank Schaffhausen", + "bank_code": "81344", + "bic": "RAIFCH22D44", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Rigi", + "short_name": "Raiffeisenbank Rigi", + "bank_code": "81351", + "bic": "RAIFCH22D51", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region linker Z\u00fcrichsee", + "short_name": "Raiffeisenbank Region linker Z\u00fcrichsee", + "bank_code": "81356", + "bic": "RAIFCH22D56", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Muotathal", + "short_name": "Raiffeisenbank Muotathal", + "bank_code": "81360", + "bic": "RAIFCH22D60", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Einsiedeln-Ybrig", + "short_name": "Raiffeisenbank Einsiedeln-Ybrig", + "bank_code": "81361", + "bic": "RAIFCH22D61", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Altnau", + "short_name": "Raiffeisenbank Regio Altnau", + "bank_code": "81371", + "bic": "RAIFCH22D71", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Aadorf", + "short_name": "Raiffeisenbank Aadorf", + "bank_code": "81377", + "bic": "RAIFCH22D77", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank am Bichelsee", + "short_name": "Raiffeisenbank am Bichelsee", + "bank_code": "81378", + "bic": "RAIFCH22D78", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Mittelthurgau", + "short_name": "Raiffeisenbank Mittelthurgau", + "bank_code": "81380", + "bic": "RAIFCH22D80", + "country_code": "CH", + "primary": 
false + }, + { + "name": "Raiffeisenbank Untersee-Rhein", + "short_name": "Raiffeisenbank Untersee-Rhein", + "bank_code": "81382", + "bic": "RAIFCH22D82", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Frauenfeld", + "short_name": "Raiffeisenbank Frauenfeld", + "bank_code": "81384", + "bic": "RAIFCH22D84", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Neukirch-Romanshorn", + "short_name": "Raiffeisenbank Neukirch-Romanshorn", + "bank_code": "81398", + "bic": "RAIFCH22D98", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Seer\u00fccken", + "short_name": "Raiffeisenbank Seer\u00fccken", + "bank_code": "81401", + "bic": "RAIFCH22E01", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Regio Sirnach", + "short_name": "Raiffeisenbank Regio Sirnach", + "bank_code": "81402", + "bic": "RAIFCH22E02", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank T\u00e4gerwilen", + "short_name": "Raiffeisenbank T\u00e4gerwilen", + "bank_code": "81412", + "bic": "RAIFCH22E12", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank M\u00fcnchwilen-Tobel", + "short_name": "Raiffeisenbank M\u00fcnchwilen-Tobel", + "bank_code": "81414", + "bic": "RAIFCH22E14", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank W\u00e4ngi-Matzingen", + "short_name": "Raiffeisenbank W\u00e4ngi-Matzingen", + "bank_code": "81416", + "bic": "RAIFCH22E16", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Amriswil Bischofszell", + "short_name": "Raiffeisenbank Amriswil Bischofszell", + "bank_code": "81417", + "bic": "RAIFCH22E17", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Urnerland", + "short_name": "Raiffeisenbank Urnerland", + "bank_code": "81431", + "bic": "RAIFCH22E31", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Sch\u00e4chental", + "short_name": "Raiffeisenbank Sch\u00e4chental", + "bank_code": "81432", + "bic": "RAIFCH22E32", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Zug", + "short_name": "Raiffeisenbank Zug", + "bank_code": "81454", + "bic": "RAIFCH22E54", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Cham-Steinhausen", + "short_name": "Raiffeisenbank Cham-Steinhausen", + "bank_code": "81455", + "bic": "RAIFCH22E55", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank H\u00fcnenberg-Risch", + "short_name": "Raiffeisenbank H\u00fcnenberg-Risch", + "bank_code": "81456", + "bic": "RAIFCH22E56", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Menzingen-Neuheim", + "short_name": "Raiffeisenbank Menzingen-Neuheim", + "bank_code": "81457", + "bic": "RAIFCH22E57", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region \u00c4gerital-Sattel", + "short_name": "Raiffeisenbank Region \u00c4gerital-Sattel", + "bank_code": "81459", + "bic": "RAIFCH22E59", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Z\u00fcrcher Oberland", + "short_name": "Raiffeisenbank Z\u00fcrcher Oberland", + "bank_code": "81471", + "bic": "RAIFCH22E71", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Z\u00fcrich Flughafen", + "short_name": "Raiffeisenbank Z\u00fcrich Flughafen", + "bank_code": "81474", + "bic": "RAIFCH22E74", + "country_code": "CH", + "primary": false + 
}, + { + "name": "Raiffeisenbank Z\u00fcri-Unterland", + "short_name": "Raiffeisenbank Z\u00fcri-Unterland", + "bank_code": "81475", + "bic": "RAIFCH22E75", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Region Glatt", + "short_name": "Raiffeisenbank Region Glatt", + "bank_code": "81477", + "bic": "RAIFCH22E77", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Weinland", + "short_name": "Raiffeisenbank Weinland", + "bank_code": "81479", + "bic": "RAIFCH22E79", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank rechter Z\u00fcrichsee", + "short_name": "Raiffeisenbank rechter Z\u00fcrichsee", + "bank_code": "81481", + "bic": "RAIFCH22E81", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Winterthur", + "short_name": "Raiffeisenbank Winterthur", + "bank_code": "81485", + "bic": "RAIFCH22E85", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Basel", + "short_name": "Raiffeisenbank Basel", + "bank_code": "81486", + "bic": "RAIFCH22E86", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Z\u00fcrich", + "short_name": "Raiffeisenbank Z\u00fcrich", + "bank_code": "81487", + "bic": "RAIFCH22E87", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Bern", + "short_name": "Raiffeisenbank Bern", + "bank_code": "81488", + "bic": "RAIFCH22E88", + "country_code": "CH", + "primary": false + }, + { + "name": "Raiffeisenbank Thalwil", + "short_name": "Raiffeisenbank Thalwil", + "bank_code": "81490", + "bic": "RAIFCH22E90", + "country_code": "CH", + "primary": false + }, + { + "name": "RAIFFEISEN SCHWEIZ, FIRMENKUNDEN", + "short_name": "RAIFFEISEN SCHWEIZ, FIRMENKUNDEN", + "bank_code": "81491", + "bic": "RAIFCH22E91", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82961", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82962", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82963", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82964", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82965", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82966", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82967", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82968", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82969", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82971", + "bic": 
"CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82972", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82973", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82975", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82976", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82977", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82979", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82981", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82983", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82984", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank SA", + "short_name": "Cembra Money Bank SA", + "bank_code": "82986", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Cembra Money Bank AG", + "short_name": "Cembra Money Bank AG", + "bank_code": "82998", + "bic": "CMBNCHZHXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Bank von Roll AG", + "short_name": "Bank von Roll AG", + "bank_code": "83002", + "bic": "VRLLCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SIX x-clear AG", + "short_name": "SIX x-clear AG", + "bank_code": "83003", + "bic": "CLRXCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Allianz Suisse Versicherungs-Ges. AG (geb. Verm\u00f6gen)", + "short_name": "Allianz Suisse Versicherungs-Ges. AG (geb. Verm\u00f6gen)", + "bank_code": "83004", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Industrial and Commercial Bank of China Limited", + "short_name": "Industrial and Commercial Bank of China Limited", + "bank_code": "83006", + "bic": "ICBKCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "JL Securities SA", + "short_name": "JL Securities SA", + "bank_code": "83007", + "bic": "JLSECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Fidurh\u00f4ne SA", + "short_name": "Fidurh\u00f4ne SA", + "bank_code": "83008", + "bic": "FIDHCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Helvetia Schweizerische Versicherungsgesellschaft AG", + "short_name": "Helvetia Schweizerische Versicherungsgesellschaft AG", + "bank_code": "83009", + "bic": "HELNCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Allianz Suisse Lebensvers.-Ges. AG (geb. Verm\u00f6gen KL)", + "short_name": "Allianz Suisse Lebensvers.-Ges. AG (geb. 
Verm\u00f6gen KL)", + "bank_code": "83010", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Privatbank Bellerive AG", + "short_name": "Privatbank Bellerive AG", + "bank_code": "83011", + "bic": "PRBECHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Hottinger AG", + "short_name": "Hottinger AG", + "bank_code": "83013", + "bic": "JHOTCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Sygnum Bank AG", + "short_name": "Sygnum Bank AG", + "bank_code": "83014", + "bic": "SYGNCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "MBaer Merchant Bank AG", + "short_name": "MBaer Merchant Bank AG", + "bank_code": "83015", + "bic": "MBMECHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "AXA Versicherungen AG", + "short_name": "AXA Versicherungen AG", + "bank_code": "83016", + "bic": "AXIPCHZZWIN", + "country_code": "CH", + "primary": true + }, + { + "name": "SIX Digital Exchange AG", + "short_name": "SIX Digital Exchange AG", + "bank_code": "83017", + "bic": "SDXXCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SEBA Bank AG", + "short_name": "SEBA Bank AG", + "bank_code": "83018", + "bic": "SCRYCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Yapeal AG", + "short_name": "Yapeal AG", + "bank_code": "83019", + "bic": "YAPECHZ2XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Barclays Capital, Zurich Branch of Barclays Bank PLC, London", + "short_name": "Barclays Capital, Zurich Branch of Barclays Bank PLC, London", + "bank_code": "83020", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Cit\u00e9 Gestion SA", + "short_name": "Cit\u00e9 Gestion SA", + "bank_code": "83021", + "bic": "CITECHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "ISP Securities Ltd", + "short_name": "ISP Securities Ltd", + "bank_code": "83022", + "bic": "PLESCHZZXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Swiss Life AG (EV, geb. Verm\u00f6gen)", + "short_name": "Swiss Life AG (EV, geb. Verm\u00f6gen)", + "bank_code": "83024", + "bic": "SLAMCHZZEV0", + "country_code": "CH", + "primary": true + }, + { + "name": "Swiss Life AG (freies Verm\u00f6gen)", + "short_name": "Swiss Life AG (freies Verm\u00f6gen)", + "bank_code": "83025", + "bic": "SLAMCHZZNZ0", + "country_code": "CH", + "primary": true + }, + { + "name": "AXA Leben AG (Einzel, geb. Verm\u00f6gen)", + "short_name": "AXA Leben AG (Einzel, geb. Verm\u00f6gen)", + "bank_code": "83026", + "bic": "AXIPCHZZWLE", + "country_code": "CH", + "primary": true + }, + { + "name": "Baloise Leben AG (Kollektiv, geb. Verm\u00f6gen)", + "short_name": "Baloise Leben AG (Kollektiv, geb. Verm\u00f6gen)", + "bank_code": "83027", + "bic": "BAAMCHBBBKL", + "country_code": "CH", + "primary": true + }, + { + "name": "Baloise Leben AG (Einzel, geb. Verm\u00f6gen)", + "short_name": "Baloise Leben AG (Einzel, geb. Verm\u00f6gen)", + "bank_code": "83028", + "bic": "BAAMCHBBBEL", + "country_code": "CH", + "primary": true + }, + { + "name": "Flowbank SA", + "short_name": "Flowbank SA", + "bank_code": "83029", + "bic": "FLOKCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Allianz Suisse Lebensvers.-Ges. AG (geb. Verm\u00f6gen EL)", + "short_name": "Allianz Suisse Lebensvers.-Ges. AG (geb. 
Verm\u00f6gen EL)", + "bank_code": "83030", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Swisscanto Fondsleitung AG", + "short_name": "Swisscanto Fondsleitung AG", + "bank_code": "83031", + "bic": "SWIPCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrich Versicherungs.-Ges. AG gebundenes Verm\u00f6gen", + "short_name": "Z\u00fcrich Versicherungs.-Ges. AG gebundenes Verm\u00f6gen", + "bank_code": "83032", + "bic": "ZURICHZFZNT", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrich Lebensvers.-Ges. AG Group Life geb. Verm\u00f6gen", + "short_name": "Z\u00fcrich Lebensvers.-Ges. AG Group Life geb. Verm\u00f6gen", + "bank_code": "83033", + "bic": "ZURICHZFZKT", + "country_code": "CH", + "primary": true + }, + { + "name": "Z\u00fcrich Lebensvers.-Ges. AG Individual Life geb. Verm\u00f6gen", + "short_name": "Z\u00fcrich Lebensvers.-Ges. AG Individual Life geb. Verm\u00f6gen", + "bank_code": "83034", + "bic": "ZURICHZFZLT", + "country_code": "CH", + "primary": true + }, + { + "name": "Taurus SA", + "short_name": "Taurus SA", + "bank_code": "83035", + "bic": "TARSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "radicant bank ag", + "short_name": "radicant bank ag", + "bank_code": "83036", + "bic": "RDCTCHZHXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Vaudoise G\u00e9n\u00e9rale, Compagnie d'Assurances SA (geb. Verm\u00f6gen)", + "short_name": "Vaudoise G\u00e9n\u00e9rale, Compagnie d'Assurances SA (geb. Verm\u00f6gen)", + "bank_code": "83037", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Alpian SA", + "short_name": "Alpian SA", + "bank_code": "83039", + "bic": "APNACHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Kendra Securities House SA", + "short_name": "Kendra Securities House SA", + "bank_code": "83040", + "bic": "KSEHCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Klarpay AG", + "short_name": "Klarpay AG", + "bank_code": "83041", + "bic": "KLARCH22XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "MG Finance SA", + "short_name": "MG Finance SA", + "bank_code": "83042", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "Private Client Bank AG", + "short_name": "Private Client Bank AG", + "bank_code": "83043", + "bic": "PICLCHZ2XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank of China Limited, GenevaBranch", + "short_name": "Bank of China Limited, GenevaBranch", + "bank_code": "83045", + "bic": "BKCHCHGEXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "SWISS4.0 SA", + "short_name": "SWISS4.0 SA", + "bank_code": "83048", + "bic": "IWSSCHGGXXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Relio AG", + "short_name": "Relio AG", + "bank_code": "83050", + "bic": null, + "country_code": "CH", + "primary": true + }, + { + "name": "SR Saphirstein AG", + "short_name": "SR Saphirstein AG", + "bank_code": "83051", + "bic": "SAHHCHZ2XXX", + "country_code": "CH", + "primary": true + }, + { + "name": "Bank CIC (SCHWEIZ) AG", + "short_name": "Bank CIC (SCHWEIZ) AG", + "bank_code": "87105", + "bic": "CIALCHBBXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Deutsche Bank AG Z\u00fcrich Branch", + "short_name": "Deutsche Bank AG Z\u00fcrich Branch", + "bank_code": "87801", + "bic": "DEUTCHZZXXX", + "country_code": "CH", + "primary": false + }, + { + "name": "Deutsche Bank AG 
Z\u00fcrich Branch", + "short_name": "Deutsche Bank AG Z\u00fcrich Branch", + "bank_code": "87802", + "bic": "DEUTCHZZXXX", + "country_code": "CH", + "primary": false + } +] \ No newline at end of file diff --git a/schwifty/bank_registry/manual_ch.json b/schwifty/bank_registry/manual_ch.json deleted file mode 100644 index c50a145..0000000 --- a/schwifty/bank_registry/manual_ch.json +++ /dev/null @@ -1,242 +0,0 @@ -[ - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00268", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "ZKBKCHZZ80A", - "bank_code": "00700", - "name": "Z\u00fcrcher Kantonalbank", - "short_name": "Z\u00fcrcher Kantonalbank", - "primary": true - }, - { - "country_code": "CH", - "bic": "CRESCHZZ", - "bank_code": "04835", - "name": "Credit Suisse (Switzerland) Ltd.", - "short_name": "Credit Suisse (Switzerland) Ltd.", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00240", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH", - "bank_code": "00233", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "CIMMCHGGXXX", - "bank_code": "08822", - "name": "CIM BANQUE SA", - "short_name": "CIM BANQUE SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "RAIFCH22XXX", - "bank_code": "80808", - "name": "Raiffeisen Schweiz Genossenschaft", - "short_name": "Raiffeisen Schweiz Genossenschaft", - "primary": true - }, - { - "country_code": "CH", - "bic": "BLKBCH22", - "bank_code": "00769", - "name": "Basellandschaftliche Kantonalbank", - "short_name": "Basellandschaftliche Kantonalbank", - "primary": true - }, - { - "country_code": "CH", - "bic": "RAIFCH22XXX", - "bank_code": "80102", - "name": "Raiffeisen Schweiz Genossenschaft", - "short_name": "Raiffeisen Schweiz Genossenschaft", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00228", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "CRESCHZZ80A", - "bank_code": "04835", - "name": "Credit Suisse (Switzerland) Ltd.", - "short_name": "Credit Suisse (Switzerland) Ltd.", - "primary": true - }, - { - "country_code": "CH", - "bic": "POFICHBEXXX", - "bank_code": "09000", - "name": "PostFinance AG", - "short_name": "PostFinance AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "MIGRCHZZXXX", - "bank_code": "08401", - "name": "Migros Bank AG", - "short_name": "Migros Bank AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "BCNNCH22", - "bank_code": "00766", - "name": "Banque cantonale neuch\u00e2teloise", - "short_name": "Banque cantonale neuch\u00e2teloise", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00244", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00206", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00243", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "PIGUCH22", - "bank_code": "08777", - "name": "Piguet Galland & Cie 
SA", - "short_name": "Piguet Galland & Cie SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH80A", - "bank_code": "00260", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "BCVSCH2LXXX", - "bank_code": "00765", - "name": "Banque Cantonale du Valais", - "short_name": "Banque Cantonale du Valais", - "primary": true - }, - { - "country_code": "CH", - "bic": "BLEMCHGG", - "bank_code": "08838", - "name": "Banque du L\u00e9man SA", - "short_name": "Banque du L\u00e9man SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "INCOCHZZXXX", - "bank_code": "08799", - "name": "InCore Bank AG", - "short_name": "InCore Bank AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "DUBACHGGXXX", - "bank_code": "08843", - "name": "Dukascopy Bank SA", - "short_name": "Dukascopy Bank SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH", - "bank_code": "00240", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "POSOCH22", - "bank_code": "08252", - "name": "Banca Popolare di Sondrio (Suisse) SA", - "short_name": "Banca Popolare di Sondrio (Suisse) SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH", - "bank_code": "00251", - "name": "UBS Switzerland AG", - "short_name": "UBS Switzerland AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "BCVLCH2L", - "bank_code": "00767", - "name": "Banque Cantonale Vaudoise", - "short_name": "Banque Cantonale Vaudoise", - "primary": true - }, - { - "country_code": "CH", - "bic": "COBACHZHXXX", - "bank_code": "08836", - "name": "COMMERZBANK AG", - "short_name": "COMMERZBANK AG", - "primary": true - }, - { - "country_code": "CH", - "bic": "SWQBCHZZXXX", - "bank_code": "08781", - "name": "Swissquote Bank SA", - "short_name": "Swissquote Bank SA", - "primary": true - }, - { - "country_code": "CH", - "bic": "UBSWCHZH40M", - "bank_code": "00292", - "name": "UBS SWITZERLAND AG", - "short_name": "UBS SWITZERLAND AG", - "primary": true - } -] diff --git a/scripts/get_bank_registry_ch.py b/scripts/get_bank_registry_ch.py new file mode 100644 index 0000000..9850bac --- /dev/null +++ b/scripts/get_bank_registry_ch.py @@ -0,0 +1,38 @@ +import json +from typing import Any + +import requests + + +URL = "https://api.six-group.com/api/epcd/bankmaster/v3/bankmaster.json" + + +def fetch() -> dict[str, Any]: + return requests.get(URL).json()["entries"] + + +def process(records: dict[str, Any]) -> dict[str, Any]: + registry: list[dict[str, Any]] = [] + + for record in records: + if record["entryType"] != "BankMaster" or record["country"] != "CH": + continue + name = short_name = record["bankOrInstitutionName"] + if name == "UBS Switzerland AG": + name += f" - {record['townName']}" + registry.append( + { + "name": name, + "short_name": short_name, + "bank_code": f"{record['iid']:0>5}", + "bic": record.get("bic"), + "country_code": "CH", + "primary": record["iidType"] == "HEADQUARTERS", + } + ) + return registry + + +if __name__ == "__main__": + with open("schwifty/bank_registry/generated_ch.json", "w") as fp: + json.dump(process(fetch()), fp, indent=2)
Swiss bank codes and BICs Here's an up-to-date list of Swiss banks. Is there any way to incorporate them into the package? Version 3.0 of the API provides bank codes as well as the BIC: https://api.six-group.com/api/epcd/bankmaster/v3/bankmaster.json
Thanks for the source. I will give it a go!
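As a quick sanity check of the generated registry above (a hypothetical usage sketch, not part of this record; the printed values assume the installed schwifty version loads the Swiss registry produced by the patch):

```
from schwifty import BIC

# Resolve a Swiss bank code (the 5-digit IID from the generated registry)
# to a BIC. 80808 is the generic Raiffeisen entry listed above.
bic = BIC.from_bank_code("CH", "80808")
print(bic)              # expected: RAIFCH22XXX
print(bic.branch_code)  # 'XXX' marks the generic head-office code
```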
2023-11-26T17:52:04
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-166
a7749cc6acb38b85e6cff4e8b23e4b126b38d470
diff --git a/schwifty/bic.py b/schwifty/bic.py index c530b86..b9e1b14 100644 --- a/schwifty/bic.py +++ b/schwifty/bic.py @@ -187,14 +187,16 @@ def from_bank_code(cls, country_code: str, bank_code: str) -> BIC: candidates = cls.candidates_from_bank_code(country_code, bank_code) if len(candidates) > 1: # If we have multiple candidates, we try to pick the - # one with XXX as branch code which is the most generic one. - generic_codes = sorted( - [c for c in candidates if c.branch_code == "XXX" or not c.branch_code], - key=len, - reverse=True, - ) + # one with no branch code which is the most generic one. + generic_codes = [c for c in candidates if not c.branch_code] if generic_codes: - return generic_codes[0] + return sorted(generic_codes)[-1] + + # If we don't have one, we try to pick the one with + # 'XXX' as a branch code + generic_codes = [c for c in candidates if c.branch_code == "XXX"] + if generic_codes: + return sorted(generic_codes)[-1] return candidates[0] except KeyError as e: raise exceptions.InvalidBankCode(
When having multiple BIC candidates, always pick the most generic one. Implements #164

## Rationale

Following up on https://github.com/mdomke/schwifty/pull/154, I realized that when `BIC.from_bank_code` finds multiple candidates, we should use the one with `XXX` as branch code by default.

![image](https://github.com/mdomke/schwifty/assets/229453/9d8f0ca2-c69e-4f00-8864-e6d54fe7494e)

For instance:

```
>>> BIC.candidates_from_bank_code('FR', '30004')  # doctest: +NORMALIZE_WHITESPACE
[<BIC=BNPAFRPPIFN>, <BIC=BNPAFRPPPAA>, <BIC=BNPAFRPPMED>, <BIC=BNPAFRPPCRN>,
 <BIC=BNPAFRPP>, <BIC=BNPAFRPPPAE>, <BIC=BNPAFRPPPBQ>, <BIC=BNPAFRPPNFE>,
 <BIC=BNPAFRPPPGN>, <BIC=BNPAFRPPXXX>, <BIC=BNPAFRPPBOR>, <BIC=BNPAFRPPCRM>,
 <BIC=BNPAFRPPPVD>, <BIC=BNPAFRPPPTX>, <BIC=BNPAFRPPPAC>, <BIC=BNPAFRPPPLZ>,
 <BIC=BNPAFRPP039>, <BIC=BNPAFRPPENG>, <BIC=BNPAFRPPNEU>, <BIC=BNPAFRPPORE>,
 <BIC=BNPAFRPPPEE>, <BIC=BNPAFRPPPXV>, <BIC=BNPAFRPPIFO>]
```

should return `BNPAFRPPXXX` by default:

```
>>> BIC.from_bank_code('FR', '30004')
BNPAFRPPXXX
```

The rationale is that we don't have enough information to know the branch code; moreover, using `XXX` in a SEPA Direct Debit lets the bank decide where to route the money rather than sending it to a random and invalid branch.
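The tie-breaking policy introduced by the diff can be summarised as a small standalone function (a sketch mirroring the patched logic, not the library code itself; `candidates` is assumed to be the list returned by `BIC.candidates_from_bank_code`):

```
def pick_most_generic(candidates):
    # Prefer a candidate with no branch code at all, then one with the
    # 'XXX' placeholder, and only then fall back to the first hit --
    # the same order as the patched BIC.from_bank_code.
    no_branch = [c for c in candidates if not c.branch_code]
    if no_branch:
        return sorted(no_branch)[-1]
    placeholders = [c for c in candidates if c.branch_code == "XXX"]
    if placeholders:
        return sorted(placeholders)[-1]
    return candidates[0]
```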
2023-11-15T14:11:03
0.0
[]
[]
mdomke/schwifty
mdomke__schwifty-45
7de6194fc68070938d871be439fa3d40fab2fe42
diff --git a/schwifty/iban.py b/schwifty/iban.py index 7f4ef90..4f9e14a 100644 --- a/schwifty/iban.py +++ b/schwifty/iban.py @@ -199,7 +199,7 @@ def _validate_characters(self): raise ValueError("Invalid characters in IBAN {}".format(self.compact)) def _validate_checksum(self): - if self.numeric % 97 != 1: + if self.numeric % 97 != 1 or self._calc_checksum_digits() != self.checksum_digits: raise ValueError("Invalid checksum digits") def _validate_length(self):
Not all test IBANs from the reference set behave as expected

Under https://www.iban.com/testibans you can find the set of test IBANs. Most of them behave as expected, but some do not. `GB02BARC20201530093451` and `GB68CITI18500483515538` should result in account-number invalidation, but they don't. `GB01BARC20714583608387` and `GB00HLFX11016111455365` should result in IBAN checksum invalidation, but they don't. Please let me know if I can help.
Looks like `_validate_checksum` misses the actual validation of the checksum digits in the IBAN:

```
def _validate_checksum(self):
    if self.numeric % 97 != 1:
        raise InvalidChecksumDigits("Invalid checksum digits")
```

and must be extended with

```
if self.checksum_digits != self._calc_checksum_digits():
    raise InvalidChecksumDigits("Invalid checksum digits")
```
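For reference, the ISO 7064 mod-97 scheme behind both checks can be written out as a standalone sketch (illustrative only, not schwifty's internals; the function names are invented):

```
def _numeric(iban):
    # Move the first four characters to the end and map A..Z to 10..35.
    rearranged = iban[4:] + iban[:4]
    return int("".join(str(int(c, 36)) for c in rearranged))

def checksum_digits(iban):
    # Recompute the two check digits against a '00' placeholder.
    placeholder = iban[:2] + "00" + iban[4:]
    return "{:02d}".format(98 - _numeric(placeholder) % 97)

def is_valid(iban):
    # Both conditions mirror the patched _validate_checksum: the whole
    # number must be congruent to 1 (mod 97) *and* the stored check
    # digits must match the recomputed ones.
    return _numeric(iban) % 97 == 1 and iban[2:4] == checksum_digits(iban)

# The checksum cases from the issue are now rejected: check digits of
# 00/01 can satisfy the mod-97 test, but the generation formula only
# ever yields values from 02 to 98, so the comparison catches them.
assert not is_valid("GB01BARC20714583608387")
assert not is_valid("GB00HLFX11016111455365")
```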
2020-12-26T19:05:49
0.0
[]
[]
googlefonts/fez
googlefonts__fez-16
a73d5e622d9a87e2037a462332743c121e328cbd
diff --git a/lib/fez/Anchors.py b/lib/fez/Anchors.py index 29ee17c..98398e3 100644 --- a/lib/fez/Anchors.py +++ b/lib/fez/Anchors.py @@ -28,6 +28,10 @@ name, and a class filter, which is either ``marks``, ``bases``, ``cursive`` or a glyph selector. +It also optionally takes a mark class name: + + Feature mark { Attach &top &_top @topmarks bases; }; + The verb acts by collecting all the glyphs which have anchors defined, and filtering them according to their class definition in the ``GDEF`` table. In this case, we have asked for ``bases``, so glyph ``A`` will be selected. @@ -76,7 +80,8 @@ Attach_GRAMMAR = """ ?start: action -action: "&" BARENAME "&" BARENAME (ATTACHTYPE | glyphselector) maybe_languages +action: "&" BARENAME "&" BARENAME maybe_classname (ATTACHTYPE | glyphselector) maybe_languages +maybe_classname: classname? maybe_languages: languages? ATTACHTYPE: "marks" | "bases" | "cursive" """ @@ -149,7 +154,7 @@ def load_anchor_variable(self, glyphname): class Attach(FEZVerb): def action(self, args): - (aFrom, aTo, attachtype, languages) = args + (aFrom, aTo, markClass, attachtype, languages) = args bases = {} marks = {} @@ -195,12 +200,15 @@ def _category(k): return [ fontFeatures.Routine( rules=[ - fontFeatures.Attachment(aFrom, aTo, bases, marks, font=self.parser.font) + fontFeatures.Attachment(aFrom, aTo, markClass, bases, marks, font=self.parser.font) ], languages = languages ) ] + def maybe_classname(self, args): + return args[0] if len(args) else '' + def languages(self, args): rv = [] while args: diff --git a/lib/fez/ClassDefinition.py b/lib/fez/ClassDefinition.py index ed941a6..418a9c5 100644 --- a/lib/fez/ClassDefinition.py +++ b/lib/fez/ClassDefinition.py @@ -182,12 +182,12 @@ DefineClass_GRAMMAR = """ ?start: action -action: CLASSNAME "=" primary +action: classname "=" primary """ DefineClassBinned_GRAMMAR = """ ?start: action -action: CLASSNAME "[" METRIC "," NUMBER "]" "=" primary +action: classname "[" METRIC "," NUMBER "]" "=" primary """ PARSEOPTS = dict(use_helpers=True) @@ -256,7 +256,6 @@ def conjunction(self, args): def action(self, args): parser = self.parser classname, glyphs = args - classname = classname[1:] # -@ self._add_glyphs_to_named_class(glyphs, classname) diff --git a/lib/fez/__init__.py b/lib/fez/__init__.py index 9ee5617..df98636 100644 --- a/lib/fez/__init__.py +++ b/lib/fez/__init__.py @@ -251,8 +251,10 @@ def resolve_as_glyph(self): STARTGLYPHNAME: LETTER | DIGIT | "_" MIDGLYPHNAME: STARTGLYPHNAME | "." | "-" BARENAME: STARTGLYPHNAME MIDGLYPHNAME* - inlineclass: "[" (WS* (CLASSNAME | BARENAME | REGEX | UNICODEGLYPH))* "]" - CLASSNAME: "@" STARTGLYPHNAME+ + inlineclass: "[" (WS* (_glyphselector_inner))* "]" + CLASSNAME: STARTGLYPHNAME+ + _CLASS_SIGIL: "@" + classname: _CLASS_SIGIL CLASSNAME ANYTHING: /[^\\s]/ REGEX: "/" ANYTHING* "/" @@ -263,7 +265,8 @@ def resolve_as_glyph(self): SUFFIXTYPE: ("." 
| "~") glyphsuffix: SUFFIXTYPE STARTGLYPHNAME+ - glyphselector: (unicoderange | UNICODEGLYPH | REGEX | CLASSNAME | inlineclass | singleglyph) glyphsuffix* + _glyphselector_inner: (unicoderange | UNICODEGLYPH | REGEX | classname | inlineclass | singleglyph) + glyphselector: _glyphselector_inner glyphsuffix* singleglyph: BARENAME | glyph_variable glyph_variable: VARIABLE @@ -534,6 +537,9 @@ def VARIABLE(self, tok): def singleglyph(self, args): return args[0] + def classname(self, args): + return args[0] + def integer_constant(self, args): return int(args[0]) @@ -568,7 +574,7 @@ def glyph_variable(self, args): return ScalarOrVariable(args[0], self.parser) def unicoderange(self, args): - return lark.Token("UNICODERANGE", range(_UNICODEGLYPH(args[0].value), _UNICODEGLYPH(args[1].value)+1), args[0].pos_in_stream) + return lark.Token("UNICODERANGE", range(_UNICODEGLYPH(args[0].value), _UNICODEGLYPH(args[1].value)+1)) def inlineclass(self, args): return lark.Token("INLINECLASS", [self._glyphselector(t) for t in args if t.type != "WS"]) @@ -576,8 +582,6 @@ def inlineclass(self, args): def _glyphselector(self, token): if isinstance(token, ScalarOrVariable): return {"variable": token} - if token.type == "CLASSNAME": - val = token.value[1:] elif token.type == "REGEX": val = token.value[1:-1] elif token.type == "UNICODEGLYPH":
[Feature request] Multiple contiguous Unicode ranges as a glyph selector

Hello! In several Indic (and probably other) scripts, the Unicode ranges for a group such as 'all consonants' may not be contiguous. Would it be possible to allow multiple Unicode ranges in a single class definition? Example:

```
# Bengali Unicode consonant ranges are non-contiguous.
# The following example mixes range and single-code-point selectors.
DefineClass @consonants = [U+0995=>U+09A8 U+09AA=>U+09B0 U+09B2 U+09B6=>U+09B9];
```

Or is there another way to do this?

Fix failing tests
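A minimal sketch of the kind of grammar involved — this is a toy Lark grammar written for illustration, not the actual FEZ grammar (the rule and token names are ours); it shows brackets accepting any mix of Unicode ranges and single code points, which is what the patch's reworked `inlineclass`/`_glyphselector_inner` rules permit. The patch also stops passing the positional `pos_in_stream` argument when constructing `lark.Token`, which appears consistent with newer lark releases renaming that attribute to `start_pos` — plausibly the source of the failing tests.

```python
# Toy grammar (illustrative, not the FEZ grammar): a bracketed class may
# contain any mix of U+XXXX=>U+YYYY ranges and single U+XXXX code points.
import lark

GRAMMAR = r"""
start: "[" selector* "]"
selector: unicoderange | UNICODEGLYPH
unicoderange: UNICODEGLYPH "=>" UNICODEGLYPH
UNICODEGLYPH: "U+" /[0-9A-Fa-f]{4,6}/
%import common.WS
%ignore WS
"""

parser = lark.Lark(GRAMMAR)
tree = parser.parse("[U+0995=>U+09A8 U+09AA=>U+09B0 U+09B2 U+09B6=>U+09B9]")
print(tree.pretty())  # three ranges and one single code point
```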
2023-02-03T08:25:45
0.0
[]
[]
MDAnalysis/membrane-curvature
MDAnalysis__membrane-curvature-116
c44be15e46b19195cb29fa6e04bd6739522b3066
diff --git a/membrane_curvature/base.py b/membrane_curvature/base.py index 8029dc9..364e9d5 100644 --- a/membrane_curvature/base.py +++ b/membrane_curvature/base.py @@ -135,6 +135,8 @@ def __init__(self, universe, select='all', self.n_y_bins = n_y_bins self.x_range = x_range if x_range else (0, universe.dimensions[0]) self.y_range = y_range if y_range else (0, universe.dimensions[1]) + self.dx = (self.x_range[1] - self.x_range[0]) / n_x_bins + self.dy = (self.y_range[1] - self.y_range[0]) / n_y_bins # Raise if selection doesn't exist if len(self.ag) == 0: @@ -183,8 +185,12 @@ def _single_frame(self): n_y_bins=self.n_y_bins, x_range=self.x_range, y_range=self.y_range) - self.results.mean[self._frame_index] = mean_curvature(self.results.z_surface[self._frame_index]) - self.results.gaussian[self._frame_index] = gaussian_curvature(self.results.z_surface[self._frame_index]) + self.results.mean[self._frame_index] = mean_curvature( + self.results.z_surface[self._frame_index], self.dx, self.dy + ) + self.results.gaussian[self._frame_index] = gaussian_curvature( + self.results.z_surface[self._frame_index], self.dx, self.dy + ) def _conclude(self): self.results.average_z_surface = np.nanmean(self.results.z_surface, axis=0) diff --git a/membrane_curvature/curvature.py b/membrane_curvature/curvature.py index 06cbfae..7e96848 100644 --- a/membrane_curvature/curvature.py +++ b/membrane_curvature/curvature.py @@ -45,7 +45,7 @@ import numpy as np -def gaussian_curvature(Z): +def gaussian_curvature(Z, *varargs): """ Calculate Gaussian curvature from Z cloud points. @@ -54,7 +54,9 @@ def gaussian_curvature(Z): ---------- Z: np.ndarray. Multidimensional array of shape (n,n). - + varargs : list of scalar or array, optional + Spacing between f values. Default unitary spacing for all dimensions. + See np.gradient docs for more information. Returns ------- @@ -64,16 +66,16 @@ def gaussian_curvature(Z): """ - Zy, Zx = np.gradient(Z) - Zxy, Zxx = np.gradient(Zx) - Zyy, _ = np.gradient(Zy) + Zx, Zy = np.gradient(Z, *varargs) + Zxx, Zxy = np.gradient(Zx, *varargs) + _, Zyy = np.gradient(Zy, *varargs) - K = (Zxx * Zyy - (Zxy ** 2)) / (1 + (Zx ** 2) + (Zy ** 2)) ** 2 + K = (Zxx * Zyy - (Zxy**2)) / (1 + (Zx**2) + (Zy**2)) ** 2 return K -def mean_curvature(Z): +def mean_curvature(Z, *varargs): """ Calculates mean curvature from Z cloud points. @@ -82,7 +84,9 @@ def mean_curvature(Z): ---------- Z: np.ndarray. Multidimensional array of shape (n,n). - + varargs : list of scalar or array, optional + Spacing between f values. Default unitary spacing for all dimensions. + See np.gradient docs for more information. Returns ------- @@ -92,11 +96,11 @@ def mean_curvature(Z): """ - Zy, Zx = np.gradient(Z) - Zxy, Zxx = np.gradient(Zx) - Zyy, _ = np.gradient(Zy) + Zx, Zy, = np.gradient(Z, *varargs) + Zxx, Zxy = np.gradient(Zx, *varargs) + _, Zyy = np.gradient(Zy, *varargs) H = (1 + Zx**2) * Zyy + (1 + Zy**2) * Zxx - 2 * Zx * Zy * Zxy - H = H / (2 * (1 + Zx**2 + Zy**2)**(1.5)) + H = H / (2 * (1 + Zx**2 + Zy**2) ** (1.5)) return H
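For reference, the two patched functions compute the standard curvature formulas for a height field (Monge patch) z = f(x, y). The expressions below are textbook differential geometry, written out here to match the code, with the partial derivatives estimated via `np.gradient`:

```latex
% Gaussian (K) and mean (H) curvature of a Monge patch z = f(x, y);
% these match the expressions computed in the patched functions.
\[
K = \frac{f_{xx}\,f_{yy} - f_{xy}^{2}}{\left(1 + f_x^{2} + f_y^{2}\right)^{2}},
\qquad
H = \frac{(1 + f_x^{2})\,f_{yy} - 2 f_x f_y f_{xy} + (1 + f_y^{2})\,f_{xx}}
         {2\left(1 + f_x^{2} + f_y^{2}\right)^{3/2}}.
\]
```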
Gradient calculation should respect arbitrary grid spacing

The evaluation of the height field's derivatives on the grid currently assumes unit grid spacing, and nothing enforces that assumption. The computed curvatures will be incorrect whenever the actual grid spacing isn't one.
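A minimal sketch of why this matters — `np.gradient` assumes unit spacing unless spacing arguments are passed, so the derivatives (and therefore the curvatures) of a surface sampled on a `dx != 1` grid come out scaled. The surface, spacing values, and variable names below are illustrative:

```python
import numpy as np

dx = dy = 0.5                      # hypothetical grid spacing
x = np.arange(0.0, 10.0, dx)
y = np.arange(0.0, 10.0, dy)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = 0.1 * (X**2 + Y**2)            # paraboloid height field; dZ/dx = 0.2 * X

# Implicit unit spacing: every derivative is off by a factor of 1/dx.
# edge_order=2 makes the finite differences exact for a quadratic surface.
Zx_wrong, _ = np.gradient(Z, edge_order=2)
# Passing the spacing through, as the patch does.
Zx_right, _ = np.gradient(Z, dx, dy, edge_order=2)

print(np.allclose(Zx_right, 0.2 * X))   # True
print(np.allclose(Zx_wrong, 0.2 * X))   # False (scaled by dx)
```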
2023-11-20T23:59:49
0.0
[]
[]
MDAnalysis/membrane-curvature
MDAnalysis__membrane-curvature-112
0cf48563001462cbd1bcf0464371392fe6b905d0
diff --git a/.github/workflows/CI.yaml b/.github/workflows/CI.yaml index 3ae20bf..1f6283c 100644 --- a/.github/workflows/CI.yaml +++ b/.github/workflows/CI.yaml @@ -24,16 +24,16 @@ jobs: fail-fast: false matrix: os: [macOS-latest, ubuntu-latest, windows-latest] - python-version: [3.8, 3.9, "3.10"] + python-version: ["3.9", "3.10", "3.11"] env_file: [env, ] include: - name: minimum_requirements os: ubuntu-latest - python-version: 3.8 + python-version: 3.9 env_file: min steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 - name: Additional info about the build shell: bash @@ -44,22 +44,21 @@ jobs: # More info on options: https://github.com/conda-incubator/setup-miniconda # An environment for the minimum versions - - uses: conda-incubator/setup-miniconda@v2 + - name: setup micromamba + uses: mamba-org/setup-micromamba@v1 with: - python-version: ${{ matrix.python-version }} environment-file: devtools/conda-envs/test_${{ matrix.env_file }}.yaml - channels: conda-forge, defaults - activate-environment: test - auto-update-conda: false - auto-activate-base: false - show-channel-urls: true - + environment-name: test + create-args: >- + python=${{ matrix.python-version }} + - name: Install package # conda setup requires this special shell shell: bash -l {0} run: | python -m pip install . --no-deps - conda list + pip list + micromamba list - name: Run tests # conda setup requires this special shell @@ -68,7 +67,9 @@ jobs: pytest -v --cov=membrane_curvature --cov-report=xml --color=yes membrane_curvature/tests/ - name: CodeCov - uses: codecov/codecov-action@v2 + if: ${{ github.repository == 'MDAnalysis/membrane-curvature' + && github.event_name == 'pull_request' }} + uses: codecov/codecov-action@v3 with: file: ./coverage.xml flags: unittests @@ -80,10 +81,10 @@ jobs: strategy: fail-fast: false matrix: - latest-python : ["3.10", "3.11"] + latest-python : ["3.9", "3.10", "3.11", "3.12"] steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 - uses: actions/setup-python@v4 with: python-version: ${{ matrix.latest-python }} diff --git a/.readthedocs.yaml b/.readthedocs.yaml new file mode 100644 index 0000000..31adaf0 --- /dev/null +++ b/.readthedocs.yaml @@ -0,0 +1,19 @@ +# readthedocs.yml + +version: 2 + +sphinx: + configuration: docs/conf.py + +build: + os: ubuntu-22.04 + tools: + python: "mambaforge-22.9" + +python: + install: + - method: setuptools + path: . + +conda: + environment: docs/requirements.yaml diff --git a/docs/conf.py b/docs/conf.py index 5367ce8..3d36894 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -13,7 +13,6 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. 
# Incase the project was not installed -import membrane_curvature import os import sys sys.path.insert(0, os.path.abspath('..')) diff --git a/docs/requirements.yaml b/docs/requirements.yaml index 2b3f044..97295e2 100644 --- a/docs/requirements.yaml +++ b/docs/requirements.yaml @@ -10,8 +10,9 @@ dependencies: - MDAnalysis - ipython - MDAnalysisTests + - sphinx_rtd_theme # Pip-only installs - pip: - nbsphinx - ipywidgets - - nglview \ No newline at end of file + - nglview diff --git a/membrane_curvature/data/__init__.py b/membrane_curvature/data/__init__.py new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/membrane_curvature/data/__init__.py @@ -0,0 +1,1 @@ + diff --git a/readthedocs.yml b/readthedocs.yml deleted file mode 100644 index 69d6db5..0000000 --- a/readthedocs.yml +++ /dev/null @@ -1,15 +0,0 @@ -# readthedocs.yml - -version: 2 - -build: - image: latest - -python: - version: 3.8 - install: - - method: pip - path: . - -conda: - environment: docs/requirements.yaml \ No newline at end of file diff --git a/setup.py b/setup.py index d8cb529..e4e8d2e 100644 --- a/setup.py +++ b/setup.py @@ -59,7 +59,7 @@ # 'Mac OS-X', # 'Unix', # 'Windows'], # Valid platforms your code works on, adjust to your flavor - python_requires=">=3.8", # Python version restrictions + python_requires=">=3.9", # Python version restrictions classifiers = [ 'Development Status :: 5 - Production/Stable', 'Environment :: Console', @@ -70,10 +70,10 @@ 'Operating System :: POSIX', 'Topic :: Scientific/Engineering ', 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.11', + 'Programming Language :: Python :: 3.12', ] # Manual control if final package is compressible or not, set False to prevent the .egg from being made # zip_safe=False,
Try fixing rtd

## Description

Fixes the RTD (Read the Docs) build.
2023-10-27T10:26:34
0.0
[]
[]
MDAnalysis/membrane-curvature
MDAnalysis__membrane-curvature-23
7ebc22fbe88ad049140547f3faae724e44f76a20
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 5904883..bcf5ea8 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,7 +1,7 @@ # How to contribute We welcome contributions from external contributors, and this document -describes how to merge code changes into this mdakit_membcurv. +describes how to merge code changes into this membrane_curvature. ## Getting Started @@ -19,7 +19,7 @@ describes how to merge code changes into this mdakit_membcurv. [branch](https://help.github.com/articles/creating-and-deleting-branches-within-your-repository/) with the branch name relating to the feature you are going to add. * When you are ready for others to examine and comment on your new feature, - navigate to your fork of mdakit_membcurv on GitHub and open a [pull + navigate to your fork of membrane_curvature on GitHub and open a [pull request](https://help.github.com/articles/using-pull-requests/) (PR). Note that after you launch a PR from one of your fork's branches, all subsequent commits to that branch will be added to the open pull request @@ -29,7 +29,7 @@ describes how to merge code changes into this mdakit_membcurv. * If you're providing a new feature, you must add test cases and documentation. * When the code is ready to go, make sure you run the test suite using pytest. * When you're ready to be considered for merging, check the "Ready to go" - box on the PR page to let the mdakit_membcurv devs know that the changes are complete. + box on the PR page to let the membrane_curvature devs know that the changes are complete. The code will not be merged until this box is checked, the continuous integration returns checkmarks, and multiple core developers give "Approved" reviews. diff --git a/.github/workflows/CI.yaml b/.github/workflows/CI.yaml index 1ce2928..dd34bf1 100644 --- a/.github/workflows/CI.yaml +++ b/.github/workflows/CI.yaml @@ -66,7 +66,7 @@ jobs: shell: bash -l {0} run: | - pytest -v --cov=mdakit_membcurv --cov-report=xml --color=yes mdakit_membcurv/tests/ + pytest -v --cov=membrane_curvature --cov-report=xml --color=yes membrane_curvature/tests/ - name: CodeCov uses: codecov/codecov-action@v1 diff --git a/.lgtm.yml b/.lgtm.yml index ecc0574..a53bf0a 100644 --- a/.lgtm.yml +++ b/.lgtm.yml @@ -9,4 +9,4 @@ path_classifiers: - versioneer.py # Set Versioneer.py to an external "library" (3rd party code) - devtools/* generated: - - mdakit_membcurv/_version.py + - membrane_curvature/_version.py diff --git a/MANIFEST.in b/MANIFEST.in index 7d84433..9549ee6 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -3,5 +3,5 @@ include MANIFEST.in include CODE_OF_CONDUCT.md include versioneer.py -graft mdakit_membcurv +graft membrane_curvature global-exclude *.py[cod] __pycache__ *.so \ No newline at end of file diff --git a/README.md b/README.md index cd6219c..af33b3b 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -MDA Membrane Curvature +Membrane Curvature ============================== [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://www.numfocus.org/) [![Powered by 
MDAnalysis](https://img.shields.io/badge/powered%20by-MDAnalysis-orange.svg?logoWidth=16&logo=data:image/x-icon;base64,AAABAAEAEBAAAAEAIAAoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJD+XwCY/fEAkf3uAJf97wGT/a+HfHaoiIWE7n9/f+6Hh4fvgICAjwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACT/yYAlP//AJ///wCg//8JjvOchXly1oaGhv+Ghob/j4+P/39/f3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJH8aQCY/8wAkv2kfY+elJ6al/yVlZX7iIiI8H9/f7h/f38UAAAAAAAAAAAAAAAAAAAAAAAAAAB/f38egYF/noqAebF8gYaagnx3oFpUUtZpaWr/WFhY8zo6OmT///8BAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgICAn46Ojv+Hh4b/jouJ/4iGhfcAAADnAAAA/wAAAP8AAADIAAAAAwCj/zIAnf2VAJD/PAAAAAAAAAAAAAAAAICAgNGHh4f/gICA/4SEhP+Xl5f/AwMD/wAAAP8AAAD/AAAA/wAAAB8Aov9/ALr//wCS/Z0AAAAAAAAAAAAAAACBgYGOjo6O/4mJif+Pj4//iYmJ/wAAAOAAAAD+AAAA/wAAAP8AAABhAP7+FgCi/38Axf4fAAAAAAAAAAAAAAAAiIiID4GBgYKCgoKogoB+fYSEgZhgYGDZXl5e/m9vb/9ISEjpEBAQxw8AAFQAAAAAAAAANQAAADcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAjo6Mb5iYmP+cnJz/jY2N95CQkO4pKSn/AAAA7gAAAP0AAAD7AAAAhgAAAAEAAAAAAAAAAACL/gsAkv2uAJX/QQAAAAB9fX3egoKC/4CAgP+NjY3/c3Nz+wAAAP8AAAD/AAAA/wAAAPUAAAAcAAAAAAAAAAAAnP4NAJL9rgCR/0YAAAAAfX19w4ODg/98fHz/i4uL/4qKivwAAAD/AAAA/wAAAP8AAAD1AAAAGwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALGxsVyqqqr/mpqa/6mpqf9KSUn/AAAA5QAAAPkAAAD5AAAAhQAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADkUFBSuZ2dn/3V1df8uLi7bAAAATgBGfyQAAAA2AAAAMwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB0AAADoAAAA/wAAAP8AAAD/AAAAWgC3/2AAnv3eAJ/+dgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA9AAAA/wAAAP8AAAD/AAAA/wAKDzEAnP3WAKn//wCS/OgAf/8MAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIQAAANwAAADtAAAA7QAAAMAAABUMAJn9gwCe/e0Aj/2LAP//AQAAAAAAAAAA)](https://www.mdanalysis.org) diff --git a/devtools/README.md b/devtools/README.md index 148705f..9e7a994 100644 --- a/devtools/README.md +++ b/devtools/README.md @@ -59,5 +59,5 @@ is installed by looking at the `git` tags and how many commits ahead this versio \d+.\d+.\d+(?\+\d+-[a-z0-9]+) ``` If the version of this commit is the same as a `git` tag, the installed version is the same as the tag, -e.g. `mdakit_membcurv-0.1.2`, otherwise it will be appended with `+X` where `X` is the number of commits +e.g. `membrane_curvature-0.1.2`, otherwise it will be appended with `+X` where `X` is the number of commits ahead from the last tag, and then `-YYYYYY` where the `Y`'s are replaced with the `git` commit hash. diff --git a/docs/Makefile b/docs/Makefile index 6488e81..8d953c8 100644 --- a/docs/Makefile +++ b/docs/Makefile @@ -4,7 +4,7 @@ # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build -SPHINXPROJ = mdakit_membcurv +SPHINXPROJ = membrane_curvature SOURCEDIR = . BUILDDIR = _build diff --git a/docs/README.md b/docs/README.md index b5a5a98..5cd5884 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,4 +1,4 @@ -# Compiling MDAkit MembCurv's Documentation +# Compiling Membrane Curvature's Documentation The docs for this project are built with [Sphinx](http://www.sphinx-doc.org/en/master/). To compile the docs, first ensure that Sphinx and the ReadTheDocs theme are installed. diff --git a/docs/api.rst b/docs/api.rst index 3f37a56..a10ef4e 100644 --- a/docs/api.rst +++ b/docs/api.rst @@ -7,4 +7,4 @@ API Documentation .. toctree:: - mdakit_membcurv.canvas + membrane_curvature.canvas diff --git a/docs/conf.py b/docs/conf.py index dda7a69..3edb791 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -13,16 +13,15 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. 
# Incase the project was not installed +import membrane_curvature import os import sys sys.path.insert(0, os.path.abspath('..')) -import mdakit_membcurv - # -- Project information ----------------------------------------------------- -project = 'MDAkit_membcurv' +project = 'membrane_curvature' copyright = ("2021, Estefania Barreto-Ojeda. Project structure based on the " "Computational Molecular Science Python Cookiecutter version 1.5") author = 'Estefania Barreto-Ojeda' @@ -117,7 +116,7 @@ # -- Options for HTMLHelp output --------------------------------------------- # Output file base name for HTML help builder. -htmlhelp_basename = 'mdakit_membcurvdoc' +htmlhelp_basename = 'membrane_curvaturedoc' # -- Options for LaTeX output ------------------------------------------------ @@ -144,8 +143,8 @@ # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ - (master_doc, 'mdakit_membcurv.tex', 'MDAkit MembCurv Documentation', - 'mdakit_membcurv', 'manual'), + (master_doc, 'membrane_curvature.tex', 'Membrane Curvature Documentation', + 'membrane_curvature', 'manual'), ] @@ -154,7 +153,7 @@ # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ - (master_doc, 'mdakit_membcurv', 'MDAkit MembCurv Documentation', + (master_doc, 'membrane_curvature', 'Membrane Curvature Documentation', [author], 1) ] @@ -165,8 +164,8 @@ # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ - (master_doc, 'mdakit_membcurv', 'MDAkit MembCurv Documentation', - author, 'mdakit_membcurv', 'MDAkit for Membrane Curvature', + (master_doc, 'membrane_curvature', 'Membrane Curvature Documentation', + author, 'membrane_curvature', 'MDAkit for Membrane Curvature', 'Miscellaneous'), ] diff --git a/docs/index.rst b/docs/index.rst index 70f79f7..33d70a8 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -1,9 +1,9 @@ -.. mdakit_membcurv documentation master file, created by +.. membrane_curvature documentation master file, created by sphinx-quickstart on Thu Mar 15 13:55:56 2018. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. -Welcome to MDAkit MembCurv's documentation! +Welcome to Membrane Curvature's documentation! 
========================================================= The MDAnalysis membrane curvature analysis module calculates the Gaussian and mean diff --git a/mdakit_membcurv/__init__.py b/membrane_curvature/__init__.py similarity index 94% rename from mdakit_membcurv/__init__.py rename to membrane_curvature/__init__.py index 861ebb1..38923a2 100644 --- a/mdakit_membcurv/__init__.py +++ b/membrane_curvature/__init__.py @@ -1,5 +1,5 @@ """ -MDAkit MembCurv + MDAkit for Membrane Curvature """ diff --git a/mdakit_membcurv/_version.py b/membrane_curvature/_version.py similarity index 99% rename from mdakit_membcurv/_version.py rename to membrane_curvature/_version.py index c5ca315..35b5c3c 100644 --- a/mdakit_membcurv/_version.py +++ b/membrane_curvature/_version.py @@ -43,7 +43,7 @@ def get_config(): cfg.style = "pep440" cfg.tag_prefix = "" cfg.parentdir_prefix = "None" - cfg.versionfile_source = "mdakit_membcurv/_version.py" + cfg.versionfile_source = "membrane_curvature/_version.py" cfg.verbose = False return cfg diff --git a/mdakit_membcurv/core.py b/membrane_curvature/core.py similarity index 100% rename from mdakit_membcurv/core.py rename to membrane_curvature/core.py diff --git a/mdakit_membcurv/data/README.md b/membrane_curvature/data/README.md similarity index 100% rename from mdakit_membcurv/data/README.md rename to membrane_curvature/data/README.md diff --git a/mdakit_membcurv/data/look_and_say.dat b/membrane_curvature/data/look_and_say.dat similarity index 100% rename from mdakit_membcurv/data/look_and_say.dat rename to membrane_curvature/data/look_and_say.dat diff --git a/mdakit_membcurv/lib/mods.py b/membrane_curvature/lib/mods.py similarity index 100% rename from mdakit_membcurv/lib/mods.py rename to membrane_curvature/lib/mods.py diff --git a/setup.cfg b/setup.cfg index 05957c1..d8c7c2e 100644 --- a/setup.cfg +++ b/setup.cfg @@ -6,7 +6,7 @@ omit = # Omit the tests */tests/* # Omit generated versioneer - mdakit_membcurv/_version.py + membrane_curvature/_version.py [yapf] # YAPF, in .style.yapf files this shows up as "[style]" header @@ -22,8 +22,8 @@ max-line-length = 119 # Automatic version numbering scheme VCS = git style = pep440 -versionfile_source = mdakit_membcurv/_version.py -versionfile_build = mdakit_membcurv/_version.py +versionfile_source = membrane_curvature/_version.py +versionfile_build = membrane_curvature/_version.py tag_prefix = '' [aliases] diff --git a/setup.py b/setup.py index f314c78..cfa1629 100644 --- a/setup.py +++ b/setup.py @@ -1,5 +1,5 @@ """ -MDAkit MembCurv + MDAkit for Membrane Curvature """ import sys @@ -21,8 +21,8 @@ setup( # Self-descriptive entries which should always be present - name='mdakit_membcurv', - author='MDAnalysis', + name='membrane_curvature', + author='Estefania Barreto-Ojeda', author_email='[email protected]', description=short_description[0], long_description=long_description,
Names in the repo are not consistent. Currently, two names are used in the repo: `MDAkit_MembCurv` and `membrane_curvature`. For consistency, change all names to `membrane_curvature`.
2021-06-18T04:09:04
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-117
84804adec54bd86aa0134814003e8b8434e386da
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 5f07478..7522c77 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -55,7 +55,7 @@ jobs: - name: Install poetry run: pip install "poetry>=1.4.2,<1.5" - name: Install dependencies - run: poetry install + run: poetry install --with=qt - name: Test run: poetry run pytest -v --cov=observ --cov-report=term-missing tests env: diff --git a/observ/__init__.py b/observ/__init__.py index d41bef7..725a703 100644 --- a/observ/__init__.py +++ b/observ/__init__.py @@ -1,4 +1,4 @@ -__version__ = "0.13.1" +__version__ = "0.14.0" from .init import init, loop_factory diff --git a/poetry.lock b/poetry.lock index 8ea18a9..78a4670 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,25 +1,43 @@ # This file is automatically @generated by Poetry 1.4.2 and should not be changed by hand. +[[package]] +name = "appdirs" +version = "1.4.4" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." +category = "dev" +optional = false +python-versions = "*" +files = [ + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, +] + [[package]] name = "black" -version = "23.10.1" +version = "23.11.0" description = "The uncompromising code formatter." category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "black-23.10.1-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:ec3f8e6234c4e46ff9e16d9ae96f4ef69fa328bb4ad08198c8cee45bb1f08c69"}, - {file = "black-23.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c74de4c77b849e6359c6f01987e94873c707098322b91490d24296f66d067dc"}, - {file = "black-23.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:7b4d10b0f016616a0d93d24a448100adf1699712fb7a4efd0e2c32bbb219b173"}, - {file = "black-23.10.1-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:b15b75fc53a2fbcac8a87d3e20f69874d161beef13954747e053bca7a1ce53a0"}, - {file = "black-23.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d56124b7a61d092cb52cce34182a5280e160e6aff3137172a68c2c2c4b76bcb"}, - {file = "black-23.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:3f157a8945a7b2d424da3335f7ace89c14a3b0625e6593d21139c2d8214d55ce"}, - {file = "black-23.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:840015166dbdfbc47992871325799fd2dc0dcf9395e401ada6d88fe11498abad"}, - {file = "black-23.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:037e9b4664cafda5f025a1728c50a9e9aedb99a759c89f760bd83730e76ba884"}, - {file = "black-23.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ed45ac9a613fb52dad3b61c8dea2ec9510bf3108d4db88422bacc7d1ba1243d"}, - {file = "black-23.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:6d23d7822140e3fef190734216cefb262521789367fbdc0b3f22af6744058982"}, - {file = "black-23.10.1-py3-none-any.whl", hash = "sha256:d431e6739f727bb2e0495df64a6c7a5310758e87505f5f8cde9ff6c0f2d7e4fe"}, - {file = "black-23.10.1.tar.gz", hash = "sha256:1f8ce316753428ff68749c65a5f7844631aa18c8679dfd3ca9dc1a289979c258"}, + {file = "black-23.11.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:dbea0bb8575c6b6303cc65017b46351dc5953eea5c0a59d7b7e3a2d2f433a911"}, + {file = "black-23.11.0-cp310-cp310-macosx_11_0_arm64.whl", hash = 
"sha256:412f56bab20ac85927f3a959230331de5614aecda1ede14b373083f62ec24e6f"}, + {file = "black-23.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d136ef5b418c81660ad847efe0e55c58c8208b77a57a28a503a5f345ccf01394"}, + {file = "black-23.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:6c1cac07e64433f646a9a838cdc00c9768b3c362805afc3fce341af0e6a9ae9f"}, + {file = "black-23.11.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cf57719e581cfd48c4efe28543fea3d139c6b6f1238b3f0102a9c73992cbb479"}, + {file = "black-23.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:698c1e0d5c43354ec5d6f4d914d0d553a9ada56c85415700b81dc90125aac244"}, + {file = "black-23.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:760415ccc20f9e8747084169110ef75d545f3b0932ee21368f63ac0fee86b221"}, + {file = "black-23.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:58e5f4d08a205b11800332920e285bd25e1a75c54953e05502052738fe16b3b5"}, + {file = "black-23.11.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:45aa1d4675964946e53ab81aeec7a37613c1cb71647b5394779e6efb79d6d187"}, + {file = "black-23.11.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4c44b7211a3a0570cc097e81135faa5f261264f4dfaa22bd5ee2875a4e773bd6"}, + {file = "black-23.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a9acad1451632021ee0d146c8765782a0c3846e0e0ea46659d7c4f89d9b212b"}, + {file = "black-23.11.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc7f6a44d52747e65a02558e1d807c82df1d66ffa80a601862040a43ec2e3142"}, + {file = "black-23.11.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7f622b6822f02bfaf2a5cd31fdb7cd86fcf33dab6ced5185c35f5db98260b055"}, + {file = "black-23.11.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:250d7e60f323fcfc8ea6c800d5eba12f7967400eb6c2d21ae85ad31c204fb1f4"}, + {file = "black-23.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5133f5507007ba08d8b7b263c7aa0f931af5ba88a29beacc4b2dc23fcefe9c06"}, + {file = "black-23.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:421f3e44aa67138ab1b9bfbc22ee3780b22fa5b291e4db8ab7eee95200726b07"}, + {file = "black-23.11.0-py3-none-any.whl", hash = "sha256:54caaa703227c6e0c87b76326d0862184729a69b73d3b7305b6288e1d830067e"}, + {file = "black-23.11.0.tar.gz", hash = "sha256:4c68855825ff432d197229846f971bc4d6666ce90492e5b02013bcaca4d9ab05"}, ] [package.dependencies] @@ -39,14 +57,14 @@ uvloop = ["uvloop (>=0.15.2)"] [[package]] name = "certifi" -version = "2023.7.22" +version = "2023.11.17" description = "Python package for providing Mozilla's CA Bundle." category = "dev" optional = false python-versions = ">=3.6" files = [ - {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, - {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, + {file = "certifi-2023.11.17-py3-none-any.whl", hash = "sha256:e036ab49d5b79556f99cfc2d9320b34cfbe5be05c5871b51de9329f0603b0474"}, + {file = "certifi-2023.11.17.tar.gz", hash = "sha256:9b469f3a900bf28dc19b8cfbf8019bf47f7fdd1a65a1d4ffb98fc14166beb4d1"}, ] [[package]] @@ -128,102 +146,102 @@ files = [ [[package]] name = "charset-normalizer" -version = "3.3.1" +version = "3.3.2" description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
category = "dev" optional = false python-versions = ">=3.7.0" files = [ - {file = "charset-normalizer-3.3.1.tar.gz", hash = "sha256:d9137a876020661972ca6eec0766d81aef8a5627df628b664b234b73396e727e"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8aee051c89e13565c6bd366813c386939f8e928af93c29fda4af86d25b73d8f8"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:352a88c3df0d1fa886562384b86f9a9e27563d4704ee0e9d56ec6fcd270ea690"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:223b4d54561c01048f657fa6ce41461d5ad8ff128b9678cfe8b2ecd951e3f8a2"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f861d94c2a450b974b86093c6c027888627b8082f1299dfd5a4bae8e2292821"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1171ef1fc5ab4693c5d151ae0fdad7f7349920eabbaca6271f95969fa0756c2d"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28f512b9a33235545fbbdac6a330a510b63be278a50071a336afc1b78781b147"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0e842112fe3f1a4ffcf64b06dc4c61a88441c2f02f373367f7b4c1aa9be2ad5"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3f9bc2ce123637a60ebe819f9fccc614da1bcc05798bbbaf2dd4ec91f3e08846"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:f194cce575e59ffe442c10a360182a986535fd90b57f7debfaa5c845c409ecc3"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9a74041ba0bfa9bc9b9bb2cd3238a6ab3b7618e759b41bd15b5f6ad958d17605"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b578cbe580e3b41ad17b1c428f382c814b32a6ce90f2d8e39e2e635d49e498d1"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:6db3cfb9b4fcecb4390db154e75b49578c87a3b9979b40cdf90d7e4b945656e1"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:debb633f3f7856f95ad957d9b9c781f8e2c6303ef21724ec94bea2ce2fcbd056"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-win32.whl", hash = "sha256:87071618d3d8ec8b186d53cb6e66955ef2a0e4fa63ccd3709c0c90ac5a43520f"}, - {file = "charset_normalizer-3.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:e372d7dfd154009142631de2d316adad3cc1c36c32a38b16a4751ba78da2a397"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ae4070f741f8d809075ef697877fd350ecf0b7c5837ed68738607ee0a2c572cf"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:58e875eb7016fd014c0eea46c6fa92b87b62c0cb31b9feae25cbbe62c919f54d"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dbd95e300367aa0827496fe75a1766d198d34385a58f97683fe6e07f89ca3e3c"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:de0b4caa1c8a21394e8ce971997614a17648f94e1cd0640fbd6b4d14cab13a72"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:985c7965f62f6f32bf432e2681173db41336a9c2611693247069288bcb0c7f8b"}, - {file = 
"charset_normalizer-3.3.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a15c1fe6d26e83fd2e5972425a772cca158eae58b05d4a25a4e474c221053e2d"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae55d592b02c4349525b6ed8f74c692509e5adffa842e582c0f861751701a673"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be4d9c2770044a59715eb57c1144dedea7c5d5ae80c68fb9959515037cde2008"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:851cf693fb3aaef71031237cd68699dded198657ec1e76a76eb8be58c03a5d1f"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:31bbaba7218904d2eabecf4feec0d07469284e952a27400f23b6628439439fa7"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:871d045d6ccc181fd863a3cd66ee8e395523ebfbc57f85f91f035f50cee8e3d4"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:501adc5eb6cd5f40a6f77fbd90e5ab915c8fd6e8c614af2db5561e16c600d6f3"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f5fb672c396d826ca16a022ac04c9dce74e00a1c344f6ad1a0fdc1ba1f332213"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-win32.whl", hash = "sha256:bb06098d019766ca16fc915ecaa455c1f1cd594204e7f840cd6258237b5079a8"}, - {file = "charset_normalizer-3.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:8af5a8917b8af42295e86b64903156b4f110a30dca5f3b5aedea123fbd638bff"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7ae8e5142dcc7a49168f4055255dbcced01dc1714a90a21f87448dc8d90617d1"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5b70bab78accbc672f50e878a5b73ca692f45f5b5e25c8066d748c09405e6a55"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5ceca5876032362ae73b83347be8b5dbd2d1faf3358deb38c9c88776779b2e2f"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34d95638ff3613849f473afc33f65c401a89f3b9528d0d213c7037c398a51296"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9edbe6a5bf8b56a4a84533ba2b2f489d0046e755c29616ef8830f9e7d9cf5728"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f6a02a3c7950cafaadcd46a226ad9e12fc9744652cc69f9e5534f98b47f3bbcf"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10b8dd31e10f32410751b3430996f9807fc4d1587ca69772e2aa940a82ab571a"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edc0202099ea1d82844316604e17d2b175044f9bcb6b398aab781eba957224bd"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b891a2f68e09c5ef989007fac11476ed33c5c9994449a4e2c3386529d703dc8b"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:71ef3b9be10070360f289aea4838c784f8b851be3ba58cf796262b57775c2f14"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:55602981b2dbf8184c098bc10287e8c245e351cd4fdcad050bd7199d5a8bf514"}, - {file = 
"charset_normalizer-3.3.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:46fb9970aa5eeca547d7aa0de5d4b124a288b42eaefac677bde805013c95725c"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:520b7a142d2524f999447b3a0cf95115df81c4f33003c51a6ab637cbda9d0bf4"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-win32.whl", hash = "sha256:8ec8ef42c6cd5856a7613dcd1eaf21e5573b2185263d87d27c8edcae33b62a61"}, - {file = "charset_normalizer-3.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:baec8148d6b8bd5cee1ae138ba658c71f5b03e0d69d5907703e3e1df96db5e41"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:63a6f59e2d01310f754c270e4a257426fe5a591dc487f1983b3bbe793cf6bac6"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1d6bfc32a68bc0933819cfdfe45f9abc3cae3877e1d90aac7259d57e6e0f85b1"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4f3100d86dcd03c03f7e9c3fdb23d92e32abbca07e7c13ebd7ddfbcb06f5991f"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:39b70a6f88eebe239fa775190796d55a33cfb6d36b9ffdd37843f7c4c1b5dc67"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e12f8ee80aa35e746230a2af83e81bd6b52daa92a8afaef4fea4a2ce9b9f4fa"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b6cefa579e1237ce198619b76eaa148b71894fb0d6bcf9024460f9bf30fd228"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:61f1e3fb621f5420523abb71f5771a204b33c21d31e7d9d86881b2cffe92c47c"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:4f6e2a839f83a6a76854d12dbebde50e4b1afa63e27761549d006fa53e9aa80e"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:1ec937546cad86d0dce5396748bf392bb7b62a9eeb8c66efac60e947697f0e58"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:82ca51ff0fc5b641a2d4e1cc8c5ff108699b7a56d7f3ad6f6da9dbb6f0145b48"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:633968254f8d421e70f91c6ebe71ed0ab140220469cf87a9857e21c16687c034"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-win32.whl", hash = "sha256:c0c72d34e7de5604df0fde3644cc079feee5e55464967d10b24b1de268deceb9"}, - {file = "charset_normalizer-3.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:63accd11149c0f9a99e3bc095bbdb5a464862d77a7e309ad5938fbc8721235ae"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5a3580a4fdc4ac05f9e53c57f965e3594b2f99796231380adb2baaab96e22761"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2465aa50c9299d615d757c1c888bc6fef384b7c4aec81c05a0172b4400f98557"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cb7cd68814308aade9d0c93c5bd2ade9f9441666f8ba5aa9c2d4b389cb5e2a45"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:91e43805ccafa0a91831f9cd5443aa34528c0c3f2cc48c4cb3d9a7721053874b"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:854cc74367180beb327ab9d00f964f6d91da06450b0855cbbb09187bcdb02de5"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c15070ebf11b8b7fd1bfff7217e9324963c82dbdf6182ff7050519e350e7ad9f"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2c4c99f98fc3a1835af8179dcc9013f93594d0670e2fa80c83aa36346ee763d2"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3fb765362688821404ad6cf86772fc54993ec11577cd5a92ac44b4c2ba52155b"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dced27917823df984fe0c80a5c4ad75cf58df0fbfae890bc08004cd3888922a2"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a66bcdf19c1a523e41b8e9d53d0cedbfbac2e93c649a2e9502cb26c014d0980c"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:ecd26be9f112c4f96718290c10f4caea6cc798459a3a76636b817a0ed7874e42"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:3f70fd716855cd3b855316b226a1ac8bdb3caf4f7ea96edcccc6f484217c9597"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:17a866d61259c7de1bdadef418a37755050ddb4b922df8b356503234fff7932c"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-win32.whl", hash = "sha256:548eefad783ed787b38cb6f9a574bd8664468cc76d1538215d510a3cd41406cb"}, - {file = "charset_normalizer-3.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:45f053a0ece92c734d874861ffe6e3cc92150e32136dd59ab1fb070575189c97"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:bc791ec3fd0c4309a753f95bb6c749ef0d8ea3aea91f07ee1cf06b7b02118f2f"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0c8c61fb505c7dad1d251c284e712d4e0372cef3b067f7ddf82a7fa82e1e9a93"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2c092be3885a1b7899cd85ce24acedc1034199d6fca1483fa2c3a35c86e43041"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c2000c54c395d9e5e44c99dc7c20a64dc371f777faf8bae4919ad3e99ce5253e"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4cb50a0335382aac15c31b61d8531bc9bb657cfd848b1d7158009472189f3d62"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c30187840d36d0ba2893bc3271a36a517a717f9fd383a98e2697ee890a37c273"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fe81b35c33772e56f4b6cf62cf4aedc1762ef7162a31e6ac7fe5e40d0149eb67"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d0bf89afcbcf4d1bb2652f6580e5e55a840fdf87384f6063c4a4f0c95e378656"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:06cf46bdff72f58645434d467bf5228080801298fbba19fe268a01b4534467f5"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3c66df3f41abee950d6638adc7eac4730a306b022570f71dd0bd6ba53503ab57"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = 
"sha256:cd805513198304026bd379d1d516afbf6c3c13f4382134a2c526b8b854da1c2e"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:9505dc359edb6a330efcd2be825fdb73ee3e628d9010597aa1aee5aa63442e97"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:31445f38053476a0c4e6d12b047b08ced81e2c7c712e5a1ad97bc913256f91b2"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-win32.whl", hash = "sha256:bd28b31730f0e982ace8663d108e01199098432a30a4c410d06fe08fdb9e93f4"}, - {file = "charset_normalizer-3.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:555fe186da0068d3354cdf4bbcbc609b0ecae4d04c921cc13e209eece7720727"}, - {file = "charset_normalizer-3.3.1-py3-none-any.whl", hash = "sha256:800561453acdecedaac137bf09cd719c7a440b6800ec182f077bb8e7025fb708"}, + {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = 
"sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"}, + {file = 
"charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"}, + {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"}, ] [[package]] @@ -323,35 +341,35 @@ toml = ["tomli"] [[package]] name = "cryptography" -version = "41.0.4" +version = "41.0.5" description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:80907d3faa55dc5434a16579952ac6da800935cd98d14dbd62f6f042c7f5e839"}, - {file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:35c00f637cd0b9d5b6c6bd11b6c3359194a8eba9c46d4e875a3660e3b400005f"}, - {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cecfefa17042941f94ab54f769c8ce0fe14beff2694e9ac684176a2535bf9714"}, - {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e40211b4923ba5a6dc9769eab704bdb3fbb58d56c5b336d30996c24fcf12aadb"}, - {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:23a25c09dfd0d9f28da2352503b23e086f8e78096b9fd585d1d14eca01613e13"}, - {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2ed09183922d66c4ec5fdaa59b4d14e105c084dd0febd27452de8f6f74704143"}, - {file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5a0f09cefded00e648a127048119f77bc2b2ec61e736660b5789e638f43cc397"}, - {file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:9eeb77214afae972a00dee47382d2591abe77bdae166bda672fb1e24702a3860"}, - {file = "cryptography-41.0.4-cp37-abi3-win32.whl", hash = "sha256:3b224890962a2d7b57cf5eeb16ccaafba6083f7b811829f00476309bce2fe0fd"}, - {file = "cryptography-41.0.4-cp37-abi3-win_amd64.whl", hash = "sha256:c880eba5175f4307129784eca96f4e70b88e57aa3f680aeba3bab0e980b0f37d"}, - {file = "cryptography-41.0.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:004b6ccc95943f6a9ad3142cfabcc769d7ee38a3f60fb0dddbfb431f818c3a67"}, - {file = 
"cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:86defa8d248c3fa029da68ce61fe735432b047e32179883bdb1e79ed9bb8195e"}, - {file = "cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:37480760ae08065437e6573d14be973112c9e6dcaf5f11d00147ee74f37a3829"}, - {file = "cryptography-41.0.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:b5f4dfe950ff0479f1f00eda09c18798d4f49b98f4e2006d644b3301682ebdca"}, - {file = "cryptography-41.0.4-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7e53db173370dea832190870e975a1e09c86a879b613948f09eb49324218c14d"}, - {file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5b72205a360f3b6176485a333256b9bcd48700fc755fef51c8e7e67c4b63e3ac"}, - {file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:93530900d14c37a46ce3d6c9e6fd35dbe5f5601bf6b3a5c325c7bffc030344d9"}, - {file = "cryptography-41.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:efc8ad4e6fc4f1752ebfb58aefece8b4e3c4cae940b0994d43649bdfce8d0d4f"}, - {file = "cryptography-41.0.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c3391bd8e6de35f6f1140e50aaeb3e2b3d6a9012536ca23ab0d9c35ec18c8a91"}, - {file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:0d9409894f495d465fe6fda92cb70e8323e9648af912d5b9141d616df40a87b8"}, - {file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ac4f9ead4bbd0bc8ab2d318f97d85147167a488be0e08814a37eb2f439d5cf6"}, - {file = "cryptography-41.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:047c4603aeb4bbd8db2756e38f5b8bd7e94318c047cfe4efeb5d715e08b49311"}, - {file = "cryptography-41.0.4.tar.gz", hash = "sha256:7febc3094125fc126a7f6fb1f420d0da639f3f32cb15c8ff0dc3997c4549f51a"}, + {file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:da6a0ff8f1016ccc7477e6339e1d50ce5f59b88905585f77193ebd5068f1e797"}, + {file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:b948e09fe5fb18517d99994184854ebd50b57248736fd4c720ad540560174ec5"}, + {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d38e6031e113b7421db1de0c1b1f7739564a88f1684c6b89234fbf6c11b75147"}, + {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e270c04f4d9b5671ebcc792b3ba5d4488bf7c42c3c241a3748e2599776f29696"}, + {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ec3b055ff8f1dce8e6ef28f626e0972981475173d7973d63f271b29c8a2897da"}, + {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:7d208c21e47940369accfc9e85f0de7693d9a5d843c2509b3846b2db170dfd20"}, + {file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:8254962e6ba1f4d2090c44daf50a547cd5f0bf446dc658a8e5f8156cae0d8548"}, + {file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:a48e74dad1fb349f3dc1d449ed88e0017d792997a7ad2ec9587ed17405667e6d"}, + {file = "cryptography-41.0.5-cp37-abi3-win32.whl", hash = "sha256:d3977f0e276f6f5bf245c403156673db103283266601405376f075c849a0b936"}, + {file = "cryptography-41.0.5-cp37-abi3-win_amd64.whl", hash = "sha256:73801ac9736741f220e20435f84ecec75ed70eda90f781a148f1bad546963d81"}, + {file = "cryptography-41.0.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3be3ca726e1572517d2bef99a818378bbcf7d7799d5372a46c79c29eb8d166c1"}, + {file = 
"cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e886098619d3815e0ad5790c973afeee2c0e6e04b4da90b88e6bd06e2a0b1b72"}, + {file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:573eb7128cbca75f9157dcde974781209463ce56b5804983e11a1c462f0f4e88"}, + {file = "cryptography-41.0.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0c327cac00f082013c7c9fb6c46b7cc9fa3c288ca702c74773968173bda421bf"}, + {file = "cryptography-41.0.5-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:227ec057cd32a41c6651701abc0328135e472ed450f47c2766f23267b792a88e"}, + {file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:22892cc830d8b2c89ea60148227631bb96a7da0c1b722f2aac8824b1b7c0b6b8"}, + {file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:5a70187954ba7292c7876734183e810b728b4f3965fbe571421cb2434d279179"}, + {file = "cryptography-41.0.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:88417bff20162f635f24f849ab182b092697922088b477a7abd6664ddd82291d"}, + {file = "cryptography-41.0.5-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c707f7afd813478e2019ae32a7c49cd932dd60ab2d2a93e796f68236b7e1fbf1"}, + {file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:580afc7b7216deeb87a098ef0674d6ee34ab55993140838b14c9b83312b37b86"}, + {file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fba1e91467c65fe64a82c689dc6cf58151158993b13eb7a7f3f4b7f395636723"}, + {file = "cryptography-41.0.5-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:0d2a6a598847c46e3e321a7aef8af1436f11c27f1254933746304ff014664d84"}, + {file = "cryptography-41.0.5.tar.gz", hash = "sha256:392cb88b597247177172e02da6b7a63deeff1937fa6fec3bbf902ebd75d97ec7"}, ] [package.dependencies] @@ -408,37 +426,37 @@ test = ["pytest (>=6)"] [[package]] name = "filelock" -version = "3.12.4" +version = "3.13.1" description = "A platform independent file lock." 
category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "filelock-3.12.4-py3-none-any.whl", hash = "sha256:08c21d87ded6e2b9da6728c3dff51baf1dcecf973b768ef35bcbc3447edb9ad4"}, - {file = "filelock-3.12.4.tar.gz", hash = "sha256:2e6f249f1f3654291606e046b09f1fd5eac39b360664c27f5aad072012f8bcbd"}, + {file = "filelock-3.13.1-py3-none-any.whl", hash = "sha256:57dbda9b35157b05fb3e58ee91448612eb674172fab98ee235ccb0b5bee19a1c"}, + {file = "filelock-3.13.1.tar.gz", hash = "sha256:521f5f56c50f8426f5e03ad3b281b490a87ef15bc6c526f168290f0c7148d44e"}, ] [package.extras] -docs = ["furo (>=2023.7.26)", "sphinx (>=7.1.2)", "sphinx-autodoc-typehints (>=1.24)"] -testing = ["covdefaults (>=2.3)", "coverage (>=7.3)", "diff-cover (>=7.7)", "pytest (>=7.4)", "pytest-cov (>=4.1)", "pytest-mock (>=3.11.1)", "pytest-timeout (>=2.1)"] -typing = ["typing-extensions (>=4.7.1)"] +docs = ["furo (>=2023.9.10)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.24)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.3.2)", "diff-cover (>=8)", "pytest (>=7.4.3)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)", "pytest-timeout (>=2.2)"] +typing = ["typing-extensions (>=4.8)"] [[package]] name = "flake8" -version = "5.0.4" +version = "6.1.0" description = "the modular source code checker: pep8 pyflakes and co" category = "dev" optional = false -python-versions = ">=3.6.1" +python-versions = ">=3.8.1" files = [ - {file = "flake8-5.0.4-py2.py3-none-any.whl", hash = "sha256:7a1cf6b73744f5806ab95e526f6f0d8c01c66d7bbe349562d22dfca20610b248"}, - {file = "flake8-5.0.4.tar.gz", hash = "sha256:6fbe320aad8d6b95cec8b8e47bc933004678dc63095be98528b7bdd2a9f510db"}, + {file = "flake8-6.1.0-py2.py3-none-any.whl", hash = "sha256:ffdfce58ea94c6580c77888a86506937f9a1a227dfcd15f245d694ae20a6b6e5"}, + {file = "flake8-6.1.0.tar.gz", hash = "sha256:d5b3857f07c030bdb5bf41c7f53799571d75c4491748a3adcd47de929e34cd23"}, ] [package.dependencies] mccabe = ">=0.7.0,<0.8.0" -pycodestyle = ">=2.9.0,<2.10.0" -pyflakes = ">=2.5.0,<2.6.0" +pycodestyle = ">=2.11.0,<2.12.0" +pyflakes = ">=3.1.0,<3.2.0" [[package]] name = "flake8-black" @@ -494,14 +512,14 @@ pycodestyle = "*" [[package]] name = "identify" -version = "2.5.30" +version = "2.5.32" description = "File identification library for Python" category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "identify-2.5.30-py2.py3-none-any.whl", hash = "sha256:afe67f26ae29bab007ec21b03d4114f41316ab9dd15aa8736a167481e108da54"}, - {file = "identify-2.5.30.tar.gz", hash = "sha256:f302a4256a15c849b91cfcdcec052a8ce914634b2f77ae87dad29cd749f2d88d"}, + {file = "identify-2.5.32-py2.py3-none-any.whl", hash = "sha256:0b7656ef6cba81664b783352c73f8c24b39cf82f926f78f4550eda928e5e0545"}, + {file = "identify-2.5.32.tar.gz", hash = "sha256:5d9979348ec1a21c768ae07e0a652924538e8bce67313a73cb0f681cf08ba407"}, ] [package.extras] @@ -588,14 +606,14 @@ trio = ["async_generator", "trio"] [[package]] name = "keyring" -version = "24.2.0" +version = "24.3.0" description = "Store and access your passwords safely." 
category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "keyring-24.2.0-py3-none-any.whl", hash = "sha256:4901caaf597bfd3bbd78c9a0c7c4c29fcd8310dab2cffefe749e916b6527acd6"}, - {file = "keyring-24.2.0.tar.gz", hash = "sha256:ca0746a19ec421219f4d713f848fa297a661a8a8c1504867e55bfb5e09091509"}, + {file = "keyring-24.3.0-py3-none-any.whl", hash = "sha256:4446d35d636e6a10b8bce7caa66913dd9eca5fd222ca03a3d42c38608ac30836"}, + {file = "keyring-24.3.0.tar.gz", hash = "sha256:e730ecffd309658a08ee82535a3b5ec4b4c8669a9be11efb66249d8e0aeb9a25"}, ] [package.dependencies] @@ -606,9 +624,9 @@ pywin32-ctypes = {version = ">=0.2.0", markers = "sys_platform == \"win32\""} SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} [package.extras] -completion = ["shtab"] -docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] -testing = ["pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-mypy (>=0.9.1)", "pytest-ruff"] +completion = ["shtab (>=1.1.0)"] +docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] +testing = ["pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-ruff"] [[package]] name = "markdown-it-py" @@ -777,14 +795,14 @@ testing = ["pytest", "pytest-cov"] [[package]] name = "platformdirs" -version = "3.11.0" +version = "4.0.0" description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "platformdirs-3.11.0-py3-none-any.whl", hash = "sha256:e9d171d00af68be50e9202731309c4e658fd8bc76f55c11c7dd760d023bda68e"}, - {file = "platformdirs-3.11.0.tar.gz", hash = "sha256:cf8ee52a3afdb965072dcc652433e0c7e3e40cf5ea1477cd4b3b1d2eb75495b3"}, + {file = "platformdirs-4.0.0-py3-none-any.whl", hash = "sha256:118c954d7e949b35437270383a3f2531e99dd93cf7ce4dc8340d3356d30f173b"}, + {file = "platformdirs-4.0.0.tar.gz", hash = "sha256:cb633b2bcf10c51af60beb0ab06d2f1d69064b43abf4c185ca6b28865f3f9731"}, ] [package.extras] @@ -809,14 +827,14 @@ testing = ["pytest", "pytest-benchmark"] [[package]] name = "pre-commit" -version = "3.5.0" +version = "2.20.0" description = "A framework for managing and maintaining multi-language pre-commit hooks." 
category = "dev" optional = false -python-versions = ">=3.8" +python-versions = ">=3.7" files = [ - {file = "pre_commit-3.5.0-py2.py3-none-any.whl", hash = "sha256:841dc9aef25daba9a0238cd27984041fa0467b4199fc4852e27950664919f660"}, - {file = "pre_commit-3.5.0.tar.gz", hash = "sha256:5804465c675b659b0862f07907f96295d490822a450c4c40e747d0b1c6ebcb32"}, + {file = "pre_commit-2.20.0-py2.py3-none-any.whl", hash = "sha256:51a5ba7c480ae8072ecdb6933df22d2f812dc897d5fe848778116129a681aac7"}, + {file = "pre_commit-2.20.0.tar.gz", hash = "sha256:a978dac7bc9ec0bcee55c18a277d553b0f419d259dadb4b9418ff2d00eb43959"}, ] [package.dependencies] @@ -824,7 +842,8 @@ cfgv = ">=2.0.0" identify = ">=1.0.0" nodeenv = ">=0.11.1" pyyaml = ">=5.1" -virtualenv = ">=20.10.0" +toml = "*" +virtualenv = ">=20.0.8" [[package]] name = "py-cpuinfo" @@ -840,14 +859,14 @@ files = [ [[package]] name = "pycodestyle" -version = "2.9.1" +version = "2.11.1" description = "Python style guide checker" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "pycodestyle-2.9.1-py2.py3-none-any.whl", hash = "sha256:d1735fc58b418fd7c5f658d28d943854f8a849b01a5d0a1e6f3f3fdd0166804b"}, - {file = "pycodestyle-2.9.1.tar.gz", hash = "sha256:2c9607871d58c76354b697b42f5d57e1ada7d261c261efac224b664affdc5785"}, + {file = "pycodestyle-2.11.1-py2.py3-none-any.whl", hash = "sha256:44fe31000b2d866f2e41841b18528a505fbd7fef9017b04eff4e2648a0fadc67"}, + {file = "pycodestyle-2.11.1.tar.gz", hash = "sha256:41ba0e7afc9752dfb53ced5489e89f8186be00e599e712660695b7a75ff2663f"}, ] [[package]] @@ -864,30 +883,31 @@ files = [ [[package]] name = "pyflakes" -version = "2.5.0" +version = "3.1.0" description = "passive checker of Python programs" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"}, - {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"}, + {file = "pyflakes-3.1.0-py2.py3-none-any.whl", hash = "sha256:4132f6d49cb4dae6819e5379898f2b8cce3c5f23994194c24b77d5da2e36f774"}, + {file = "pyflakes-3.1.0.tar.gz", hash = "sha256:a0aae034c444db0071aa077972ba4768d40c830d9539fd45bf4cd3f8f6992efc"}, ] [[package]] name = "pygments" -version = "2.16.1" +version = "2.17.1" description = "Pygments is a syntax highlighting package written in Python." 
category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"}, - {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"}, + {file = "pygments-2.17.1-py3-none-any.whl", hash = "sha256:1b37f1b1e1bff2af52ecaf28cc601e2ef7077000b227a0675da25aef85784bc4"}, + {file = "pygments-2.17.1.tar.gz", hash = "sha256:e45a0e74bf9c530f564ca81b8952343be986a29f6afe7f5ad95c5f06b7bdf5e8"}, ] [package.extras] plugins = ["importlib-metadata"] +windows-terminal = ["colorama (>=0.4.6)"] [[package]] name = "pyside6" @@ -945,14 +965,14 @@ shiboken6 = "6.6.0" [[package]] name = "pytest" -version = "7.4.2" +version = "7.4.3" description = "pytest: simple powerful testing with Python" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "pytest-7.4.2-py3-none-any.whl", hash = "sha256:1d881c6124e08ff0a1bb75ba3ec0bfd8b5354a01c194ddd5a0a870a48d99b002"}, - {file = "pytest-7.4.2.tar.gz", hash = "sha256:a766259cfab564a2ad52cb1aae1b881a75c3eb7e34ca3779697c23ed47c47069"}, + {file = "pytest-7.4.3-py3-none-any.whl", hash = "sha256:0d009c083ea859a71b76adf7c1d502e4bc170b80a8ef002da5806527b9591fac"}, + {file = "pytest-7.4.3.tar.gz", hash = "sha256:d989d136982de4e3b29dabcc838ad581c64e8ed52c11fbe86ddebd9da0818cd5"}, ] [package.dependencies] @@ -1093,6 +1113,7 @@ files = [ {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, + {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"}, {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, @@ -1100,8 +1121,15 @@ files = [ {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, + {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"}, {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = 
"sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"}, + {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"}, + {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"}, + {file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"}, + {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"}, {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, @@ -1118,6 +1146,7 @@ files = [ {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, + {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"}, {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, @@ -1125,6 +1154,7 @@ files = [ {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, + {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"}, {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, @@ -1204,14 +1234,14 @@ idna2008 = ["idna"] [[package]] name = "rich" -version = "13.6.0" +version = "13.7.0" description = "Render rich text, tables, progress bars, syntax 
highlighting, markdown and more to the terminal" category = "dev" optional = false python-versions = ">=3.7.0" files = [ - {file = "rich-13.6.0-py3-none-any.whl", hash = "sha256:2b38e2fe9ca72c9a00170a1a2d20c63c790d0e10ef1fe35eba76e1e7b1d7d245"}, - {file = "rich-13.6.0.tar.gz", hash = "sha256:5c14d22737e6d5084ef4771b62d5d4363165b403455a30a1c8ca39dc7b644bef"}, + {file = "rich-13.7.0-py3-none-any.whl", hash = "sha256:6da14c108c4866ee9520bbffa71f6fe3962e193b7da68720583850cd4548e235"}, + {file = "rich-13.7.0.tar.gz", hash = "sha256:5cb5123b5cf9ee70584244246816e9114227e0b98ad9176eede6ad54bf5403fa"}, ] [package.dependencies] @@ -1268,6 +1298,30 @@ files = [ {file = "shiboken6-6.6.0-cp38-abi3-win_amd64.whl", hash = "sha256:e62b2610b84f0ff7ed0181a4c535849cdc0654127097b5ef561cf0a33078f245"}, ] +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] + +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +category = "dev" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] + [[package]] name = "tomli" version = "2.0.1" @@ -1317,42 +1371,42 @@ files = [ [[package]] name = "urllib3" -version = "2.0.7" +version = "2.1.0" description = "HTTP library with thread-safe connection pooling, file post, and more." 
category = "dev" optional = false -python-versions = ">=3.7" +python-versions = ">=3.8" files = [ - {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"}, - {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"}, + {file = "urllib3-2.1.0-py3-none-any.whl", hash = "sha256:55901e917a5896a349ff771be919f8bd99aff50b79fe58fec595eb37bbc56bb3"}, + {file = "urllib3-2.1.0.tar.gz", hash = "sha256:df7aa8afb0148fa78488e7899b2c59b5f4ffcfa82e6c54ccb9dd37c1d7b52d54"}, ] [package.extras] brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] -secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] zstd = ["zstandard (>=0.18.0)"] [[package]] name = "virtualenv" -version = "20.24.6" +version = "20.4.7" description = "Virtual Python Environment builder" category = "dev" optional = false -python-versions = ">=3.7" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" files = [ - {file = "virtualenv-20.24.6-py3-none-any.whl", hash = "sha256:520d056652454c5098a00c0f073611ccbea4c79089331f60bf9d7ba247bb7381"}, - {file = "virtualenv-20.24.6.tar.gz", hash = "sha256:02ece4f56fbf939dbbc33c0715159951d6bf14aaf5457b092e4548e1382455af"}, + {file = "virtualenv-20.4.7-py2.py3-none-any.whl", hash = "sha256:2b0126166ea7c9c3661f5b8e06773d28f83322de7a3ff7d06f0aed18c9de6a76"}, + {file = "virtualenv-20.4.7.tar.gz", hash = "sha256:14fdf849f80dbb29a4eb6caa9875d476ee2a5cf76a5f5415fa2f1606010ab467"}, ] [package.dependencies] -distlib = ">=0.3.7,<1" -filelock = ">=3.12.2,<4" -platformdirs = ">=3.9.1,<4" +appdirs = ">=1.4.3,<2" +distlib = ">=0.3.1,<1" +filelock = ">=3.0.0,<4" +six = ">=1.9.0,<2" [package.extras] -docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"] -test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"] +docs = ["proselint (>=0.10.2)", "sphinx (>=3)", "sphinx-argparse (>=0.2.5)", "sphinx-rtd-theme (>=0.4.3)", "towncrier (>=19.9.0rc1)"] +testing = ["coverage (>=4)", "coverage-enable-subprocess (>=1)", "flaky (>=3)", "packaging (>=20.0)", "pytest (>=4)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.1)", "pytest-mock (>=2)", "pytest-randomly (>=1)", "pytest-timeout (>=1)", "xonsh (>=0.9.16)"] [[package]] name = "zipp" @@ -1373,4 +1427,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" python-versions = ">=3.9" -content-hash = "10baaa5bf5930a46fdd8d5edfbeebcb0a530c412c9dfc5261e2912a9a2faa29f" +content-hash = "ecc08e8a241945ce50fc30edd88e1791299660280a171274667149a96925e2ef" diff --git a/pyproject.toml b/pyproject.toml index 1bc6539..54dcb8f 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "observ" -version = "0.13.1" +version = "0.14.0" description = "Reactive state management for Python" authors = ["Korijn van Golen <[email protected]>", "Berend Klein Haneveld <[email protected]>"] license = "MIT" @@ -18,16 +18,21 @@ flake8-black = "*" flake8-import-order = "*" flake8-print = "*" pre-commit = "*" -PySide6 = { version = 
">=6.6", python = "<3.13"} pytest = "*" pytest-benchmark = "*" pytest-cov = "*" -pytest-qt = "*" pytest-timeout = "*" -pytest-xvfb = "*" twine = "*" urllib3 = { version = "*", python = "<4"} +[tool.poetry.group.qt] +optional = true + +[tool.poetry.group.qt.dependencies] +PySide6 = { version = ">=6.6", python = "<3.13"} +pytest-qt = "*" +pytest-xvfb = "*" + [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api"
Make PySide dev dependency optional

See discussion in #115: we'll move it to its own optional group. 🌲
2023-11-20T15:48:05
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-113
a87de9649d24fbe2bf0daf0934c958c8dacb9cdd
diff --git a/observ/watcher.py b/observ/watcher.py index 94e927e..390e664 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -10,7 +10,7 @@ from functools import partial, wraps import inspect from itertools import count -from typing import Any, Callable, Generic, TypeVar, Union +from typing import Any, Callable, Generic, Optional, TypeVar, Union from weakref import ref, WeakSet from .dep import Dep @@ -137,6 +137,8 @@ class Watcher(Generic[T]): "_number_of_callback_args", "__weakref__", ) + on_created: Optional[Callable[[Watcher[T]], None]] = None + on_destroyed: Optional[Callable[[Watcher[T]], None]] = None def __init__( self, @@ -148,7 +150,7 @@ def __init__( ) -> None: """ sync: Ignore the scheduler - lazy: Only reevalutate when value is requested + lazy: Only reevaluate when value is requested deep: Deep watch the watched value callback: Method to call when value has changed """ @@ -185,6 +187,13 @@ def __init__( self.value = None if self.lazy else self.get() self._number_of_callback_args = None + if Watcher.on_created: + Watcher.on_created(self) + + def __del__(self): + if Watcher.on_destroyed: + Watcher.on_destroyed(self) + def update(self) -> None: if self.lazy: self.dirty = True
Provide global Watcher lifecycle hooks

At least `created` and `destroy`.

```py
off = Watcher.on("created", hook)
off = Watcher.on("destroy", hook)
```
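A minimal usage sketch, assuming the class-attribute hooks that the patch above actually adds (`Watcher.on_created` / `Watcher.on_destroyed`) rather than the `Watcher.on(...)` registration API proposed in the issue; `live_ids` and the hook bodies are illustrative only:

```py
from observ import reactive, watch
from observ.watcher import Watcher

live_ids = set()  # illustrative registry of live watcher ids

# The patch invokes these as plain class attributes (Watcher.on_created(self)),
# so assigning module-level callables here is enough.
Watcher.on_created = lambda w: live_ids.add(w.id)
Watcher.on_destroyed = lambda w: live_ids.discard(w.id)

state = reactive({"count": 0})
w = watch(lambda: state["count"], callback=lambda new: print(new))
assert len(live_ids) == 1  # the watcher registered itself on creation
```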
2023-11-14T13:21:38
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-111
30ae3864f7962b5b660fbac9137c0023b0b6f334
diff --git a/observ/dep.py b/observ/dep.py index de8d4ba..68c3f30 100644 --- a/observ/dep.py +++ b/observ/dep.py @@ -7,7 +7,7 @@ class Dep: - __slots__ = ["_subs", "__weakref__"] + __slots__ = ("_subs", "__weakref__") stack: List["Watcher"] = [] # noqa: F821 def __init__(self) -> None: diff --git a/observ/dict_proxy.py b/observ/dict_proxy.py index 2e37659..a0bf60b 100644 --- a/observ/dict_proxy.py +++ b/observ/dict_proxy.py @@ -51,7 +51,7 @@ } -class DictProxyBase(Proxy): +class DictProxyBase(Proxy[dict]): def _orphaned_keydeps(self): return set(proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) diff --git a/observ/list_proxy.py b/observ/list_proxy.py index e377ede..7e7c8ca 100644 --- a/observ/list_proxy.py +++ b/observ/list_proxy.py @@ -45,7 +45,7 @@ } -class ListProxyBase(Proxy): +class ListProxyBase(Proxy[list]): pass diff --git a/observ/proxy.py b/observ/proxy.py index 899ddc2..f2a0865 100644 --- a/observ/proxy.py +++ b/observ/proxy.py @@ -1,9 +1,14 @@ +from __future__ import annotations + from functools import partial +from typing import cast, Generic, Literal, TypedDict, TypeVar from .proxy_db import proxy_db +T = TypeVar("T") + -class Proxy: +class Proxy(Generic[T]): """ Proxy for an object/target. @@ -16,9 +21,9 @@ class Proxy: """ __hash__ = None - __slots__ = ["target", "readonly", "shallow", "__weakref__"] + __slots__ = ("target", "readonly", "shallow", "__weakref__") - def __init__(self, target, readonly=False, shallow=False): + def __init__(self, target: T, readonly=False, shallow=False): self.target = target self.readonly = readonly self.shallow = shallow @@ -33,7 +38,7 @@ def __del__(self): TYPE_LOOKUP = {} -def proxy(target, readonly=False, shallow=False): +def proxy(target: T, readonly=False, shallow=False) -> T: """ Returns a Proxy for the given object. If a proxy for the given configuration already exists, it will return that instead of @@ -67,14 +72,26 @@ def proxy(target, readonly=False, shallow=False): return proxy_type(target, readonly=readonly, shallow=shallow) if isinstance(target, tuple): - return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + return cast( + T, tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + ) # We can't proxy a plain value - return target + return cast(T, target) + + +try: + # for Python >= 3.11 + class Ref(TypedDict, Generic[T]): + value: T + def ref(target: T) -> Ref[T]: + return proxy(Ref(value=target)) -def ref(target): - return proxy({"value": target}) +except TypeError: + # before python 3.11 a TypedDict cannot inherit from a non-TypedDict class + def ref(target: T) -> dict[Literal["value"], T]: + return proxy({"value": target}) reactive = proxy @@ -83,7 +100,7 @@ def ref(target): shallow_readonly = partial(proxy, shallow=True, readonly=True) -def to_raw(target): +def to_raw(target: Proxy[T] | T) -> T: """ Returns a raw object from which any trace of proxy has been replaced with its wrapped target value. 
@@ -92,15 +109,15 @@ def to_raw(target): return to_raw(target.target) if isinstance(target, list): - return [to_raw(t) for t in target] + return cast(T, [to_raw(t) for t in target]) if isinstance(target, dict): - return {key: to_raw(value) for key, value in target.items()} + return cast(T, {key: to_raw(value) for key, value in target.items()}) if isinstance(target, tuple): - return tuple(to_raw(t) for t in target) + return cast(T, tuple(to_raw(t) for t in target)) if isinstance(target, set): - return {to_raw(t) for t in target} + return cast(T, {to_raw(t) for t in target}) return target diff --git a/observ/proxy_db.py b/observ/proxy_db.py index 75caa8a..9475371 100644 --- a/observ/proxy_db.py +++ b/observ/proxy_db.py @@ -15,7 +15,7 @@ class ProxyDb: removed from the collection. """ - __slots__ = ["db"] + __slots__ = ("db",) def __init__(self): self.db = {} diff --git a/observ/scheduler.py b/observ/scheduler.py index ab0dac8..26b69a2 100644 --- a/observ/scheduler.py +++ b/observ/scheduler.py @@ -9,7 +9,7 @@ class Scheduler: - __slots__ = [ + __slots__ = ( "_queue", "_queue_indices", "flushing", @@ -19,7 +19,7 @@ class Scheduler: "waiting", "request_flush", "detect_cycles", - ] + ) def __init__(self): self._queue = [] diff --git a/observ/set_proxy.py b/observ/set_proxy.py index d99d633..e049d8b 100644 --- a/observ/set_proxy.py +++ b/observ/set_proxy.py @@ -54,7 +54,7 @@ } -class SetProxyBase(Proxy): +class SetProxyBase(Proxy[set]): pass diff --git a/observ/store.py b/observ/store.py index a983b67..a493423 100644 --- a/observ/store.py +++ b/observ/store.py @@ -1,5 +1,5 @@ from functools import partial, wraps -from typing import Callable, Collection, TypeVar +from typing import Callable, Generic, TypeVar import patchdiff @@ -51,12 +51,15 @@ def decorator_computed(fn: T) -> T: return decorator_computed(_fn) -class Store: +S = TypeVar("S") + + +class Store(Generic[S]): """ Store that tracks mutations to state in order to enable undo/redo functionality """ - def __init__(self, state: Collection, strict=True): + def __init__(self, state: S, strict=True): """ Creates a store with the given state as the initial state. 
When `strict` is False, calling mutations that do not result diff --git a/observ/watcher.py b/observ/watcher.py index 096e713..0f1bf32 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -9,27 +9,28 @@ from functools import partial, wraps import inspect from itertools import count -from typing import Any, Callable, Optional, TypeVar +from typing import Any, Callable, Generic, TypeVar, Union from weakref import ref, WeakSet from .dep import Dep from .dict_proxy import DictProxyBase from .list_proxy import ListProxyBase -from .proxy import Proxy from .scheduler import scheduler from .set_proxy import SetProxyBase -T = TypeVar("T", bound=Callable[[], Any]) +T = TypeVar("T") +Watchable = Union[Callable[[], T], T] +WatchCallback = Union[Callable[[], Any], Callable[[T], Any], Callable[[T, T], Any]] def watch( - fn: Callable[[], Any] | Proxy | list[Proxy], - callback: Optional[Callable] = None, + fn: Watchable[T], + callback: WatchCallback[T] | None = None, sync: bool = False, deep: bool | None = None, immediate: bool = False, -): +) -> Watcher[T]: watcher = Watcher(fn, sync=sync, lazy=False, deep=deep, callback=callback) if immediate: watcher.dirty = True @@ -42,8 +43,8 @@ def watch( watch_effect = partial(watch, immediate=False, deep=True, callback=None) -def computed(_fn=None, *, deep=True): - def decorator_computed(fn: T) -> T: +def computed(_fn: Callable[[], T] | None = None, *, deep=True) -> Callable[[], T]: + def decorator_computed(fn: Callable[[], T]) -> Callable[[], T]: """ Create a watcher for an expression. Note: make sure fn doesn't need any arguments to run @@ -113,8 +114,8 @@ class WrongNumberOfArgumentsError(TypeError): pass -class Watcher: - __slots__ = [ +class Watcher(Generic[T]): + __slots__ = ( "id", "fn", "_deps", @@ -128,15 +129,15 @@ class Watcher: "value", "_number_of_callback_args", "__weakref__", - ] + ) def __init__( self, - fn: Callable[[], Any] | Proxy | list[Proxy], + fn: Watchable[T], sync: bool = False, lazy: bool = True, deep: bool | None = None, - callback: Callable = None, + callback: WatchCallback[T] | None = None, ) -> None: """ sync: Ignore the scheduler
missing type hints for the public API

Hi,

The public API does not have type hints ATM, which leads to a bad experience when using the library. For example, the following expression:

```py
state = reactive({'value': 0})
```

is inferred as `Proxy | Any | tuple[Any, ...]`, so when trying to do this:

```py
state['value'] = 42
```

we get the following error: `"__getitem__" method not defined on type "Proxy"`.

TypeScript [gets away with this](https://github.com/microsoft/TypeScript/blob/628bf0ec85be7c58019dc1742e50bc3f97168a5a/src/lib/es2015.proxy.d.ts#L108) by doing:

```ts
interface ProxyConstructor {
    new <T extends object>(target: T, ...): T;
}
```

so maybe we should have:

```py
def proxy[T](target: T, **etc) -> T:
    ...
```
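The merged patch above takes essentially this route using `TypeVar` and `cast` (Python 3.9-compatible, so no PEP 695 `proxy[T]` syntax). A stripped-down sketch of the idea, with the runtime proxy wrapping elided:

```py
from typing import TypeVar, cast

T = TypeVar("T")

def proxy(target: T, readonly: bool = False, shallow: bool = False) -> T:
    # The real implementation wraps `target` in a Proxy here; declaring
    # the return type as T keeps dict/list methods visible to checkers.
    return cast(T, target)

state = proxy({"value": 0})
state["value"] = 42  # now inferred as dict[str, int]; no __getitem__ error
```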
2023-11-14T07:53:43
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-105
a07efd33e07cdb4fa565f016a61adeec26dfd738
diff --git a/observ/__init__.py b/observ/__init__.py index 2e21018..437cc91 100644 --- a/observ/__init__.py +++ b/observ/__init__.py @@ -1,9 +1,10 @@ -__version__ = "0.12.0" +__version__ = "0.13.0" from .proxy import ( reactive, readonly, + ref, shallow_reactive, shallow_readonly, to_raw, diff --git a/observ/dict_proxy.py b/observ/dict_proxy.py index 3433860..2e37659 100644 --- a/observ/dict_proxy.py +++ b/observ/dict_proxy.py @@ -1,4 +1,5 @@ from .proxy import Proxy, TYPE_LOOKUP +from .proxy_db import proxy_db from .traps import construct_methods_traps_dict, trap_map, trap_map_readonly @@ -52,7 +53,7 @@ class DictProxyBase(Proxy): def _orphaned_keydeps(self): - return set(self.proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) + return set(proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) def readonly_dict_proxy_init(self, target, shallow=False, **kwargs): diff --git a/observ/proxy.py b/observ/proxy.py index ccb4bcb..899ddc2 100644 --- a/observ/proxy.py +++ b/observ/proxy.py @@ -1,6 +1,5 @@ from functools import partial -from .dep import Dep from .proxy_db import proxy_db @@ -17,18 +16,16 @@ class Proxy: """ __hash__ = None - __slots__ = ["target", "readonly", "shallow", "proxy_db", "Dep", "__weakref__"] + __slots__ = ["target", "readonly", "shallow", "__weakref__"] def __init__(self, target, readonly=False, shallow=False): self.target = target self.readonly = readonly self.shallow = shallow - self.proxy_db = proxy_db - self.proxy_db.reference(self) - self.Dep = Dep + proxy_db.reference(self) def __del__(self): - self.proxy_db.dereference(self) + proxy_db.dereference(self) # Lookup dict for mapping a type (dict, list, set) to a method @@ -63,22 +60,21 @@ def proxy(target, readonly=False, shallow=False): if existing_proxy is not None: return existing_proxy - # We can only wrap the following datatypes - if not isinstance(target, (dict, list, tuple, set)): - return target - - # Otherwise, create a new proxy - proxy_type = None - + # Create a new proxy for target_type, (writable_proxy_type, readonly_proxy_type) in TYPE_LOOKUP.items(): if isinstance(target, target_type): proxy_type = readonly_proxy_type if readonly else writable_proxy_type - break - else: - if isinstance(target, tuple): - return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + return proxy_type(target, readonly=readonly, shallow=shallow) + + if isinstance(target, tuple): + return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + + # We can't proxy a plain value + return target + - return proxy_type(target, readonly=readonly, shallow=shallow) +def ref(target): + return proxy({"value": target}) reactive = proxy diff --git a/observ/traps.py b/observ/traps.py index 507835d..315a6c9 100644 --- a/observ/traps.py +++ b/observ/traps.py @@ -3,6 +3,7 @@ from .dep import Dep from .proxy import proxy +from .proxy_db import proxy_db class ReadonlyError(Exception): @@ -18,8 +19,8 @@ def read_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if self.Dep.stack: - self.proxy_db.attrs(self)["dep"].depend() + if Dep.stack: + proxy_db.attrs(self)["dep"].depend() value = fn(self.target, *args, **kwargs) if self.shallow: return value @@ -33,8 +34,8 @@ def iterate_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if self.Dep.stack: - self.proxy_db.attrs(self)["dep"].depend() + if Dep.stack: + proxy_db.attrs(self)["dep"].depend() iterator = fn(self.target, *args, **kwargs) if self.shallow: return iterator @@ -54,9 +55,9 @@ def 
read_key_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if self.Dep.stack: + if Dep.stack: key = args[0] - keydeps = self.proxy_db.attrs(self)["keydep"] + keydeps = proxy_db.attrs(self)["keydep"] if key not in keydeps: keydeps[key] = Dep() keydeps[key].depend() @@ -75,7 +76,7 @@ def write_trap(method, obj_cls): def trap(self, *args, **kwargs): old = self.target.copy() retval = fn(self.target, *args, **kwargs) - attrs = self.proxy_db.attrs(self) + attrs = proxy_db.attrs(self) if obj_cls == dict: change_detected = False keydeps = attrs["keydep"] @@ -104,7 +105,7 @@ def write_key_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): key = args[0] - attrs = self.proxy_db.attrs(self) + attrs = proxy_db.attrs(self) is_new = key not in attrs["keydep"] old_value = getitem_fn(self.target, key) if not is_new else None retval = fn(self.target, *args, **kwargs) @@ -129,7 +130,7 @@ def delete_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): retval = fn(self.target, *args, **kwargs) - attrs = self.proxy_db.attrs(self) + attrs = proxy_db.attrs(self) attrs["dep"].notify() for key in self._orphaned_keydeps(): attrs["keydep"][key].notify() @@ -146,7 +147,7 @@ def delete_key_trap(method, obj_cls): def trap(self, *args, **kwargs): retval = fn(self.target, *args, **kwargs) key = args[0] - attrs = self.proxy_db.attrs(self) + attrs = proxy_db.attrs(self) attrs["dep"].notify() attrs["keydep"][key].notify() del attrs["keydep"][key] diff --git a/observ/watcher.py b/observ/watcher.py index 1401d8a..096e713 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -55,7 +55,7 @@ def decorator_computed(fn: T) -> T: def getter(): if watcher.dirty: watcher.evaluate() - if watcher.Dep.stack: + if Dep.stack: watcher.depend() return watcher.value @@ -126,7 +126,6 @@ class Watcher: "lazy", "dirty", "value", - "Dep", "_number_of_callback_args", "__weakref__", ] @@ -145,7 +144,6 @@ def __init__( deep: Deep watch the watched value callback: Method to call when value has changed """ - self.Dep = Dep self.id = next(_ids) if callable(fn): if is_bound_method(fn): @@ -177,7 +175,7 @@ def update(self) -> None: self.dirty = True return - if self.Dep.stack and self.Dep.stack[-1] is self and self.no_recurse: + if Dep.stack and Dep.stack[-1] is self and self.no_recurse: return if self.sync: self.run() @@ -254,13 +252,13 @@ def _run_callback(self, *args) -> None: del frames def get(self) -> Any: - self.Dep.stack.append(self) + Dep.stack.append(self) try: value = self.fn() if self.deep: traverse(value) finally: - self.Dep.stack.pop() + Dep.stack.pop() self.cleanup_deps() return value @@ -280,7 +278,7 @@ def cleanup_deps(self) -> None: def depend(self) -> None: """This function is used by other watchers to depend on everything this watcher depends on.""" - if self.Dep.stack: + if Dep.stack: for dep in self._deps: dep.depend() diff --git a/pyproject.toml b/pyproject.toml index 9a4e73e..d39b1c0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "observ" -version = "0.12.0" +version = "0.13.0" description = "Reactive state management for Python" authors = ["Korijn van Golen <[email protected]>", "Berend Klein Haneveld <[email protected]>"] license = "MIT"
Reactive plain value container

Like `ref` in Vue. I guess it's just the following under the hood:

```py
def ref(value):
    return reactive({"value": value})
```

We could support dot notation if we make it into a class, something like:

```py
class ref:
    def __init__(self, value):
        self._value = reactive({"value": value})

    @property
    def value(self):
        return self._value["value"]

    # and a setter
```

Which is related to #103
Don't forget the `__slots__` attribute ;)

I think I finally get the name "ref", it's a proxy for a single reference to a value.

https://github.com/vuejs/vue/blob/81598ea2f3bdbc81ae50f2fa9b9e547efc6b6261/src/v3/reactivity/ref.ts#L65

Indeed it's just a proxied dict with a value key
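For reference, a short usage sketch of the dict-style `ref` that the patch above ships (`observ.ref`); the dot-notation class variant discussed in the issue did not land in this diff:

```py
from observ import ref, watch

count = ref(0)  # equivalent to reactive({"value": 0})
w = watch(lambda: count["value"], callback=lambda new: print("count:", new))
count["value"] += 1  # change is delivered via the scheduler unless sync=True
```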
2023-11-06T12:13:52
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-101
9a234e418f4183ff64ec266ee6774f8490540b0f
diff --git a/observ/dep.py b/observ/dep.py index e024b80..f52e28f 100644 --- a/observ/dep.py +++ b/observ/dep.py @@ -7,6 +7,7 @@ class Dep: + __slots__ = ["_subs", "__weakref__"] stack: List["Watcher"] = [] # noqa: F821 def __init__(self) -> None: diff --git a/observ/proxy.py b/observ/proxy.py index 0353f7c..ccb4bcb 100644 --- a/observ/proxy.py +++ b/observ/proxy.py @@ -1,5 +1,6 @@ from functools import partial +from .dep import Dep from .proxy_db import proxy_db @@ -16,7 +17,7 @@ class Proxy: """ __hash__ = None - __slots__ = ["target", "readonly", "shallow", "proxy_db", "__weakref__"] + __slots__ = ["target", "readonly", "shallow", "proxy_db", "Dep", "__weakref__"] def __init__(self, target, readonly=False, shallow=False): self.target = target @@ -24,6 +25,7 @@ def __init__(self, target, readonly=False, shallow=False): self.shallow = shallow self.proxy_db = proxy_db self.proxy_db.reference(self) + self.Dep = Dep def __del__(self): self.proxy_db.dereference(self) diff --git a/observ/proxy_db.py b/observ/proxy_db.py index d67d71e..17bf41c 100644 --- a/observ/proxy_db.py +++ b/observ/proxy_db.py @@ -15,6 +15,8 @@ class ProxyDb: removed from the collection. """ + __slots__ = ["db"] + def __init__(self): self.db = {} gc.callbacks.append(self.cleanup) diff --git a/observ/scheduler.py b/observ/scheduler.py index 15df230..ab0dac8 100644 --- a/observ/scheduler.py +++ b/observ/scheduler.py @@ -9,6 +9,18 @@ class Scheduler: + __slots__ = [ + "_queue", + "_queue_indices", + "flushing", + "has", + "circular", + "index", + "waiting", + "request_flush", + "detect_cycles", + ] + def __init__(self): self._queue = [] self._queue_indices = [] diff --git a/observ/traps.py b/observ/traps.py index aa6640e..507835d 100644 --- a/observ/traps.py +++ b/observ/traps.py @@ -18,7 +18,7 @@ def read_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: + if self.Dep.stack: self.proxy_db.attrs(self)["dep"].depend() value = fn(self.target, *args, **kwargs) if self.shallow: @@ -33,7 +33,7 @@ def iterate_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: + if self.Dep.stack: self.proxy_db.attrs(self)["dep"].depend() iterator = fn(self.target, *args, **kwargs) if self.shallow: @@ -54,7 +54,7 @@ def read_key_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: + if self.Dep.stack: key = args[0] keydeps = self.proxy_db.attrs(self)["keydep"] if key not in keydeps: diff --git a/observ/watcher.py b/observ/watcher.py index d1a7364..1401d8a 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -55,7 +55,7 @@ def decorator_computed(fn: T) -> T: def getter(): if watcher.dirty: watcher.evaluate() - if Dep.stack: + if watcher.Dep.stack: watcher.depend() return watcher.value @@ -114,6 +114,23 @@ class WrongNumberOfArgumentsError(TypeError): class Watcher: + __slots__ = [ + "id", + "fn", + "_deps", + "_new_deps", + "sync", + "callback", + "no_recurse", + "deep", + "lazy", + "dirty", + "value", + "Dep", + "_number_of_callback_args", + "__weakref__", + ] + def __init__( self, fn: Callable[[], Any] | Proxy | list[Proxy], @@ -128,6 +145,7 @@ def __init__( deep: Deep watch the watched value callback: Method to call when value has changed """ + self.Dep = Dep self.id = next(_ids) if callable(fn): if is_bound_method(fn): @@ -159,7 +177,7 @@ def update(self) -> None: self.dirty = True return - if Dep.stack and Dep.stack[-1] is self and self.no_recurse: + if self.Dep.stack and self.Dep.stack[-1] is self and self.no_recurse: return if self.sync: 
self.run() @@ -236,13 +254,13 @@ def _run_callback(self, *args) -> None: del frames def get(self) -> Any: - Dep.stack.append(self) + self.Dep.stack.append(self) try: value = self.fn() if self.deep: traverse(value) finally: - Dep.stack.pop() + self.Dep.stack.pop() self.cleanup_deps() return value @@ -262,7 +280,7 @@ def cleanup_deps(self) -> None: def depend(self) -> None: """This function is used by other watchers to depend on everything this watcher depends on.""" - if Dep.stack: + if self.Dep.stack: for dep in self._deps: dep.depend()
Use __slots__ on Watcher

This one seems pretty difficult
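Context for why this is worth doing, as a general-Python sketch (not observ code): `__slots__` removes the per-instance `__dict__` and fixes the attribute set, which is also why the merged patch has to list `__weakref__` explicitly for classes that are weakly referenced:

```py
import sys

class Plain:
    def __init__(self):
        self.value = 1

class Slotted:
    # __weakref__ must be listed explicitly, as in the patch, or instances
    # of a slotted class cannot be weakly referenced.
    __slots__ = ("value", "__weakref__")

    def __init__(self):
        self.value = 1

p, s = Plain(), Slotted()
print(sys.getsizeof(p.__dict__))  # the per-instance dict that slots avoid
try:
    s.other = 2
except AttributeError:
    print("slotted instances reject attributes outside __slots__")
```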
2023-11-03T13:25:44
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-100
c4724a964d1cafbb00a8e965599380fb41ee6135
diff --git a/observ/traps.py b/observ/traps.py index affb7e0..aa6640e 100644 --- a/observ/traps.py +++ b/observ/traps.py @@ -5,14 +5,6 @@ from .proxy import proxy -class StateModifiedError(Exception): - """ - Raised when a proxy is modified in a watched (or computed) expression. - """ - - pass - - class ReadonlyError(Exception): """ Raised when a readonly proxy is modified. @@ -81,8 +73,6 @@ def write_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() old = self.target.copy() retval = fn(self.target, *args, **kwargs) attrs = self.proxy_db.attrs(self) @@ -113,8 +103,6 @@ def write_key_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() key = args[0] attrs = self.proxy_db.attrs(self) is_new = key not in attrs["keydep"] @@ -140,8 +128,6 @@ def delete_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() retval = fn(self.target, *args, **kwargs) attrs = self.proxy_db.attrs(self) attrs["dep"].notify() @@ -158,8 +144,6 @@ def delete_key_trap(method, obj_cls): @wraps(fn) def trap(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() retval = fn(self.target, *args, **kwargs) key = args[0] attrs = self.proxy_db.attrs(self) diff --git a/observ/watcher.py b/observ/watcher.py index 9f03936..d1a7364 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -39,7 +39,7 @@ def watch( return watcher -watch_effect = partial(watch, immediate=True, deep=True, callback=None) +watch_effect = partial(watch, immediate=False, deep=True, callback=None) def computed(_fn=None, *, deep=True): @@ -147,6 +147,7 @@ def __init__( self.callback = weak(callback.__self__, callback.__func__) else: self.callback = callback + self.no_recurse = callback is None self.deep = bool(deep) self.lazy = lazy self.dirty = self.lazy @@ -156,7 +157,11 @@ def __init__( def update(self) -> None: if self.lazy: self.dirty = True - elif self.sync: + return + + if Dep.stack and Dep.stack[-1] is self and self.no_recurse: + return + if self.sync: self.run() else: scheduler.queue(self)
Allow state modification while evaluating watch expressions

This is particularly important for watchEffect
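A sketch of what removing the guard permits, based on the two changes in the patch above (no more `StateModifiedError` on writes during evaluation, and `watch_effect` defined with `immediate=False`, `deep=True`, `callback=None`):

```py
from observ import reactive, watch_effect

state = reactive({"count": 0, "double": 0})

def effect():
    # Reads state["count"] (tracked as a dependency) and writes
    # state["double"]; the removed StateModifiedError used to forbid this.
    state["double"] = state["count"] * 2

w = watch_effect(effect)  # evaluates once on creation (lazy=False)
state["count"] = 3        # re-run is queued on the scheduler
```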
2023-11-03T10:17:36
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-95
d95450f715fae2daf8a8714a8ae711bc6f7d87b6
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 79ca808..d9fd962 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -34,8 +34,6 @@ jobs: fail-fast: false matrix: include: - - name: Linux py38 - pyversion: '3.8' - name: Linux py39 pyversion: '3.9' - name: Linux py310 diff --git a/README.md b/README.md index ae91740..0d77da5 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # Observ 👁 -Observ is a Python port of [Vue.js](https://vuejs.org/)' [computed properties and watchers](https://v3.vuejs.org/api/basic-reactivity.html). It is event loop/framework agnostic and has only one pure-python dependency ([patchdiff](https://github.com/Korijn/patchdiff)) so it can be used in any project targeting Python >= 3.8. +Observ is a Python port of [Vue.js](https://vuejs.org/)' [computed properties and watchers](https://v3.vuejs.org/api/basic-reactivity.html). It is event loop/framework agnostic and has only one pure-python dependency ([patchdiff](https://github.com/Korijn/patchdiff)) so it can be used in any project targeting Python >= 3.9. Observ provides the following two benefits for stateful applications: diff --git a/observ/__init__.py b/observ/__init__.py index bb4b82e..1de1847 100644 --- a/observ/__init__.py +++ b/observ/__init__.py @@ -1,7 +1,7 @@ -__version__ = "0.10.0" +__version__ = "0.11.0" -from .observables import ( +from .proxy import ( reactive, readonly, shallow_reactive, @@ -9,4 +9,4 @@ to_raw, ) from .scheduler import scheduler -from .watcher import computed, watch +from .watcher import computed, watch, watch_effect diff --git a/observ/dict_proxy.py b/observ/dict_proxy.py new file mode 100644 index 0000000..3433860 --- /dev/null +++ b/observ/dict_proxy.py @@ -0,0 +1,78 @@ +from .proxy import Proxy, TYPE_LOOKUP +from .traps import construct_methods_traps_dict, trap_map, trap_map_readonly + + +dict_traps = { + "READERS": { + "copy", + "__eq__", + "__format__", + "__ge__", + "__gt__", + "__le__", + "__len__", + "__lt__", + "__ne__", + "__repr__", + "__sizeof__", + "__str__", + "keys", + "__or__", + "__ror__", + }, + "KEYREADERS": { + "get", + "__contains__", + "__getitem__", + }, + "ITERATORS": { + "items", + "values", + "__iter__", + "__reversed__", + }, + "WRITERS": { + "update", + "__ior__", + }, + "KEYWRITERS": { + "setdefault", + "__setitem__", + }, + "DELETERS": { + "clear", + "popitem", + }, + "KEYDELETERS": { + "pop", + "__delitem__", + }, +} + + +class DictProxyBase(Proxy): + def _orphaned_keydeps(self): + return set(self.proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) + + +def readonly_dict_proxy_init(self, target, shallow=False, **kwargs): + super(ReadonlyDictProxy, self).__init__( + target, shallow=shallow, **{**kwargs, "readonly": True} + ) + + +DictProxy = type( + "DictProxy", + (DictProxyBase,), + construct_methods_traps_dict(dict, dict_traps, trap_map), +) +ReadonlyDictProxy = type( + "ReadonlyDictProxy", + (DictProxyBase,), + { + "__init__": readonly_dict_proxy_init, + **construct_methods_traps_dict(dict, dict_traps, trap_map_readonly), + }, +) + +TYPE_LOOKUP[dict] = (DictProxy, ReadonlyDictProxy) diff --git a/observ/list_proxy.py b/observ/list_proxy.py new file mode 100644 index 0000000..e377ede --- /dev/null +++ b/observ/list_proxy.py @@ -0,0 +1,73 @@ +from .proxy import Proxy, TYPE_LOOKUP +from .traps import construct_methods_traps_dict, trap_map, trap_map_readonly + + +list_traps = { + "READERS": { + "count", + "index", + "copy", + "__add__", + "__getitem__", + "__contains__", + "__eq__", + "__ge__", + 
"__gt__", + "__le__", + "__lt__", + "__mul__", + "__ne__", + "__rmul__", + "__len__", + "__repr__", + "__str__", + "__format__", + "__sizeof__", + }, + "ITERATORS": { + "__iter__", + "__reversed__", + }, + "WRITERS": { + "append", + "clear", + "extend", + "insert", + "pop", + "remove", + "reverse", + "sort", + "__setitem__", + "__delitem__", + "__iadd__", + "__imul__", + }, +} + + +class ListProxyBase(Proxy): + pass + + +def readonly_list_proxy_init(self, target, shallow=False, **kwargs): + super(ReadonlyListProxy, self).__init__( + target, shallow=shallow, **{**kwargs, "readonly": True} + ) + + +ListProxy = type( + "ListProxy", + (ListProxyBase,), + construct_methods_traps_dict(list, list_traps, trap_map), +) +ReadonlyListProxy = type( + "ReadonlyListProxy", + (ListProxyBase,), + { + "__init__": readonly_list_proxy_init, + **construct_methods_traps_dict(list, list_traps, trap_map_readonly), + }, +) + + +TYPE_LOOKUP[list] = (ListProxy, ReadonlyListProxy) diff --git a/observ/observables.py b/observ/observables.py deleted file mode 100644 index f1dc824..0000000 --- a/observ/observables.py +++ /dev/null @@ -1,621 +0,0 @@ -""" -observe converts plain datastructures (dict, list, set) to -proxied versions of those datastructures to make them reactive. -""" -from functools import partial, wraps -import gc -from operator import xor -import sys -from weakref import WeakValueDictionary - -from .dep import Dep - - -class ProxyDb: - """ - Collection of proxies, tracked by the id of the object that they wrap. - Each time a Proxy is instantiated, it will register itself for the - wrapped object. And when a Proxy is deleted, then it will unregister. - When the last proxy that wraps an object is removed, it is uncertain - what happens to the wrapped object, so in that case the object id is - removed from the collection. - """ - - def __init__(self): - self.db = {} - gc.callbacks.append(self.cleanup) - - def cleanup(self, phase, info): - """ - Callback for garbage collector to cleanup the db for targets - that have no other references outside of the db - """ - # TODO: maybe also run on start? Check performance - if phase != "stop": - return - - keys_to_delete = [] - for key, value in self.db.items(): - # Refs: - # - sys.getrefcount - # - ref in db item - if sys.getrefcount(value["target"]) <= 2: - # We are the last to hold a reference! - keys_to_delete.append(key) - - for keys in keys_to_delete: - del self.db[keys] - - def reference(self, proxy): - """ - Adds a reference to the collection for the wrapped object's id - """ - obj_id = id(proxy.target) - - if obj_id not in self.db: - attrs = { - "dep": Dep(), - } - if isinstance(proxy.target, dict): - attrs["keydep"] = {key: Dep() for key in proxy.target.keys()} - self.db[obj_id] = { - "target": proxy.target, - "attrs": attrs, # dep, keydep - # keyed on tuple(readonly, shallow) - "proxies": WeakValueDictionary(), - } - - # Use setdefault to put the proxy in the proxies dict. If there - # was an existing value, it will return that instead. There shouldn't - # be an existing value, so we can compare the objects to see if we - # should raise an exception. 
- # Seems to be a tiny bit faster than checking beforehand if - # there is already an existing value in the proxies dict - result = self.db[obj_id]["proxies"].setdefault( - (proxy.readonly, proxy.shallow), proxy - ) - if result is not proxy: - raise RuntimeError("Proxy with existing configuration already in db") - - def dereference(self, proxy): - """ - Removes a reference from the database for the given proxy - """ - obj_id = id(proxy.target) - if obj_id not in self.db: - # When there are failing tests, it might happen that proxies - # are garbage collected at a point where the proxy_db is already - # cleared. That's why we need this check here. - # See fixture [clear_proxy_db](/tests/conftest.py:clear_proxy_db) - # for more info. - return - - # The given proxy is the last proxy in the WeakValueDictionary, - # so now is a good moment to see if can remove clean the deps - # for the target object - if len(self.db[obj_id]["proxies"]) == 1: - ref_count = sys.getrefcount(self.db[obj_id]["target"]) - # Ref count is still 3 here because of the reference through proxy.target - if ref_count <= 3: - # We are the last to hold a reference! - del self.db[obj_id] - - def attrs(self, proxy): - return self.db[id(proxy.target)]["attrs"] - - def get_proxy(self, target, readonly=False, shallow=False): - """ - Returns a proxy from the collection for the given object and configuration. - Will return None if there is no proxy for the object's id. - """ - if id(target) not in self.db: - return None - return self.db[id(target)]["proxies"].get((readonly, shallow)) - - -# Create a global proxy collection -proxy_db = ProxyDb() - - -class Proxy: - """ - Proxy for an object/target. - - Instantiating a Proxy will add a reference to the global proxy_db and - destroying a Proxy will remove that reference. - - Please use the `proxy` method to get a proxy for a certain object instead - of directly creating one yourself. The `proxy` method will either create - or return an existing proxy and makes sure that the db stays consistent. - """ - - __hash__ = None - - def __init__(self, target, readonly=False, shallow=False): - self.target = target - self.readonly = readonly - self.shallow = shallow - proxy_db.reference(self) - - def __del__(self): - proxy_db.dereference(self) - - -def proxy(target, readonly=False, shallow=False): - """ - Returns a Proxy for the given object. If a proxy for the given - configuration already exists, it will return that instead of - creating a new one. - - Please be aware: this only works on plain data types! 
- """ - # The object may be a proxy already, so check if it matches the - # given configuration (readonly and shallow) - if isinstance(target, Proxy): - if readonly == target.readonly and shallow == target.shallow: - return target - else: - # If the configuration does not match, - # unwrap the target from the proxy so that the right - # kind of proxy can be returned in the next part of - # this function - target = target.target - - # Note that at this point, target is always a non-proxy object - # Check the proxy_db to see if there's already a proxy for the target object - existing_proxy = proxy_db.get_proxy(target, readonly=readonly, shallow=shallow) - if existing_proxy is not None: - return existing_proxy - - # We can only wrap the following datatypes - if not isinstance(target, (dict, list, tuple, set)): - return target - - # Otherwise, create a new proxy - proxy_type = None - if isinstance(target, dict): - proxy_type = DictProxy if not readonly else ReadonlyDictProxy - elif isinstance(target, list): - proxy_type = ListProxy if not readonly else ReadonlyListProxy - elif isinstance(target, set): - proxy_type = SetProxy if not readonly else ReadonlySetProxy - elif isinstance(target, tuple): - return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) - return proxy_type(target, readonly=readonly, shallow=shallow) - - -reactive = proxy -readonly = partial(proxy, readonly=True) -shallow_reactive = partial(proxy, shallow=True) -shallow_readonly = partial(proxy, shallow=True, readonly=True) - - -class StateModifiedError(Exception): - """ - Raised when a proxy is modified in a watched (or computed) expression. - """ - - pass - - -def read_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def trap(self, *args, **kwargs): - if Dep.stack: - proxy_db.attrs(self)["dep"].depend() - value = fn(self.target, *args, **kwargs) - if self.shallow: - return value - return proxy(value, readonly=self.readonly) - - return trap - - -def iterate_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def trap(self, *args, **kwargs): - if Dep.stack: - proxy_db.attrs(self)["dep"].depend() - iterator = fn(self.target, *args, **kwargs) - if self.shallow: - return iterator - if method == "items": - return ( - (key, proxy(value, readonly=self.readonly)) for key, value in iterator - ) - else: - proxied = partial(proxy, readonly=self.readonly) - return map(proxied, iterator) - - return trap - - -def read_key_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - key = args[0] - keydeps = proxy_db.attrs(self)["keydep"] - if key not in keydeps: - keydeps[key] = Dep() - keydeps[key].depend() - value = fn(self.target, *args, **kwargs) - if self.shallow: - return value - return proxy(value, readonly=self.readonly) - - return inner - - -def write_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() - old = self.target.copy() - retval = fn(self.target, *args, **kwargs) - attrs = proxy_db.attrs(self) - if obj_cls == dict: - change_detected = False - keydeps = attrs["keydep"] - for key, val in self.target.items(): - if old.get(key) is not val: - if key in keydeps: - keydeps[key].notify() - else: - keydeps[key] = Dep() - change_detected = True - if change_detected: - attrs["dep"].notify() - else: # list and set - if self.target != old: - attrs["dep"].notify() - - return retval - - return inner - - -def 
write_key_trap(method, obj_cls): - fn = getattr(obj_cls, method) - getitem_fn = getattr(obj_cls, "get") - - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() - key = args[0] - attrs = proxy_db.attrs(self) - is_new = key not in attrs["keydep"] - old_value = getitem_fn(self.target, key) if not is_new else None - retval = fn(self.target, *args, **kwargs) - if method == "setdefault" and not self.shallow: - # This method is only available when readonly is false - retval = reactive(retval) - - new_value = getitem_fn(self.target, key) - if is_new: - attrs["keydep"][key] = Dep() - if xor(old_value is None, new_value is None) or old_value != new_value: - attrs["keydep"][key].notify() - attrs["dep"].notify() - return retval - - return inner - - -def delete_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() - retval = fn(self.target, *args, **kwargs) - attrs = proxy_db.attrs(self) - attrs["dep"].notify() - for key in self._orphaned_keydeps(): - attrs["keydep"][key].notify() - del attrs["keydep"][key] - return retval - - return inner - - -def delete_key_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - raise StateModifiedError() - retval = fn(self.target, *args, **kwargs) - key = args[0] - attrs = proxy_db.attrs(self) - attrs["dep"].notify() - attrs["keydep"][key].notify() - del attrs["keydep"][key] - return retval - - return inner - - -trap_map = { - "READERS": read_trap, - "KEYREADERS": read_key_trap, - "ITERATORS": iterate_trap, - "WRITERS": write_trap, - "KEYWRITERS": write_key_trap, - "DELETERS": delete_trap, - "KEYDELETERS": delete_key_trap, -} - - -class ReadonlyError(Exception): - """ - Raised when a readonly proxy is modified. 
- """ - - pass - - -def readonly_trap(method, obj_cls): - fn = getattr(obj_cls, method) - - @wraps(fn) - def inner(self, *args, **kwargs): - raise ReadonlyError() - - return inner - - -trap_map_readonly = { - "READERS": read_trap, - "KEYREADERS": read_key_trap, - "ITERATORS": iterate_trap, - "WRITERS": readonly_trap, - "KEYWRITERS": readonly_trap, - "DELETERS": readonly_trap, - "KEYDELETERS": readonly_trap, -} - - -def bind_traps(proxy_cls, obj_cls, traps, trap_map): - for trap_type, methods in traps.items(): - for method in methods: - trap = trap_map[trap_type](method, obj_cls) - setattr(proxy_cls, method, trap) - - -dict_traps = { - "READERS": { - "copy", - "__eq__", - "__format__", - "__ge__", - "__gt__", - "__le__", - "__len__", - "__lt__", - "__ne__", - "__repr__", - "__sizeof__", - "__str__", - "keys", - }, - "KEYREADERS": { - "get", - "__contains__", - "__getitem__", - }, - "ITERATORS": { - "items", - "values", - "__iter__", - }, - "WRITERS": { - "update", - }, - "KEYWRITERS": { - "setdefault", - "__setitem__", - }, - "DELETERS": { - "clear", - "popitem", - }, - "KEYDELETERS": { - "pop", - "__delitem__", - }, -} - -if sys.version_info >= (3, 8, 0): - dict_traps["ITERATORS"].add("__reversed__") -if sys.version_info >= (3, 9, 0): - dict_traps["READERS"].add("__or__") - dict_traps["READERS"].add("__ror__") - dict_traps["WRITERS"].add("__ior__") - - -class DictProxyBase(Proxy): - def __init__(self, target, readonly=False, shallow=False): - super().__init__(target, readonly=readonly, shallow=shallow) - - def _orphaned_keydeps(self): - return set(proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) - - -class DictProxy(DictProxyBase): - pass - - -class ReadonlyDictProxy(DictProxyBase): - def __init__(self, target, shallow=False, **kwargs): - super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) - - -bind_traps(DictProxy, dict, dict_traps, trap_map) -bind_traps(ReadonlyDictProxy, dict, dict_traps, trap_map_readonly) - - -list_traps = { - "READERS": { - "count", - "index", - "copy", - "__add__", - "__getitem__", - "__contains__", - "__eq__", - "__ge__", - "__gt__", - "__le__", - "__lt__", - "__mul__", - "__ne__", - "__rmul__", - "__len__", - "__repr__", - "__str__", - "__format__", - "__sizeof__", - }, - "ITERATORS": { - "__iter__", - "__reversed__", - }, - "WRITERS": { - "append", - "clear", - "extend", - "insert", - "pop", - "remove", - "reverse", - "sort", - "__setitem__", - "__delitem__", - "__iadd__", - "__imul__", - }, -} - - -class ListProxyBase(Proxy): - def __init__(self, target, readonly=False, shallow=False): - super().__init__(target, readonly=readonly, shallow=shallow) - - -class ListProxy(ListProxyBase): - pass - - -class ReadonlyListProxy(ListProxyBase): - def __init__(self, target, shallow=False, **kwargs): - super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) - - -bind_traps(ListProxy, list, list_traps, trap_map) -bind_traps(ReadonlyListProxy, list, list_traps, trap_map_readonly) - - -set_traps = { - "READERS": { - "copy", - "difference", - "intersection", - "isdisjoint", - "issubset", - "issuperset", - "symmetric_difference", - "union", - "__and__", - "__contains__", - "__eq__", - "__format__", - "__ge__", - "__gt__", - "__iand__", - "__ior__", - "__isub__", - "__ixor__", - "__le__", - "__len__", - "__lt__", - "__ne__", - "__or__", - "__rand__", - "__repr__", - "__ror__", - "__rsub__", - "__rxor__", - "__sizeof__", - "__str__", - "__sub__", - "__xor__", - }, - "ITERATORS": { - "__iter__", - }, - "WRITERS": { - "add", - 
"clear", - "difference_update", - "intersection_update", - "discard", - "pop", - "remove", - "symmetric_difference_update", - "update", - }, -} - - -class SetProxyBase(Proxy): - def __init__(self, target, readonly=False, shallow=False): - super().__init__(target, readonly=readonly, shallow=shallow) - - -class SetProxy(SetProxyBase): - pass - - -class ReadonlySetProxy(SetProxyBase): - def __init__(self, target, shallow=False, **kwargs): - super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) - - -bind_traps(SetProxy, set, set_traps, trap_map) -bind_traps(ReadonlySetProxy, set, set_traps, trap_map_readonly) - - -def to_raw(target): - """ - Returns a raw object from which any trace of proxy has been replaced - with its wrapped target value. - """ - if isinstance(target, Proxy): - return to_raw(target.target) - - if isinstance(target, list): - return [to_raw(t) for t in target] - - if isinstance(target, dict): - return {key: to_raw(value) for key, value in target.items()} - - if isinstance(target, tuple): - return tuple(to_raw(t) for t in target) - - if isinstance(target, set): - return {to_raw(t) for t in target} - - return target diff --git a/observ/proxy.py b/observ/proxy.py new file mode 100644 index 0000000..0353f7c --- /dev/null +++ b/observ/proxy.py @@ -0,0 +1,108 @@ +from functools import partial + +from .proxy_db import proxy_db + + +class Proxy: + """ + Proxy for an object/target. + + Instantiating a Proxy will add a reference to the global proxy_db and + destroying a Proxy will remove that reference. + + Please use the `proxy` method to get a proxy for a certain object instead + of directly creating one yourself. The `proxy` method will either create + or return an existing proxy and makes sure that the db stays consistent. + """ + + __hash__ = None + __slots__ = ["target", "readonly", "shallow", "proxy_db", "__weakref__"] + + def __init__(self, target, readonly=False, shallow=False): + self.target = target + self.readonly = readonly + self.shallow = shallow + self.proxy_db = proxy_db + self.proxy_db.reference(self) + + def __del__(self): + self.proxy_db.dereference(self) + + +# Lookup dict for mapping a type (dict, list, set) to a method +# that will convert an object of that type to a proxied version +TYPE_LOOKUP = {} + + +def proxy(target, readonly=False, shallow=False): + """ + Returns a Proxy for the given object. If a proxy for the given + configuration already exists, it will return that instead of + creating a new one. + + Please be aware: this only works on plain data types: dict, list, + set and tuple! 
+ """ + # The object may be a proxy already, so check if it matches the + # given configuration (readonly and shallow) + if isinstance(target, Proxy): + if readonly == target.readonly and shallow == target.shallow: + return target + else: + # If the configuration does not match, + # unwrap the target from the proxy so that the right + # kind of proxy can be returned in the next part of + # this function + target = target.target + + # Note that at this point, target is always a non-proxy object + # Check the proxy_db to see if there's already a proxy for the target object + existing_proxy = proxy_db.get_proxy(target, readonly=readonly, shallow=shallow) + if existing_proxy is not None: + return existing_proxy + + # We can only wrap the following datatypes + if not isinstance(target, (dict, list, tuple, set)): + return target + + # Otherwise, create a new proxy + proxy_type = None + + for target_type, (writable_proxy_type, readonly_proxy_type) in TYPE_LOOKUP.items(): + if isinstance(target, target_type): + proxy_type = readonly_proxy_type if readonly else writable_proxy_type + break + else: + if isinstance(target, tuple): + return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + + return proxy_type(target, readonly=readonly, shallow=shallow) + + +reactive = proxy +readonly = partial(proxy, readonly=True) +shallow_reactive = partial(proxy, shallow=True) +shallow_readonly = partial(proxy, shallow=True, readonly=True) + + +def to_raw(target): + """ + Returns a raw object from which any trace of proxy has been replaced + with its wrapped target value. + """ + if isinstance(target, Proxy): + return to_raw(target.target) + + if isinstance(target, list): + return [to_raw(t) for t in target] + + if isinstance(target, dict): + return {key: to_raw(value) for key, value in target.items()} + + if isinstance(target, tuple): + return tuple(to_raw(t) for t in target) + + if isinstance(target, set): + return {to_raw(t) for t in target} + + return target diff --git a/observ/proxy_db.py b/observ/proxy_db.py new file mode 100644 index 0000000..d67d71e --- /dev/null +++ b/observ/proxy_db.py @@ -0,0 +1,111 @@ +import gc +import sys +from weakref import WeakValueDictionary + +from .dep import Dep + + +class ProxyDb: + """ + Collection of proxies, tracked by the id of the object that they wrap. + Each time a Proxy is instantiated, it will register itself for the + wrapped object. And when a Proxy is deleted, then it will unregister. + When the last proxy that wraps an object is removed, it is uncertain + what happens to the wrapped object, so in that case the object id is + removed from the collection. + """ + + def __init__(self): + self.db = {} + gc.callbacks.append(self.cleanup) + + def cleanup(self, phase, info): + """ + Callback for garbage collector to cleanup the db for targets + that have no other references outside of the db + """ + # TODO: maybe also run on start? Check performance + if phase != "stop": + return + + keys_to_delete = [] + for key, value in self.db.items(): + # Refs: + # - sys.getrefcount + # - ref in db item + if sys.getrefcount(value["target"]) <= 2: + # We are the last to hold a reference! 
+ keys_to_delete.append(key) + + for keys in keys_to_delete: + del self.db[keys] + + def reference(self, proxy): + """ + Adds a reference to the collection for the wrapped object's id + """ + obj_id = id(proxy.target) + + if obj_id not in self.db: + attrs = { + "dep": Dep(), + } + if isinstance(proxy.target, dict): + attrs["keydep"] = {key: Dep() for key in proxy.target.keys()} + self.db[obj_id] = { + "target": proxy.target, + "attrs": attrs, # dep, keydep + # keyed on tuple(readonly, shallow) + "proxies": WeakValueDictionary(), + } + + # Use setdefault to put the proxy in the proxies dict. If there + # was an existing value, it will return that instead. There shouldn't + # be an existing value, so we can compare the objects to see if we + # should raise an exception. + # Seems to be a tiny bit faster than checking beforehand if + # there is already an existing value in the proxies dict + result = self.db[obj_id]["proxies"].setdefault( + (proxy.readonly, proxy.shallow), proxy + ) + if result is not proxy: + raise RuntimeError("Proxy with existing configuration already in db") + + def dereference(self, proxy): + """ + Removes a reference from the database for the given proxy + """ + obj_id = id(proxy.target) + if obj_id not in self.db: + # When there are failing tests, it might happen that proxies + # are garbage collected at a point where the proxy_db is already + # cleared. That's why we need this check here. + # See fixture [clear_proxy_db](/tests/conftest.py:clear_proxy_db) + # for more info. + return + + # The given proxy is the last proxy in the WeakValueDictionary, + # so now is a good moment to see if can remove clean the deps + # for the target object + if len(self.db[obj_id]["proxies"]) == 1: + ref_count = sys.getrefcount(self.db[obj_id]["target"]) + # Ref count is still 3 here because of the reference through proxy.target + if ref_count <= 3: + # We are the last to hold a reference! + del self.db[obj_id] + + def attrs(self, proxy): + return self.db[id(proxy.target)]["attrs"] + + def get_proxy(self, target, readonly=False, shallow=False): + """ + Returns a proxy from the collection for the given object and configuration. + Will return None if there is no proxy for the object's id. 
+ """ + if id(target) not in self.db: + return None + return self.db[id(target)]["proxies"].get((readonly, shallow)) + + +# Create a global proxy collection +proxy_db = ProxyDb() diff --git a/observ/set_proxy.py b/observ/set_proxy.py new file mode 100644 index 0000000..d99d633 --- /dev/null +++ b/observ/set_proxy.py @@ -0,0 +1,79 @@ +from .proxy import Proxy, TYPE_LOOKUP +from .traps import construct_methods_traps_dict, trap_map, trap_map_readonly + + +set_traps = { + "READERS": { + "copy", + "difference", + "intersection", + "isdisjoint", + "issubset", + "issuperset", + "symmetric_difference", + "union", + "__and__", + "__contains__", + "__eq__", + "__format__", + "__ge__", + "__gt__", + "__iand__", + "__ior__", + "__isub__", + "__ixor__", + "__le__", + "__len__", + "__lt__", + "__ne__", + "__or__", + "__rand__", + "__repr__", + "__ror__", + "__rsub__", + "__rxor__", + "__sizeof__", + "__str__", + "__sub__", + "__xor__", + }, + "ITERATORS": { + "__iter__", + }, + "WRITERS": { + "add", + "clear", + "difference_update", + "intersection_update", + "discard", + "pop", + "remove", + "symmetric_difference_update", + "update", + }, +} + + +class SetProxyBase(Proxy): + pass + + +def readonly_set_proxy_init(self, target, shallow=False, **kwargs): + super(ReadonlySetProxy, self).__init__( + target, shallow=shallow, **{**kwargs, "readonly": True} + ) + + +SetProxy = type( + "SetProxy", (SetProxyBase,), construct_methods_traps_dict(set, set_traps, trap_map) +) +ReadonlySetProxy = type( + "ReadonlysetProxy", + (SetProxyBase,), + { + "__init__": readonly_set_proxy_init, + **construct_methods_traps_dict(set, set_traps, trap_map_readonly), + }, +) + +TYPE_LOOKUP[set] = (SetProxy, ReadonlySetProxy) diff --git a/observ/store.py b/observ/store.py index 5665bba..a983b67 100644 --- a/observ/store.py +++ b/observ/store.py @@ -3,7 +3,7 @@ import patchdiff -from .observables import ( +from .proxy import ( reactive, readonly, shallow_reactive, diff --git a/observ/traps.py b/observ/traps.py new file mode 100644 index 0000000..affb7e0 --- /dev/null +++ b/observ/traps.py @@ -0,0 +1,211 @@ +from functools import partial, wraps +from operator import xor + +from .dep import Dep +from .proxy import proxy + + +class StateModifiedError(Exception): + """ + Raised when a proxy is modified in a watched (or computed) expression. + """ + + pass + + +class ReadonlyError(Exception): + """ + Raised when a readonly proxy is modified. 
+ """ + + pass + + +def read_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + self.proxy_db.attrs(self)["dep"].depend() + value = fn(self.target, *args, **kwargs) + if self.shallow: + return value + return proxy(value, readonly=self.readonly) + + return trap + + +def iterate_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + self.proxy_db.attrs(self)["dep"].depend() + iterator = fn(self.target, *args, **kwargs) + if self.shallow: + return iterator + if method == "items": + return ( + (key, proxy(value, readonly=self.readonly)) for key, value in iterator + ) + else: + proxied = partial(proxy, readonly=self.readonly) + return map(proxied, iterator) + + return trap + + +def read_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + key = args[0] + keydeps = self.proxy_db.attrs(self)["keydep"] + if key not in keydeps: + keydeps[key] = Dep() + keydeps[key].depend() + value = fn(self.target, *args, **kwargs) + if self.shallow: + return value + return proxy(value, readonly=self.readonly) + + return trap + + +def write_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + old = self.target.copy() + retval = fn(self.target, *args, **kwargs) + attrs = self.proxy_db.attrs(self) + if obj_cls == dict: + change_detected = False + keydeps = attrs["keydep"] + for key, val in self.target.items(): + if old.get(key) is not val: + if key in keydeps: + keydeps[key].notify() + else: + keydeps[key] = Dep() + change_detected = True + if change_detected: + attrs["dep"].notify() + else: # list and set + if self.target != old: + attrs["dep"].notify() + + return retval + + return trap + + +def write_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + getitem_fn = getattr(obj_cls, "get") + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + key = args[0] + attrs = self.proxy_db.attrs(self) + is_new = key not in attrs["keydep"] + old_value = getitem_fn(self.target, key) if not is_new else None + retval = fn(self.target, *args, **kwargs) + if method == "setdefault" and not self.shallow: + # This method is only available when readonly is false + retval = proxy(retval) + + new_value = getitem_fn(self.target, key) + if is_new: + attrs["keydep"][key] = Dep() + if xor(old_value is None, new_value is None) or old_value != new_value: + attrs["keydep"][key].notify() + attrs["dep"].notify() + return retval + + return trap + + +def delete_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + retval = fn(self.target, *args, **kwargs) + attrs = self.proxy_db.attrs(self) + attrs["dep"].notify() + for key in self._orphaned_keydeps(): + attrs["keydep"][key].notify() + del attrs["keydep"][key] + return retval + + return trap + + +def delete_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + retval = fn(self.target, *args, **kwargs) + key = args[0] + attrs = self.proxy_db.attrs(self) + attrs["dep"].notify() + attrs["keydep"][key].notify() + del attrs["keydep"][key] + return retval + + return trap + + +trap_map = { + "READERS": read_trap, + "KEYREADERS": read_key_trap, + 
"ITERATORS": iterate_trap, + "WRITERS": write_trap, + "KEYWRITERS": write_key_trap, + "DELETERS": delete_trap, + "KEYDELETERS": delete_key_trap, +} + + +def readonly_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + raise ReadonlyError() + + return trap + + +trap_map_readonly = { + "READERS": read_trap, + "KEYREADERS": read_key_trap, + "ITERATORS": iterate_trap, + "WRITERS": readonly_trap, + "KEYWRITERS": readonly_trap, + "DELETERS": readonly_trap, + "KEYDELETERS": readonly_trap, +} + + +def construct_methods_traps_dict(obj_cls, traps, trap_map): + return { + method: trap_map[trap_type](method, obj_cls) + for trap_type, methods in traps.items() + for method in methods + } diff --git a/observ/watcher.py b/observ/watcher.py index 8e4730d..9f03936 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -6,15 +6,18 @@ from __future__ import annotations from collections.abc import Container -from functools import wraps +from functools import partial, wraps import inspect from itertools import count from typing import Any, Callable, Optional, TypeVar from weakref import ref, WeakSet from .dep import Dep -from .observables import DictProxyBase, ListProxyBase, Proxy, SetProxyBase +from .dict_proxy import DictProxyBase +from .list_proxy import ListProxyBase +from .proxy import Proxy from .scheduler import scheduler +from .set_proxy import SetProxyBase T = TypeVar("T", bound=Callable[[], Any]) @@ -22,7 +25,7 @@ def watch( fn: Callable[[], Any] | Proxy | list[Proxy], - callback: Optional[Callable], + callback: Optional[Callable] = None, sync: bool = False, deep: bool | None = None, immediate: bool = False, @@ -36,6 +39,9 @@ def watch( return watcher +watch_effect = partial(watch, immediate=True, deep=True, callback=None) + + def computed(_fn=None, *, deep=True): def decorator_computed(fn: T) -> T: """ diff --git a/poetry.lock b/poetry.lock index 62f9c1f..f5df503 100644 --- a/poetry.lock +++ b/poetry.lock @@ -545,25 +545,6 @@ docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker perf = ["ipython"] testing = ["flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)", "pytest-ruff"] -[[package]] -name = "importlib-resources" -version = "6.1.0" -description = "Read resources from Python packages" -category = "dev" -optional = false -python-versions = ">=3.8" -files = [ - {file = "importlib_resources-6.1.0-py3-none-any.whl", hash = "sha256:aa50258bbfa56d4e33fbd8aa3ef48ded10d1735f11532b8df95388cc6bdb7e83"}, - {file = "importlib_resources-6.1.0.tar.gz", hash = "sha256:9d48dcccc213325e810fd723e7fbb45ccb39f6cf5c31f00cf2b965f5f10f3cb9"}, -] - -[package.dependencies] -zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""} - -[package.extras] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"] -testing = ["pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-ruff", "zipp (>=3.17)"] - [[package]] name = "iniconfig" version = "2.0.0" @@ -625,7 +606,6 @@ files = [ [package.dependencies] importlib-metadata = {version = ">=4.11.4", markers = "python_version < \"3.12\""} -importlib-resources = {version = "*", markers = "python_version < \"3.9\""} "jaraco.classes" = "*" 
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""} pywin32-ctypes = {version = ">=0.2.0", markers = "sys_platform == \"win32\""} @@ -1220,7 +1200,6 @@ files = [ [package.dependencies] markdown-it-py = ">=2.2.0" pygments = ">=2.13.0,<3.0.0" -typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""} [package.extras] jupyter = ["ipywidgets (>=7.5.1,<9)"] @@ -1376,5 +1355,5 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" -python-versions = ">=3.8" -content-hash = "80e857788dfac5c92b2350fca1fca5e0d0804c7b9e24b4d9f2251df242ceb318" +python-versions = ">=3.9" +content-hash = "5a0c918a8393c9ade652418344675aa466ccbb560792764aab80564d58ba0274" diff --git a/pyproject.toml b/pyproject.toml index 4a0246f..6d8ec99 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "observ" -version = "0.10.0" +version = "0.11.0" description = "Reactive state management for Python" authors = ["Korijn van Golen <[email protected]>", "Berend Klein Haneveld <[email protected]>"] license = "MIT" @@ -8,7 +8,7 @@ homepage = "https://github.com/fork-tongue/observ" readme = "README.md" [tool.poetry.dependencies] -python = ">=3.8" +python = ">=3.9" patchdiff = "~0.3.4" [tool.poetry.group.dev.dependencies]
Use `type()` to create proxy classes instead of `bind_traps`
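A minimal sketch of the pattern this change adopts, for illustration only (the helper names below are stand-ins, not observ's actual trap factories): rather than defining an empty class and binding trap methods onto it afterwards with `setattr` (the old `bind_traps`), the traps are collected into a namespace dict and handed to `type()`, so each proxy class is created fully formed in one step.

```python
from functools import wraps


def make_trap(method, obj_cls):
    # Stand-in trap factory: simply forwards the call to the wrapped target.
    fn = getattr(obj_cls, method)

    @wraps(fn)
    def trap(self, *args, **kwargs):
        return fn(self.target, *args, **kwargs)

    return trap


def construct_namespace(obj_cls, methods):
    # Collect every trap up front, as construct_methods_traps_dict does.
    return {method: make_trap(method, obj_cls) for method in methods}


class ProxyBase:
    def __init__(self, target):
        self.target = target


# The class is created in one call to type(), instead of calling
# setattr in a loop on an already-defined (empty) class.
ListProxy = type(
    "ListProxy",
    (ProxyBase,),
    construct_namespace(list, {"append", "__len__", "__getitem__"}),
)

p = ListProxy([1, 2, 3])
p.append(4)
assert len(p) == 4 and p[3] == 4
```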
2023-11-02T14:08:56
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-93
3c886afb8a7e05c3724c4d422f3222488f522c6c
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 76e4258..79ca808 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -50,12 +50,18 @@ jobs: uses: actions/setup-python@v2 with: python-version: ${{ matrix.pyversion }} + - name: Install system dependencies + run: | + sudo apt-get update -y -qq + sudo apt-get install -y libgles2-mesa-dev - name: Install poetry run: pip install "poetry>=1.4.2,<1.5" - name: Install dependencies run: poetry install - name: Test run: poetry run pytest --cov=observ --cov-report=term-missing + env: + QT_QPA_PLATFORM: offscreen build: name: Build and test wheel diff --git a/examples/observe_qt.py b/examples/observe_qt.py index 3897218..ce97f7a 100644 --- a/examples/observe_qt.py +++ b/examples/observe_qt.py @@ -7,8 +7,10 @@ and updates the label whenever a computed property based on the state changes. """ +import asyncio from time import sleep +from PySide6 import QtAsyncio from PySide6.QtCore import QObject, QThread, Signal from PySide6.QtWidgets import ( QApplication, @@ -126,8 +128,8 @@ def on_reset_clicked(self): app = QApplication([]) - # Register with Qt's event loop - scheduler.register_qt() + asyncio.set_event_loop_policy(QtAsyncio.QAsyncioEventLoopPolicy()) + scheduler.register_asyncio() # Create layout and pass state to widgets layout = QVBoxLayout() diff --git a/observ/scheduler.py b/observ/scheduler.py index 64a3513..15df230 100644 --- a/observ/scheduler.py +++ b/observ/scheduler.py @@ -5,6 +5,7 @@ from bisect import bisect from collections import defaultdict import importlib +import warnings class Scheduler: @@ -31,9 +32,25 @@ def register_request_flush(self, callback): """ self.request_flush = callback + def register_asyncio(self): + """ + Utility function for integration with asyncio + """ + import asyncio + + def request_flush(): + loop = asyncio.get_event_loop_policy().get_event_loop() + loop.call_soon(scheduler.flush) + + scheduler.register_request_flush(request_flush) + def register_qt(self): """ - Utility function for integration with Qt event loop + Legacy utility function for integration with Qt event loop. Note that using + the `register_asyncio` method is preferred over this, together with + setting the asyncio event loop policy to `QtAsyncio.QAsyncioEventLoopPolicy`. + This is supported from Pyside 6.6.0. Note that the QtAsyncio submodule + is not included in the `pyside6_essentials` package. 
""" for qt in ("PySide6", "PyQt6", "PySide2", "PyQt5", "PySide", "PyQt4"): try: @@ -44,6 +61,19 @@ def register_qt(self): else: raise ImportError("Could not import QtCore") + try: + importlib.import_module(f"{qt}.QtAsyncio") + + warnings.warn( + "QtAsyncio module available: please consider using `register_asyncio` " + "and call the following code:\n" + f" from {qt} import QtAsyncio\n" + " asyncio.set_event_loop_policy(QtAsyncio.QAsyncioEventLoopPolicy())" + "" + ) + except ImportError: + pass + self.timer = QtCore.QTimer() self.timer.setSingleShot(True) self.timer.timeout.connect(scheduler.flush) diff --git a/poetry.lock b/poetry.lock index 32208bf..62f9c1f 100644 --- a/poetry.lock +++ b/poetry.lock @@ -9,13 +9,19 @@ optional = false python-versions = ">=3.8" files = [ {file = "black-23.10.1-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:ec3f8e6234c4e46ff9e16d9ae96f4ef69fa328bb4ad08198c8cee45bb1f08c69"}, + {file = "black-23.10.1-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:1b917a2aa020ca600483a7b340c165970b26e9029067f019e3755b56e8dd5916"}, {file = "black-23.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c74de4c77b849e6359c6f01987e94873c707098322b91490d24296f66d067dc"}, {file = "black-23.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:7b4d10b0f016616a0d93d24a448100adf1699712fb7a4efd0e2c32bbb219b173"}, {file = "black-23.10.1-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:b15b75fc53a2fbcac8a87d3e20f69874d161beef13954747e053bca7a1ce53a0"}, + {file = "black-23.10.1-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:e293e4c2f4a992b980032bbd62df07c1bcff82d6964d6c9496f2cd726e246ace"}, {file = "black-23.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d56124b7a61d092cb52cce34182a5280e160e6aff3137172a68c2c2c4b76bcb"}, {file = "black-23.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:3f157a8945a7b2d424da3335f7ace89c14a3b0625e6593d21139c2d8214d55ce"}, + {file = "black-23.10.1-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:cfcce6f0a384d0da692119f2d72d79ed07c7159879d0bb1bb32d2e443382bf3a"}, + {file = "black-23.10.1-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:33d40f5b06be80c1bbce17b173cda17994fbad096ce60eb22054da021bf933d1"}, {file = "black-23.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:840015166dbdfbc47992871325799fd2dc0dcf9395e401ada6d88fe11498abad"}, {file = "black-23.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:037e9b4664cafda5f025a1728c50a9e9aedb99a759c89f760bd83730e76ba884"}, + {file = "black-23.10.1-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:7cb5936e686e782fddb1c73f8aa6f459e1ad38a6a7b0e54b403f1f05a1507ee9"}, + {file = "black-23.10.1-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:7670242e90dc129c539e9ca17665e39a146a761e681805c54fbd86015c7c84f7"}, {file = "black-23.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ed45ac9a613fb52dad3b61c8dea2ec9510bf3108d4db88422bacc7d1ba1243d"}, {file = "black-23.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:6d23d7822140e3fef190734216cefb262521789367fbdc0b3f22af6744058982"}, {file = "black-23.10.1-py3-none-any.whl", hash = "sha256:d431e6739f727bb2e0495df64a6c7a5310758e87505f5f8cde9ff6c0f2d7e4fe"}, @@ -897,6 +903,43 @@ files = [ [package.extras] plugins = ["importlib-metadata"] +[[package]] +name = "pyside6" +version = "6.6.0" +description = "Python bindings for the Qt cross-platform application and UI framework" +category = "dev" +optional = false +python-versions = "<3.13,>=3.8" +files = [ + 
{file = "PySide6-6.6.0-cp38-abi3-macosx_11_0_universal2.whl", hash = "sha256:8103f14ed46a05e81acccbfc8388e3321e392fe54f3aa4a13336bd2ed8af5cd4"}, + {file = "PySide6-6.6.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:d487eab0f9bfc5c9141b474093e16207ff48cd9335e6465a01deb8dff0693fbc"}, + {file = "PySide6-6.6.0-cp38-abi3-manylinux_2_31_aarch64.whl", hash = "sha256:f40917b7a0c5c7f0c7faaa87e66ffa3b58d9d1d3d977d30fc07e193b82dd6749"}, + {file = "PySide6-6.6.0-cp38-abi3-win_amd64.whl", hash = "sha256:d41dfcfa32c89502cdaa20206b7e2aba199072f94b424bc6a4d6f50b10f4eb5a"}, +] + +[package.dependencies] +PySide6-Addons = "6.6.0" +PySide6-Essentials = "6.6.0" +shiboken6 = "6.6.0" + +[[package]] +name = "pyside6-addons" +version = "6.6.0" +description = "Python bindings for the Qt cross-platform application and UI framework (Addons)" +category = "dev" +optional = false +python-versions = "<3.13,>=3.8" +files = [ + {file = "PySide6_Addons-6.6.0-cp38-abi3-macosx_11_0_universal2.whl", hash = "sha256:82fe1bb6a1aabf5ab3d8632072dc908aa37fc75b4b46e520258c441bd6b103fb"}, + {file = "PySide6_Addons-6.6.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5c56e963b841aeaacbc9ca8ca34df45308818dbd6fc59faa2b5a00a299e9892b"}, + {file = "PySide6_Addons-6.6.0-cp38-abi3-manylinux_2_31_aarch64.whl", hash = "sha256:e145514a7c37ee3a6ad0ddd693f805ccff6fbd9640a24fb2ef6b7f852882d3e9"}, + {file = "PySide6_Addons-6.6.0-cp38-abi3-win_amd64.whl", hash = "sha256:414864f7cfbdf8ac4c5a745566a515f18b6018d5730bf5489cc2716b5e1fcd9b"}, +] + +[package.dependencies] +PySide6-Essentials = "6.6.0" +shiboken6 = "6.6.0" + [[package]] name = "pyside6-essentials" version = "6.6.0" @@ -956,6 +999,25 @@ pytest = ">=4.6" [package.extras] testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"] +[[package]] +name = "pytest-qt" +version = "4.2.0" +description = "pytest support for PyQt and PySide applications" +category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "pytest-qt-4.2.0.tar.gz", hash = "sha256:00a17b586dd530b6d7a9399923a40489ca4a9a309719011175f55dc6b5dc8f41"}, + {file = "pytest_qt-4.2.0-py2.py3-none-any.whl", hash = "sha256:a7659960a1ab2af8fc944655a157ff45d714b80ed7a6af96a4b5bb99ecf40a22"}, +] + +[package.dependencies] +pytest = ">=3.0.0" + +[package.extras] +dev = ["pre-commit", "tox"] +doc = ["sphinx", "sphinx-rtd-theme"] + [[package]] name = "pytest-timeout" version = "2.2.0" @@ -971,6 +1033,34 @@ files = [ [package.dependencies] pytest = ">=5.0.0" +[[package]] +name = "pytest-xvfb" +version = "3.0.0" +description = "A pytest plugin to run Xvfb (or Xephyr/Xvnc) for tests." 
+category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "pytest-xvfb-3.0.0.tar.gz", hash = "sha256:3746ab1f4d1159f03f751638d053689ccd284291b38b8fb03d3ebbe7bf69cfc0"}, + {file = "pytest_xvfb-3.0.0-py3-none-any.whl", hash = "sha256:352f247c788457ccdfcfeec8a47a2a6594c8eaf22f0302dae9e2635bb23975c2"}, +] + +[package.dependencies] +pytest = ">=2.8.1" +pyvirtualdisplay = ">=1.3" + +[[package]] +name = "pyvirtualdisplay" +version = "3.0" +description = "python wrapper for Xvfb, Xephyr and Xvnc" +category = "dev" +optional = false +python-versions = "*" +files = [ + {file = "PyVirtualDisplay-3.0-py3-none-any.whl", hash = "sha256:40d4b8dfe4b8de8552e28eb367647f311f88a130bf837fe910e7f180d5477f0e"}, + {file = "PyVirtualDisplay-3.0.tar.gz", hash = "sha256:09755bc3ceb6eb725fb07eca5425f43f2358d3bf08e00d2a9b792a1aedd16159"}, +] + [[package]] name = "pywin32-ctypes" version = "0.2.2" @@ -996,6 +1086,7 @@ files = [ {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, + {file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"}, {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, @@ -1003,8 +1094,15 @@ files = [ {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, + {file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"}, {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"}, + {file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"}, + {file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"}, + {file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"}, + {file = 
"PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"}, + {file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"}, {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, @@ -1021,6 +1119,7 @@ files = [ {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, + {file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"}, {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, @@ -1028,6 +1127,7 @@ files = [ {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, + {file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"}, {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, @@ -1277,4 +1377,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" python-versions = ">=3.8" -content-hash = "e1cd00b80bad4b906eb245c6abb367a0c392f34c4dab6098a2fa981ee61468a4" +content-hash = "80e857788dfac5c92b2350fca1fca5e0d0804c7b9e24b4d9f2251df242ceb318" diff --git a/pyproject.toml b/pyproject.toml index edd700f..4a0246f 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -18,10 +18,12 @@ flake8-black = "*" flake8-import-order = "*" flake8-print = "*" pre-commit = "*" +PySide6 = { version = ">=6.6", python = "<3.13"} pytest = "*" pytest-cov = "*" +pytest-qt = "*" pytest-timeout = "*" -PySide6-Essentials = { version = "*", python = "<=3.12"} 
+pytest-xvfb = "*" twine = "*" urllib3 = { version = "*", python = "<4"}
PySide 6.6 asyncio support

Great news from the PySide camp: https://www.qt.io/blog/qt-for-python-6.6

An official asyncio compatible Qt event loop! I guess it means we can replace Qt integration with standard asyncio logic and expect a performance increase.

Relevant code: https://github.com/fork-tongue/observ/blob/9e016b74f5e4070cbc7db448e0e8020652e7ed11/observ/scheduler.py#L34-L53

"Standard asyncio logic" meaning something like this:

```python
import asyncio

def request(deadline):
    loop = asyncio.get_event_loop_policy().get_event_loop()
    loop.call_soon(do_stuff, deadline)
```
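For reference, the patch above resolves this by adding `scheduler.register_asyncio()`; a minimal extract of the resulting usage from `examples/observe_qt.py` (QtAsyncio ships with PySide >= 6.6 and is not included in `pyside6_essentials`):

```python
import asyncio

from PySide6 import QtAsyncio
from PySide6.QtWidgets import QApplication

from observ import scheduler

app = QApplication([])

# Drive asyncio through Qt's event loop, then let observ request
# its flushes via loop.call_soon (see register_asyncio above).
asyncio.set_event_loop_policy(QtAsyncio.QAsyncioEventLoopPolicy())
scheduler.register_asyncio()
```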
2023-10-25T12:49:30
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-73
c295c7638d5df5780ffa358d8f1f4ab7e53fb4e1
diff --git a/.gitignore b/.gitignore index 633a173..5210c2d 100644 --- a/.gitignore +++ b/.gitignore @@ -2,4 +2,5 @@ __pycache__ .vscode .coverage -dist \ No newline at end of file +dist +.benchmarks diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 6a9eeea..f00bec3 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -11,7 +11,7 @@ repos: hooks: - id: pytest name: Tests - entry: poetry run pytest + entry: poetry run pytest tests language: system types: [python] pass_filenames: false diff --git a/bench/profiling.py b/bench/profiling.py new file mode 100644 index 0000000..0b61119 --- /dev/null +++ b/bench/profiling.py @@ -0,0 +1,28 @@ +from observ import reactive, watch + + +def noop(): + pass + + +# @profile +def main(): + obj = reactive({}) + + watch(obj, callback=noop, deep=True, sync=True) + + obj["bar"] = "baz" + obj["quux"] = "quuz" + obj.update( + { + "bar": "foo", + "quazi": "var", + } + ) + del obj["bar"] + _ = obj["quux"] + obj.clear() + + +if __name__ == "__main__": + main() diff --git a/observ/dep.py b/observ/dep.py index f52e28f..de8d4ba 100644 --- a/observ/dep.py +++ b/observ/dep.py @@ -11,18 +11,27 @@ class Dep: stack: List["Watcher"] = [] # noqa: F821 def __init__(self) -> None: - self._subs: WeakSet["Watcher"] = WeakSet() # noqa: F821 + self._subs: WeakSet["Watcher"] = None # noqa: F821 def add_sub(self, sub: "Watcher") -> None: # noqa: F821 + if self._subs is None: + self._subs = WeakSet() self._subs.add(sub) def remove_sub(self, sub: "Watcher") -> None: # noqa: F821 - self._subs.remove(sub) + if self._subs: + self._subs.remove(sub) def depend(self) -> None: if self.stack: self.stack[-1].add_dep(self) def notify(self) -> None: - for sub in sorted(self._subs, key=lambda s: s.id): - sub.update() + # just iterating over self._subs even if + # it is empty is 10x slower + # than putting this if-statement in front of it + # because a weakset must acquire a lock on its + # weak references before iterating + if self._subs: + for sub in sorted(self._subs, key=lambda s: s.id): + sub.update() diff --git a/observ/proxy_db.py b/observ/proxy_db.py index 17bf41c..75caa8a 100644 --- a/observ/proxy_db.py +++ b/observ/proxy_db.py @@ -104,9 +104,10 @@ def get_proxy(self, target, readonly=False, shallow=False): Returns a proxy from the collection for the given object and configuration. Will return None if there is no proxy for the object's id. """ - if id(target) not in self.db: + try: + return self.db[id(target)]["proxies"].get((readonly, shallow)) + except KeyError: return None - return self.db[id(target)]["proxies"].get((readonly, shallow)) # Create a global proxy collection diff --git a/poetry.lock b/poetry.lock index f5df503..410a4eb 100644 --- a/poetry.lock +++ b/poetry.lock @@ -251,7 +251,7 @@ colorama = {version = "*", markers = "platform_system == \"Windows\""} name = "colorama" version = "0.4.6" description = "Cross-platform colored terminal text." 
-category = "dev" +category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" files = [ @@ -401,7 +401,7 @@ files = [ name = "exceptiongroup" version = "1.1.3" description = "Backport of PEP 654 (exception groups)" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" files = [ @@ -549,7 +549,7 @@ testing = ["flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs name = "iniconfig" version = "2.0.0" description = "brain-dead simple config-ini parsing" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" files = [ @@ -734,7 +734,7 @@ setuptools = "*" name = "packaging" version = "23.2" description = "Core utilities for Python packages" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" files = [ @@ -801,7 +801,7 @@ test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.4)", "pytest-co name = "pluggy" version = "1.3.0" description = "plugin and hook calling mechanisms for python" -category = "dev" +category = "main" optional = false python-versions = ">=3.8" files = [ @@ -832,6 +832,18 @@ nodeenv = ">=0.11.1" pyyaml = ">=5.1" virtualenv = ">=20.10.0" +[[package]] +name = "py-cpuinfo" +version = "9.0.0" +description = "Get CPU info with pure Python" +category = "main" +optional = false +python-versions = "*" +files = [ + {file = "py-cpuinfo-9.0.0.tar.gz", hash = "sha256:3cdbbf3fac90dc6f118bfd64384f309edeadd902d7c8fb17f02ffa1fc3f49690"}, + {file = "py_cpuinfo-9.0.0-py3-none-any.whl", hash = "sha256:859625bc251f64e21f077d099d4162689c762b5d6a4c3c97553d56241c9674d5"}, +] + [[package]] name = "pycodestyle" version = "2.9.1" @@ -941,7 +953,7 @@ shiboken6 = "6.6.0" name = "pytest" version = "7.4.2" description = "pytest: simple powerful testing with Python" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" files = [ @@ -960,6 +972,27 @@ tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""} [package.extras] testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] +[[package]] +name = "pytest-benchmark" +version = "4.0.0" +description = "A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer." 
+category = "main" +optional = false +python-versions = ">=3.7" +files = [ + {file = "pytest-benchmark-4.0.0.tar.gz", hash = "sha256:fb0785b83efe599a6a956361c0691ae1dbb5318018561af10f3e915caa0048d1"}, + {file = "pytest_benchmark-4.0.0-py3-none-any.whl", hash = "sha256:fdb7db64e31c8b277dff9850d2a2556d8b60bcb0ea6524e36e28ffd7c87f71d6"}, +] + +[package.dependencies] +py-cpuinfo = "*" +pytest = ">=3.8" + +[package.extras] +aspect = ["aspectlib"] +elasticsearch = ["elasticsearch"] +histogram = ["pygal", "pygaljs"] + [[package]] name = "pytest-cov" version = "4.1.0" @@ -1255,7 +1288,7 @@ files = [ name = "tomli" version = "2.0.1" description = "A lil' TOML parser" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1356,4 +1389,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" python-versions = ">=3.9" -content-hash = "5a0c918a8393c9ade652418344675aa466ccbb560792764aab80564d58ba0274" +content-hash = "689d6ebe8ad20c4ceab43975f70363c2d581b296a84538b883739dd293813a71" diff --git a/pyproject.toml b/pyproject.toml index d39b1c0..d3c453e 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -10,6 +10,7 @@ readme = "README.md" [tool.poetry.dependencies] python = ">=3.9" patchdiff = "~0.3.4" +pytest-benchmark = "^4.0.0" [tool.poetry.group.dev.dependencies] black = "*" @@ -30,3 +31,7 @@ urllib3 = { version = "*", python = "<4"} [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" + +[tool.pytest.ini_options] +addopts = "--benchmark-columns='mean, stddev, rounds'" +timeout = 3 diff --git a/setup.cfg b/setup.cfg index 15656fd..86b67f6 100644 --- a/setup.cfg +++ b/setup.cfg @@ -9,7 +9,5 @@ application-import-names = observ import-order-style = google per-file-ignores = observ/__init__.py:F401,F403 + bench/*:F821 exclude = .venv - -[tool:pytest] -timeout = 3
Benchmarks

Would be nice to have a benchmark setup so that we can make informed decisions on performance. We could leverage pytest-benchmark together with [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark). And naturally we would need to have some proper functions to run as benchmarks.
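A minimal sketch of what one such benchmark could look like with pytest-benchmark, modeled on the `bench/profiling.py` scenario in the patch above; the test name and measured scenario are placeholders, not a settled benchmark suite:

```python
from observ import reactive, watch


def noop():
    pass


def test_bench_dict_writes(benchmark):
    state = reactive({"count": 0})
    watch(lambda: state["count"], callback=noop, sync=True)

    def bump():
        # Each write notifies the sync watcher, so both the write
        # path and the notification path end up being measured.
        state["count"] += 1

    # The pytest-benchmark fixture calibrates rounds and iterations.
    benchmark(bump)
```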
2022-05-18T14:39:28
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-52
8c92ac018918736b1066bbd83e20c7a72f8a40d0
diff --git a/observ/store.py b/observ/store.py index ee5df02..3fd9c92 100644 --- a/observ/store.py +++ b/observ/store.py @@ -32,7 +32,7 @@ def inner(self, *args, **kwargs): return inner -def computed(_fn=None, *, deep=False): +def computed(_fn=None, *, deep=True): def decorator_computed(fn: T) -> T: fn.deep = deep fn.decorator = "computed" diff --git a/observ/watcher.py b/observ/watcher.py index 62c1c26..901c318 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -30,7 +30,7 @@ def watch( return watcher -def computed(_fn=None, *, deep=False): +def computed(_fn=None, *, deep=True): def decorator_computed(fn: T) -> T: """ Create a watcher for an expression. @@ -188,9 +188,9 @@ def get(self) -> Any: Dep.stack.append(self) try: value = self.fn() - finally: if self.deep: traverse(value) + finally: Dep.stack.pop() self.cleanup_deps() return value
Computed deep=True default

I've been thinking, and it just makes more sense to have `deep=True` as the default on `computed`.
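A small sketch of how this reads after the change, assuming observ's public API (the `@computed(deep=...)` form follows the `_fn=None` decorator signature in the patch; the state and getters are illustrative):

```python
from observ import computed, reactive

state = reactive({"items": [{"done": False}]})


@computed  # now equivalent to @computed(deep=True)
def pending():
    return [item for item in state["items"] if not item["done"]]


@computed(deep=False)  # explicitly opt out of deep traversal
def shallow_items():
    return state["items"]
```

Per the patch, `deep` controls whether the watcher runs `traverse(value)` on the computed's return value, so deep computeds also pick up dependencies nested inside the structures they return.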
2021-11-09T07:36:16
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-32
7cb2d56530e3c7a5513dcc228b1feb63e2ccdbb0
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 0000000..6a9eeea --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,17 @@ +repos: +- repo: local + hooks: + - id: flake8 + name: Linting + entry: poetry run flake8 + language: system + types: [python] + require_serial: true +- repo: local + hooks: + - id: pytest + name: Tests + entry: poetry run pytest + language: system + types: [python] + pass_filenames: false diff --git a/README.md b/README.md index 08aa64e..12ae9c9 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # Observ 👁 -Observ is a Python port of [Vue.js](https://vuejs.org/)' [computed properties and watchers](https://vuejs.org/v2/guide/computed.html). It is completely event loop/framework agnostic and has no dependencies so it can be used in any project targeting Python >= 3.6. +Observ is a Python port of [Vue.js](https://vuejs.org/)' [computed properties and watchers](https://v3.vuejs.org/api/basic-reactivity.html). It is completely event loop/framework agnostic and has no dependencies so it can be used in any project targeting Python >= 3.6. Observ provides the following two benefits for stateful applications: @@ -17,11 +17,11 @@ Observ provides the following two benefits for stateful applications: ## API -`from observ import observe, computed, watch` +`from observ import reactive, computed, watch` -* `state = observe(state)` +* `state = reactive(state)` -Observe nested structures of dicts, lists, tuples and sets. Returns an observable clone of the state input object. +Observe nested structures of dicts, lists, tuples and sets. Returns an observable proxy that wraps the state input object. * `watcher = watch(func, callback, deep=False, immediate=False)` @@ -38,3 +38,7 @@ Install observ with pip/pipenv/poetry: `pip install observ` Check out [`examples/observe_qt.py`](https://github.com/Korijn/observ/blob/master/examples/observe_qt.py) for a simple example using observ. + +## Caveats + +Observ keeps references to the object passed to the `reactive` in order to keep track of dependencies and proxies for that object. When the object that is passed into `reactive` is not managed by other code, then observ should cleanup its references automatically when the proxy is destroyed. However, if there is another reference to the original object, then observ will only release its own reference when the garbage collector is run and all other references to the object are gone. 
diff --git a/examples/observe_qt.py b/examples/observe_qt.py index 2258ade..6442f74 100644 --- a/examples/observe_qt.py +++ b/examples/observe_qt.py @@ -19,7 +19,7 @@ QWidget, ) -from observ import observe, scheduler, watch +from observ import reactive, scheduler, watch class Display(QWidget): @@ -122,7 +122,7 @@ def on_reset_clicked(self): if __name__ == "__main__": # Define some state - state = observe({"clicked": 0, "progress": 0}) + state = reactive({"clicked": 0, "progress": 0}) app = QApplication([]) diff --git a/observ/api.py b/observ/api.py index fbe3a17..0148b40 100644 --- a/observ/api.py +++ b/observ/api.py @@ -1,15 +1,24 @@ """ Defines the public API for observ users """ -from functools import wraps +from functools import partial, wraps from .dep import Dep -from .observables import observe +from .observables import proxy, to_raw from .scheduler import scheduler from .watcher import Watcher -__all__ = ("observe", "computed", "watch", "scheduler") +__all__ = ( + "reactive", + "readonly", + "shallow_reactive", + "shallow_readonly", + "computed", + "watch", + "scheduler", + "to_raw", +) def computed(fn): @@ -35,3 +44,9 @@ def watch(fn, callback, sync=False, deep=False, immediate=False): if watcher.callback: watcher.run_callback(watcher.value, None) return watcher + + +reactive = proxy +readonly = partial(proxy, readonly=True) +shallow_reactive = partial(proxy, shallow=True) +shallow_readonly = partial(proxy, shallow=True, readonly=True) diff --git a/observ/dep.py b/observ/dep.py index 972548a..e024b80 100644 --- a/observ/dep.py +++ b/observ/dep.py @@ -10,7 +10,7 @@ class Dep: stack: List["Watcher"] = [] # noqa: F821 def __init__(self) -> None: - self._subs = WeakSet() + self._subs: WeakSet["Watcher"] = WeakSet() # noqa: F821 def add_sub(self, sub: "Watcher") -> None: # noqa: F821 self._subs.add(sub) diff --git a/observ/observables.py b/observ/observables.py index c47ffed..3b9329e 100644 --- a/observ/observables.py +++ b/observ/observables.py @@ -2,153 +2,378 @@ observe converts plain datastructures (dict, list, set) to proxied versions of those datastructures to make them reactive. """ -from functools import wraps +from copy import copy +from functools import partial, wraps +import gc import sys +from weakref import WeakValueDictionary from .dep import Dep -def observe(obj, deep=True): - """Please be aware: this only works on plain data types!""" - if not isinstance(obj, (dict, list, tuple, set)): - return obj # common case first - elif isinstance(obj, dict): - if not isinstance(obj, ObservableDict): - reactive = ObservableDict(obj) - else: - reactive = obj - if deep: - for k, v in reactive.items(): - reactive[k] = observe(v) - return reactive - elif isinstance(obj, list): - if not isinstance(obj, ObservableList): - reactive = ObservableList(obj) +class ProxyDb: + """ + Collection of proxies, tracked by the id of the object that they wrap. + Each time a Proxy is instantiated, it will register itself for the + wrapped object. And when a Proxy is deleted, then it will unregister. + When the last proxy that wraps an object is removed, it is uncertain + what happens to the wrapped object, so in that case the object id is + removed from the collection. + """ + + def __init__(self): + self.db = {} + gc.callbacks.append(self.cleanup) + + def cleanup(self, phase, info): + """ + Callback for garbage collector to cleanup the db for targets + that have no other references outside of the db + """ + # TODO: maybe also run on start? 
Check performance + if phase != "stop": + return + + keys_to_delete = [] + for key, value in self.db.items(): + # Refs: + # - sys.getrefcount + # - ref in db item + if sys.getrefcount(value["target"]) <= 2: + # We are the last to hold a reference! + keys_to_delete.append(key) + + for keys in keys_to_delete: + del self.db[keys] + + def reference(self, proxy): + """ + Adds a reference to the collection for the wrapped object's id + """ + obj_id = id(proxy.target) + + if obj_id not in self.db: + attrs = { + "dep": Dep(), + } + if isinstance(proxy.target, dict): + attrs["keydep"] = {key: Dep() for key in proxy.target.keys()} + self.db[obj_id] = { + "target": proxy.target, + "attrs": attrs, # dep, keydep + # keyed on tuple(readonly, shallow) + "proxies": WeakValueDictionary(), + } + + # Use setdefault to put the proxy in the proxies dict. If there + # was an existing value, it will return that instead. There shouldn't + # be an existing value, so we can compare the objects to see if we + # should raise an exception. + # Seems to be a tiny bit faster than checking beforehand if + # there is already an existing value in the proxies dict + result = self.db[obj_id]["proxies"].setdefault( + (proxy.readonly, proxy.shallow), proxy + ) + if result is not proxy: + raise RuntimeError("Proxy with existing configuration already in db") + + def dereference(self, proxy): + """ + Removes a reference from the database for the given proxy + """ + obj_id = id(proxy.target) + if obj_id not in self.db: + # When there are failing tests, it might happen that proxies + # are garbage collected at a point where the proxy_db is already + # cleared. That's why we need this check here. + # See fixture [clear_proxy_db](/tests/conftest.py:clear_proxy_db) + # for more info. + return + + # The given proxy is the last proxy in the WeakValueDictionary, + # so now is a good moment to see if we can clean up the deps + # for the target object + if len(self.db[obj_id]["proxies"]) == 1: + ref_count = sys.getrefcount(self.db[obj_id]["target"]) + # Ref count is still 3 here because of the reference through proxy.target + if ref_count <= 3: + # We are the last to hold a reference! + del self.db[obj_id] + + def attrs(self, proxy): + return self.db[id(proxy.target)]["attrs"] + + def get_proxy(self, target, readonly=False, shallow=False): + """ + Returns a proxy from the collection for the given object and configuration. + Will return None if there is no proxy for the object's id. + """ + if id(target) not in self.db: + return None + return self.db[id(target)]["proxies"].get((readonly, shallow)) + + +# Create a global proxy collection +proxy_db = ProxyDb() + + +class Proxy: + """ + Proxy for an object/target. + + Instantiating a Proxy will add a reference to the global proxy_db and + destroying a Proxy will remove that reference. + + Please use the `proxy` method to get a proxy for a certain object instead + of directly creating one yourself. The `proxy` method will either create + or return an existing proxy and makes sure that the db stays consistent. + """ + + __hash__ = None + + def __init__(self, target, readonly=False, shallow=False): + self.target = target + self.readonly = readonly + self.shallow = shallow + proxy_db.reference(self) + + def __del__(self): + proxy_db.dereference(self) + + +def proxy(target, readonly=False, shallow=False): + """ + Returns a Proxy for the given object. If a proxy for the given + configuration already exists, it will return that instead of + creating a new one.
+ + Please be aware: this only works on plain data types! + """ + # The object may be a proxy already, so check if it matches the + # given configuration (readonly and shallow) + if isinstance(target, Proxy): + if readonly == target.readonly and shallow == target.shallow: + return target else: - reactive = obj - if deep: - for i, v in enumerate(reactive): - reactive[i] = observe(v) - return reactive - elif isinstance(obj, tuple): - reactive = obj # tuples are immutable - if deep: - reactive = tuple(observe(v) for v in reactive) - return reactive - elif isinstance(obj, set): - if deep: - return ObservableSet(observe(v) for v in obj) + # If the configuration does not match, + # unwrap the target from the proxy so that the right + # kind of proxy can be returned in the next part of + # this function + target = target.target + + # Note that at this point, target is always a non-proxy object + # Check the proxy_db to see if there's already a proxy for the target object + existing_proxy = proxy_db.get_proxy(target, readonly=readonly, shallow=shallow) + if existing_proxy is not None: + return existing_proxy + + # We can only wrap the following datatypes + if not isinstance(target, (dict, list, tuple, set)): + return target + + # Otherwise, create a new proxy + proxy_type = None + if isinstance(target, dict): + proxy_type = DictProxy if not readonly else ReadonlyDictProxy + elif isinstance(target, list): + proxy_type = ListProxy if not readonly else ReadonlyListProxy + elif isinstance(target, set): + proxy_type = SetProxy if not readonly else ReadonlySetProxy + elif isinstance(target, tuple): + return tuple(proxy(x, readonly=readonly, shallow=shallow) for x in target) + return proxy_type(target, readonly=readonly, shallow=shallow) + + +class StateModifiedError(Exception): + """ + Raised when a proxy is modified in a watched (or computed) expression. + """ + + pass + + +def read_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + proxy_db.attrs(self)["dep"].depend() + value = fn(self.target, *args, **kwargs) + if self.shallow: + return value + return proxy(value, readonly=self.readonly) + + return trap + + +def iterate_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def trap(self, *args, **kwargs): + if Dep.stack: + proxy_db.attrs(self)["dep"].depend() + iterator = fn(self.target, *args, **kwargs) + if self.shallow: + return iterator + if method == "items": + return ( + (key, proxy(value, readonly=self.readonly)) for key, value in iterator + ) else: - if not isinstance(obj, ObservableSet): - reactive = ObservableSet(obj) - else: - reactive = obj - return reactive - - -def make_observable(cls): - def read(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - self.__dep__.depend() - return fn(self, *args, **kwargs) - - return inner - - def read_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - key = args[0] - self.__keydeps__[key].depend() - return fn(self, *args, **kwargs) - - return inner - - def write(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - args = tuple(observe(a) for a in args) - kwargs = {k: observe(v) for k, v in kwargs.items()} - retval = fn(self, *args, **kwargs) - # TODO prevent firing if value hasn't actually changed? 
- self.__dep__.notify() - return retval - - return inner - - def write_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - key = args[0] - is_new = key not in self.__keydeps__ - old_value = cls.__getitem__(self, key) if not is_new else None - args = [key] + [observe(a) for a in args[1:]] - kwargs = {k: observe(v) for k, v in kwargs.items()} - retval = fn(self, *args, **kwargs) - new_value = cls.__getitem__(self, key) - if is_new: - self.__keydeps__[key] = Dep() - if old_value != new_value: - self.__keydeps__[key].notify() - self.__dep__.notify() - return retval - - return inner - - def delete(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - retval = fn(self, *args, **kwargs) - self.__dep__.notify() - for key in self._orphaned_keydeps(): - self.__keydeps__[key].notify() - del self.__keydeps__[key] - return retval - - return inner - - def delete_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - retval = fn(self, *args, **kwargs) + proxied = partial(proxy, readonly=self.readonly) + return map(proxied, iterator) + + return trap + + +def read_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: key = args[0] - self.__dep__.notify() - self.__keydeps__[key].notify() - del self.__keydeps__[key] - return retval + proxy_db.attrs(self)["keydep"][key].depend() + value = fn(self.target, *args, **kwargs) + if self.shallow: + return value + return proxy(value, readonly=self.readonly) - return inner + return inner - todo = [ - ("_READERS", read), - ("_KEYREADERS", read_key), - ("_WRITERS", write), - ("_KEYWRITERS", write_key), - ("_DELETERS", delete), - ("_KEYDELETERS", delete_key), - ] - for category, decorate in todo: - for name in getattr(cls, category, set()): - fn = getattr(cls, name) - setattr(cls, name, decorate(fn)) +def write_trap(method, obj_cls): + fn = getattr(obj_cls, method) - return cls + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + args = tuple(proxy(a) for a in args) + kwargs = {k: proxy(v) for k, v in kwargs.items()} + retval = fn(self.target, *args, **kwargs) + # TODO: prevent firing if value hasn't actually changed? 
+ proxy_db.attrs(self)["dep"].notify() + return retval + return inner -class ObservableDict(dict): - _READERS = { - "values", + +def write_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + getitem_fn = getattr(obj_cls, "__getitem__") + + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + key = args[0] + is_new = key not in proxy_db.attrs(self)["keydep"] + old_value = getitem_fn(self.target, key) if not is_new else None + args = [key] + [proxy(a) for a in args[1:]] + kwargs = {k: proxy(v) for k, v in kwargs.items()} + retval = fn(self.target, *args, **kwargs) + new_value = getitem_fn(self.target, key) + if is_new: + proxy_db.attrs(self)["keydep"][key] = Dep() + if old_value != new_value: + proxy_db.attrs(self)["keydep"][key].notify() + proxy_db.attrs(self)["dep"].notify() + return retval + + return inner + + +def delete_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + retval = fn(self.target, *args, **kwargs) + proxy_db.attrs(self)["dep"].notify() + for key in self._orphaned_keydeps(): + proxy_db.attrs(self)["keydep"][key].notify() + del proxy_db.attrs(self)["keydep"][key] + return retval + + return inner + + +def delete_key_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + raise StateModifiedError() + retval = fn(self.target, *args, **kwargs) + key = args[0] + proxy_db.attrs(self)["dep"].notify() + proxy_db.attrs(self)["keydep"][key].notify() + del proxy_db.attrs(self)["keydep"][key] + return retval + + return inner + + +trap_map = { + "READERS": read_trap, + "KEYREADERS": read_key_trap, + "ITERATORS": iterate_trap, + "WRITERS": write_trap, + "KEYWRITERS": write_key_trap, + "DELETERS": delete_trap, + "KEYDELETERS": delete_key_trap, +} + + +class ReadonlyError(Exception): + """ + Raised when a readonly proxy is modified. 
+ """ + + pass + + +def readonly_trap(method, obj_cls): + fn = getattr(obj_cls, method) + + @wraps(fn) + def inner(self, *args, **kwargs): + raise ReadonlyError() + + return inner + + +trap_map_readonly = { + "READERS": read_trap, + "KEYREADERS": read_key_trap, + "ITERATORS": iterate_trap, + "WRITERS": readonly_trap, + "KEYWRITERS": readonly_trap, + "DELETERS": readonly_trap, + "KEYDELETERS": readonly_trap, +} + + +def bind_traps(proxy_cls, obj_cls, traps, trap_map): + for trap_type, methods in traps.items(): + for method in methods: + trap = trap_map[trap_type](method, obj_cls) + setattr(proxy_cls, method, trap) + + +dict_traps = { + "READERS": { "copy", - "items", - "keys", "__eq__", "__format__", "__ge__", "__gt__", - "__iter__", "__le__", "__len__", "__lt__", @@ -156,50 +381,66 @@ class ObservableDict(dict): "__repr__", "__sizeof__", "__str__", - } - _KEYREADERS = { + "keys", + }, + "KEYREADERS": { "get", "__contains__", "__getitem__", - } - _WRITERS = { + }, + "ITERATORS": { + "items", + "values", + "__iter__", + }, + "WRITERS": { "update", - } - _KEYWRITERS = { + }, + "KEYWRITERS": { "setdefault", "__setitem__", - } - _DELETERS = { + }, + "DELETERS": { "clear", "popitem", - } - _KEYDELETERS = { + }, + "KEYDELETERS": { "pop", "__delitem__", - } + }, +} + +if sys.version_info >= (3, 8, 0): + dict_traps["ITERATORS"].add("__reversed__") +if sys.version_info >= (3, 9, 0): + dict_traps["READERS"].add("__or__") + dict_traps["READERS"].add("__ror__") + dict_traps["WRITERS"].add("__ior__") + - @wraps(dict.__init__) - def __init__(self, *args, **kwargs): - dict.__init__(self, *args, **kwargs) - self.__dep__ = Dep() - self.__keydeps__ = {key: Dep() for key in dict.keys(self)} +class DictProxyBase(Proxy): + def __init__(self, target, readonly=False, shallow=False): + super().__init__(target, readonly=readonly, shallow=shallow) def _orphaned_keydeps(self): - return set(self.__keydeps__.keys()) - set(dict.keys(self)) + return set(proxy_db.attrs(self)["keydep"].keys()) - set(self.target.keys()) -if sys.version_info >= (3, 8, 0): - ObservableDict._READERS.add("__reversed__") -if sys.version_info >= (3, 9, 0): - ObservableDict._READERS.add("__or__") - ObservableDict._READERS.add("__ror__") - ObservableDict._WRITERS.add("__ior__") -ObservableDict = make_observable(ObservableDict) +class DictProxy(DictProxyBase): + pass + + +class ReadonlyDictProxy(DictProxyBase): + def __init__(self, target, shallow=False, **kwargs): + super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) + +bind_traps(DictProxy, dict, dict_traps, trap_map) +bind_traps(ReadonlyDictProxy, dict, dict_traps, trap_map_readonly) -@make_observable -class ObservableList(list): - _READERS = { + +list_traps = { + "READERS": { "count", "index", "copy", @@ -214,15 +455,17 @@ class ObservableList(list): "__mul__", "__ne__", "__rmul__", - "__iter__", "__len__", "__repr__", "__str__", "__format__", - "__reversed__", "__sizeof__", - } - _WRITERS = { + }, + "ITERATORS": { + "__iter__", + "__reversed__", + }, + "WRITERS": { "append", "clear", "extend", @@ -235,17 +478,30 @@ class ObservableList(list): "__delitem__", "__iadd__", "__imul__", - } + }, +} + + +class ListProxyBase(Proxy): + def __init__(self, target, readonly=False, shallow=False): + super().__init__(target, readonly=readonly, shallow=shallow) - @wraps(list.__init__) - def __init__(self, *args, **kwargs): - list.__init__(self, *args, **kwargs) - self.__dep__ = Dep() +class ListProxy(ListProxyBase): + pass -@make_observable -class ObservableSet(set): - _READERS = { + 
+class ReadonlyListProxy(ListProxyBase): + def __init__(self, target, shallow=False, **kwargs): + super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) + + +bind_traps(ListProxy, list, list_traps, trap_map) +bind_traps(ReadonlyListProxy, list, list_traps, trap_map_readonly) + + +set_traps = { + "READERS": { "copy", "difference", "intersection", @@ -263,7 +519,6 @@ class ObservableSet(set): "__iand__", "__ior__", "__isub__", - "__iter__", "__ixor__", "__le__", "__len__", @@ -279,8 +534,11 @@ class ObservableSet(set): "__str__", "__sub__", "__xor__", - } - _WRITERS = { + }, + "ITERATORS": { + "__iter__", + }, + "WRITERS": { "add", "clear", "difference_update", @@ -290,9 +548,52 @@ class ObservableSet(set): "remove", "symmetric_difference_update", "update", - } + }, +} + + +class SetProxyBase(Proxy): + def __init__(self, target, readonly=False, shallow=False): + super().__init__(target, readonly=readonly, shallow=shallow) + + +class SetProxy(SetProxyBase): + pass + + +class ReadonlySetProxy(SetProxyBase): + def __init__(self, target, shallow=False, **kwargs): + super().__init__(target, shallow=shallow, **{**kwargs, "readonly": True}) + + +bind_traps(SetProxy, set, set_traps, trap_map) +bind_traps(ReadonlySetProxy, set, set_traps, trap_map_readonly) + + +def to_raw(target): + """ + Returns a raw object from which any trace of proxy has been replaced + with its wrapped target value. + """ + if isinstance(target, Proxy): + return to_raw(target.target) + + if isinstance(target, list): + tcopy = copy(target) + for idx, value in enumerate(tcopy): + tcopy[idx] = to_raw(value) + return tcopy + + if isinstance(target, dict): + target = copy(target) + for key, value in target.items(): + target[key] = to_raw(value) + return target + + if isinstance(target, tuple): + return tuple(map(to_raw, target)) + + if isinstance(target, set): + return target - @wraps(set.__init__) - def __init__(self, *args, **kwargs): - set.__init__(self, *args, **kwargs) - self.__dep__ = Dep() + return target diff --git a/observ/watcher.py b/observ/watcher.py index 81b4a05..4add93c 100644 --- a/observ/watcher.py +++ b/observ/watcher.py @@ -3,7 +3,7 @@ observable datastructures, and optionally trigger callback when a change is detected. """ -from collections.abc import Container +from collections.abc import Container, Mapping import inspect from itertools import count from typing import Any @@ -14,17 +14,23 @@ def traverse(obj): + """ + Recursively traverse the whole tree to make sure + that all values have been 'get' + """ _traverse(obj, set()) -def _traverse(obj, seen): +def _traverse(obj, seen: set): seen.add(id(obj)) - if isinstance(obj, dict): + if isinstance(obj, Mapping): + # dict and DictProxies val_iter = iter(obj.values()) - elif isinstance(obj, (list, tuple, set)): + elif isinstance(obj, Container): + # list, set (and their proxies) and tuple val_iter = iter(obj) else: - val_iter = iter(()) + return for v in val_iter: if isinstance(v, Container) and id(v) not in seen: _traverse(v, seen) diff --git a/poetry.lock b/poetry.lock index 8f6fb2b..601ac71 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,1045 +1,1255 @@ -[[package]] -name = "appdirs" -version = "1.4.4" -description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "atomicwrites" -version = "1.4.0" -description = "Atomic file writes." 
-category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" - -[[package]] -name = "attrs" -version = "21.2.0" -description = "Classes Without Boilerplate" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" - -[package.extras] -dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit"] -docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"] -tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"] -tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"] - -[[package]] -name = "black" -version = "20.8b1" -description = "The uncompromising code formatter." -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -appdirs = "*" -click = ">=7.1.2" -dataclasses = {version = ">=0.6", markers = "python_version < \"3.7\""} -mypy-extensions = ">=0.4.3" -pathspec = ">=0.6,<1" -regex = ">=2020.1.8" -toml = ">=0.10.1" -typed-ast = ">=1.4.0" -typing-extensions = ">=3.7.4" - -[package.extras] -colorama = ["colorama (>=0.4.3)"] -d = ["aiohttp (>=3.3.2)", "aiohttp-cors"] - -[[package]] -name = "bleach" -version = "4.1.0" -description = "An easy safelist-based HTML-sanitizing tool." -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -packaging = "*" -six = ">=1.9.0" -webencodings = "*" - -[[package]] -name = "certifi" -version = "2021.5.30" -description = "Python package for providing Mozilla's CA Bundle." -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "cffi" -version = "1.14.6" -description = "Foreign Function Interface for Python calling C code." -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -pycparser = "*" - -[[package]] -name = "charset-normalizer" -version = "2.0.6" -description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." -category = "dev" -optional = false -python-versions = ">=3.5.0" - -[package.extras] -unicode_backport = ["unicodedata2"] - -[[package]] -name = "click" -version = "8.0.1" -description = "Composable command line interface toolkit" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} -importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} - -[[package]] -name = "colorama" -version = "0.4.4" -description = "Cross-platform colored terminal text." -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" - -[[package]] -name = "coverage" -version = "5.5" -description = "Code coverage measurement for Python" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" - -[package.extras] -toml = ["toml"] - -[[package]] -name = "cryptography" -version = "3.4.8" -description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." 
-category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -cffi = ">=1.12" - -[package.extras] -docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"] -docstest = ["doc8", "pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"] -pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"] -sdist = ["setuptools-rust (>=0.11.4)"] -ssh = ["bcrypt (>=3.1.5)"] -test = ["pytest (>=6.0)", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"] - -[[package]] -name = "dataclasses" -version = "0.8" -description = "A backport of the dataclasses module for Python 3.6" -category = "dev" -optional = false -python-versions = ">=3.6, <3.7" - -[[package]] -name = "docutils" -version = "0.17.1" -description = "Docutils -- Python Documentation Utilities" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" - -[[package]] -name = "flake8" -version = "3.9.2" -description = "the modular source code checker: pep8 pyflakes and co" -category = "dev" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" - -[package.dependencies] -importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} -mccabe = ">=0.6.0,<0.7.0" -pycodestyle = ">=2.7.0,<2.8.0" -pyflakes = ">=2.3.0,<2.4.0" - -[[package]] -name = "flake8-black" -version = "0.2.3" -description = "flake8 plugin to call black as a code style validator" -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -black = "*" -flake8 = ">=3.0.0" -toml = "*" - -[[package]] -name = "flake8-import-order" -version = "0.18.1" -description = "Flake8 and pylama plugin that checks the ordering of import statements." -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -pycodestyle = "*" - -[[package]] -name = "flake8-print" -version = "4.0.0" -description = "print statement checker plugin for flake8" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -flake8 = ">=3.0" -pycodestyle = "*" -six = "*" - -[[package]] -name = "idna" -version = "3.2" -description = "Internationalized Domain Names in Applications (IDNA)" -category = "dev" -optional = false -python-versions = ">=3.5" - -[[package]] -name = "importlib-metadata" -version = "4.8.1" -description = "Read metadata from Python packages" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""} -zipp = ">=0.5" - -[package.extras] -docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] -perf = ["ipython"] -testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"] - -[[package]] -name = "iniconfig" -version = "1.1.1" -description = "iniconfig: brain-dead simple config-ini parsing" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "jeepney" -version = "0.7.1" -description = "Low-level, pure Python DBus protocol wrapper." 
-category = "dev" -optional = false -python-versions = ">=3.6" - -[package.extras] -test = ["pytest", "pytest-trio", "pytest-asyncio", "testpath", "trio", "async-timeout"] -trio = ["trio", "async-generator"] - -[[package]] -name = "keyring" -version = "23.2.1" -description = "Store and access your passwords safely." -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -importlib-metadata = ">=3.6" -jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""} -pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""} -SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} - -[package.extras] -docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] -testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-black (>=0.3.7)", "pytest-mypy"] - -[[package]] -name = "mccabe" -version = "0.6.1" -description = "McCabe checker, plugin for flake8" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "mypy-extensions" -version = "0.4.3" -description = "Experimental type system extensions for programs checked with the mypy typechecker." -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "packaging" -version = "21.0" -description = "Core utilities for Python packages" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -pyparsing = ">=2.0.2" - -[[package]] -name = "pathspec" -version = "0.9.0" -description = "Utility library for gitignore style pattern matching of file paths." -category = "dev" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" - -[[package]] -name = "pkginfo" -version = "1.7.1" -description = "Query metadatdata from sdists / bdists / installed packages." -category = "dev" -optional = false -python-versions = "*" - -[package.extras] -testing = ["nose", "coverage"] - -[[package]] -name = "pluggy" -version = "1.0.0" -description = "plugin and hook calling mechanisms for python" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} - -[package.extras] -dev = ["pre-commit", "tox"] -testing = ["pytest", "pytest-benchmark"] - -[[package]] -name = "py" -version = "1.10.0" -description = "library with cross-python path, ini-parsing, io, code, log facilities" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" - -[[package]] -name = "pycodestyle" -version = "2.7.0" -description = "Python style guide checker" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" - -[[package]] -name = "pycparser" -version = "2.20" -description = "C parser in Python" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" - -[[package]] -name = "pyflakes" -version = "2.3.1" -description = "passive checker of Python programs" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" - -[[package]] -name = "pygments" -version = "2.10.0" -description = "Pygments is a syntax highlighting package written in Python." 
-category = "dev" -optional = false -python-versions = ">=3.5" - -[[package]] -name = "pyparsing" -version = "2.4.7" -description = "Python parsing module" -category = "dev" -optional = false -python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" - -[[package]] -name = "pyside6" -version = "6.1.3" -description = "Python bindings for the Qt cross-platform application and UI framework" -category = "dev" -optional = false -python-versions = ">=3.6, <3.10" - -[package.dependencies] -shiboken6 = "6.1.3" - -[[package]] -name = "pytest" -version = "6.2.5" -description = "pytest: simple powerful testing with Python" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""} -attrs = ">=19.2.0" -colorama = {version = "*", markers = "sys_platform == \"win32\""} -importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} -iniconfig = "*" -packaging = "*" -pluggy = ">=0.12,<2.0" -py = ">=1.8.2" -toml = "*" - -[package.extras] -testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"] - -[[package]] -name = "pytest-cov" -version = "2.12.1" -description = "Pytest plugin for measuring coverage." -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" - -[package.dependencies] -coverage = ">=5.2.1" -pytest = ">=4.6" -toml = "*" - -[package.extras] -testing = ["fields", "hunter", "process-tests", "six", "pytest-xdist", "virtualenv"] - -[[package]] -name = "pytest-timeout" -version = "1.4.2" -description = "py.test plugin to abort hanging tests" -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -pytest = ">=3.6.0" - -[[package]] -name = "pywin32-ctypes" -version = "0.2.0" -description = "" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "readme-renderer" -version = "29.0" -description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse" -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -bleach = ">=2.1.0" -docutils = ">=0.13.1" -Pygments = ">=2.5.1" -six = "*" - -[package.extras] -md = ["cmarkgfm (>=0.5.0,<0.6.0)"] - -[[package]] -name = "regex" -version = "2021.9.24" -description = "Alternative regular expression module, to replace re." -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "requests" -version = "2.26.0" -description = "Python HTTP for Humans." 
-category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" - -[package.dependencies] -certifi = ">=2017.4.17" -charset-normalizer = {version = ">=2.0.0,<2.1.0", markers = "python_version >= \"3\""} -idna = {version = ">=2.5,<4", markers = "python_version >= \"3\""} -urllib3 = ">=1.21.1,<1.27" - -[package.extras] -socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"] -use_chardet_on_py3 = ["chardet (>=3.0.2,<5)"] - -[[package]] -name = "requests-toolbelt" -version = "0.9.1" -description = "A utility belt for advanced users of python-requests" -category = "dev" -optional = false -python-versions = "*" - -[package.dependencies] -requests = ">=2.0.1,<3.0.0" - -[[package]] -name = "rfc3986" -version = "1.5.0" -description = "Validating URI References per RFC 3986" -category = "dev" -optional = false -python-versions = "*" - -[package.extras] -idna2008 = ["idna"] - -[[package]] -name = "secretstorage" -version = "3.3.1" -description = "Python bindings to FreeDesktop.org Secret Service API" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -cryptography = ">=2.0" -jeepney = ">=0.6" - -[[package]] -name = "shiboken6" -version = "6.1.3" -description = "Python / C++ bindings helper module" -category = "dev" -optional = false -python-versions = ">=3.6, <3.10" - -[[package]] -name = "six" -version = "1.16.0" -description = "Python 2 and 3 compatibility utilities" -category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" - -[[package]] -name = "toml" -version = "0.10.2" -description = "Python Library for Tom's Obvious, Minimal Language" -category = "dev" -optional = false -python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" - -[[package]] -name = "tqdm" -version = "4.62.3" -description = "Fast, Extensible Progress Meter" -category = "dev" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} - -[package.extras] -dev = ["py-make (>=0.1.0)", "twine", "wheel"] -notebook = ["ipywidgets (>=6)"] -telegram = ["requests"] - -[[package]] -name = "twine" -version = "3.4.2" -description = "Collection of utilities for publishing packages on PyPI" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -colorama = ">=0.4.3" -importlib-metadata = ">=3.6" -keyring = ">=15.1" -pkginfo = ">=1.4.2" -readme-renderer = ">=21.0" -requests = ">=2.20" -requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0" -rfc3986 = ">=1.4.0" -tqdm = ">=4.14" - -[[package]] -name = "typed-ast" -version = "1.4.3" -description = "a fork of Python 2 and 3 ast modules with type comment support" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "typing-extensions" -version = "3.10.0.2" -description = "Backported and Experimental Type Hints for Python 3.5+" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "urllib3" -version = "1.26.7" -description = "HTTP library with thread-safe connection pooling, file post, and more." 
-category = "dev" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" - -[package.extras] -brotli = ["brotlipy (>=0.6.0)"] -secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"] -socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] - -[[package]] -name = "webencodings" -version = "0.5.1" -description = "Character encoding aliases for legacy web content" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "zipp" -version = "3.5.0" -description = "Backport of pathlib-compatible object wrapper for zip files" -category = "dev" -optional = false -python-versions = ">=3.6" - -[package.extras] -docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] -testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"] - -[metadata] -lock-version = "1.1" -python-versions = ">=3.6,<3.10" -content-hash = "a4d90f381f24d8ac8994ed67858fda8e546f9715bc3ff11c6b201a931d912c2b" - -[metadata.files] -appdirs = [ - {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, - {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, -] -atomicwrites = [ - {file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, - {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, -] -attrs = [ - {file = "attrs-21.2.0-py2.py3-none-any.whl", hash = "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1"}, - {file = "attrs-21.2.0.tar.gz", hash = "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"}, -] -black = [ - {file = "black-20.8b1.tar.gz", hash = "sha256:1c02557aa099101b9d21496f8a914e9ed2222ef70336404eeeac8edba836fbea"}, -] -bleach = [ - {file = "bleach-4.1.0-py2.py3-none-any.whl", hash = "sha256:4d2651ab93271d1129ac9cbc679f524565cc8a1b791909c4a51eac4446a15994"}, - {file = "bleach-4.1.0.tar.gz", hash = "sha256:0900d8b37eba61a802ee40ac0061f8c2b5dee29c1927dd1d233e075ebf5a71da"}, -] -certifi = [ - {file = "certifi-2021.5.30-py2.py3-none-any.whl", hash = "sha256:50b1e4f8446b06f41be7dd6338db18e0990601dce795c2b1686458aa7e8fa7d8"}, - {file = "certifi-2021.5.30.tar.gz", hash = "sha256:2bbf76fd432960138b3ef6dda3dde0544f27cbf8546c458e60baf371917ba9ee"}, -] -cffi = [ - {file = "cffi-1.14.6-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:22b9c3c320171c108e903d61a3723b51e37aaa8c81255b5e7ce102775bd01e2c"}, - {file = "cffi-1.14.6-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:f0c5d1acbfca6ebdd6b1e3eded8d261affb6ddcf2186205518f1428b8569bb99"}, - {file = "cffi-1.14.6-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99f27fefe34c37ba9875f224a8f36e31d744d8083e00f520f133cab79ad5e819"}, - {file = "cffi-1.14.6-cp27-cp27m-win32.whl", hash = "sha256:55af55e32ae468e9946f741a5d51f9896da6b9bf0bbdd326843fec05c730eb20"}, - {file = "cffi-1.14.6-cp27-cp27m-win_amd64.whl", hash = "sha256:7bcac9a2b4fdbed2c16fa5681356d7121ecabf041f18d97ed5b8e0dd38a80224"}, - {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:ed38b924ce794e505647f7c331b22a693bee1538fdf46b0222c4717b42f744e7"}, - {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_x86_64.whl", hash = 
"sha256:e22dcb48709fc51a7b58a927391b23ab37eb3737a98ac4338e2448bef8559b33"}, - {file = "cffi-1.14.6-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:aedb15f0a5a5949ecb129a82b72b19df97bbbca024081ed2ef88bd5c0a610534"}, - {file = "cffi-1.14.6-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:48916e459c54c4a70e52745639f1db524542140433599e13911b2f329834276a"}, - {file = "cffi-1.14.6-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f627688813d0a4140153ff532537fbe4afea5a3dffce1f9deb7f91f848a832b5"}, - {file = "cffi-1.14.6-cp35-cp35m-win32.whl", hash = "sha256:f0010c6f9d1a4011e429109fda55a225921e3206e7f62a0c22a35344bfd13cca"}, - {file = "cffi-1.14.6-cp35-cp35m-win_amd64.whl", hash = "sha256:57e555a9feb4a8460415f1aac331a2dc833b1115284f7ded7278b54afc5bd218"}, - {file = "cffi-1.14.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e8c6a99be100371dbb046880e7a282152aa5d6127ae01783e37662ef73850d8f"}, - {file = "cffi-1.14.6-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:19ca0dbdeda3b2615421d54bef8985f72af6e0c47082a8d26122adac81a95872"}, - {file = "cffi-1.14.6-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:d950695ae4381ecd856bcaf2b1e866720e4ab9a1498cba61c602e56630ca7195"}, - {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9dc245e3ac69c92ee4c167fbdd7428ec1956d4e754223124991ef29eb57a09d"}, - {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a8661b2ce9694ca01c529bfa204dbb144b275a31685a075ce123f12331be790b"}, - {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b315d709717a99f4b27b59b021e6207c64620790ca3e0bde636a6c7f14618abb"}, - {file = "cffi-1.14.6-cp36-cp36m-win32.whl", hash = "sha256:80b06212075346b5546b0417b9f2bf467fea3bfe7352f781ffc05a8ab24ba14a"}, - {file = "cffi-1.14.6-cp36-cp36m-win_amd64.whl", hash = "sha256:a9da7010cec5a12193d1af9872a00888f396aba3dc79186604a09ea3ee7c029e"}, - {file = "cffi-1.14.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4373612d59c404baeb7cbd788a18b2b2a8331abcc84c3ba40051fcd18b17a4d5"}, - {file = "cffi-1.14.6-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f10afb1004f102c7868ebfe91c28f4a712227fe4cb24974350ace1f90e1febbf"}, - {file = "cffi-1.14.6-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:fd4305f86f53dfd8cd3522269ed7fc34856a8ee3709a5e28b2836b2db9d4cd69"}, - {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d6169cb3c6c2ad50db5b868db6491a790300ade1ed5d1da29289d73bbe40b56"}, - {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5d4b68e216fc65e9fe4f524c177b54964af043dde734807586cf5435af84045c"}, - {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33791e8a2dc2953f28b8d8d300dde42dd929ac28f974c4b4c6272cb2955cb762"}, - {file = "cffi-1.14.6-cp37-cp37m-win32.whl", hash = "sha256:0c0591bee64e438883b0c92a7bed78f6290d40bf02e54c5bf0978eaf36061771"}, - {file = "cffi-1.14.6-cp37-cp37m-win_amd64.whl", hash = "sha256:8eb687582ed7cd8c4bdbff3df6c0da443eb89c3c72e6e5dcdd9c81729712791a"}, - {file = "cffi-1.14.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ba6f2b3f452e150945d58f4badd92310449876c4c954836cfb1803bdd7b422f0"}, - {file = "cffi-1.14.6-cp38-cp38-manylinux1_i686.whl", hash = "sha256:64fda793737bc4037521d4899be780534b9aea552eb673b9833b01f945904c2e"}, - {file = "cffi-1.14.6-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:9f3e33c28cd39d1b655ed1ba7247133b6f7fc16fa16887b120c0c670e35ce346"}, - {file 
= "cffi-1.14.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26bb2549b72708c833f5abe62b756176022a7b9a7f689b571e74c8478ead51dc"}, - {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eb687a11f0a7a1839719edd80f41e459cc5366857ecbed383ff376c4e3cc6afd"}, - {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d2ad4d668a5c0645d281dcd17aff2be3212bc109b33814bbb15c4939f44181cc"}, - {file = "cffi-1.14.6-cp38-cp38-win32.whl", hash = "sha256:487d63e1454627c8e47dd230025780e91869cfba4c753a74fda196a1f6ad6548"}, - {file = "cffi-1.14.6-cp38-cp38-win_amd64.whl", hash = "sha256:c33d18eb6e6bc36f09d793c0dc58b0211fccc6ae5149b808da4a62660678b156"}, - {file = "cffi-1.14.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:06c54a68935738d206570b20da5ef2b6b6d92b38ef3ec45c5422c0ebaf338d4d"}, - {file = "cffi-1.14.6-cp39-cp39-manylinux1_i686.whl", hash = "sha256:f174135f5609428cc6e1b9090f9268f5c8935fddb1b25ccb8255a2d50de6789e"}, - {file = "cffi-1.14.6-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f3ebe6e73c319340830a9b2825d32eb6d8475c1dac020b4f0aa774ee3b898d1c"}, - {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c8d896becff2fa653dc4438b54a5a25a971d1f4110b32bd3068db3722c80202"}, - {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4922cd707b25e623b902c86188aca466d3620892db76c0bdd7b99a3d5e61d35f"}, - {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9e005e9bd57bc987764c32a1bee4364c44fdc11a3cc20a40b93b444984f2b87"}, - {file = "cffi-1.14.6-cp39-cp39-win32.whl", hash = "sha256:eb9e2a346c5238a30a746893f23a9535e700f8192a68c07c0258e7ece6ff3728"}, - {file = "cffi-1.14.6-cp39-cp39-win_amd64.whl", hash = "sha256:818014c754cd3dba7229c0f5884396264d51ffb87ec86e927ef0be140bfdb0d2"}, - {file = "cffi-1.14.6.tar.gz", hash = "sha256:c9a875ce9d7fe32887784274dd533c57909b7b1dcadcc128a2ac21331a9765dd"}, -] -charset-normalizer = [ - {file = "charset-normalizer-2.0.6.tar.gz", hash = "sha256:5ec46d183433dcbd0ab716f2d7f29d8dee50505b3fdb40c6b985c7c4f5a3591f"}, - {file = "charset_normalizer-2.0.6-py3-none-any.whl", hash = "sha256:5d209c0a931f215cee683b6445e2d77677e7e75e159f78def0db09d68fafcaa6"}, -] -click = [ - {file = "click-8.0.1-py3-none-any.whl", hash = "sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"}, - {file = "click-8.0.1.tar.gz", hash = "sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a"}, -] -colorama = [ - {file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"}, - {file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"}, -] -coverage = [ - {file = "coverage-5.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:b6d534e4b2ab35c9f93f46229363e17f63c53ad01330df9f2d6bd1187e5eaacf"}, - {file = "coverage-5.5-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:b7895207b4c843c76a25ab8c1e866261bcfe27bfaa20c192de5190121770672b"}, - {file = "coverage-5.5-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:c2723d347ab06e7ddad1a58b2a821218239249a9e4365eaff6649d31180c1669"}, - {file = "coverage-5.5-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:900fbf7759501bc7807fd6638c947d7a831fc9fdf742dc10f02956ff7220fa90"}, - {file = "coverage-5.5-cp27-cp27m-manylinux2010_x86_64.whl", hash = 
"sha256:004d1880bed2d97151facef49f08e255a20ceb6f9432df75f4eef018fdd5a78c"}, - {file = "coverage-5.5-cp27-cp27m-win32.whl", hash = "sha256:06191eb60f8d8a5bc046f3799f8a07a2d7aefb9504b0209aff0b47298333302a"}, - {file = "coverage-5.5-cp27-cp27m-win_amd64.whl", hash = "sha256:7501140f755b725495941b43347ba8a2777407fc7f250d4f5a7d2a1050ba8e82"}, - {file = "coverage-5.5-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:372da284cfd642d8e08ef606917846fa2ee350f64994bebfbd3afb0040436905"}, - {file = "coverage-5.5-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:8963a499849a1fc54b35b1c9f162f4108017b2e6db2c46c1bed93a72262ed083"}, - {file = "coverage-5.5-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:869a64f53488f40fa5b5b9dcb9e9b2962a66a87dab37790f3fcfb5144b996ef5"}, - {file = "coverage-5.5-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:4a7697d8cb0f27399b0e393c0b90f0f1e40c82023ea4d45d22bce7032a5d7b81"}, - {file = "coverage-5.5-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:8d0a0725ad7c1a0bcd8d1b437e191107d457e2ec1084b9f190630a4fb1af78e6"}, - {file = "coverage-5.5-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:51cb9476a3987c8967ebab3f0fe144819781fca264f57f89760037a2ea191cb0"}, - {file = "coverage-5.5-cp310-cp310-win_amd64.whl", hash = "sha256:c0891a6a97b09c1f3e073a890514d5012eb256845c451bd48f7968ef939bf4ae"}, - {file = "coverage-5.5-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:3487286bc29a5aa4b93a072e9592f22254291ce96a9fbc5251f566b6b7343cdb"}, - {file = "coverage-5.5-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:deee1077aae10d8fa88cb02c845cfba9b62c55e1183f52f6ae6a2df6a2187160"}, - {file = "coverage-5.5-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f11642dddbb0253cc8853254301b51390ba0081750a8ac03f20ea8103f0c56b6"}, - {file = "coverage-5.5-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:6c90e11318f0d3c436a42409f2749ee1a115cd8b067d7f14c148f1ce5574d701"}, - {file = "coverage-5.5-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:30c77c1dc9f253283e34c27935fded5015f7d1abe83bc7821680ac444eaf7793"}, - {file = "coverage-5.5-cp35-cp35m-win32.whl", hash = "sha256:9a1ef3b66e38ef8618ce5fdc7bea3d9f45f3624e2a66295eea5e57966c85909e"}, - {file = "coverage-5.5-cp35-cp35m-win_amd64.whl", hash = "sha256:972c85d205b51e30e59525694670de6a8a89691186012535f9d7dbaa230e42c3"}, - {file = "coverage-5.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:af0e781009aaf59e25c5a678122391cb0f345ac0ec272c7961dc5455e1c40066"}, - {file = "coverage-5.5-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:74d881fc777ebb11c63736622b60cb9e4aee5cace591ce274fb69e582a12a61a"}, - {file = "coverage-5.5-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:92b017ce34b68a7d67bd6d117e6d443a9bf63a2ecf8567bb3d8c6c7bc5014465"}, - {file = "coverage-5.5-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d636598c8305e1f90b439dbf4f66437de4a5e3c31fdf47ad29542478c8508bbb"}, - {file = "coverage-5.5-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:41179b8a845742d1eb60449bdb2992196e211341818565abded11cfa90efb821"}, - {file = "coverage-5.5-cp36-cp36m-win32.whl", hash = "sha256:040af6c32813fa3eae5305d53f18875bedd079960822ef8ec067a66dd8afcd45"}, - {file = "coverage-5.5-cp36-cp36m-win_amd64.whl", hash = "sha256:5fec2d43a2cc6965edc0bb9e83e1e4b557f76f843a77a2496cbe719583ce8184"}, - {file = "coverage-5.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:18ba8bbede96a2c3dde7b868de9dcbd55670690af0988713f0603f037848418a"}, - {file = "coverage-5.5-cp37-cp37m-manylinux1_i686.whl", hash = 
"sha256:2910f4d36a6a9b4214bb7038d537f015346f413a975d57ca6b43bf23d6563b53"}, - {file = "coverage-5.5-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:f0b278ce10936db1a37e6954e15a3730bea96a0997c26d7fee88e6c396c2086d"}, - {file = "coverage-5.5-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:796c9c3c79747146ebd278dbe1e5c5c05dd6b10cc3bcb8389dfdf844f3ead638"}, - {file = "coverage-5.5-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:53194af30d5bad77fcba80e23a1441c71abfb3e01192034f8246e0d8f99528f3"}, - {file = "coverage-5.5-cp37-cp37m-win32.whl", hash = "sha256:184a47bbe0aa6400ed2d41d8e9ed868b8205046518c52464fde713ea06e3a74a"}, - {file = "coverage-5.5-cp37-cp37m-win_amd64.whl", hash = "sha256:2949cad1c5208b8298d5686d5a85b66aae46d73eec2c3e08c817dd3513e5848a"}, - {file = "coverage-5.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:217658ec7187497e3f3ebd901afdca1af062b42cfe3e0dafea4cced3983739f6"}, - {file = "coverage-5.5-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1aa846f56c3d49205c952d8318e76ccc2ae23303351d9270ab220004c580cfe2"}, - {file = "coverage-5.5-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:24d4a7de75446be83244eabbff746d66b9240ae020ced65d060815fac3423759"}, - {file = "coverage-5.5-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:d1f8bf7b90ba55699b3a5e44930e93ff0189aa27186e96071fac7dd0d06a1873"}, - {file = "coverage-5.5-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:970284a88b99673ccb2e4e334cfb38a10aab7cd44f7457564d11898a74b62d0a"}, - {file = "coverage-5.5-cp38-cp38-win32.whl", hash = "sha256:01d84219b5cdbfc8122223b39a954820929497a1cb1422824bb86b07b74594b6"}, - {file = "coverage-5.5-cp38-cp38-win_amd64.whl", hash = "sha256:2e0d881ad471768bf6e6c2bf905d183543f10098e3b3640fc029509530091502"}, - {file = "coverage-5.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d1f9ce122f83b2305592c11d64f181b87153fc2c2bbd3bb4a3dde8303cfb1a6b"}, - {file = "coverage-5.5-cp39-cp39-manylinux1_i686.whl", hash = "sha256:13c4ee887eca0f4c5a247b75398d4114c37882658300e153113dafb1d76de529"}, - {file = "coverage-5.5-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:52596d3d0e8bdf3af43db3e9ba8dcdaac724ba7b5ca3f6358529d56f7a166f8b"}, - {file = "coverage-5.5-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2cafbbb3af0733db200c9b5f798d18953b1a304d3f86a938367de1567f4b5bff"}, - {file = "coverage-5.5-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:44d654437b8ddd9eee7d1eaee28b7219bec228520ff809af170488fd2fed3e2b"}, - {file = "coverage-5.5-cp39-cp39-win32.whl", hash = "sha256:d314ed732c25d29775e84a960c3c60808b682c08d86602ec2c3008e1202e3bb6"}, - {file = "coverage-5.5-cp39-cp39-win_amd64.whl", hash = "sha256:13034c4409db851670bc9acd836243aeee299949bd5673e11844befcb0149f03"}, - {file = "coverage-5.5-pp36-none-any.whl", hash = "sha256:f030f8873312a16414c0d8e1a1ddff2d3235655a2174e3648b4fa66b3f2f1079"}, - {file = "coverage-5.5-pp37-none-any.whl", hash = "sha256:2a3859cb82dcbda1cfd3e6f71c27081d18aa251d20a17d87d26d4cd216fb0af4"}, - {file = "coverage-5.5.tar.gz", hash = "sha256:ebe78fe9a0e874362175b02371bdfbee64d8edc42a044253ddf4ee7d3c15212c"}, -] -cryptography = [ - {file = "cryptography-3.4.8-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a00cf305f07b26c351d8d4e1af84ad7501eca8a342dedf24a7acb0e7b7406e14"}, - {file = "cryptography-3.4.8-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:f44d141b8c4ea5eb4dbc9b3ad992d45580c1d22bf5e24363f2fbf50c2d7ae8a7"}, - {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = 
"sha256:0a7dcbcd3f1913f664aca35d47c1331fce738d44ec34b7be8b9d332151b0b01e"}, - {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34dae04a0dce5730d8eb7894eab617d8a70d0c97da76b905de9efb7128ad7085"}, - {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1eb7bb0df6f6f583dd8e054689def236255161ebbcf62b226454ab9ec663746b"}, - {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:9965c46c674ba8cc572bc09a03f4c649292ee73e1b683adb1ce81e82e9a6a0fb"}, - {file = "cryptography-3.4.8-cp36-abi3-win32.whl", hash = "sha256:21ca464b3a4b8d8e86ba0ee5045e103a1fcfac3b39319727bc0fc58c09c6aff7"}, - {file = "cryptography-3.4.8-cp36-abi3-win_amd64.whl", hash = "sha256:3520667fda779eb788ea00080124875be18f2d8f0848ec00733c0ec3bb8219fc"}, - {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d2a6e5ef66503da51d2110edf6c403dc6b494cc0082f85db12f54e9c5d4c3ec5"}, - {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a305600e7a6b7b855cd798e00278161b681ad6e9b7eca94c721d5f588ab212af"}, - {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:3fa3a7ccf96e826affdf1a0a9432be74dc73423125c8f96a909e3835a5ef194a"}, - {file = "cryptography-3.4.8-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:d9ec0e67a14f9d1d48dd87a2531009a9b251c02ea42851c060b25c782516ff06"}, - {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5b0fbfae7ff7febdb74b574055c7466da334a5371f253732d7e2e7525d570498"}, - {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94fff993ee9bc1b2440d3b7243d488c6a3d9724cc2b09cdb297f6a886d040ef7"}, - {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:8695456444f277af73a4877db9fc979849cd3ee74c198d04fc0776ebc3db52b9"}, - {file = "cryptography-3.4.8-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:cd65b60cfe004790c795cc35f272e41a3df4631e2fb6b35aa7ac6ef2859d554e"}, - {file = "cryptography-3.4.8.tar.gz", hash = "sha256:94cc5ed4ceaefcbe5bf38c8fba6a21fc1d365bb8fb826ea1688e3370b2e24a1c"}, -] -dataclasses = [ - {file = "dataclasses-0.8-py3-none-any.whl", hash = "sha256:0201d89fa866f68c8ebd9d08ee6ff50c0b255f8ec63a71c16fda7af82bb887bf"}, - {file = "dataclasses-0.8.tar.gz", hash = "sha256:8479067f342acf957dc82ec415d355ab5edb7e7646b90dc6e2fd1d96ad084c97"}, -] -docutils = [ - {file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"}, - {file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"}, -] -flake8 = [ - {file = "flake8-3.9.2-py2.py3-none-any.whl", hash = "sha256:bf8fd333346d844f616e8d47905ef3a3384edae6b4e9beb0c5101e25e3110907"}, - {file = "flake8-3.9.2.tar.gz", hash = "sha256:07528381786f2a6237b061f6e96610a4167b226cb926e2aa2b6b1d78057c576b"}, -] -flake8-black = [ - {file = "flake8-black-0.2.3.tar.gz", hash = "sha256:c199844bc1b559d91195ebe8620216f21ed67f2cc1ff6884294c91a0d2492684"}, - {file = "flake8_black-0.2.3-py3-none-any.whl", hash = "sha256:cc080ba5b3773b69ba102b6617a00cc4ecbad8914109690cfda4d565ea435d96"}, -] -flake8-import-order = [ - {file = "flake8-import-order-0.18.1.tar.gz", hash = "sha256:a28dc39545ea4606c1ac3c24e9d05c849c6e5444a50fb7e9cdd430fc94de6e92"}, - {file = 
"flake8_import_order-0.18.1-py2.py3-none-any.whl", hash = "sha256:90a80e46886259b9c396b578d75c749801a41ee969a235e163cfe1be7afd2543"}, -] -flake8-print = [ - {file = "flake8-print-4.0.0.tar.gz", hash = "sha256:5afac374b7dc49aac2c36d04b5eb1d746d72e6f5df75a6ecaecd99e9f79c6516"}, - {file = "flake8_print-4.0.0-py3-none-any.whl", hash = "sha256:6c0efce658513169f96d7a24cf136c434dc711eb00ebd0a985eb1120103fe584"}, -] -idna = [ - {file = "idna-3.2-py3-none-any.whl", hash = "sha256:14475042e284991034cb48e06f6851428fb14c4dc953acd9be9a5e95c7b6dd7a"}, - {file = "idna-3.2.tar.gz", hash = "sha256:467fbad99067910785144ce333826c71fb0e63a425657295239737f7ecd125f3"}, -] -importlib-metadata = [ - {file = "importlib_metadata-4.8.1-py3-none-any.whl", hash = "sha256:b618b6d2d5ffa2f16add5697cf57a46c76a56229b0ed1c438322e4e95645bd15"}, - {file = "importlib_metadata-4.8.1.tar.gz", hash = "sha256:f284b3e11256ad1e5d03ab86bb2ccd6f5339688ff17a4d797a0fe7df326f23b1"}, -] -iniconfig = [ - {file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"}, - {file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"}, -] -jeepney = [ - {file = "jeepney-0.7.1-py3-none-any.whl", hash = "sha256:1b5a0ea5c0e7b166b2f5895b91a08c14de8915afda4407fb5022a195224958ac"}, - {file = "jeepney-0.7.1.tar.gz", hash = "sha256:fa9e232dfa0c498bd0b8a3a73b8d8a31978304dcef0515adc859d4e096f96f4f"}, -] -keyring = [ - {file = "keyring-23.2.1-py3-none-any.whl", hash = "sha256:bd2145a237ed70c8ce72978b497619ddfcae640b6dcf494402d5143e37755c6e"}, - {file = "keyring-23.2.1.tar.gz", hash = "sha256:6334aee6073db2fb1f30892697b1730105b5e9a77ce7e61fca6b435225493efe"}, -] -mccabe = [ - {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"}, - {file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"}, -] -mypy-extensions = [ - {file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"}, - {file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"}, -] -packaging = [ - {file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"}, - {file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"}, -] -pathspec = [ - {file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"}, - {file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"}, -] -pkginfo = [ - {file = "pkginfo-1.7.1-py2.py3-none-any.whl", hash = "sha256:37ecd857b47e5f55949c41ed061eb51a0bee97a87c969219d144c0e023982779"}, - {file = "pkginfo-1.7.1.tar.gz", hash = "sha256:e7432f81d08adec7297633191bbf0bd47faf13cd8724c3a13250e51d542635bd"}, -] -pluggy = [ - {file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"}, - {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"}, -] -py = [ - {file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"}, - {file = "py-1.10.0.tar.gz", hash = 
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"}, -] -pycodestyle = [ - {file = "pycodestyle-2.7.0-py2.py3-none-any.whl", hash = "sha256:514f76d918fcc0b55c6680472f0a37970994e07bbb80725808c17089be302068"}, - {file = "pycodestyle-2.7.0.tar.gz", hash = "sha256:c389c1d06bf7904078ca03399a4816f974a1d590090fecea0c63ec26ebaf1cef"}, -] -pycparser = [ - {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, - {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, -] -pyflakes = [ - {file = "pyflakes-2.3.1-py2.py3-none-any.whl", hash = "sha256:7893783d01b8a89811dd72d7dfd4d84ff098e5eed95cfa8905b22bbffe52efc3"}, - {file = "pyflakes-2.3.1.tar.gz", hash = "sha256:f5bc8ecabc05bb9d291eb5203d6810b49040f6ff446a756326104746cc00c1db"}, -] -pygments = [ - {file = "Pygments-2.10.0-py3-none-any.whl", hash = "sha256:b8e67fe6af78f492b3c4b3e2970c0624cbf08beb1e493b2c99b9fa1b67a20380"}, - {file = "Pygments-2.10.0.tar.gz", hash = "sha256:f398865f7eb6874156579fdf36bc840a03cab64d1cde9e93d68f46a425ec52c6"}, -] -pyparsing = [ - {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, - {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, -] -pyside6 = [ - {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-macosx_10_14_x86_64.whl", hash = "sha256:619b6bb4a3e5451fe939983cb9f5155f2cded44c8449f60a1d250747a48f00f3"}, - {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-manylinux1_x86_64.whl", hash = "sha256:64d69fd2d38b47982c06619f9969b6ae5c632f21f8ff2432e4953b0179a21411"}, - {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-none-win_amd64.whl", hash = "sha256:cbbe1a2bacca737b3000a8a18b0ba389ca93106800b10acb93660c1a5b6bfdec"}, -] -pytest = [ - {file = "pytest-6.2.5-py3-none-any.whl", hash = "sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"}, - {file = "pytest-6.2.5.tar.gz", hash = "sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89"}, -] -pytest-cov = [ - {file = "pytest-cov-2.12.1.tar.gz", hash = "sha256:261ceeb8c227b726249b376b8526b600f38667ee314f910353fa318caa01f4d7"}, - {file = "pytest_cov-2.12.1-py2.py3-none-any.whl", hash = "sha256:261bb9e47e65bd099c89c3edf92972865210c36813f80ede5277dceb77a4a62a"}, -] -pytest-timeout = [ - {file = "pytest-timeout-1.4.2.tar.gz", hash = "sha256:20b3113cf6e4e80ce2d403b6fb56e9e1b871b510259206d40ff8d609f48bda76"}, - {file = "pytest_timeout-1.4.2-py2.py3-none-any.whl", hash = "sha256:541d7aa19b9a6b4e475c759fd6073ef43d7cdc9a92d95644c260076eb257a063"}, -] -pywin32-ctypes = [ - {file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"}, - {file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"}, -] -readme-renderer = [ - {file = "readme_renderer-29.0-py2.py3-none-any.whl", hash = "sha256:63b4075c6698fcfa78e584930f07f39e05d46f3ec97f65006e430b595ca6348c"}, - {file = "readme_renderer-29.0.tar.gz", hash = "sha256:92fd5ac2bf8677f310f3303aa4bce5b9d5f9f2094ab98c29f13791d7b805a3db"}, -] -regex = [ - {file = "regex-2021.9.24-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0628ed7d6334e8f896f882a5c1240de8c4d9b0dd7c7fb8e9f4692f5684b7d656"}, - {file = 
"regex-2021.9.24-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3baf3eaa41044d4ced2463fd5d23bf7bd4b03d68739c6c99a59ce1f95599a673"}, - {file = "regex-2021.9.24-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c000635fd78400a558bd7a3c2981bb2a430005ebaa909d31e6e300719739a949"}, - {file = "regex-2021.9.24-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:295bc8a13554a25ad31e44c4bedabd3c3e28bba027e4feeb9bb157647a2344a7"}, - {file = "regex-2021.9.24-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0e3f59d3c772f2c3baaef2db425e6fc4149d35a052d874bb95ccfca10a1b9f4"}, - {file = "regex-2021.9.24-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:aea4006b73b555fc5bdb650a8b92cf486d678afa168cf9b38402bb60bf0f9c18"}, - {file = "regex-2021.9.24-cp310-cp310-win32.whl", hash = "sha256:09eb62654030f39f3ba46bc6726bea464069c29d00a9709e28c9ee9623a8da4a"}, - {file = "regex-2021.9.24-cp310-cp310-win_amd64.whl", hash = "sha256:8d80087320632457aefc73f686f66139801959bf5b066b4419b92be85be3543c"}, - {file = "regex-2021.9.24-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:7e3536f305f42ad6d31fc86636c54c7dafce8d634e56fef790fbacb59d499dd5"}, - {file = "regex-2021.9.24-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c31f35a984caffb75f00a86852951a337540b44e4a22171354fb760cefa09346"}, - {file = "regex-2021.9.24-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c7cb25adba814d5f419733fe565f3289d6fa629ab9e0b78f6dff5fa94ab0456"}, - {file = "regex-2021.9.24-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:85c61bee5957e2d7be390392feac7e1d7abd3a49cbaed0c8cee1541b784c8561"}, - {file = "regex-2021.9.24-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c94722bf403b8da744b7d0bb87e1f2529383003ceec92e754f768ef9323f69ad"}, - {file = "regex-2021.9.24-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6adc1bd68f81968c9d249aab8c09cdc2cbe384bf2d2cb7f190f56875000cdc72"}, - {file = "regex-2021.9.24-cp36-cp36m-win32.whl", hash = "sha256:2054dea683f1bda3a804fcfdb0c1c74821acb968093d0be16233873190d459e3"}, - {file = "regex-2021.9.24-cp36-cp36m-win_amd64.whl", hash = "sha256:7783d89bd5413d183a38761fbc68279b984b9afcfbb39fa89d91f63763fbfb90"}, - {file = "regex-2021.9.24-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b15dc34273aefe522df25096d5d087abc626e388a28a28ac75a4404bb7668736"}, - {file = "regex-2021.9.24-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10a7a9cbe30bd90b7d9a1b4749ef20e13a3528e4215a2852be35784b6bd070f0"}, - {file = "regex-2021.9.24-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fb9f5844db480e2ef9fce3a72e71122dd010ab7b2920f777966ba25f7eb63819"}, - {file = "regex-2021.9.24-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:17310b181902e0bb42b29c700e2c2346b8d81f26e900b1328f642e225c88bce1"}, - {file = "regex-2021.9.24-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0bba1f6df4eafe79db2ecf38835c2626dbd47911e0516f6962c806f83e7a99ae"}, - {file = 
"regex-2021.9.24-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:821e10b73e0898544807a0692a276e539e5bafe0a055506a6882814b6a02c3ec"}, - {file = "regex-2021.9.24-cp37-cp37m-win32.whl", hash = "sha256:9c371dd326289d85906c27ec2bc1dcdedd9d0be12b543d16e37bad35754bde48"}, - {file = "regex-2021.9.24-cp37-cp37m-win_amd64.whl", hash = "sha256:1e8d1898d4fb817120a5f684363b30108d7b0b46c7261264b100d14ec90a70e7"}, - {file = "regex-2021.9.24-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8a5c2250c0a74428fd5507ae8853706fdde0f23bfb62ee1ec9418eeacf216078"}, - {file = "regex-2021.9.24-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8aec4b4da165c4a64ea80443c16e49e3b15df0f56c124ac5f2f8708a65a0eddc"}, - {file = "regex-2021.9.24-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:650c4f1fc4273f4e783e1d8e8b51a3e2311c2488ba0fcae6425b1e2c248a189d"}, - {file = "regex-2021.9.24-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2cdb3789736f91d0b3333ac54d12a7e4f9efbc98f53cb905d3496259a893a8b3"}, - {file = "regex-2021.9.24-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4e61100200fa6ab7c99b61476f9f9653962ae71b931391d0264acfb4d9527d9c"}, - {file = "regex-2021.9.24-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8c268e78d175798cd71d29114b0a1f1391c7d011995267d3b62319ec1a4ecaa1"}, - {file = "regex-2021.9.24-cp38-cp38-win32.whl", hash = "sha256:658e3477676009083422042c4bac2bdad77b696e932a3de001c42cc046f8eda2"}, - {file = "regex-2021.9.24-cp38-cp38-win_amd64.whl", hash = "sha256:a731552729ee8ae9c546fb1c651c97bf5f759018fdd40d0e9b4d129e1e3a44c8"}, - {file = "regex-2021.9.24-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:86f9931eb92e521809d4b64ec8514f18faa8e11e97d6c2d1afa1bcf6c20a8eab"}, - {file = "regex-2021.9.24-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dcbbc9cfa147d55a577d285fd479b43103188855074552708df7acc31a476dd9"}, - {file = "regex-2021.9.24-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29385c4dbb3f8b3a55ce13de6a97a3d21bd00de66acd7cdfc0b49cb2f08c906c"}, - {file = "regex-2021.9.24-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c50a6379763c733562b1fee877372234d271e5c78cd13ade5f25978aa06744db"}, - {file = "regex-2021.9.24-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f74b6d8f59f3cfb8237e25c532b11f794b96f5c89a6f4a25857d85f84fbef11"}, - {file = "regex-2021.9.24-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6c4d83d21d23dd854ffbc8154cf293f4e43ba630aa9bd2539c899343d7f59da3"}, - {file = "regex-2021.9.24-cp39-cp39-win32.whl", hash = "sha256:95e89a8558c8c48626dcffdf9c8abac26b7c251d352688e7ab9baf351e1c7da6"}, - {file = "regex-2021.9.24-cp39-cp39-win_amd64.whl", hash = "sha256:835962f432bce92dc9bf22903d46c50003c8d11b1dc64084c8fae63bca98564a"}, - {file = "regex-2021.9.24.tar.gz", hash = "sha256:6266fde576e12357b25096351aac2b4b880b0066263e7bc7a9a1b4307991bb0e"}, -] -requests = [ - {file = "requests-2.26.0-py2.py3-none-any.whl", hash = "sha256:6c1246513ecd5ecd4528a0906f910e8f0f9c6b8ec72030dc9fd154dc1a6efd24"}, - {file = "requests-2.26.0.tar.gz", hash = 
"sha256:b8aa58f8cf793ffd8782d3d8cb19e66ef36f7aba4353eec859e74678b01b07a7"}, -] -requests-toolbelt = [ - {file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"}, - {file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"}, -] -rfc3986 = [ - {file = "rfc3986-1.5.0-py2.py3-none-any.whl", hash = "sha256:a86d6e1f5b1dc238b218b012df0aa79409667bb209e58da56d0b94704e712a97"}, - {file = "rfc3986-1.5.0.tar.gz", hash = "sha256:270aaf10d87d0d4e095063c65bf3ddbc6ee3d0b226328ce21e036f946e421835"}, -] -secretstorage = [ - {file = "SecretStorage-3.3.1-py3-none-any.whl", hash = "sha256:422d82c36172d88d6a0ed5afdec956514b189ddbfb72fefab0c8a1cee4eaf71f"}, - {file = "SecretStorage-3.3.1.tar.gz", hash = "sha256:fd666c51a6bf200643495a04abb261f83229dcb6fd8472ec393df7ffc8b6f195"}, -] -shiboken6 = [ - {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-macosx_10_14_x86_64.whl", hash = "sha256:7d285cb61dca4d543ea1ccd6f9bad1b555f222232dfc7240ebc7df83e21fb805"}, - {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-manylinux1_x86_64.whl", hash = "sha256:c44d406263c1cbd1cb92efca02790c39b1563a25e888e3e5c7291aabd2363aee"}, - {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-none-win_amd64.whl", hash = "sha256:10023b1b6b27318db74a2232e808c8e5691ec1ff39be89d9878fc60e0adbb690"}, -] -six = [ - {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, - {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, -] -toml = [ - {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, - {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, -] -tqdm = [ - {file = "tqdm-4.62.3-py2.py3-none-any.whl", hash = "sha256:8dd278a422499cd6b727e6ae4061c40b48fce8b76d1ccbf5d34fca9b7f925b0c"}, - {file = "tqdm-4.62.3.tar.gz", hash = "sha256:d359de7217506c9851b7869f3708d8ee53ed70a1b8edbba4dbcb47442592920d"}, -] -twine = [ - {file = "twine-3.4.2-py3-none-any.whl", hash = "sha256:087328e9bb405e7ce18527a2dca4042a84c7918658f951110b38bc135acab218"}, - {file = "twine-3.4.2.tar.gz", hash = "sha256:4caec0f1ed78dc4c9b83ad537e453d03ce485725f2aea57f1bb3fdde78dae936"}, -] -typed-ast = [ - {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6"}, - {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075"}, - {file = "typed_ast-1.4.3-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528"}, - {file = "typed_ast-1.4.3-cp35-cp35m-win32.whl", hash = "sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428"}, - {file = "typed_ast-1.4.3-cp35-cp35m-win_amd64.whl", hash = "sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3"}, - {file = "typed_ast-1.4.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f"}, - {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341"}, - {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_x86_64.whl", hash = 
"sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace"}, - {file = "typed_ast-1.4.3-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f"}, - {file = "typed_ast-1.4.3-cp36-cp36m-win32.whl", hash = "sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363"}, - {file = "typed_ast-1.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7"}, - {file = "typed_ast-1.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266"}, - {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e"}, - {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04"}, - {file = "typed_ast-1.4.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899"}, - {file = "typed_ast-1.4.3-cp37-cp37m-win32.whl", hash = "sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c"}, - {file = "typed_ast-1.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805"}, - {file = "typed_ast-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a"}, - {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff"}, - {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41"}, - {file = "typed_ast-1.4.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39"}, - {file = "typed_ast-1.4.3-cp38-cp38-win32.whl", hash = "sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927"}, - {file = "typed_ast-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40"}, - {file = "typed_ast-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3"}, - {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4"}, - {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0"}, - {file = "typed_ast-1.4.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3"}, - {file = "typed_ast-1.4.3-cp39-cp39-win32.whl", hash = "sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808"}, - {file = "typed_ast-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c"}, - {file = "typed_ast-1.4.3.tar.gz", hash = "sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"}, -] -typing-extensions = [ - {file = "typing_extensions-3.10.0.2-py2-none-any.whl", hash = "sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7"}, - {file = "typing_extensions-3.10.0.2-py3-none-any.whl", hash = "sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"}, - {file = "typing_extensions-3.10.0.2.tar.gz", hash = 
"sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e"}, -] -urllib3 = [ - {file = "urllib3-1.26.7-py2.py3-none-any.whl", hash = "sha256:c4fdf4019605b6e5423637e01bc9fe4daef873709a7973e195ceba0a62bbc844"}, - {file = "urllib3-1.26.7.tar.gz", hash = "sha256:4987c65554f7a2dbf30c18fd48778ef124af6fab771a377103da0585e2336ece"}, -] -webencodings = [ - {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"}, - {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"}, -] -zipp = [ - {file = "zipp-3.5.0-py3-none-any.whl", hash = "sha256:957cfda87797e389580cb8b9e3870841ca991e2125350677b2ca83a0e99390a3"}, - {file = "zipp-3.5.0.tar.gz", hash = "sha256:f5812b1e007e48cff63449a5e9f4e7ebea716b4111f9c4f9a645f91d579bf0c4"}, -] +[[package]] +name = "appdirs" +version = "1.4.4" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "atomicwrites" +version = "1.4.0" +description = "Atomic file writes." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "attrs" +version = "21.2.0" +description = "Classes Without Boilerplate" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[package.extras] +dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit"] +docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"] +tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"] +tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"] + +[[package]] +name = "backports.entry-points-selectable" +version = "1.1.0" +description = "Compatibility shim providing selectable entry points for older implementations" +category = "dev" +optional = false +python-versions = ">=2.7" + +[package.dependencies] +importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-flake8", "pytest-cov", "pytest-black (>=0.3.7)", "pytest-mypy", "pytest-checkdocs (>=2.4)", "pytest-enabler (>=1.0.1)"] + +[[package]] +name = "black" +version = "20.8b1" +description = "The uncompromising code formatter." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +appdirs = "*" +click = ">=7.1.2" +dataclasses = {version = ">=0.6", markers = "python_version < \"3.7\""} +mypy-extensions = ">=0.4.3" +pathspec = ">=0.6,<1" +regex = ">=2020.1.8" +toml = ">=0.10.1" +typed-ast = ">=1.4.0" +typing-extensions = ">=3.7.4" + +[package.extras] +colorama = ["colorama (>=0.4.3)"] +d = ["aiohttp (>=3.3.2)", "aiohttp-cors"] + +[[package]] +name = "bleach" +version = "4.1.0" +description = "An easy safelist-based HTML-sanitizing tool." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +packaging = "*" +six = ">=1.9.0" +webencodings = "*" + +[[package]] +name = "certifi" +version = "2021.10.8" +description = "Python package for providing Mozilla's CA Bundle." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "cffi" +version = "1.15.0" +description = "Foreign Function Interface for Python calling C code." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +pycparser = "*" + +[[package]] +name = "cfgv" +version = "3.3.1" +description = "Validate configuration and produce human readable error messages." +category = "dev" +optional = false +python-versions = ">=3.6.1" + +[[package]] +name = "charset-normalizer" +version = "2.0.7" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." +category = "dev" +optional = false +python-versions = ">=3.5.0" + +[package.extras] +unicode_backport = ["unicodedata2"] + +[[package]] +name = "click" +version = "8.0.3" +description = "Composable command line interface toolkit" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} +importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} + +[[package]] +name = "colorama" +version = "0.4.4" +description = "Cross-platform colored terminal text." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "coverage" +version = "6.0.2" +description = "Code coverage measurement for Python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +tomli = {version = "*", optional = true, markers = "extra == \"toml\""} + +[package.extras] +toml = ["tomli"] + +[[package]] +name = "cryptography" +version = "35.0.0" +description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +cffi = ">=1.12" + +[package.extras] +docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"] +docstest = ["doc8", "pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"] +pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"] +sdist = ["setuptools_rust (>=0.11.4)"] +ssh = ["bcrypt (>=3.1.5)"] +test = ["pytest (>=6.2.0)", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"] + +[[package]] +name = "dataclasses" +version = "0.8" +description = "A backport of the dataclasses module for Python 3.6" +category = "dev" +optional = false +python-versions = ">=3.6, <3.7" + +[[package]] +name = "distlib" +version = "0.3.3" +description = "Distribution utilities" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "docutils" +version = "0.17.1" +description = "Docutils -- Python Documentation Utilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "filelock" +version = "3.3.1" +description = "A platform independent file lock." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +docs = ["furo (>=2021.8.17b43)", "sphinx (>=4.1)", "sphinx-autodoc-typehints (>=1.12)"] +testing = ["covdefaults (>=1.2.0)", "coverage (>=4)", "pytest (>=4)", "pytest-cov", "pytest-timeout (>=1.4.2)"] + +[[package]] +name = "flake8" +version = "4.0.1" +description = "the modular source code checker: pep8 pyflakes and co" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +importlib-metadata = {version = "<4.3", markers = "python_version < \"3.8\""} +mccabe = ">=0.6.0,<0.7.0" +pycodestyle = ">=2.8.0,<2.9.0" +pyflakes = ">=2.4.0,<2.5.0" + +[[package]] +name = "flake8-black" +version = "0.2.3" +description = "flake8 plugin to call black as a code style validator" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +black = "*" +flake8 = ">=3.0.0" +toml = "*" + +[[package]] +name = "flake8-import-order" +version = "0.18.1" +description = "Flake8 and pylama plugin that checks the ordering of import statements." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +pycodestyle = "*" + +[[package]] +name = "flake8-print" +version = "4.0.0" +description = "print statement checker plugin for flake8" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +flake8 = ">=3.0" +pycodestyle = "*" +six = "*" + +[[package]] +name = "identify" +version = "2.3.1" +description = "File identification library for Python" +category = "dev" +optional = false +python-versions = ">=3.6.1" + +[package.extras] +license = ["editdistance-s"] + +[[package]] +name = "idna" +version = "3.3" +description = "Internationalized Domain Names in Applications (IDNA)" +category = "dev" +optional = false +python-versions = ">=3.5" + +[[package]] +name = "importlib-metadata" +version = "4.2.0" +description = "Read metadata from Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""} +zipp = ">=0.5" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"] + +[[package]] +name = "importlib-resources" +version = "5.3.0" +description = "Read resources from Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""} + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[[package]] +name = "iniconfig" +version = "1.1.1" +description = "iniconfig: brain-dead simple config-ini parsing" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "jeepney" +version = "0.7.1" +description = "Low-level, pure Python DBus protocol wrapper." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +test = ["pytest", "pytest-trio", "pytest-asyncio", "testpath", "trio", "async-timeout"] +trio = ["trio", "async-generator"] + +[[package]] +name = "keyring" +version = "23.2.1" +description = "Store and access your passwords safely." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +importlib-metadata = ">=3.6" +jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""} +pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""} +SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[[package]] +name = "mccabe" +version = "0.6.1" +description = "McCabe checker, plugin for flake8" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "mypy-extensions" +version = "0.4.3" +description = "Experimental type system extensions for programs checked with the mypy typechecker." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "nodeenv" +version = "1.6.0" +description = "Node.js virtual environment builder" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "packaging" +version = "21.0" +description = "Core utilities for Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +pyparsing = ">=2.0.2" + +[[package]] +name = "pathspec" +version = "0.9.0" +description = "Utility library for gitignore style pattern matching of file paths." +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" + +[[package]] +name = "pkginfo" +version = "1.7.1" +description = "Query metadatdata from sdists / bdists / installed packages." +category = "dev" +optional = false +python-versions = "*" + +[package.extras] +testing = ["nose", "coverage"] + +[[package]] +name = "platformdirs" +version = "2.4.0" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +docs = ["Sphinx (>=4)", "furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)"] +test = ["appdirs (==1.4.4)", "pytest (>=6)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)"] + +[[package]] +name = "pluggy" +version = "1.0.0" +description = "plugin and hook calling mechanisms for python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} + +[package.extras] +dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] + +[[package]] +name = "pre-commit" +version = "2.15.0" +description = "A framework for managing and maintaining multi-language pre-commit hooks." 
+category = "dev" +optional = false +python-versions = ">=3.6.1" + +[package.dependencies] +cfgv = ">=2.0.0" +identify = ">=1.0.0" +importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} +importlib-resources = {version = "*", markers = "python_version < \"3.7\""} +nodeenv = ">=0.11.1" +pyyaml = ">=5.1" +toml = "*" +virtualenv = ">=20.0.8" + +[[package]] +name = "py" +version = "1.10.0" +description = "library with cross-python path, ini-parsing, io, code, log facilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pycodestyle" +version = "2.8.0" +description = "Python style guide checker" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "pycparser" +version = "2.20" +description = "C parser in Python" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pyflakes" +version = "2.4.0" +description = "passive checker of Python programs" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pygments" +version = "2.10.0" +description = "Pygments is a syntax highlighting package written in Python." +category = "dev" +optional = false +python-versions = ">=3.5" + +[[package]] +name = "pyparsing" +version = "3.0.1" +description = "Python parsing module" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +diagrams = ["jinja2", "railroad-diagrams"] + +[[package]] +name = "pyside6" +version = "6.2.0" +description = "Python bindings for the Qt cross-platform application and UI framework" +category = "dev" +optional = false +python-versions = ">=3.6, <3.11" + +[package.dependencies] +shiboken6 = "6.2.0" + +[[package]] +name = "pytest" +version = "6.2.5" +description = "pytest: simple powerful testing with Python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""} +attrs = ">=19.2.0" +colorama = {version = "*", markers = "sys_platform == \"win32\""} +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=0.12,<2.0" +py = ">=1.8.2" +toml = "*" + +[package.extras] +testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"] + +[[package]] +name = "pytest-cov" +version = "3.0.0" +description = "Pytest plugin for measuring coverage." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +coverage = {version = ">=5.2.1", extras = ["toml"]} +pytest = ">=4.6" + +[package.extras] +testing = ["fields", "hunter", "process-tests", "six", "pytest-xdist", "virtualenv"] + +[[package]] +name = "pytest-timeout" +version = "2.0.1" +description = "pytest plugin to abort hanging tests" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +pytest = ">=5.0.0" + +[[package]] +name = "pywin32-ctypes" +version = "0.2.0" +description = "" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "pyyaml" +version = "6.0" +description = "YAML parser and emitter for Python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "readme-renderer" +version = "30.0" +description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +bleach = ">=2.1.0" +docutils = ">=0.13.1" +Pygments = ">=2.5.1" + +[package.extras] +md = ["cmarkgfm (>=0.5.0,<0.7.0)"] + +[[package]] +name = "regex" +version = "2021.10.23" +description = "Alternative regular expression module, to replace re." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "requests" +version = "2.26.0" +description = "Python HTTP for Humans." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = {version = ">=2.0.0,<2.1.0", markers = "python_version >= \"3\""} +idna = {version = ">=2.5,<4", markers = "python_version >= \"3\""} +urllib3 = ">=1.21.1,<1.27" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"] +use_chardet_on_py3 = ["chardet (>=3.0.2,<5)"] + +[[package]] +name = "requests-toolbelt" +version = "0.9.1" +description = "A utility belt for advanced users of python-requests" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +requests = ">=2.0.1,<3.0.0" + +[[package]] +name = "rfc3986" +version = "1.5.0" +description = "Validating URI References per RFC 3986" +category = "dev" +optional = false +python-versions = "*" + +[package.extras] +idna2008 = ["idna"] + +[[package]] +name = "secretstorage" +version = "3.3.1" +description = "Python bindings to FreeDesktop.org Secret Service API" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +cryptography = ">=2.0" +jeepney = ">=0.6" + +[[package]] +name = "shiboken6" +version = "6.2.0" +description = "Python / C++ bindings helper module" +category = "dev" +optional = false +python-versions = ">=3.6, <3.11" + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +category = "dev" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "tomli" +version = "1.2.2" +description = "A lil' TOML parser" +category = "dev" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "tqdm" +version = "4.62.3" +description = "Fast, Extensible Progress Meter" +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" + 
+[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + +[package.extras] +dev = ["py-make (>=0.1.0)", "twine", "wheel"] +notebook = ["ipywidgets (>=6)"] +telegram = ["requests"] + +[[package]] +name = "twine" +version = "3.4.2" +description = "Collection of utilities for publishing packages on PyPI" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +colorama = ">=0.4.3" +importlib-metadata = ">=3.6" +keyring = ">=15.1" +pkginfo = ">=1.4.2" +readme-renderer = ">=21.0" +requests = ">=2.20" +requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0" +rfc3986 = ">=1.4.0" +tqdm = ">=4.14" + +[[package]] +name = "typed-ast" +version = "1.4.3" +description = "a fork of Python 2 and 3 ast modules with type comment support" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "typing-extensions" +version = "3.10.0.2" +description = "Backported and Experimental Type Hints for Python 3.5+" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "urllib3" +version = "1.26.7" +description = "HTTP library with thread-safe connection pooling, file post, and more." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" + +[package.extras] +brotli = ["brotlipy (>=0.6.0)"] +secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"] +socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] + +[[package]] +name = "virtualenv" +version = "20.9.0" +description = "Virtual Python Environment builder" +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" + +[package.dependencies] +"backports.entry-points-selectable" = ">=1.0.4" +distlib = ">=0.3.1,<1" +filelock = ">=3.2,<4" +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} +importlib-resources = {version = ">=1.0", markers = "python_version < \"3.7\""} +platformdirs = ">=2,<3" +six = ">=1.9.0,<2" + +[package.extras] +docs = ["proselint (>=0.10.2)", "sphinx (>=3)", "sphinx-argparse (>=0.2.5)", "sphinx-rtd-theme (>=0.4.3)", "towncrier (>=19.9.0rc1)"] +testing = ["coverage (>=4)", "coverage-enable-subprocess (>=1)", "flaky (>=3)", "pytest (>=4)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.1)", "pytest-mock (>=2)", "pytest-randomly (>=1)", "pytest-timeout (>=1)", "packaging (>=20.0)"] + +[[package]] +name = "webencodings" +version = "0.5.1" +description = "Character encoding aliases for legacy web content" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "zipp" +version = "3.6.0" +description = "Backport of pathlib-compatible object wrapper for zip files" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[metadata] +lock-version = "1.1" +python-versions = ">=3.6,<3.10" +content-hash = "0e8b8053e0a6a19660176c15aa8081695ba79c728b4cd369494e4400a61089ba" + +[metadata.files] +appdirs = [ + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, +] +atomicwrites = [ + 
{file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, + {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, +] +attrs = [ + {file = "attrs-21.2.0-py2.py3-none-any.whl", hash = "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1"}, + {file = "attrs-21.2.0.tar.gz", hash = "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"}, +] +"backports.entry-points-selectable" = [ + {file = "backports.entry_points_selectable-1.1.0-py2.py3-none-any.whl", hash = "sha256:a6d9a871cde5e15b4c4a53e3d43ba890cc6861ec1332c9c2428c92f977192acc"}, + {file = "backports.entry_points_selectable-1.1.0.tar.gz", hash = "sha256:988468260ec1c196dab6ae1149260e2f5472c9110334e5d51adcb77867361f6a"}, +] +black = [ + {file = "black-20.8b1.tar.gz", hash = "sha256:1c02557aa099101b9d21496f8a914e9ed2222ef70336404eeeac8edba836fbea"}, +] +bleach = [ + {file = "bleach-4.1.0-py2.py3-none-any.whl", hash = "sha256:4d2651ab93271d1129ac9cbc679f524565cc8a1b791909c4a51eac4446a15994"}, + {file = "bleach-4.1.0.tar.gz", hash = "sha256:0900d8b37eba61a802ee40ac0061f8c2b5dee29c1927dd1d233e075ebf5a71da"}, +] +certifi = [ + {file = "certifi-2021.10.8-py2.py3-none-any.whl", hash = "sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"}, + {file = "certifi-2021.10.8.tar.gz", hash = "sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872"}, +] +cffi = [ + {file = "cffi-1.15.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c2502a1a03b6312837279c8c1bd3ebedf6c12c4228ddbad40912d671ccc8a962"}, + {file = "cffi-1.15.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:23cfe892bd5dd8941608f93348c0737e369e51c100d03718f108bf1add7bd6d0"}, + {file = "cffi-1.15.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:41d45de54cd277a7878919867c0f08b0cf817605e4eb94093e7516505d3c8d14"}, + {file = "cffi-1.15.0-cp27-cp27m-win32.whl", hash = "sha256:4a306fa632e8f0928956a41fa8e1d6243c71e7eb59ffbd165fc0b41e316b2474"}, + {file = "cffi-1.15.0-cp27-cp27m-win_amd64.whl", hash = "sha256:e7022a66d9b55e93e1a845d8c9eba2a1bebd4966cd8bfc25d9cd07d515b33fa6"}, + {file = "cffi-1.15.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:14cd121ea63ecdae71efa69c15c5543a4b5fbcd0bbe2aad864baca0063cecf27"}, + {file = "cffi-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:d4d692a89c5cf08a8557fdeb329b82e7bf609aadfaed6c0d79f5a449a3c7c023"}, + {file = "cffi-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0104fb5ae2391d46a4cb082abdd5c69ea4eab79d8d44eaaf79f1b1fd806ee4c2"}, + {file = "cffi-1.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:91ec59c33514b7c7559a6acda53bbfe1b283949c34fe7440bcf917f96ac0723e"}, + {file = "cffi-1.15.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f5c7150ad32ba43a07c4479f40241756145a1f03b43480e058cfd862bf5041c7"}, + {file = "cffi-1.15.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:00c878c90cb53ccfaae6b8bc18ad05d2036553e6d9d1d9dbcf323bbe83854ca3"}, + {file = "cffi-1.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:abb9a20a72ac4e0fdb50dae135ba5e77880518e742077ced47eb1499e29a443c"}, + {file = "cffi-1.15.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a5263e363c27b653a90078143adb3d076c1a748ec9ecc78ea2fb916f9b861962"}, + {file = "cffi-1.15.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:f54a64f8b0c8ff0b64d18aa76675262e1700f3995182267998c31ae974fbc382"}, + {file = "cffi-1.15.0-cp310-cp310-win32.whl", hash = "sha256:c21c9e3896c23007803a875460fb786118f0cdd4434359577ea25eb556e34c55"}, + {file = "cffi-1.15.0-cp310-cp310-win_amd64.whl", hash = "sha256:5e069f72d497312b24fcc02073d70cb989045d1c91cbd53979366077959933e0"}, + {file = "cffi-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:64d4ec9f448dfe041705426000cc13e34e6e5bb13736e9fd62e34a0b0c41566e"}, + {file = "cffi-1.15.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2756c88cbb94231c7a147402476be2c4df2f6078099a6f4a480d239a8817ae39"}, + {file = "cffi-1.15.0-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b96a311ac60a3f6be21d2572e46ce67f09abcf4d09344c49274eb9e0bf345fc"}, + {file = "cffi-1.15.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:75e4024375654472cc27e91cbe9eaa08567f7fbdf822638be2814ce059f58032"}, + {file = "cffi-1.15.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:59888172256cac5629e60e72e86598027aca6bf01fa2465bdb676d37636573e8"}, + {file = "cffi-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:27c219baf94952ae9d50ec19651a687b826792055353d07648a5695413e0c605"}, + {file = "cffi-1.15.0-cp36-cp36m-win32.whl", hash = "sha256:4958391dbd6249d7ad855b9ca88fae690783a6be9e86df65865058ed81fc860e"}, + {file = "cffi-1.15.0-cp36-cp36m-win_amd64.whl", hash = "sha256:f6f824dc3bce0edab5f427efcfb1d63ee75b6fcb7282900ccaf925be84efb0fc"}, + {file = "cffi-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:06c48159c1abed75c2e721b1715c379fa3200c7784271b3c46df01383b593636"}, + {file = "cffi-1.15.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c2051981a968d7de9dd2d7b87bcb9c939c74a34626a6e2f8181455dd49ed69e4"}, + {file = "cffi-1.15.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:fd8a250edc26254fe5b33be00402e6d287f562b6a5b2152dec302fa15bb3e997"}, + {file = "cffi-1.15.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:91d77d2a782be4274da750752bb1650a97bfd8f291022b379bb8e01c66b4e96b"}, + {file = "cffi-1.15.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:45db3a33139e9c8f7c09234b5784a5e33d31fd6907800b316decad50af323ff2"}, + {file = "cffi-1.15.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:263cc3d821c4ab2213cbe8cd8b355a7f72a8324577dc865ef98487c1aeee2bc7"}, + {file = "cffi-1.15.0-cp37-cp37m-win32.whl", hash = "sha256:17771976e82e9f94976180f76468546834d22a7cc404b17c22df2a2c81db0c66"}, + {file = "cffi-1.15.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3415c89f9204ee60cd09b235810be700e993e343a408693e80ce7f6a40108029"}, + {file = "cffi-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4238e6dab5d6a8ba812de994bbb0a79bddbdf80994e4ce802b6f6f3142fcc880"}, + {file = "cffi-1.15.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0808014eb713677ec1292301ea4c81ad277b6cdf2fdd90fd540af98c0b101d20"}, + {file = "cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57e9ac9ccc3101fac9d6014fba037473e4358ef4e89f8e181f8951a2c0162024"}, + {file = "cffi-1.15.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8b6c2ea03845c9f501ed1313e78de148cd3f6cad741a75d43a29b43da27f2e1e"}, + {file = "cffi-1.15.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:10dffb601ccfb65262a27233ac273d552ddc4d8ae1bf93b21c94b8511bffe728"}, + {file = "cffi-1.15.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:786902fb9ba7433aae840e0ed609f45c7bcd4e225ebb9c753aa39725bb3e6ad6"}, + {file = "cffi-1.15.0-cp38-cp38-win32.whl", hash = "sha256:da5db4e883f1ce37f55c667e5c0de439df76ac4cb55964655906306918e7363c"}, + {file = "cffi-1.15.0-cp38-cp38-win_amd64.whl", hash = "sha256:181dee03b1170ff1969489acf1c26533710231c58f95534e3edac87fff06c443"}, + {file = "cffi-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:45e8636704eacc432a206ac7345a5d3d2c62d95a507ec70d62f23cd91770482a"}, + {file = "cffi-1.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:31fb708d9d7c3f49a60f04cf5b119aeefe5644daba1cd2a0fe389b674fd1de37"}, + {file = "cffi-1.15.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6dc2737a3674b3e344847c8686cf29e500584ccad76204efea14f451d4cc669a"}, + {file = "cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74fdfdbfdc48d3f47148976f49fab3251e550a8720bebc99bf1483f5bfb5db3e"}, + {file = "cffi-1.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ffaa5c925128e29efbde7301d8ecaf35c8c60ffbcd6a1ffd3a552177c8e5e796"}, + {file = "cffi-1.15.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3f7d084648d77af029acb79a0ff49a0ad7e9d09057a9bf46596dac9514dc07df"}, + {file = "cffi-1.15.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ef1f279350da2c586a69d32fc8733092fd32cc8ac95139a00377841f59a3f8d8"}, + {file = "cffi-1.15.0-cp39-cp39-win32.whl", hash = "sha256:2a23af14f408d53d5e6cd4e3d9a24ff9e05906ad574822a10563efcef137979a"}, + {file = "cffi-1.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:3773c4d81e6e818df2efbc7dd77325ca0dcb688116050fb2b3011218eda36139"}, + {file = "cffi-1.15.0.tar.gz", hash = "sha256:920f0d66a896c2d99f0adbb391f990a84091179542c205fa53ce5787aff87954"}, +] +cfgv = [ + {file = "cfgv-3.3.1-py2.py3-none-any.whl", hash = "sha256:c6a0883f3917a037485059700b9e75da2464e6c27051014ad85ba6aaa5884426"}, + {file = "cfgv-3.3.1.tar.gz", hash = "sha256:f5a830efb9ce7a445376bb66ec94c638a9787422f96264c98edc6bdeed8ab736"}, +] +charset-normalizer = [ + {file = "charset-normalizer-2.0.7.tar.gz", hash = "sha256:e019de665e2bcf9c2b64e2e5aa025fa991da8720daa3c1138cadd2fd1856aed0"}, + {file = "charset_normalizer-2.0.7-py3-none-any.whl", hash = "sha256:f7af805c321bfa1ce6714c51f254e0d5bb5e5834039bc17db7ebe3a4cec9492b"}, +] +click = [ + {file = "click-8.0.3-py3-none-any.whl", hash = "sha256:353f466495adaeb40b6b5f592f9f91cb22372351c84caeb068132442a4518ef3"}, + {file = "click-8.0.3.tar.gz", hash = "sha256:410e932b050f5eed773c4cda94de75971c89cdb3155a72a0831139a79e5ecb5b"}, +] +colorama = [ + {file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"}, + {file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"}, +] +coverage = [ + {file = "coverage-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1549e1d08ce38259de2bc3e9a0d5f3642ff4a8f500ffc1b2df73fd621a6cdfc0"}, + {file = "coverage-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcae10fccb27ca2a5f456bf64d84110a5a74144be3136a5e598f9d9fb48c0caa"}, + {file = "coverage-6.0.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = 
"sha256:53a294dc53cfb39c74758edaa6305193fb4258a30b1f6af24b360a6c8bd0ffa7"}, + {file = "coverage-6.0.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8251b37be1f2cd9c0e5ccd9ae0380909c24d2a5ed2162a41fcdbafaf59a85ebd"}, + {file = "coverage-6.0.2-cp310-cp310-win32.whl", hash = "sha256:db42baa892cba723326284490283a68d4de516bfb5aaba369b4e3b2787a778b7"}, + {file = "coverage-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:bbffde2a68398682623d9dd8c0ca3f46fda074709b26fcf08ae7a4c431a6ab2d"}, + {file = "coverage-6.0.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:60e51a3dd55540bec686d7fff61b05048ca31e804c1f32cbb44533e6372d9cc3"}, + {file = "coverage-6.0.2-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a6a9409223a27d5ef3cca57dd7cd4dfcb64aadf2fad5c3b787830ac9223e01a"}, + {file = "coverage-6.0.2-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4b34ae4f51bbfa5f96b758b55a163d502be3dcb24f505d0227858c2b3f94f5b9"}, + {file = "coverage-6.0.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3bbda1b550e70fa6ac40533d3f23acd4f4e9cb4e6e77251ce77fdf41b3309fb2"}, + {file = "coverage-6.0.2-cp36-cp36m-win32.whl", hash = "sha256:4e28d2a195c533b58fc94a12826f4431726d8eb029ac21d874345f943530c122"}, + {file = "coverage-6.0.2-cp36-cp36m-win_amd64.whl", hash = "sha256:a82d79586a0a4f5fd1cf153e647464ced402938fbccb3ffc358c7babd4da1dd9"}, + {file = "coverage-6.0.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3be1206dc09fb6298de3fce70593e27436862331a85daee36270b6d0e1c251c4"}, + {file = "coverage-6.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9cd3828bbe1a40070c11fe16a51df733fd2f0cb0d745fb83b7b5c1f05967df7"}, + {file = "coverage-6.0.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:d036dc1ed8e1388e995833c62325df3f996675779541f682677efc6af71e96cc"}, + {file = "coverage-6.0.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04560539c19ec26995ecfb3d9307ff154fbb9a172cb57e3b3cfc4ced673103d1"}, + {file = "coverage-6.0.2-cp37-cp37m-win32.whl", hash = "sha256:e4fb7ced4d9dec77d6cf533acfbf8e1415fe799430366affb18d69ee8a3c6330"}, + {file = "coverage-6.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:77b1da5767ed2f44611bc9bc019bc93c03fa495728ec389759b6e9e5039ac6b1"}, + {file = "coverage-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:61b598cbdbaae22d9e34e3f675997194342f866bb1d781da5d0be54783dce1ff"}, + {file = "coverage-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:36e9040a43d2017f2787b28d365a4bb33fcd792c7ff46a047a04094dc0e2a30d"}, + {file = "coverage-6.0.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:9f1627e162e3864a596486774876415a7410021f4b67fd2d9efdf93ade681afc"}, + {file = "coverage-6.0.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:e7a0b42db2a47ecb488cde14e0f6c7679a2c5a9f44814393b162ff6397fcdfbb"}, + {file = "coverage-6.0.2-cp38-cp38-win32.whl", hash = "sha256:a1b73c7c4d2a42b9d37dd43199c5711d91424ff3c6c22681bc132db4a4afec6f"}, + {file = "coverage-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:1db67c497688fd4ba85b373b37cc52c50d437fd7267520ecd77bddbd89ea22c9"}, + {file = 
"coverage-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f2f184bf38e74f152eed7f87e345b51f3ab0b703842f447c22efe35e59942c24"}, + {file = "coverage-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cd1cf1deb3d5544bd942356364a2fdc8959bad2b6cf6eb17f47d301ea34ae822"}, + {file = "coverage-6.0.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad9b8c1206ae41d46ec7380b78ba735ebb77758a650643e841dd3894966c31d0"}, + {file = "coverage-6.0.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:381d773d896cc7f8ba4ff3b92dee4ed740fb88dfe33b6e42efc5e8ab6dfa1cfe"}, + {file = "coverage-6.0.2-cp39-cp39-win32.whl", hash = "sha256:424c44f65e8be58b54e2b0bd1515e434b940679624b1b72726147cfc6a9fc7ce"}, + {file = "coverage-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:abbff240f77347d17306d3201e14431519bf64495648ca5a49571f988f88dee9"}, + {file = "coverage-6.0.2-pp36-none-any.whl", hash = "sha256:7092eab374346121805fb637572483270324407bf150c30a3b161fc0c4ca5164"}, + {file = "coverage-6.0.2-pp37-none-any.whl", hash = "sha256:30922626ce6f7a5a30bdba984ad21021529d3d05a68b4f71ea3b16bda35b8895"}, + {file = "coverage-6.0.2.tar.gz", hash = "sha256:6807947a09510dc31fa86f43595bf3a14017cd60bf633cc746d52141bfa6b149"}, +] +cryptography = [ + {file = "cryptography-35.0.0-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:d57e0cdc1b44b6cdf8af1d01807db06886f10177469312fbde8f44ccbb284bc9"}, + {file = "cryptography-35.0.0-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:ced40344e811d6abba00295ced98c01aecf0c2de39481792d87af4fa58b7b4d6"}, + {file = "cryptography-35.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:54b2605e5475944e2213258e0ab8696f4f357a31371e538ef21e8d61c843c28d"}, + {file = "cryptography-35.0.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:7b7ceeff114c31f285528ba8b390d3e9cfa2da17b56f11d366769a807f17cbaa"}, + {file = "cryptography-35.0.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d69645f535f4b2c722cfb07a8eab916265545b3475fdb34e0be2f4ee8b0b15e"}, + {file = "cryptography-35.0.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a2d0e0acc20ede0f06ef7aa58546eee96d2592c00f450c9acb89c5879b61992"}, + {file = "cryptography-35.0.0-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:07bb7fbfb5de0980590ddfc7f13081520def06dc9ed214000ad4372fb4e3c7f6"}, + {file = "cryptography-35.0.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:7eba2cebca600a7806b893cb1d541a6e910afa87e97acf2021a22b32da1df52d"}, + {file = "cryptography-35.0.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:18d90f4711bf63e2fb21e8c8e51ed8189438e6b35a6d996201ebd98a26abbbe6"}, + {file = "cryptography-35.0.0-cp36-abi3-win32.whl", hash = "sha256:c10c797ac89c746e488d2ee92bd4abd593615694ee17b2500578b63cad6b93a8"}, + {file = "cryptography-35.0.0-cp36-abi3-win_amd64.whl", hash = "sha256:7075b304cd567694dc692ffc9747f3e9cb393cc4aa4fb7b9f3abd6f5c4e43588"}, + {file = "cryptography-35.0.0-pp36-pypy36_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a688ebcd08250eab5bb5bca318cc05a8c66de5e4171a65ca51db6bd753ff8953"}, + {file = "cryptography-35.0.0-pp36-pypy36_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d99915d6ab265c22873f1b4d6ea5ef462ef797b4140be4c9d8b179915e0985c6"}, + {file = 
"cryptography-35.0.0-pp36-pypy36_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:928185a6d1ccdb816e883f56ebe92e975a262d31cc536429041921f8cb5a62fd"}, + {file = "cryptography-35.0.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ebeddd119f526bcf323a89f853afb12e225902a24d29b55fe18dd6fcb2838a76"}, + {file = "cryptography-35.0.0-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:22a38e96118a4ce3b97509443feace1d1011d0571fae81fc3ad35f25ba3ea999"}, + {file = "cryptography-35.0.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb80e8a1f91e4b7ef8b33041591e6d89b2b8e122d787e87eeb2b08da71bb16ad"}, + {file = "cryptography-35.0.0-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:abb5a361d2585bb95012a19ed9b2c8f412c5d723a9836418fab7aaa0243e67d2"}, + {file = "cryptography-35.0.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:1ed82abf16df40a60942a8c211251ae72858b25b7421ce2497c2eb7a1cee817c"}, + {file = "cryptography-35.0.0.tar.gz", hash = "sha256:9933f28f70d0517686bd7de36166dda42094eac49415459d9bdf5e7df3e0086d"}, +] +dataclasses = [ + {file = "dataclasses-0.8-py3-none-any.whl", hash = "sha256:0201d89fa866f68c8ebd9d08ee6ff50c0b255f8ec63a71c16fda7af82bb887bf"}, + {file = "dataclasses-0.8.tar.gz", hash = "sha256:8479067f342acf957dc82ec415d355ab5edb7e7646b90dc6e2fd1d96ad084c97"}, +] +distlib = [ + {file = "distlib-0.3.3-py2.py3-none-any.whl", hash = "sha256:c8b54e8454e5bf6237cc84c20e8264c3e991e824ef27e8f1e81049867d861e31"}, + {file = "distlib-0.3.3.zip", hash = "sha256:d982d0751ff6eaaab5e2ec8e691d949ee80eddf01a62eaa96ddb11531fe16b05"}, +] +docutils = [ + {file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"}, + {file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"}, +] +filelock = [ + {file = "filelock-3.3.1-py3-none-any.whl", hash = "sha256:2b5eb3589e7fdda14599e7eb1a50e09b4cc14f34ed98b8ba56d33bfaafcbef2f"}, + {file = "filelock-3.3.1.tar.gz", hash = "sha256:34a9f35f95c441e7b38209775d6e0337f9a3759f3565f6c5798f19618527c76f"}, +] +flake8 = [ + {file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"}, + {file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"}, +] +flake8-black = [ + {file = "flake8-black-0.2.3.tar.gz", hash = "sha256:c199844bc1b559d91195ebe8620216f21ed67f2cc1ff6884294c91a0d2492684"}, + {file = "flake8_black-0.2.3-py3-none-any.whl", hash = "sha256:cc080ba5b3773b69ba102b6617a00cc4ecbad8914109690cfda4d565ea435d96"}, +] +flake8-import-order = [ + {file = "flake8-import-order-0.18.1.tar.gz", hash = "sha256:a28dc39545ea4606c1ac3c24e9d05c849c6e5444a50fb7e9cdd430fc94de6e92"}, + {file = "flake8_import_order-0.18.1-py2.py3-none-any.whl", hash = "sha256:90a80e46886259b9c396b578d75c749801a41ee969a235e163cfe1be7afd2543"}, +] +flake8-print = [ + {file = "flake8-print-4.0.0.tar.gz", hash = "sha256:5afac374b7dc49aac2c36d04b5eb1d746d72e6f5df75a6ecaecd99e9f79c6516"}, + {file = "flake8_print-4.0.0-py3-none-any.whl", hash = "sha256:6c0efce658513169f96d7a24cf136c434dc711eb00ebd0a985eb1120103fe584"}, +] +identify = [ + {file = "identify-2.3.1-py2.py3-none-any.whl", hash = "sha256:5a5000bd3293950d992843c0ef3d82b90a582de2161557bda7f493c8c8864f26"}, + {file = "identify-2.3.1.tar.gz", hash = "sha256:8a92c56893e9a4ce951f09a50489986615e3eba7b4c60610e0b25f93ca4487ba"}, +] +idna = [ + 
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"}, + {file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"}, +] +importlib-metadata = [ + {file = "importlib_metadata-4.2.0-py3-none-any.whl", hash = "sha256:057e92c15bc8d9e8109738a48db0ccb31b4d9d5cfbee5a8670879a30be66304b"}, + {file = "importlib_metadata-4.2.0.tar.gz", hash = "sha256:b7e52a1f8dec14a75ea73e0891f3060099ca1d8e6a462a4dff11c3e119ea1b31"}, +] +importlib-resources = [ + {file = "importlib_resources-5.3.0-py3-none-any.whl", hash = "sha256:7a65eb0d8ee98eedab76e6deb51195c67f8e575959f6356a6e15fd7e1148f2a3"}, + {file = "importlib_resources-5.3.0.tar.gz", hash = "sha256:f2e58e721b505a79abe67f5868d99f8886aec8594c962c7490d0c22925f518da"}, +] +iniconfig = [ + {file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"}, + {file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"}, +] +jeepney = [ + {file = "jeepney-0.7.1-py3-none-any.whl", hash = "sha256:1b5a0ea5c0e7b166b2f5895b91a08c14de8915afda4407fb5022a195224958ac"}, + {file = "jeepney-0.7.1.tar.gz", hash = "sha256:fa9e232dfa0c498bd0b8a3a73b8d8a31978304dcef0515adc859d4e096f96f4f"}, +] +keyring = [ + {file = "keyring-23.2.1-py3-none-any.whl", hash = "sha256:bd2145a237ed70c8ce72978b497619ddfcae640b6dcf494402d5143e37755c6e"}, + {file = "keyring-23.2.1.tar.gz", hash = "sha256:6334aee6073db2fb1f30892697b1730105b5e9a77ce7e61fca6b435225493efe"}, +] +mccabe = [ + {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"}, + {file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"}, +] +mypy-extensions = [ + {file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"}, + {file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"}, +] +nodeenv = [ + {file = "nodeenv-1.6.0-py2.py3-none-any.whl", hash = "sha256:621e6b7076565ddcacd2db0294c0381e01fd28945ab36bcf00f41c5daf63bef7"}, + {file = "nodeenv-1.6.0.tar.gz", hash = "sha256:3ef13ff90291ba2a4a7a4ff9a979b63ffdd00a464dbe04acf0ea6471517a4c2b"}, +] +packaging = [ + {file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"}, + {file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"}, +] +pathspec = [ + {file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"}, + {file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"}, +] +pkginfo = [ + {file = "pkginfo-1.7.1-py2.py3-none-any.whl", hash = "sha256:37ecd857b47e5f55949c41ed061eb51a0bee97a87c969219d144c0e023982779"}, + {file = "pkginfo-1.7.1.tar.gz", hash = "sha256:e7432f81d08adec7297633191bbf0bd47faf13cd8724c3a13250e51d542635bd"}, +] +platformdirs = [ + {file = "platformdirs-2.4.0-py3-none-any.whl", hash = "sha256:8868bbe3c3c80d42f20156f22e7131d2fb321f5bc86a2a345375c6481a67021d"}, + {file = "platformdirs-2.4.0.tar.gz", hash = "sha256:367a5e80b3d04d2428ffa76d33f124cf11e8fff2acdaa9b43d545f5c7d661ef2"}, +] +pluggy = [ + {file = 
"pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"}, + {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"}, +] +pre-commit = [ + {file = "pre_commit-2.15.0-py2.py3-none-any.whl", hash = "sha256:a4ed01000afcb484d9eb8d504272e642c4c4099bbad3a6b27e519bd6a3e928a6"}, + {file = "pre_commit-2.15.0.tar.gz", hash = "sha256:3c25add78dbdfb6a28a651780d5c311ac40dd17f160eb3954a0c59da40a505a7"}, +] +py = [ + {file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"}, + {file = "py-1.10.0.tar.gz", hash = "sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"}, +] +pycodestyle = [ + {file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"}, + {file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"}, +] +pycparser = [ + {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, + {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, +] +pyflakes = [ + {file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"}, + {file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"}, +] +pygments = [ + {file = "Pygments-2.10.0-py3-none-any.whl", hash = "sha256:b8e67fe6af78f492b3c4b3e2970c0624cbf08beb1e493b2c99b9fa1b67a20380"}, + {file = "Pygments-2.10.0.tar.gz", hash = "sha256:f398865f7eb6874156579fdf36bc840a03cab64d1cde9e93d68f46a425ec52c6"}, +] +pyparsing = [ + {file = "pyparsing-3.0.1-py3-none-any.whl", hash = "sha256:fd93fc45c47893c300bd98f5dd1b41c0e783eaeb727e7cea210dcc09d64ce7c3"}, + {file = "pyparsing-3.0.1.tar.gz", hash = "sha256:84196357aa3566d64ad123d7a3c67b0e597a115c4934b097580e5ce220b91531"}, +] +pyside6 = [ + {file = "PySide6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-abi3-macosx_10_14_x86_64.whl", hash = "sha256:57c039dc188dd5465c8a605ffa364748ae397894ac18f6df2655252ad015c97b"}, + {file = "PySide6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-abi3-manylinux1_x86_64.whl", hash = "sha256:f1b3c2992a3edbadb8b40734870ca6d6bd51b6f1678a7ec91839fad1fd0dbc91"}, + {file = "PySide6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-none-win_amd64.whl", hash = "sha256:e8d0427d7b20ec53c08a01f2563934b4342e78b87ed7e01596e147b18590b5c8"}, +] +pytest = [ + {file = "pytest-6.2.5-py3-none-any.whl", hash = "sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"}, + {file = "pytest-6.2.5.tar.gz", hash = "sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89"}, +] +pytest-cov = [ + {file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"}, + {file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"}, +] +pytest-timeout = [ + {file = "pytest-timeout-2.0.1.tar.gz", hash = "sha256:a5ec4eceddb8ea726911848593d668594107e797621e97f93a1d1dbc6fbb9080"}, + {file = "pytest_timeout-2.0.1-py3-none-any.whl", hash = "sha256:329bdea323d3e5bea4737070dd85a0d1021dbecb2da5342dc25284fdb929dff0"}, +] +pywin32-ctypes = [ + {file = "pywin32-ctypes-0.2.0.tar.gz", hash = 
"sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"}, + {file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"}, +] +pyyaml = [ + {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"}, + {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"}, + {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"}, + {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"}, + {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"}, + {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"}, + {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"}, + {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"}, + {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"}, + {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"}, + {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"}, + {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"}, + {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"}, + {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"}, + {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"}, + {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"}, + {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"}, + {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"}, + {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"}, + {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"}, + {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"}, + {file = 
"PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"}, + {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"}, + {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"}, + {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"}, + {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"}, + {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"}, + {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"}, + {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"}, + {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"}, + {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"}, + {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"}, + {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"}, +] +readme-renderer = [ + {file = "readme_renderer-30.0-py2.py3-none-any.whl", hash = "sha256:3286806450d9961d6e3b5f8a59f77e61503799aca5155c8d8d40359b4e1e1adc"}, + {file = "readme_renderer-30.0.tar.gz", hash = "sha256:8299700d7a910c304072a7601eafada6712a5b011a20139417e1b1e9f04645d8"}, +] +regex = [ + {file = "regex-2021.10.23-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:45b65d6a275a478ac2cbd7fdbf7cc93c1982d613de4574b56fd6972ceadb8395"}, + {file = "regex-2021.10.23-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74d071dbe4b53c602edd87a7476ab23015a991374ddb228d941929ad7c8c922e"}, + {file = "regex-2021.10.23-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:34d870f9f27f2161709054d73646fc9aca49480617a65533fc2b4611c518e455"}, + {file = "regex-2021.10.23-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fb698037c35109d3c2e30f2beb499e5ebae6e4bb8ff2e60c50b9a805a716f79"}, + {file = "regex-2021.10.23-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cb46b542133999580ffb691baf67410306833ee1e4f58ed06b6a7aaf4e046952"}, + {file = "regex-2021.10.23-cp310-cp310-win32.whl", hash = "sha256:5e9c9e0ce92f27cef79e28e877c6b6988c48b16942258f3bc55d39b5f911df4f"}, + {file = "regex-2021.10.23-cp310-cp310-win_amd64.whl", hash = "sha256:ab7c5684ff3538b67df3f93d66bd3369b749087871ae3786e70ef39e601345b0"}, + {file = "regex-2021.10.23-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:de557502c3bec8e634246588a94e82f1ee1b9dfcfdc453267c4fb652ff531570"}, + {file = "regex-2021.10.23-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:ee684f139c91e69fe09b8e83d18b4d63bf87d9440c1eb2eeb52ee851883b1b29"}, + {file = "regex-2021.10.23-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5095a411c8479e715784a0c9236568ae72509450ee2226b649083730f3fadfc6"}, + {file = "regex-2021.10.23-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b568809dca44cb75c8ebb260844ea98252c8c88396f9d203f5094e50a70355f"}, + {file = "regex-2021.10.23-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:eb672217f7bd640411cfc69756ce721d00ae600814708d35c930930f18e8029f"}, + {file = "regex-2021.10.23-cp36-cp36m-win32.whl", hash = "sha256:a7a986c45d1099a5de766a15de7bee3840b1e0e1a344430926af08e5297cf666"}, + {file = "regex-2021.10.23-cp36-cp36m-win_amd64.whl", hash = "sha256:6d7722136c6ed75caf84e1788df36397efdc5dbadab95e59c2bba82d4d808a4c"}, + {file = "regex-2021.10.23-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:9f665677e46c5a4d288ece12fdedf4f4204a422bb28ff05f0e6b08b7447796d1"}, + {file = "regex-2021.10.23-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:450dc27483548214314640c89a0f275dbc557968ed088da40bde7ef8fb52829e"}, + {file = "regex-2021.10.23-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:129472cd06062fb13e7b4670a102951a3e655e9b91634432cfbdb7810af9d710"}, + {file = "regex-2021.10.23-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a940ca7e7189d23da2bfbb38973832813eab6bd83f3bf89a977668c2f813deae"}, + {file = "regex-2021.10.23-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:530fc2bbb3dc1ebb17f70f7b234f90a1dd43b1b489ea38cea7be95fb21cdb5c7"}, + {file = "regex-2021.10.23-cp37-cp37m-win32.whl", hash = "sha256:ded0c4a3eee56b57fcb2315e40812b173cafe79d2f992d50015f4387445737fa"}, + {file = "regex-2021.10.23-cp37-cp37m-win_amd64.whl", hash = "sha256:391703a2abf8013d95bae39145d26b4e21531ab82e22f26cd3a181ee2644c234"}, + {file = "regex-2021.10.23-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:be04739a27be55631069b348dda0c81d8ea9822b5da10b8019b789e42d1fe452"}, + {file = "regex-2021.10.23-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13ec99df95003f56edcd307db44f06fbeb708c4ccdcf940478067dd62353181e"}, + {file = "regex-2021.10.23-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:8d1cdcda6bd16268316d5db1038965acf948f2a6f43acc2e0b1641ceab443623"}, + {file = "regex-2021.10.23-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0c186691a7995ef1db61205e00545bf161fb7b59cdb8c1201c89b333141c438a"}, + {file = "regex-2021.10.23-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2b20f544cbbeffe171911f6ce90388ad36fe3fad26b7c7a35d4762817e9ea69c"}, + {file = "regex-2021.10.23-cp38-cp38-win32.whl", hash = "sha256:c0938ddd60cc04e8f1faf7a14a166ac939aac703745bfcd8e8f20322a7373019"}, + {file = "regex-2021.10.23-cp38-cp38-win_amd64.whl", hash = "sha256:56f0c81c44638dfd0e2367df1a331b4ddf2e771366c4b9c5d9a473de75e3e1c7"}, + {file = "regex-2021.10.23-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:80bb5d2e92b2258188e7dcae5b188c7bf868eafdf800ea6edd0fbfc029984a88"}, + {file = 
"regex-2021.10.23-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1dae12321b31059a1a72aaa0e6ba30156fe7e633355e445451e4021b8e122b6"}, + {file = "regex-2021.10.23-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:1f2b59c28afc53973d22e7bc18428721ee8ca6079becf1b36571c42627321c65"}, + {file = "regex-2021.10.23-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d134757a37d8640f3c0abb41f5e68b7cf66c644f54ef1cb0573b7ea1c63e1509"}, + {file = "regex-2021.10.23-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0dcc0e71118be8c69252c207630faf13ca5e1b8583d57012aae191e7d6d28b84"}, + {file = "regex-2021.10.23-cp39-cp39-win32.whl", hash = "sha256:a30513828180264294953cecd942202dfda64e85195ae36c265daf4052af0464"}, + {file = "regex-2021.10.23-cp39-cp39-win_amd64.whl", hash = "sha256:0f7552429dd39f70057ac5d0e897e5bfe211629652399a21671e53f2a9693a4e"}, + {file = "regex-2021.10.23.tar.gz", hash = "sha256:f3f9a91d3cc5e5b0ddf1043c0ae5fa4852f18a1c0050318baf5fc7930ecc1f9c"}, +] +requests = [ + {file = "requests-2.26.0-py2.py3-none-any.whl", hash = "sha256:6c1246513ecd5ecd4528a0906f910e8f0f9c6b8ec72030dc9fd154dc1a6efd24"}, + {file = "requests-2.26.0.tar.gz", hash = "sha256:b8aa58f8cf793ffd8782d3d8cb19e66ef36f7aba4353eec859e74678b01b07a7"}, +] +requests-toolbelt = [ + {file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"}, + {file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"}, +] +rfc3986 = [ + {file = "rfc3986-1.5.0-py2.py3-none-any.whl", hash = "sha256:a86d6e1f5b1dc238b218b012df0aa79409667bb209e58da56d0b94704e712a97"}, + {file = "rfc3986-1.5.0.tar.gz", hash = "sha256:270aaf10d87d0d4e095063c65bf3ddbc6ee3d0b226328ce21e036f946e421835"}, +] +secretstorage = [ + {file = "SecretStorage-3.3.1-py3-none-any.whl", hash = "sha256:422d82c36172d88d6a0ed5afdec956514b189ddbfb72fefab0c8a1cee4eaf71f"}, + {file = "SecretStorage-3.3.1.tar.gz", hash = "sha256:fd666c51a6bf200643495a04abb261f83229dcb6fd8472ec393df7ffc8b6f195"}, +] +shiboken6 = [ + {file = "shiboken6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-abi3-macosx_10_14_x86_64.whl", hash = "sha256:8ccd10c9e21cb6e1e9135f706ea988c41494f7282cb869f5b2069b8142742865"}, + {file = "shiboken6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-abi3-manylinux1_x86_64.whl", hash = "sha256:dce3b434d5d1be50b8920799647bbb2346dfd9331bf922425128085020097cb8"}, + {file = "shiboken6-6.2.0-6.2.0-cp36.cp37.cp38.cp39.cp310-none-win_amd64.whl", hash = "sha256:44d32913baf698265f48a4d1072224ef6503bc49911b44dcfb48b103310ad643"}, +] +six = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] +toml = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] +tomli = [ + {file = "tomli-1.2.2-py3-none-any.whl", hash = "sha256:f04066f68f5554911363063a30b108d2b5a5b1a010aa8b6132af78489fe3aade"}, + {file = "tomli-1.2.2.tar.gz", hash = 
"sha256:c6ce0015eb38820eaf32b5db832dbc26deb3dd427bd5f6556cf0acac2c214fee"}, +] +tqdm = [ + {file = "tqdm-4.62.3-py2.py3-none-any.whl", hash = "sha256:8dd278a422499cd6b727e6ae4061c40b48fce8b76d1ccbf5d34fca9b7f925b0c"}, + {file = "tqdm-4.62.3.tar.gz", hash = "sha256:d359de7217506c9851b7869f3708d8ee53ed70a1b8edbba4dbcb47442592920d"}, +] +twine = [ + {file = "twine-3.4.2-py3-none-any.whl", hash = "sha256:087328e9bb405e7ce18527a2dca4042a84c7918658f951110b38bc135acab218"}, + {file = "twine-3.4.2.tar.gz", hash = "sha256:4caec0f1ed78dc4c9b83ad537e453d03ce485725f2aea57f1bb3fdde78dae936"}, +] +typed-ast = [ + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6"}, + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075"}, + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528"}, + {file = "typed_ast-1.4.3-cp35-cp35m-win32.whl", hash = "sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428"}, + {file = "typed_ast-1.4.3-cp35-cp35m-win_amd64.whl", hash = "sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3"}, + {file = "typed_ast-1.4.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f"}, + {file = "typed_ast-1.4.3-cp36-cp36m-win32.whl", hash = "sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363"}, + {file = "typed_ast-1.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7"}, + {file = "typed_ast-1.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899"}, + {file = "typed_ast-1.4.3-cp37-cp37m-win32.whl", hash = "sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c"}, + {file = "typed_ast-1.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805"}, + {file = "typed_ast-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux2014_aarch64.whl", hash = 
"sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39"}, + {file = "typed_ast-1.4.3-cp38-cp38-win32.whl", hash = "sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927"}, + {file = "typed_ast-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40"}, + {file = "typed_ast-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3"}, + {file = "typed_ast-1.4.3-cp39-cp39-win32.whl", hash = "sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808"}, + {file = "typed_ast-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c"}, + {file = "typed_ast-1.4.3.tar.gz", hash = "sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"}, +] +typing-extensions = [ + {file = "typing_extensions-3.10.0.2-py2-none-any.whl", hash = "sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7"}, + {file = "typing_extensions-3.10.0.2-py3-none-any.whl", hash = "sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"}, + {file = "typing_extensions-3.10.0.2.tar.gz", hash = "sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e"}, +] +urllib3 = [ + {file = "urllib3-1.26.7-py2.py3-none-any.whl", hash = "sha256:c4fdf4019605b6e5423637e01bc9fe4daef873709a7973e195ceba0a62bbc844"}, + {file = "urllib3-1.26.7.tar.gz", hash = "sha256:4987c65554f7a2dbf30c18fd48778ef124af6fab771a377103da0585e2336ece"}, +] +virtualenv = [ + {file = "virtualenv-20.9.0-py2.py3-none-any.whl", hash = "sha256:1d145deec2da86b29026be49c775cc5a9aab434f85f7efef98307fb3965165de"}, + {file = "virtualenv-20.9.0.tar.gz", hash = "sha256:bb55ace18de14593947354e5e6cd1be75fb32c3329651da62e92bf5d0aab7213"}, +] +webencodings = [ + {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"}, + {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"}, +] +zipp = [ + {file = "zipp-3.6.0-py3-none-any.whl", hash = "sha256:9fe5ea21568a0a70e50f273397638d39b03353731e6cbbb3fd8502a33fec40bc"}, + {file = "zipp-3.6.0.tar.gz", hash = "sha256:71c644c5369f4a6e07636f0aa966270449561fcea2e3d6747b8d23efaa9d7832"}, +] diff --git a/pyproject.toml b/pyproject.toml index 8723108..05653ea 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -16,6 +16,7 @@ flake8 = "*" flake8-black = "*" flake8-import-order = "*" flake8-print = "*" +pre-commit = { version = "*", python = ">=3.6.1"} pytest = "*" pytest-cov = "*" pytest-timeout = "*"
Minimize deps. E.g. `dict.copy` causes `depend` to be called for the dict dep and all of its keydeps, where only the dict dep would be sufficient. This mostly matters in eager mode, but in general it would shave off some redundant function calls. A sketch of the redundancy follows below.
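To make the redundancy concrete, here is a minimal sketch with hypothetical instrumentation (monkeypatching `Dep.depend` is not part of observ's API); the module path `observ.dep` follows the layout shown in the patch later in this document and is an assumption:

```python
# Hypothetical instrumentation: count Dep.depend() calls to expose the
# redundancy described above. Assumes observ's observe/watch/Dep as shown
# in the patches elsewhere in this document.
from observ import observe, watch
from observ.dep import Dep

calls = {"n": 0}
_orig_depend = Dep.depend

def counting_depend(self):
    calls["n"] += 1  # tally every dependency registration during tracking
    _orig_depend(self)

Dep.depend = counting_depend
try:
    state = observe({"a": 1, "b": 2, "c": 3})
    # A whole-dict read such as copy() only needs the dict's __dep__;
    # registering each per-key Dep in __keydeps__ as well is redundant.
    w = watch(lambda: state.copy(), callback=lambda old, new: None)
    print(calls["n"])  # ideally 1 (the dict dep), not 1 + the number of keys
finally:
    Dep.depend = _orig_depend  # restore the original method
```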
Added a unit test to verify that we don't trigger redundant recomputations. That's a relief. See https://github.com/Korijn/observ/blob/master/tests/test_deps.py#L4
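A hedged sketch of what such a regression test can look like (the real test lives at the linked tests/test_deps.py; the function and variable names here are illustrative, not copied from it):

```python
from observ import observe, watch

def test_no_redundant_recompute():
    # One state change should trigger exactly one re-evaluation, not one
    # per dep that the watched function happened to register.
    state = observe({"a": 1, "b": 2})
    runs = []

    def fn():
        runs.append(1)  # record every evaluation of the watched function
        return state.copy()

    w = watch(fn, callback=lambda old, new: None)  # keep a reference: subs are weak
    state["a"] = 3

    assert len(runs) == 2  # the initial evaluation plus exactly one recompute
```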
2021-09-27T16:08:37
0.0
[]
[]
fork-tongue/observ
fork-tongue__observ-26
d80a058073cdb2284c02ed399246b40b3a439e97
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 5b52cd3..3805063 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -4,15 +4,18 @@ on: push: branches: - master + tags: + - 'v*' pull_request: branches: - master jobs: - build: - name: ${{ matrix.name }} + test: + name: Lint and test on ${{ matrix.name }} runs-on: ubuntu-latest strategy: + fail-fast: false matrix: include: - name: Linux py36 @@ -23,7 +26,6 @@ jobs: pyversion: '3.8' - name: Linux py39 pyversion: '3.9' - steps: - uses: actions/checkout@v2 - name: Set up Python ${{ matrix.pyversion }} @@ -31,12 +33,69 @@ jobs: with: python-version: ${{ matrix.pyversion }} - name: Install poetry - run: | - curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python - echo "$HOME/.poetry/bin" >> $GITHUB_PATH + run: pip install "poetry>=1.1.8,<1.2" - name: Install dependencies - run: | - poetry install - - name: Lint (black and flake8) and test - run: | - poetry run test + run: poetry install + - name: Lint + run: poetry run flake8 + - name: Test + run: poetry run pytest --cov=observ --cov-report=term-missing + + build: + name: Build and test wheel + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - name: Set up Python 3.9 + uses: actions/setup-python@v2 + with: + python-version: '3.9' + - name: Install poetry + run: pip install "poetry>=1.1.8,<1.2" + - name: Install dependencies + run: poetry install + - name: Build wheel + run: poetry build + - name: Twine check + run: poetry run twine check dist/* + - name: Upload wheel artifact + uses: actions/upload-artifact@v2 + with: + path: dist + name: dist + + publish: + name: Publish to Github and Pypi + runs-on: ubuntu-latest + needs: [test, build] + if: success() && startsWith(github.ref, 'refs/tags/v') + steps: + - uses: actions/checkout@v2 + - name: Download wheel artifact + uses: actions/[email protected] + with: + name: dist + - name: Get version from git ref + id: get_version + run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//} + - name: Create GH release + uses: actions/create-release@v1 + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + with: + tag_name: ${{ steps.get_version.outputs.VERSION }} + release_name: Release ${{ steps.get_version.outputs.VERSION }} + draft: false + prerelease: false + - name: Upload release assets + # Move back to official action after fix https://github.com/actions/upload-release-asset/issues/4 + uses: AButler/[email protected] + with: + release-tag: ${{ steps.get_version.outputs.VERSION }} + files: 'dist/*.tar.gz;dist/*.whl' + repo-token: ${{ secrets.GITHUB_TOKEN }} + - name: Publish to PyPI + uses: pypa/gh-action-pypi-publish@master + with: + user: __token__ + password: ${{ secrets.PYPI_PASSWORD }} diff --git a/.gitignore b/.gitignore index a5eb1f6..633a173 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,3 @@ -poetry.lock *.egg-info/ __pycache__ .vscode diff --git a/LICENSE b/LICENSE index 4d9a1d3..11c09a5 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright © 2019-2020 observ authors +Copyright © 2019-2021 observ authors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/examples/observe_qt.py b/examples/observe_qt.py index 4205010..a3bb4ef 100644 --- a/examples/observe_qt.py +++ b/examples/observe_qt.py @@ -7,9 +7,19 @@ and updates the label whenever a computed property based on the state changes. 
""" +from time import sleep -from observ import computed, observe, watch -from PyQt5.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget +from PySide6.QtCore import QObject, QThread, Signal +from PySide6.QtWidgets import ( + QApplication, + QLabel, + QProgressBar, + QPushButton, + QVBoxLayout, + QWidget, +) + +from observ import observe, scheduler, watch class Display(QWidget): @@ -19,23 +29,61 @@ def __init__(self, state, *args, **kwargs): self.state = state self.label = QLabel() + self.progress = QProgressBar() + self.progress.setMinimum(0) + self.progress.setMaximum(100) layout = QVBoxLayout() layout.addWidget(self.label) + layout.addWidget(self.progress) self.setLayout(layout) - @computed def label_text(): if state["clicked"] == 0: return "Please click the button below" return f"Clicked {state['clicked']} times!" + def progress_visible(): + return state["progress"] > 0 + self.watcher = watch(label_text, self.update_label, immediate=True) + self.progress_watch = watch( + lambda: state["progress"], + self.update_progress, + ) + self.progress_visible = watch( + progress_visible, self.update_visibility, immediate=True + ) + + def update_progress(self, old_value, new_value): + # Trigger another watcher during scheduler flush + if new_value == 50: + self.state["clicked"] += 0.5 + self.progress.setValue(new_value) def update_label(self, old_value, new_value): self.label.setText(new_value) + def update_visibility(self, old_value, new_value): + self.progress.setVisible(new_value) + + +class LongJob(QObject): + progress = Signal(int) + result = Signal(int) + finished = Signal() + + def run(self): + self.progress.emit(0) + for i in range(100): + sleep(1 / 100.0) + self.progress.emit(i + 1) + + self.progress.emit(0) + self.result.emit(1) + self.finished.emit() + class Controls(QWidget): def __init__(self, state, *args, **kwargs): @@ -56,7 +104,26 @@ def __init__(self, state, *args, **kwargs): self.reset.clicked.connect(self.on_reset_clicked) def on_button_clicked(self): - self.state["clicked"] += 1 + self.thread = QThread() + self.worker = LongJob() + self.worker.moveToThread(self.thread) + self.thread.started.connect(self.worker.run) + self.worker.finished.connect(self.thread.quit) + self.worker.finished.connect(self.worker.deleteLater) + self.thread.finished.connect(self.thread.deleteLater) + + self.thread.start() + + def progress(x): + self.state["progress"] = x + + def bump(x): + self.state["clicked"] += x * 0.5 + + self.button.setEnabled(False) + self.thread.finished.connect(lambda: self.button.setEnabled(True)) + self.worker.result.connect(lambda x: bump(x)) + self.worker.progress.connect(lambda x: progress(x)) def on_reset_clicked(self): self.state["clicked"] = 0 @@ -64,10 +131,12 @@ def on_reset_clicked(self): if __name__ == "__main__": # Define some state - state = observe({"clicked": 0}) + state = observe({"clicked": 0, "progress": 0}) app = QApplication([]) + scheduler.register_qt() + # Create layout and pass state to widgets layout = QVBoxLayout() layout.addWidget(Display(state)) @@ -78,4 +147,4 @@ def on_reset_clicked(self): widget.show() widget.setWindowTitle("Clicked?") - app.exec_() + app.exec() diff --git a/observ/__init__.py b/observ/__init__.py index 92cce17..14999ed 100644 --- a/observ/__init__.py +++ b/observ/__init__.py @@ -1,426 +1,4 @@ -__version__ = "0.3.0" +__version__ = "0.4.0" -from collections.abc import Container -from functools import wraps -from itertools import count -import sys -from typing import Any -from weakref import WeakSet - - -class 
Dep: - stack = [] - - def __init__(self) -> None: - self._subs = WeakSet() - - def add_sub(self, sub: "Watcher") -> None: - self._subs.add(sub) - - def remove_sub(self, sub: "Watcher") -> None: - self._subs.remove(sub) - - def depend(self) -> None: - if self.stack: - self.stack[-1].add_dep(self) - - def notify(self) -> None: - for sub in sorted(self._subs, key=lambda s: s.id): - sub.update() - - -def traverse(obj): - _traverse(obj, set()) - - -def _traverse(obj, seen): - seen.add(id(obj)) - if isinstance(obj, dict): - val_iter = iter(obj.values()) - elif isinstance(obj, (list, tuple, set)): - val_iter = iter(obj) - else: - val_iter = iter(()) - for v in val_iter: - if isinstance(v, Container) and id(v) not in seen: - _traverse(v, seen) - - -_ids = count() - - -class Watcher: - def __init__(self, fn, lazy=True, deep=False, callback=None) -> None: - self.id = next(_ids) - self.fn = fn - self._deps, self._new_deps = WeakSet(), WeakSet() - - self.callback = callback - self.deep = deep - self.lazy = lazy - self.dirty = self.lazy - self.value = None if self.lazy else self.get() - - def update(self) -> None: - self.dirty = True - if not self.lazy: - self.evaluate() - - def evaluate(self) -> None: - if self.dirty: - old_value = self.value - self.value = self.get() - self.dirty = False - if self.callback: - self.callback(old_value, self.value) - - def get(self) -> Any: - Dep.stack.append(self) - try: - value = self.fn() - finally: - if self.deep: - traverse(value) - Dep.stack.pop() - self.cleanup_deps() - return value - - def add_dep(self, dep: Dep) -> None: - if dep not in self._new_deps: - self._new_deps.add(dep) - if dep not in self._deps: - dep.add_sub(self) - - def cleanup_deps(self) -> None: - for dep in self._deps: - if dep not in self._new_deps: - dep.remove_sub(self) - self._deps, self._new_deps = self._new_deps, self._deps - self._new_deps.clear() - - def depend(self) -> None: - """This function is used by other watchers to depend on everything - this watcher depends on.""" - if Dep.stack: - for dep in self._deps: - dep.depend() - - def teardown(self) -> None: - for dep in self._deps: - dep.remove_sub(self) - - def __del__(self) -> None: - self.teardown() - - -def make_observable(cls): - def read(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - self.__dep__.depend() - return fn(self, *args, **kwargs) - - return inner - - def read_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - if Dep.stack: - key = args[0] - self.__keydeps__[key].depend() - return fn(self, *args, **kwargs) - - return inner - - def write(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - args = tuple(observe(a) for a in args) - kwargs = {k: observe(v) for k, v in kwargs.items()} - retval = fn(self, *args, **kwargs) - self.__dep__.notify() - return retval - - return inner - - def write_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - key = args[0] - is_new = key not in self.__keydeps__ - old_value = cls.__getitem__(self, key) if not is_new else None - args = [key] + [observe(a) for a in args[1:]] - kwargs = {k: observe(v) for k, v in kwargs.items()} - retval = fn(self, *args, **kwargs) - new_value = cls.__getitem__(self, key) - if is_new: - self.__keydeps__[key] = Dep() - if old_value != new_value: - self.__keydeps__[key].notify() - self.__dep__.notify() - return retval - - return inner - - def delete(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - retval = fn(self, *args, **kwargs) - self.__dep__.notify() - for key in self._orphaned_keydeps(): - 
self.__keydeps__[key].notify() - del self.__keydeps__[key] - return retval - - return inner - - def delete_key(fn): - @wraps(fn) - def inner(self, *args, **kwargs): - retval = fn(self, *args, **kwargs) - # TODO prevent firing if value hasn't actually changed? - key = args[0] - self.__dep__.notify() - self.__keydeps__[key].notify() - del self.__keydeps__[key] - return retval - - return inner - - todo = [ - ("_READERS", read), - ("_KEYREADERS", read_key), - ("_WRITERS", write), - ("_KEYWRITERS", write_key), - ("_DELETERS", delete), - ("_KEYDELETERS", delete_key), - ] - - for category, decorate in todo: - for name in getattr(cls, category, set()): - fn = getattr(cls, name) - setattr(cls, name, decorate(fn)) - - return cls - - -class ObservableDict(dict): - _READERS = { - "values", - "copy", - "items", - "keys", - "__eq__", - "__format__", - "__ge__", - "__gt__", - "__iter__", - "__le__", - "__len__", - "__lt__", - "__ne__", - "__repr__", - "__sizeof__", - "__str__", - } - _KEYREADERS = { - "get", - "__contains__", - "__getitem__", - } - _WRITERS = { - "update", - } - _KEYWRITERS = { - "setdefault", - "__setitem__", - } - _DELETERS = { - "clear", - "popitem", - } - _KEYDELETERS = { - "pop", - "__delitem__", - } - - @wraps(dict.__init__) - def __init__(self, *args, **kwargs): - dict.__init__(self, *args, **kwargs) - self.__dep__ = Dep() - self.__keydeps__ = {key: Dep() for key in dict.keys(self)} - - def _orphaned_keydeps(self): - return set(self.__keydeps__.keys()) - set(dict.keys(self)) - - -if sys.version_info >= (3, 8, 0): - ObservableDict._READERS.add("__reversed__") -if sys.version_info >= (3, 9, 0): - ObservableDict._READERS.add("__or__") - ObservableDict._READERS.add("__ror__") - ObservableDict._WRITERS.add("__ior__") -ObservableDict = make_observable(ObservableDict) - - -@make_observable -class ObservableList(list): - _READERS = { - "count", - "index", - "copy", - "__add__", - "__getitem__", - "__contains__", - "__eq__", - "__ge__", - "__gt__", - "__le__", - "__lt__", - "__mul__", - "__ne__", - "__rmul__", - "__iter__", - "__len__", - "__repr__", - "__str__", - "__format__", - "__reversed__", - "__sizeof__", - } - _WRITERS = { - "append", - "clear", - "extend", - "insert", - "pop", - "remove", - "reverse", - "sort", - "__setitem__", - "__delitem__", - "__iadd__", - "__imul__", - } - - @wraps(list.__init__) - def __init__(self, *args, **kwargs): - list.__init__(self, *args, **kwargs) - self.__dep__ = Dep() - - -@make_observable -class ObservableSet(set): - _READERS = { - "copy", - "difference", - "intersection", - "isdisjoint", - "issubset", - "issuperset", - "symmetric_difference", - "union", - "__and__", - "__contains__", - "__eq__", - "__format__", - "__ge__", - "__gt__", - "__iand__", - "__ior__", - "__isub__", - "__iter__", - "__ixor__", - "__le__", - "__len__", - "__lt__", - "__ne__", - "__or__", - "__rand__", - "__repr__", - "__ror__", - "__rsub__", - "__rxor__", - "__sizeof__", - "__str__", - "__sub__", - "__xor__", - } - _WRITERS = { - "add", - "clear", - "difference_update", - "intersection_update", - "discard", - "pop", - "remove", - "symmetric_difference_update", - "update", - } - - @wraps(set.__init__) - def __init__(self, *args, **kwargs): - set.__init__(self, *args, **kwargs) - self.__dep__ = Dep() - - -def observe(obj, deep=True): - if not isinstance(obj, (dict, list, tuple, set)): - return obj # common case first - elif isinstance(obj, dict): - if not isinstance(obj, ObservableDict): - reactive = ObservableDict(obj) - else: - reactive = obj - if deep: - for k, v in 
reactive.items(): - reactive[k] = observe(v) - return reactive - elif isinstance(obj, list): - if not isinstance(obj, ObservableList): - reactive = ObservableList(obj) - else: - reactive = obj - if deep: - for i, v in enumerate(reactive): - reactive[i] = observe(v) - return reactive - elif isinstance(obj, tuple): - reactive = obj # tuples are immutable - if deep: - reactive = tuple(observe(v) for v in reactive) - return reactive - elif isinstance(obj, set): - if deep: - return ObservableSet(observe(v) for v in obj) - else: - if not isinstance(obj, ObservableSet): - reactive = ObservableSet(obj) - else: - reactive = obj - return reactive - - -def computed(fn): - watcher = Watcher(fn) - - @wraps(fn) - def getter(): - if watcher.dirty: - watcher.evaluate() - if Dep.stack: - watcher.depend() - return watcher.value - - getter.__watcher__ = watcher - return getter - - -def watch(fn, callback, deep=False, immediate=False): - watcher = Watcher(fn, lazy=False, deep=deep, callback=callback) - if immediate: - watcher.dirty = True - watcher.evaluate() - return watcher +from .api import * diff --git a/observ/api.py b/observ/api.py new file mode 100644 index 0000000..f330578 --- /dev/null +++ b/observ/api.py @@ -0,0 +1,37 @@ +""" +Defines the public API for observ users +""" +from functools import wraps + +from .dep import Dep +from .observables import observe +from .scheduler import scheduler +from .watcher import Watcher + + +__all__ = ("observe", "computed", "watch", "scheduler") + + +def computed(fn): + watcher = Watcher(fn) + + @wraps(fn) + def getter(): + if watcher.dirty: + watcher.evaluate() + if Dep.stack: + watcher.depend() + return watcher.value + + getter.__watcher__ = watcher + return getter + + +def watch(fn, callback, sync=False, deep=False, immediate=False): + watcher = Watcher(fn, sync=sync, lazy=False, deep=deep, callback=callback) + if immediate: + watcher.dirty = True + watcher.evaluate() + if watcher.callback: + watcher.callback(None, watcher.value) + return watcher diff --git a/observ/dep.py b/observ/dep.py new file mode 100644 index 0000000..972548a --- /dev/null +++ b/observ/dep.py @@ -0,0 +1,27 @@ +""" +Deps implement the classic observable pattern, and +are attached to observable datastructures. +""" +from typing import List +from weakref import WeakSet + + +class Dep: + stack: List["Watcher"] = [] # noqa: F821 + + def __init__(self) -> None: + self._subs = WeakSet() + + def add_sub(self, sub: "Watcher") -> None: # noqa: F821 + self._subs.add(sub) + + def remove_sub(self, sub: "Watcher") -> None: # noqa: F821 + self._subs.remove(sub) + + def depend(self) -> None: + if self.stack: + self.stack[-1].add_dep(self) + + def notify(self) -> None: + for sub in sorted(self._subs, key=lambda s: s.id): + sub.update() diff --git a/observ/observables.py b/observ/observables.py new file mode 100644 index 0000000..c47ffed --- /dev/null +++ b/observ/observables.py @@ -0,0 +1,298 @@ +""" +observe converts plain datastructures (dict, list, set) to +proxied versions of those datastructures to make them reactive. 
+""" +from functools import wraps +import sys + +from .dep import Dep + + +def observe(obj, deep=True): + """Please be aware: this only works on plain data types!""" + if not isinstance(obj, (dict, list, tuple, set)): + return obj # common case first + elif isinstance(obj, dict): + if not isinstance(obj, ObservableDict): + reactive = ObservableDict(obj) + else: + reactive = obj + if deep: + for k, v in reactive.items(): + reactive[k] = observe(v) + return reactive + elif isinstance(obj, list): + if not isinstance(obj, ObservableList): + reactive = ObservableList(obj) + else: + reactive = obj + if deep: + for i, v in enumerate(reactive): + reactive[i] = observe(v) + return reactive + elif isinstance(obj, tuple): + reactive = obj # tuples are immutable + if deep: + reactive = tuple(observe(v) for v in reactive) + return reactive + elif isinstance(obj, set): + if deep: + return ObservableSet(observe(v) for v in obj) + else: + if not isinstance(obj, ObservableSet): + reactive = ObservableSet(obj) + else: + reactive = obj + return reactive + + +def make_observable(cls): + def read(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + self.__dep__.depend() + return fn(self, *args, **kwargs) + + return inner + + def read_key(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + if Dep.stack: + key = args[0] + self.__keydeps__[key].depend() + return fn(self, *args, **kwargs) + + return inner + + def write(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + args = tuple(observe(a) for a in args) + kwargs = {k: observe(v) for k, v in kwargs.items()} + retval = fn(self, *args, **kwargs) + # TODO prevent firing if value hasn't actually changed? + self.__dep__.notify() + return retval + + return inner + + def write_key(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + key = args[0] + is_new = key not in self.__keydeps__ + old_value = cls.__getitem__(self, key) if not is_new else None + args = [key] + [observe(a) for a in args[1:]] + kwargs = {k: observe(v) for k, v in kwargs.items()} + retval = fn(self, *args, **kwargs) + new_value = cls.__getitem__(self, key) + if is_new: + self.__keydeps__[key] = Dep() + if old_value != new_value: + self.__keydeps__[key].notify() + self.__dep__.notify() + return retval + + return inner + + def delete(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + retval = fn(self, *args, **kwargs) + self.__dep__.notify() + for key in self._orphaned_keydeps(): + self.__keydeps__[key].notify() + del self.__keydeps__[key] + return retval + + return inner + + def delete_key(fn): + @wraps(fn) + def inner(self, *args, **kwargs): + retval = fn(self, *args, **kwargs) + key = args[0] + self.__dep__.notify() + self.__keydeps__[key].notify() + del self.__keydeps__[key] + return retval + + return inner + + todo = [ + ("_READERS", read), + ("_KEYREADERS", read_key), + ("_WRITERS", write), + ("_KEYWRITERS", write_key), + ("_DELETERS", delete), + ("_KEYDELETERS", delete_key), + ] + + for category, decorate in todo: + for name in getattr(cls, category, set()): + fn = getattr(cls, name) + setattr(cls, name, decorate(fn)) + + return cls + + +class ObservableDict(dict): + _READERS = { + "values", + "copy", + "items", + "keys", + "__eq__", + "__format__", + "__ge__", + "__gt__", + "__iter__", + "__le__", + "__len__", + "__lt__", + "__ne__", + "__repr__", + "__sizeof__", + "__str__", + } + _KEYREADERS = { + "get", + "__contains__", + "__getitem__", + } + _WRITERS = { + "update", + } + _KEYWRITERS = { + "setdefault", + "__setitem__", + } + _DELETERS = { + "clear", + 
"popitem", + } + _KEYDELETERS = { + "pop", + "__delitem__", + } + + @wraps(dict.__init__) + def __init__(self, *args, **kwargs): + dict.__init__(self, *args, **kwargs) + self.__dep__ = Dep() + self.__keydeps__ = {key: Dep() for key in dict.keys(self)} + + def _orphaned_keydeps(self): + return set(self.__keydeps__.keys()) - set(dict.keys(self)) + + +if sys.version_info >= (3, 8, 0): + ObservableDict._READERS.add("__reversed__") +if sys.version_info >= (3, 9, 0): + ObservableDict._READERS.add("__or__") + ObservableDict._READERS.add("__ror__") + ObservableDict._WRITERS.add("__ior__") +ObservableDict = make_observable(ObservableDict) + + +@make_observable +class ObservableList(list): + _READERS = { + "count", + "index", + "copy", + "__add__", + "__getitem__", + "__contains__", + "__eq__", + "__ge__", + "__gt__", + "__le__", + "__lt__", + "__mul__", + "__ne__", + "__rmul__", + "__iter__", + "__len__", + "__repr__", + "__str__", + "__format__", + "__reversed__", + "__sizeof__", + } + _WRITERS = { + "append", + "clear", + "extend", + "insert", + "pop", + "remove", + "reverse", + "sort", + "__setitem__", + "__delitem__", + "__iadd__", + "__imul__", + } + + @wraps(list.__init__) + def __init__(self, *args, **kwargs): + list.__init__(self, *args, **kwargs) + self.__dep__ = Dep() + + +@make_observable +class ObservableSet(set): + _READERS = { + "copy", + "difference", + "intersection", + "isdisjoint", + "issubset", + "issuperset", + "symmetric_difference", + "union", + "__and__", + "__contains__", + "__eq__", + "__format__", + "__ge__", + "__gt__", + "__iand__", + "__ior__", + "__isub__", + "__iter__", + "__ixor__", + "__le__", + "__len__", + "__lt__", + "__ne__", + "__or__", + "__rand__", + "__repr__", + "__ror__", + "__rsub__", + "__rxor__", + "__sizeof__", + "__str__", + "__sub__", + "__xor__", + } + _WRITERS = { + "add", + "clear", + "difference_update", + "intersection_update", + "discard", + "pop", + "remove", + "symmetric_difference_update", + "update", + } + + @wraps(set.__init__) + def __init__(self, *args, **kwargs): + set.__init__(self, *args, **kwargs) + self.__dep__ = Dep() diff --git a/observ/scheduler.py b/observ/scheduler.py new file mode 100644 index 0000000..a846964 --- /dev/null +++ b/observ/scheduler.py @@ -0,0 +1,97 @@ +""" +The scheduler queues up and deduplicates re-evaluation of lazy Watchers +and should be integrated in the event loop of your choosing. +""" +from bisect import bisect + + +class Scheduler: + def __init__(self): + self._queue = [] + self._queue_indices = [] + self.flushing = False + self.has = set() + self.circular = {} + self.index = 0 + self.waiting = False + + def register_qt(self): + """ + Utility function for integration with Qt event loop + """ + # Currently only supports PySide6 + from PySide6.QtCore import QTimer + + self.timer = QTimer() + self.timer.setSingleShot(True) + self.timer.timeout.connect(scheduler.flush) + # Set interval to 0 to trigger the timer as soon + # as possible (when Qt is done processing events) + self.timer.setInterval(0) + self.register_flush_request(self.timer.start) + + def register_flush_request(self, request_flush): + """ + Register callback for registering a call to flush + """ + self.request_flush = request_flush + + def flush(self): + """ + Flush the queue to evaluate all queued watchers. + You can call this manually, or register a callback + to request to perform the flush. 
+        """
+        if not self._queue:
+            return
+
+        self.flushing = True
+        self.waiting = False
+        self._queue.sort(key=lambda s: s.id)
+        self._queue_indices.sort()
+
+        while self.index < len(self._queue):
+            watcher = self._queue[self.index]
+            self.has.discard(watcher.id)
+            watcher.run()
+
+            if watcher.id in self.has:
+                self.circular[watcher.id] = self.circular.get(watcher.id, 0) + 1
+                if self.circular[watcher.id] > 100:
+                    # TODO: help user to figure out which watcher
+                    # or function this is about
+                    raise RecursionError("Infinite update loop detected")
+
+            self.index += 1
+
+        self._queue.clear()
+        self._queue_indices.clear()
+        self.flushing = False
+        self.has.clear()
+        self.circular.clear()
+        self.index = 0
+
+    def queue(self, watcher: "Watcher"):  # noqa: F821
+        if watcher.id in self.has:
+            return
+
+        self.has.add(watcher.id)
+        if not self.flushing:
+            self._queue.append(watcher)
+            self._queue_indices.append(watcher.id)
+            if not self.waiting and self.request_flush:
+                self.waiting = True
+                self.request_flush()
+        else:
+            # Already flushing: splice the watcher into the queue based
+            # on its id. If the flush is already past that id, the
+            # watcher will run next. The tail of the queue must stay
+            # ordered so that bisect works correctly and deadlocks
+            # are avoided.
+            i = bisect(self._queue_indices[self.index + 1 :], watcher.id)
+            i += self.index + 1
+            self._queue.insert(i, watcher)
+            self._queue_indices.insert(i, watcher.id)
+
+
+# Construct the global scheduler instance
+scheduler = Scheduler()
diff --git a/observ/watcher.py b/observ/watcher.py
new file mode 100644
index 0000000..221ea08
--- /dev/null
+++ b/observ/watcher.py
@@ -0,0 +1,107 @@
+"""
+Watchers perform dependency tracking by running functions that act on
+observable data structures, and can optionally trigger a callback when
+a change is detected.
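+
+A minimal usage sketch (assuming ``observe`` from this package is used
+to wrap the state; the variable names are illustrative only):
+
+    state = observe({"count": 0})
+    watcher = Watcher(
+        lambda: state["count"],
+        sync=True,
+        lazy=False,
+        callback=lambda old, new: print(old, "->", new),
+    )
+    state["count"] = 1  # fires the callback with (0, 1)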
+""" +from collections.abc import Container +from itertools import count +from typing import Any +from weakref import WeakSet + +from .dep import Dep +from .scheduler import scheduler + + +def traverse(obj): + _traverse(obj, set()) + + +def _traverse(obj, seen): + seen.add(id(obj)) + if isinstance(obj, dict): + val_iter = iter(obj.values()) + elif isinstance(obj, (list, tuple, set)): + val_iter = iter(obj) + else: + val_iter = iter(()) + for v in val_iter: + if isinstance(v, Container) and id(v) not in seen: + _traverse(v, seen) + + +# Every Watcher gets a unique ID which is used to +# keep track of the order in which subscribers will +# be notified +_ids = count() + + +class Watcher: + def __init__(self, fn, sync=False, lazy=True, deep=False, callback=None) -> None: + """ + sync: Ignore the scheduler + lazy: Only reevalutate when value is requested + deep: Deep watch the watched value + callback: Method to call when value has changed + """ + self.id = next(_ids) + self.fn = fn + self._deps, self._new_deps = WeakSet(), WeakSet() + + self.sync = sync + self.callback = callback + self.deep = deep + self.lazy = lazy + self.dirty = self.lazy + self.value = None if self.lazy else self.get() + + def update(self) -> None: + if self.lazy: + self.dirty = True + elif self.sync: + self.run() + else: + scheduler.queue(self) + + def evaluate(self) -> None: + self.value = self.get() + self.dirty = False + + def run(self) -> None: + """Called by scheduler""" + value = self.get() + if self.deep or isinstance(value, Container) or value != self.value: + old_value = self.value + self.value = value + if self.callback: + self.callback(old_value, self.value) + + def get(self) -> Any: + Dep.stack.append(self) + try: + value = self.fn() + finally: + if self.deep: + traverse(value) + Dep.stack.pop() + self.cleanup_deps() + return value + + def add_dep(self, dep: Dep) -> None: + if dep not in self._new_deps: + self._new_deps.add(dep) + if dep not in self._deps: + dep.add_sub(self) + + def cleanup_deps(self) -> None: + for dep in self._deps: + if dep not in self._new_deps: + dep.remove_sub(self) + self._deps, self._new_deps = self._new_deps, self._deps + self._new_deps.clear() + + def depend(self) -> None: + """This function is used by other watchers to depend on everything + this watcher depends on.""" + if Dep.stack: + for dep in self._deps: + dep.depend() diff --git a/poetry.lock b/poetry.lock new file mode 100644 index 0000000..31776d8 --- /dev/null +++ b/poetry.lock @@ -0,0 +1,1030 @@ +[[package]] +name = "appdirs" +version = "1.4.4" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "atomicwrites" +version = "1.4.0" +description = "Atomic file writes." 
+category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "attrs" +version = "21.2.0" +description = "Classes Without Boilerplate" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[package.extras] +dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit"] +docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"] +tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"] +tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"] + +[[package]] +name = "black" +version = "20.8b1" +description = "The uncompromising code formatter." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +appdirs = "*" +click = ">=7.1.2" +dataclasses = {version = ">=0.6", markers = "python_version < \"3.7\""} +mypy-extensions = ">=0.4.3" +pathspec = ">=0.6,<1" +regex = ">=2020.1.8" +toml = ">=0.10.1" +typed-ast = ">=1.4.0" +typing-extensions = ">=3.7.4" + +[package.extras] +colorama = ["colorama (>=0.4.3)"] +d = ["aiohttp (>=3.3.2)", "aiohttp-cors"] + +[[package]] +name = "bleach" +version = "4.1.0" +description = "An easy safelist-based HTML-sanitizing tool." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +packaging = "*" +six = ">=1.9.0" +webencodings = "*" + +[[package]] +name = "certifi" +version = "2021.5.30" +description = "Python package for providing Mozilla's CA Bundle." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "cffi" +version = "1.14.6" +description = "Foreign Function Interface for Python calling C code." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +pycparser = "*" + +[[package]] +name = "charset-normalizer" +version = "2.0.6" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." +category = "dev" +optional = false +python-versions = ">=3.5.0" + +[package.extras] +unicode_backport = ["unicodedata2"] + +[[package]] +name = "click" +version = "8.0.1" +description = "Composable command line interface toolkit" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} +importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} + +[[package]] +name = "colorama" +version = "0.4.4" +description = "Cross-platform colored terminal text." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "coverage" +version = "5.5" +description = "Code coverage measurement for Python" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" + +[package.extras] +toml = ["toml"] + +[[package]] +name = "cryptography" +version = "3.4.8" +description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +cffi = ">=1.12" + +[package.extras] +docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"] +docstest = ["doc8", "pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"] +pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"] +sdist = ["setuptools-rust (>=0.11.4)"] +ssh = ["bcrypt (>=3.1.5)"] +test = ["pytest (>=6.0)", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"] + +[[package]] +name = "dataclasses" +version = "0.8" +description = "A backport of the dataclasses module for Python 3.6" +category = "dev" +optional = false +python-versions = ">=3.6, <3.7" + +[[package]] +name = "docutils" +version = "0.17.1" +description = "Docutils -- Python Documentation Utilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "flake8" +version = "3.9.2" +description = "the modular source code checker: pep8 pyflakes and co" +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" + +[package.dependencies] +importlib-metadata = {version = "*", markers = "python_version < \"3.8\""} +mccabe = ">=0.6.0,<0.7.0" +pycodestyle = ">=2.7.0,<2.8.0" +pyflakes = ">=2.3.0,<2.4.0" + +[[package]] +name = "flake8-black" +version = "0.2.3" +description = "flake8 plugin to call black as a code style validator" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +black = "*" +flake8 = ">=3.0.0" +toml = "*" + +[[package]] +name = "flake8-import-order" +version = "0.18.1" +description = "Flake8 and pylama plugin that checks the ordering of import statements." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +pycodestyle = "*" + +[[package]] +name = "flake8-print" +version = "4.0.0" +description = "print statement checker plugin for flake8" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +flake8 = ">=3.0" +pycodestyle = "*" +six = "*" + +[[package]] +name = "idna" +version = "3.2" +description = "Internationalized Domain Names in Applications (IDNA)" +category = "dev" +optional = false +python-versions = ">=3.5" + +[[package]] +name = "importlib-metadata" +version = "4.8.1" +description = "Read metadata from Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""} +zipp = ">=0.5" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +perf = ["ipython"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"] + +[[package]] +name = "iniconfig" +version = "1.1.1" +description = "iniconfig: brain-dead simple config-ini parsing" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "jeepney" +version = "0.7.1" +description = "Low-level, pure Python DBus protocol wrapper." 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +test = ["pytest", "pytest-trio", "pytest-asyncio", "testpath", "trio", "async-timeout"] +trio = ["trio", "async-generator"] + +[[package]] +name = "keyring" +version = "23.2.1" +description = "Store and access your passwords safely." +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +importlib-metadata = ">=3.6" +jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""} +pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""} +SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""} + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[[package]] +name = "mccabe" +version = "0.6.1" +description = "McCabe checker, plugin for flake8" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "mypy-extensions" +version = "0.4.3" +description = "Experimental type system extensions for programs checked with the mypy typechecker." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "packaging" +version = "21.0" +description = "Core utilities for Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +pyparsing = ">=2.0.2" + +[[package]] +name = "pathspec" +version = "0.9.0" +description = "Utility library for gitignore style pattern matching of file paths." +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" + +[[package]] +name = "pkginfo" +version = "1.7.1" +description = "Query metadatdata from sdists / bdists / installed packages." +category = "dev" +optional = false +python-versions = "*" + +[package.extras] +testing = ["nose", "coverage"] + +[[package]] +name = "pluggy" +version = "1.0.0" +description = "plugin and hook calling mechanisms for python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} + +[package.extras] +dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] + +[[package]] +name = "py" +version = "1.10.0" +description = "library with cross-python path, ini-parsing, io, code, log facilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pycodestyle" +version = "2.7.0" +description = "Python style guide checker" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pycparser" +version = "2.20" +description = "C parser in Python" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pyflakes" +version = "2.3.1" +description = "passive checker of Python programs" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pygments" +version = "2.10.0" +description = "Pygments is a syntax highlighting package written in Python." 
+category = "dev" +optional = false +python-versions = ">=3.5" + +[[package]] +name = "pyparsing" +version = "2.4.7" +description = "Python parsing module" +category = "dev" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "pyside6" +version = "6.1.3" +description = "Python bindings for the Qt cross-platform application and UI framework" +category = "dev" +optional = false +python-versions = ">=3.6, <3.10" + +[package.dependencies] +shiboken6 = "6.1.3" + +[[package]] +name = "pytest" +version = "6.2.5" +description = "pytest: simple powerful testing with Python" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""} +attrs = ">=19.2.0" +colorama = {version = "*", markers = "sys_platform == \"win32\""} +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=0.12,<2.0" +py = ">=1.8.2" +toml = "*" + +[package.extras] +testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"] + +[[package]] +name = "pytest-cov" +version = "2.12.1" +description = "Pytest plugin for measuring coverage." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[package.dependencies] +coverage = ">=5.2.1" +pytest = ">=4.6" +toml = "*" + +[package.extras] +testing = ["fields", "hunter", "process-tests", "six", "pytest-xdist", "virtualenv"] + +[[package]] +name = "pywin32-ctypes" +version = "0.2.0" +description = "" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "readme-renderer" +version = "29.0" +description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +bleach = ">=2.1.0" +docutils = ">=0.13.1" +Pygments = ">=2.5.1" +six = "*" + +[package.extras] +md = ["cmarkgfm (>=0.5.0,<0.6.0)"] + +[[package]] +name = "regex" +version = "2021.8.28" +description = "Alternative regular expression module, to replace re." +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "requests" +version = "2.26.0" +description = "Python HTTP for Humans." 
+category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = {version = ">=2.0.0,<2.1.0", markers = "python_version >= \"3\""} +idna = {version = ">=2.5,<4", markers = "python_version >= \"3\""} +urllib3 = ">=1.21.1,<1.27" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"] +use_chardet_on_py3 = ["chardet (>=3.0.2,<5)"] + +[[package]] +name = "requests-toolbelt" +version = "0.9.1" +description = "A utility belt for advanced users of python-requests" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +requests = ">=2.0.1,<3.0.0" + +[[package]] +name = "rfc3986" +version = "1.5.0" +description = "Validating URI References per RFC 3986" +category = "dev" +optional = false +python-versions = "*" + +[package.extras] +idna2008 = ["idna"] + +[[package]] +name = "secretstorage" +version = "3.3.1" +description = "Python bindings to FreeDesktop.org Secret Service API" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +cryptography = ">=2.0" +jeepney = ">=0.6" + +[[package]] +name = "shiboken6" +version = "6.1.3" +description = "Python / C++ bindings helper module" +category = "dev" +optional = false +python-versions = ">=3.6, <3.10" + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +category = "dev" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "tqdm" +version = "4.62.3" +description = "Fast, Extensible Progress Meter" +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + +[package.extras] +dev = ["py-make (>=0.1.0)", "twine", "wheel"] +notebook = ["ipywidgets (>=6)"] +telegram = ["requests"] + +[[package]] +name = "twine" +version = "3.4.2" +description = "Collection of utilities for publishing packages on PyPI" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +colorama = ">=0.4.3" +importlib-metadata = ">=3.6" +keyring = ">=15.1" +pkginfo = ">=1.4.2" +readme-renderer = ">=21.0" +requests = ">=2.20" +requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0" +rfc3986 = ">=1.4.0" +tqdm = ">=4.14" + +[[package]] +name = "typed-ast" +version = "1.4.3" +description = "a fork of Python 2 and 3 ast modules with type comment support" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "typing-extensions" +version = "3.10.0.2" +description = "Backported and Experimental Type Hints for Python 3.5+" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "urllib3" +version = "1.26.7" +description = "HTTP library with thread-safe connection pooling, file post, and more." 
+category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" + +[package.extras] +brotli = ["brotlipy (>=0.6.0)"] +secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"] +socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] + +[[package]] +name = "webencodings" +version = "0.5.1" +description = "Character encoding aliases for legacy web content" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "zipp" +version = "3.5.0" +description = "Backport of pathlib-compatible object wrapper for zip files" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[metadata] +lock-version = "1.1" +python-versions = ">=3.6,<3.10" +content-hash = "50794712bb36b65563b8b5d7c7fd5469fedb0f4a7982b72bedde6c59a436e546" + +[metadata.files] +appdirs = [ + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, +] +atomicwrites = [ + {file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, + {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, +] +attrs = [ + {file = "attrs-21.2.0-py2.py3-none-any.whl", hash = "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1"}, + {file = "attrs-21.2.0.tar.gz", hash = "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"}, +] +black = [ + {file = "black-20.8b1.tar.gz", hash = "sha256:1c02557aa099101b9d21496f8a914e9ed2222ef70336404eeeac8edba836fbea"}, +] +bleach = [ + {file = "bleach-4.1.0-py2.py3-none-any.whl", hash = "sha256:4d2651ab93271d1129ac9cbc679f524565cc8a1b791909c4a51eac4446a15994"}, + {file = "bleach-4.1.0.tar.gz", hash = "sha256:0900d8b37eba61a802ee40ac0061f8c2b5dee29c1927dd1d233e075ebf5a71da"}, +] +certifi = [ + {file = "certifi-2021.5.30-py2.py3-none-any.whl", hash = "sha256:50b1e4f8446b06f41be7dd6338db18e0990601dce795c2b1686458aa7e8fa7d8"}, + {file = "certifi-2021.5.30.tar.gz", hash = "sha256:2bbf76fd432960138b3ef6dda3dde0544f27cbf8546c458e60baf371917ba9ee"}, +] +cffi = [ + {file = "cffi-1.14.6-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:22b9c3c320171c108e903d61a3723b51e37aaa8c81255b5e7ce102775bd01e2c"}, + {file = "cffi-1.14.6-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:f0c5d1acbfca6ebdd6b1e3eded8d261affb6ddcf2186205518f1428b8569bb99"}, + {file = "cffi-1.14.6-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99f27fefe34c37ba9875f224a8f36e31d744d8083e00f520f133cab79ad5e819"}, + {file = "cffi-1.14.6-cp27-cp27m-win32.whl", hash = "sha256:55af55e32ae468e9946f741a5d51f9896da6b9bf0bbdd326843fec05c730eb20"}, + {file = "cffi-1.14.6-cp27-cp27m-win_amd64.whl", hash = "sha256:7bcac9a2b4fdbed2c16fa5681356d7121ecabf041f18d97ed5b8e0dd38a80224"}, + {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:ed38b924ce794e505647f7c331b22a693bee1538fdf46b0222c4717b42f744e7"}, + {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_x86_64.whl", hash = 
"sha256:e22dcb48709fc51a7b58a927391b23ab37eb3737a98ac4338e2448bef8559b33"}, + {file = "cffi-1.14.6-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:aedb15f0a5a5949ecb129a82b72b19df97bbbca024081ed2ef88bd5c0a610534"}, + {file = "cffi-1.14.6-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:48916e459c54c4a70e52745639f1db524542140433599e13911b2f329834276a"}, + {file = "cffi-1.14.6-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f627688813d0a4140153ff532537fbe4afea5a3dffce1f9deb7f91f848a832b5"}, + {file = "cffi-1.14.6-cp35-cp35m-win32.whl", hash = "sha256:f0010c6f9d1a4011e429109fda55a225921e3206e7f62a0c22a35344bfd13cca"}, + {file = "cffi-1.14.6-cp35-cp35m-win_amd64.whl", hash = "sha256:57e555a9feb4a8460415f1aac331a2dc833b1115284f7ded7278b54afc5bd218"}, + {file = "cffi-1.14.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e8c6a99be100371dbb046880e7a282152aa5d6127ae01783e37662ef73850d8f"}, + {file = "cffi-1.14.6-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:19ca0dbdeda3b2615421d54bef8985f72af6e0c47082a8d26122adac81a95872"}, + {file = "cffi-1.14.6-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:d950695ae4381ecd856bcaf2b1e866720e4ab9a1498cba61c602e56630ca7195"}, + {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9dc245e3ac69c92ee4c167fbdd7428ec1956d4e754223124991ef29eb57a09d"}, + {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a8661b2ce9694ca01c529bfa204dbb144b275a31685a075ce123f12331be790b"}, + {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b315d709717a99f4b27b59b021e6207c64620790ca3e0bde636a6c7f14618abb"}, + {file = "cffi-1.14.6-cp36-cp36m-win32.whl", hash = "sha256:80b06212075346b5546b0417b9f2bf467fea3bfe7352f781ffc05a8ab24ba14a"}, + {file = "cffi-1.14.6-cp36-cp36m-win_amd64.whl", hash = "sha256:a9da7010cec5a12193d1af9872a00888f396aba3dc79186604a09ea3ee7c029e"}, + {file = "cffi-1.14.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4373612d59c404baeb7cbd788a18b2b2a8331abcc84c3ba40051fcd18b17a4d5"}, + {file = "cffi-1.14.6-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f10afb1004f102c7868ebfe91c28f4a712227fe4cb24974350ace1f90e1febbf"}, + {file = "cffi-1.14.6-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:fd4305f86f53dfd8cd3522269ed7fc34856a8ee3709a5e28b2836b2db9d4cd69"}, + {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d6169cb3c6c2ad50db5b868db6491a790300ade1ed5d1da29289d73bbe40b56"}, + {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5d4b68e216fc65e9fe4f524c177b54964af043dde734807586cf5435af84045c"}, + {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33791e8a2dc2953f28b8d8d300dde42dd929ac28f974c4b4c6272cb2955cb762"}, + {file = "cffi-1.14.6-cp37-cp37m-win32.whl", hash = "sha256:0c0591bee64e438883b0c92a7bed78f6290d40bf02e54c5bf0978eaf36061771"}, + {file = "cffi-1.14.6-cp37-cp37m-win_amd64.whl", hash = "sha256:8eb687582ed7cd8c4bdbff3df6c0da443eb89c3c72e6e5dcdd9c81729712791a"}, + {file = "cffi-1.14.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ba6f2b3f452e150945d58f4badd92310449876c4c954836cfb1803bdd7b422f0"}, + {file = "cffi-1.14.6-cp38-cp38-manylinux1_i686.whl", hash = "sha256:64fda793737bc4037521d4899be780534b9aea552eb673b9833b01f945904c2e"}, + {file = "cffi-1.14.6-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:9f3e33c28cd39d1b655ed1ba7247133b6f7fc16fa16887b120c0c670e35ce346"}, + {file 
= "cffi-1.14.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26bb2549b72708c833f5abe62b756176022a7b9a7f689b571e74c8478ead51dc"}, + {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eb687a11f0a7a1839719edd80f41e459cc5366857ecbed383ff376c4e3cc6afd"}, + {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d2ad4d668a5c0645d281dcd17aff2be3212bc109b33814bbb15c4939f44181cc"}, + {file = "cffi-1.14.6-cp38-cp38-win32.whl", hash = "sha256:487d63e1454627c8e47dd230025780e91869cfba4c753a74fda196a1f6ad6548"}, + {file = "cffi-1.14.6-cp38-cp38-win_amd64.whl", hash = "sha256:c33d18eb6e6bc36f09d793c0dc58b0211fccc6ae5149b808da4a62660678b156"}, + {file = "cffi-1.14.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:06c54a68935738d206570b20da5ef2b6b6d92b38ef3ec45c5422c0ebaf338d4d"}, + {file = "cffi-1.14.6-cp39-cp39-manylinux1_i686.whl", hash = "sha256:f174135f5609428cc6e1b9090f9268f5c8935fddb1b25ccb8255a2d50de6789e"}, + {file = "cffi-1.14.6-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f3ebe6e73c319340830a9b2825d32eb6d8475c1dac020b4f0aa774ee3b898d1c"}, + {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c8d896becff2fa653dc4438b54a5a25a971d1f4110b32bd3068db3722c80202"}, + {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4922cd707b25e623b902c86188aca466d3620892db76c0bdd7b99a3d5e61d35f"}, + {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9e005e9bd57bc987764c32a1bee4364c44fdc11a3cc20a40b93b444984f2b87"}, + {file = "cffi-1.14.6-cp39-cp39-win32.whl", hash = "sha256:eb9e2a346c5238a30a746893f23a9535e700f8192a68c07c0258e7ece6ff3728"}, + {file = "cffi-1.14.6-cp39-cp39-win_amd64.whl", hash = "sha256:818014c754cd3dba7229c0f5884396264d51ffb87ec86e927ef0be140bfdb0d2"}, + {file = "cffi-1.14.6.tar.gz", hash = "sha256:c9a875ce9d7fe32887784274dd533c57909b7b1dcadcc128a2ac21331a9765dd"}, +] +charset-normalizer = [ + {file = "charset-normalizer-2.0.6.tar.gz", hash = "sha256:5ec46d183433dcbd0ab716f2d7f29d8dee50505b3fdb40c6b985c7c4f5a3591f"}, + {file = "charset_normalizer-2.0.6-py3-none-any.whl", hash = "sha256:5d209c0a931f215cee683b6445e2d77677e7e75e159f78def0db09d68fafcaa6"}, +] +click = [ + {file = "click-8.0.1-py3-none-any.whl", hash = "sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"}, + {file = "click-8.0.1.tar.gz", hash = "sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a"}, +] +colorama = [ + {file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"}, + {file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"}, +] +coverage = [ + {file = "coverage-5.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:b6d534e4b2ab35c9f93f46229363e17f63c53ad01330df9f2d6bd1187e5eaacf"}, + {file = "coverage-5.5-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:b7895207b4c843c76a25ab8c1e866261bcfe27bfaa20c192de5190121770672b"}, + {file = "coverage-5.5-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:c2723d347ab06e7ddad1a58b2a821218239249a9e4365eaff6649d31180c1669"}, + {file = "coverage-5.5-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:900fbf7759501bc7807fd6638c947d7a831fc9fdf742dc10f02956ff7220fa90"}, + {file = "coverage-5.5-cp27-cp27m-manylinux2010_x86_64.whl", hash = 
"sha256:004d1880bed2d97151facef49f08e255a20ceb6f9432df75f4eef018fdd5a78c"}, + {file = "coverage-5.5-cp27-cp27m-win32.whl", hash = "sha256:06191eb60f8d8a5bc046f3799f8a07a2d7aefb9504b0209aff0b47298333302a"}, + {file = "coverage-5.5-cp27-cp27m-win_amd64.whl", hash = "sha256:7501140f755b725495941b43347ba8a2777407fc7f250d4f5a7d2a1050ba8e82"}, + {file = "coverage-5.5-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:372da284cfd642d8e08ef606917846fa2ee350f64994bebfbd3afb0040436905"}, + {file = "coverage-5.5-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:8963a499849a1fc54b35b1c9f162f4108017b2e6db2c46c1bed93a72262ed083"}, + {file = "coverage-5.5-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:869a64f53488f40fa5b5b9dcb9e9b2962a66a87dab37790f3fcfb5144b996ef5"}, + {file = "coverage-5.5-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:4a7697d8cb0f27399b0e393c0b90f0f1e40c82023ea4d45d22bce7032a5d7b81"}, + {file = "coverage-5.5-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:8d0a0725ad7c1a0bcd8d1b437e191107d457e2ec1084b9f190630a4fb1af78e6"}, + {file = "coverage-5.5-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:51cb9476a3987c8967ebab3f0fe144819781fca264f57f89760037a2ea191cb0"}, + {file = "coverage-5.5-cp310-cp310-win_amd64.whl", hash = "sha256:c0891a6a97b09c1f3e073a890514d5012eb256845c451bd48f7968ef939bf4ae"}, + {file = "coverage-5.5-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:3487286bc29a5aa4b93a072e9592f22254291ce96a9fbc5251f566b6b7343cdb"}, + {file = "coverage-5.5-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:deee1077aae10d8fa88cb02c845cfba9b62c55e1183f52f6ae6a2df6a2187160"}, + {file = "coverage-5.5-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f11642dddbb0253cc8853254301b51390ba0081750a8ac03f20ea8103f0c56b6"}, + {file = "coverage-5.5-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:6c90e11318f0d3c436a42409f2749ee1a115cd8b067d7f14c148f1ce5574d701"}, + {file = "coverage-5.5-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:30c77c1dc9f253283e34c27935fded5015f7d1abe83bc7821680ac444eaf7793"}, + {file = "coverage-5.5-cp35-cp35m-win32.whl", hash = "sha256:9a1ef3b66e38ef8618ce5fdc7bea3d9f45f3624e2a66295eea5e57966c85909e"}, + {file = "coverage-5.5-cp35-cp35m-win_amd64.whl", hash = "sha256:972c85d205b51e30e59525694670de6a8a89691186012535f9d7dbaa230e42c3"}, + {file = "coverage-5.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:af0e781009aaf59e25c5a678122391cb0f345ac0ec272c7961dc5455e1c40066"}, + {file = "coverage-5.5-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:74d881fc777ebb11c63736622b60cb9e4aee5cace591ce274fb69e582a12a61a"}, + {file = "coverage-5.5-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:92b017ce34b68a7d67bd6d117e6d443a9bf63a2ecf8567bb3d8c6c7bc5014465"}, + {file = "coverage-5.5-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d636598c8305e1f90b439dbf4f66437de4a5e3c31fdf47ad29542478c8508bbb"}, + {file = "coverage-5.5-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:41179b8a845742d1eb60449bdb2992196e211341818565abded11cfa90efb821"}, + {file = "coverage-5.5-cp36-cp36m-win32.whl", hash = "sha256:040af6c32813fa3eae5305d53f18875bedd079960822ef8ec067a66dd8afcd45"}, + {file = "coverage-5.5-cp36-cp36m-win_amd64.whl", hash = "sha256:5fec2d43a2cc6965edc0bb9e83e1e4b557f76f843a77a2496cbe719583ce8184"}, + {file = "coverage-5.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:18ba8bbede96a2c3dde7b868de9dcbd55670690af0988713f0603f037848418a"}, + {file = "coverage-5.5-cp37-cp37m-manylinux1_i686.whl", hash = 
"sha256:2910f4d36a6a9b4214bb7038d537f015346f413a975d57ca6b43bf23d6563b53"}, + {file = "coverage-5.5-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:f0b278ce10936db1a37e6954e15a3730bea96a0997c26d7fee88e6c396c2086d"}, + {file = "coverage-5.5-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:796c9c3c79747146ebd278dbe1e5c5c05dd6b10cc3bcb8389dfdf844f3ead638"}, + {file = "coverage-5.5-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:53194af30d5bad77fcba80e23a1441c71abfb3e01192034f8246e0d8f99528f3"}, + {file = "coverage-5.5-cp37-cp37m-win32.whl", hash = "sha256:184a47bbe0aa6400ed2d41d8e9ed868b8205046518c52464fde713ea06e3a74a"}, + {file = "coverage-5.5-cp37-cp37m-win_amd64.whl", hash = "sha256:2949cad1c5208b8298d5686d5a85b66aae46d73eec2c3e08c817dd3513e5848a"}, + {file = "coverage-5.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:217658ec7187497e3f3ebd901afdca1af062b42cfe3e0dafea4cced3983739f6"}, + {file = "coverage-5.5-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1aa846f56c3d49205c952d8318e76ccc2ae23303351d9270ab220004c580cfe2"}, + {file = "coverage-5.5-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:24d4a7de75446be83244eabbff746d66b9240ae020ced65d060815fac3423759"}, + {file = "coverage-5.5-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:d1f8bf7b90ba55699b3a5e44930e93ff0189aa27186e96071fac7dd0d06a1873"}, + {file = "coverage-5.5-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:970284a88b99673ccb2e4e334cfb38a10aab7cd44f7457564d11898a74b62d0a"}, + {file = "coverage-5.5-cp38-cp38-win32.whl", hash = "sha256:01d84219b5cdbfc8122223b39a954820929497a1cb1422824bb86b07b74594b6"}, + {file = "coverage-5.5-cp38-cp38-win_amd64.whl", hash = "sha256:2e0d881ad471768bf6e6c2bf905d183543f10098e3b3640fc029509530091502"}, + {file = "coverage-5.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d1f9ce122f83b2305592c11d64f181b87153fc2c2bbd3bb4a3dde8303cfb1a6b"}, + {file = "coverage-5.5-cp39-cp39-manylinux1_i686.whl", hash = "sha256:13c4ee887eca0f4c5a247b75398d4114c37882658300e153113dafb1d76de529"}, + {file = "coverage-5.5-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:52596d3d0e8bdf3af43db3e9ba8dcdaac724ba7b5ca3f6358529d56f7a166f8b"}, + {file = "coverage-5.5-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2cafbbb3af0733db200c9b5f798d18953b1a304d3f86a938367de1567f4b5bff"}, + {file = "coverage-5.5-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:44d654437b8ddd9eee7d1eaee28b7219bec228520ff809af170488fd2fed3e2b"}, + {file = "coverage-5.5-cp39-cp39-win32.whl", hash = "sha256:d314ed732c25d29775e84a960c3c60808b682c08d86602ec2c3008e1202e3bb6"}, + {file = "coverage-5.5-cp39-cp39-win_amd64.whl", hash = "sha256:13034c4409db851670bc9acd836243aeee299949bd5673e11844befcb0149f03"}, + {file = "coverage-5.5-pp36-none-any.whl", hash = "sha256:f030f8873312a16414c0d8e1a1ddff2d3235655a2174e3648b4fa66b3f2f1079"}, + {file = "coverage-5.5-pp37-none-any.whl", hash = "sha256:2a3859cb82dcbda1cfd3e6f71c27081d18aa251d20a17d87d26d4cd216fb0af4"}, + {file = "coverage-5.5.tar.gz", hash = "sha256:ebe78fe9a0e874362175b02371bdfbee64d8edc42a044253ddf4ee7d3c15212c"}, +] +cryptography = [ + {file = "cryptography-3.4.8-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a00cf305f07b26c351d8d4e1af84ad7501eca8a342dedf24a7acb0e7b7406e14"}, + {file = "cryptography-3.4.8-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:f44d141b8c4ea5eb4dbc9b3ad992d45580c1d22bf5e24363f2fbf50c2d7ae8a7"}, + {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = 
"sha256:0a7dcbcd3f1913f664aca35d47c1331fce738d44ec34b7be8b9d332151b0b01e"}, + {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34dae04a0dce5730d8eb7894eab617d8a70d0c97da76b905de9efb7128ad7085"}, + {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1eb7bb0df6f6f583dd8e054689def236255161ebbcf62b226454ab9ec663746b"}, + {file = "cryptography-3.4.8-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:9965c46c674ba8cc572bc09a03f4c649292ee73e1b683adb1ce81e82e9a6a0fb"}, + {file = "cryptography-3.4.8-cp36-abi3-win32.whl", hash = "sha256:21ca464b3a4b8d8e86ba0ee5045e103a1fcfac3b39319727bc0fc58c09c6aff7"}, + {file = "cryptography-3.4.8-cp36-abi3-win_amd64.whl", hash = "sha256:3520667fda779eb788ea00080124875be18f2d8f0848ec00733c0ec3bb8219fc"}, + {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d2a6e5ef66503da51d2110edf6c403dc6b494cc0082f85db12f54e9c5d4c3ec5"}, + {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a305600e7a6b7b855cd798e00278161b681ad6e9b7eca94c721d5f588ab212af"}, + {file = "cryptography-3.4.8-pp36-pypy36_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:3fa3a7ccf96e826affdf1a0a9432be74dc73423125c8f96a909e3835a5ef194a"}, + {file = "cryptography-3.4.8-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:d9ec0e67a14f9d1d48dd87a2531009a9b251c02ea42851c060b25c782516ff06"}, + {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5b0fbfae7ff7febdb74b574055c7466da334a5371f253732d7e2e7525d570498"}, + {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94fff993ee9bc1b2440d3b7243d488c6a3d9724cc2b09cdb297f6a886d040ef7"}, + {file = "cryptography-3.4.8-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:8695456444f277af73a4877db9fc979849cd3ee74c198d04fc0776ebc3db52b9"}, + {file = "cryptography-3.4.8-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:cd65b60cfe004790c795cc35f272e41a3df4631e2fb6b35aa7ac6ef2859d554e"}, + {file = "cryptography-3.4.8.tar.gz", hash = "sha256:94cc5ed4ceaefcbe5bf38c8fba6a21fc1d365bb8fb826ea1688e3370b2e24a1c"}, +] +dataclasses = [ + {file = "dataclasses-0.8-py3-none-any.whl", hash = "sha256:0201d89fa866f68c8ebd9d08ee6ff50c0b255f8ec63a71c16fda7af82bb887bf"}, + {file = "dataclasses-0.8.tar.gz", hash = "sha256:8479067f342acf957dc82ec415d355ab5edb7e7646b90dc6e2fd1d96ad084c97"}, +] +docutils = [ + {file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"}, + {file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"}, +] +flake8 = [ + {file = "flake8-3.9.2-py2.py3-none-any.whl", hash = "sha256:bf8fd333346d844f616e8d47905ef3a3384edae6b4e9beb0c5101e25e3110907"}, + {file = "flake8-3.9.2.tar.gz", hash = "sha256:07528381786f2a6237b061f6e96610a4167b226cb926e2aa2b6b1d78057c576b"}, +] +flake8-black = [ + {file = "flake8-black-0.2.3.tar.gz", hash = "sha256:c199844bc1b559d91195ebe8620216f21ed67f2cc1ff6884294c91a0d2492684"}, + {file = "flake8_black-0.2.3-py3-none-any.whl", hash = "sha256:cc080ba5b3773b69ba102b6617a00cc4ecbad8914109690cfda4d565ea435d96"}, +] +flake8-import-order = [ + {file = "flake8-import-order-0.18.1.tar.gz", hash = "sha256:a28dc39545ea4606c1ac3c24e9d05c849c6e5444a50fb7e9cdd430fc94de6e92"}, + {file = 
"flake8_import_order-0.18.1-py2.py3-none-any.whl", hash = "sha256:90a80e46886259b9c396b578d75c749801a41ee969a235e163cfe1be7afd2543"}, +] +flake8-print = [ + {file = "flake8-print-4.0.0.tar.gz", hash = "sha256:5afac374b7dc49aac2c36d04b5eb1d746d72e6f5df75a6ecaecd99e9f79c6516"}, + {file = "flake8_print-4.0.0-py3-none-any.whl", hash = "sha256:6c0efce658513169f96d7a24cf136c434dc711eb00ebd0a985eb1120103fe584"}, +] +idna = [ + {file = "idna-3.2-py3-none-any.whl", hash = "sha256:14475042e284991034cb48e06f6851428fb14c4dc953acd9be9a5e95c7b6dd7a"}, + {file = "idna-3.2.tar.gz", hash = "sha256:467fbad99067910785144ce333826c71fb0e63a425657295239737f7ecd125f3"}, +] +importlib-metadata = [ + {file = "importlib_metadata-4.8.1-py3-none-any.whl", hash = "sha256:b618b6d2d5ffa2f16add5697cf57a46c76a56229b0ed1c438322e4e95645bd15"}, + {file = "importlib_metadata-4.8.1.tar.gz", hash = "sha256:f284b3e11256ad1e5d03ab86bb2ccd6f5339688ff17a4d797a0fe7df326f23b1"}, +] +iniconfig = [ + {file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"}, + {file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"}, +] +jeepney = [ + {file = "jeepney-0.7.1-py3-none-any.whl", hash = "sha256:1b5a0ea5c0e7b166b2f5895b91a08c14de8915afda4407fb5022a195224958ac"}, + {file = "jeepney-0.7.1.tar.gz", hash = "sha256:fa9e232dfa0c498bd0b8a3a73b8d8a31978304dcef0515adc859d4e096f96f4f"}, +] +keyring = [ + {file = "keyring-23.2.1-py3-none-any.whl", hash = "sha256:bd2145a237ed70c8ce72978b497619ddfcae640b6dcf494402d5143e37755c6e"}, + {file = "keyring-23.2.1.tar.gz", hash = "sha256:6334aee6073db2fb1f30892697b1730105b5e9a77ce7e61fca6b435225493efe"}, +] +mccabe = [ + {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"}, + {file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"}, +] +mypy-extensions = [ + {file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"}, + {file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"}, +] +packaging = [ + {file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"}, + {file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"}, +] +pathspec = [ + {file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"}, + {file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"}, +] +pkginfo = [ + {file = "pkginfo-1.7.1-py2.py3-none-any.whl", hash = "sha256:37ecd857b47e5f55949c41ed061eb51a0bee97a87c969219d144c0e023982779"}, + {file = "pkginfo-1.7.1.tar.gz", hash = "sha256:e7432f81d08adec7297633191bbf0bd47faf13cd8724c3a13250e51d542635bd"}, +] +pluggy = [ + {file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"}, + {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"}, +] +py = [ + {file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"}, + {file = "py-1.10.0.tar.gz", hash = 
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"}, +] +pycodestyle = [ + {file = "pycodestyle-2.7.0-py2.py3-none-any.whl", hash = "sha256:514f76d918fcc0b55c6680472f0a37970994e07bbb80725808c17089be302068"}, + {file = "pycodestyle-2.7.0.tar.gz", hash = "sha256:c389c1d06bf7904078ca03399a4816f974a1d590090fecea0c63ec26ebaf1cef"}, +] +pycparser = [ + {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, + {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, +] +pyflakes = [ + {file = "pyflakes-2.3.1-py2.py3-none-any.whl", hash = "sha256:7893783d01b8a89811dd72d7dfd4d84ff098e5eed95cfa8905b22bbffe52efc3"}, + {file = "pyflakes-2.3.1.tar.gz", hash = "sha256:f5bc8ecabc05bb9d291eb5203d6810b49040f6ff446a756326104746cc00c1db"}, +] +pygments = [ + {file = "Pygments-2.10.0-py3-none-any.whl", hash = "sha256:b8e67fe6af78f492b3c4b3e2970c0624cbf08beb1e493b2c99b9fa1b67a20380"}, + {file = "Pygments-2.10.0.tar.gz", hash = "sha256:f398865f7eb6874156579fdf36bc840a03cab64d1cde9e93d68f46a425ec52c6"}, +] +pyparsing = [ + {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, + {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, +] +pyside6 = [ + {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-macosx_10_14_x86_64.whl", hash = "sha256:619b6bb4a3e5451fe939983cb9f5155f2cded44c8449f60a1d250747a48f00f3"}, + {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-manylinux1_x86_64.whl", hash = "sha256:64d69fd2d38b47982c06619f9969b6ae5c632f21f8ff2432e4953b0179a21411"}, + {file = "PySide6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-none-win_amd64.whl", hash = "sha256:cbbe1a2bacca737b3000a8a18b0ba389ca93106800b10acb93660c1a5b6bfdec"}, +] +pytest = [ + {file = "pytest-6.2.5-py3-none-any.whl", hash = "sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"}, + {file = "pytest-6.2.5.tar.gz", hash = "sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89"}, +] +pytest-cov = [ + {file = "pytest-cov-2.12.1.tar.gz", hash = "sha256:261ceeb8c227b726249b376b8526b600f38667ee314f910353fa318caa01f4d7"}, + {file = "pytest_cov-2.12.1-py2.py3-none-any.whl", hash = "sha256:261bb9e47e65bd099c89c3edf92972865210c36813f80ede5277dceb77a4a62a"}, +] +pywin32-ctypes = [ + {file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"}, + {file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"}, +] +readme-renderer = [ + {file = "readme_renderer-29.0-py2.py3-none-any.whl", hash = "sha256:63b4075c6698fcfa78e584930f07f39e05d46f3ec97f65006e430b595ca6348c"}, + {file = "readme_renderer-29.0.tar.gz", hash = "sha256:92fd5ac2bf8677f310f3303aa4bce5b9d5f9f2094ab98c29f13791d7b805a3db"}, +] +regex = [ + {file = "regex-2021.8.28-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9d05ad5367c90814099000442b2125535e9d77581855b9bee8780f1b41f2b1a2"}, + {file = "regex-2021.8.28-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3bf1bc02bc421047bfec3343729c4bbbea42605bcfd6d6bfe2c07ade8b12d2a"}, + {file = "regex-2021.8.28-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f6a808044faae658f546dd5f525e921de9fa409de7a5570865467f03a626fc0"}, + {file = 
"regex-2021.8.28-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a617593aeacc7a691cc4af4a4410031654f2909053bd8c8e7db837f179a630eb"}, + {file = "regex-2021.8.28-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:79aef6b5cd41feff359acaf98e040844613ff5298d0d19c455b3d9ae0bc8c35a"}, + {file = "regex-2021.8.28-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0fc1f8f06977c2d4f5e3d3f0d4a08089be783973fc6b6e278bde01f0544ff308"}, + {file = "regex-2021.8.28-cp310-cp310-win32.whl", hash = "sha256:6eebf512aa90751d5ef6a7c2ac9d60113f32e86e5687326a50d7686e309f66ed"}, + {file = "regex-2021.8.28-cp310-cp310-win_amd64.whl", hash = "sha256:ac88856a8cbccfc14f1b2d0b829af354cc1743cb375e7f04251ae73b2af6adf8"}, + {file = "regex-2021.8.28-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c206587c83e795d417ed3adc8453a791f6d36b67c81416676cad053b4104152c"}, + {file = "regex-2021.8.28-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8690ed94481f219a7a967c118abaf71ccc440f69acd583cab721b90eeedb77c"}, + {file = "regex-2021.8.28-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:328a1fad67445550b982caa2a2a850da5989fd6595e858f02d04636e7f8b0b13"}, + {file = "regex-2021.8.28-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c7cb4c512d2d3b0870e00fbbac2f291d4b4bf2634d59a31176a87afe2777c6f0"}, + {file = "regex-2021.8.28-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66256b6391c057305e5ae9209941ef63c33a476b73772ca967d4a2df70520ec1"}, + {file = "regex-2021.8.28-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8e44769068d33e0ea6ccdf4b84d80c5afffe5207aa4d1881a629cf0ef3ec398f"}, + {file = "regex-2021.8.28-cp36-cp36m-win32.whl", hash = "sha256:08d74bfaa4c7731b8dac0a992c63673a2782758f7cfad34cf9c1b9184f911354"}, + {file = "regex-2021.8.28-cp36-cp36m-win_amd64.whl", hash = "sha256:abb48494d88e8a82601af905143e0de838c776c1241d92021e9256d5515b3645"}, + {file = "regex-2021.8.28-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b4c220a1fe0d2c622493b0a1fd48f8f991998fb447d3cd368033a4b86cf1127a"}, + {file = "regex-2021.8.28-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4a332404baa6665b54e5d283b4262f41f2103c255897084ec8f5487ce7b9e8e"}, + {file = "regex-2021.8.28-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c61dcc1cf9fd165127a2853e2c31eb4fb961a4f26b394ac9fe5669c7a6592892"}, + {file = "regex-2021.8.28-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ee329d0387b5b41a5dddbb6243a21cb7896587a651bebb957e2d2bb8b63c0791"}, + {file = "regex-2021.8.28-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f60667673ff9c249709160529ab39667d1ae9fd38634e006bec95611f632e759"}, + {file = "regex-2021.8.28-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b844fb09bd9936ed158ff9df0ab601e2045b316b17aa8b931857365ea8586906"}, + {file = "regex-2021.8.28-cp37-cp37m-win32.whl", hash = "sha256:4cde065ab33bcaab774d84096fae266d9301d1a2f5519d7bd58fc55274afbf7a"}, + {file = "regex-2021.8.28-cp37-cp37m-win_amd64.whl", hash = 
"sha256:1413b5022ed6ac0d504ba425ef02549a57d0f4276de58e3ab7e82437892704fc"}, + {file = "regex-2021.8.28-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ed4b50355b066796dacdd1cf538f2ce57275d001838f9b132fab80b75e8c84dd"}, + {file = "regex-2021.8.28-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28fc475f560d8f67cc8767b94db4c9440210f6958495aeae70fac8faec631797"}, + {file = "regex-2021.8.28-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc178caebd0f338d57ae445ef8e9b737ddf8fbc3ea187603f65aec5b041248f"}, + {file = "regex-2021.8.28-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:999ad08220467b6ad4bd3dd34e65329dd5d0df9b31e47106105e407954965256"}, + {file = "regex-2021.8.28-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:808ee5834e06f57978da3e003ad9d6292de69d2bf6263662a1a8ae30788e080b"}, + {file = "regex-2021.8.28-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d5111d4c843d80202e62b4fdbb4920db1dcee4f9366d6b03294f45ed7b18b42e"}, + {file = "regex-2021.8.28-cp38-cp38-win32.whl", hash = "sha256:473858730ef6d6ff7f7d5f19452184cd0caa062a20047f6d6f3e135a4648865d"}, + {file = "regex-2021.8.28-cp38-cp38-win_amd64.whl", hash = "sha256:31a99a4796bf5aefc8351e98507b09e1b09115574f7c9dbb9cf2111f7220d2e2"}, + {file = "regex-2021.8.28-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:04f6b9749e335bb0d2f68c707f23bb1773c3fb6ecd10edf0f04df12a8920d468"}, + {file = "regex-2021.8.28-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b006628fe43aa69259ec04ca258d88ed19b64791693df59c422b607b6ece8bb"}, + {file = "regex-2021.8.28-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:121f4b3185feaade3f85f70294aef3f777199e9b5c0c0245c774ae884b110a2d"}, + {file = "regex-2021.8.28-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a577a21de2ef8059b58f79ff76a4da81c45a75fe0bfb09bc8b7bb4293fa18983"}, + {file = "regex-2021.8.28-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1743345e30917e8c574f273f51679c294effba6ad372db1967852f12c76759d8"}, + {file = "regex-2021.8.28-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:e1e8406b895aba6caa63d9fd1b6b1700d7e4825f78ccb1e5260551d168db38ed"}, + {file = "regex-2021.8.28-cp39-cp39-win32.whl", hash = "sha256:ed283ab3a01d8b53de3a05bfdf4473ae24e43caee7dcb5584e86f3f3e5ab4374"}, + {file = "regex-2021.8.28-cp39-cp39-win_amd64.whl", hash = "sha256:610b690b406653c84b7cb6091facb3033500ee81089867ee7d59e675f9ca2b73"}, + {file = "regex-2021.8.28.tar.gz", hash = "sha256:f585cbbeecb35f35609edccb95efd95a3e35824cd7752b586503f7e6087303f1"}, +] +requests = [ + {file = "requests-2.26.0-py2.py3-none-any.whl", hash = "sha256:6c1246513ecd5ecd4528a0906f910e8f0f9c6b8ec72030dc9fd154dc1a6efd24"}, + {file = "requests-2.26.0.tar.gz", hash = "sha256:b8aa58f8cf793ffd8782d3d8cb19e66ef36f7aba4353eec859e74678b01b07a7"}, +] +requests-toolbelt = [ + {file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"}, + {file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"}, +] +rfc3986 = [ + {file = "rfc3986-1.5.0-py2.py3-none-any.whl", hash = 
"sha256:a86d6e1f5b1dc238b218b012df0aa79409667bb209e58da56d0b94704e712a97"}, + {file = "rfc3986-1.5.0.tar.gz", hash = "sha256:270aaf10d87d0d4e095063c65bf3ddbc6ee3d0b226328ce21e036f946e421835"}, +] +secretstorage = [ + {file = "SecretStorage-3.3.1-py3-none-any.whl", hash = "sha256:422d82c36172d88d6a0ed5afdec956514b189ddbfb72fefab0c8a1cee4eaf71f"}, + {file = "SecretStorage-3.3.1.tar.gz", hash = "sha256:fd666c51a6bf200643495a04abb261f83229dcb6fd8472ec393df7ffc8b6f195"}, +] +shiboken6 = [ + {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-macosx_10_14_x86_64.whl", hash = "sha256:7d285cb61dca4d543ea1ccd6f9bad1b555f222232dfc7240ebc7df83e21fb805"}, + {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-abi3-manylinux1_x86_64.whl", hash = "sha256:c44d406263c1cbd1cb92efca02790c39b1563a25e888e3e5c7291aabd2363aee"}, + {file = "shiboken6-6.1.3-6.1.3-cp36.cp37.cp38.cp39-none-win_amd64.whl", hash = "sha256:10023b1b6b27318db74a2232e808c8e5691ec1ff39be89d9878fc60e0adbb690"}, +] +six = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] +toml = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] +tqdm = [ + {file = "tqdm-4.62.3-py2.py3-none-any.whl", hash = "sha256:8dd278a422499cd6b727e6ae4061c40b48fce8b76d1ccbf5d34fca9b7f925b0c"}, + {file = "tqdm-4.62.3.tar.gz", hash = "sha256:d359de7217506c9851b7869f3708d8ee53ed70a1b8edbba4dbcb47442592920d"}, +] +twine = [ + {file = "twine-3.4.2-py3-none-any.whl", hash = "sha256:087328e9bb405e7ce18527a2dca4042a84c7918658f951110b38bc135acab218"}, + {file = "twine-3.4.2.tar.gz", hash = "sha256:4caec0f1ed78dc4c9b83ad537e453d03ce485725f2aea57f1bb3fdde78dae936"}, +] +typed-ast = [ + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6"}, + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075"}, + {file = "typed_ast-1.4.3-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528"}, + {file = "typed_ast-1.4.3-cp35-cp35m-win32.whl", hash = "sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428"}, + {file = "typed_ast-1.4.3-cp35-cp35m-win_amd64.whl", hash = "sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3"}, + {file = "typed_ast-1.4.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace"}, + {file = "typed_ast-1.4.3-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f"}, + {file = "typed_ast-1.4.3-cp36-cp36m-win32.whl", hash = "sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363"}, + {file = "typed_ast-1.4.3-cp36-cp36m-win_amd64.whl", hash = 
"sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7"}, + {file = "typed_ast-1.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04"}, + {file = "typed_ast-1.4.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899"}, + {file = "typed_ast-1.4.3-cp37-cp37m-win32.whl", hash = "sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c"}, + {file = "typed_ast-1.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805"}, + {file = "typed_ast-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41"}, + {file = "typed_ast-1.4.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39"}, + {file = "typed_ast-1.4.3-cp38-cp38-win32.whl", hash = "sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927"}, + {file = "typed_ast-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40"}, + {file = "typed_ast-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0"}, + {file = "typed_ast-1.4.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3"}, + {file = "typed_ast-1.4.3-cp39-cp39-win32.whl", hash = "sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808"}, + {file = "typed_ast-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c"}, + {file = "typed_ast-1.4.3.tar.gz", hash = "sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"}, +] +typing-extensions = [ + {file = "typing_extensions-3.10.0.2-py2-none-any.whl", hash = "sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7"}, + {file = "typing_extensions-3.10.0.2-py3-none-any.whl", hash = "sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"}, + {file = "typing_extensions-3.10.0.2.tar.gz", hash = "sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e"}, +] +urllib3 = [ + {file = "urllib3-1.26.7-py2.py3-none-any.whl", hash = "sha256:c4fdf4019605b6e5423637e01bc9fe4daef873709a7973e195ceba0a62bbc844"}, + {file = "urllib3-1.26.7.tar.gz", hash = "sha256:4987c65554f7a2dbf30c18fd48778ef124af6fab771a377103da0585e2336ece"}, +] +webencodings = [ + {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = 
"sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"}, + {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"}, +] +zipp = [ + {file = "zipp-3.5.0-py3-none-any.whl", hash = "sha256:957cfda87797e389580cb8b9e3870841ca991e2125350677b2ca83a0e99390a3"}, + {file = "zipp-3.5.0.tar.gz", hash = "sha256:f5812b1e007e48cff63449a5e9f4e7ebea716b4111f9c4f9a645f91d579bf0c4"}, +] diff --git a/pyproject.toml b/pyproject.toml index dc021ac..88e7bac 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,24 +1,25 @@ [tool.poetry] name = "observ" -version = "0.3.0" -description = "" +version = "0.4.0" +description = "Reactive state management for Python" authors = ["Korijn van Golen <[email protected]>"] license = "MIT" homepage = "https://github.com/Korijn/observ" readme = "README.md" [tool.poetry.dependencies] -python = ">=3.6" +python = ">=3.6,<3.10" [tool.poetry.dev-dependencies] -black = "^19.10b0" -flake8 = "^3.8.3" -flake8-black = "^0.2.0" -flake8-import-order = "^0.18.1" -flake8-print = "^3.1.4" -pytest = "^5.4.3" -pytest-cov = "^2.10.0" -# PyQt5 = "^5.15.4" # needed for PyQt example +black = "*" +flake8 = "*" +flake8-black = "*" +flake8-import-order = "*" +flake8-print = "*" +pytest = "*" +pytest-cov = "*" +PySide6 = "*" +twine = "*" [build-system] requires = ["poetry>=0.12"] diff --git a/scripts.py b/scripts.py deleted file mode 100644 index 837ac1d..0000000 --- a/scripts.py +++ /dev/null @@ -1,24 +0,0 @@ -from subprocess import CalledProcessError, run -import sys - - -def test(args): - try: - run(["flake8"], check=True) - run( - ["pytest", "--cov=observ", "--cov-report=term-missing"] + args, check=True, - ) - except CalledProcessError: - sys.exit(1) - - -def main(): - cmd = sys.argv[0] - if cmd == "test": - test(sys.argv[1:]) - else: - sys.exit(1) - - -if __name__ == "__main__": - main() diff --git a/setup.cfg b/setup.cfg index 8fe7c74..769c63e 100644 --- a/setup.cfg +++ b/setup.cfg @@ -5,5 +5,7 @@ max-line-length = 88 extend-ignore = # See https://github.com/PyCQA/pycodestyle/issues/373 E203, -application-import-names = pylinalg -import-order-style = google \ No newline at end of file +application-import-names = observ +import-order-style = google +per-file-ignores = + observ/__init__.py:F401,F403
Callback scheduling

Should callbacks be buffered? Should I provide event loop integration for various libs (e.g. asyncio, Qt, etc.)?
I could imagine that if you make mutations inside a function, you'd expect everything in the function to complete before the observers respond. Otherwise I could expect some weird race conditions.

> I could imagine that if you make mutations inside a function, you'd expect everything in the function to complete before the observers respond. Otherwise I could expect some weird race conditions.

Yeah, I am pretty sure it's possible to cause infinite loops right now, if you work with watchers only. Lazy computed variables, not so much.
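A minimal sketch of what buffered scheduling could look like (all names here are hypothetical, not observ's actual API): watcher callbacks are queued and deduplicated until the outermost batch of mutations finishes, which both defers observers until the function completes and breaks the simplest infinite-loop cycles. Event-loop integration would then amount to deferring `flush()` to the loop (e.g. asyncio's `call_soon`) instead of calling it synchronously.

```python
from collections import OrderedDict


class Scheduler:
    """Hypothetical sketch, not observ's real scheduler: buffer watcher
    callbacks so all mutations in a batch complete before observers run."""

    def __init__(self):
        self._queue = OrderedDict()  # watcher id -> callback (deduplicated)
        self._depth = 0              # nesting depth of open batches

    def schedule(self, watcher_id, callback):
        # Re-scheduling the same watcher within one batch is a no-op
        self._queue.setdefault(watcher_id, callback)
        if self._depth == 0:
            self.flush()

    def __enter__(self):
        self._depth += 1
        return self

    def __exit__(self, *exc):
        self._depth -= 1
        if self._depth == 0:
            self.flush()

    def flush(self):
        # Drain FIFO; callbacks that schedule more work extend the queue
        while self._queue:
            _, callback = self._queue.popitem(last=False)
            callback()


scheduler = Scheduler()

# Usage: mutations inside the `with` block only notify afterwards,
# and the duplicate schedule for watcher 1 is collapsed into one call.
with scheduler:
    scheduler.schedule(1, lambda: print("watcher 1 fired"))
    scheduler.schedule(1, lambda: print("never fires twice"))
    scheduler.schedule(2, lambda: print("watcher 2 fired"))
```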
2021-09-21T18:48:28
0.0
[]
[]
raphaelvallat/yasa
raphaelvallat__yasa-126
9be559d23639d9cc59f1a23d4eb7fd366f4bb32b
diff --git a/docs/changelog.rst b/docs/changelog.rst index 6bb1bcb..e4fb211 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -11,12 +11,12 @@ What's new v0.7.dev -------- -**Object-oriented hypnogram manipulation** +**Object-oriented hypnogram** -This version introduces the new :py:class:`yasa.Hypnogram` class, which will progressively become +This version introduces the new :py:class:`yasa.Hypnogram` class, which will gradually become the standard way to store and manipulate hypnograms in YASA. Put simply, YASA now uses an object-oriented approach to hypnograms. That is, hypnograms are now stored as a class (aka object), -which comes with several pre-built functions (named methods) and attributes. See for example below: +which comes with several pre-built functions (aka methods) and attributes. See for example below: .. code-block:: python @@ -25,7 +25,7 @@ which comes with several pre-built functions (named methods) and attributes. See values = ["W", "W", "W", "S", "S", "S", "S", "S", "W", "S", "S", "S"] hyp = Hypnogram(values, n_stages=2, start="2022-12-23 22:30:00", scorer="RM") # Below are some class attributes - hyp.hypno # Hypnogram values (pandas Series) + hyp.hypno # Hypnogram values (a pandas.Series of categorical dtype) hyp.duration # Total duration of the hypnogram, in minutes hyp.sampling_frequency # Sampling frequency of the hypnogram hyp.mapping # Mapping from strings to integers diff --git a/yasa/hypno.py b/yasa/hypno.py index f3d6f92..81b0c80 100644 --- a/yasa/hypno.py +++ b/yasa/hypno.py @@ -9,6 +9,7 @@ from yasa.io import set_log_level from yasa.plotting import plot_hypnogram from yasa.sleepstats import transition_matrix +from pandas.api.types import CategoricalDtype __all__ = [ "Hypnogram", @@ -18,7 +19,7 @@ "hypno_upsample_to_data", "hypno_find_periods", "load_profusion_hypno", - "simulate_hypno", + "simulate_hypnogram", ] @@ -29,13 +30,13 @@ class Hypnogram: """ Main class for manipulating hypnogram in YASA. - Starting with YASA v0.7, YASA takes a more object-oriented approach to hypnograms. That is, + Starting with v0.7, YASA takes a more object-oriented approach to hypnograms. That is, hypnograms are now stored as a class (aka object), which comes with its own attributes and functions. Furthermore, YASA does not allow integer values to define the stages anymore. Instead, users must pass an array of strings with the actual stage names (e.g. ["WAKE", "WAKE", "N1", ..., "REM", "REM"]). - .. seealso:: :py:func:`yasa.simulate_hypno` + .. seealso:: :py:func:`yasa.simulate_hypnogram` .. versionadded:: 0.7.0 @@ -96,7 +97,8 @@ class Hypnogram: 9 SLEEP 10 SLEEP 11 SLEEP - Name: Stage, dtype: object + Name: Stage, dtype: category + Categories (4, object): ['WAKE', 'SLEEP', 'ART', 'UNS'] >>> # Number of epochs in the hypnogram >>> hyp.n_epochs @@ -125,7 +127,7 @@ class Hypnogram: 9 1 10 1 11 1 - Name: Stage, dtype: int64 + Name: Stage, dtype: int16 >>> # Calculate the summary sleep statistics >>> hyp.sleep_statistics() @@ -149,15 +151,15 @@ class Hypnogram: SLEEP 1 6 All these methods and properties are also valid with a 5-stages hypnogram. In the example below, - we use the :py:func:`yasa.simulate_hypno` to generate a plausible 5-stages hypnogram with a + we use the :py:func:`yasa.simulate_hypnogram` to generate a plausible 5-stages hypnogram with a 30-seconds resolution. A random seed is specified to ensure that we get reproducible results. Lastly, we set an actual start time to the hypnogram. 
As a result, the index of the resulting hypnogram is a :py:class:`pandas.DatetimeIndex`. - >>> from yasa import simulate_hypno - >>> hyp = simulate_hypno( + >>> from yasa import simulate_hypnogram + >>> hyp = simulate_hypnogram( ... tib=500, n_stages=5, start="2022-12-15 22:30:00", scorer="S1", seed=42) - >>> hyp + >>> hyp # This is a shortcut to `hyp.hypno` Time 2022-12-15 22:30:00 WAKE 2022-12-15 22:30:30 WAKE @@ -170,7 +172,8 @@ class Hypnogram: 2022-12-16 06:48:30 N2 2022-12-16 06:49:00 N2 2022-12-16 06:49:30 N2 - Freq: 30S, Name: S1, Length: 1000, dtype: object + Freq: 30S, Name: S1, Length: 1000, dtype: category + Categories (7, object): ['WAKE', 'N1', 'N2', 'N3', 'REM', 'ART', 'UNS'] The summary sleep statistics will include more items with a 5-stages hypnogram than a 2-stages hypnogram, i.e. the amount and percentage of each sleep stage, the REM latency, etc. @@ -238,6 +241,10 @@ def __init__(self, values, n_stages=5, *, freq="30s", start=None, scorer=None): map_accepted = {"S": "SLEEP", "W": "WAKE", "R": "REM"} hypno = hypno.replace(map_accepted) labels = pd.Series(accepted).replace(map_accepted).unique().tolist() + # Change dtype of series to "categorical" (reduces memory) + cat_dtype = CategoricalDtype(labels, ordered=False) + hypno = hypno.astype(cat_dtype) + # Create Index if start is not None: hypno.index = pd.date_range(start=start, freq=freq, periods=hypno.shape[0]) hypno.index.name = "Time" @@ -246,6 +253,7 @@ def __init__(self, values, n_stages=5, *, freq="30s", start=None, scorer=None): fake_dt = pd.date_range(start="2022-12-03 00:00:00", freq=freq, periods=hypno.shape[0]) hypno.index.name = "Epoch" timedelta = fake_dt - fake_dt[0] + # Set attributes self._hypno = hypno self._n_epochs = hypno.shape[0] self._freq = freq @@ -266,7 +274,14 @@ def __str__(self): @property def hypno(self): - """The hypnogram values, stored in a :py:class:`pandas.Series`.""" + """ + The hypnogram values, stored in a :py:class:`pandas.Series`. To reduce memory usage, the + stages are stored as categories (:py:class:`pandas.Categorical`). ``hypno`` + inherits all the methods of a standard :py:class:`pandas.Series`, e.g. ``.describe()``, + ``.unique()``, ``.to_csv()``, and more. + + .. note:: ``print(Hypnogram)`` is a shortcut to ``print(Hypnogram.hypno)``. + """ return self._hypno @property @@ -306,7 +321,8 @@ def duration(self): def n_stages(self): """ The number of allowed stages in the hypnogram. This is not the number of unique stages - in the current hypnogram. + in the current hypnogram. This does not include Artefact and Unscored which are always + allowed. """ return self._n_stages @@ -345,16 +361,6 @@ def scorer(self): # CLASS METHODS BELOW - def copy(self): - """Return a new copy of the current Hypnogram.""" - return type(self)( - values=self.hypno.to_numpy(), - n_stages=self.n_stages, - freq=self.freq, - start=self.start, - scorer=self.scorer, - ) - def as_annotations(self): """ Return a pandas DataFrame summarizing epoch-level information. @@ -397,7 +403,7 @@ def as_annotations(self): return pd.DataFrame(data).set_index("epoch") def as_int(self): - """Return hypnogram as integers. + """Return hypnogram values as integers. 
The default mapping from string to integer is: @@ -412,7 +418,7 @@ def as_int(self): Examples -------- - Convert a 2-stages hypnogram to integers + Convert a 2-stages hypnogram to a pandas.Series of integers >>> from yasa import Hypnogram >>> hyp = Hypnogram(["W", "W", "S", "S", "W", "S"], n_stages=2) @@ -424,7 +430,7 @@ def as_int(self): 3 1 4 0 5 1 - Name: Stage, dtype: int64 + Name: Stage, dtype: int16 Same with a 4-stages hypnogram @@ -439,9 +445,10 @@ def as_int(self): 4 3 5 4 6 0 - Name: Stage, dtype: int64 + Name: Stage, dtype: int16 """ - return self.hypno.replace(self.mapping).astype(int) + # Return as int16 (-32768 to 32767) to reduce memory usage + return self.hypno.replace(self.mapping).astype(np.int16) def consolidate_stages(self, new_n_stages): """Reduce the number of stages in a hypnogram to match actigraphy or wearables. @@ -486,7 +493,8 @@ def consolidate_stages(self, new_n_stages): 5 SLEEP 6 SLEEP 7 WAKE - Name: Stage, dtype: object + Name: Stage, dtype: category + Categories (4, object): ['WAKE', 'SLEEP', 'ART', 'UNS'] """ assert self.n_stages in [3, 4, 5], "`self.n_stages` must be 3, 4, or 5" assert new_n_stages in [2, 3, 4], "`new_n_stages` must be 2, 3, or 4" @@ -520,6 +528,16 @@ def consolidate_stages(self, new_n_stages): scorer=self.scorer, ) + def copy(self): + """Return a new copy of the current Hypnogram.""" + return type(self)( + values=self.hypno.to_numpy(), + n_stages=self.n_stages, + freq=self.freq, + start=self.start, + scorer=self.scorer, + ) + def find_periods(self, threshold="5min", equal_length=False): """Find sequences of consecutive values exceeding a certain duration in hypnogram. @@ -579,8 +597,8 @@ def find_periods(self, threshold="5min", equal_length=False): This function is not limited to binary arrays, e.g. a 5-stages hypnogram at 30-sec resolution: - >>> from yasa import simulate_hypno - >>> hyp = simulate_hypno(tib=30, seed=42) + >>> from yasa import simulate_hypnogram + >>> hyp = simulate_hypnogram(tib=30, seed=42) >>> hyp.find_periods(threshold="2min") values start length 0 WAKE 0 5 @@ -626,21 +644,21 @@ def plot_hypnogram(self, **kwargs): .. plot:: >>> from yasa import simulate_hypnogram - >>> ax = simulate_hypno(tib=480, seed=88).plot_hypnogram(highlight="REM"") + >>> ax = simulate_hypnogram(tib=480, seed=88).plot_hypnogram(highlight="REM") """ return plot_hypnogram(self, **kwargs) def simulate_similar(self, **kwargs): """Simulate a new hypnogram based on properties of the current hypnogram. - .. seealso:: :py:func:`yasa.simulate_hypno` + .. seealso:: :py:func:`yasa.simulate_hypnogram` Parameters ---------- self : :py:class:`yasa.Hypnogram` Hypnogram, assumed to be already cropped to time in bed (TIB). **kwargs : dict - Optional keyword arguments passed to :py:func:`yasa.simulate_hypno`. + Optional keyword arguments passed to :py:func:`yasa.simulate_hypnogram`. Returns ------- @@ -675,7 +693,7 @@ def simulate_similar(self, **kwargs): "`n_stages` and `freq` cannot be included as additional `**kwargs` " "because they must match properties of the current Hypnogram." 
) - simulate_hypno_kwargs = { + simulate_hypnogram_kwargs = { "tib": self.duration, "n_stages": self.n_stages, "freq": self.freq, @@ -683,8 +701,8 @@ def simulate_similar(self, **kwargs): "start": self.start, "scorer": self.scorer, } - simulate_hypno_kwargs.update(kwargs) - return simulate_hypno(**simulate_hypno_kwargs) + simulate_hypnogram_kwargs.update(kwargs) + return simulate_hypnogram(**simulate_hypnogram_kwargs) def sleep_statistics(self): """ @@ -755,9 +773,9 @@ def sleep_statistics(self): Sleep statistics for a 5-stages hypnogram - >>> from yasa import simulate_hypno + >>> from yasa import simulate_hypnogram >>> # Generate a 8 hr (= 480 minutes) 5-stages hypnogram with a 30-seconds resolution - >>> hyp = simulate_hypno(tib=480, seed=42) + >>> hyp = simulate_hypnogram(tib=480, seed=42) >>> hyp.sleep_statistics() {'TIB': 480.0, 'SPT': 477.5, @@ -886,9 +904,9 @@ def transition_matrix(self): Examples -------- - >>> from yasa import Hypnogram, simulate_hypno + >>> from yasa import Hypnogram, simulate_hypnogram >>> # Generate a 8 hr (= 480 minutes) 5-stages hypnogram with a 30-seconds resolution - >>> hyp = simulate_hypno(tib=480, seed=42) + >>> hyp = simulate_hypnogram(tib=480, seed=42) >>> counts, probs = hyp.transition_matrix() >>> counts To Stage WAKE N1 N2 N3 REM @@ -946,7 +964,8 @@ def upsample(self, new_freq, **kwargs): 2022-12-23 23:01:00 SLEEP 2022-12-23 23:01:30 SLEEP 2022-12-23 23:02:00 WAKE - Freq: 30S, Name: Stage, dtype: object + Freq: 30S, Name: Stage, dtype: category + Categories (4, object): ['WAKE', 'SLEEP', 'ART', 'UNS'] Upsample to a 15-seconds resolution @@ -963,7 +982,8 @@ def upsample(self, new_freq, **kwargs): 2022-12-23 23:01:45 SLEEP 2022-12-23 23:02:00 WAKE 2022-12-23 23:02:15 WAKE - Freq: 15S, Name: Stage, dtype: object + Freq: 15S, Name: Stage, dtype: category + Categories (4, object): ['WAKE', 'SLEEP', 'ART', 'UNS'] """ assert pd.Timedelta(new_freq) < pd.Timedelta(self.freq), ( f"The upsampling `new_freq` ({new_freq}) must be higher than the current frequency of " @@ -1497,7 +1517,7 @@ def hypno_find_periods(hypno, sf_hypno, threshold="5min", equal_length=False): ############################################################################# -def simulate_hypno( +def simulate_hypnogram( tib=480, trans_probas=None, init_probas=None, @@ -1577,8 +1597,8 @@ def simulate_hypno( Examples -------- - >>> from yasa import simulate_hypno - >>> hyp = simulate_hypno(tib=5, seed=1) + >>> from yasa import simulate_hypnogram + >>> hyp = simulate_hypnogram(tib=5, seed=1) >>> print(hyp) Epoch 0 WAKE @@ -1593,7 +1613,7 @@ def simulate_hypno( 9 N2 Name: Stage, dtype: object - >>> hyp = simulate_hypno(tib=5, n_stages=2, seed=1) + >>> hyp = simulate_hypnogram(tib=5, n_stages=2, seed=1) >>> print(hyp) Epoch 0 WAKE @@ -1610,7 +1630,7 @@ def simulate_hypno( Add some Unscored epochs. - >>> hyp = simulate_hypno(tib=5, n_stages=2, seed=1) + >>> hyp = simulate_hypnogram(tib=5, n_stages=2, seed=1) >>> hyp.hypno.iloc[-2:] = "UNS" >>> print(hyp) Epoch diff --git a/yasa/plotting.py b/yasa/plotting.py index 39d08c7..87ba2f0 100644 --- a/yasa/plotting.py +++ b/yasa/plotting.py @@ -41,9 +41,9 @@ def plot_hypnogram(hyp, lw=1.5, highlight="REM", fill_color=None, ax=None): -------- .. 
plot:: - >>> from yasa import simulate_hypno + >>> from yasa import simulate_hypnogram >>> import matplotlib.pyplot as plt - >>> hyp = simulate_hypno(tib=300, seed=11) + >>> hyp = simulate_hypnogram(tib=300, seed=11) >>> ax = hyp.plot_hypnogram() >>> plt.tight_layout() @@ -58,8 +58,8 @@ def plot_hypnogram(hyp, lw=1.5, highlight="REM", fill_color=None, ax=None): .. plot:: >>> fig, axes = plt.subplots(nrows=2, figsize=(6, 4), constrained_layout=True) - >>> hyp_a = simulate_hypno(n_stages=3, seed=99) - >>> hyp_b = simulate_hypno(n_stages=3, seed=99, start="2022-01-31 23:30:00") + >>> hyp_a = simulate_hypnogram(n_stages=3, seed=99) + >>> hyp_b = simulate_hypnogram(n_stages=3, seed=99, start="2022-01-31 23:30:00") >>> hyp_a.plot_hypnogram(lw=1, fill_color="whitesmoke", highlight=None, ax=axes[0]) >>> hyp_b.plot_hypnogram(lw=1, fill_color="whitesmoke", highlight=None, ax=axes[1]) """
Hypnogram.hypno as a pd.Categorical?

@remrama, related to our discussion about the output of `yasa.Hypnogram.upsample_to_data` in #124, I was thinking that we could in fact set the default dtype of `hyp.hypno` as a [pandas.Categorical](https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.Categorical.html). This reduces the memory size of an upsampled hypnogram (5 stages, 8 hour TIB, freq="10ms") from 22 MB (dtype=str or dtype=np.int64) to 2.7 MB.

```python
pd.Categorical(hyp.hypno, categories=hyp.labels, ordered=False)
```
Oh, great idea!! I didn't realize that drops memory so much. Seems like a "go" at first thought, though there might be some subtle and unforeseen limitations. I can't remember any specific examples, but this has been somewhat of my prior experience with `pandas.Categorical`. I think it's definitely worth trying.

Great! I also had some prior issues, especially when using groupby functions. But I don't think it applies to us, so let's make the switch! I can create a separate PR once we've merged #124.
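For what it's worth, the savings are easy to sanity-check outside of yasa with a standalone snippet (made-up data; only the standard pandas API is assumed):

```python
import numpy as np
import pandas as pd

# Simulated upsampled 5-stage hypnogram: 8 hours at 10 ms resolution
rng = np.random.default_rng(42)
labels = ["WAKE", "N1", "N2", "N3", "REM"]
stages = rng.choice(labels, size=8 * 60 * 60 * 100)  # ~2.9M epochs

as_object = pd.Series(stages, dtype=object)
as_category = as_object.astype(pd.CategoricalDtype(labels, ordered=False))

# Categorical stores small integer codes plus one copy of each label,
# instead of one Python string object per epoch.
print(f"object dtype:      {as_object.memory_usage(deep=True) / 1e6:.1f} MB")
print(f"categorical dtype: {as_category.memory_usage(deep=True) / 1e6:.1f} MB")
```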
2022-12-30T17:48:26
0.0
[]
[]
raphaelvallat/yasa
raphaelvallat__yasa-75
65f033fdcca8b1590808edad25c7a53c713ffa19
diff --git a/docs/changelog.rst b/docs/changelog.rst index 9a56502..b41f127 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -22,9 +22,10 @@ c. Added a new dataset containing 8 hours of ECG data. The dataset is in compres a. Added the :py:func:`yasa.compare_detection` function to determine the correctness of detected events against ground-truth events. It calculates the true positive, false positives and false negatives, and from those, the precision, recall and F1-scores. The input should be the indices of the onset of the event, in samples. It includes a max_distance argument which specifies the tolerance window (in number of samples) for two events to be considered the same. b. Added the :py:meth:`yasa.SpindlesResults.compare_detection` and :py:meth:`yasa.SWResults.compare_detection` method. This is a powerful and flexible function that allows to calculate the performance of the current detection against a) another detection or b) ground-truth annotations. For example, we can compare the output of the spindles detection with different thresholds. c. Added the :py:meth:`yasa.SpindlesResults.compare_channels` and :py:meth:`yasa.SWResults.compare_channels` methods to compare the overlap of the detected events between channels. Agreement is calculated using the F1-score (default), precision or recall. -d. Better handling of flat data in :py:func:`yasa.spindles_detect`. The function previously returned a division by zero error if part of the data was flat. See `issue 85 <https://github.com/raphaelvallat/yasa/issues/85>`_ -e. When using an MNE.Raw object, conversion of the data from Volts to micro-Volts is now performed within MNE. `PR 70 <https://github.com/raphaelvallat/yasa/pull/70>`_ -f. Use `black <https://black.readthedocs.io/en/stable/>`_ code formatting. +d. Add ``vmin`` and ``vmax`` parameters to :py:func:`yasa.plot_spectrogram`. `PR 75 <https://github.com/raphaelvallat/yasa/pull/75>`_ +e. Better handling of flat data in :py:func:`yasa.spindles_detect`. The function previously returned a division by zero error if part of the data was flat. See `issue 85 <https://github.com/raphaelvallat/yasa/issues/85>`_ +f. When using an MNE.Raw object, conversion of the data from Volts to micro-Volts is now performed within MNE. `PR 70 <https://github.com/raphaelvallat/yasa/pull/70>`_ +g. Use `black <https://black.readthedocs.io/en/stable/>`_ code formatting. **Others** diff --git a/yasa/plotting.py b/yasa/plotting.py index 18095a9..3207e40 100644 --- a/yasa/plotting.py +++ b/yasa/plotting.py @@ -111,7 +111,16 @@ def plot_hypnogram(hypno, sf_hypno=1 / 30, lw=1.5, figsize=(9, 3)): def plot_spectrogram( - data, sf, hypno=None, win_sec=30, fmin=0.5, fmax=25, trimperc=2.5, cmap="RdBu_r" + data, + sf, + hypno=None, + win_sec=30, + fmin=0.5, + fmax=25, + trimperc=2.5, + cmap="RdBu_r", + vmin=None, + vmax=None, ): """ Plot a full-night multi-taper spectrogram, optionally with the hypnogram on top. @@ -158,6 +167,10 @@ def plot_spectrogram( are defined as the 2.5 and 97.5 percentiles of the spectrogram. cmap : str Colormap. Default to 'RdBu_r'. + vmin : int or float + The lower range of color scale. Overwrites ``trimperc`` + vmax : int or float + The upper range of color scale. Overwrites ``trimperc`` Returns ------- @@ -212,6 +225,12 @@ def plot_spectrogram( assert isinstance(fmax, (int, float)), "fmax must be int or float." assert fmin < fmax, "fmin must be strictly inferior to fmax." assert fmax < sf / 2, "fmax must be less than Nyquist (sf / 2)." 
+ assert isinstance(vmin, (int, float, type(None))), "vmin must be int, float, or None." + assert isinstance(vmax, (int, float, type(None))), "vmax must be int, float, or None." + if vmin is not None: + assert isinstance(vmax, (int, float)), "vmax must be int or float if vmin is provided" + if vmax is not None: + assert isinstance(vmin, (int, float)), "vmin must be int or float if vmax is provided" # Calculate multi-taper spectrogram nperseg = int(win_sec * sf) @@ -226,8 +245,11 @@ def plot_spectrogram( t /= 3600 # Convert t to hours # Normalization - vmin, vmax = np.percentile(Sxx, [0 + trimperc, 100 - trimperc]) - norm = Normalize(vmin=vmin, vmax=vmax) + if vmin is None: + vmin, vmax = np.percentile(Sxx, [0 + trimperc, 100 - trimperc]) + norm = Normalize(vmin=vmin, vmax=vmax) + else: + norm = Normalize(vmin=vmin, vmax=vmax) if hypno is None: fig, ax = plt.subplots(nrows=1, figsize=(12, 4))
Open `vmax` and `vmin` access in `yasa.plot_spectrogram()`

It's not always useful, but sometimes you want to plot different channels on a common scale. I think `plot_spectrogram()` could default to `vmax=None` and `vmin=None` and, if supplied, they would override the normalization of the colormap. It might be that this can be done once the spectrogram is computed, but I'm not fluent enough in matplotlib to do so.
Hi @matiasandina,

Good idea. It should be pretty straightforward to implement. Do you want to submit a PR? Or I'll work on it for the next release.

Thanks,
Raphael

I'm about to head out for vacation, I can take a look at it in 2-3 weeks at minimum.

Turns out I needed it myself and it was way easier than what I thought. Will submit a PR soon.

```python
from io import BytesIO
import numpy as np
import requests
# import local version of plot_spectrogram

r = requests.get('https://github.com/raphaelvallat/yasa/raw/master/notebooks/data_full_6hrs_100Hz_Cz%2BFz%2BPz.npz', stream=True)
npz = np.load(BytesIO(r.raw.read()))
data = npz.get('data')[0, :]
sf = 100
fig = plot_spectrogram(data, sf)
```

![image](https://user-images.githubusercontent.com/7494967/172890303-f03ce37e-10be-4166-aba4-7d4fc08e55c6.png)

```python
fig2 = plot_spectrogram(data, sf, vmin=-50, vmax=100)
```

![image](https://user-images.githubusercontent.com/7494967/172890329-716c55b4-3b96-4671-8f4f-0f1efc26b4d3.png)

Trying to test it for failure:

```python
fig3 = plot_spectrogram(data, sf, vmin=-50, vmax='asdf')
```

Results in:

```
    113 assert fmax < sf / 2, 'fmax must be less than Nyquist (sf / 2).'
    114 assert isinstance(vmin, (int, float, type(None))), 'vmin must be int, float, or None.'
--> 115 assert isinstance(vmax, (int, float, type(None))), 'vmax must be int, float, or None.'
    116 if vmin is not None:
    117     assert isinstance(vmax, (int, float)), 'vmax must be int or float if vmin is provided'
```
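Stripped of the plotting boilerplate, the logic the PR adds reduces to choosing the colormap normalization; a condensed restatement (paraphrasing the diff above, with a toy array standing in for the spectrogram):

```python
import numpy as np
from matplotlib.colors import Normalize


def color_norm(Sxx, trimperc=2.5, vmin=None, vmax=None):
    """Explicit vmin/vmax overwrite the percentile-based trimming,
    mirroring the behaviour added to yasa.plot_spectrogram()."""
    if vmin is None:
        # Default: clip the color range to the central percentiles
        vmin, vmax = np.percentile(Sxx, [0 + trimperc, 100 - trimperc])
    return Normalize(vmin=vmin, vmax=vmax)


# Toy spectrogram; reuse the same bounds to plot channels on a common scale
Sxx = np.random.default_rng(0).normal(size=(50, 1000))
shared = color_norm(Sxx, vmin=-50, vmax=100)  # common scale across channels
auto = color_norm(Sxx)                        # per-channel percentile trim
```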
2022-06-09T15:59:17
0.0
[]
[]
oresat/CANopen-monitor
oresat__CANopen-monitor-60
691fe2e8c1f09d163e74668c4110290be00b047b
diff --git a/canopen_monitor/__init__.py b/canopen_monitor/__init__.py index 6a4401d..acaa3b2 100755 --- a/canopen_monitor/__init__.py +++ b/canopen_monitor/__init__.py @@ -2,7 +2,7 @@ MAJOR = 3 MINOR = 3 -PATCH = 0 +PATCH = 1 APP_NAME = 'canopen-monitor' APP_DESCRIPTION = 'An NCurses-based TUI application for tracking activity' \ diff --git a/canopen_monitor/parse/eds.py b/canopen_monitor/parse/eds.py index 9c573db..933c3a3 100644 --- a/canopen_monitor/parse/eds.py +++ b/canopen_monitor/parse/eds.py @@ -145,6 +145,11 @@ def __init__(self, eds_data: [str]): prev = 0 for i, line in enumerate(eds_data): if line == '': + # Handle extra empty strings + if prev == i: + prev = i + 1 + continue + section = eds_data[prev:i] id = section[0][1:-1].split('sub')
Application Crash on DCF File Parsing

More information to follow.

```
Traceback (most recent call last):
  File "/home/monitor/.local/bin//canopen-monitor", line 8, in <module>
    sys.exit(main())
  File "/home/monitor/.local/lib/python3.9/site-packages/canopen_monitor/__main__.py", line 68, in main
    eds_configs = load_eds_files()
  File "/home/monitor/.local/lib/python3.9/site-packages/canopen_monitor/__main__.py", line 19, in load_eds_files
    config = load_eds_file(full_path)
  File "/home/monitor/.local/lib/python3.9/site-packages/canopen_monitor/parse/eds.py", line 190, in load_eds_file
    return EDS(list(map(lambda x: x.strip(), file.read().split('\n'))))
  File "/home/monitor/.local/lib/python3.9/site-packages/canopen_monitor/parse/eds.py", line 149, in __init__
    id = section[0][1:-1].split('sub')
IndexError: list index out of range
```
Sample DCF file: [battery.dcf.txt](https://github.com/oresat/CANopen-monitor/files/6417500/battery.dcf.txt)

The issue is caused by a blank line as the last line of the DCF file. It looks like ending with a subindex actually requires a single blank line, and ending with a non-subindex requires no blank line.
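To make the failure mode concrete, here is a self-contained sketch of the section-splitting loop with the guard from the patch (simplified; the real `EDS.__init__` also parses each section rather than just collecting it):

```python
def split_sections(eds_data):
    """Split stripped EDS/DCF lines into [section] blocks on blank lines.
    Without the `prev == i` guard, a trailing or doubled blank line yields
    an empty section, so `section[0]` raises IndexError."""
    sections = []
    prev = 0
    for i, line in enumerate(eds_data + ['']):  # sentinel blank flushes last section
        if line == '':
            if prev == i:        # consecutive or trailing blank: skip it
                prev = i + 1
                continue
            sections.append(eds_data[prev:i])
            prev = i + 1
    return sections


# A DCF ending in a blank line (like battery.dcf) no longer crashes:
lines = ['[2101]', 'DefaultValue=0x31', '', '[2101sub0]', 'DefaultValue=1', '']
assert len(split_sections(lines)) == 2
```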
2021-05-03T22:35:40
0.0
[]
[]
oresat/CANopen-monitor
oresat__CANopen-monitor-52
fa74b09f83b2c6d51c1be8fa9da1d0ce925c4836
diff --git a/README.md b/README.md index 8b8567d..b759c4c 100755 --- a/README.md +++ b/README.md @@ -31,9 +31,13 @@ An NCurses-based TUI application for tracking activity over the CAN bus and deco *** # Configuration +The default configurations provided by CANOpen Monitor can be found in +[canopen_monitor/assets](./canopen_monitor/assets). These are the default +assets provided. At runtime these configs are copied to +`~/.config/canopen-monitor` where they can be modified and the changes +will persist. -The default configurations provided by CANOpen Monitor can be found in [canopen_monitor/assets](./canopen_monitor/assets). These are the default assets provided. At runtime these configs are copied to `~/.config/canopen-monitor` where they can be modified and the changes will persist. - +EDS files are loaded from `~/.cache/canopen-monitor` *** # Development and Contribution @@ -42,13 +46,39 @@ The default configurations provided by CANOpen Monitor can be found in [canopen_ Check out our [Read The Docs](https://canopen-monitor.readthedocs.io) pages for more info on the application sub-components and methods. +### Pre-Requisites +* Ubuntu/Debian Linux System + +* Python 3.8.5 or higher (pyenv is recommended for managing different python versions, https://realpython.com/intro-to-pyenv/#build-dependencies) + ### Install Locally +#### Setup a virtual CAN signal generator +`$` `sudo apt-get install can-utils` + +#### Start a virtual CAN +`$` `sudo ip link add dev vcan0 type vcan` + +`$` `sudo ip link set up vcan0` + +#### Clone the repo +`$` `git clone https://github.com/Boneill3/CANopen-monitor.git` + +`$` `cd CANopen-monitor` + `$` `pip install -e .[dev]` *(Note: the `-e` flag creates a symbolic-link to your local development version. Set it once, and forget it)* -### Create Documentation Locally +#### Generate random messages with socketcan-dev +`$` `chmod 700 socketcan-dev` + +`$` `./socketcan-dev.py --random-id --random-message -r` + +#### Start the monitor +`$` `canopen-monitor` + +### Create documentation locally `$` `make -C docs clean html` diff --git a/canopen_monitor/__init__.py b/canopen_monitor/__init__.py index a19f6fa..2a5e141 100755 --- a/canopen_monitor/__init__.py +++ b/canopen_monitor/__init__.py @@ -2,7 +2,7 @@ MAJOR = 3 MINOR = 2 -PATCH = 2 +PATCH = 3 APP_NAME = 'canopen-monitor' APP_DESCRIPTION = 'An NCurses-based TUI application for tracking activity' \ diff --git a/canopen_monitor/app.py b/canopen_monitor/app.py index e2d1a36..9112601 100644 --- a/canopen_monitor/app.py +++ b/canopen_monitor/app.py @@ -2,48 +2,108 @@ import curses import datetime as dt from enum import Enum -from . import APP_NAME, APP_VERSION, APP_LICENSE, APP_AUTHOR, APP_DESCRIPTION, APP_URL +from . 
import APP_NAME, APP_VERSION, APP_LICENSE, APP_AUTHOR, APP_DESCRIPTION, \ + APP_URL from .can import MessageTable, MessageType from .ui import MessagePane, PopupWindow +# Key Constants not defined in curses +# _UBUNTU key constants work in Ubuntu +KEY_S_UP = 337 +KEY_S_DOWN = 336 +KEY_C_UP = 567 +KEY_C_UP_UBUNTU = 566 +KEY_C_DOWN = 526 +KEY_C_DOWN_UBUNTU = 525 + +# Additional User Interface Related Constants +VERTICAL_SCROLL_RATE = 16 +HORIZONTAL_SCROLL_RATE = 4 + def pad_hex(value: int) -> str: + """ + Convert integer value to a hex string with padding + :param value: number of spaces to pad hex value + :return: padded string + """ return f'0x{hex(value).upper()[2:].rjust(3, "0")}' class KeyMap(Enum): - F1 = ('F1', 'Toggle app info menu', curses.KEY_F1) - F2 = ('F2', 'Toggle this menu', curses.KEY_F2) - UP_ARR = ('Up Arrow', 'Scroll pane up 1 row', curses.KEY_UP) - DOWN_ARR = ('Down Arrow', 'Scroll pane down 1 row', curses.KEY_DOWN) - LEFT_ARR = ('Left Arrow', 'Scroll pane left 4 cols', curses.KEY_LEFT) - RIGHT_ARR = ('Right Arrow', 'Scroll pane right 4 cols', curses.KEY_RIGHT) - S_UP_ARR = ('Shift + Up Arrow', 'Scroll pane up 16 rows', 337) - S_DOWN_ARR = ('Shift + Down Arrow', 'Scroll pane down 16 rows', 336) - C_UP_ARR = ('Ctrl + Up Arrow', 'Move pane selection up', 567) - C_DOWN_ARR = ('Ctrl + Down Arrow', 'Move pane selection down', 526) - RESIZE = ('Resize Terminal', - 'Reset the dimensions of the app', - curses.KEY_RESIZE) + """ + Enumerator of valid keyboard input + value[0]: input name + value[1]: input description + value[2]: curses input value key + """ + + F1 = {'name':'F1','description':'Toggle app info menu','key' : curses.KEY_F1} + F2 = {'name':'F2', 'description':'Toggle this menu', 'key': curses.KEY_F2} + UP_ARR = {'name':'Up Arrow', 'description':'Scroll pane up 1 row', 'key':curses.KEY_UP} + DOWN_ARR = {'name':'Down Arrow', 'description':'Scroll pane down 1 row', 'key':curses.KEY_DOWN} + LEFT_ARR = {'name':'Left Arrow', 'description':'Scroll pane left 4 cols', 'key':curses.KEY_LEFT} + RIGHT_ARR = {'name':'Right Arrow', 'description':'Scroll pane right 4 cols', 'key':curses.KEY_RIGHT} + S_UP_ARR = {'name':'Shift + Up Arrow', 'description':'Scroll pane up 16 rows', 'key':KEY_S_UP} + S_DOWN_ARR ={'name':'Shift + Down Arrow', 'description':'Scroll pane down 16 rows', 'key':KEY_S_DOWN} + C_UP_ARR ={'name':'Ctrl + Up Arrow', 'description':'Move pane selection up', 'key':[KEY_C_UP, KEY_C_UP_UBUNTU]} + C_DOWN_ARR ={'name':'Ctrl + Down Arrow', 'description':'Move pane selection down', 'key':[KEY_C_DOWN, KEY_C_DOWN_UBUNTU]} + RESIZE ={'name':'Resize Terminal', 'description':'Reset the dimensions of the app', 'key':curses.KEY_RESIZE} class App: - """The User Interface + """ + The User Interface Container + :param table + :type MessageTable + + :param selected_pane_pos index of currently selected pane + :type int + + :param selected_pane reference to currently selected Pane + :type MessagePane """ def __init__(self: App, message_table: MessageTable): + """ + App Initialization function + :param message_table: Reference to shared message table object + :type MessageTable + """ self.table = message_table self.selected_pane_pos = 0 self.selected_pane = None + self.key_dict = { + KeyMap.UP_ARR.value['key']: self.up, + KeyMap.S_UP_ARR.value['key']: self.shift_up, + KeyMap.C_UP_ARR.value['key'][0]: self.ctrl_up, + KeyMap.C_UP_ARR.value['key'][1]: self.ctrl_up, # Ubuntu key + KeyMap.DOWN_ARR.value['key']: self.down, + KeyMap.S_DOWN_ARR.value['key']: self.shift_down, + 
KeyMap.C_DOWN_ARR.value['key'][0]: self.ctrl_down, + KeyMap.C_DOWN_ARR.value['key'][1]: self.ctrl_down, # Ubuntu key + KeyMap.LEFT_ARR.value['key']: self.left, + KeyMap.RIGHT_ARR.value['key']: self.right, + KeyMap.RESIZE.value['key']: self.resize, + KeyMap.F1.value['key']: self.f1, + KeyMap.F2.value['key']: self.f2 + } - def __enter__(self: App): + def __enter__(self: App) -> App: + """ + Enter the runtime context related to this object + Create the user interface layout. Any changes to the layout should + be done here. + :return: self + :type App + """ # Monitor setup, take a snapshot of the terminal state self.screen = curses.initscr() # Initialize standard out - self.screen.scrollok(True) # Enable window scroll - self.screen.keypad(True) # Enable special key input - self.screen.nodelay(True) # Disable user-input blocking - curses.curs_set(False) # Disable the cursor - self.__init_color_pairs() # Enable colors and create pairs + self.screen.scrollok(True) # Enable window scroll + self.screen.keypad(True) # Enable special key input + self.screen.nodelay(True) # Disable user-input blocking + curses.curs_set(False) # Disable the cursor + self.__init_color_pairs() # Enable colors and create pairs # Don't initialize any grids, sub-panes, or windows until standard io # screen has been initialized @@ -63,10 +123,10 @@ def __enter__(self: App): self.hotkeys_win = PopupWindow(self.screen, header='Hotkeys', content=list( - map(lambda x: - f'{x.value[0]}: {x.value[1]}' - f' ({x.value[2]})', - list(KeyMap))), + map(lambda x: + f'{x.value["name"]}: {x.value["description"]}' + f' ({x.value["key"]})', + list(KeyMap))), footer='F2: exit window', style=curses.color_pair(1)) self.hb_pane = MessagePane(cols={'Node ID': ('node_name', 0, hex), @@ -101,56 +161,124 @@ def __enter__(self: App): return self def __exit__(self: App, type, value, traceback) -> None: + """ + Exit the runtime context related to this object. + Cleanup any curses settings to allow the terminal + to return to normal + :param type: exception type or None + :param value: exception value or None + :param traceback: exception traceback or None + :return: None + """ # Monitor destruction, restore terminal state - curses.nocbreak() # Re-enable line-buffering - curses.noecho() # Enable user-input echo - curses.curs_set(True) # Enable the cursor - curses.resetty() # Restore the terminal state - curses.endwin() # Destroy the virtual screen + curses.nocbreak() # Re-enable line-buffering + curses.noecho() # Enable user-input echo + curses.curs_set(True) # Enable the cursor + curses.resetty() # Restore the terminal state + curses.endwin() # Destroy the virtual screen - def _handle_keyboard_input(self: App) -> None: - """This is only a temporary implementation + def up(self): + """ + Up arrow key scrolls pane up 1 row + :return: None + """ + self.selected_pane.scroll_up() - .. 
deprecated:: + def shift_up(self): + """ + Shift + Up arrow key scrolls pane up 16 rows + :return: None + """ + self.selected_pane.scroll_up(rate=VERTICAL_SCROLL_RATE) - Soon to be removed + def ctrl_up(self): """ - # Grab user input - input = self.screen.getch() - curses.flushinp() + Ctrl + Up arrow key moves pane selection up + :return: None + """ + self.__select_pane(self.hb_pane, 0) - if(input == curses.KEY_UP): - self.selected_pane.scroll_up() - elif(input == curses.KEY_DOWN): - self.selected_pane.scroll_down() - elif(input == 337): # Shift + Up - self.selected_pane.scroll_up(rate=16) - elif(input == 336): # Shift + Down - self.selected_pane.scroll_down(rate=16) - elif(input == curses.KEY_LEFT): - self.selected_pane.scroll_left(rate=4) - elif(input == curses.KEY_RIGHT): - self.selected_pane.scroll_right(rate=4) - elif(input == curses.KEY_RESIZE): - self.hb_pane._reset_scroll_positions() - self.misc_pane._reset_scroll_positions() - self.screen.clear() - elif(input == 567): # Ctrl + Up - self.__select_pane(self.hb_pane, 0) - elif(input == 526): # Ctrl + Down - self.__select_pane(self.misc_pane, 1) - elif(input == curses.KEY_F1): - if(self.hotkeys_win.enabled): - self.hotkeys_win.toggle() - self.hotkeys_win.clear() - self.info_win.toggle() - elif(input == curses.KEY_F2): - if(self.info_win.enabled): - self.info_win.toggle() - self.info_win.clear() + def down(self): + """ + Down arrow key scrolls pane down 1 row + :return: None + """ + self.selected_pane.scroll_down() + + def shift_down(self): + """ + Shift + Down arrow key scrolls down pane 16 rows + :return: + """ + self.selected_pane.scroll_down(rate=VERTICAL_SCROLL_RATE) + + def ctrl_down(self): + """ + Ctrl + Down arrow key moves pane selection down + :return: None + """ + self.__select_pane(self.misc_pane, 1) + + def left(self): + """ + Left arrow key scrolls pane left 4 cols + :return: None + """ + self.selected_pane.scroll_left(rate=HORIZONTAL_SCROLL_RATE) + + def right(self): + """ + Right arrow key scrolls pane right 4 cols + :return: None + """ + self.selected_pane.scroll_right(rate=HORIZONTAL_SCROLL_RATE) + + def resize(self): + """ + Resets the dimensions of the app + :return: None + """ + self.hb_pane._reset_scroll_positions() + self.misc_pane._reset_scroll_positions() + self.screen.clear() + + def f1(self): + """ + Toggle app info menu + :return: None + """ + if self.hotkeys_win.enabled: self.hotkeys_win.toggle() + self.hotkeys_win.clear() + self.info_win.toggle() + + def f2(self): + """ + Toggles KeyMap + :return: None + """ + if self.info_win.enabled: + self.info_win.toggle() + self.info_win.clear() + self.hotkeys_win.toggle() + + def _handle_keyboard_input(self: App) -> None: + """ + Retrieves keyboard input and calls the associated key function + """ + keyboard_input = self.screen.getch() + curses.flushinp() + + try: + self.key_dict[keyboard_input]() + except KeyError: + ... 
def __init_color_pairs(self: App) -> None: + """ + Initialize color options used by curses + :return: None + """ curses.start_color() # Implied: color pair 0 is standard black and white curses.init_pair(1, curses.COLOR_GREEN, curses.COLOR_BLACK) @@ -160,8 +288,14 @@ def __init_color_pairs(self: App) -> None: curses.init_pair(5, curses.COLOR_MAGENTA, curses.COLOR_BLACK) def __select_pane(self: App, pane: MessagePane, pos: int) -> None: + """ + Set Pane as Selected + :param pane: Reference to selected Pane + :param pos: Index of Selected Pane + :return: None + """ # Only undo previous selection if there was any - if(self.selected_pane is not None): + if (self.selected_pane is not None): self.selected_pane.selected = False # Select the new pane and change internal Pane state to indicate it @@ -170,6 +304,11 @@ def __select_pane(self: App, pane: MessagePane, pos: int) -> None: self.selected_pane.selected = True def __draw_header(self: App, ifaces: [tuple]) -> None: + """ + Draw the header at the top of the interface + :param ifaces: CAN Bus Interfaces + :return: None + """ # Draw the timestamp date_str = f'{dt.datetime.now().ctime()},' self.screen.addstr(0, 0, date_str) @@ -183,16 +322,25 @@ def __draw_header(self: App, ifaces: [tuple]) -> None: pos += sl + 1 def __draw__footer(self: App) -> None: + """ + Draw the footer at the bottom of the interface + :return: None + """ height, width = self.screen.getmaxyx() footer = '<F1>: Info, <F2>: Hotkeys' self.screen.addstr(height - 1, 1, footer) - def draw(self: App, ifaces: [tuple]): + def draw(self: App, ifaces: [tuple]) -> None: + """ + Draw the entire interface + :param ifaces: CAN Bus Interfaces + :return: None + """ window_active = self.info_win.enabled or self.hotkeys_win.enabled self.__draw_header(ifaces) # Draw header info # Draw panes - if(not window_active): + if (not window_active): self.hb_pane.draw() self.misc_pane.draw() @@ -200,7 +348,11 @@ def draw(self: App, ifaces: [tuple]): self.info_win.draw() self.hotkeys_win.draw() - self.__draw__footer() # Draw footer info + self.__draw__footer() - def refresh(self: App): + def refresh(self: App) -> None: + """ + Refresh entire screen + :return: None + """ self.screen.refresh() diff --git a/canopen_monitor/can/message.py b/canopen_monitor/can/message.py index 82fcd6e..263cf51 100644 --- a/canopen_monitor/can/message.py +++ b/canopen_monitor/can/message.py @@ -34,8 +34,12 @@ class MessageType(Enum): # Special Types UKNOWN = (-0x1, -0x1) # Pseudo type unknown - PDO = (0x180, 0x57F) # Super type PDO - SDO = (0x580, 0x680) # Super type SDO + PDO = (0x180, 0x57F) # Super type PDO + SDO = (0x580, 0x680) # Super type SDO + + def __init__(self, start, end): + self.start = start + self.end = end @property def supertype(self: MessageType) -> MessageType: @@ -49,17 +53,15 @@ def supertype(self: MessageType) -> MessageType: :return: The supertype of this type :rtype: MessageType """ - if(self.value[0] >= self.PDO.value[0] - and self.value[0] <= self.PDO.value[1]): + if self.PDO.start <= self.start <= self.PDO.end: return MessageType['PDO'] - elif(self.value[0] >= self.SDO.value[0] - and self.value[0] <= self.SDO.value[1]): + elif self.SDO.start <= self.start <= self.SDO.end: return MessageType['SDO'] else: return MessageType['UKNOWN'] @staticmethod - def cob_to_node(mtype: MessageType, cob_id: int) -> int: + def cob_to_node(msg_type: MessageType, cob_id: int) -> int: """Determines the Node ID based on the given COB ID The COB ID is the raw ID sent with the CAN message, and the node id is @@ -85,7 +87,7 @@ 
def cob_to_node(mtype: MessageType, cob_id: int) -> int: :return: The Node ID :rtype: int """ - return cob_id - mtype.value[0] + return cob_id - msg_type.start @staticmethod def cob_id_to_type(cob_id: int) -> MessageType: @@ -97,9 +99,9 @@ def cob_id_to_type(cob_id: int) -> MessageType: :return: The message type (range) the COB ID fits into :rtype: MessageType """ - for t in list(MessageType): - if(cob_id >= t.value[0] and cob_id <= t.value[1]): - return t + for msg_type in list(MessageType): + if msg_type.start <= cob_id <= msg_type.end: + return msg_type return MessageType['UKNOWN'] def __str__(self) -> str: @@ -124,8 +126,6 @@ class MessageState(Enum): DEAD = 'Dead' def __str__(self: MessageState) -> str: - """ Overloaded `str()` operator - """ return self.value + ' ' @@ -160,9 +160,9 @@ def state(self: Message) -> MessageState: :return: State of the message :rtype: MessageState """ - if(self.age >= DEAD_TIME): + if self.age >= DEAD_TIME: return MessageState['DEAD'] - elif(self.age >= STALE_TIME): + elif self.age >= STALE_TIME: return MessageState['STALE'] else: return MessageState['ALIVE'] diff --git a/canopen_monitor/parse/__init__.py b/canopen_monitor/parse/__init__.py index e8027a3..7c66ddd 100644 --- a/canopen_monitor/parse/__init__.py +++ b/canopen_monitor/parse/__init__.py @@ -1,9 +1,8 @@ """This module is primarily responsible for providing a high-level interface -for parsing CANOpen messages according to Ojbect Definiton files or Electronic +for parsing CANOpen messages according to Object Definiton files or Electronic Data Sheet files, provided by the end user. """ import enum -from re import finditer from .eds import EDS, load_eds_file from .canopen import CANOpenParser @@ -11,92 +10,4 @@ 'CANOpenParser', 'EDS', 'load_eds_file', -] - -data_types = {0x01: "BOOLEAN", - 0x02: "INTEGER8", - 0x03: "INTEGER16", - 0x04: "INTEGER32", - 0x05: "UNSIGNED8", - 0x06: "UNSIGNED16", - 0x07: "UNSIGNED32", - 0x08: "REAL32", - 0x09: "VISIBLE_STRING", - 0x0A: "OCTET_STRING", - 0x0B: "UNICODE_STRING", - 0x0F: "DOMAIN", - 0x11: "REAL64", - 0x15: "INTEGER64", - 0x1B: "UNSIGNED64"} - -node_names = {0x01: "C3", - 0x06: "Solar Panel", - 0x11: "SDR GPS", - 0x12: "Star Tracker", - 0x21: "OreSat Live", - 0x22: "Cirrus Flux Cameras", - 0x31: "Battery", - 0x32: "Test Board 1", - 0x33: "Test Board 2", - 0x40: "MDC"} - - -class DataTypes(enum.Enum): - BOOLEAN = 0x1 - INTEGER8 = 0x2 - INTEGER16 = 0x3 - INTEGER32 = 0x4 - UNSIGNED8 = 0x5 - UNSIGNED16 = 0x6 - UNSIGNED32 = 0x7 - REAL32 = 0x8 - VISIBLE_STRING = 0x9 - OCTET_STRING = 0xA - UNICODE_STRING = 0xB - DOMAIN = 0xF - REAL64 = 0x11 - INTEGER64 = 0x15 - UNSIGNED64 = 0x1B - - -object_types = {0x00: "NULL", - 0x02: "DOMAIN", - 0x05: "DEFTYPE", - 0x06: "DEFSTRUCT", - 0x07: "VAR", - 0x08: "ARRAY", - 0x09: "RECORD"} - - -def camel_to_snake(old_name: str) -> str: - new_name = '' - - for match in finditer('[A-Z0-9]+[a-z]*', old_name): - span = match.span() - substr = old_name[span[0]:span[1]] - # length = span[1] - span[0] - found_submatch = False - - for sub_match in finditer('[A-Z]+', substr): - sub_span = sub_match.span() - sub_substr = old_name[sub_span[0]:sub_span[1]] - sub_length = sub_span[1] - sub_span[0] - - if (sub_length > 1): - found_submatch = True - - if (span[0] != 0): - new_name += '_' - - first = sub_substr[:-1] - second = substr.replace(first, '') - - new_name += '{}_{}'.format(first, second).lower() - - if (not found_submatch): - if (span[0] != 0): - new_name += '_' - - new_name += substr.lower() - - return new_name +] \ No newline at end of 
file diff --git a/canopen_monitor/parse/canopen.py b/canopen_monitor/parse/canopen.py index 8d6177c..841a180 100644 --- a/canopen_monitor/parse/canopen.py +++ b/canopen_monitor/parse/canopen.py @@ -7,36 +7,53 @@ from .sdo import SDOParser from .utilities import FailedValidationError - class CANOpenParser: + """ + A convenience wrapper for the parse function + """ def __init__(self, eds_configs: dict): self.sdo_parser = SDOParser() self.eds_configs = eds_configs def parse(self, message: Message) -> str: + """ + Detect the type of the given message and return the parsed version + + Arguments + --------- + @:param: message: a Message object containing the message + + Returns + ------- + `str`: The parsed message + + """ node_id = message.node_id eds_config = self.eds_configs.get(hex(node_id)) \ if node_id is not None else None + # Detect message type and select the appropriate parse function if (message.type == MessageType.SYNC): - parse = SYNCParser.parse + parse_function = SYNCParser.parse elif (message.type == MessageType.EMER): - parse = EMCYParser.parse + parse_function = EMCYParser.parse elif (message.supertype == MessageType.PDO): - parse = PDOParser.parse + parse_function = PDOParser.parse elif (message.supertype == MessageType.SDO): if self.sdo_parser.is_complete: self.sdo_parser = SDOParser() - parse = self.sdo_parser.parse + parse_function = self.sdo_parser.parse elif (message.type == MessageType.HEARTBEAT): - parse = HBParser.parse + parse_function = HBParser.parse elif (message.type == MessageType.TIME): - parse = TIMEParser.parse + parse_function = TIMEParser.parse else: - parse = None + parse_function = None + # Call the parse function and save the result + # On error, return the message data try: - parsed_message = parse(message.arb_id, message.data, eds_config) + parsed_message = parse_function(message.arb_id, message.data, eds_config) except (FailedValidationError, TypeError): parsed_message = ' '.join(list(map(lambda x: hex(x)[2:] .upper() diff --git a/canopen_monitor/parse/eds.py b/canopen_monitor/parse/eds.py index 92dad30..c0857d1 100644 --- a/canopen_monitor/parse/eds.py +++ b/canopen_monitor/parse/eds.py @@ -2,6 +2,47 @@ from typing import Union import canopen_monitor.parse as cmp from dateutil.parser import parse as dtparse +from re import sub, finditer + + +def camel_to_snake(old_str: str) -> str: + """ + Converts camel cased string to snake case, counting groups of repeated capital letters (such as "PDO") as one unit + That is, string like "PDO_group" become "pdo_group" instead of "p_d_o_group" + """ + # Find all groups that contains one or more capital letters followed by one or more lowercase letters + # The new, camel_cased string will be built up along the way + new_str = "" + for match in finditer('[A-Z0-9]+[a-z]*', old_str): + span = match.span() + substr = old_str[span[0]:span[1]] + found_submatch = False + + # Add a "_" to the newstring to separate the current match group from the previous + # It looks like we shouldn't need to worry about getting "_strings_like_this", because they don't seem to happen + if (span[0] != 0): + new_str += '_' + + # Find all sub-groups of *more than one* capital letters within the match group, and seperate them with "_" characters, + # Append the subgroups to the new_str as they are found + # If no subgroups are found, just append the match group to the new_str + for sub_match in finditer('[A-Z]+', substr): + sub_span = sub_match.span() + sub_substr = old_str[sub_span[0]:sub_span[1]] + sub_length = sub_span[1] - sub_span[0] + + 
if (sub_length > 1): + found_submatch = True + + first = sub_substr[:-1] + second = substr.replace(first, '') + + new_str += '{}_{}'.format(first, second).lower() + + if (not found_submatch): + new_str += substr.lower() + + return new_str class Metadata: @@ -16,7 +57,7 @@ def __init__(self, data): key, value = e.split('=') # Create the proper field name - key = cmp.camel_to_snake(key) + key = camel_to_snake(key) # Turn date-time-like objects into datetimes if ('time' in key): @@ -60,7 +101,7 @@ def __init__(self, data, sub_id=None): elif(all(c in string.hexdigits for c in value)): value = int(value, 16) - self.__setattr__(cmp.camel_to_snake(key), value) + self.__setattr__(camel_to_snake(key), value) def add(self, index) -> None: self.sub_indices.append(index) @@ -99,7 +140,7 @@ def __init__(self, eds_data: [str]): .add(Index(section[1:], sub_id=int(id[1], 16))) else: name = section[0][1:-1] - self.__setattr__(cmp.camel_to_snake(name), + self.__setattr__(camel_to_snake(name), Metadata(section[1:])) prev = i + 1 self.node_id = self[0x2101].default_value diff --git a/canopen_monitor/parse/hb.py b/canopen_monitor/parse/hb.py index 905c2ed..4993e14 100644 --- a/canopen_monitor/parse/hb.py +++ b/canopen_monitor/parse/hb.py @@ -2,6 +2,7 @@ from .utilities import FailedValidationError from ..can import MessageType +STATE_BYTE_IDX = 0 def parse(cob_id: int, data: list, eds_config: EDS): """ @@ -9,7 +10,8 @@ def parse(cob_id: int, data: list, eds_config: EDS): Arguments --------- - @:param: data: a byte string containing the heartbeat message + @:param: data: a byte string containing the heartbeat message, + byte 0 is the heartbeat state info. Returns ------- @@ -21,8 +23,9 @@ def parse(cob_id: int, data: list, eds_config: EDS): 0x05: "Operational", 0x7F: "Pre-operational" } + node_id = MessageType.cob_to_node(MessageType.HEARTBEAT, cob_id) - if len(data) < 1 or data[0] not in states: + if len(data) < 1 or data[STATE_BYTE_IDX] not in states: raise FailedValidationError(data, node_id, cob_id, __name__, "Invalid heartbeat state detected") - return states.get(data[0]) + return states.get(data[STATE_BYTE_IDX]) diff --git a/canopen_monitor/parse/pdo.py b/canopen_monitor/parse/pdo.py index 9b24a95..650868e 100644 --- a/canopen_monitor/parse/pdo.py +++ b/canopen_monitor/parse/pdo.py @@ -2,6 +2,7 @@ from math import ceil, floor from .eds import EDS from .utilities import FailedValidationError, get_name, decode +from ..can import MessageType PDO1_TX = 0x1A00 PDO1_RX = 0x1600 @@ -22,36 +23,36 @@ def parse(cob_id: int, data: bytes, eds: EDS): The eds mapping is determined by the cob_id passed ot this function. That indicated which PDO record to look up in the EDS file. 
""" - if 0x180 <= cob_id < 0x200: # PDO1 tx + if MessageType.PDO1_TX.value[0] <= cob_id < MessageType.PDO1_RX.value[0]: # PDO1 tx pdo_type = PDO1_TX - elif 0x200 <= cob_id < 0x280: # PDO1 rx + elif MessageType.PDO1_RX.value[0] <= cob_id < MessageType.PDO2_TX.value[0]: # PDO1 rx pdo_type = PDO1_RX - elif 0x280 <= cob_id < 0x300: # PDO2 tx + elif MessageType.PDO2_TX.value[0] <= cob_id < MessageType.PDO2_RX.value[0]: # PDO2 tx pdo_type = PDO2_TX - elif 0x300 <= cob_id < 0x380: # PDO2 rx + elif MessageType.PDO2_RX.value[0] <= cob_id < MessageType.PDO3_TX.value[0]: # PDO2 rx pdo_type = PDO2_RX - elif 0x380 <= cob_id < 0x400: # PDO3 tx + elif MessageType.PDO3_TX.value[0] <= cob_id < MessageType.PDO3_RX.value[0]: # PDO3 tx pdo_type = PDO3_TX - elif 0x400 <= cob_id < 0x480: # PDO3 rx + elif MessageType.PDO3_RX.value[0] <= cob_id < MessageType.PDO4_TX.value[0]: # PDO3 rx pdo_type = PDO3_RX - elif 0x480 <= cob_id < 0x500: # PDO4 tx + elif MessageType.PDO4_TX.value[0] <= cob_id < MessageType.PDO4_RX.value[0]: # PDO4 tx pdo_type = PDO4_TX - elif 0x500 <= cob_id < 0x580: # PDO4 rx + elif MessageType.PDO4_RX.value[0] <= cob_id < (MessageType.PDO4_RX.value[1] + 1): # PDO4 rx pdo_type = PDO4_RX else: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Unable to determine pdo type with given " f"cob_id {hex(cob_id)}, expected value " - f"between 0x180 and 0x580") + f"between MessageType.PDO1_TX.value[0] and MessageType.PDO4_RX.value[1] + 1") if len(data) > 8 or len(data) < 1: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Invalid payload length {len(data)} " f"expected between 1 and 8") try: eds_elements = eds[hex(pdo_type)][0] except TypeError: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Unable to find eds data for pdo type " f"{hex(pdo_type)}") @@ -66,12 +67,12 @@ def parse(cob_id: int, data: bytes, eds: EDS): if num_elements in (0xFE, 0xFF): if len(data) != 8: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Invalid payload length {len(data)} " f"expected 8") return parse_mpdo(num_elements, pdo_type, eds, data, cob_id) - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Invalid pdo mapping detected in eds file at " f"[{pdo_type}sub0]") @@ -87,7 +88,7 @@ def parse_pdo(num_elements, pdo_type, cob_id, eds, data): try: eds_record = eds[hex(pdo_type)][i] except TypeError: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"Unable to find eds data for pdo type " f"{hex(pdo_type)} index {i}") @@ -120,7 +121,7 @@ def parse_pdo(num_elements, pdo_type, cob_id, eds, data): def parse_mpdo(num_elements, pdo_type, eds, data, cob_id): mpdo = MPDO(data) if mpdo.is_source_addressing and num_elements != 0xFE: - raise FailedValidationError(data, cob_id - 0x180, cob_id, __name__, + raise FailedValidationError(data, cob_id - MessageType.PDO1_TX.value[0], cob_id, __name__, f"MPDO type and definition do not match. 
" f"Check eds file at [{pdo_type}sub0]") diff --git a/canopen_monitor/parse/sdo.py b/canopen_monitor/parse/sdo.py index 4acceec..dc66c04 100644 --- a/canopen_monitor/parse/sdo.py +++ b/canopen_monitor/parse/sdo.py @@ -2,6 +2,7 @@ from .eds import EDS from .utilities import FailedValidationError, get_name, decode from typing import List +from ..can import MessageType SDO_TX = 'SDO_TX' SDO_RX = 'SDO_RX' @@ -887,12 +888,12 @@ def is_complete(self): def parse(self, cob_id: int, data: List[int], eds: EDS): node_id = None try: - if 0x580 <= cob_id < 0x600: + if cob_id in range(*MessageType.SDO_TX.value): sdo_type = SDO_TX - node_id = cob_id - 0x580 - elif 0x600 <= cob_id < 0x680: + node_id = cob_id - MessageType.SDO_TX.value[0] + elif cob_id in range(*MessageType.SDO_RX.value): sdo_type = SDO_RX - node_id = cob_id - 0x600 + node_id = cob_id - MessageType.SDO_RX.value[0] else: raise ValueError(f"Provided COB-ID {str(cob_id)} " f"is outside of the range of SDO messages") diff --git a/canopen_monitor/parse/utilities.py b/canopen_monitor/parse/utilities.py index bdf5fe9..aa5d817 100644 --- a/canopen_monitor/parse/utilities.py +++ b/canopen_monitor/parse/utilities.py @@ -81,16 +81,10 @@ def get_name(eds_config: EDS, index: List[int]) -> (str, str): def decode(defined_type: str, data: List[int]) -> str: """ - Does something? - - Arguments - --------- - defined_type `str`: The data type? - data `[int]`: The data? - - Returns - ------- - `str`: something + Decodes data by defined type + :param defined_type: Hex constant for type + :param data: list of ints to be decoded + :return: Decoded data as string """ if defined_type in (UNSIGNED8, UNSIGNED16, UNSIGNED32, UNSIGNED64): result = str(int.from_bytes(data, byteorder="little", signed=False)) diff --git a/canopen_monitor/ui/message_pane.py b/canopen_monitor/ui/message_pane.py index 422082d..1ced972 100644 --- a/canopen_monitor/ui/message_pane.py +++ b/canopen_monitor/ui/message_pane.py @@ -5,7 +5,8 @@ class MessagePane(Pane): - """A derivative of Pane customized specifically to list miscellaneous CAN + """ + A derivative of Pane customized specifically to list miscellaneous CAN messages stored in a MessageTable :param name: The name of the pane (to be printed in the top left) @@ -58,7 +59,8 @@ def __init__(self: MessagePane, self.__reset_col_widths() def resize(self: MessagePane, height: int, width: int) -> None: - """A wrapper for `Pane.resize()`. This intercepts a call for a resize + """ + A wrapper for `Pane.resize()`. This intercepts a call for a resize in order to upate MessagePane-specific details that change on a resize event. The parent `resize()` gets called first and then MessagePane's details are updated. @@ -78,26 +80,34 @@ def resize(self: MessagePane, height: int, width: int) -> None: self.__top_max = occluded if occluded > 0 else 0 def _reset_scroll_positions(self: MessagePane) -> None: + """ + Reset the scroll positions. + Initialize the y position to be zero. + Initialize the x position to be zero. 
+ """ self.cursor = self.cursor_max self.scroll_position_y = 0 self.scroll_position_x = 0 @property def scroll_limit_y(self: MessagePane) -> int: - """The maximim rows the pad is allowed to shift by when scrolling + """ + The maximim rows the pad is allowed to shift by when scrolling """ return self.d_height - 2 @property def scroll_limit_x(self: MessagePane) -> int: - """The maximim columns the pad is allowed to shift by when scrolling + """ + The maximim columns the pad is allowed to shift by when scrolling """ max_length = sum(list(map(lambda x: x[1], self.cols.values()))) occluded = max_length - self.d_width + 7 return occluded if(occluded > 0) else 0 def scroll_up(self: MessagePane, rate: int = 1) -> None: - """This overrides `Pane.scroll_up()`. Instead of shifting the + """ + This overrides `Pane.scroll_up()`. Instead of shifting the pad vertically, the slice of messages from the `MessageTable` is shifted. @@ -123,7 +133,8 @@ def scroll_up(self: MessagePane, rate: int = 1) -> None: self.__top = min if(self.__top < min) else self.__top def scroll_down(self: MessagePane, rate: int = 1) -> None: - """This overrides `Pane.scroll_up()`. Instead of shifting the + """ + This overrides `Pane.scroll_up()`. Instead of shifting the pad vertically, the slice of messages from the `MessageTable` is shifted. @@ -149,7 +160,8 @@ def scroll_down(self: MessagePane, rate: int = 1) -> None: self.__top = max if(self.__top > max) else self.__top def __draw_header(self: Pane) -> None: - """Draw the table header at the top of the Pane + """ + Draw the table header at the top of the Pane This uses the `cols` dictionary to determine what to write """ @@ -169,7 +181,8 @@ def __draw_header(self: Pane) -> None: pos += data[1] + self.__col_sep def draw(self: MessagePane) -> None: - """Draw all records from the MessageTable to the Pane + """ + Draw all records from the MessageTable to the Pane """ super().draw() self.resize(self.v_height, self.v_width) @@ -199,11 +212,21 @@ def draw(self: MessagePane) -> None: super().refresh() def __reset_col_widths(self: Message): + """ + Reset the width of Pane collumn. + Based on the length of data to change the width. + """ for name, data in self.cols.items(): self.cols[name] = (data[0], len(name), data[2]) \ if (len(data) == 3) else (data[0], len(name)) def __check_col_widths(self: MessagePane, messages: [Message]) -> None: + """ + Check the width of the message in Pane column. 
+ + :param messages: The list of the messages + :type messages: list + """ for message in messages: for name, data in self.cols.items(): attr = getattr(message, data[0]) diff --git a/canopen_monitor/ui/pane.py b/canopen_monitor/ui/pane.py index 222d832..822da2b 100755 --- a/canopen_monitor/ui/pane.py +++ b/canopen_monitor/ui/pane.py @@ -4,7 +4,8 @@ class Pane(ABC): - """Abstract Pane Class, contains a PAD and a window + """ + Abstract Pane Class, contains a PAD and a window :param v_height: The virtual height of the embedded pad :type v_height: int @@ -30,7 +31,8 @@ def __init__(self: Pane, x: int = 0, border: bool = True, color_pair: int = 0): - """Abstract pane initialization + """ + Abstract pane initialization :param border: Toggiling whether or not to draw a border :type border: bool @@ -64,15 +66,22 @@ def __init__(self: Pane, @property def scroll_limit_y(self: Pane) -> int: + """ + Limit the scroll on the y axis + """ return 0 @property def scroll_limit_x(self: Pane) -> int: + """ + Limit the scroll on the x axis + """ return 0 @abstractmethod def draw(self: Pane) -> None: - """Abstract draw method, must be overwritten in child class + """ + Abstract draw method, must be overwritten in child class draw should first resize the pad using: `super().resize(w, h)` then add content using: self._pad.addstr() then refresh using: `super().refresh()` @@ -91,7 +100,8 @@ def draw(self: Pane) -> None: self._pad.box() def resize(self: Pane, height: int, width: int) -> None: - """Resize the virtual pad and change internal variables to reflect that + """ + Resize the virtual pad and change internal variables to reflect that :param height: New virtual height :type height: int @@ -105,12 +115,17 @@ def resize(self: Pane, height: int, width: int) -> None: self._pad.resize(self.v_height, self.v_width) def __reset_draw_dimensions(self: Pane) -> None: + """ + Reset the pane dimensions. + You can change the width and height of the pane. + """ p_height, p_width = self.parent.getmaxyx() self.d_height = min(self.v_height, p_height - 1) self.d_width = min(self.v_width, p_width - 1) def clear(self: Pane) -> None: - """Clear all contents of pad and parent window + """ + Clear all contents of pad and parent window .. warning:: @@ -123,7 +138,8 @@ def clear(self: Pane) -> None: # self.refresh() def clear_line(self: Pane, y: int, style: any = None) -> None: - """Clears a single line of the Pane + """ + Clears a single line of the Pane :param y: The line to clear :type y: int @@ -139,7 +155,8 @@ def clear_line(self: Pane, y: int, style: any = None) -> None: self._pad.attroff(line_style) def refresh(self: Pane) -> None: - """Refresh the pane based on configured draw dimensions + """ + Refresh the pane based on configured draw dimensions """ self._pad.refresh(self.scroll_position_y, self.scroll_position_x, @@ -150,7 +167,8 @@ def refresh(self: Pane) -> None: self.needs_refresh = False def scroll_up(self: Pane, rate: int = 1) -> bool: - """Scroll pad upwards + """ + Scroll pad upwards .. note:: @@ -171,7 +189,8 @@ def scroll_up(self: Pane, rate: int = 1) -> bool: return True def scroll_down(self: Pane, rate: int = 1) -> bool: - """Scroll pad downwards + """ + Scroll pad downwards .. note:: @@ -192,7 +211,8 @@ def scroll_down(self: Pane, rate: int = 1) -> bool: return True def scroll_left(self: Pane, rate: int = 1) -> bool: - """Scroll pad left + """ + Scroll pad left .. 
note:: @@ -213,7 +233,8 @@ def scroll_left(self: Pane, rate: int = 1) -> bool: return True def scroll_right(self: Pane, rate: int = 1) -> bool: - """Scroll pad right + """ + Scroll pad right .. note:: @@ -241,7 +262,8 @@ def add_line(self: Pane, underline: bool = False, highlight: bool = False, color: any = None) -> None: - """Adds a line of text to the Pane and if needed, it handles the + """ + Adds a line of text to the Pane and if needed, it handles the process of resizing the embedded pad :param y: Line's row position diff --git a/canopen_monitor/ui/windows.py b/canopen_monitor/ui/windows.py index 6d6dc54..7463f1c 100755 --- a/canopen_monitor/ui/windows.py +++ b/canopen_monitor/ui/windows.py @@ -4,6 +4,7 @@ class PopupWindow(Pane): + def __init__(self: PopupWindow, parent: any, header: str = 'Alert', @@ -15,11 +16,9 @@ def __init__(self: PopupWindow, width=1, y=10, x=10) + """Set an init to Popup window""" # Pop-up window properties - self.header = header - self.content = content - self.footer = footer - self.enabled = False + self.setWindowProperties(header, content, footer) # Parent window dimensions (Usually should be STDOUT directly) p_height, p_width = self.parent.getmaxyx() @@ -28,18 +27,32 @@ def __init__(self: PopupWindow, self.content = self.break_lines(int(2 * p_width / 3), self.content) # UI dimensions - p_height, p_width = self.parent.getmaxyx() + self.setUIDimension(p_height, p_width) + + # UI properties + self.style = (style or curses.color_pair(0)) + self._pad.attron(self.style) + + + def setUIDimension(self, p_height, p_width): + """Set UI Dimension (x,y) by giving parent + height and width""" self.v_height = (len(self.content)) + 2 width = len(self.header) + 2 - if(len(self.content) > 0): + if (len(self.content) > 0): width = max(width, max(list(map(lambda x: len(x), self.content)))) self.v_width = width + 4 self.y = int(((p_height + self.v_height) / 2) - self.v_height) self.x = int(((p_width + self.v_width) / 2) - self.v_width) - # UI properties - self.style = (style or curses.color_pair(0)) - self._pad.attron(self.style) + + def setWindowProperties(self:PopupWindow, header, content, footer): + """Set default window properties""" + self.header = header + self.content = content + self.footer = footer + self.enabled = False + def break_lines(self: PopupWindow, max_width: int, @@ -49,44 +62,60 @@ def break_lines(self: PopupWindow, length = len(line) mid = int(length / 2) - if(length >= max_width): - # Break the line at the next available space - for j, c in enumerate(line[mid - 1:]): - if(c == ' '): - mid += j - break - - # Apply the line break to the content array - content.pop(i) - content.insert(i, line[:mid - 1]) - content.insert(i + 1, line[mid:]) + self.determine_to_break_content(content, i, length, line, max_width, mid) return content + def determine_to_break_content(self, content, i, length, line, max_width, mid): + if (length >= max_width): + # Break the line at the next available space + for j, c in enumerate(line[mid - 1:]): + if (c == ' '): + mid += j + break + self.apply_line_to_content_array(content, i, line, mid) + + + def apply_line_to_content_array(self, content, i, line, mid): + """Apply the line break to the content array""" + content.pop(i) + content.insert(i, line[:mid - 1]) + content.insert(i + 1, line[mid:]) + def toggle(self: PopupWindow) -> bool: self.enabled = not self.enabled return self.enabled + def __draw_header(self: PopupWindow) -> None: + """Add the header line to the window""" self.add_line(0, 1, self.header, underline=True) + def 
__draw__footer(self: PopupWindow) -> None: + """Add the footer to the window""" f_width = len(self.footer) + 2 self.add_line(self.v_height - 1, self.v_width - f_width, self.footer, underline=True) + + def __draw_content(self): + """Read each line of the content and add to the window""" + for i, line in enumerate(self.content): + self.add_line(1 + i, 2, line) + + def draw(self: PopupWindow) -> None: if(self.enabled): super().resize(self.v_height, self.v_width) super().draw() self.__draw_header() - - for i, line in enumerate(self.content): - self.add_line(1 + i, 2, line) - + self.__draw_content() self.__draw__footer() super().refresh() else: # super().clear() ... + +
Pane Selection not working for Ubuntu TTY Control+Up and Control+Down are not working to switch between selected panes. Scrolling within the pane works without any issues using up, down, shift+up, shift+down and scrolling with the mouse. This could have to do with the fact that I am running on a VM. I'm running an Ubuntu VM using VirtualBox with a Windows host. I tested with a soft keyboard within the VM to ensure that the host is not capturing the keyboard, and I had the same result.
This function has been deployed and tested on multiple platforms without successful reproduction of the aforementioned bug, so I am doubtful that this is a CM-specific issue. Please try reproducing this on something other than a VM and document your findings here. If you can't reproduce it, then I will go ahead and close this issue. So, I am still unable to use Control+Down even when the host OS is Ubuntu, so it looks like it is not related to running in a VM. I'm going to see what values ncurses is specifically getting to look into this further. ### For some reason curses is reading a different value Control+Up is read as 566 instead of the expected 567; Control+Down is read as 525 instead of the expected 526. ### MDC It worked fine when using ssh to connect to the MDC from the Ubuntu VM, so I tried comparing some settings. localectl: the system locale is the same, but the X11 layout is US locally and n/a on the MDC. Tried using a different TERM environment by launching in tmux: same thing. Also confirmed TERM matches on the MDC and locally. ### Other notes Ran with Ubuntu as the host OS and a different physical keyboard: same result. Ran on multiple VM platforms (VirtualBox on Windows and Parallels on Mac). ### Conclusion So I do not know why there is this discrepancy, but accepting 525 and 566 seems like a reasonable solution. After reviewing the [curses constants](https://docs.python.org/3/library/curses.html#curses.ncurses_version#constants) again, I might have come up with something that works better. We can probably partially alleviate the incompatibility by adding the two separate keycodes together. There are no registered symbols for `CTRL` or `ALT`, but there are registered symbols for the arrow keys. So what we could do is determine the keycode for common keys like `CTRL` and `ALT`, create our own symbols, and then add that to the arrow key symbols from curses and compare input against that. I would like to avoid having to hard-code specific keycodes for specific tty setups, if possible, and shoot for a platform-agnostic solution as much as possible, since that reduces the complexity.
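The interim conclusion above (accept both the documented and the observed keycodes) can be sketched as follows. This is an illustrative workaround, not canopen-monitor's actual code; the constant and function names are hypothetical:

```python
# Assumption: 567/526 are the keycodes the application originally expected
# for Ctrl+Up/Ctrl+Down, while 566/525 are the values observed on the
# affected Ubuntu TTY setups. Accepting either value sidesteps the mismatch.
CTRL_UP_KEYCODES = frozenset({566, 567})
CTRL_DOWN_KEYCODES = frozenset({525, 526})


def pane_action_for(keycode: int) -> str:
    """Map a raw keycode (as returned by curses getch()) to a pane action."""
    if keycode in CTRL_UP_KEYCODES:
        return "select-previous-pane"
    if keycode in CTRL_DOWN_KEYCODES:
        return "select-next-pane"
    return "unhandled"
```

This is exactly the hard-coded approach the last comment hopes to avoid long term, but it matches the thread's conclusion that accepting 525 and 566 is a reasonable fix.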
2021-03-16T00:35:43
0.0
[]
[]
statelyai/xstate-python
statelyai__xstate-python-32
b8ed32b0eba94269460632db09cdc3bea6d2d04b
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile new file mode 100644 index 0000000..e0b973d --- /dev/null +++ b/.devcontainer/Dockerfile @@ -0,0 +1,22 @@ +# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.177.0/containers/python-3/.devcontainer/base.Dockerfile + +# [Choice] Python version: 3, 3.9, 3.8, 3.7, 3.6 +ARG VARIANT="3.9" +FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT} + +# [Option] Install Node.js +ARG INSTALL_NODE="true" +ARG NODE_VERSION="lts/*" +RUN if [ "${INSTALL_NODE}" = "true" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi + +# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image. +COPY requirements_dev.txt /tmp/pip-tmp/ +RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements_dev.txt \ + && rm -rf /tmp/pip-tmp + +# [Optional] Uncomment this section to install additional OS packages. +# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ +# && apt-get -y install --no-install-recommends <your-package-list-here> + +# [Optional] Uncomment this line to install global node packages. +# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1 \ No newline at end of file diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json new file mode 100644 index 0000000..b9a1a3b --- /dev/null +++ b/.devcontainer/devcontainer.json @@ -0,0 +1,48 @@ +// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at: +// https://github.com/microsoft/vscode-dev-containers/tree/v0.177.0/containers/python-3 +{ + "name": "XState Python", + "build": { + "dockerfile": "Dockerfile", + "context": "..", + "args": { + // Update 'VARIANT' to pick a Python version: 3, 3.6, 3.7, 3.8, 3.9 + "VARIANT": "3.7", + // Options + "INSTALL_NODE": "false", + } + }, + + // Set *default* container specific settings.json values on container create. + "settings": { + "terminal.integrated.shell.linux": "/bin/bash", + "python.pythonPath": "/usr/local/bin/python", + "python.languageServer": "Pylance", + "python.linting.enabled": true, + "python.linting.pylintEnabled": true, + "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8", + "python.formatting.blackPath": "/usr/local/py-utils/bin/black", + "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf", + "python.linting.banditPath": "/usr/local/py-utils/bin/bandit", + "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8", + "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy", + "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle", + "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle", + "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint" + }, + + // Add the IDs of extensions you want installed when the container is created. + "extensions": [ + "ms-python.python", + "ms-python.vscode-pylance" + ], + + // Use 'forwardPorts' to make a list of ports inside the container available locally. + // "forwardPorts": [], + + // Use 'postCreateCommand' to run commands after the container is created. + "postCreateCommand": "git submodule init && git submodule update", + + // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root. 
+ "remoteUser": "vscode" +} diff --git a/.vscode/tasks.json b/.vscode/tasks.json new file mode 100644 index 0000000..163a245 --- /dev/null +++ b/.vscode/tasks.json @@ -0,0 +1,13 @@ +{ + // See https://go.microsoft.com/fwlink/?LinkId=733558 + // for the documentation about the tasks.json format + "version": "2.0.0", + "tasks": [ + { + "label": "Run tests", + "type": "shell", + "command": "pytest", + "problemMatcher": [] + } + ] +} \ No newline at end of file diff --git a/README.md b/README.md index 846c08f..2c19be1 100644 --- a/README.md +++ b/README.md @@ -31,11 +31,22 @@ More advanced examples in [the "examples" folder](./examples) ## Testing +You can set up your development environment in two different ways. + +### Using [VS Code Remote Containers](https://code.visualstudio.com/docs/remote/containers) + +Note this requires VS Code and the Remote Containers extension (which uses Docker containers) + +1. [Open the folder in a container](https://code.visualstudio.com/docs/remote/containers#_quick-start-open-an-existing-folder-in-a-container). This will setup your environment with python (including python and pylance extensions), dependencies and download scxml tests. +1. Run `pytest` to run the tests! 👩‍🔬 (or run the `Run tests` task via VS Code) + +### Or on local drive + 1. Run `python3.7 -m venv .venv` to create a virtual environment 1. Run `source .venv/bin/activate` to go into that virtual environment 1. Run `pip install -r requirements_dev.txt` to install all of the dependencies in `requirements.txt` (which includes `pytest`) 1. Make sure test files are present and up to date by running `git submodule init` (first time) and `git submodule update` -1. Run `pytest` to run the tests! 👩‍🔬 +1. Run `pytest` to run the tests! 👩‍🔬 (or run the `Run tests` task via VS Code) ## SCXML diff --git a/package.py b/package.py deleted file mode 100644 index fc65889..0000000 --- a/package.py +++ /dev/null @@ -1,10 +0,0 @@ -def convert(my_name): - """ - Print a line about converting a notebook. - Args: - my_name (str): person's name - Returns: - None - """ - - print(f"I'll convert a notebook for you some day, {my_name}.") \ No newline at end of file diff --git a/requirements_dev.txt b/requirements_dev.txt index 1a5c10a..5142f78 100644 --- a/requirements_dev.txt +++ b/requirements_dev.txt @@ -1,5 +1,4 @@ pip==19.0.3 wheel==0.33.1 twine==1.13.0 -Js2Py==0.70 pytest diff --git a/setup.cfg b/setup.cfg index a1c8664..ff0eddd 100644 --- a/setup.cfg +++ b/setup.cfg @@ -18,8 +18,5 @@ classifiers = zip_safe = False include_package_data = True packages = xstate -install_requires = - ipython>=6 - nbformat>=4 - nbconvert>=5 - requests>=2 +install_requires = + Js2Py==0.70 \ No newline at end of file
Unnecessary dependencies In `setup.cfg` the following dependencies are listed ``` ipython>=6 nbformat>=4 nbconvert>=5 requests>=2 ``` Isn't that a mistake? I also see `Js2Py` being used in `scxml.py` but only listed in `requirements_dev.txt`. That seems wrong as well. Let me know if any of this is intended and then I'll fix in a PR.
The `setup.cfg` dependencies look wrong to me, too. And yes, `Js2Py` should be included.
2021-05-27T16:08:38
0.0
[]
[]
kevin1024/vcrpy
kevin1024__vcrpy-784
defad28771d99434a1fdb3b46994b21b14c36cc7
diff --git a/tox.ini b/tox.ini index e1476903..62035131 100644 --- a/tox.ini +++ b/tox.ini @@ -6,7 +6,7 @@ envlist = {py38,py39,py310,py311,py312}-{requests-urllib3-1,httplib2,urllib3-1,tornado4,boto3,aiohttp,httpx}, {py310,py311,py312}-{requests-urllib3-2,urllib3-2}, {pypy3}-{requests-urllib3-1,httplib2,urllib3-1,tornado4,boto3}, - {py310}-httpx019, + #{py310}-httpx019, cov-report diff --git a/vcr/patch.py b/vcr/patch.py index afcaab57..f69ae768 100644 --- a/vcr/patch.py +++ b/vcr/patch.py @@ -95,8 +95,8 @@ except ImportError: # pragma: no cover pass else: - _HttpxSyncClient_send = httpx.Client.send - _HttpxAsyncClient_send = httpx.AsyncClient.send + _HttpxSyncClient_send_single_request = httpx.Client._send_single_request + _HttpxAsyncClient_send_single_request = httpx.AsyncClient._send_single_request class CassettePatcherBuilder: @@ -307,11 +307,11 @@ def _httpx(self): else: from .stubs.httpx_stubs import async_vcr_send, sync_vcr_send - new_async_client_send = async_vcr_send(self._cassette, _HttpxAsyncClient_send) - yield httpx.AsyncClient, "send", new_async_client_send + new_async_client_send = async_vcr_send(self._cassette, _HttpxAsyncClient_send_single_request) + yield httpx.AsyncClient, "_send_single_request", new_async_client_send - new_sync_client_send = sync_vcr_send(self._cassette, _HttpxSyncClient_send) - yield httpx.Client, "send", new_sync_client_send + new_sync_client_send = sync_vcr_send(self._cassette, _HttpxSyncClient_send_single_request) + yield httpx.Client, "_send_single_request", new_sync_client_send def _urllib3_patchers(self, cpool, conn, stubs): http_connection_remover = ConnectionRemover( diff --git a/vcr/serializers/compat.py b/vcr/serializers/compat.py index 0ab358d2..65ba96fc 100644 --- a/vcr/serializers/compat.py +++ b/vcr/serializers/compat.py @@ -56,7 +56,7 @@ def convert_body_to_unicode(resp): If the request or responses body is bytes, decode it to a string (for python3 support) """ - if type(resp) is not dict: + if not isinstance(resp, dict): # Some of the tests just serialize and deserialize a string. 
return _convert_string_to_unicode(resp) else: diff --git a/vcr/stubs/httpx_stubs.py b/vcr/stubs/httpx_stubs.py index 515855ee..aa5e05a4 100644 --- a/vcr/stubs/httpx_stubs.py +++ b/vcr/stubs/httpx_stubs.py @@ -38,7 +38,7 @@ def _to_serialized_response(httpx_response): "status_code": httpx_response.status_code, "http_version": httpx_response.http_version, "headers": _transform_headers(httpx_response), - "content": httpx_response.content.decode("utf-8", "ignore"), + "content": httpx_response.content, } @@ -57,7 +57,7 @@ def _from_serialized_headers(headers): @patch("httpx.Response.close", MagicMock()) @patch("httpx.Response.read", MagicMock()) def _from_serialized_response(request, serialized_response, history=None): - content = serialized_response.get("content").encode() + content = serialized_response.get("content") response = httpx.Response( status_code=serialized_response.get("status_code"), request=request, @@ -106,30 +106,8 @@ def _record_responses(cassette, vcr_request, real_response): def _play_responses(cassette, request, vcr_request, client, kwargs): - history = [] - - allow_redirects = kwargs.get( - HTTPX_REDIRECT_PARAM.name, - HTTPX_REDIRECT_PARAM.default, - ) vcr_response = cassette.play_response(vcr_request) response = _from_serialized_response(request, vcr_response) - - while allow_redirects and 300 <= response.status_code <= 399: - next_url = response.headers.get("location") - if not next_url: - break - - vcr_request = VcrRequest("GET", next_url, None, dict(response.headers)) - vcr_request = cassette.find_requests_with_most_matches(vcr_request)[0][0] - - history.append(response) - # add cookies from response to session cookie store - client.cookies.extract_cookies(response) - - vcr_response = cassette.play_response(vcr_request) - response = _from_serialized_response(vcr_request, vcr_response, history) - return response @@ -141,6 +119,7 @@ async def _async_vcr_send(cassette, real_send, *args, **kwargs): return response real_response = await real_send(*args, **kwargs) + await real_response.aread() return _record_responses(cassette, vcr_request, real_response) @@ -160,6 +139,7 @@ def _sync_vcr_send(cassette, real_send, *args, **kwargs): return response real_response = real_send(*args, **kwargs) + real_response.read() return _record_responses(cassette, vcr_request, real_response)
[BUG] HTTPX Binary upload data unsupported I am trying to migrate a library from requests to httpx, and it seems the httpx portion has some bugs: the following cassette can't be used with httpx https://github.com/billdeitrick/pypco/blob/master/tests/cassettes/TestPublicRequestFunctions.test_upload_and_use_file.yaml because it tries to decode the binary string (which is nonsense)

```
Error
Traceback (most recent call last):
  File "pypco\pypco\pco.py", line 251, in request_response
    response = self._do_url_managed_request(method, url, payload, upload, **params)
  File "pypco\pypco\pco.py", line 223, in _do_url_managed_request
    return self._do_ratelimit_managed_request(method, url, payload, upload, **params)
  File "pypco\pypco\pco.py", line 184, in _do_ratelimit_managed_request
    response = self._do_timeout_managed_request(method, url, payload, upload, **params)
  File "pypco\pypco\pco.py", line 145, in _do_timeout_managed_request
    return self._do_request(method, url, payload, upload, **params)
  File "pypco\pypco\pco.py", line 110, in _do_request
    response = self.session.request(
  File ".virtualenvs\pypco-EZZlDlNc\lib\site-packages\httpx\_client.py", line 815, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File ".virtualenvs\pypco-EZZlDlNc\lib\site-packages\vcr\stubs\httpx_stubs.py", line 168, in _inner_send
    return _sync_vcr_send(cassette, real_send, *args, **kwargs)
  File ".virtualenvs\pypco-EZZlDlNc\lib\site-packages\vcr\stubs\httpx_stubs.py", line 155, in _sync_vcr_send
    vcr_request, response = _shared_vcr_send(cassette, real_send, *args, **kwargs)
  File ".virtualenvs\pypco-EZZlDlNc\lib\site-packages\vcr\stubs\httpx_stubs.py", line 81, in _shared_vcr_send
    vcr_request = _make_vcr_request(real_request, **kwargs)
  File ".virtualenvs\pypco-EZZlDlNc\lib\site-packages\vcr\stubs\httpx_stubs.py", line 72, in _make_vcr_request
    body = httpx_request.read().decode("utf-8")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 137: invalid start byte
```
Probably related: https://github.com/ktosiek/pytest-vcr/issues/46 Indeed it is. The same thing happens when trying to record a cassette with httpx, where binary request data causes a `UnicodeDecodeError`. Here's a simple reproduction script that does the same thing successfully both with requests and with httpx outside of a `vcr.use_cassette` context.

```python3
import httpx
import requests
import vcr

requests_response = requests.post('http://example.com', data=b'\xff')
print(requests_response.status_code)

httpx_response = httpx.post('http://example.com', data=b'\xff')
print(httpx_response.status_code)

with vcr.use_cassette('requests.yaml'):
    requests_response = requests.post('http://example.com', data=b'\xff')
    print(requests_response.status_code)

with vcr.use_cassette('httpx.yaml'):
    httpx_response = httpx.post('http://example.com', data=b'\xff')
    print(httpx_response.status_code)
```

Expected output:

```
200
200
200
200
```

Actual output:

```
200
200
200
Traceback (most recent call last):
  File "/Users/rberryhill/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/rberryhill/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/rberryhill/src/vcrbug/bug.py", line 18, in <module>
    httpx_response = httpx.post('http://example.com', data=b'\xff')
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/httpx/_api.py", line 304, in post
    return request(
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/httpx/_api.py", line 100, in request
    return client.request(
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/httpx/_client.py", line 821, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/vcr/stubs/httpx_stubs.py", line 168, in _inner_send
    return _sync_vcr_send(cassette, real_send, *args, **kwargs)
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/vcr/stubs/httpx_stubs.py", line 155, in _sync_vcr_send
    vcr_request, response = _shared_vcr_send(cassette, real_send, *args, **kwargs)
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/vcr/stubs/httpx_stubs.py", line 81, in _shared_vcr_send
    vcr_request = _make_vcr_request(real_request, **kwargs)
  File "/Users/rberryhill/src/vcrbug/venv/lib/python3.8/site-packages/vcr/stubs/httpx_stubs.py", line 72, in _make_vcr_request
    body = httpx_request.read().decode("utf-8")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```

This is with Python 3.8.13, httpx==0.23.3, vcrpy==4.2.1. Can we use `errors="surrogateescape"` here?
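To make the failure mode and the `surrogateescape` suggestion concrete, here is a minimal standalone sketch. It only illustrates the error handler's round-trip property; the patch above takes a different route and keeps cassette content as raw bytes rather than decoding it:

```python
raw = b"\xff"  # not valid UTF-8, exactly like the request body above

try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xff in position 0 ...

# The surrogateescape error handler losslessly round-trips arbitrary bytes
# through str, so a decode-for-storage step would no longer blow up:
text = raw.decode("utf-8", errors="surrogateescape")
assert text.encode("utf-8", errors="surrogateescape") == raw
```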
2023-12-08T04:02:42
0.0
[]
[]
kevin1024/vcrpy
kevin1024__vcrpy-739
4f70152e7ce510cde41cf071585cbdb481e4e8f2
diff --git a/vcr/matchers.py b/vcr/matchers.py index 98395b67..70e78910 100644 --- a/vcr/matchers.py +++ b/vcr/matchers.py @@ -2,9 +2,13 @@ import logging import urllib import xmlrpc.client +from string import hexdigits +from typing import List, Set from .util import read_body +_HEXDIG_CODE_POINTS: Set[int] = {ord(s.encode("ascii")) for s in hexdigits} + log = logging.getLogger(__name__) @@ -49,11 +53,17 @@ def raw_body(r1, r2): def body(r1, r2): - transformer = _get_transformer(r1) - r2_transformer = _get_transformer(r2) - if transformer != r2_transformer: - transformer = _identity - if transformer(read_body(r1)) != transformer(read_body(r2)): + transformers = list(_get_transformers(r1)) + if transformers != list(_get_transformers(r2)): + transformers = [] + + b1 = read_body(r1) + b2 = read_body(r2) + for transform in transformers: + b1 = transform(b1) + b2 = transform(b2) + + if b1 != b2: raise AssertionError @@ -72,6 +82,62 @@ def checker(headers): return checker +def _dechunk(body): + if isinstance(body, str): + body = body.encode("utf-8") + elif isinstance(body, bytearray): + body = bytes(body) + elif hasattr(body, "__iter__"): + body = list(body) + if body: + if isinstance(body[0], str): + body = ("".join(body)).encode("utf-8") + elif isinstance(body[0], bytes): + body = b"".join(body) + elif isinstance(body[0], int): + body = bytes(body) + else: + raise ValueError(f"Body chunk type {type(body[0])} not supported") + else: + body = None + + if not isinstance(body, bytes): + return body + + # Now decode chunked data format (https://en.wikipedia.org/wiki/Chunked_transfer_encoding) + # Example input: b"45\r\n<69 bytes>\r\n0\r\n\r\n" where int(b"45", 16) == 69. + CHUNK_GAP = b"\r\n" + BODY_LEN: int = len(body) + + chunks: List[bytes] = [] + pos: int = 0 + + while True: + for i in range(pos, BODY_LEN): + if body[i] not in _HEXDIG_CODE_POINTS: + break + + if i == 0 or body[i : i + len(CHUNK_GAP)] != CHUNK_GAP: + if pos == 0: + return body # i.e. assume non-chunk data + raise ValueError("Malformed chunked data") + + size_bytes = int(body[pos:i], 16) + if size_bytes == 0: # i.e. well-formed ending + return b"".join(chunks) + + chunk_data_first = i + len(CHUNK_GAP) + chunk_data_after_last = chunk_data_first + size_bytes + + if body[chunk_data_after_last : chunk_data_after_last + len(CHUNK_GAP)] != CHUNK_GAP: + raise ValueError("Malformed chunked data") + + chunk_data = body[chunk_data_first:chunk_data_after_last] + chunks.append(chunk_data) + + pos = chunk_data_after_last + len(CHUNK_GAP) + + def _transform_json(body): # Request body is always a byte string, but json.loads() wants a text # string. RFC 7159 says the default encoding is UTF-8 (although UTF-16 @@ -83,6 +149,7 @@ def _transform_json(body): _xml_header_checker = _header_checker("text/xml") _xmlrpc_header_checker = _header_checker("xmlrpc", header="User-Agent") _checker_transformer_pairs = ( + (_header_checker("chunked", header="Transfer-Encoding"), _dechunk), ( _header_checker("application/x-www-form-urlencoded"), lambda body: urllib.parse.parse_qs(body.decode("ascii")), @@ -92,16 +159,10 @@ def _transform_json(body): ) -def _identity(x): - return x - - -def _get_transformer(request): +def _get_transformers(request): for checker, transformer in _checker_transformer_pairs: if checker(request.headers): - return transformer - else: - return _identity + yield transformer def requests_match(r1, r2, matchers):
[>=4.3.1] Chunked uploads are broken with urllib3 v2 With vcrpy 5.0.0, when [updating urllib3 from `1.26.16` to `2.0.3`](https://gitlab.com/gitlabracadabra/gitlabracadabra/-/merge_requests/531), chunked uploads [are failing](https://gitlab.com/gitlabracadabra/gitlabracadabra/-/jobs/4563346377). This is because the body appears to vcrpy unchunked: ```python (Pdb) self.cassette.requests[18].url == self._vcr_request.url True (Pdb) self.cassette.requests[18].body b'45\r\nBlob bf9500cc6aa4fac1a75c958496c9e9caeceacb256113e28836fc5b3bd448f706\r\n0\r\n\r\n' (Pdb) [c for c in self._vcr_request.body] [b'Blob bf9500cc6aa4fac1a75c958496c9e9caeceacb256113e28836fc5b3bd448f706'] (Pdb) self._vcr_request.headers {'User-Agent': 'gitlabracadabra/1.13.0', 'Docker-Distribution-Api-Version': 'registry/2.0', 'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlJJWTM6SUhGMjpZUlhVOlJGTVI6NEVRRDpJWVZDOjZMSzM6MzVJTDpGRFJGOklKUE46Q0MySjpMTEIzIiwidHlwIjoiSldUIn0.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6InJvb3QvdGVzdF9yZWdpc3RyeS9vcGVyYXRvci1mcmFtZXdvcmsvb2xtIiwiYWN0aW9ucyI6WyJwdXNoIiwicHVsbCJdfV0sImp0aSI6ImM5ZGQ1YWQxLWQ0M2ItNDI0Yi1iZDUwLWU4NzJlYjYxZjYwNSIsImF1ZCI6ImNvbnRhaW5lcl9yZWdpc3RyeSIsInN1YiI6InJvb3QiLCJpc3MiOiJnaXRsYWItaXNzdWVyIiwiaWF0IjoxNjE3MTkzMjE5LCJuYmYiOjE2MTcxOTMyMTQsImV4cCI6MTYxNzE5MzUxOX0.b6PKFISAEPnDnfTKR8XVIHUOc6JYomUOB-u_w3QQIqF8gkb92n199zvFSwcpOZolcXjk1DvCfMtUkOvklzAgXyIwNcBx7t_o_SO4mCqWEnbfWxzhJKKSPm6YLAC0IoQbxCsPfA4CK7XEOCJU7eTLaCTqLvq6GvmpjtyLF-KgZV8eUV4zPDNYkhvUqcJXSQR_UHqQ4k32h8itBGYzfhykHSP2B2ZvbBDHLYmBlsDFtc7LJVUEEvKKhCR1VQppwgIoHwJwu7OR3R6I5_XO23NwBbPYvAQZC9twPBKNvoPfcToMzSF_keuAVodUgAyYMUHmnFPw5jX0-PcPt0L9j6a9rA', 'Cookie': 'perf_bar_enabled=true; experimentation_subject_id=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkltSmpaRFZrTXpRMkxXWXlPR010TkRZek1DMWhNMll5TFRNMlpEYzNZVFJrWWpsa09DST0iLCJleHAiOm51bGwsInB1ciI6ImNvb2tpZS5leHBlcmltZW50YXRpb25fc3ViamVjdF9pZCJ9fQ%3D%3D--403d642cd6f4aaa4365664b26bec713ff089b313', 'Transfer-Encoding': 'chunked'} (Pdb) self.cassette.requests[18].headers {'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlJJWTM6SUhGMjpZUlhVOlJGTVI6NEVRRDpJWVZDOjZMSzM6MzVJTDpGRFJGOklKUE46Q0MySjpMTEIzIiwidHlwIjoiSldUIn0.eyJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6InJvb3QvdGVzdF9yZWdpc3RyeS9vcGVyYXRvci1mcmFtZXdvcmsvb2xtIiwiYWN0aW9ucyI6WyJwdXNoIiwicHVsbCJdfV0sImp0aSI6ImM5ZGQ1YWQxLWQ0M2ItNDI0Yi1iZDUwLWU4NzJlYjYxZjYwNSIsImF1ZCI6ImNvbnRhaW5lcl9yZWdpc3RyeSIsInN1YiI6InJvb3QiLCJpc3MiOiJnaXRsYWItaXNzdWVyIiwiaWF0IjoxNjE3MTkzMjE5LCJuYmYiOjE2MTcxOTMyMTQsImV4cCI6MTYxNzE5MzUxOX0.b6PKFISAEPnDnfTKR8XVIHUOc6JYomUOB-u_w3QQIqF8gkb92n199zvFSwcpOZolcXjk1DvCfMtUkOvklzAgXyIwNcBx7t_o_SO4mCqWEnbfWxzhJKKSPm6YLAC0IoQbxCsPfA4CK7XEOCJU7eTLaCTqLvq6GvmpjtyLF-KgZV8eUV4zPDNYkhvUqcJXSQR_UHqQ4k32h8itBGYzfhykHSP2B2ZvbBDHLYmBlsDFtc7LJVUEEvKKhCR1VQppwgIoHwJwu7OR3R6I5_XO23NwBbPYvAQZC9twPBKNvoPfcToMzSF_keuAVodUgAyYMUHmnFPw5jX0-PcPt0L9j6a9rA', 'Cookie': 'experimentation_subject_id=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkltSmpaRFZrTXpRMkxXWXlPR010TkRZek1DMWhNMll5TFRNMlpEYzNZVFJrWWpsa09DST0iLCJleHAiOm51bGwsInB1ciI6ImNvb2tpZS5leHBlcmltZW50YXRpb25fc3ViamVjdF9pZCJ9fQ%3D%3D--403d642cd6f4aaa4365664b26bec713ff089b313; perf_bar_enabled=true', 'Docker-Distribution-Api-Version': 'registry/2.0', 'Transfer-Encoding': 'chunked'} ``` Notes: - The fix is probably similar to https://github.com/kevin1024/vcrpy/pull/723. 
- The affected cassette is [here](https://gitlab.com/gitlabracadabra/gitlabracadabra/-/blob/6130c586f1cd4d9dfb9055dd033ac59e9251c124/gitlabracadabra/tests/unit/fixtures/ProjectImageMirrors_image_mirror_digest.yaml#L687-719)
- I use [a custom body matcher](https://gitlab.com/gitlabracadabra/gitlabracadabra/-/blob/6130c586f1cd4d9dfb9055dd033ac59e9251c124/gitlabracadabra/tests/vcrfuncs.py#L54-68), but I don't think this is relevant here (except that it could be a way to work around the issue)
Hi @sathieu, I have not been successful in reproducing this issue yet. Could you provide a minimal reproducer in Python with VCR.py 5.0.0 and chunked transfer that works with urllib3 v1 but not v2? --- In case the log at https://gitlab.com/gitlabracadabra/gitlabracadabra/-/jobs/4563346377 disappears, I am attaching a full raw copy here: [full_job.log](https://github.com/kevin1024/vcrpy/files/11941698/full_job.log) @hartwork Here is a minimal reproducer: [testcase.tar.gz](https://github.com/kevin1024/vcrpy/files/11955752/testcase.tar.gz) works with `requirements-good.txt`, fails with `requirements-bad.txt`. Hi @sathieu thanks for these files, well done! :pray: I confirm that this works with urllib3 v1 but not v2. My understanding is that the request matcher that compares request bodies does not consider these arguably equal bodies as equal.
Comparision of bodies is done as follows: https://github.com/kevin1024/vcrpy/blob/4f70152e7ce510cde41cf071585cbdb481e4e8f2/vcr/matchers.py#L51-L57 So both sides are transformed before comparison. Which transformer that is is looked up here… https://github.com/kevin1024/vcrpy/blob/4f70152e7ce510cde41cf071585cbdb481e4e8f2/vcr/matchers.py#L99-L104 …and it is fed by this block here: https://github.com/kevin1024/vcrpy/blob/4f70152e7ce510cde41cf071585cbdb481e4e8f2/vcr/matchers.py#L85-L92 Now that we have that: Maybe one way to fix this is to **add a new transformer** there? I have a demo of something like that on a [new branch `issue-734-fix-body-matcher-for-chunked-requests`](https://github.com/kevin1024/vcrpy/compare/master...issue-734-fix-body-matcher-for-chunked-requests) now. Could you have a look? What do you think? One more thing: method `no_read` that you added — where can I learn more about that and its purpose, is that some common API? Thanks! @hartwork Thanks ! Adding a transformer would work, but the question is why the body is unchuncked with the `Transfer-Encoding: chunked` header still present? Maybe https://github.com/urllib3/urllib3/pull/2565 (see also https://github.com/urllib3/urllib3/issues/2291#issuecomment-871481619, especially *If Content-Length can be determined, set Content-Length and proceed not chunked.*)? NB: this `no_read` method was a test (for the `read` method). It was not needed, and you can ignore it. @sathieu thanks for your reply! I have fixed multiple bugs in the [demo in branch `issue-734-fix-body-matcher-for-chunked-requests`](https://github.com/kevin1024/vcrpy/compare/master...issue-734-fix-body-matcher-for-chunked-requests) in the meantime. Regarding the difference, with urllib3 v1 `request.body` is of type `bytearray` while with v2 it's of type `Stream`, i.e. your custom class.
2023-07-08T00:44:42
0.0
[]
[]
kevin1024/vcrpy
kevin1024__vcrpy-723
7eb235cd9c43ad84c666eb996ae5c1551350c76d
diff --git a/vcr/filters.py b/vcr/filters.py index 751c5d4e..ac07fb0c 100644 --- a/vcr/filters.py +++ b/vcr/filters.py @@ -153,9 +153,15 @@ def decompress_body(body, encoding): if not body: return "" if encoding == "gzip": - return zlib.decompress(body, zlib.MAX_WBITS | 16) + try: + return zlib.decompress(body, zlib.MAX_WBITS | 16) + except zlib.error: + return body # assumes that the data was already decompressed else: # encoding == 'deflate' - return zlib.decompress(body) + try: + return zlib.decompress(body) + except zlib.error: + return body # assumes that the data was already decompressed # Deepcopy here in case `headers` contain objects that could # be mutated by a shallow copy and corrupt the real response.
[4.3.1] urllib3>=2 causes vcrpy to save/read gzip responses uncompressed

### 1. Observed behavior

When using urllib3==2.0.3, the responses saved in the cassettes are stored uncompressed. Also, when reading a cassette, the responses will be interpreted as decompressed, even though the "Content-Encoding" is "gzip". The unfortunate consequence is that a test suite with cassettes recorded using urllib3 1.x doesn't work when upgrading urllib3 to 2.x, which defeats the purpose of the test suite.

### 2. How to reproduce

* In a virtualenv with vcrpy==4.3.1 and urllib3==2.0.3, run the following script:

```python
import vcr
import requests

with vcr.use_cassette("urllib3-v2.yaml"):
    requests.get("https://httpbin.org/gzip")
```

* Check that the response body is in **string format**, not gzip-compressed binary data:

```yaml
response:
  body:
    string: "{\n \"gzipped\": true, \n \"headers\": {\n \"Accept\": \"*/*\", \n \"Accept-Encoding\": \"gzip, deflate\", \n \"Host\": \"httpbin.org\", \n \"User-Agent\": \"python-requests/2.31.0\", \n \"X-Amzn-Trace-Id\": \"Root=1-648f439e-7634146d3ce1ea581683263d\"\n }, \n \"method\": \"GET\", \n \"origin\": \"139.47.88.20\"\n}\n"
```

* Install urllib3<2, rename the cassette file and run the script again.
* Check that now the response body in the new cassette is **gzip-compressed binary data**:

```yaml
response:
  body:
    string: !!binary |
      H4sIAKxDj2QC/z2PPW/DIBCGd/8KxBgFgm1a40odPERt1yiRshJ8wUgNUCBDE+W/BxzJ4z3vx93d
      K4SwvhnvYcQfKIUrrFFhE8gRQszsnscMBqXApzzj1WaFZ9NCydYqNxqri1za1miE869MsBi/XZzD
      U0r+ZCx1QS/aIUIggwY7O/x/mpwlAf6uEFPcNLStKVvMRzJcbpbsg1RAfsrReOdc+qzJOxdn3kpF
      eNv1rOmAvTEhBesafpKyFwrnhsfrvQvkJXP4a7t/dWMXjDa2sLrtKe+oELRhuHpUT6SXOIEmAQAA
```

### 3. Expected behavior

The cassettes should be written and read the same with urllib3 2.x and urllib3 1.x.
Hi @Terseus, thanks for the report! I confirm the change in cassette storage when playing with your example locally. I would like to add that gzip responses do work for new cassettes on the outside, but it does make existing cassettes break on upgrade as you reported, true.
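A minimal sketch of the zlib behavior the fix above relies on: gzip-wrapped bytes decompress with `wbits = zlib.MAX_WBITS | 16`, while already-decompressed bytes raise `zlib.error`, which the patched `decompress_body` catches and treats as a pass-through:

```python
import gzip
import zlib

payload = b'{"gzipped": true}'
compressed = gzip.compress(payload)

# wbits = MAX_WBITS | 16 tells zlib to expect a gzip header and trailer.
assert zlib.decompress(compressed, zlib.MAX_WBITS | 16) == payload

# Plain bytes (e.g. a cassette recorded with urllib3 2.x, where the body
# was stored already decompressed) fail the gzip header check instead:
try:
    zlib.decompress(payload, zlib.MAX_WBITS | 16)
except zlib.error:
    print("not gzip data; assume it was already decompressed")
```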
2023-06-21T00:50:31
0.0
[]
[]
kevin1024/vcrpy
kevin1024__vcrpy-721
7eb235cd9c43ad84c666eb996ae5c1551350c76d
diff --git a/vcr/stubs/__init__.py b/vcr/stubs/__init__.py index 7c7dbe0b..2a34ea10 100644 --- a/vcr/stubs/__init__.py +++ b/vcr/stubs/__init__.py @@ -170,6 +170,13 @@ def data(self): def drain_conn(self): pass + def stream(self, amt=65536, decode_content=None): + while True: + b = self._content.read(amt) + yield b + if not b: + break + class VCRConnection: # A reference to the cassette that's currently being patched in
[4.3.1] Missing support for Response.raw.stream() API from urllib3 v2 I've noticed this because [dns-lexicon](https://pypi.org/project/dns-lexicon/)'s test suite started failing after upgrading to urllib3-2. A trivial reproducer is:

```python
import requests
import vcr

with vcr.use_cassette("test.yaml"):
    r = requests.get("https://gentoo.org", stream=True)
    print(list(r.raw.stream()))
```

Without vcrpy, this program works. With vcrpy, it throws:

```pytb
Traceback (most recent call last):
  File "/tmp/test.py", line 7, in <module>
    print(list(r.raw.stream()))
               ^^^^^^^^^^^^
AttributeError: 'VCRHTTPResponse' object has no attribute 'stream'
```
I confirm that this works with `urllib3==1.26.16` and does not work with `urllib3==2.0.3`. Labelling as a bug…
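A usage sketch of the `stream()` shim added in the patch above; the wrapper class is hypothetical and uses `io.BytesIO` to stand in for the response's `_content` buffer:

```python
import io


class FakeVCRResponse:
    """Hypothetical stand-in exposing the stream() shim from the patch."""

    def __init__(self, content: bytes):
        self._content = io.BytesIO(content)

    def stream(self, amt=65536, decode_content=None):
        while True:
            b = self._content.read(amt)
            yield b
            if not b:
                break


chunks = list(FakeVCRResponse(b"x" * 100).stream(amt=64))
print([len(c) for c in chunks])  # [64, 36, 0]
```

One quirk worth noting: this shim yields a final empty chunk before stopping, so callers such as `list(r.raw.stream())` see one extra zero-length element at the end.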
2023-06-20T21:57:05
0.0
[]
[]
carltongibson/django-template-partials
carltongibson__django-template-partials-53
f93cbc6ff6dc5bad682d38f64d5231e1992cfdb5
diff --git a/CHANGELOG.md b/CHANGELOG.md index 2fef4b2..ffd16ba 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,8 @@ # CHANGELOG +* Deprecated passing arguments to the `inline` argument of the `partialdef` + tag. Either use `inline` or nothing. + ## 24.4 (2024-08-16) * Fixed a regression in 24.3 for inline partials with wrapping content. @@ -66,6 +69,8 @@ conventions. (All the built-in tags follow the `<name>` `end<name>` pattern.) Thanks to George Hickman. + Note: Passing `inline=True` has been deprecated in 24.5. Only pass `inline` instead. + * Adding `"template_partials"` to `INSTALLED_APPS` will now **automatically** configure the partials template loader. diff --git a/README.md b/README.md index e287a06..cfcf234 100644 --- a/README.md +++ b/README.md @@ -100,7 +100,7 @@ the content inside your partial, use the `inline` argument in that situation: ```html {% block main %} -{% partialdef inline-partial inline=True %} +{% partialdef inline-partial inline %} CONTENT {% endpartialdef %} {% endblock main %} diff --git a/src/template_partials/templatetags/partials.py b/src/template_partials/templatetags/partials.py index 86a3fbf..fe2ab0e 100644 --- a/src/template_partials/templatetags/partials.py +++ b/src/template_partials/templatetags/partials.py @@ -81,7 +81,7 @@ def partialdef_func(parser, token): Stores the nodelist in the context under the key "partial_contents" and can be retrieved using the {% partial %} tag. - The optional inline=True argument will render the contents of the partial + The optional ``inline`` argument will render the contents of the partial where it is defined. """ return _define_partial(parser, token, "endpartialdef") @@ -115,6 +115,12 @@ def _define_partial(parser, token, end_tag): # the inline argument is optional, so fallback to not using it inline = False + if inline and inline != "inline": + warnings.warn( + "The 'inline' argument does not have any parameters; either use 'inline' or remove it completely.", + DeprecationWarning, + ) + # Parse the content until the end tag (`endpartialdef` or deprecated `endpartial`) acceptable_endpartials = (end_tag, f"{end_tag} {partial_name}") nodelist = parser.parse(acceptable_endpartials)
Check the token for partialdef inline Hello, right now, in order for a partialdef to be inline, the check is very basic. The argument is only evaluated as a token, which is a string; the only condition is that some argument has to be there. Relevant code:

```python
try:
    inline = tokens[2]
except IndexError:
    # the inline argument is optional, so fallback to not using it
    inline = False
```

So even `{% partialdef some_partial False %}` will show the partial inline. I think the inline argument should be a boolean, so you can render that partial conditionally like so: `{% partialdef some_partial inline=some_true_or_false_condition %}`
I've also noticed that the partial name, when using the partial templatetag, is only parsed as a string. It would be nice if the arguments were evaluated as variables so they can be dynamic. `{% partial tab %}` # here `tab` is evaluated as a string, not as a variable. Hi @paulik123 — thanks for the report. To be honest I think the usage should just be `{% partialdef some_partial inline %}` — so passing the third argument triggers it. We could add a check for the literal value `'inline'` at that point, so I agree `False` is a little weird there. (If you want `False`, just don't pass the argument, which is the default.) Make sense? Fancy making a PR? Thank you for the quick response and this awesome library. Sure, I'll come back with a PR.
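A sketch of the stricter parsing suggested above, using Django's `token.split_contents()`. This is illustrative only; the shipped patch keeps the looser parsing and emits a `DeprecationWarning` for `inline=...` style arguments instead:

```python
from django.template import TemplateSyntaxError


def parse_partialdef_inline(token) -> bool:
    """Accept only `{% partialdef name %}` or `{% partialdef name inline %}`."""
    tokens = token.split_contents()  # e.g. ["partialdef", "some_partial", "inline"]
    if len(tokens) == 2:
        return False
    if len(tokens) == 3 and tokens[2] == "inline":
        return True
    raise TemplateSyntaxError(
        "partialdef accepts only the literal 'inline' as an optional flag"
    )
```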
2024-08-17T09:25:24
0.0
[]
[]
carltongibson/django-template-partials
carltongibson__django-template-partials-2
3255f1a5b63bb82757f0dd644bc2787e32f32a82
diff --git a/src/template_partials/templatetags/partials.py b/src/template_partials/templatetags/partials.py index 283d4a2..0ef5baa 100644 --- a/src/template_partials/templatetags/partials.py +++ b/src/template_partials/templatetags/partials.py @@ -13,6 +13,10 @@ def __init__(self, nodelist, origin, name): self.origin = origin self.name = name + def get_exception_info(self, exception, token): + template = self.origin.loader.get_template(self.origin.template_name) + return template.get_exception_info(exception, token) + def render(self, context): "Display stage -- can be called many times" with context.render_context.push_state(self):
'TemplateProxy' object has no attribute 'get_exception_info'

Happens when I used a model that wasn't migrated yet, so there was probably some underlying database error. The line that threw the error was the partial template tag.
OK, yes, that's for the debug template. (At least the bottom part of the traceback would be helpful.) `TemplateProxy` should likely implement something here, so as not to error. Fancy doing that?

Yes, I was already looking at what was needed.

Great. Thanks. For a first pass, something minimal would be enough. (I don't know that we need to copy too much from Django...)
2023-06-13T12:22:31
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-84
335c2887a32d8becfcd86b8f7fb4cc88019a5a4e
diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py index 6628e91..86a9c21 100644 --- a/pylsp_ruff/plugin.py +++ b/pylsp_ruff/plugin.py @@ -125,6 +125,9 @@ def pylsp_format_document(workspace: Workspace, document: Document) -> Generator source = document.source settings = load_settings(workspace=workspace, document_path=document.path) + if not settings.format_enabled: + return + new_text = run_ruff_format( settings=settings, document_path=document.path, document_source=source ) @@ -720,6 +723,7 @@ def load_settings(workspace: Workspace, document_path: str) -> PluginSettings: # Leave config to pyproject.toml return PluginSettings( enabled=plugin_settings.enabled, + format_enabled=plugin_settings.format_enabled, executable=plugin_settings.executable, unsafe_fixes=plugin_settings.unsafe_fixes, extend_ignore=plugin_settings.extend_ignore, diff --git a/pylsp_ruff/settings.py b/pylsp_ruff/settings.py index 2345ce1..27af16f 100644 --- a/pylsp_ruff/settings.py +++ b/pylsp_ruff/settings.py @@ -8,6 +8,7 @@ @dataclass class PluginSettings: enabled: bool = True + format_enabled: bool = True executable: Optional[str] = None config: Optional[str] = None line_length: Optional[int] = None
Allow disabling formatter

As someone only needing the linting part of ruff, I'd like to be able to disable the formatter, which conflicts with the formatter I do use.

I spent the better part of the day trying to figure out why black always broke the formatting when running inside of my editor but not when running from our pre-commit hook or manually in the terminal... Now I finally figured out that it was actually this plugin having formatting enabled by default. Ruff formatting is not 100% compatible with black, which basically stops me from using this plugin entirely in my workflow. Which sucks, because I do actually want the linting part of ruff. Otherwise I guess I'll need to set up another language server for which I can then entirely disable document formatting.
Looking into it.
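Once this lands, a client could keep linting while opting out of formatting. A hypothetical `didChangeConfiguration` payload, expressed as a Python dict — the `formatEnabled` key is inferred from the `format_enabled` field in the patch above plus the plugin's snake-to-camel-case conversion, so treat the exact spelling as an assumption:

```python
# Hypothetical client settings: keep ruff's diagnostics, drop its formatter.
settings = {
    "pylsp": {
        "plugins": {
            "ruff": {
                "enabled": True,         # keep linting diagnostics
                "formatEnabled": False,  # assumed key; skip ruff formatting
            }
        }
    }
}
```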
2024-02-29T10:49:58
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-62
9ba6e5117afec1913e2b39278554fcef5749d595
diff --git a/README.md b/README.md index 71f02ea..ec7a5bc 100644 --- a/README.md +++ b/README.md @@ -50,6 +50,12 @@ lspconfig.pylsp.setup { } ``` +## Code actions + +`python-lsp-ruff` supports code actions as given by possible fixes by `ruff`. `python-lsp-ruff` also supports [unsafe fixes](https://docs.astral.sh/ruff/linter/#fix-safety). +Fixes considered unsafe by `ruff` are marked `(unsafe)` in the code action. +The `Fix all` code action *only* consideres safe fixes. + ## Configuration Configuration options can be passed to the python-language-server. If a `pyproject.toml` diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py index a36babf..16a9c8f 100644 --- a/pylsp_ruff/plugin.py +++ b/pylsp_ruff/plugin.py @@ -238,9 +238,10 @@ def pylsp_code_actions( if diagnostic.data: # Has fix fix = converter.structure(diagnostic.data, RuffFix) - # Ignore fix if marked as unsafe and unsafe_fixes are disabled - if fix.applicability != "safe" and not settings.unsafe_fixes: - continue + if fix.applicability == "unsafe": + if not settings.unsafe_fixes: + continue + fix.message += " (unsafe)" if diagnostic.code == "I001": code_actions.append( @@ -359,6 +360,9 @@ def create_fix_all_code_action( title = "Ruff: Fix All" kind = CodeActionKind.SourceFixAll + # No unsafe fixes for 'Fix all', see https://github.com/python-lsp/python-lsp-ruff/issues/55 + settings.unsafe_fixes = False + new_text = run_ruff_fix(document=document, settings=settings) range = Range( start=Position(line=0, character=0),
Suggestion on code actions for unsafe rules

With the introduction of the `unsafe` auto-fixes in ruff, I feel having a single toggle in pylsp for enabling/disabling them is too blunt, or "too safe": it either makes all unsafe fixes disappear, or makes the `Fix all` code action quite unsafe in nature.

My idea is instead: if you leave `pylsp.plugins.ruff.unsafeFixes` as `False`, then the `Fix all` code action should fix all `safe` rules, but still make the __individual__ `unsafe` rules available and actionable on the respective diagnostic/location.

This strikes the balance of being safe with the `Fix all` code action, while still being able to discover _and_ make a choice on the `unsafe` fix. (If you instead set `pylsp.plugins.ruff.unsafeFixes` to `True`, then the plugin should work exactly as now, because then you signed up for the unsafety.)

With that being said, the `unsafe` code actions should probably indicate they are unsafe somehow.

If this isn't too crazy of an idea, I'd happily try to implement it :-)
I was also fiddling with the idea of adding `(unsafe)` to any unsafe code action. I don't like the idea, however, of `Fix all` not actually performing *all* possible fixes. Consider also this example:

```python
def f():
    a = t  # F821: Undefined name `t`
    a = 2  # F841: Local variable `a` is assigned but never used
```

Running `Fix all` without `--unsafe-fixes` would not fix any of the errors (which makes sense, since there is no safe fix for `F821` and `F841`). Now you could manually run the code action for `F841`, which removes `a = 2` *but not* `F821`. *With* the `--unsafe-fixes` option, `ruff` will fix the code as

```python
def f():
    pass
```

My point is that I would expect `Fix all` to do exactly what its name says, namely the exact same as `ruff check --fix --unsafe-fixes ...`. However, I am happy to hear the opinion of others on this.

> I was also fiddling with the idea of adding (unsafe) to any unsafe code action.

I think this would be more reasonable; it at least gives the programmer a visual reminder that the action is not safe.

> I don't like the idea however of Fix all not actually performing all possible fixes.

My idea was that `Fix all` would either fix everything or only fix `safe` actions, with and without `--unsafe-fixes` respectively. Instead, for cases _without `--unsafe-fixes`_, we extend the published code actions to _also_ include `unsafe` code actions, but **not** let them be a part of the `Fix all` code action (so it only fixes all `safe` rules, which is expected without the `--unsafe-fixes` flag).

This way they are visible in the editor, but you have to apply them manually on a rule-by-rule basis (which is why I thought having a visual reminder that the code action is `unsafe` would be helpful).

Simply put: _always_ run with `--unsafe-fixes`, and let `pylsp.plugins.ruff.unsafeFixes` only control whether `unsafe` fixes are included in the `Fix all` code action.
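The patch above settles on a middle ground. This compact sketch restates its logic outside the plugin (the dataclass and function names are invented for illustration): individual unsafe fixes are only offered when `unsafeFixes` is enabled and are then labelled, while `Fix all` never applies unsafe fixes.

```python
from dataclasses import dataclass


@dataclass
class Fix:
    message: str
    applicability: str  # "safe" or "unsafe", as reported by ruff


def quick_fix_title(code: str, fix: Fix, unsafe_fixes: bool):
    """Title for the per-diagnostic action, or None to offer no action."""
    if fix.applicability == "unsafe":
        if not unsafe_fixes:
            return None  # hide unsafe fixes entirely while the toggle is off
        return f"Ruff ({code}): {fix.message} (unsafe)"
    return f"Ruff ({code}): {fix.message}"


def fix_all_uses_unsafe(unsafe_fixes: bool) -> bool:
    # "Fix all" deliberately ignores the user's setting (parameter unused):
    # it only ever applies safe fixes.
    return False


print(quick_fix_title("F841", Fix("Remove assignment", "unsafe"), True))
# -> Ruff (F841): Remove assignment (unsafe)
```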
2023-11-21T08:47:22
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-38
07fa9ac0149dcb79534d656d56fbceaeb7baade9
diff --git a/README.md b/README.md index 5b250a5..d4969f0 100644 --- a/README.md +++ b/README.md @@ -66,5 +66,17 @@ the valid configuration keys: - `pylsp.plugins.ruff.select`: List of error codes to enable. - `pylsp.plugins.ruff.extendSelect`: Same as select, but append to existing error codes. - `pylsp.plugins.ruff.format`: List of error codes to fix during formatting. The default is `["I"]`, any additional codes are appended to this list. + - `pylsp.plugins.ruff.severities`: Dictionary of custom severity levels for specific codes, see [below](#custom-severities). For more information on the configuration visit [Ruff's homepage](https://beta.ruff.rs/docs/configuration/). + +## Custom severities + +By default all diagnostics are marked as warning, except for `"E999"` and all error codes starting with `"F"`, which are displayed as errors. +This default can be changed through the `pylsp.plugins.ruff.severities` option, which takes the error code as a key and any of +`"E"`, `"W"`, `"I"` and `"H"` to be displayed as errors, warnings, information and hints, respectively. +For more information on the diagnostic severities please refer to +[the official LSP reference](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#diagnosticSeverity). + +Note that `python-lsp-ruff` does *not* accept regex, and it will *not* check whether the error code exists. If the custom severity level is not displayed, +please check first that the error code is correct and that the given value is one of the possible keys from above. diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py index 50cf9ef..0ada4b7 100644 --- a/pylsp_ruff/plugin.py +++ b/pylsp_ruff/plugin.py @@ -53,6 +53,13 @@ "F841", # local variable `name` is assigned to but never used } +DIAGNOSTIC_SEVERITIES = { + "E": DiagnosticSeverity.Error, + "W": DiagnosticSeverity.Warning, + "I": DiagnosticSeverity.Information, + "H": DiagnosticSeverity.Hint, +} + @hookimpl def pylsp_settings(): @@ -91,7 +98,10 @@ def pylsp_format_document(workspace: Workspace, document: Document) -> Generator else: source = document.source - new_text = run_ruff_format(workspace, document.path, document_source=source) + settings = load_settings(workspace=workspace, document_path=document.path) + new_text = run_ruff_format( + settings=settings, document_path=document.path, document_source=source + ) # Avoid applying empty text edit if new_text == source: @@ -122,12 +132,13 @@ def pylsp_lint(workspace: Workspace, document: Document) -> List[Dict]: ------- List of dicts containing the diagnostics. 
""" - checks = run_ruff_check(workspace, document) - diagnostics = [create_diagnostic(c) for c in checks] + settings = load_settings(workspace, document.path) + checks = run_ruff_check(document=document, settings=settings) + diagnostics = [create_diagnostic(check=c, settings=settings) for c in checks] return converter.unstructure(diagnostics) -def create_diagnostic(check: RuffCheck) -> Diagnostic: +def create_diagnostic(check: RuffCheck, settings: PluginSettings) -> Diagnostic: # Adapt range to LSP specification (zero-based) range = Range( start=Position( @@ -146,6 +157,12 @@ def create_diagnostic(check: RuffCheck) -> Diagnostic: if check.code == "E999" or check.code[0] == "F": severity = DiagnosticSeverity.Error + # Override severity with custom severity if possible, use default otherwise + if settings.severities is not None: + custom_sev = settings.severities.get(check.code, None) + if custom_sev is not None: + severity = DIAGNOSTIC_SEVERITIES.get(custom_sev, severity) + tags = [] if check.code in UNNECESSITY_CODES: tags = [DiagnosticTag.Unnecessary] @@ -198,39 +215,48 @@ def pylsp_code_actions( has_organize_imports = False for diagnostic in diagnostics: - code_actions.append(create_disable_code_action(document, diagnostic)) + code_actions.append( + create_disable_code_action(document=document, diagnostic=diagnostic) + ) if diagnostic.data: # Has fix fix = converter.structure(diagnostic.data, RuffFix) if diagnostic.code == "I001": code_actions.append( - create_organize_imports_code_action(document, diagnostic, fix) + create_organize_imports_code_action( + document=document, diagnostic=diagnostic, fix=fix + ) ) has_organize_imports = True else: code_actions.append( - create_fix_code_action(document, diagnostic, fix), + create_fix_code_action( + document=document, diagnostic=diagnostic, fix=fix + ), ) - checks = run_ruff_check(workspace, document) + settings = load_settings(workspace=workspace, document_path=document.path) + checks = run_ruff_check(document=document, settings=settings) checks_with_fixes = [c for c in checks if c.fix] checks_organize_imports = [c for c in checks_with_fixes if c.code == "I001"] if not has_organize_imports and checks_organize_imports: check = checks_organize_imports[0] fix = check.fix # type: ignore - diagnostic = create_diagnostic(check) + diagnostic = create_diagnostic(check=check, settings=settings) code_actions.extend( [ - create_organize_imports_code_action(document, diagnostic, fix), - create_disable_code_action(document, diagnostic), + create_organize_imports_code_action( + document=document, diagnostic=diagnostic, fix=fix + ), + create_disable_code_action(document=document, diagnostic=diagnostic), ] ) if checks_with_fixes: code_actions.append( - create_fix_all_code_action(workspace, document), + create_fix_all_code_action(document=document, settings=settings), ) return converter.unstructure(code_actions) @@ -308,13 +334,13 @@ def create_organize_imports_code_action( def create_fix_all_code_action( - workspace: Workspace, document: Document, + settings: PluginSettings, ) -> CodeAction: title = "Ruff: Fix All" kind = CodeActionKind.SourceFixAll - new_text = run_ruff_fix(workspace, document) + new_text = run_ruff_fix(document=document, settings=settings) range = Range( start=Position(line=0, character=0), end=Position(line=len(document.lines), character=0), @@ -345,9 +371,11 @@ def create_text_edits(fix: RuffFix) -> List[TextEdit]: return edits -def run_ruff_check(workspace: Workspace, document: Document) -> List[RuffCheck]: +def run_ruff_check(document: 
Document, settings: PluginSettings) -> List[RuffCheck]: result = run_ruff( - workspace, document_path=document.path, document_source=document.source + document_path=document.path, + document_source=document.source, + settings=settings, ) try: result = json.loads(result) @@ -356,20 +384,21 @@ def run_ruff_check(workspace: Workspace, document: Document) -> List[RuffCheck]: return converter.structure(result, List[RuffCheck]) -def run_ruff_fix(workspace: Workspace, document: Document) -> str: +def run_ruff_fix(document: Document, settings: PluginSettings) -> str: result = run_ruff( - workspace, document_path=document.path, document_source=document.source, fix=True, + settings=settings, ) return result def run_ruff_format( - workspace: Workspace, document_path: str, document_source: str + settings: PluginSettings, + document_path: str, + document_source: str, ) -> str: - settings = load_settings(workspace, document_path) fixable_codes = ["I"] if settings.format: fixable_codes.extend(settings.format) @@ -377,9 +406,9 @@ def run_ruff_format( f"--fixable={','.join(fixable_codes)}", ] result = run_ruff( - workspace, - document_path, - document_source, + settings=settings, + document_path=document_path, + document_source=document_source, fix=True, extra_arguments=extra_arguments, ) @@ -387,7 +416,7 @@ def run_ruff_format( def run_ruff( - workspace: Workspace, + settings: PluginSettings, document_path: str, document_source: str, fix: bool = False, @@ -398,8 +427,8 @@ def run_ruff( Parameters ---------- - workspace : pyls.workspace.Workspace - Workspace to run ruff in. + settings : PluginSettings + Settings to use. document_path : str Path to file to run ruff on. document_source : str @@ -414,7 +443,6 @@ def run_ruff( ------- String containing the result in json format. """ - settings = load_settings(workspace, document_path) executable = settings.executable arguments = build_arguments(document_path, settings, fix, extra_arguments) @@ -558,6 +586,7 @@ def load_settings(workspace: Workspace, document_path: str) -> PluginSettings: extend_ignore=plugin_settings.extend_ignore, extend_select=plugin_settings.extend_select, format=plugin_settings.format, + severities=plugin_settings.severities, ) return plugin_settings diff --git a/pylsp_ruff/settings.py b/pylsp_ruff/settings.py index a4d47e1..7993183 100644 --- a/pylsp_ruff/settings.py +++ b/pylsp_ruff/settings.py @@ -24,6 +24,8 @@ class PluginSettings: format: Optional[List[str]] = None + severities: Optional[Dict[str, str]] = None + def to_camel_case(snake_str: str) -> str: components = snake_str.split("_")
How to set the diagnostics level for specific errors

For example, how can I set "E501" (line too long) to be a Hint-level diagnostic?
This is currently not possible, but I plan to implement this soon, seeing that this is requested for ruff-lsp as well: https://github.com/astral-sh/ruff-lsp/issues/82

Glad to hear that!
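Going by the README addition in the patch above (valid values `"E"`, `"W"`, `"I"`, `"H"`), the resulting configuration looks roughly like this, expressed as a Python dict mirroring the LSP settings payload:

```python
settings = {
    "pylsp": {
        "plugins": {
            "ruff": {
                "severities": {
                    "E501": "H",  # report line-too-long as a hint
                    "F401": "I",  # report unused imports as information
                }
            }
        }
    }
}
```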
2023-05-30T08:46:17
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-35
d1c229a3cbb4828e483bd86b1f0be1801b2c8ecc
diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py index e5f547d..5b44f09 100644 --- a/pylsp_ruff/plugin.py +++ b/pylsp_ruff/plugin.py @@ -193,26 +193,25 @@ def pylsp_code_actions( _context = converter.structure(context, CodeActionContext) diagnostics = _context.diagnostics - diagnostics_with_fixes = [d for d in diagnostics if d.data] code_actions = [] has_organize_imports = False - for diagnostic in diagnostics_with_fixes: - fix = converter.structure(diagnostic.data, RuffFix) - - if diagnostic.code == "I001": - code_actions.append( - create_organize_imports_code_action(document, diagnostic, fix) - ) - has_organize_imports = True - else: - code_actions.extend( - [ + for diagnostic in diagnostics: + code_actions.append(create_disable_code_action(document, diagnostic)) + + if diagnostic.data: # Has fix + fix = converter.structure(diagnostic.data, RuffFix) + + if diagnostic.code == "I001": + code_actions.append( + create_organize_imports_code_action(document, diagnostic, fix) + ) + has_organize_imports = True + else: + code_actions.append( create_fix_code_action(document, diagnostic, fix), - create_disable_code_action(document, diagnostic), - ] - ) + ) checks = run_ruff_check(workspace, document) checks_with_fixes = [c for c in checks if c.fix] @@ -222,8 +221,11 @@ def pylsp_code_actions( check = checks_organize_imports[0] fix = check.fix # type: ignore diagnostic = create_diagnostic(check) - code_actions.append( - create_organize_imports_code_action(document, diagnostic, fix), + code_actions.extend( + [ + create_organize_imports_code_action(document, diagnostic, fix), + create_disable_code_action(document, diagnostic), + ] ) if checks_with_fixes:
Expose CodeAction "Disable for this line" for non-autofixable errors

Currently the CodeAction "Disable for this line" is only present if the error is flagged as autofix-able, but I'd like to have it for non-autofixable errors too. It's a very minor convenience boost :-)

Thanks for a great plugin!
Magnus
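The disable action itself rewrites the offending line with a `# noqa` comment. A self-contained sketch of the three cases, using the `NOQA_REGEX` quoted in the plugin source later in this log (the `add_noqa` helper is invented for illustration):

```python
import re

# borrowed from the plugin (originally from ruff-lsp)
NOQA_REGEX = re.compile(
    r"(?i:# (?:(?:ruff|flake8): )?(?P<noqa>noqa))"
    r"(?::\s?(?P<codes>([A-Z]+[0-9]+(?:[,\s]+)?)+))?"
)


def add_noqa(line: str, code: str) -> str:
    match = NOQA_REGEX.search(line)
    if match and match.group("codes"):
        return f"{line},{code}"        # `foo  # noqa: OLD` -> `...: OLD,NEW`
    if match:
        return f"{line}: {code}"       # `foo  # noqa`      -> `...  # noqa: NEW`
    return f"{line}  # noqa: {code}"   # `foo`              -> `foo  # noqa: NEW`


print(add_noqa("import os", "F401"))   # import os  # noqa: F401
```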
2023-05-24T16:57:59
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-25
81ce5934594e93966158db939adadf3d23fe5b8e
diff --git a/README.md b/README.md index 32ec5b5..40c75b9 100644 --- a/README.md +++ b/README.md @@ -19,8 +19,12 @@ There also exists an [AUR package](https://aur.archlinux.org/packages/python-lsp # Usage -This plugin will disable `flake8`, `pycodestyle`, `pyflakes` and `mccabe` by default. +This plugin will disable `flake8`, `pycodestyle`, `pyflakes`, `mccabe` and `isort` by default. When enabled, all linting diagnostics will be provided by `ruff`. +Sorting of the imports through `ruff` when formatting is enabled by default. +The list of code fixes can be changed via the `pylsp.plugins.ruff.format` option. + +When enabled, sorting of imports when formatting will be provided by `ruff`. # Configuration @@ -44,5 +48,6 @@ the valid configuration keys: - `pylsp.plugins.ruff.perFileIgnores`: File-specific error codes to be ignored. - `pylsp.plugins.ruff.select`: List of error codes to enable. - `pylsp.plugins.ruff.extendSelect`: Same as select, but append to existing error codes. + - `pylsp.plugins.ruff.format`: List of error codes to fix during formatting. For more information on the configuration visit [Ruff's homepage](https://beta.ruff.rs/docs/configuration/). diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py index 3b369f4..343cb57 100644 --- a/pylsp_ruff/plugin.py +++ b/pylsp_ruff/plugin.py @@ -4,9 +4,8 @@ import sys from pathlib import PurePath from subprocess import PIPE, Popen -from typing import Dict, List +from typing import Dict, Generator, List, Optional -from lsprotocol.converters import get_converter from lsprotocol.types import ( CodeAction, CodeActionContext, @@ -26,8 +25,10 @@ from pylsp_ruff.ruff import Check as RuffCheck from pylsp_ruff.ruff import Fix as RuffFix +from pylsp_ruff.settings import PluginSettings, get_converter log = logging.getLogger(__name__) +logging.getLogger("blib2to3").setLevel(logging.ERROR) converter = get_converter() DIAGNOSTIC_SOURCE = "ruff" @@ -52,26 +53,52 @@ def pylsp_settings(): log.debug("Initializing pylsp_ruff") # this plugin disables flake8, mccabe, and pycodestyle by default - return { + settings = { "plugins": { - "ruff": { - "enabled": True, - "config": None, - "exclude": None, - "executable": "ruff", - "ignore": None, - "extendIgnore": None, - "lineLength": None, - "perFileIgnores": None, - "select": None, - "extendSelect": None, - }, + "ruff": PluginSettings(), "pyflakes": {"enabled": False}, "flake8": {"enabled": False}, "mccabe": {"enabled": False}, "pycodestyle": {"enabled": False}, + "pyls_isort": {"enabled": False}, } } + return converter.unstructure(settings) + + +@hookimpl(hookwrapper=True) +def pylsp_format_document(workspace: Workspace, document: Document) -> Generator: + """ + Provide formatting through ruff. + + Parameters + ---------- + workspace : pylsp.workspace.Workspace + Current workspace. + document : pylsp.workspace.Document + Document to apply ruff on. 
+ """ + log.debug(f"textDocument/formatting: {document}") + outcome = yield + result = outcome.get_result() + if result: + source = result[0]["newText"] + else: + source = document.source + + new_text = run_ruff_format(workspace, document.path, document_source=source) + + # Avoid applying empty text edit + if new_text == source: + return + + range = Range( + start=Position(line=0, character=0), + end=Position(line=len(document.lines), character=0), + ) + text_edit = TextEdit(range=range, new_text=new_text) + + outcome.force_result(converter.unstructure([text_edit])) @hookimpl @@ -312,7 +339,9 @@ def create_text_edits(fix: RuffFix) -> List[TextEdit]: def run_ruff_check(workspace: Workspace, document: Document) -> List[RuffCheck]: - result = run_ruff(workspace, document) + result = run_ruff( + workspace, document_path=document.path, document_source=document.source + ) try: result = json.loads(result) except json.JSONDecodeError: @@ -321,11 +350,42 @@ def run_ruff_check(workspace: Workspace, document: Document) -> List[RuffCheck]: def run_ruff_fix(workspace: Workspace, document: Document) -> str: - result = run_ruff(workspace, document, fix=True) + result = run_ruff( + workspace, + document_path=document.path, + document_source=document.source, + fix=True, + ) + return result + + +def run_ruff_format( + workspace: Workspace, document_path: str, document_source: str +) -> str: + settings = load_settings(workspace, document_path) + fixable_codes = ["I"] + if settings.format: + fixable_codes.extend(settings.format) + extra_arguments = [ + f"--fixable={','.join(fixable_codes)}", + ] + result = run_ruff( + workspace, + document_path, + document_source, + fix=True, + extra_arguments=extra_arguments, + ) return result -def run_ruff(workspace: Workspace, document: Document, fix: bool = False) -> str: +def run_ruff( + workspace: Workspace, + document_path: str, + document_source: str, + fix: bool = False, + extra_arguments: Optional[List[str]] = None, +) -> str: """ Run ruff on the given document and the given arguments. @@ -333,20 +393,25 @@ def run_ruff(workspace: Workspace, document: Document, fix: bool = False) -> str ---------- workspace : pyls.workspace.Workspace Workspace to run ruff in. - document : pylsp.workspace.Document - File to run ruff on. + document_path : str + Path to file to run ruff on. + document_source : str + Document source or to apply ruff on. + Needed when the source differs from the file source, e.g. during formatting. fix : bool Whether to run fix or no-fix. + extra_arguments : List[str] + Extra arguments to pass to ruff. Returns ------- String containing the result in json format. 
""" - config = load_config(workspace, document) - executable = config.pop("executable") - arguments = build_arguments(document, config, fix) + settings = load_settings(workspace, document_path) + executable = settings.executable + arguments = build_arguments(document_path, settings, fix, extra_arguments) - log.debug(f"Calling {executable} with args: {arguments} on '{document.path}'") + log.debug(f"Calling {executable} with args: {arguments} on '{document_path}'") try: cmd = [executable] cmd.extend(arguments) @@ -356,7 +421,7 @@ def run_ruff(workspace: Workspace, document: Document, fix: bool = False) -> str cmd = [sys.executable, "-m", "ruff"] cmd.extend(arguments) p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) - (stdout, stderr) = p.communicate(document.source.encode()) + (stdout, stderr) = p.communicate(document_source.encode()) if stderr: log.error(f"Error running ruff: {stderr.decode()}") @@ -364,7 +429,12 @@ def run_ruff(workspace: Workspace, document: Document, fix: bool = False) -> str return stdout.decode() -def build_arguments(document: Document, options: Dict, fix: bool = False) -> List[str]: +def build_arguments( + document_path: str, + settings: PluginSettings, + fix: bool = False, + extra_arguments: Optional[List[str]] = None, +) -> List[str]: """ Build arguments for ruff. @@ -372,53 +442,69 @@ def build_arguments(document: Document, options: Dict, fix: bool = False) -> Lis ---------- document : pylsp.workspace.Document Document to apply ruff on. - options : Dict - Dict of arguments to pass to ruff. + settings : PluginSettings + Settings to use for arguments to pass to ruff. + fix : bool + Whether to execute with --fix. + extra_arguments : List[str] + Extra arguments to pass to ruff. Returns ------- List containing the arguments. 
""" + args = [] # Suppress update announcements - args = ["--quiet"] + args.append("--quiet") # Use the json formatting for easier evaluation - args.extend(["--format=json"]) + args.append("--format=json") if fix: - args.extend(["--fix"]) + args.append("--fix") else: # Do not attempt to fix -> returns file instead of diagnostics - args.extend(["--no-fix"]) + args.append("--no-fix") # Always force excludes - args.extend(["--force-exclude"]) + args.append("--force-exclude") # Pass filename to ruff for per-file-ignores, catch unsaved - if document.path != "": - args.extend(["--stdin-filename", document.path]) - - # Convert per-file-ignores dict to right format - per_file_ignores = options.pop("per-file-ignores") - - if per_file_ignores: - for path, errors in per_file_ignores.items(): - errors = (",").join(errors) - if PurePath(document.path).match(path): - args.extend([f"--ignore={errors}"]) - - for arg_name, arg_val in options.items(): - if arg_val is None: - continue - arg = None - if isinstance(arg_val, list): - arg = "--{}={}".format(arg_name, ",".join(arg_val)) - else: - arg = "--{}={}".format(arg_name, arg_val) - args.append(arg) + if document_path != "": + args.append(f"--stdin-filename={document_path}") + + if settings.config: + args.append(f"--config={settings.config}") + + if settings.line_length: + args.append(f"--line-length={settings.line_length}") + + if settings.exclude: + args.append(f"--exclude={','.join(settings.exclude)}") + + if settings.select: + args.append(f"--select={','.join(settings.select)}") + + if settings.extend_select: + args.append(f"--extend-select={','.join(settings.extend_select)}") + + if settings.ignore: + args.append(f"--ignore={','.join(settings.ignore)}") + + if settings.extend_ignore: + args.append(f"--extend-ignore={','.join(settings.extend_ignore)}") + + if settings.per_file_ignores: + for path, errors in settings.per_file_ignores.items(): + if not PurePath(document_path).match(path): + continue + args.append(f"--ignore={','.join(errors)}") + + if extra_arguments: + args.extend(extra_arguments) args.extend(["--", "-"]) return args -def load_config(workspace: Workspace, document: Document) -> Dict: +def load_settings(workspace: Workspace, document_path: str) -> PluginSettings: """ Load settings from pyproject.toml file in the project path. @@ -426,18 +512,19 @@ def load_config(workspace: Workspace, document: Document) -> Dict: ---------- workspace : pylsp.workspace.Workspace Current workspace. - document : pylsp.workspace.Document - Document to apply ruff on. + document_path : str + Path to the document to apply ruff on. Returns ------- - Dictionary containing the settings to use when calling ruff. + PluginSettings read via lsp. """ config = workspace._config - _settings = config.plugin_settings("ruff", document_path=document.path) + _plugin_settings = config.plugin_settings("ruff", document_path=document_path) + plugin_settings = converter.structure(_plugin_settings, PluginSettings) pyproject_file = find_parents( - workspace.root_path, document.path, ["pyproject.toml"] + workspace.root_path, document_path, ["pyproject.toml"] ) # Check if pyproject is present, ignore user settings if toml exists @@ -446,32 +533,13 @@ def load_config(workspace: Workspace, document: Document) -> Dict: f"Found pyproject file: {str(pyproject_file[0])}, " + "skipping pylsp config." 
) - # Leave config to pyproject.toml - settings = { - "config": None, - "exclude": None, - "executable": _settings.get("executable", "ruff"), - "ignore": None, - "extend-ignore": _settings.get("extendIgnore", None), - "line-length": None, - "per-file-ignores": None, - "select": None, - "extend-select": _settings.get("extendSelect", None), - } - - else: - # Default values are given by ruff - settings = { - "config": _settings.get("config", None), - "exclude": _settings.get("exclude", None), - "executable": _settings.get("executable", "ruff"), - "ignore": _settings.get("ignore", None), - "extend-ignore": _settings.get("extendIgnore", None), - "line-length": _settings.get("lineLength", None), - "per-file-ignores": _settings.get("perFileIgnores", None), - "select": _settings.get("select", None), - "extend-select": _settings.get("extendSelect", None), - } + return PluginSettings( + enabled=plugin_settings.executable, + executable=plugin_settings.executable, + extend_ignore=plugin_settings.extend_ignore, + extend_select=plugin_settings.extend_select, + format=plugin_settings.format, + ) - return settings + return plugin_settings diff --git a/pylsp_ruff/settings.py b/pylsp_ruff/settings.py new file mode 100644 index 0000000..a4d47e1 --- /dev/null +++ b/pylsp_ruff/settings.py @@ -0,0 +1,55 @@ +from dataclasses import dataclass, fields +from typing import Dict, List, Optional + +import lsprotocol.converters +from cattrs.gen import make_dict_structure_fn, make_dict_unstructure_fn, override + + +@dataclass +class PluginSettings: + enabled: bool = True + executable: str = "ruff" + + config: Optional[str] = None + line_length: Optional[int] = None + + exclude: Optional[List[str]] = None + + select: Optional[List[str]] = None + extend_select: Optional[List[str]] = None + + ignore: Optional[List[str]] = None + extend_ignore: Optional[List[str]] = None + per_file_ignores: Optional[Dict[str, List[str]]] = None + + format: Optional[List[str]] = None + + +def to_camel_case(snake_str: str) -> str: + components = snake_str.split("_") + return components[0] + "".join(x.title() for x in components[1:]) + + +def to_camel_case_unstructure(converter, klass): + return make_dict_unstructure_fn( + klass, + converter, + **{a.name: override(rename=to_camel_case(a.name)) for a in fields(klass)}, + ) + + +def to_camel_case_structure(converter, klass): + return make_dict_structure_fn( + klass, + converter, + **{a.name: override(rename=to_camel_case(a.name)) for a in fields(klass)}, + ) + + +def get_converter(): + converter = lsprotocol.converters.get_converter() + unstructure_hook = to_camel_case_unstructure(converter, PluginSettings) + structure_hook = to_camel_case_structure(converter, PluginSettings) + converter.register_unstructure_hook(PluginSettings, unstructure_hook) + converter.register_structure_hook(PluginSettings, structure_hook) + return converter
Formatting capabilities

See the discussion in #22.
2023-04-03T16:21:35
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-22
cc61c3655d113f81a8bdefe2d82b918c10bc4659
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 9c55aa5..4fe1958 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -1,9 +1,4 @@ repos: - - repo: https://github.com/timothycrosley/isort - rev: 5.11.5 - hooks: - - id: isort - - repo: https://github.com/psf/black rev: 22.8.0 hooks: diff --git a/pylsp_ruff/plugin.py b/pylsp_ruff/plugin.py new file mode 100644 index 0000000..3b369f4 --- /dev/null +++ b/pylsp_ruff/plugin.py @@ -0,0 +1,477 @@ +import json +import logging +import re +import sys +from pathlib import PurePath +from subprocess import PIPE, Popen +from typing import Dict, List + +from lsprotocol.converters import get_converter +from lsprotocol.types import ( + CodeAction, + CodeActionContext, + CodeActionKind, + Diagnostic, + DiagnosticSeverity, + DiagnosticTag, + Position, + Range, + TextEdit, + WorkspaceEdit, +) +from pylsp import hookimpl +from pylsp._utils import find_parents +from pylsp.config.config import Config +from pylsp.workspace import Document, Workspace + +from pylsp_ruff.ruff import Check as RuffCheck +from pylsp_ruff.ruff import Fix as RuffFix + +log = logging.getLogger(__name__) +converter = get_converter() + +DIAGNOSTIC_SOURCE = "ruff" + +# shamelessly borrowed from: +# https://github.com/charliermarsh/ruff-lsp/blob/2a0e2ea3afefdbf00810b8df91030c1c6b59d103/ruff_lsp/server.py#L214 +NOQA_REGEX = re.compile( + r"(?i:# (?:(?:ruff|flake8): )?(?P<noqa>noqa))" + r"(?::\s?(?P<codes>([A-Z]+[0-9]+(?:[,\s]+)?)+))?" +) + +UNNECESSITY_CODES = { + "F401", # `module` imported but unused + "F504", # % format unused named arguments + "F522", # .format(...) unused named arguments + "F523", # .format(...) unused positional arguments + "F841", # local variable `name` is assigned to but never used +} + + +@hookimpl +def pylsp_settings(): + log.debug("Initializing pylsp_ruff") + # this plugin disables flake8, mccabe, and pycodestyle by default + return { + "plugins": { + "ruff": { + "enabled": True, + "config": None, + "exclude": None, + "executable": "ruff", + "ignore": None, + "extendIgnore": None, + "lineLength": None, + "perFileIgnores": None, + "select": None, + "extendSelect": None, + }, + "pyflakes": {"enabled": False}, + "flake8": {"enabled": False}, + "mccabe": {"enabled": False}, + "pycodestyle": {"enabled": False}, + } + } + + +@hookimpl +def pylsp_lint(workspace: Workspace, document: Document) -> List[Dict]: + """ + Register ruff as the linter. + + Parameters + ---------- + workspace : pylsp.workspace.Workspace + Current workspace. + document : pylsp.workspace.Document + Document to apply ruff on. + + Returns + ------- + List of dicts containing the diagnostics. + """ + checks = run_ruff_check(workspace, document) + diagnostics = [create_diagnostic(c) for c in checks] + return converter.unstructure(diagnostics) + + +def create_diagnostic(check: RuffCheck) -> Diagnostic: + # Adapt range to LSP specification (zero-based) + range = Range( + start=Position( + line=check.location.row - 1, + character=check.location.column - 1, + ), + end=Position( + line=check.end_location.row - 1, + character=check.end_location.column - 1, + ), + ) + + # Ruff intends to implement severity codes in the future, + # see https://github.com/charliermarsh/ruff/issues/645. 
+ severity = DiagnosticSeverity.Warning + if check.code == "E999" or check.code[0] == "F": + severity = DiagnosticSeverity.Error + + tags = [] + if check.code in UNNECESSITY_CODES: + tags = [DiagnosticTag.Unnecessary] + + return Diagnostic( + source=DIAGNOSTIC_SOURCE, + code=check.code, + range=range, + message=check.message, + severity=severity, + tags=tags, + data=check.fix, + ) + + +@hookimpl +def pylsp_code_actions( + config: Config, + workspace: Workspace, + document: Document, + range: Dict, + context: Dict, +) -> List[Dict]: + """ + Provide code actions through ruff. + + Parameters + ---------- + config : pylsp.config.config.Config + Current workspace. + workspace : pylsp.workspace.Workspace + Current workspace. + document : pylsp.workspace.Document + Document to apply ruff on. + range : Dict + Range argument given by pylsp. Not used here. + context : Dict + CodeActionContext given as dict. + + Returns + ------- + List of dicts containing the code actions. + """ + log.debug(f"textDocument/codeAction: {document} {range} {context}") + + _context = converter.structure(context, CodeActionContext) + diagnostics = _context.diagnostics + diagnostics_with_fixes = [d for d in diagnostics if d.data] + + code_actions = [] + has_organize_imports = False + + for diagnostic in diagnostics_with_fixes: + fix = converter.structure(diagnostic.data, RuffFix) + + if diagnostic.code == "I001": + code_actions.append( + create_organize_imports_code_action(document, diagnostic, fix) + ) + has_organize_imports = True + else: + code_actions.extend( + [ + create_fix_code_action(document, diagnostic, fix), + create_disable_code_action(document, diagnostic), + ] + ) + + checks = run_ruff_check(workspace, document) + checks_with_fixes = [c for c in checks if c.fix] + checks_organize_imports = [c for c in checks_with_fixes if c.code == "I001"] + + if not has_organize_imports and checks_organize_imports: + check = checks_organize_imports[0] + fix = check.fix # type: ignore + diagnostic = create_diagnostic(check) + code_actions.append( + create_organize_imports_code_action(document, diagnostic, fix), + ) + + if checks_with_fixes: + code_actions.append( + create_fix_all_code_action(workspace, document), + ) + + return converter.unstructure(code_actions) + + +def create_fix_code_action( + document: Document, + diagnostic: Diagnostic, + fix: RuffFix, +) -> CodeAction: + title = f"Ruff ({diagnostic.code}): {fix.message}" + kind = CodeActionKind.QuickFix + + text_edits = create_text_edits(fix) + workspace_edit = WorkspaceEdit(changes={document.uri: text_edits}) + return CodeAction( + title=title, + kind=kind, + diagnostics=[diagnostic], + edit=workspace_edit, + ) + + +def create_disable_code_action( + document: Document, + diagnostic: Diagnostic, +) -> CodeAction: + title = f"Ruff ({diagnostic.code}): Disable for this line" + kind = CodeActionKind.QuickFix + + line = document.lines[diagnostic.range.start.line].rstrip("\r\n") + match = NOQA_REGEX.search(line) + has_noqa = match is not None + has_codes = match and match.group("codes") is not None + # `foo # noqa: OLD` -> `foo # noqa: OLD,NEW` + if has_noqa and has_codes: + new_line = f"{line},{diagnostic.code}" + # `foo # noqa` -> `foo # noqa: NEW` + elif has_noqa: + new_line = f"{line}: {diagnostic.code}" + # `foo` -> `foo # noqa: NEW` + else: + new_line = f"{line} # noqa: {diagnostic.code}" + + range = Range( + start=Position(line=diagnostic.range.start.line, character=0), + end=Position(line=diagnostic.range.start.line, character=len(line)), + ) + text_edit = 
TextEdit(range=range, new_text=new_line) + workspace_edit = WorkspaceEdit(changes={document.uri: [text_edit]}) + return CodeAction( + title=title, + kind=kind, + diagnostics=[diagnostic], + edit=workspace_edit, + ) + + +def create_organize_imports_code_action( + document: Document, + diagnostic: Diagnostic, + fix: RuffFix, +) -> CodeAction: + title = f"Ruff: {fix.message}" + kind = CodeActionKind.SourceOrganizeImports + + text_edits = create_text_edits(fix) + workspace_edit = WorkspaceEdit(changes={document.uri: text_edits}) + return CodeAction( + title=title, + kind=kind, + diagnostics=[diagnostic], + edit=workspace_edit, + ) + + +def create_fix_all_code_action( + workspace: Workspace, + document: Document, +) -> CodeAction: + title = "Ruff: Fix All" + kind = CodeActionKind.SourceFixAll + + new_text = run_ruff_fix(workspace, document) + range = Range( + start=Position(line=0, character=0), + end=Position(line=len(document.lines), character=0), + ) + text_edit = TextEdit(range=range, new_text=new_text) + workspace_edit = WorkspaceEdit(changes={document.uri: [text_edit]}) + return CodeAction( + title=title, + kind=kind, + edit=workspace_edit, + ) + + +def create_text_edits(fix: RuffFix) -> List[TextEdit]: + edits = [] + for edit in fix.edits: + range = Range( + start=Position( + line=edit.location.row - 1, + character=edit.location.column, # yes, no -1 + ), + end=Position( + line=edit.end_location.row - 1, + character=edit.end_location.column, # yes, no -1 + ), + ) + edits.append(TextEdit(range=range, new_text=edit.content)) + return edits + + +def run_ruff_check(workspace: Workspace, document: Document) -> List[RuffCheck]: + result = run_ruff(workspace, document) + try: + result = json.loads(result) + except json.JSONDecodeError: + result = [] # type: ignore + return converter.structure(result, List[RuffCheck]) + + +def run_ruff_fix(workspace: Workspace, document: Document) -> str: + result = run_ruff(workspace, document, fix=True) + return result + + +def run_ruff(workspace: Workspace, document: Document, fix: bool = False) -> str: + """ + Run ruff on the given document and the given arguments. + + Parameters + ---------- + workspace : pyls.workspace.Workspace + Workspace to run ruff in. + document : pylsp.workspace.Document + File to run ruff on. + fix : bool + Whether to run fix or no-fix. + + Returns + ------- + String containing the result in json format. + """ + config = load_config(workspace, document) + executable = config.pop("executable") + arguments = build_arguments(document, config, fix) + + log.debug(f"Calling {executable} with args: {arguments} on '{document.path}'") + try: + cmd = [executable] + cmd.extend(arguments) + p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) + except Exception: + log.debug(f"Can't execute {executable}. Trying with '{sys.executable} -m ruff'") + cmd = [sys.executable, "-m", "ruff"] + cmd.extend(arguments) + p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) + (stdout, stderr) = p.communicate(document.source.encode()) + + if stderr: + log.error(f"Error running ruff: {stderr.decode()}") + + return stdout.decode() + + +def build_arguments(document: Document, options: Dict, fix: bool = False) -> List[str]: + """ + Build arguments for ruff. + + Parameters + ---------- + document : pylsp.workspace.Document + Document to apply ruff on. + options : Dict + Dict of arguments to pass to ruff. + + Returns + ------- + List containing the arguments. 
+ """ + # Suppress update announcements + args = ["--quiet"] + # Use the json formatting for easier evaluation + args.extend(["--format=json"]) + if fix: + args.extend(["--fix"]) + else: + # Do not attempt to fix -> returns file instead of diagnostics + args.extend(["--no-fix"]) + # Always force excludes + args.extend(["--force-exclude"]) + # Pass filename to ruff for per-file-ignores, catch unsaved + if document.path != "": + args.extend(["--stdin-filename", document.path]) + + # Convert per-file-ignores dict to right format + per_file_ignores = options.pop("per-file-ignores") + + if per_file_ignores: + for path, errors in per_file_ignores.items(): + errors = (",").join(errors) + if PurePath(document.path).match(path): + args.extend([f"--ignore={errors}"]) + + for arg_name, arg_val in options.items(): + if arg_val is None: + continue + arg = None + if isinstance(arg_val, list): + arg = "--{}={}".format(arg_name, ",".join(arg_val)) + else: + arg = "--{}={}".format(arg_name, arg_val) + args.append(arg) + + args.extend(["--", "-"]) + + return args + + +def load_config(workspace: Workspace, document: Document) -> Dict: + """ + Load settings from pyproject.toml file in the project path. + + Parameters + ---------- + workspace : pylsp.workspace.Workspace + Current workspace. + document : pylsp.workspace.Document + Document to apply ruff on. + + Returns + ------- + Dictionary containing the settings to use when calling ruff. + """ + config = workspace._config + _settings = config.plugin_settings("ruff", document_path=document.path) + + pyproject_file = find_parents( + workspace.root_path, document.path, ["pyproject.toml"] + ) + + # Check if pyproject is present, ignore user settings if toml exists + if pyproject_file: + log.debug( + f"Found pyproject file: {str(pyproject_file[0])}, " + + "skipping pylsp config." 
+ ) + + # Leave config to pyproject.toml + settings = { + "config": None, + "exclude": None, + "executable": _settings.get("executable", "ruff"), + "ignore": None, + "extend-ignore": _settings.get("extendIgnore", None), + "line-length": None, + "per-file-ignores": None, + "select": None, + "extend-select": _settings.get("extendSelect", None), + } + + else: + # Default values are given by ruff + settings = { + "config": _settings.get("config", None), + "exclude": _settings.get("exclude", None), + "executable": _settings.get("executable", "ruff"), + "ignore": _settings.get("ignore", None), + "extend-ignore": _settings.get("extendIgnore", None), + "line-length": _settings.get("lineLength", None), + "per-file-ignores": _settings.get("perFileIgnores", None), + "select": _settings.get("select", None), + "extend-select": _settings.get("extendSelect", None), + } + + return settings diff --git a/pylsp_ruff/ruff.py b/pylsp_ruff/ruff.py new file mode 100644 index 0000000..2f83f9b --- /dev/null +++ b/pylsp_ruff/ruff.py @@ -0,0 +1,31 @@ +from dataclasses import dataclass +from typing import List, Union + + +@dataclass +class Location: + row: int + column: int + + +@dataclass +class Edit: + content: str + location: Location + end_location: Location + + +@dataclass +class Fix: + edits: List[Edit] + message: str + + +@dataclass +class Check: + code: str + message: str + filename: str + location: Location + end_location: Location + fix: Union[Fix, None] = None diff --git a/pylsp_ruff/ruff_lint.py b/pylsp_ruff/ruff_lint.py deleted file mode 100644 index ec436fc..0000000 --- a/pylsp_ruff/ruff_lint.py +++ /dev/null @@ -1,276 +0,0 @@ -import json -import logging -import sys -from pathlib import PurePath -from subprocess import PIPE, Popen - -from pylsp import hookimpl, lsp -from pylsp._utils import find_parents -from pylsp.workspace import Document, Workspace - -log = logging.getLogger(__name__) - -UNNECESSITY_CODES = { - "F401", # `module` imported but unused - "F504", # % format unused named arguments - "F522", # .format(...) unused named arguments - "F523", # .format(...) unused positional arguments - "F841", # local variable `name` is assigned to but never used -} - - -@hookimpl -def pylsp_settings(): - # this plugin disables flake8, mccabe, and pycodestyle by default - return { - "plugins": { - "ruff": { - "enabled": True, - "config": None, - "exclude": None, - "executable": "ruff", - "ignore": None, - "extendIgnore": None, - "lineLength": None, - "perFileIgnores": None, - "select": None, - "extendSelect": None, - }, - "pyflakes": {"enabled": False}, - "flake8": {"enabled": False}, - "mccabe": {"enabled": False}, - "pycodestyle": {"enabled": False}, - } - } - - -@hookimpl -def pylsp_lint(workspace: Workspace, document: Document) -> list: - """ - Register ruff as the linter. - - Parameters - ---------- - workspace : pylsp.workspace.Workspace - Current workspace. - document : pylsp.workspace.Document - Document to apply ruff on. - - Returns - ------- - List of dicts containing the diagnostics. 
- """ - config = workspace._config - settings = config.plugin_settings("ruff", document_path=document.path) - log.debug(f"Got ruff settings: {settings}") - - args_dict = load_config(workspace, document) - ruff_executable = args_dict.pop("executable") - args = build_args(document, args_dict) - - output = run_ruff_lint(ruff_executable, document, args) - return parse_ruff_stdout(output) - - -def run_ruff_lint(ruff_executable: str, document: Document, arguments: list) -> str: - """ - Run ruff on the given document and the given arguments. - - Parameters - ---------- - ruff_executable : str - Path to the executable. - document : pylsp.workspace.Document - File to run ruff on. - arguments : list - Arguments to provide for ruff. - - Returns - ------- - String containing the result in json format. - """ - log.debug(f"Calling {ruff_executable} with args: {arguments} on '{document.path}'") - try: - cmd = [ruff_executable] - cmd.extend(arguments) - p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) - except Exception: - log.debug( - f"Can't execute {ruff_executable}. Trying with '{sys.executable} -m ruff'" - ) - cmd = [sys.executable, "-m", "ruff"] - cmd.extend(arguments) - p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) - (stdout, stderr) = p.communicate(document.source.encode()) - - if stderr: - log.error(f"Error running ruff: {stderr.decode()}") - - return stdout.decode() - - -def parse_ruff_stdout(stdout: str) -> list: - """ - Convert the ruff stdout to a list of Python dicts. - - See the flake8 implementation for the resulting format of dicts. - - Parameters - ---------- - stdout : str - Standard output of the ruff process. - - Returns - ------- - List of dicts containing the diagnostics. - """ - result_list = [] - - # Catch empty list - if stdout == "": - return [] - - diagnostics = json.loads(stdout) - - for diagnostic in diagnostics: - result_dict = {} - result_dict["source"] = "ruff" - result_dict["code"] = diagnostic["code"] - - # Convert range to LSP specification - result_dict["range"] = { # type: ignore - "start": { - # Adapt range to LSP specification (zero-based) - "line": diagnostic["location"]["row"] - 1, - "character": diagnostic["location"]["column"] - 1, - }, - "end": { - "line": diagnostic["end_location"]["row"] - 1, - "character": diagnostic["end_location"]["column"] - 1, - }, - } - - result_dict["message"] = diagnostic["message"] - - # Ruff intends to implement severity codes in the future, - # see https://github.com/charliermarsh/ruff/issues/645. - result_dict["severity"] = lsp.DiagnosticSeverity.Warning - if diagnostic["code"] == "E999" or diagnostic["code"][0] == "F": - result_dict["severity"] = lsp.DiagnosticSeverity.Error - - if diagnostic["code"] in UNNECESSITY_CODES: - result_dict["tags"] = [lsp.DiagnosticTag.Unnecessary] # type: ignore - - result_list.append(result_dict) - - return result_list - - -def build_args(document: Document, options: dict) -> list: - """ - Build arguments for ruff. - - Parameters - ---------- - document : pylsp.workspace.Document - Document to apply ruff on. - options : dict - Dict of arguments to pass to ruff. - - Returns - ------- - List containing the arguments. 
- """ - # Suppress update announcements - args = ["--quiet"] - # Use the json formatting for easier evaluation - args.extend(["--format=json"]) - # Do not attempt to fix -> returns file instead of diagnostics - args.extend(["--no-fix"]) - # Always force excludes - args.extend(["--force-exclude"]) - # Pass filename to ruff for per-file-ignores, catch unsaved - if document.path != "": - args.extend(["--stdin-filename", document.path]) - - # Convert per-file-ignores dict to right format - per_file_ignores = options.pop("per-file-ignores") - - if per_file_ignores: - for path, errors in per_file_ignores.items(): - errors = (",").join(errors) - if PurePath(document.path).match(path): - args.extend([f"--ignore={errors}"]) - - for arg_name, arg_val in options.items(): - if arg_val is None: - continue - arg = None - if isinstance(arg_val, list): - arg = "--{}={}".format(arg_name, ",".join(arg_val)) - else: - arg = "--{}={}".format(arg_name, arg_val) - args.append(arg) - - args.extend(["--", "-"]) - - return args - - -def load_config(workspace: Workspace, document: Document) -> dict: - """ - Load settings from pyproject.toml file in the project path. - - Parameters - ---------- - workspace : pylsp.workspace.Workspace - Current workspace. - document : pylsp.workspace.Document - Document to apply ruff on. - - Returns - ------- - Dictionary containing the settings to use when calling ruff. - """ - config = workspace._config - _settings = config.plugin_settings("ruff", document_path=document.path) - - pyproject_file = find_parents( - workspace.root_path, document.path, ["pyproject.toml"] - ) - - # Check if pyproject is present, ignore user settings if toml exists - if pyproject_file: - log.debug( - f"Found pyproject file: {str(pyproject_file[0])}, " - + "skipping pylsp config." - ) - - # Leave config to pyproject.toml - settings = { - "config": None, - "exclude": None, - "executable": _settings.get("executable", "ruff"), - "ignore": None, - "extend-ignore": _settings.get("extendIgnore", None), - "line-length": None, - "per-file-ignores": None, - "select": None, - "extend-select": _settings.get("extendSelect", None), - } - - else: - # Default values are given by ruff - settings = { - "config": _settings.get("config", None), - "exclude": _settings.get("exclude", None), - "executable": _settings.get("executable", "ruff"), - "ignore": _settings.get("ignore", None), - "extend-ignore": _settings.get("extendIgnore", None), - "line-length": _settings.get("lineLength", None), - "per-file-ignores": _settings.get("perFileIgnores", None), - "select": _settings.get("select", None), - "extend-select": _settings.get("extendSelect", None), - } - - return settings diff --git a/pyproject.toml b/pyproject.toml index 52e68fa..3a96869 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -13,15 +13,16 @@ readme = "README.md" requires-python = ">=3.7" license = {text = "MIT"} dependencies = [ - "ruff>=0.0.192", + "ruff>=0.0.260", "python-lsp-server", + "lsprotocol>=2022.0.0a1", ] [project.optional-dependencies] dev = ["pytest", "pre-commit"] [project.entry-points.pylsp] -ruff = "pylsp_ruff.ruff_lint" +ruff = "pylsp_ruff.plugin" [project.urls] "Homepage" = "https://github.com/python-lsp/python-lsp-ruff"
Surface Ruff auto-fix capabilities as Quick Fix actions
I don't think pylsp supports quick fix actions, but it supports `documentFormatting`. On that side note, we need to add `--no-fix` to the ruff call for linting, because otherwise ruff will return the new file instead of a json dict with diagnostics.

Actually, I just went through the log of pylsp and found this:

```
2022-12-11 16:48:17,187 CET - INFO - pylsp.python_lsp - Server capabilities: {'codeActionProvider': True, ...
```

It at least says that code actions are available, but I need to look into that.

I think it does; they are called code actions. For example, they are used by [pylsp-rope](https://github.com/python-rope/pylsp-rope).

> I think it does, they are called code actions. For example, they are used by [pylsp-rope](https://github.com/python-rope/pylsp-rope).

Can we add it as a feature to https://github.com/python-lsp/python-lsp-server#lsp-server-features?

Yeah sorry, I believe Quick Fix is a "kind" of Code Action.

> Can we add it as a feature to https://github.com/python-lsp/python-lsp-server#lsp-server-features?

Sure, please open a pull request for that and I'll merge it.

> On that side note, we need to add --no-fix to the ruff call for linting because otherwise ruff will return the new file instead of json dict with diagnostics.

Yeah, we should pass `--no-fix`, that sounds right.

If I understand correctly, it is planned for the plugin to provide fixes from ruff as lsp actions in the future, right? That'd be great, since, if I'm correct, `python-lsp-server` and most other plugins don't provide functionality for that (e.g. simple stuff like removing trailing whitespace), and it's very tiresome to fix all that stuff by hand when it could easily be done by the lsp.

I have been letting it slide for a while since there seemed to be not much interest, but I can start implementing it now. Should this be done when applying `textDocument/formatting` or should it be `textDocument/codeAction` only?

I think it should be implemented as code actions, not auto-formatting, so that users can opt in to the suggested fixes. Furthermore, I think most people understand auto-formatting as providing purely stylistic rewriting to make the code look better, not actual changes without their consent.

I agree 👍 I wonder, though: are there any methods in the lsp reference to apply all code actions?

Don't know about that, sorry.

> I agree 👍 I wonder though, are there any methods in the lsp reference to apply all code actions?

I don't know much about lsp's, but if it is of any help, this is an example of what hls (the Haskell lsp) does in that regard :)

<img width="660" alt="Screenshot 2023-02-10 at 7 35 16 p m" src="https://user-images.githubusercontent.com/79347623/218231720-9b577e91-e64a-4fdf-bfa7-93ab01571b88.png">
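The `--no-fix` point above is the crux of the stdin protocol: with `--fix`, ruff prints the rewritten file, while with `--no-fix` it prints JSON diagnostics. A minimal sketch of the lint invocation, mirroring the arguments assembled in `build_arguments` in the patch above (these flags match the ruff version pinned there; newer ruff releases changed the CLI):

```python
import subprocess
import sys


def run_ruff_lint(source: str, path: str) -> str:
    """Return ruff's JSON diagnostics for `source`, fed via stdin."""
    cmd = [
        sys.executable, "-m", "ruff",
        "--quiet",                 # suppress update announcements
        "--format=json",           # machine-readable diagnostics
        "--no-fix",                # print diagnostics, not the fixed file
        "--force-exclude",
        "--stdin-filename", path,  # enables per-file-ignores matching
        "--", "-",
    ]
    proc = subprocess.run(cmd, input=source.encode(), capture_output=True)
    return proc.stdout.decode()


# requires a compatible ruff to be installed in the environment
print(run_ruff_lint("import os\n", "example.py"))
```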
2023-03-18T23:40:39
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-20
500b626a775f6a38a76bb12e16796161413c35ba
diff --git a/README.md b/README.md index 00d9638..09f0df0 100644 --- a/README.md +++ b/README.md @@ -25,8 +25,9 @@ When enabled, all linting diagnostics will be provided by `ruff`. Configuration options can be passed to the python-language-server. If a `pyproject.toml` file is present in the project, `python-lsp-ruff` will use these configuration options. -Note that any configuration options passed to ruff via `pylsp` are ignored if the project has -a `pyproject.toml`. +Note that any configuration options (except for `extendIgnore` and `extendSelect`, see +[this issue](https://github.com/python-lsp/python-lsp-ruff/issues/19)) passed to ruff via +`pylsp` are ignored if the project has a `pyproject.toml`. The plugin follows [python-lsp-server's configuration](https://github.com/python-lsp/python-lsp-server/#configuration). These are @@ -37,6 +38,7 @@ the valid configuration keys: - `pylsp.plugins.ruff.exclude`: Exclude files from being checked by `ruff`. - `pylsp.plugins.ruff.executable`: Path to the `ruff` executable. Assumed to be in PATH by default. - `pylsp.plugins.ruff.ignore`: Error codes to ignore. + - `pylsp.plugins.ruff.extendIgnore`: Same as ignore, but append to existing ignores. - `pylsp.plugins.ruff.lineLength`: Set the line-length for length checks. - `pylsp.plugins.ruff.perFileIgnores`: File-specific error codes to be ignored. - `pylsp.plugins.ruff.select`: List of error codes to enable. diff --git a/pylsp_ruff/ruff_lint.py b/pylsp_ruff/ruff_lint.py index e0e74ba..ec436fc 100644 --- a/pylsp_ruff/ruff_lint.py +++ b/pylsp_ruff/ruff_lint.py @@ -30,6 +30,7 @@ def pylsp_settings(): "exclude": None, "executable": "ruff", "ignore": None, + "extendIgnore": None, "lineLength": None, "perFileIgnores": None, "select": None, @@ -251,10 +252,11 @@ def load_config(workspace: Workspace, document: Document) -> dict: "exclude": None, "executable": _settings.get("executable", "ruff"), "ignore": None, + "extend-ignore": _settings.get("extendIgnore", None), "line-length": None, "per-file-ignores": None, "select": None, - "extend-select": None, + "extend-select": _settings.get("extendSelect", None), } else: @@ -264,6 +266,7 @@ def load_config(workspace: Workspace, document: Document) -> dict: "exclude": _settings.get("exclude", None), "executable": _settings.get("executable", "ruff"), "ignore": _settings.get("ignore", None), + "extend-ignore": _settings.get("extendIgnore", None), "line-length": _settings.get("lineLength", None), "per-file-ignores": _settings.get("perFileIgnores", None), "select": _settings.get("select", None), diff --git a/pyproject.toml b/pyproject.toml index 0d7e807..52e68fa 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,7 +7,7 @@ name = "python-lsp-ruff" authors = [ {name = "Julian Hossbach", email = "[email protected]"} ] -version = "1.1.0" +version = "1.2.0" description = "Ruff linting plugin for pylsp" readme = "README.md" requires-python = ">=3.7"
Can't set ignored error codes I'm sending the following configuration to pylsp via the didChangeConfiguration notification but I don't get the desired effect of disabling error code D101: ```json { "pylsp": { "plugins": { "pycodestyle": { "enabled": false }, "pyflakes": { "enabled": false }, "ruff": { "enabled": true, "ignore": [ "D101" ] } } } } ``` However, if I switch `pylsp.ruff.enabled` to `false`, the plugin is disabled, so the editor part of my setup must be working.
Please enable debug messages for pylsp and post the log messages here, i.e. by starting pylsp with `pylsp -vvv --log-file /tmp/lsp.log` Thx for the log @astoff. This is the relevant line: ``` 2023-02-28 11:58:43,096 CET - DEBUG - pylsp_ruff.ruff_lint - Got ruff settings: {'extendSelect': None, 'enabled': True, 'ignore': ['D101'], 'perFileIgnores': None, 'lineLength': None, 'exclude': None, 'executable': 'ruff', 'select': None, 'config': None} 2023-02-28 11:58:43,097 CET - DEBUG - pylsp_ruff.ruff_lint - Found pyproject file: /.../pyproject.toml, skipping pylsp config. ``` This was intentionally set in order to leave any configuration to be found by ruff itself. There are two options here: - You can add `D101` to your ignore/extend-ignore list in the `pyproject.toml` file. If you are not the only one working on this project this would be advantageous in that it is editor-independent. However if you only want it specifically for your editor option two might be relevant: - We add a config entry called `ignore-extended` and make it independent of any `pyproject.toml` file present. I am reluctant to add the latter functionality as this would likely lead to confusing behaviour Thanks for looking into this. Indeed, I don't want to change the pyproject.toml file, and my intention was to apply some extra settings on top of it. Let me explain a bit my workflow. The pyproject.toml file has all the nitty-gritty rules checked by CI and that need to be fulfilled before my code is ready to merge. I can also run them locally with `make lint` when I'm polishing up my work. On the other hand, when I'm actually coding things up, I don't want to get nagged by details; I only want to see the diagnostics that help me _write_ the code. I'm not sure which confusing behavior you foresee, but it seems very natural and customary to me that server settings > project settings > system settings get merged, with that order of precedence. I had the same issue once. How about we include any `extendIgnore` and `extendSelect` options regardless of config files present? That would work for me, yes. :-)
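For reference, a sketch of the behaviour agreed on above, using the settings keys built by this plugin's `load_config` (visible in the patch): when a `pyproject.toml` is present, everything is left to ruff itself except the `extend-*` options, which still come from the pylsp (editor) config:

```python
# Sketch of the agreed behaviour, mirroring the settings dict in load_config:
# with a pyproject.toml found, ignore/select are left to ruff, but the
# extend-* options are still forwarded from the editor configuration.
_settings = {"extendIgnore": ["D101"], "extendSelect": None}

settings = {
    "ignore": None,   # left to pyproject.toml
    "select": None,   # left to pyproject.toml
    "extend-ignore": _settings.get("extendIgnore", None),  # ['D101']
    "extend-select": _settings.get("extendSelect", None),  # None
}
```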
2023-03-01T15:52:41
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-18
cd8c3bc81a94dcad6511d548d895e55d54510ba2
diff --git a/README.md b/README.md index 4221b45..00d9638 100644 --- a/README.md +++ b/README.md @@ -40,3 +40,6 @@ the valid configuration keys: - `pylsp.plugins.ruff.lineLength`: Set the line-length for length checks. - `pylsp.plugins.ruff.perFileIgnores`: File-specific error codes to be ignored. - `pylsp.plugins.ruff.select`: List of error codes to enable. + - `pylsp.plugins.ruff.extendSelect`: Same as select, but append to existing error codes. + +For more information on the configuration visit [Ruff's homepage](https://beta.ruff.rs/docs/configuration/). diff --git a/pylsp_ruff/ruff_lint.py b/pylsp_ruff/ruff_lint.py index ae5a960..e0e74ba 100644 --- a/pylsp_ruff/ruff_lint.py +++ b/pylsp_ruff/ruff_lint.py @@ -33,6 +33,7 @@ def pylsp_settings(): "lineLength": None, "perFileIgnores": None, "select": None, + "extendSelect": None, }, "pyflakes": {"enabled": False}, "flake8": {"enabled": False}, @@ -253,6 +254,7 @@ def load_config(workspace: Workspace, document: Document) -> dict: "line-length": None, "per-file-ignores": None, "select": None, + "extend-select": None, } else: @@ -265,6 +267,7 @@ def load_config(workspace: Workspace, document: Document) -> dict: "line-length": _settings.get("lineLength", None), "per-file-ignores": _settings.get("perFileIgnores", None), "select": _settings.get("select", None), + "extend-select": _settings.get("extendSelect", None), } return settings diff --git a/pyproject.toml b/pyproject.toml index a5ca030..0d7e807 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,7 +7,7 @@ name = "python-lsp-ruff" authors = [ {name = "Julian Hossbach", email = "[email protected]"} ] -version = "1.0.5" +version = "1.1.0" description = "Ruff linting plugin for pylsp" readme = "README.md" requires-python = ">=3.7"
'E501 Line too long': Not showing in-line I am using the Helix editor with pylsp as the language server and python-lsp-ruff as a plugin for ruff. Most diagnostics appear inline in the editor as expected except E501 which is related to line length. Running ruff directly on the violating file shows the error in the terminal. ```shell ruff test.py test.py:3:89: E501 Line too long (100 > 88 characters) ``` The log from pylsp is here: ``` 2023-02-24 12:50:41,955 IST - DEBUG - pylsp.config.config - pylsp_lint [hook] config: <pylsp.config.config.Config object at 0x104791f50> workspace: <pylsp.workspace.Workspace object at 0x104830ad0> document: file:///Users/rkshthrmsh/ws/sandbox/test.py is_saved: True 2023-02-24 12:50:41,956 IST - DEBUG - pylsp.config.config - Got user config from PyCodeStyleConfig: {} 2023-02-24 12:50:41,957 IST - DEBUG - pylsp.config.config - Got project config from PyCodeStyleConfig: {} 2023-02-24 12:50:41,957 IST - DEBUG - pylsp.config.config - With configuration: {'plugins': {'pyflakes': {'enabled': False}, 'pydocstyle': {'enabled': False}, 'rope_rename': {'enabled': False}, 'ruff': {'exclude': None, 'ignore': None, 'select': ['D'], 'lineLength': None, 'perFileIgnores': None, 'executable': 'ruff', 'config': None, 'enabled': True}, 'autopep8': {'enabled': False}, 'flake8': {'enabled': False}, 'preload': {'modules': ['OpenGL', 'PIL', 'array', 'audioop', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'collections', 'datetime', 'errno', 'exceptions', 'gc', 'imageop', 'imp', 'itertools', 'marshal', 'math', 'matplotlib', 'mmap', 'mpmath', 'msvcrt', 'networkx', 'nose', 'nt', 'numpy', 'operator', 'os', 'os.path', 'pandas', 'parser', 'rgbimg', 'scipy', 'signal', 'skimage', 'sklearn', 'statsmodels', 'strop', 'sympy', 'sys', 'thread', 'time', 'wx', 'xxsubtype', 'zipimport', 'zlib']}, 'rope_autoimport': {'enabled': False, 'memory': False}, 'jedi_completion': {'fuzzy': True, 'include_class_objects': True, 'include_function_objects': True}, 'rope_completion': {'enabled': False, 'eager': False}, 'pycodestyle': {'enabled': False}, 'yapf': {'enabled': True}, 'pylint': {'enabled': False, 'args': [], 'executable': None}, 'mccabe': {'enabled': False}}, 'rope': {'extensionModules': ['OpenGL', 'PIL', 'array', 'audioop', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'collections', 'datetime', 'errno', 'exceptions', 'gc', 'imageop', 'imp', 'itertools', 'marshal', 'math', 'matplotlib', 'mmap', 'mpmath', 'msvcrt', 'networkx', 'nose', 'nt', 'numpy', 'operator', 'os', 'os.path', 'pandas', 'parser', 'rgbimg', 'scipy', 'signal', 'skimage', 'sklearn', 'statsmodels', 'strop', 'sympy', 'sys', 'thread', 'time', 'wx', 'xxsubtype', 'zipimport', 'zlib']}} 2023-02-24 12:50:41,958 IST - DEBUG - pylsp_ruff.ruff_lint - Got ruff settings: {'exclude': None, 'ignore': None, 'select': ['D'], 'lineLength': None, 'perFileIgnores': None, 'line_length': 100, 'executable': 'ruff', 'config': None, 'enabled': True} 2023-02-24 12:50:41,958 IST - DEBUG - pylsp_ruff.ruff_lint - Calling ruff with args: ['--quiet', '--format=json', '--no-fix', '--force-exclude', '--stdin-filename', '/Users/rkshthrmsh/ws/sandbox/test.py', '--select=D', '--', '-'] on '/Users/rkshthrmsh/ws/sandbox/test.py' 2023-02-24 12:50:41,971 IST - DEBUG - pylsp.config.config - finish pylsp_lint --> [[]] [hook] 2023-02-24 12:50:41,972 IST - DEBUG - pylsp_jsonrpc.endpoint - Sending notification: textDocument/publishDiagnostics {'uri': 'file:///Users/rkshthrmsh/ws/sandbox/test.py', 'diagnostics': []} 2023-02-24 12:50:43,477 IST - DEBUG - pylsp_jsonrpc.endpoint -
Handling notification from client {'jsonrpc': '2.0', 'method': 'textDocument/didSave', 'params': {'text': '"""Implement CCSDS 123.0-B-2 Compression.\n\nCCSDS 123.0-B-2 specifies a standard for Low-Complexity Lossless and Near-Lossless Multispectral and\nHyper\n"""\n', 'textDocument': {'uri': 'file:///Users/rkshthrmsh/ws/sandbox/test.py'}}} 2023-02-24 12:50:43,480 IST - DEBUG - pylsp.config.config - pylsp_document_did_save [hook] config: <pylsp.config.config.Config object at 0x104791f50> workspace: <pylsp.workspace.Workspace object at 0x104830ad0> document: file:///Users/rkshthrmsh/ws/sandbox/test.py 2023-02-24 12:50:43,480 IST - DEBUG - pylsp.config.config - finish pylsp_document_did_save --> [] [hook] 2023-02-24 12:50:43,480 IST - DEBUG - pylsp_jsonrpc.endpoint - Handling request from client {'jsonrpc': '2.0', 'method': 'shutdown', 'params': None, 'id': 1} 2023-02-24 12:50:43,480 IST - DEBUG - pylsp_jsonrpc.endpoint - Got result from synchronous request handler: None 2023-02-24 12:50:43,481 IST - DEBUG - pylsp_jsonrpc.endpoint - Handling notification from client {'jsonrpc': '2.0', 'method': 'exit', 'params': None} ```
Based on the log files it looks like you provide python-lsp-server with `'line_length': 100` in your lsp configuration file for helix(the same place where you specified the "select" argument for ruff). Remove that line to get the default length of 88. Change made, but still not able to see the in-line warning. Log after change: ``` 2023-02-24 14:28:05,761 IST - DEBUG - pylsp.config.config - Got user config from PyCodeStyleConfig: {} 2023-02-24 14:28:05,762 IST - DEBUG - pylsp.config.config - Got project config from PyCodeStyleConfig: {} 2023-02-24 14:28:05,762 IST - DEBUG - pylsp.config.config - With configuration: {'rope': {'extensionModules': ['OpenGL', 'PIL', 'array', 'audioop', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'collections', 'datetime', 'errno', 'exceptions', 'gc', 'imageop', 'imp', 'itertools', 'marshal', 'math', 'matplotlib', 'mmap', 'mpmath', 'msvcrt', 'networkx', 'nose', 'nt', 'numpy', 'operator', 'os', 'os.path', 'pandas', 'parser', 'rgbimg', 'scipy', 'signal', 'skimage', 'sklearn', 'statsmodels', 'strop', 'sympy', 'sys', 'thread', 'time', 'wx', 'xxsubtype', 'zipimport', 'zlib']}, 'plugins': {'pycodestyle': {'enabled': False}, 'mccabe': {'enabled': False}, 'pydocstyle': {'enabled': False}, 'jedi_completion': {'fuzzy': True, 'include_class_objects': True, 'include_function_objects': True}, 'rope_rename': {'enabled': False}, 'ruff': {'exclude': None, 'ignore': None, 'enabled': True, 'lineLength': None, 'executable': 'ruff', 'select': ['D'], 'config': None, 'perFileIgnores': None}, 'rope_completion': {'enabled': True, 'eager': True}, 'autopep8': {'enabled': False}, 'preload': {'modules': ['OpenGL', 'PIL', 'array', 'audioop', 'binascii', 'cPickle', 'cStringIO', 'cmath', 'collections', 'datetime', 'errno', 'exceptions', 'gc', 'imageop', 'imp', 'itertools', 'marshal', 'math', 'matplotlib', 'mmap', 'mpmath', 'msvcrt', 'networkx', 'nose', 'nt', 'numpy', 'operator', 'os', 'os.path', 'pandas', 'parser', 'rgbimg', 'scipy', 'signal', 'skimage', 'sklearn', 'statsmodels', 'strop', 'sympy', 'sys', 'thread', 'time', 'wx', 'xxsubtype', 'zipimport', 'zlib']}, 'pylint': {'enabled': False, 'args': [], 'executable': None}, 'flake8': {'enabled': False}, 'pyflakes': {'enabled': False}, 'rope_autoimport': {'enabled': True, 'memory': True}, 'yapf': {'enabled': True}}} 2023-02-24 14:28:05,763 IST - DEBUG - pylsp_ruff.ruff_lint - Got ruff settings: {'exclude': None, 'ignore': None, 'enabled': True, 'lineLength': None, 'executable': 'ruff', 'select': ['D'], 'config': None, 'perFileIgnores': None} 2023-02-24 14:28:05,763 IST - DEBUG - pylsp_ruff.ruff_lint - Calling ruff with args: ['--quiet', '--format=json', '--no-fix', '--force-exclude', '--stdin-filename', '/Users/rkshthrmsh/ws/sandbox/python_study_group/test.py', '--select=D', '--', '-'] on '/Users/rkshthrmsh/ws/sandbox/python_study_group/test.py' 2023-02-24 14:28:05,776 IST - DEBUG - pylsp.config.config - finish pylsp_lint --> [[]] [hook] 2023-02-24 14:28:05,776 IST - DEBUG - pylsp_jsonrpc.endpoint - Sending notification: textDocument/publishDiagnostics {'uri': 'file:///Users/rkshthrmsh/ws/sandbox/python_study_group/test.py', 'diagnostics': []} 2023-02-24 14:28:07,565 IST - DEBUG - pylsp_jsonrpc.endpoint - Handling notification from client {'jsonrpc': '2.0', 'method': 'textDocument/didSave', 'params': {'text': '"""Implement CCSDS 123.0-B-2 Compression.\n\nCCSDS 123.0-B-2 specifies a standard for Low-Complexity Lossless and Near-Lossless Multispectral and\nHyper\n"""\n', 'textDocument': {'uri': 
'file:///Users/rkshthrmsh/ws/sandbox/python_study_group/test.py'}}} 2023-02-24 14:28:07,567 IST - DEBUG - pylsp.config.config - pylsp_document_did_save [hook] config: <pylsp.config.config.Config object at 0x104a75a10> workspace: <pylsp.workspace.Workspace object at 0x105540f90> document: file:///Users/rkshthrmsh/ws/sandbox/python_study_group/test.py ``` Hmm, there is nothing wrong with the call to ruff, so it has to be some config of ruff on your side that sets line-length to 100. Can you check for local project `pyproject.toml` files where the ruff line-length was specified? I think there is also a global ruff configuration in `~/.config/ruff/` There are no config files. Currently ruff is using all defaults. As you can see in the log, line length is set to none by the Helix config. I am wondering if the way in which line length diagnostics are handled is different compared to the rest. Ah, I see the issue: fixing `select` reduces the errors to check for to only those specified. What you need is `extend-select`, but this is currently not available as a config option to `python-lsp-ruff`. I will create a PR for this, in the meantime you can add the needed error letters to the `select` option (so E, F, ...) That was it. Thanks! Reopening to track issue in the PR
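To summarize the resolution, a small sketch of the distinction (assuming ruff's documented CLI semantics: `--select` replaces the default rule set, while `--extend-select` appends to it, so E501 keeps being checked):

```python
# Sketch of the select vs. extend-select distinction that resolved this issue.
def build_rule_args(select=None, extend_select=None):
    args = []
    if select:
        args.append("--select=" + ",".join(select))  # only these rules run
    if extend_select:
        args.append("--extend-select=" + ",".join(extend_select))  # defaults + these
    return args


print(build_rule_args(select=["D"]))         # ['--select=D'] -> E501 dropped
print(build_rule_args(extend_select=["D"]))  # ['--extend-select=D'] -> E501 kept
```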
2023-02-25T11:45:14
0.0
[]
[]
python-lsp/python-lsp-ruff
python-lsp__python-lsp-ruff-4
69dd6a384bab7a7088c3e521cf3ebc957946e2fe
diff --git a/pylsp_ruff/ruff_lint.py b/pylsp_ruff/ruff_lint.py index 7a1e44e..3f826a9 100644 --- a/pylsp_ruff/ruff_lint.py +++ b/pylsp_ruff/ruff_lint.py @@ -1,6 +1,5 @@ import json import logging -import sys from pathlib import PurePath from subprocess import PIPE, Popen, SubprocessError @@ -8,15 +7,6 @@ from pylsp._utils import find_parents from pylsp.workspace import Document, Workspace -# Use built-in tomllib for python>=3.11 -if sys.version_info >= (3, 11): - try: - import tomllib - except ImportError: - import tomli as tomllib -else: - import tomli as tomllib - log = logging.getLogger(__name__) UNNECESSITY_CODES = { @@ -187,6 +177,8 @@ def build_args(document: Document, options: dict) -> list: args = ["--quiet"] # Use the json formatting for easier evaluation args.extend(["--format=json"]) + # Do not attempt to fix -> returns file instead of diagnostics + args.extend(["--no-fix"]) # Convert per-file-ignores dict to right format per_file_ignores = options.pop("per-file-ignores") @@ -230,36 +222,38 @@ def load_config(workspace: Workspace, document: Document) -> dict: config = workspace._config _settings = config.plugin_settings("ruff", document_path=document.path) - # Default values are given by ruff - settings = { - "config": _settings.get("config", None), - "exclude": _settings.get("exclude", None), - "executable": _settings.get("executable", "ruff"), - "ignore": _settings.get("ignore", None), - "line-length": _settings.get("lineLength", None), - "per-file-ignores": _settings.get("perFileIgnores", None), - "select": _settings.get("select", None), - } - pyproject_file = find_parents( workspace.root_path, document.path, ["pyproject.toml"] ) - # Load config from pyproject file if it exists + # Check if pyproject is present, ignore user settings if toml exists if pyproject_file: - try: - log.debug(f"Found pyproject file: {str(pyproject_file[0])}") - - with open(str(pyproject_file[0]), "rb") as pyproject_toml: - toml_dict = tomllib.load(pyproject_toml) - - toml_config = toml_dict.get("tool", {}).get("ruff", {}) - - # Update settings with local project settings - for key, value in toml_config.items(): - settings[key] = value + log.debug( + f"Found pyproject file: {str(pyproject_file[0])}, " + + "skipping pylsp config." + ) + + # Leave config to pyproject.toml + settings = { + "config": None, + "exclude": None, + "executable": _settings.get("executable", "ruff"), + "ignore": None, + "line-length": None, + "per-file-ignores": None, + "select": None, + } - except (tomllib.TOMLDecodeError, OSError) as e: - log.warning(f"Failed loading {str(pyproject_file)}: {e}") + else: + # Default values are given by ruff + settings = { + "config": _settings.get("config", None), + "exclude": _settings.get("exclude", None), + "executable": _settings.get("executable", "ruff"), + "ignore": _settings.get("ignore", None), + "line-length": _settings.get("lineLength", None), + "per-file-ignores": _settings.get("perFileIgnores", None), + "select": _settings.get("select", None), + } return settings
Avoiding passing `pyproject.toml` options as command-line arguments See discussion in #1.
\cc @jhossbach > I think it should be fine to either: (1) let Ruff handle the pyproject.toml, and only pass CLI arguments that come directly from the LSP config (which could include config, allowing users to override); or (2) keep the current logic of letting the LSP find the pyproject.toml, pass it in as --config if the user hasn't provided a config override in the LSP settings. I'd rather go with (1), it feels cleaner to let `ruff` handle the project-wide settings. Shouldn't the configuration hierarchy be project config > pylsp config? Say e.g. you disagree with the default line length of 88, so instead you configure your lsp via neovim to be 100 by default. If you now work in a project with a `pyproject.toml`, you would expect the LSP to use the configuration there instead, or should it always take the user-configured line-length despite any pyproject.toml file present? We could add some logic to use the user-configured options only if the `pyproject.toml` file doesn't contain this option. Yeah I think you're right that project configuration should take precedence. I don't really have a great answer for how we should enable "defaults" in those other cases. 🤔 This should work: ```python for key, value in toml_config.items(): if key in [ "exclude", "ignore", "line-length", "per-file-ignores", "select", ]: settings[key] = value ``` It will skip any entries in the `pyproject.toml` that are not in the list. We can extend this list if we want, but additional arguments in the toml file are used automatically by ruff anyway so unless someone wants to have more configurability from within pylsp I don't think this is necessary.
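A minimal runnable sketch of the "project config wins" merge discussed here; `merge_settings` is a hypothetical helper for illustration, not this plugin's actual code path:

```python
# Hypothetical helper illustrating the precedence discussed above:
# project (pyproject.toml) config overrides editor-level defaults,
# but only for a whitelist of known keys.
def merge_settings(pylsp_settings, toml_config, allowed_keys):
    settings = dict(pylsp_settings)  # editor-level defaults first
    for key, value in toml_config.items():
        if key in allowed_keys:      # project config takes precedence
            settings[key] = value
    return settings


print(merge_settings({"line-length": 100}, {"line-length": 88}, {"line-length"}))
# {'line-length': 88} -- the pyproject.toml value overrides the editor default
```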
2022-12-11T19:50:47
0.0
[]
[]
dstathis/fastprocess
dstathis__fastprocess-4
6ebc8d75c7e495d971d3244d907c8a65c84864cd
diff --git a/README.md b/README.md index cb81240..e16123c 100644 --- a/README.md +++ b/README.md @@ -40,26 +40,20 @@ Here are the results of running ./benchmark/bench --------------------------------------------------- 10000 spawns with fork and exec... -real 0m2.181s -user 0m0.116s -sys 0m2.056s +real 0m2.157s +user 0m0.048s +sys 0m2.104s --------------------------------------------------- -10000 spawns with fastprocess (posix_spawn)... +10000 spawns with fastprocess... -real 0m2.587s -user 0m1.150s -sys 0m0.438s ---------------------------------------------------- -10000 spawns with old fastprocess (fork/exec)... - -real 0m3.668s -user 0m0.440s -sys 0m3.223s +real 0m2.598s +user 0m1.225s +sys 0m0.356s --------------------------------------------------- 10000 spawns with subprocess... -real 0m12.134s -user 0m7.768s -sys 0m9.108s +real 0m12.211s +user 0m7.832s +sys 0m9.072s --------------------------------------------------- ``` diff --git a/benchmark/bench b/benchmark/bench index 21629c7..c936c77 100755 --- a/benchmark/bench +++ b/benchmark/bench @@ -8,14 +8,10 @@ echo "10000 spawns with fork and exec..." time python ${scriptdir}/forktest.py printf %"$(tput cols)"s |tr " " "-" echo -echo "10000 spawns with fastprocess (posix_spawn)..." +echo "10000 spawns with fastprocess..." time python ${scriptdir}/fptest.py printf %"$(tput cols)"s |tr " " "-" echo -echo "10000 spawns with old fastprocess (fork/exec)..." -time python ${scriptdir}/old_fptest.py -printf %"$(tput cols)"s |tr " " "-" -echo echo "10000 spawns with subprocess..." time python ${scriptdir}/sptest.py printf %"$(tput cols)"s |tr " " "-" diff --git a/fastprocess/__init__.py b/fastprocess/__init__.py index b0b27a1..3ee40af 100644 --- a/fastprocess/__init__.py +++ b/fastprocess/__init__.py @@ -1,1 +1,1 @@ -from fastprocess.fastprocess import FastProcess, OldFastProcess +from fastprocess.fastprocess import FastProcess diff --git a/fastprocess/fastprocess.py b/fastprocess/fastprocess.py index 2b77d91..77c96db 100644 --- a/fastprocess/fastprocess.py +++ b/fastprocess/fastprocess.py @@ -26,28 +26,3 @@ def signal(self, sig): def wait(self): self.waited = True return os.WEXITSTATUS(os.waitpid(self.pid, 0)[1]) - - -class OldFastProcess: - def __init__(self, cmd, stdin=None, stdout=None, stderr=None): - self.pid = os.fork() - if self.pid == 0: - if stdin: - os.dup2(stdin.fileno(), 0) - if stdout: - os.dup2(stdout.fileno(), 1) - if stderr: - os.dup2(stderr.fileno(), 2) - os.execvp(cmd[0], cmd) - - def __del__(self): - os.kill(self.pid, signal.SIGTERM) - - def terminate(self): - os.kill(self.pid, signal.SIGTERM) - - def signal(self, sig): - os.kill(self.pid, sig) - - def wait(self): - return os.WEXITSTATUS(os.waitpid(self.pid, 0)[1])
wait will cause the process to disappear so only kill if we have not … fixes: #1
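A sketch of the fix the commit message describes, based on the `waited` flag and `__del__` visible in the diff: once `wait()` has reaped the child, its pid may be reused, so the destructor should not send SIGTERM again:

```python
# Sketch of the guarded kill implied by the commit message, using the
# attributes visible in the FastProcess diff above (not the full class).
import os
import signal


class FastProcessSketch:
    def __init__(self, pid):
        self.pid = pid
        self.waited = False

    def wait(self):
        self.waited = True  # the child is reaped; its pid may be reused
        return os.WEXITSTATUS(os.waitpid(self.pid, 0)[1])

    def __del__(self):
        if not self.waited:  # only signal a child we have not reaped
            os.kill(self.pid, signal.SIGTERM)
```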
2020-08-20T05:34:29
0.0
[]
[]
eEcoLiDAR/laserchicken
eEcoLiDAR__laserchicken-188
1656f261462f0e266fa11c4bdc43c1ed52ccc530
diff --git a/laserchicken/feature_extractor/eigenvals_feature_extractor.py b/laserchicken/feature_extractor/eigenvals_feature_extractor.py index 9b90e68..72e2a93 100644 --- a/laserchicken/feature_extractor/eigenvals_feature_extractor.py +++ b/laserchicken/feature_extractor/eigenvals_feature_extractor.py @@ -60,7 +60,7 @@ def _reorder_vectors(eigen_vectors, new_vector_indices): # and reshape and even repeat some indices to get the behavior I'm looking for. vects_t = eigen_vectors.transpose([0, 2, 1]) # Eigen vectors used as column vectors. Making row vectors here. flattened_vects_t = vects_t.reshape(-1, 3 * 3) # Flatten the (eigen) vector dimension - vect_indices = np.zeros_like(flattened_vects_t, dtype=np.int) + [0, 1, 2, 0, 1, 2, 0, 1, 2] # 0,1,2 for x,y,z + vect_indices = np.zeros_like(flattened_vects_t, dtype=int) + [0, 1, 2, 0, 1, 2, 0, 1, 2] # 0,1,2 for x,y,z vect_indices[:, :3] += new_vector_indices[:, 0:1] * 3 # Because x,y,z, indices have to be increased by 3. vect_indices[:, 3:6] += new_vector_indices[:, 1:2] * 3 vect_indices[:, 6:9] += new_vector_indices[:, 2:3] * 3 diff --git a/laserchicken/feature_extractor/feature_extraction.py b/laserchicken/feature_extractor/feature_extraction.py index 2aecc96..48444c5 100644 --- a/laserchicken/feature_extractor/feature_extraction.py +++ b/laserchicken/feature_extractor/feature_extraction.py @@ -56,7 +56,7 @@ def compute_features(env_point_cloud, neighborhoods, target_point_cloud, feature for feature_name in extended_features: target_point_cloud[point][feature_name] = {"type": 'float64', "data": np.zeros_like(target_point_cloud[point]['x']['data'], - dtype=np.float64)} + dtype=float)} if provenance in env_point_cloud: utils.add_metadata(target_point_cloud, sys.modules[__name__], @@ -87,7 +87,7 @@ def _get_point_cloud_size(target_point_cloud): def _calculate_number_of_chunks(chunk_size, n_targets): - return int(np.math.ceil(n_targets / chunk_size)) + return int(np.ceil(n_targets / chunk_size)) def _compute_features_for_chunk(features_to_do, env_point_cloud, current_neighborhoods, target_point_cloud, @@ -124,7 +124,7 @@ def _add_features_from_single_extractor(extractor, env_point_cloud, current_neig target_indices, volume) n_targets = len(target_indices) - feature_values = [np.empty(n_targets, dtype=np.float64) for _ in range(n_features)] + feature_values = [np.empty(n_targets, dtype=float) for _ in range(n_features)] if n_features > 1: for i in range(n_features): feature_values[i] = point_values[i] diff --git a/laserchicken/filter.py b/laserchicken/filter.py index 2e1e804..631b458 100644 --- a/laserchicken/filter.py +++ b/laserchicken/filter.py @@ -5,7 +5,7 @@ import shapefile import shapely import sys -from shapely.geometry import Point + from shapely.errors import WKTReadingError from shapely.wkt import loads from shapely.geometry import box @@ -97,11 +97,11 @@ def _check_valid_arguments(attribute, point_cloud): def select_polygon(point_cloud, polygon_string, read_from_file=False, return_mask=False): """ - Return the selection of the input point cloud that contains only points within a given polygon. + Return the selection of the input point cloud that contains only points within the given polygon(s). 
:param point_cloud: Input point cloud - :param polygon_string: Polygon, either defined in a WKT string or in a file (WKT and ESRI formats supported) - :param read_from_file: if true, polygon is expected to be the name of the file where the polygon is defined + :param polygon_string: polygon(s), either defined in a WKT string or in a file (WKT and ESRI formats supported) + :param read_from_file: if true, polygon is expected to be the name of the file where the geometry is defined :param return_mask: if true, return a mask of selected points, rather than point cloud :return: """ @@ -121,12 +121,12 @@ def select_polygon(point_cloud, polygon_string, read_from_file=False, return_mas elif isinstance(polygon,shapely.geometry.multipolygon.MultiPolygon): points_in = [] count=1 - for poly in polygon: - if not(count%200) or count==len(polygon): - print('Checking polygon {}/{}...'.format(count, len(polygon))) + for poly in polygon.geoms: + if not(count%200) or count==len(polygon.geoms): + print('Checking polygon {}/{}...'.format(count, len(polygon.geoms))) points_in.extend(_contains(point_cloud, poly)) count=count+1 - print('{} points found in {} polygons.'.format(len(points_in), len(polygon))) + print('{} points found in {} polygons.'.format(len(points_in), len(polygon.geoms))) else: raise ValueError('It is not a Polygon or Multipolygon.') diff --git a/requirements.txt b/requirements.txt index 5392a97..9d5a112 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,7 +4,7 @@ pytest mock plyfile python-dateutil -shapely +shapely>=2 PyShp pandas click
Shapely>=2 Iterating over multipolygons was deprecated and it fails from shapely 2.0. This can create issues in the filter module - this is fixed by this PR.
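A minimal illustration of the shapely 2.0 change this PR handles: multi-part geometries are no longer directly iterable or sized, so `.geoms` is used instead, as in the patched `filter.py`:

```python
# shapely>=2: MultiPolygon is no longer iterable (or sized) directly.
from shapely.geometry import MultiPolygon, Polygon

mp = MultiPolygon([
    Polygon([(0, 0), (1, 0), (1, 1)]),
    Polygon([(2, 2), (3, 2), (3, 3)]),
])

# for poly in mp: ...       # worked in shapely<2, raises TypeError in >=2
for poly in mp.geoms:       # shapely>=2 spelling, as used in the patch
    print(poly.area)
print(len(mp.geoms))        # replaces len(mp), which also raises in >=2
```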
2023-07-05T10:04:09
0.0
[]
[]
eEcoLiDAR/laserchicken
eEcoLiDAR__laserchicken-182
f5c4e772e4893f7242ed0b10aa17ac7e693a55a0
diff --git a/.coveragerc b/.coveragerc index 77ebcde..138c7a4 100644 --- a/.coveragerc +++ b/.coveragerc @@ -1,3 +1,4 @@ [run] +relative_files = True branch = True omit = */test_*.py diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml new file mode 100644 index 0000000..3740285 --- /dev/null +++ b/.github/workflows/build.yml @@ -0,0 +1,55 @@ +name: Build + +on: + push: + pull_request: + schedule: + - cron: '0 0 1 * *' + +jobs: + + build: + name: Build for ${{ matrix.python-version }} + runs-on: 'ubuntu-latest' + strategy: + fail-fast: false + matrix: + python-version: ['3.7', '3.8', '3.9'] + steps: + - uses: actions/checkout@v2 + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v2 + with: + python-version: ${{ matrix.python-version }} + - name: Python info + shell: bash -l {0} + run: | + which python + python --version + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install -r requirements.txt + - name: Build + shell: bash -l {0} + run: | + python setup.py build + - name: Test + shell: bash -l {0} + run: | + pip install pytest pytest-cov + pytest --cov=laserchicken --cov-report xml:coverage.xml + - name: Coveralls + uses: AndreMiras/coveralls-python-action@develop + with: + parallel: true + flag-name: Unit Test + + coveralls_finish: + needs: build + runs-on: ubuntu-latest + steps: + - name: Coveralls Finished + uses: AndreMiras/coveralls-python-action@develop + with: + parallel-finished: true diff --git a/.github/workflows/pypi.yml b/.github/workflows/pypi.yml new file mode 100644 index 0000000..28cdccc --- /dev/null +++ b/.github/workflows/pypi.yml @@ -0,0 +1,25 @@ +name: Publish + +on: + release: + types: [published] + +jobs: + publish: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - name: Set up Python + uses: actions/setup-python@v1 + with: + python-version: 3.7 + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install setuptools wheel twine + python setup.py bdist_wheel + - name: Publish package + uses: pypa/gh-action-pypi-publish@master + with: + user: __token__ + password: ${{ secrets.PYPI_TOKEN }} diff --git a/.travis.yml b/.travis.yml deleted file mode 100644 index d6716b4..0000000 --- a/.travis.yml +++ /dev/null @@ -1,38 +0,0 @@ -language: python -dist: xenial -sudo: false - -python: -- "3.6" -- "3.7" -- "3.8" - -before_install: -- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then - wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh; - else - wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; - fi -- bash miniconda.sh -b -p $HOME/miniconda -- export PATH="$HOME/miniconda/bin:$PATH" -- conda config --set always_yes yes -- conda update -q conda -- conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION pip -- source activate test-environment -- "pip install -q --upgrade 'pip'" -- "pip install -q 'coverage'" -- "pip install -q 'pytest-cov'" -- "pip install -q 'codacy-coverage'" -- "pip install -r requirements.txt" -- "pip install -q coveralls" - -install: -- "echo done" - -# command to run tests, e.g. 
python setup.py test -script: -- pytest --cov=laserchicken --cov-report xml:coverage.xml - -after_script: -- python-codacy-coverage -r coverage.xml -- coveralls diff --git a/.zenodo.json b/.zenodo.json index 923834f..064d298 100644 --- a/.zenodo.json +++ b/.zenodo.json @@ -59,7 +59,7 @@ "license": { "id": "Apache-2.0" }, - "publication_date": "2022-05-20", + "publication_date": "2022-09-05", "title": "Laserchicken: toolkit for ALS point clouds", - "version": "0.5.0" + "version": "0.6.0" } diff --git a/CHANGELOG.md b/CHANGELOG.md index 484596e..34a3e68 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -6,6 +6,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## Unreleased +## 0.6.0 - 2022-09-05 +## Changed: +- continuous integration moved from travis to GH actions +- naming of band ratio's modified (symbol "<" dropped from label) + +## Added +- releases published on PyPI using GH actions +- checklists included + +## Fixed +- Fixed import of deprecated `scipy.stats.stats` + ## 0.5.0 - 2022-05-20 ## Changed: - update documentation with table of features diff --git a/CITATION.cff b/CITATION.cff index 0ba3d71..f048fb4 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -52,7 +52,7 @@ authors: family-names: Koma given-names: Zsófia cff-version: "1.0.3" -date-released: 2022-05-20 +date-released: 2022-09-05 doi: "10.5281/zenodo.1219422" keywords: - "airborne laser scanning" @@ -62,5 +62,5 @@ keywords: license: "Apache-2.0" message: "If you use this software, please cite it using these metadata." title: "Laserchicken: toolkit for ALS point clouds" -version: "0.5.0" +version: "0.6.0" ... diff --git a/README.md b/README.md index 957568b..7f37117 100644 --- a/README.md +++ b/README.md @@ -3,17 +3,18 @@ Please cite the software if you are using it in your scientific publication. <img src="https://raw.githubusercontent.com/eEcoLiDAR/laserchicken/master/laserchicken_logo.png" width="500"/> </p> -[![Build Status](https://travis-ci.org/eEcoLiDAR/laserchicken.svg?branch=master)](https://travis-ci.org/eEcoLiDAR/laserchicken) +[![Build Status](https://github.com/eEcoLiDAR/laserchicken/workflows/Build/badge.svg)](https://github.com/eEcoLiDAR/laserchicken/actions) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/6e3836750fe14f34ba85e26956e8ef10)](https://www.codacy.com/app/c-meijer/eEcoLiDAR?utm_source=www.github.com&amp;utm_medium=referral&amp;utm_content=eEcoLiDAR/eEcoLiDAR&amp;utm_campaign=Badge_Grade) -[![Coverage Status](https://coveralls.io/repos/github/eEcoLiDAR/eEcoLiDAR/badge.svg)](https://coveralls.io/github/eEcoLiDAR/eEcoLiDAR) +[![Coverage Status](https://coveralls.io/repos/github/eEcoLiDAR/laserchicken/badge.svg)](https://coveralls.io/github/eEcoLiDAR/laserchicken) [![DOI](https://zenodo.org/badge/95649056.svg)](https://zenodo.org/badge/latestdoi/95649056) [![Documentation Status](https://readthedocs.org/projects/laserchicken/badge/?version=latest)](https://laserchicken.readthedocs.io/en/latest/) +[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4524/badge)](https://bestpractices.coreinfrastructure.org/projects/4524) Toolkit for handling point clouds created using airborne laser scanning (ALS). Find neighboring points in your point cloud and describe them as feature values. Read our [user manual](https://laserchicken.readthedocs.io/) and our (very modest) [tutorial](https://github.com/eEcoLiDAR/laserchicken/blob/master/tutorial.ipynb). 
# Installation Prerequisites: -- Python 3.6 or higher +- Python 3.7 or higher - pip ``` pip install laserchicken @@ -29,7 +30,7 @@ pip install laserchicken * Edit Changelog (based on commits in https://github.com/eecolidar/laserchicken/compare/v0.3.2...master) * Test if package can be installed with pip (`pip install .`) * Create Github release -* Upload to pypi: +* Upload to pypi (now implemented via GitHub Actions): ```python setup.py sdist bdist_wheel``` ```python -m twine upload --repository-url https://upload.pypi.org/legacy/ dist/*``` (or ```python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*``` to test first) diff --git a/laserchicken/_version.txt b/laserchicken/_version.txt index 8f0916f..a918a2a 100644 --- a/laserchicken/_version.txt +++ b/laserchicken/_version.txt @@ -1,1 +1,1 @@ -0.5.0 +0.6.0 diff --git a/laserchicken/feature_extractor/band_ratio_feature_extractor.py b/laserchicken/feature_extractor/band_ratio_feature_extractor.py index 318dd7c..3bf65eb 100644 --- a/laserchicken/feature_extractor/band_ratio_feature_extractor.py +++ b/laserchicken/feature_extractor/band_ratio_feature_extractor.py @@ -47,10 +47,10 @@ def provides(self): """ name = 'band_ratio_' if self.lower_limit is not None: - name += str(self.lower_limit) + '<' + name += str(self.lower_limit) + '_' name += self.data_key if self.upper_limit is not None: - name += '<' + str(self.upper_limit) + name += '_' + str(self.upper_limit) return [name] def extract(self, point_cloud, neighborhoods, target_point_cloud, target_index, volume_description): diff --git a/laserchicken/feature_extractor/kurtosis_feature_extractor.py b/laserchicken/feature_extractor/kurtosis_feature_extractor.py index 34d8338..fab21bd 100644 --- a/laserchicken/feature_extractor/kurtosis_feature_extractor.py +++ b/laserchicken/feature_extractor/kurtosis_feature_extractor.py @@ -1,5 +1,5 @@ import numpy as np -import scipy.stats.stats as stat +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import FeatureExtractor from laserchicken.keys import point @@ -23,7 +23,7 @@ def extract(self, point_cloud, neighborhoods, target_point_cloud, target_indices def _extract_one(self, sourcepc, neighborhood): if neighborhood: z = sourcepc[point][self.data_key]['data'][neighborhood] - kurtosis_z = stat.kurtosis(z) + kurtosis_z = stats.kurtosis(z) else: kurtosis_z = np.NaN return kurtosis_z diff --git a/laserchicken/feature_extractor/percentile_feature_extractor.py b/laserchicken/feature_extractor/percentile_feature_extractor.py index beb88fb..3aa4dd3 100644 --- a/laserchicken/feature_extractor/percentile_feature_extractor.py +++ b/laserchicken/feature_extractor/percentile_feature_extractor.py @@ -1,4 +1,4 @@ -import scipy.stats.stats as stats +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import FeatureExtractor from laserchicken.keys import point diff --git a/laserchicken/feature_extractor/skew_feature_extractor.py b/laserchicken/feature_extractor/skew_feature_extractor.py index d10ca26..1b49f46 100644 --- a/laserchicken/feature_extractor/skew_feature_extractor.py +++ b/laserchicken/feature_extractor/skew_feature_extractor.py @@ -1,5 +1,5 @@ import numpy as np -import scipy.stats.stats as stat +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import FeatureExtractor from laserchicken.keys import point @@ -23,7 +23,7 @@ def extract(self, point_cloud, neighborhoods, target_point_cloud, target_indices def _extract_one(self, 
point_cloud, neighborhood): if neighborhood: source_data = point_cloud[point][self.data_key]['data'][neighborhood] - skew = stat.skew(source_data) + skew = stats.skew(source_data) else: skew = np.NaN return skew
Rename band ratio feature Closes #180
2022-09-05T12:26:12
0.0
[]
[]
eEcoLiDAR/laserchicken
eEcoLiDAR__laserchicken-181
a393326c8c78b0ada048a09328c0587d3ff44505
diff --git a/laserchicken/feature_extractor/band_ratio_feature_extractor.py b/laserchicken/feature_extractor/band_ratio_feature_extractor.py index 318dd7c..3bf65eb 100644 --- a/laserchicken/feature_extractor/band_ratio_feature_extractor.py +++ b/laserchicken/feature_extractor/band_ratio_feature_extractor.py @@ -47,10 +47,10 @@ def provides(self): """ name = 'band_ratio_' if self.lower_limit is not None: - name += str(self.lower_limit) + '<' + name += str(self.lower_limit) + '_' name += self.data_key if self.upper_limit is not None: - name += '<' + str(self.upper_limit) + name += '_' + str(self.upper_limit) return [name] def extract(self, point_cloud, neighborhoods, target_point_cloud, target_index, volume_description):
Rename band ratio Drop the "<" symbols from the band ratio features. This significantly simplifies the handling of files named after these features on Windows (which does not support such characters in the file name).
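A sketch of the renaming, following the `provides` logic in `band_ratio_feature_extractor.py` as changed by this patch:

```python
# Feature-name construction before/after this PR (mirrors `provides`).
def band_ratio_name(data_key, lower=None, upper=None):
    name = "band_ratio_"
    if lower is not None:
        name += str(lower) + "_"   # was: str(lower) + "<"
    name += data_key
    if upper is not None:
        name += "_" + str(upper)   # was: "<" + str(upper)
    return name


print(band_ratio_name("z", lower=1, upper=2))
# band_ratio_1_z_2  (previously band_ratio_1<z<2, invalid in Windows file names)
```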
2022-09-05T11:31:38
0.0
[]
[]
eEcoLiDAR/laserchicken
eEcoLiDAR__laserchicken-179
fcb96a74e8c5cfca2641cdfac5917690dd46142f
diff --git a/.coveragerc b/.coveragerc index 77ebcde..138c7a4 100644 --- a/.coveragerc +++ b/.coveragerc @@ -1,3 +1,4 @@ [run] +relative_files = True branch = True omit = */test_*.py diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml new file mode 100644 index 0000000..3740285 --- /dev/null +++ b/.github/workflows/build.yml @@ -0,0 +1,55 @@ +name: Build + +on: + push: + pull_request: + schedule: + - cron: '0 0 1 * *' + +jobs: + + build: + name: Build for ${{ matrix.python-version }} + runs-on: 'ubuntu-latest' + strategy: + fail-fast: false + matrix: + python-version: ['3.7', '3.8', '3.9'] + steps: + - uses: actions/checkout@v2 + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v2 + with: + python-version: ${{ matrix.python-version }} + - name: Python info + shell: bash -l {0} + run: | + which python + python --version + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install -r requirements.txt + - name: Build + shell: bash -l {0} + run: | + python setup.py build + - name: Test + shell: bash -l {0} + run: | + pip install pytest pytest-cov + pytest --cov=laserchicken --cov-report xml:coverage.xml + - name: Coveralls + uses: AndreMiras/coveralls-python-action@develop + with: + parallel: true + flag-name: Unit Test + + coveralls_finish: + needs: build + runs-on: ubuntu-latest + steps: + - name: Coveralls Finished + uses: AndreMiras/coveralls-python-action@develop + with: + parallel-finished: true diff --git a/.travis.yml b/.travis.yml deleted file mode 100644 index d6716b4..0000000 --- a/.travis.yml +++ /dev/null @@ -1,38 +0,0 @@ -language: python -dist: xenial -sudo: false - -python: -- "3.6" -- "3.7" -- "3.8" - -before_install: -- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then - wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh; - else - wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; - fi -- bash miniconda.sh -b -p $HOME/miniconda -- export PATH="$HOME/miniconda/bin:$PATH" -- conda config --set always_yes yes -- conda update -q conda -- conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION pip -- source activate test-environment -- "pip install -q --upgrade 'pip'" -- "pip install -q 'coverage'" -- "pip install -q 'pytest-cov'" -- "pip install -q 'codacy-coverage'" -- "pip install -r requirements.txt" -- "pip install -q coveralls" - -install: -- "echo done" - -# command to run tests, e.g. python setup.py test -script: -- pytest --cov=laserchicken --cov-report xml:coverage.xml - -after_script: -- python-codacy-coverage -r coverage.xml -- coveralls diff --git a/README.md b/README.md index 957568b..b45cc43 100644 --- a/README.md +++ b/README.md @@ -3,17 +3,18 @@ Please cite the software if you are using it in your scientific publication. 
<img src="https://raw.githubusercontent.com/eEcoLiDAR/laserchicken/master/laserchicken_logo.png" width="500"/> </p> -[![Build Status](https://travis-ci.org/eEcoLiDAR/laserchicken.svg?branch=master)](https://travis-ci.org/eEcoLiDAR/laserchicken) +[![Build Status](https://github.com/eEcoLiDAR/laserchicken/workflows/Build/badge.svg)](https://github.com/eEcoLiDAR/laserchicken/actions) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/6e3836750fe14f34ba85e26956e8ef10)](https://www.codacy.com/app/c-meijer/eEcoLiDAR?utm_source=www.github.com&amp;utm_medium=referral&amp;utm_content=eEcoLiDAR/eEcoLiDAR&amp;utm_campaign=Badge_Grade) -[![Coverage Status](https://coveralls.io/repos/github/eEcoLiDAR/eEcoLiDAR/badge.svg)](https://coveralls.io/github/eEcoLiDAR/eEcoLiDAR) +[![Coverage Status](https://coveralls.io/repos/github/eEcoLiDAR/laserchicken/badge.svg)](https://coveralls.io/github/eEcoLiDAR/laserchicken) [![DOI](https://zenodo.org/badge/95649056.svg)](https://zenodo.org/badge/latestdoi/95649056) [![Documentation Status](https://readthedocs.org/projects/laserchicken/badge/?version=latest)](https://laserchicken.readthedocs.io/en/latest/) +[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4524/badge)](https://bestpractices.coreinfrastructure.org/projects/4524) Toolkit for handling point clouds created using airborne laser scanning (ALS). Find neighboring points in your point cloud and describe them as feature values. Read our [user manual](https://laserchicken.readthedocs.io/) and our (very modest) [tutorial](https://github.com/eEcoLiDAR/laserchicken/blob/master/tutorial.ipynb). # Installation Prerequisites: -- Python 3.6 or higher +- Python 3.7 or higher - pip ``` pip install laserchicken diff --git a/laserchicken/feature_extractor/kurtosis_feature_extractor.py b/laserchicken/feature_extractor/kurtosis_feature_extractor.py index 34d8338..fab21bd 100644 --- a/laserchicken/feature_extractor/kurtosis_feature_extractor.py +++ b/laserchicken/feature_extractor/kurtosis_feature_extractor.py @@ -1,5 +1,5 @@ import numpy as np -import scipy.stats.stats as stat +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import FeatureExtractor from laserchicken.keys import point @@ -23,7 +23,7 @@ def extract(self, point_cloud, neighborhoods, target_point_cloud, target_indices def _extract_one(self, sourcepc, neighborhood): if neighborhood: z = sourcepc[point][self.data_key]['data'][neighborhood] - kurtosis_z = stat.kurtosis(z) + kurtosis_z = stats.kurtosis(z) else: kurtosis_z = np.NaN return kurtosis_z diff --git a/laserchicken/feature_extractor/percentile_feature_extractor.py b/laserchicken/feature_extractor/percentile_feature_extractor.py index beb88fb..3aa4dd3 100644 --- a/laserchicken/feature_extractor/percentile_feature_extractor.py +++ b/laserchicken/feature_extractor/percentile_feature_extractor.py @@ -1,4 +1,4 @@ -import scipy.stats.stats as stats +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import FeatureExtractor from laserchicken.keys import point diff --git a/laserchicken/feature_extractor/skew_feature_extractor.py b/laserchicken/feature_extractor/skew_feature_extractor.py index d10ca26..1b49f46 100644 --- a/laserchicken/feature_extractor/skew_feature_extractor.py +++ b/laserchicken/feature_extractor/skew_feature_extractor.py @@ -1,5 +1,5 @@ import numpy as np -import scipy.stats.stats as stat +import scipy.stats as stats from laserchicken.feature_extractor.base_feature_extractor import 
FeatureExtractor from laserchicken.keys import point @@ -23,7 +23,7 @@ def extract(self, point_cloud, neighborhoods, target_point_cloud, target_indices def _extract_one(self, point_cloud, neighborhood): if neighborhood: source_data = point_cloud[point][self.data_key]['data'][neighborhood] - skew = stat.skew(source_data) + skew = stats.skew(source_data) else: skew = np.NaN return skew
Travis CI -> GitHub Actions We should probably move the CI workflow to GitHub Actions. I can take care of this if you have no objections @cwmeijer!
2022-09-01T13:30:22
0.0
[]
[]
eEcoLiDAR/laserchicken
eEcoLiDAR__laserchicken-177
961d970e636e6eac3dfb969c36df9a5131ffd3d1
diff --git a/laserchicken/io/las_handler.py b/laserchicken/io/las_handler.py index 031c07b..6f276dd 100644 --- a/laserchicken/io/las_handler.py +++ b/laserchicken/io/las_handler.py @@ -1,5 +1,6 @@ """ IO Handler for LAS (and compressed LAZ) file format """ -import pylas +import laspy +import numpy as np from laserchicken import keys from laserchicken.io.base_io_handler import IOHandler @@ -26,31 +27,34 @@ def read(self, attributes=DEFAULT_LAS_ATTRIBUTES): :param attributes: list of attributes to read ('all' for all attributes in file) :return: point cloud data structure """ - file = pylas.read(self.path) + file = laspy.read(self.path) + dtype = file.header.point_format.dtype() attributes_available = [el if el not in ['X', 'Y', 'Z'] else el.lower() - for el in file.points.dtype.names] + for el in dtype.fields.keys()] attributes = select_valid_attributes(attributes_available, attributes) points = {} for name in attributes: if hasattr(file, name): - data = getattr(file, name) + file_data = getattr(file, name) + data = np.zeros_like(file_data) + data[:] = file_data points[name] = _get_attribute(data, data.dtype.name) return {keys.point: points} - def write(self, point_cloud, attributes='all', file_version='1.2', point_format_id=3): + def write(self, point_cloud, attributes='all', file_version='1.2', point_format=3): """ Write point cloud to a LAS(LAZ) file. :param point_cloud: :param attributes: list of attributes to write ('all' for all attributes in point_cloud) :param file_version: - :param point_format_id: + :param point_format: :return: """ - file = pylas.create(point_format_id=point_format_id, + file = laspy.create(point_format=point_format, file_version=file_version) points = point_cloud[keys.point] @@ -58,18 +62,25 @@ def write(self, point_cloud, attributes='all', file_version='1.2', point_format_ # NOTE: adding extra dims and assignment should be done in two steps, # some fields (e.g. raw_classification) are otherwise overwritten + dtype = file.header.point_format.dtype() for attribute in attributes: data, type = _get_data_and_type(points[attribute]) type_short = convert_to_short_type(type) if attribute not in 'xyz': # x,y,z are not there but file methods can be used to convert coords to int4 - if attribute not in file.points.dtype.names: - file.add_extra_dim(name=attribute, type=type_short) + if attribute not in dtype.fields: + param = laspy.ExtraBytesParams(name=attribute, type=type) + file.add_extra_dim(param) file_type_short = convert_to_short_type(getattr(file, attribute).dtype.name) if not file_type_short == type_short: raise TypeError('Data type in file does not match the one in point cloud: ' 'for {}, {} vs {}'.format(attribute, file_type_short, type_short)) + for dim in 'xyz': + data, _ = _get_data_and_type(points[dim]) + setattr(file.header, '{}_offset'.format(dim), data.min()) + setattr(file.header, '{}_scale'.format(dim), 0.001) + for attribute in attributes: data, _ = _get_data_and_type(points[attribute]) if data.size == 0: @@ -81,7 +92,7 @@ def write(self, point_cloud, attributes='all', file_version='1.2', point_format_ file.write(self.path) except ValueError as err: raise ValueError('Error in writing LAS file (file_version {}, point_format_id {}). 
' - 'pylas error below:\n{}'.format(file_version, point_format_id, err)) + 'laspy error below:\n{}'.format(file_version, point_format, err)) def _get_attribute(data, data_type): diff --git a/laserchicken/tools/cli.py b/laserchicken/tools/cli.py index 76318bf..2db1d0c 100644 --- a/laserchicken/tools/cli.py +++ b/laserchicken/tools/cli.py @@ -27,7 +27,7 @@ def main(input_file, output_file): """ [email protected]() [email protected]_callback() def process_pipeline(processors, input_file, output_file): init(autoreset=True) try: diff --git a/requirements.txt b/requirements.txt index b82f80e..5392a97 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,5 +1,4 @@ -lazperf -pylas +laspy[lazrs] scipy>=0.11 pytest mock
pylas -> laspy Drop pylas (deprecated) with lazperf backend in favour of laspy (with lazrs backend). Closes #175.
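A minimal before/after for the migration, assuming laspy>=2 with the lazrs backend (`laspy[lazrs]`, as pinned in `requirements.txt`) and a `points.laz` file on disk:

```python
# laspy>=2 reading pattern used by the patched las_handler.py.
import laspy

las = laspy.read("points.laz")             # replaces pylas.read(...)
dtype = las.header.point_format.dtype()    # attribute names now come from
print(list(dtype.fields.keys()))           # the point-format dtype, as above
```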
2022-05-20T09:49:31
0.0
[]
[]
LorenFrankLab/spyglass
LorenFrankLab__spyglass-903
a9eb28afcb2fbb3278df39f9117d77e39c6d732f
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index bbcbbe7d6..a60543cfe 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -8,13 +8,15 @@ assignees: '' --- **Is your feature request related to a problem? Please describe.** -A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] +A clear and concise description of what the problem is. +For example, I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. +A clear and concise description of any alternative solutions or features you've +considered. **Additional context** Add any other context or screenshots about the feature request here. diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 7a9d62306..87c66399c 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -5,15 +5,33 @@ include relevant motivation and context. Please list issues fixed or closed by This PR. - Fixes #000: How this PR fixes the issue - - `path/file.py`: Description of the change - - `path/file.py`: Description of the change + - `path/file.py`: Description of the change + - `path/file.py`: Description of the change - Fixes #000: How this PR fixes the issue - - `path/file.py`: Description of the change - - `path/file.py`: Description of the change + - `path/file.py`: Description of the change + - `path/file.py`: Description of the change # Checklist: +<!-- +For items below with choices, select one (e.g., yes, no) +and check the box to indicate that a choice was selected. +If not applicable, check the box and add `N/A` after the box. +For example: +- [X] This has a choice: no +- [X] N/A. If choice, other item. +This comment may be deleted on submission. + +Alter notes example: +```python +from spyglass.example import Table +Table.alter() # Comment regarding the change +``` +--> + - [ ] This PR should be accompanied by a release: (yes/no/unsure) -- [ ] (If release) I have updated the `CITATION.cff` -- [ ] I have updated the `CHANGELOG.md` +- [ ] If release, I have updated the `CITATION.cff` +- [ ] This PR makes edits to table definitions: (yes/no) +- [ ] If table edits, I have included an `alter` snippet for release notes. +- [ ] I have updated the `CHANGELOG.md` with PR number and description. - [ ] I have added/edited docs/notebooks to reflect the changes diff --git a/CHANGELOG.md b/CHANGELOG.md index b9b4898d1..deaf94bf4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,12 +2,20 @@ ## [0.5.2] (Unreleased) +### Release Notes + +<!-- Running draft to be removed immediately prior to release. --> + ### Infrastructure - Refactor `TableChain` to include `_searched` attribute. #867 - Fix errors in config import #882 - Save current spyglass version in analysis nwb files to aid diagnosis #897 - Add pynapple support #898 +- Update PR template checklist to include db changes. #903 +- Avoid permission check on personnel tables. #903 +- Add documentation for `SpyglassMixin`. #903 +- Add helper to identify merge table by definition. 
#903 - Prioritize datajoint filepath entry for defining abs_path of analysis nwbfile #918 - Fix potential duplicate entries in Merge part tables #922 @@ -36,7 +44,7 @@ - Fixes to `_convert_mp4` #834 - Replace deprecated calls to `yaml.safe_load()` #834 - Spikesorting: - - Increase`spikeinterface` version to >=0.99.1, <0.100 #852 + - Increase`spikeinterface` version to >=0.99.1, \<0.100 #852 - Bug fix in single artifact interval edge case #859 - Bug fix in FigURL #871 - LFP @@ -213,3 +221,5 @@ [0.4.2]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.4.2 [0.4.3]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.4.3 [0.5.0]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.5.0 +[0.5.1]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.5.1 +[0.5.2]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.5.2 diff --git a/docs/src/contribute.md b/docs/src/contribute.md index 4d41569c0..d34698a39 100644 --- a/docs/src/contribute.md +++ b/docs/src/contribute.md @@ -45,7 +45,7 @@ By convention, an individual pipeline has one or more the following table types: - Parameters table - Selection table - Data table -- Merge Table (see also [doc](./misc/merge_tables.md) +- Merge Table (see also [doc](./misc/merge_tables.md)) ### Common/Multi-pipeline @@ -226,16 +226,8 @@ faulty connection. - During development, we suggest using a Docker container. See [example](./notebooks/00_Setup.ipynb). -- DataJoint is unable to set delete permissions on a per-table basis. If a user - is able to delete entries in a given table, she can delete entries in any - table in the schema. The `SpikeSorting` table extends the built-in `delete` - method to check if the username matches a list of allowed users when - `delete` is called. Issues #226 and #586 track the progress of generalizing - this feature. - `numpy` style docstrings will be interpreted by API docs. To check for compliance, monitor the std out when building docs (see `docs/README.md`) -- `fetch_nwb` is currently reperated across many tables. For progress on a fix, - follow issue #530 ## Making a release diff --git a/docs/src/misc/merge_tables.md b/docs/src/misc/merge_tables.md index 1cd4b000b..bd67de4c2 100644 --- a/docs/src/misc/merge_tables.md +++ b/docs/src/misc/merge_tables.md @@ -10,23 +10,6 @@ and related discussions [here](https://github.com/datajoint/datajoint-python/issues/151) and [here](https://github.com/LorenFrankLab/spyglass/issues/469). -**Note:** Deleting entries upstream of Merge Tables will throw errors related to -deleting a part entry before the master. To circumvent this, you can add -`force_parts=True` to the -[`delete` function](https://datajoint.com/docs/core/datajoint-python/0.14/api/datajoint/__init__/#datajoint.table.Table.delete) -call, but this will leave and orphaned primary key in the master. Instead, use -`(YourTable & restriction).delete_downstream_merge()` to delete master/part -pairs. If errors persist, identify and import the offending part table and rerun -`delete_downstream_merge` with `reload_cache=True`. This process will be faster -for subsequent calls if you reassign the your table after importing. 
- -```python -from spyglass.common import Nwbfile - -nwbfile = Nwbfile() -(nwbfile & "nwb_file_name LIKE 'Name%'").delete_downstream_merge() -``` - ## What A Merge Table is fundamentally a master table with one part for each divergent diff --git a/docs/src/misc/mixin.md b/docs/src/misc/mixin.md new file mode 100644 index 000000000..747a12f9f --- /dev/null +++ b/docs/src/misc/mixin.md @@ -0,0 +1,123 @@ +# Spyglass Mixin + +The Spyglass Mixin provides a way to centralize all Spyglass-specific +functionalities that have been added to DataJoint tables. This includes... + +- Fetching NWB files +- Delete functionality, including permission checks and part/master pairs +- Export logging. See [export doc](export.md) for more information. + +To add this functionality to your own tables, simply inherit from the mixin: + +```python +import datajoint as dj +from spyglass.utils import SpyglassMixin + +schema = dj.schema('my_schema') + +@schema +class MyOldTable(dj.Manual): + pass + +@schema +class MyNewTable(SpyglassMixin, dj.Manual):) + pass +``` + +**NOTE**: The mixin must be the first class inherited from in order to override +default DataJoint functions. + +## Fetching NWB Files + +Many tables in Spyglass inheris from central tables that store records of NWB +files. Rather than adding a helper function to each table, the mixin provides a +single function to access these files from any table. + +```python +from spyglass.example import AnyTable + +(AnyTable & my_restriction).fetch_nwb() +``` + +This function will look at the table definition to determine if the raw file +should be fetched from `Nwbfile` or an analysis file should be fetched from +`AnalysisNwbfile`. If neither is foreign-key-referenced, the function will refer +to a `_nwb_table` attribute. + +## Delete Functionality + +The mixin overrides the default `delete` function to provide two additional +features. + +### Permission Checks + +By default, DataJoint is unable to set delete permissions on a per-table basis. +If a user is able to delete entries in a given table, she can delete entries in +any table in the schema. + +The mixin relies on the `Session.Experimenter` and `LabTeams` tables to ... + +1. Check the session and experimenter associated with the attempted deletion. +2. Check the lab teams associated with the session experimenter and the user. + +If the user shares a lab team with the session experimenter, the deletion is +permitted. + +This is not secure system and is not a replacement for database backups (see +[database management](./database_management.md)). A user could readily +curcumvent the default permission checks by adding themselves to the relevant +team or removing the mixin from the class declaration. However, it provides a +reasonable level of security for the average user. + +### Master/Part Pairs + +By default, DataJoint has protections in place to prevent deletion of a part +entry without deleting the corresponding master. This is useful for enforcing +the custom of adding/removing all parts of a master at once and avoids orphaned +masters, or null entry masters without matching data. + +For [Merge tables](./merge_tables.md), this is a significant problem. If a user +wants to delete all entries associated with a given session, she must find all +Merge entries and delete them in the correct order. The mixin provides a +function, `delete_downstream_merge`, to handle this, which is run by default +when calling `delete`. 
+ +`delete_downstream_merge`, also aliased as `ddm`, identifies all Merge tables +downsteam of where it is called. If `dry_run=True`, it will return a list of +entries that would be deleted, otherwise it will delete them. + +Importantly, `delete_downstream_merge` cannot properly interact with tables that +have not been imported into the current namespace. If you are having trouble +with part deletion errors, import the offending table and rerun the function +with `reload_cache=True`. + +```python +from spyglass.common import Nwbfile + +restricted_nwbfile = Nwbfile() & "nwb_file_name LIKE 'Name%'" +restricted_nwbfile.delete_downstream_merge(dry_run=False) +# DataJointError("Attempt to delete part table MyMerge.Part before ... + +from spyglass.example import MyMerge + +restricted_nwbfile.delete_downstream_merge(reload_cache=True, dry_run=False) +``` + +Because each table keeps a cache of downsteam merge tables, it is important to +reload the cache if the table has been imported after the cache was created. +Speed gains can also be achieved by avoiding re-instancing the table each time. + +```python +# Slow +from spyglass.common import Nwbfile + +(Nwbfile() & "nwb_file_name LIKE 'Name%'").ddm(dry_run=False) +(Nwbfile() & "nwb_file_name LIKE 'Other%'").ddm(dry_run=False) + +# Faster +from spyglass.common import Nwbfile + +nwbfile = Nwbfile() +(nwbfile & "nwb_file_name LIKE 'Name%'").ddm(dry_run=False) +(nwbfile & "nwb_file_name LIKE 'Other%'").ddm(dry_run=False) +``` diff --git a/src/spyglass/position/v1/position_trodes_position.py b/src/spyglass/position/v1/position_trodes_position.py index b2e2157d0..80f1cb700 100644 --- a/src/spyglass/position/v1/position_trodes_position.py +++ b/src/spyglass/position/v1/position_trodes_position.py @@ -288,8 +288,13 @@ def make(self, key): f"{current_dir.as_posix()}/{nwb_base_filename}_" f"{epoch:02d}_{key['trodes_pos_params_name']}.mp4" ) + red_cols = ( + ["xloc", "yloc"] + if "xloc" in raw_position_df.columns + else ["xloc1", "yloc1"] + ) centroids = { - "red": np.asarray(raw_position_df[["xloc", "yloc"]]), + "red": np.asarray(raw_position_df[red_cols]), "green": np.asarray(raw_position_df[["xloc2", "yloc2"]]), } position_mean = np.asarray( @@ -330,6 +335,7 @@ def convert_to_pixels(data, frame_size, cm_to_pixels=1.0): @staticmethod def fill_nan(variable, video_time, variable_time): + # TODO: Reduce duplication across dlc_utils and common_position video_ind = np.digitize(variable_time, video_time[1:]) n_video_time = len(video_time) @@ -338,6 +344,7 @@ def fill_nan(variable, video_time, variable_time): filled_variable = np.full((n_video_time, n_variable_dims), np.nan) except IndexError: filled_variable = np.full((n_video_time,), np.nan) + filled_variable[video_ind] = variable return filled_variable @@ -450,4 +457,7 @@ def make_video( video.release() out.release() - cv2.destroyAllWindows() + try: + cv2.destroyAllWindows() + except cv2.error: # if cv is already closed or does not have func + pass diff --git a/src/spyglass/utils/dj_chains.py b/src/spyglass/utils/dj_chains.py index 76ffeb107..fe9cebc02 100644 --- a/src/spyglass/utils/dj_chains.py +++ b/src/spyglass/utils/dj_chains.py @@ -282,7 +282,10 @@ def find_path(self, directed=True) -> OrderedDict: if table.isnumeric(): # get proj() attribute map for alias node if not prev_table: raise ValueError("Alias node found without prev table.") - attr_map = self.graph[table][prev_table]["attr_map"] + try: + attr_map = self.graph[table][prev_table]["attr_map"] + except KeyError: # Why is this only DLCCentroid?? 
+ attr_map = self.graph[prev_table][table]["attr_map"] ret[prev_table]["attr_map"] = attr_map else: free_table = dj.FreeTable(self._connection, table) diff --git a/src/spyglass/utils/dj_merge_tables.py b/src/spyglass/utils/dj_merge_tables.py index c5461e75c..1e14468da 100644 --- a/src/spyglass/utils/dj_merge_tables.py +++ b/src/spyglass/utils/dj_merge_tables.py @@ -18,6 +18,30 @@ RESERVED_PRIMARY_KEY = "merge_id" RESERVED_SECONDARY_KEY = "source" RESERVED_SK_LENGTH = 32 +MERGE_DEFINITION = ( + f"\n {RESERVED_PRIMARY_KEY}: uuid\n ---\n" + + f" {RESERVED_SECONDARY_KEY}: varchar({RESERVED_SK_LENGTH})\n " +) + + +def is_merge_table(table): + """Return True if table definition matches the default Merge table. + + Regex removes comments and blank lines before comparison. + """ + if not isinstance(table, dj.Table): + return False + if isinstance(table, dj.FreeTable): + fields, pk = table.heading.names, table.primary_key + return fields == [ + RESERVED_PRIMARY_KEY, + RESERVED_SECONDARY_KEY, + ] and pk == [RESERVED_PRIMARY_KEY] + return MERGE_DEFINITION == re.sub( + r"\n\s*\n", + "\n", + re.sub(r"#.*\n", "\n", getattr(table, "definition", "")), + ) class Merge(dj.Manual): @@ -34,21 +58,16 @@ def __init__(self): super().__init__() self._reserved_pk = RESERVED_PRIMARY_KEY self._reserved_sk = RESERVED_SECONDARY_KEY - merge_def = ( - f"\n {self._reserved_pk}: uuid\n ---\n" - + f" {self._reserved_sk}: varchar({RESERVED_SK_LENGTH})\n " - ) if not self.is_declared: - # remove comments after # from each line of definition - if self._remove_comments(self.definition) != merge_def: + if not is_merge_table(self): # Check definition logger.warn( - "Merge table with non-default definition\n\t" - + f"Expected: {merge_def.strip()}\n\t" + "Merge table with non-default definition\n" + + f"Expected: {MERGE_DEFINITION.strip()}\n" + f"Actual : {self.definition.strip()}" ) for part in self.parts(as_objects=True): if part.primary_key != self.primary_key: - logger.warn( + logger.warn( # PK is only 'merge_id' in parts, no others f"Unexpected primary key in {part.table_name}" + f"\n\tExpected: {self.primary_key}" + f"\n\tActual : {part.primary_key}" diff --git a/src/spyglass/utils/dj_mixin.py b/src/spyglass/utils/dj_mixin.py index 082116bf6..9377b30c2 100644 --- a/src/spyglass/utils/dj_mixin.py +++ b/src/spyglass/utils/dj_mixin.py @@ -9,12 +9,14 @@ from datajoint.logging import logger as dj_logger from datajoint.table import Table from datajoint.utils import get_master, user_choice +from networkx import NetworkXError from pymysql.err import DataError from spyglass.utils.database_settings import SHARED_MODULES from spyglass.utils.dj_chains import TableChain, TableChains from spyglass.utils.dj_helper_fn import fetch_nwb, get_nwb_table from spyglass.utils.dj_merge_tables import RESERVED_PRIMARY_KEY as MERGE_PK +from spyglass.utils.dj_merge_tables import Merge, is_merge_table from spyglass.utils.logging import logger try: @@ -67,20 +69,22 @@ def __init__(self, *args, **kwargs): Checks that schema prefix is in SHARED_MODULES. 
""" - if ( - self.database # Connected to a database - and not self.is_declared # New table - and self.database.split("_")[0] # Prefix - not in [ - *SHARED_MODULES, # Shared modules - dj.config["database.user"], # User schema - "temp", - "test", - ] - ): + if self.is_declared: + return + if self.database and self.database.split("_")[0] not in [ + *SHARED_MODULES, + dj.config["database.user"], + "temp", + "test", + ]: logger.error( f"Schema prefix not in SHARED_MODULES: {self.database}" ) + if is_merge_table(self) and not isinstance(self, Merge): + raise TypeError( + "Table definition matches Merge but does not inherit class: " + + self.full_table_name + ) # ------------------------------- fetch_nwb ------------------------------- @@ -175,6 +179,7 @@ def _merge_tables(self) -> Dict[str, dj.FreeTable]: """ self.connection.dependencies.load() merge_tables = {} + visited = set() def search_descendants(parent): for desc in parent.descendants(as_objects=True): @@ -184,12 +189,18 @@ def search_descendants(parent): or master_name in merge_tables ): continue - master = dj.FreeTable(self.connection, master_name) - if MERGE_PK in master.heading.names: - merge_tables[master_name] = master - search_descendants(master) + master_ft = dj.FreeTable(self.connection, master_name) + if is_merge_table(master_ft): + merge_tables[master_name] = master_ft + if master_name not in visited: + visited.add(master_name) + search_descendants(master_ft) - _ = search_descendants(self) + try: + _ = search_descendants(self) + except NetworkXError as e: + table_name = "".join(e.args[0].split("`")[1:4]) + raise ValueError(f"Please import {table_name} and try again.") logger.info( f"Building merge cache for {self.table_name}.\n\t" @@ -231,6 +242,7 @@ def _get_chain(self, substring) -> TableChains: for name, chain in self._merge_chains.items(): if substring.lower() in name: return chain + raise ValueError(f"No chain found with '{substring}' in name.") def _commit_merge_deletes( self, merge_join_dict: Dict[str, List[QueryExpression]], **kwargs @@ -355,14 +367,19 @@ def _get_exp_summary(self): str Summary of experimenters for session(s). """ + Session = self._delete_deps[-1] SesExp = Session.Experimenter - empty_pk = {self._member_pk: "NULL"} + # Not called in delete permission check, only bare _get_exp_summary + if self._member_pk in self.heading.names: + return self * SesExp + + empty_pk = {self._member_pk: "NULL"} format = dj.U(self._session_pk, self._member_pk) - sess_link = self._session_connection.join( - self.restriction, reverse_order=True - ) + + restr = self.restriction or True + sess_link = self._session_connection.join(restr, reverse_order=True) exp_missing = format & (sess_link - SesExp).proj(**empty_pk) exp_present = format & (sess_link * SesExp - exp_missing).proj() @@ -404,7 +421,10 @@ def _check_delete_permission(self) -> None: if dj_user in LabMember().admin: # bypass permission check for admin return - if not self._session_connection: + if ( + not self._session_connection # Table has no session + or self._member_pk in self.heading.names # Table has experimenter + ): logger.warn( # Permit delete if no session connection "Could not find lab team associated with " + f"{self.__class__.__name__}." @@ -415,7 +435,6 @@ def _check_delete_permission(self) -> None: sess_summary = self._get_exp_summary() experimenters = sess_summary.fetch(self._member_pk) if None in experimenters: - # TODO: Check if allow delete of remainder? 
raise PermissionError( "Please ensure all Sessions have an experimenter in " + f"SessionExperimenter:\n{sess_summary}"
Document use of mixin class for custom spyglass tables New users may not be informed about the mixin class and its usage. We should make sure there is documentation on how to include this class for custom spyglass tables and what it does.
2024-03-29T16:12:14
0.0
[]
[]
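For illustration, a self-contained sketch of the definition-normalisation trick used by `is_merge_table` in the patch above; the indentation and the sample definition are illustrative, not copied from spyglass:

```python
import re

# Canonical Merge definition (indentation is illustrative)
MERGE_DEFINITION = (
    "\n    merge_id: uuid\n    ---\n"
    "    source: varchar(32)\n    "
)


def normalize(definition: str) -> str:
    # Strip comment lines, then collapse the blank lines they leave behind,
    # mirroring the two re.sub calls in is_merge_table
    no_comments = re.sub(r"#.*\n", "\n", definition)
    return re.sub(r"\n\s*\n", "\n", no_comments)


# A candidate definition that differs only by a comment line
candidate = (
    "\n    merge_id: uuid\n    # source table for this entry\n    ---\n"
    "    source: varchar(32)\n    "
)
print(normalize(candidate) == MERGE_DEFINITION)  # True
```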
uber/causalml
uber__causalml-778
a0315660d9b14f5d943aa688d8242eb621d2ba76
diff --git a/docs/installation.rst b/docs/installation.rst index 32be82c4..58cdaa1e 100644 --- a/docs/installation.rst +++ b/docs/installation.rst @@ -34,6 +34,8 @@ Install from the ``conda`` virtual environment This will create a new ``conda`` virtual environment named ``causalml-[tf-]py3x``, where ``x`` is in ``[7, 8, 9]``. e.g. ``causalml-py37`` or ``causalml-tf-py38``. If you want to change the name of the environment, update the relevant YAML file in ``envs/``. +Note: if you do not have the base cxx-compiler libraries installed, follow the ``Install from source`` instructions below to install them first. + .. code-block:: bash git clone https://github.com/uber/causalml.git @@ -80,6 +82,7 @@ Create a clean ``conda`` environment. conda install -c conda-forge cxx-compiler conda install python-graphviz conda install -c conda-forge xorg-libxrender + conda install -c conda-forge libxcrypt Then:
build from source failing: no such file or directory <crypt.h> [Builds from source](https://github.com/uber/causalml/actions/workflows/test-build-from-source.yml) are failing for python 3.7 and 3.8 due to a missing file <crypt.h>: ![image](https://github.com/uber/causalml/assets/9282281/5cd547d3-898e-4d45-98c6-255ce2509aa4) **To Reproduce** Follow build from source installation steps **Expected behavior** No failure. **Environment (please complete the following information):** See run details at: https://github.com/uber/causalml/actions/runs/8746279634
This seems related to [this change in conda-forge](https://github.com/conda-forge/linux-sysroot-feedstock/issues/52#issuecomment-1946615469).
2024-06-05T22:09:13
0.0
[]
[]
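For background, a likely reason only the 3.7/3.8 builds break (hedged: CPython's `Python.h` stopped including `<crypt.h>` as of 3.9, so C/Cython extension builds on older interpreters still need the header the conda-forge sysroot dropped). A quick runtime probe:

```python
import ctypes.util

# None here means libcrypt/libxcrypt is not visible on this system, which
# is what `conda install -c conda-forge libxcrypt` in the patch addresses
print(ctypes.util.find_library("crypt"))
```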
uber/causalml
uber__causalml-773
cc30c4a698f8f3a96e9646a118c014963519c5db
diff --git a/causalml/metrics/visualize.py b/causalml/metrics/visualize.py index 39fe4bf7..7dd13980 100644 --- a/causalml/metrics/visualize.py +++ b/causalml/metrics/visualize.py @@ -340,12 +340,16 @@ def get_tmlegain( lift_ub = [] for col in model_names: + # Create `n_segment` equal segments from sorted model estimates. Rank is used to break ties. + # ref: https://stackoverflow.com/a/46979206/3216742 + segments = pd.qcut(df[col].rank(method="first"), n_segment, labels=False) + ate_model, ate_model_lb, ate_model_ub = tmle.estimate_ate( X=df[inference_col], p=df[p_col], treatment=df[treatment_col], y=df[outcome_col], - segment=pd.qcut(df[col], n_segment, labels=False), + segment=segments, ) lift_model = [0.0] * (n_segment + 1) lift_model[n_segment] = ate_all[0] @@ -446,19 +450,21 @@ def get_tmleqini( qini_ub = [] for col in model_names: + # Create `n_segment` equal segments from sorted model estimates. Rank is used to break ties. + # ref: https://stackoverflow.com/a/46979206/3216742 + segments = pd.qcut(df[col].rank(method="first"), n_segment, labels=False) + ate_model, ate_model_lb, ate_model_ub = tmle.estimate_ate( X=df[inference_col], p=df[p_col], treatment=df[treatment_col], y=df[outcome_col], - segment=pd.qcut(df[col], n_segment, labels=False), + segment=segments, ) qini_model = [0] for i in range(1, n_segment): - n_tr = df[pd.qcut(df[col], n_segment, labels=False) == (n_segment - i)][ - treatment_col - ].sum() + n_tr = df[segments == (n_segment - i)][treatment_col].sum() qini_model.append(ate_model[0][n_segment - i] * n_tr) qini.append(qini_model) @@ -467,9 +473,7 @@ def get_tmleqini( qini_lb_model = [0] qini_ub_model = [0] for i in range(1, n_segment): - n_tr = df[pd.qcut(df[col], n_segment, labels=False) == (n_segment - i)][ - treatment_col - ].sum() + n_tr = df[segments == (n_segment - i)][treatment_col].sum() qini_lb_model.append(ate_model_lb[0][n_segment - i] * n_tr) qini_ub_model.append(ate_model_ub[0][n_segment - i] * n_tr) diff --git a/pyproject.toml b/pyproject.toml index 4c599fd3..286c9c1b 100755 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "causalml" -version = "0.15.1" +version = "0.15.2dev" description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms" readme = { file = "README.md", content-type = "text/markdown" }
get_tmlegain() ValueError: Bin edges must be unique --------------------------------------------------------------------------- ValueError Traceback (most recent call last) --> 324 plot_tmlegain(pred_df, inference_col, outcome_col=y_col, 325 treatment_col=treatment_col, p_col=p_col) 326 ~/anaconda3/envs/myenv/lib/python3.8/site-packages/causalml/metrics/visualize.py in plot_tmlegain(df, inference_col, learner, outcome_col, treatment_col, p_col, n_segment, cv, calibrate_propensity, ci, figsize) 656 ci (bool, optional): whether return confidence intervals for ATE or not 657 """ --> 658 plot_df = get_tmlegain( 659 df, 660 learner=learner, ~/anaconda3/envs/myenv/lib/python3.8/site-packages/causalml/metrics/visualize.py in get_tmlegain(df, inference_col, learner, outcome_col, treatment_col, p_col, n_segment, cv, calibrate_propensity, ci) 341 treatment=df[treatment_col], 342 y=df[outcome_col], --> 343 segment=pd.qcut(df[col], n_segment, labels=False), 344 ) 345 lift_model = [0.0] * (n_segment + 1) ~/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/core/reshape/tile.py in qcut(x, q, labels, retbins, precision, duplicates) 370 quantiles = q 371 bins = algos.quantile(x, quantiles) --> 372 fac, bins = _bins_to_cuts( 373 x, 374 bins, ~/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/core/reshape/tile.py in _bins_to_cuts(x, bins, right, labels, precision, include_lowest, dtype, duplicates, ordered) 411 if len(unique_bins) < len(bins) and len(bins) != 2: 412 if duplicates == "raise": --> 413 raise ValueError( 414 f"Bin edges must be unique: {repr(bins)}.\n" 415 f"You can drop duplicate edges by setting the 'duplicates' kwarg" ValueError: Bin edges must be unique: array([-0.08210021, 0. , 0. , 0. , 0. , 0.07284003]). You can drop duplicate edges by setting the 'duplicates' kwarg **Environment (please complete the following information):** - OS: [Linux] - Python Version: [e.g. 3.8.19] - Versions of Major Dependencies (`pandas`, `scikit-learn`, `cython`): [e.g. 
`pandas==1.3.5`, `scikit-learn==1.0.2`, `cython==0.29.34`]
use pd.qcut(df[col], n_segment, labels=False, duplicates='drop') to solve this problem?
2024-05-09T18:27:27
0.0
[]
[]
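For illustration, a minimal sketch contrasting the `duplicates='drop'` workaround suggested in the hint with the rank-based segmentation the patch adopts; the tied scores are synthetic:

```python
import pandas as pd

# Model scores with heavy ties, as produced by a sparse uplift model
scores = pd.Series([0.0] * 8 + [-0.08, 0.07])

# duplicates="drop" silences the ValueError but merges bins, so the
# resulting segments are unequal in size
merged = pd.qcut(scores, 5, labels=False, duplicates="drop")

# Ranking first (ties broken by position) keeps five equal segments,
# which is what the patch above switches to
equal = pd.qcut(scores.rank(method="first"), 5, labels=False)

print(merged.value_counts().sort_index())  # two uneven bins: 9 rows vs 1
print(equal.value_counts().sort_index())   # five bins of 2 rows each
```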
openwisp/openwisp-firmware-upgrader
openwisp__openwisp-firmware-upgrader-270
cff1f2bcb71245ea382c8a6645da2d95bb51a6be
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index efc5df7..6a37db5 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -54,7 +54,7 @@ jobs: pip install -U pip wheel setuptools pip install -U -r requirements-test.txt pip install -U -e . - pip install ${{ matrix.django-version }} + pip install -U ${{ matrix.django-version }} sudo npm install -g jshint stylelint - name: QA checks diff --git a/openwisp_firmware_upgrader/admin.py b/openwisp_firmware_upgrader/admin.py index c8acbf4..cb12306 100644 --- a/openwisp_firmware_upgrader/admin.py +++ b/openwisp_firmware_upgrader/admin.py @@ -20,7 +20,7 @@ from django.utils.translation import gettext_lazy as _ from reversion.admin import VersionAdmin -from openwisp_controller.config.admin import DeviceAdmin +from openwisp_controller.config.admin import DeactivatedDeviceReadOnlyMixin, DeviceAdmin from openwisp_users.multitenancy import MultitenantAdminMixin, MultitenantOrgFilter from openwisp_utils.admin import ReadOnlyAdmin, TimeReadonlyAdminMixin @@ -402,7 +402,9 @@ def get_form_kwargs(self, index): return kwargs -class DeviceFirmwareInline(MultitenantAdminMixin, admin.StackedInline): +class DeviceFirmwareInline( + MultitenantAdminMixin, DeactivatedDeviceReadOnlyMixin, admin.StackedInline +): model = DeviceFirmware formset = DeviceFormSet form = DeviceFirmwareForm diff --git a/openwisp_firmware_upgrader/api/views.py b/openwisp_firmware_upgrader/api/views.py index af6d419..12256af 100644 --- a/openwisp_firmware_upgrader/api/views.py +++ b/openwisp_firmware_upgrader/api/views.py @@ -3,7 +3,7 @@ from django.http import Http404 from django_filters.rest_framework import DjangoFilterBackend from rest_framework import filters, generics, pagination, serializers, status -from rest_framework.exceptions import NotFound +from rest_framework.exceptions import NotFound, PermissionDenied from rest_framework.request import clone_request from rest_framework.response import Response from rest_framework.utils.serializer_helpers import ReturnDict @@ -257,6 +257,12 @@ class DeviceFirmwareDetailView( lookup_url_kwarg = 'pk' organization_field = 'device__organization' + def get_object(self): + obj = super().get_object() + if self.request.method not in ('GET', 'HEAD') and obj.device.is_deactivated(): + raise PermissionDenied + return obj + def get_serializer_context(self): context = super().get_serializer_context() context.update({'device_id': self.kwargs['pk']})
[change] Don't allow performing firmware upgrades on deactivated devices Depends on https://github.com/openwisp/openwisp-controller/pull/840 The DeviceFirmware inline admin should become read-only on deactivated devices. It should not be possible to upgrade a deactivated device.
2024-08-01T15:56:40
0.0
[]
[]
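For illustration, a hedged sketch of how a client would observe the behaviour added above, using DRF's test client; `admin_user`, `device` (deactivated), and `image` are placeholder fixtures:

```python
from rest_framework.test import APIClient

client = APIClient()
client.force_authenticate(user=admin_user)  # placeholder authenticated user

url = f"/api/v1/firmware-upgrader/device/{device.pk}/firmware/"

# GET/HEAD stay allowed on a deactivated device...
assert client.get(url).status_code == 200

# ...while writes hit the PermissionDenied raised in get_object()
assert client.put(url, {"image": image.pk}).status_code == 403
```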
openwisp/openwisp-firmware-upgrader
openwisp__openwisp-firmware-upgrader-255
17e91257d9928b25a0338b9a04448ddd74a38d71
diff --git a/openwisp_firmware_upgrader/base/models.py b/openwisp_firmware_upgrader/base/models.py index 0b33d565..e9091b31 100644 --- a/openwisp_firmware_upgrader/base/models.py +++ b/openwisp_firmware_upgrader/base/models.py @@ -211,6 +211,7 @@ class AbstractFirmwareImage(TimeStampedEditableModel): upload_to=get_build_directory, max_file_size=app_settings.MAX_FILE_SIZE, storage=app_settings.PRIVATE_STORAGE_INSTANCE, + max_length=255, ) type = models.CharField( blank=True, diff --git a/openwisp_firmware_upgrader/migrations/0010_alter_firmwareimage_file.py b/openwisp_firmware_upgrader/migrations/0010_alter_firmwareimage_file.py new file mode 100644 index 00000000..b78a68c1 --- /dev/null +++ b/openwisp_firmware_upgrader/migrations/0010_alter_firmwareimage_file.py @@ -0,0 +1,30 @@ +# Generated by Django 4.2.13 on 2024-05-30 07:38 + +from urllib.parse import urljoin +from ..settings import FIRMWARE_API_BASEURL, IMAGE_URL_PATH +from django.db import migrations +import openwisp_firmware_upgrader.base.models +import private_storage.fields +import private_storage.storage.files + + +class Migration(migrations.Migration): + + dependencies = [ + ("firmware_upgrader", "0009_upgrade_options"), + ] + + operations = [ + migrations.AlterField( + model_name="firmwareimage", + name="file", + field=private_storage.fields.PrivateFileField( + max_length=255, + storage=private_storage.storage.files.PrivateFileSystemStorage( + base_url=urljoin(FIRMWARE_API_BASEURL, IMAGE_URL_PATH), + ), + upload_to=openwisp_firmware_upgrader.base.models.get_build_directory, + verbose_name="File", + ), + ), + ]
Firmware filename is limited to 100 characters Hi, I'm compiling my own OpenWrt firmware for a Xiaomi Mi Router 4A Gigabit edition, but the generated firmware can't be uploaded to the openWISP server because of the limit on the firmware filename length: `ramips-mt7621-xiaomi_mi-router-4a-gigabit-squashfs-sysupgrade.bin` ![image](https://github.com/openwisp/openwisp-firmware-upgrader/assets/119055741/6e3c6751-4a76-45e4-a3a3-b49684281098) Is there some config to change this, or is this simply a limit of the model in the Django migrations? I'm using the imagebuilder ansible playbook and tested directly on the site too.
I think it may be a default limit as I don't see an explicit one defined in the code: https://github.com/openwisp/openwisp-firmware-upgrader/blob/master/openwisp_firmware_upgrader/base/models.py#L209-L214 As a temporary workaround you should be able to shorten the file name and select the image type manually. Hi, I would like to work on this issue. Please assign me this issue. > I think it may be a default limit as I don't see an explicit one defined in the code: https://github.com/openwisp/openwisp-firmware-upgrader/blob/master/openwisp_firmware_upgrader/base/models.py#L209-L214 > > As a temporary workaround you should be able to shorten the file name and select the image type manually. Yup, the Django FileField default is 100 characters
2024-05-14T19:58:41
0.0
[]
[]
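For illustration, the underlying Django behaviour the fix relies on, in a minimal hypothetical model (`upload_path` and the model name are placeholders):

```python
from django.db import models


def upload_path(instance, filename):
    # Placeholder upload path builder; note that max_length constrains the
    # full stored path (directory plus filename), not just the basename
    return f"firmware/{filename}"


class FirmwareImage(models.Model):
    # FileField defaults to max_length=100; raising it to 255, as the patch
    # does, requires the accompanying migration
    file = models.FileField(upload_to=upload_path, max_length=255)
```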
openwisp/openwisp-firmware-upgrader
openwisp__openwisp-firmware-upgrader-231
0ded06ae2dd8233c8c4ff590c2ac3c08d448a3cb
diff --git a/README.rst b/README.rst index 45b45b32..24abac5f 100644 --- a/README.rst +++ b/README.rst @@ -471,6 +471,20 @@ List mass upgrade operations GET /api/v1/firmware-upgrader/batch-upgrade-operation/ +**Available filters** + +The list of batch upgrade operations provides the following filters: + +- ``build`` (Firmware build ID) +- ``status`` (One of: idle, in-progress, success, failed) + +Here's a few examples: + +.. code-block:: text + + GET /api/v1/firmware-upgrader/batch-upgrade-operation/?build={build_id} + GET /api/v1/firmware-upgrader/batch-upgrade-operation/?status={status} + Get mass upgrade operation detail ################################# @@ -485,6 +499,22 @@ List firmware builds GET /api/v1/firmware-upgrader/build/ +**Available filters** + +The list of firmware builds provides the following filters: + +- ``category`` (Firmware category ID) +- ``version`` (Firmware build version) +- ``os`` (Firmware build os identifier) + +Here's a few examples: + +.. code-block:: text + + GET /api/v1/firmware-upgrader/build/?category={category_id} + GET /api/v1/firmware-upgrader/build/?version={version} + GET /api/v1/firmware-upgrader/build/?os={os} + Create firmware build ##################### @@ -527,6 +557,15 @@ Get list of images of a firmware build GET /api/v1/firmware-upgrader/build/{id}/image/ +**Available filters** + +The list of images of a firmware build can be filtered by using +``type`` (any one of the available firmware image types). + +.. code-block:: text + + GET /api/v1/firmware-upgrader/build/{id}/image/?type={type} + Upload new firmware image to the build ###################################### @@ -539,21 +578,21 @@ Get firmware image details .. code-block:: text - GET /api/v1/firmware-upgrader/build/{build_pk}/image/{id}/ + GET /api/v1/firmware-upgrader/build/{build_id}/image/{id}/ Delete firmware image ##################### .. code-block:: text - DELETE /api/v1/firmware-upgrader/build/{build_pk}/image/{id}/ + DELETE /api/v1/firmware-upgrader/build/{build_id}/image/{id}/ Download firmware image ####################### .. code-block:: text - GET /api/v1/firmware-upgrader/build/{build_pk}/image/{id}/download/ + GET /api/v1/firmware-upgrader/build/{build_id}/image/{id}/download/ Perform batch upgrade ##################### @@ -619,6 +658,95 @@ Delete a firmware category DELETE /api/v1/firmware-upgrader/category/{id}/ +List upgrade operations +####################### + +.. code-block:: text + + GET /api/v1/firmware-upgrader/upgrade-operation/ + +**Available filters** + +The list of upgrade operations provides the following filters: + +- ``device__organization`` (Organization ID of the device) +- ``device__organization_slug`` (Organization slug of the device) +- ``device`` (Device ID) +- ``image`` (Firmware image ID) +- ``status`` (One of: in-progress, success, failed, aborted) + + +Here's a few examples: + +.. code-block:: text + + GET /api/v1/firmware-upgrader/upgrade-operation/?device__organization={organization_id} + GET /api/v1/firmware-upgrader/upgrade-operation/?device__organization__slug={organization_slug} + GET /api/v1/firmware-upgrader/upgrade-operation/?device={device_id} + GET /api/v1/firmware-upgrader/upgrade-operation/?image={image_id} + GET /api/v1/firmware-upgrader/upgrade-operation/?status={status} + +Get upgrade operation details +############################# + +.. code-block:: text + + GET /api/v1/firmware-upgrader/upgrade-operation/{id} + +List device upgrade operations +############################## + +.. 
code-block:: text + + GET /api/v1/firmware-upgrader/device/{device_id}/upgrade-operation/ + +**Available filters** + +The list of device upgrade operations can be filtered by +``status`` (one of: in-progress, success, failed, aborted). + +.. code-block:: text + + GET /api/v1/firmware-upgrader/device/{device_id}/upgrade-operation/?status={status} + +Create device firmware +###################### + +Sending a PUT request to the endpoint below will +create a new device firmware if it does not already exist. + +.. code-block:: text + + PUT /api/v1/firmware-upgrader/device/{device_id}/firmware/ + +Get device firmware details +########################### + +.. code-block:: text + + GET /api/v1/firmware-upgrader/device/{device_id}/firmware/ + +Change details of device firmware +################################# + +.. code-block:: text + + PUT /api/v1/firmware-upgrader/device/{device_id}/firmware/ + +Patch details of device firmware +################################# + +.. code-block:: text + + PATCH /api/v1/firmware-upgrader/device/{device_id}/firmware/ + +Delete device firmware +###################### + +.. code-block:: text + + DELETE /api/v1/firmware-upgrader/device/{device_pk}/firmware/ + Settings -------- diff --git a/openwisp_firmware_upgrader/api/filters.py b/openwisp_firmware_upgrader/api/filters.py new file mode 100644 index 00000000..17b58ba7 --- /dev/null +++ b/openwisp_firmware_upgrader/api/filters.py @@ -0,0 +1,41 @@ +from django.utils.translation import gettext_lazy as _ +from django_filters import rest_framework as filters + +from openwisp_users.api.mixins import FilterDjangoByOrgManaged + +from ..swapper import load_model + +UpgradeOperation = load_model('UpgradeOperation') + + +class UpgradeOperationFilter(FilterDjangoByOrgManaged): + device = filters.CharFilter( + field_name='device', + ) + image = filters.CharFilter( + field_name='image', + ) + + def _set_valid_filterform_lables(self): + self.filters['device__organization'].label = _('Organization') + self.filters['device__organization__slug'].label = _('Organization slug') + + def __init__(self, *args, **kwargs): + super(UpgradeOperationFilter, self).__init__(*args, **kwargs) + self._set_valid_filterform_lables() + + class Meta: + model = UpgradeOperation + fields = [ + 'device__organization', + 'device__organization__slug', + 'device', + 'image', + 'status', + ] + + +class DeviceUpgradeOperationFilter(FilterDjangoByOrgManaged): + class Meta: + model = UpgradeOperation + fields = ['status'] diff --git a/openwisp_firmware_upgrader/api/serializers.py b/openwisp_firmware_upgrader/api/serializers.py index 7b0e1d76..74384250 100644 --- a/openwisp_firmware_upgrader/api/serializers.py +++ b/openwisp_firmware_upgrader/api/serializers.py @@ -1,3 +1,6 @@ +import swapper +from django.core.exceptions import ValidationError +from django.utils.translation import gettext_lazy as _ from rest_framework import serializers from openwisp_users.api.mixins import FilterSerializerByOrgManaged @@ -10,6 +13,8 @@ Category = load_model('Category') FirmwareImage = load_model('FirmwareImage') UpgradeOperation = load_model('UpgradeOperation') +DeviceFirmware = load_model('DeviceFirmware') +Device = swapper.load_model('config', 'Device') class BaseMeta: @@ -51,10 +56,16 @@ class Meta(BaseMeta): fields = '__all__' -class UpgradeOperationSerializer(BaseSerializer): +class UpgradeOperationSerializer(serializers.ModelSerializer): class Meta: model = UpgradeOperation - exclude = ['batch'] + fields = ('id', 'device', 'image', 'status', 'log', 'modified', 
'created') + + +class DeviceUpgradeOperationSerializer(serializers.ModelSerializer): + class Meta: + model = UpgradeOperation + fields = ('id', 'device', 'image', 'status', 'log', 'modified') class BatchUpgradeOperationListSerializer(BaseSerializer): @@ -77,3 +88,39 @@ class BatchUpgradeOperationSerializer(BatchUpgradeOperationListSerializer): class Meta: model = BatchUpgradeOperation fields = '__all__' + + +class DeviceFirmwareSerializer(ValidatedModelSerializer): + class Meta: + model = DeviceFirmware + fields = ('id', 'image', 'installed', 'modified') + read_only_fields = ('installed', 'modified') + + def validate(self, data): + if not data.get('device'): + device_id = self.context.get('device_id') + device = self._get_device_object(device_id) + data.update({'device': device}) + image = data.get('image') + device = data.get('device') + if ( + image + and device + and image.build.category.organization != device.organization + ): + raise ValidationError( + { + 'image': _( + 'The organization of the image doesn\'t ' + 'match the organization of the device' + ) + } + ) + return super().validate(data) + + def _get_device_object(self, device_id): + try: + device = Device.objects.get(id=device_id) + return device + except Device.DoesNotExist: + return None diff --git a/openwisp_firmware_upgrader/api/urls.py b/openwisp_firmware_upgrader/api/urls.py index 6cd83498..294cb221 100644 --- a/openwisp_firmware_upgrader/api/urls.py +++ b/openwisp_firmware_upgrader/api/urls.py @@ -47,6 +47,26 @@ views.batch_upgrade_operation_detail, name='api_batchupgradeoperation_detail', ), + path( + 'upgrade-operation/', + views.upgrade_operation_list, + name='api_upgradeoperation_list', + ), + path( + 'upgrade-operation/<uuid:pk>/', + views.upgrade_operation_detail, + name='api_upgradeoperation_detail', + ), + path( + 'device/<uuid:pk>/upgrade-operation/', + views.device_upgrade_operation_list, + name='api_deviceupgradeoperation_list', + ), + path( + 'device/<uuid:pk>/firmware/', + views.device_firmware_detail, + name='api_devicefirmware_detail', + ), ] ), ), diff --git a/openwisp_firmware_upgrader/api/views.py b/openwisp_firmware_upgrader/api/views.py index e47e424b..af6d419f 100644 --- a/openwisp_firmware_upgrader/api/views.py +++ b/openwisp_firmware_upgrader/api/views.py @@ -1,26 +1,38 @@ +import swapper from django.core.exceptions import ValidationError +from django.http import Http404 from django_filters.rest_framework import DjangoFilterBackend -from rest_framework import filters, generics, pagination, serializers +from rest_framework import filters, generics, pagination, serializers, status from rest_framework.exceptions import NotFound +from rest_framework.request import clone_request from rest_framework.response import Response +from rest_framework.utils.serializer_helpers import ReturnDict from openwisp_firmware_upgrader import private_storage from openwisp_users.api.mixins import FilterByOrganizationManaged from openwisp_users.api.mixins import ProtectedAPIMixin as BaseProtectedAPIMixin +from ..hardware import REVERSE_FIRMWARE_IMAGE_MAP from ..swapper import load_model +from .filters import DeviceUpgradeOperationFilter, UpgradeOperationFilter from .serializers import ( BatchUpgradeOperationListSerializer, BatchUpgradeOperationSerializer, BuildSerializer, CategorySerializer, + DeviceFirmwareSerializer, + DeviceUpgradeOperationSerializer, FirmwareImageSerializer, + UpgradeOperationSerializer, ) BatchUpgradeOperation = load_model('BatchUpgradeOperation') +UpgradeOperation = load_model('UpgradeOperation') 
Build = load_model('Build') Category = load_model('Category') FirmwareImage = load_model('FirmwareImage') +DeviceFirmware = load_model('DeviceFirmware') +Device = swapper.load_model('config', 'Device') class ListViewPagination(pagination.PageNumberPagination): @@ -182,6 +194,162 @@ def retrieve(self, request, *args, **kwargs): ) +class DeviceUpgradeOperationMixin(ProtectedAPIMixin): + queryset = UpgradeOperation.objects.all() + parent = None + + def get_parent_queryset(self): + return Device.objects.filter(pk=self.kwargs['pk']) + + def assert_parent_exists(self): + try: + assert self.get_parent_queryset().exists() + except (AssertionError, ValidationError): + raise NotFound(detail='device not found') + + def get_queryset(self): + return super().get_queryset().filter(device=self.kwargs['pk']) + + def initial(self, *args, **kwargs): + self.assert_parent_exists() + super().initial(*args, **kwargs) + + +class UpgradeOperationListView(ProtectedAPIMixin, generics.ListAPIView): + queryset = UpgradeOperation.objects.select_related('device', 'image') + serializer_class = UpgradeOperationSerializer + organization_field = 'device__organization' + filter_backends = [filters.OrderingFilter, DjangoFilterBackend] + ordering_fields = ['device_id', 'created', 'modified'] + ordering = ['-created'] + filterset_class = UpgradeOperationFilter + + +class UpgradeOperationDetailView(ProtectedAPIMixin, generics.RetrieveAPIView): + queryset = UpgradeOperation.objects.select_related('device', 'image').order_by( + '-created' + ) + serializer_class = UpgradeOperationSerializer + lookup_fields = ['pk'] + organization_field = 'device__organization' + + +class DeviceUpgradeOperationListView(DeviceUpgradeOperationMixin, generics.ListAPIView): + queryset = UpgradeOperation.objects.select_related('device', 'image').order_by( + '-created' + ) + serializer_class = DeviceUpgradeOperationSerializer + organization_field = 'device__organization' + filter_backends = [DjangoFilterBackend] + filterset_class = DeviceUpgradeOperationFilter + + def get_queryset(self): + qs = super().get_queryset() + return qs.filter(device__pk=self.kwargs['pk']) + + +class DeviceFirmwareDetailView( + ProtectedAPIMixin, generics.RetrieveUpdateDestroyAPIView +): + serializer_class = DeviceFirmwareSerializer + queryset = DeviceFirmware.objects.select_related('device', 'image') + lookup_field = 'device' + lookup_url_kwarg = 'pk' + organization_field = 'device__organization' + + def get_serializer_context(self): + context = super().get_serializer_context() + context.update({'device_id': self.kwargs['pk']}) + return context + + def get_serializer(self, *args, **kwargs): + serializer = super().get_serializer(*args, **kwargs) + if kwargs.get('instance'): + image_qs = self._get_image_queryset( + kwargs.get('instance'), kwargs.get('instance').device + ) + serializer.fields['image'].queryset = image_qs + else: + device = self._get_device_object(serializer.context.get('device_id')) + image_qs = self._get_image_queryset(device=device) + serializer.fields['image'].queryset = image_qs + return serializer + + def _get_device_object(self, device_id): + try: + device = Device.objects.get(id=device_id) + return device + except Device.DoesNotExist: + return None + + def _get_image_queryset(self, device_firmware=None, device=None): + if not device_firmware and not device: + return + image_qs = ( + FirmwareImage.objects.filter( + build__category__organization_id=device.organization_id + ) + .order_by('-created') + .select_related('build', 'build__category') + ) + if 
device.model and device.model in REVERSE_FIRMWARE_IMAGE_MAP: + image_qs = image_qs.filter(type=REVERSE_FIRMWARE_IMAGE_MAP[device.model]) + if device_firmware: + image_qs = image_qs.filter( + build__category_id=device_firmware.image.build.category_id + ) + return image_qs + + def _get_response_data(self, serializer, upgrade_operation=None): + data = {**serializer.data} + if upgrade_operation: + data.update({'upgrade_operation': {'id': upgrade_operation.id}}) + return ReturnDict(data, serializer=serializer) + + def update(self, request, *args, **kwargs): + partial = kwargs.pop('partial', False) + instance = self.get_object_or_none() + serializer = self.get_serializer(instance, data=request.data, partial=partial) + serializer.is_valid(raise_exception=True) + + if instance is None: + self.perform_create(serializer) + instance = self.get_object_or_none() + uo = instance.device.upgradeoperation_set.latest('created') + data = self._get_response_data(serializer, uo) + image_qs = self._get_image_queryset(uo, instance.device) + serializer.fields['image'].queryset = image_qs + return Response(data, status=status.HTTP_201_CREATED) + + self.perform_update(serializer) + uo = instance.device.upgradeoperation_set.latest('created') + data = self._get_response_data(serializer, uo) + image_qs = self._get_image_queryset(uo, instance.device) + serializer.fields['image'].queryset = image_qs + return Response(data, status=status.HTTP_200_OK) + + def perform_create(self, serializer): + serializer.save() + + def perform_update(self, serializer): + serializer.save() + + def get_object_or_none(self): + try: + return self.get_object() + except Http404: + if self.request.method == 'PUT': + # For PUT-as-create operation, we need to ensure that we have + # relevant permissions, as if this was a POST request. This + # will either raise a PermissionDenied exception, or simply + # return None. + self.check_permissions(clone_request(self.request, 'POST')) + else: + # PATCH requests where the object does not exist should still + # return a 404 response. + raise + + build_list = BuildListView.as_view() build_detail = BuildDetailView.as_view() api_batch_upgrade = BuildBatchUpgradeView.as_view() @@ -192,3 +360,7 @@ def retrieve(self, request, *args, **kwargs): firmware_image_list = FirmwareImageListView.as_view() firmware_image_detail = FirmwareImageDetailView.as_view() firmware_image_download = FirmwareImageDownloadView.as_view() +upgrade_operation_list = UpgradeOperationListView.as_view() +upgrade_operation_detail = UpgradeOperationDetailView.as_view() +device_upgrade_operation_list = DeviceUpgradeOperationListView.as_view() +device_firmware_detail = DeviceFirmwareDetailView.as_view()
[feature] REST API is missing endpoints for DeviceFirmware Right now it is not possible to do the following: - CRUD operations of DeviceFirmware objects, eg: `/api/v1/firmware-upgrader/device/[id]/firmware/` - When a device firmware is created or changed (which creates an upgrade operation) we should return the upgrade operation ID to allow clients to check the state of the upgrade - Read UpgradeOperation objects, eg: `/api/v1/firmware-upgrader/device/[id]/upgrade-operation/`
I would like to take a shot at this one but would like an expert's opinion to make sure I take the right approach. Any comments are greatly appreciated. On the endpoints, I think we could go 2 ways: 1. Follow the pattern in DeviceUpgradeOperationInline and do a search of the UpgradeOperations and return the UpgradeOperation IDs of the last XXX upgrades done for the device 2. Build a separate model per device that saves all the UpgradeOperations done and return the IDs from that model I am not sure I am on the right track, any suggestions as to the best approach?
2023-05-04T11:21:23
0.0
[]
[]
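For illustration, a hedged client-side sketch of the new device firmware endpoint; the host, token header format, and UUIDs below are placeholders, while the paths and the `upgrade_operation` response key come from the patch above:

```python
import requests

BASE = "https://openwisp.example.com/api/v1/firmware-upgrader"  # placeholder host
headers = {"Authorization": "Token <api-token>"}  # placeholder credentials
device_id = "11111111-2222-3333-4444-555555555555"  # placeholder device UUID
image_id = "66666666-7777-8888-9999-000000000000"  # placeholder image UUID

# PUT acts as create-or-update: assigning an image triggers an upgrade
resp = requests.put(
    f"{BASE}/device/{device_id}/firmware/",
    json={"image": image_id},
    headers=headers,
)
resp.raise_for_status()

# Per _get_response_data above, the response embeds the id of the
# UpgradeOperation started by the change, so clients can poll its status
operation_id = resp.json()["upgrade_operation"]["id"]
status = requests.get(
    f"{BASE}/upgrade-operation/{operation_id}/", headers=headers
).json()["status"]
print(status)  # one of: in-progress, success, failed, aborted
```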