Source: fiftyone/core/dataset.py (dadounhind/fiftyone, Python)
def rename_frame_field(self, field_name, new_field_name):
    """Renames the frame-level field to the given new name.

    You can use dot notation (``embedded.field.name``) to rename embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_name: the field name or ``embedded.field.name``
        new_field_name: the new field name or ``embedded.field.name``
    """
    self._rename_frame_fields({field_name: new_field_name})
def rename_frame_fields(self, field_mapping):
    """Renames the frame-level fields to the given new names.

    You can use dot notation (``embedded.field.name``) to rename embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_mapping: a dict mapping field names to new field names
    """
    self._rename_frame_fields(field_mapping)
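# Usage sketch for the rename methods above. Assumes an existing video
# dataset; the field names "gt" and "preds" are illustrative, not from the
# source:
#
#   dataset.rename_frame_field("gt", "ground_truth")
#   dataset.rename_frame_fields({"preds": "predictions"})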
def clone_sample_field(self, field_name, new_field_name):
    """Clones the given sample field into a new field of the dataset.

    You can use dot notation (``embedded.field.name``) to clone embedded
    fields.

    Args:
        field_name: the field name or ``embedded.field.name``
        new_field_name: the new field name or ``embedded.field.name``
    """
    self._clone_sample_fields({field_name: new_field_name})
def clone_sample_fields(self, field_mapping):
    """Clones the given sample fields into new fields of the dataset.

    You can use dot notation (``embedded.field.name``) to clone embedded
    fields.

    Args:
        field_mapping: a dict mapping field names to new field names into
            which to clone each field
    """
    self._clone_sample_fields(field_mapping)
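# A common use of cloning is snapshotting a field before editing it in
# place. Minimal sketch; the field names are illustrative assumptions:
#
#   dataset.clone_sample_field("predictions", "predictions_backup")
#   dataset.clone_sample_fields({"predictions": "preds_v1", "notes": "notes_v1"})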
def clone_frame_field(self, field_name, new_field_name):
    """Clones the frame-level field into a new field.

    You can use dot notation (``embedded.field.name``) to clone embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_name: the field name or ``embedded.field.name``
        new_field_name: the new field name or ``embedded.field.name``
    """
    self._clone_frame_fields({field_name: new_field_name})
def clone_frame_fields(self, field_mapping):
    """Clones the frame-level fields into new fields.

    You can use dot notation (``embedded.field.name``) to clone embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_mapping: a dict mapping field names to new field names into
            which to clone each field
    """
    self._clone_frame_fields(field_mapping)
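# The frame-level variants work the same way, but on the per-frame fields
# of a video dataset. Illustrative sketch:
#
#   dataset.clone_frame_field("detections", "detections_backup")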
def clear_sample_field(self, field_name):
    """Clears the values of the field from all samples in the dataset.

    The field will remain in the dataset's schema, and all samples will
    have the value ``None`` for the field.

    You can use dot notation (``embedded.field.name``) to clear embedded
    fields.

    Args:
        field_name: the field name or ``embedded.field.name``
    """
    self._clear_sample_fields(field_name)
def clear_sample_fields(self, field_names):
    """Clears the values of the fields from all samples in the dataset.

    The fields will remain in the dataset's schema, and all samples will
    have the value ``None`` for the fields.

    You can use dot notation (``embedded.field.name``) to clear embedded
    fields.

    Args:
        field_names: the field name or iterable of field names
    """
    self._clear_sample_fields(field_names)
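# Clearing keeps a field in the schema but sets every value to None, which
# is useful when re-running a model without redefining the field. Sketch
# with illustrative names:
#
#   dataset.clear_sample_field("predictions")
#   dataset.clear_sample_fields(["predictions", "embeddings"])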
def clear_frame_field(self, field_name):
    """Clears the values of the frame-level field from all samples in the
    dataset.

    The field will remain in the dataset's frame schema, and all frames
    will have the value ``None`` for the field.

    You can use dot notation (``embedded.field.name``) to clear embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_name: the field name or ``embedded.field.name``
    """
    self._clear_frame_fields(field_name)
def clear_frame_fields(self, field_names):
    """Clears the values of the frame-level fields from all samples in the
    dataset.

    The fields will remain in the dataset's frame schema, and all frames
    will have the value ``None`` for the fields.

    You can use dot notation (``embedded.field.name``) to clear embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_names: the field name or iterable of field names
    """
    self._clear_frame_fields(field_names)
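# Frame-level counterpart; valid only for video datasets. Illustrative
# sketch:
#
#   dataset.clear_frame_field("detections")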
def delete_sample_field(self, field_name, error_level=0):
    """Deletes the field from all samples in the dataset.

    You can use dot notation (``embedded.field.name``) to delete embedded
    fields.

    Args:
        field_name: the field name or ``embedded.field.name``
        error_level (0): the error level to use. Valid values are:

            0: raise error if a top-level field cannot be deleted
            1: log warning if a top-level field cannot be deleted
            2: ignore top-level fields that cannot be deleted
    """
    self._delete_sample_fields(field_name, error_level)
def delete_sample_fields(self, field_names, error_level=0):
    """Deletes the fields from all samples in the dataset.

    You can use dot notation (``embedded.field.name``) to delete embedded
    fields.

    Args:
        field_names: the field name or iterable of field names
        error_level (0): the error level to use. Valid values are:

            0: raise error if a top-level field cannot be deleted
            1: log warning if a top-level field cannot be deleted
            2: ignore top-level fields that cannot be deleted
    """
    self._delete_sample_fields(field_names, error_level)
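# The error_level flag governs failures on top-level fields: 0 raises,
# 1 logs a warning, and 2 silently skips. Illustrative sketch:
#
#   dataset.delete_sample_field("tmp")
#   dataset.delete_sample_fields(["tmp1", "tmp2"], error_level=1)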
def delete_frame_field(self, field_name, error_level=0):
    """Deletes the frame-level field from all samples in the dataset.

    You can use dot notation (``embedded.field.name``) to delete embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_name: the field name or ``embedded.field.name``
        error_level (0): the error level to use. Valid values are:

            0: raise error if a top-level field cannot be deleted
            1: log warning if a top-level field cannot be deleted
            2: ignore top-level fields that cannot be deleted
    """
    self._delete_frame_fields(field_name, error_level)
def delete_frame_fields(self, field_names, error_level=0):
    """Deletes the frame-level fields from all samples in the dataset.

    You can use dot notation (``embedded.field.name``) to delete embedded
    frame fields.

    Only applicable to video datasets.

    Args:
        field_names: a field name or iterable of field names
        error_level (0): the error level to use. Valid values are:

            0: raise error if a top-level field cannot be deleted
            1: log warning if a top-level field cannot be deleted
            2: ignore top-level fields that cannot be deleted
    """
    self._delete_frame_fields(field_names, error_level)
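# Frame-level counterpart with the same error_level semantics.
# Illustrative sketch for a video dataset:
#
#   dataset.delete_frame_fields(["tmp_detections"], error_level=2)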
def iter_samples(self):
    """Returns an iterator over the samples in the dataset.

    Returns:
        an iterator over :class:`fiftyone.core.sample.Sample` instances
    """
    for d in self._aggregate(detach_frames=True):
        doc = self._sample_dict_to_doc(d)
        sample = fos.Sample.from_doc(doc, dataset=self)
        yield sample
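# Each yielded Sample is backed by this dataset, so edits persist once
# saved. Minimal sketch; the "reviewed" field is an illustrative addition:
#
#   for sample in dataset.iter_samples():
#       sample["reviewed"] = True
#       sample.save()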
def add_sample(self, sample, expand_schema=True):
    """Adds the given sample to the dataset.

    If the sample instance does not belong to a dataset, it is updated
    in-place to reflect its membership in this dataset. If the sample
    instance belongs to another dataset, it is not modified.

    Args:
        sample: a :class:`fiftyone.core.sample.Sample`
        expand_schema (True): whether to dynamically add new sample fields
            encountered to the dataset schema. If False, an error is raised
            if the sample's schema is not a subset of the dataset schema

    Returns:
        the ID of the sample in the dataset

    Raises:
        ``mongoengine.errors.ValidationError``: if a field of the sample
            has a type that is inconsistent with the dataset schema, or if
            ``expand_schema == False`` and a new field is encountered
    """
    if sample._in_db:
        # The sample already belongs to a dataset; add a copy instead
        sample = sample.copy()

    if self.media_type is None:
        self.media_type = sample.media_type

    if expand_schema:
        self._expand_schema([sample])

    self._validate_sample(sample)

    d = sample.to_mongo_dict()
    d.pop("_id", None)  # the insert below generates a fresh ID
    self._sample_collection.insert_one(d)

    if not sample._in_db:
        doc = self._sample_doc_cls.from_dict(d, extended=False)
        sample._set_backing_doc(doc, dataset=self)

    if self.media_type == fom.VIDEO:
        sample.frames._serve(sample)
        sample.frames._save(insert=True)

    return str(d["_id"])
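# Minimal end-to-end sketch, assuming fiftyone is importable as `fo` and
# the image path exists; all values are illustrative:
#
#   import fiftyone as fo
#
#   dataset = fo.Dataset("demo")
#   sample = fo.Sample(filepath="/path/to/image.jpg")
#   sample_id = dataset.add_sample(sample)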
def add_samples(self, samples, expand_schema=True, num_samples=None):
    """Adds the given samples to the dataset.

    Any sample instances that do not belong to a dataset are updated
    in-place to reflect membership in this dataset. Any sample instances
    that belong to other datasets are not modified.

    Args:
        samples: an iterable of :class:`fiftyone.core.sample.Sample`
            instances. For example, ``samples`` may be a :class:`Dataset`
            or a :class:`fiftyone.core.views.DatasetView`
        expand_schema (True): whether to dynamically add new sample fields
            encountered to the dataset schema. If False, an error is raised
            if a sample's schema is not a subset of the dataset schema
        num_samples (None): the number of samples in ``samples``. If not
            provided, this is computed via ``len(samples)``, if possible.
            This value is optional and is used only for optimization and
            progress tracking

    Returns:
        a list of IDs of the samples in the dataset

    Raises:
        ``mongoengine.errors.ValidationError``: if a field of a sample has
            a type that is inconsistent with the dataset schema, or if
            ``expand_schema == False`` and a new field is encountered
    """
    if num_samples is None:
        try:
            num_samples = len(samples)
        except TypeError:
            # `samples` may be a generator with no length; the progress
            # bar simply won't show a total
            pass

    # Videos are written one at a time because their frames must be saved
    # per sample
    batch_size = 128 if self.media_type == fom.IMAGE else 1

    sample_ids = []
    with fou.ProgressBar(total=num_samples) as pb:
        for batch in fou.iter_batches(samples, batch_size):
            sample_ids.extend(self._add_samples_batch(batch, expand_schema))
            pb.update(count=len(batch))

    return sample_ids
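# When `samples` is a generator, len() is unavailable, so passing
# num_samples lets the progress bar show a total. Illustrative sketch:
#
#   paths = ["/path/a.jpg", "/path/b.jpg"]
#   ids = dataset.add_samples(
#       (fo.Sample(filepath=p) for p in paths), num_samples=len(paths)
#   )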
def merge_samples(
    self,
    samples,
    key_field="filepath",
    key_fcn=None,
    omit_none_fields=True,
    skip_existing=False,
    insert_new=True,
    omit_default_fields=False,
    overwrite=True,
):
    """Merges the given samples into this dataset.

    By default, samples with the same absolute ``filepath`` are merged.
    You can customize this behavior via the ``key_field`` and ``key_fcn``
    parameters. For example, you could set
    ``key_fcn = lambda sample: os.path.basename(sample.filepath)`` to merge
    samples with the same base filename.

    Args:
        samples: an iterable of :class:`fiftyone.core.sample.Sample`
            instances. For example, ``samples`` may be a :class:`Dataset`
            or a :class:`fiftyone.core.views.DatasetView`
        key_field ("filepath"): the sample field to use to decide whether
            to join with an existing sample
        key_fcn (None): a function that accepts a
            :class:`fiftyone.core.sample.Sample` instance and computes a
            key to decide if two samples should be merged. If a ``key_fcn``
            is provided, ``key_field`` is ignored
        omit_none_fields (True): whether to omit ``None``-valued fields of
            the provided samples when merging their fields
        skip_existing (False): whether to skip existing samples (True) or
            merge them (False)
        insert_new (True): whether to insert new samples (True) or skip
            them (False)
        omit_default_fields (False): whether to omit default sample fields
            when merging. If ``True``, ``insert_new`` must be ``False``
        overwrite (True): whether to overwrite (True) or skip (False)
            existing sample fields
    """
    if (
        isinstance(samples, foc.SampleCollection)
        and key_fcn is None
        and overwrite
    ):
        # Whole collections with field-based keys can be merged entirely
        # within the database
        self._merge_samples(
            samples,
            key_field=key_field,
            omit_none_fields=omit_none_fields,
            skip_existing=skip_existing,
            insert_new=insert_new,
            omit_default_fields=omit_default_fields,
        )
        return

    if key_fcn is None:
        key_fcn = lambda sample: sample[key_field]

    if omit_default_fields:
        if insert_new:
            raise ValueError(
                "Cannot omit default fields when `insert_new=True`"
            )

        omit_fields = fos.get_default_sample_fields()
    else:
        omit_fields = None

    # Index the existing samples by merge key
    id_map = {}
    logger.info("Indexing dataset...")
    with fou.ProgressBar() as pb:
        for sample in pb(self):
            id_map[key_fcn(sample)] = sample.id

    logger.info("Merging samples...")
    with fou.ProgressBar() as pb:
        for sample in pb(samples):
            key = key_fcn(sample)
            if key in id_map:
                if not skip_existing:
                    existing_sample = self[id_map[key]]
                    existing_sample.merge(
                        sample,
                        omit_fields=omit_fields,
                        omit_none_fields=omit_none_fields,
                        overwrite=overwrite,
                    )
                    existing_sample.save()
            elif insert_new:
                self.add_sample(sample)
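# Sketch of the key_fcn hook described in the docstring: merging on base
# filename instead of absolute path. `other_dataset` is an assumed second
# dataset:
#
#   import os
#
#   dataset.merge_samples(
#       other_dataset,
#       key_fcn=lambda sample: os.path.basename(sample.filepath),
#   )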
def _merge_samples(
    self,
    sample_collection,
    key_field="filepath",
    omit_none_fields=True,
    skip_existing=False,
    insert_new=True,
    omit_default_fields=False,
):
    """Merges the given sample collection into this dataset.

    By default, samples with the same absolute ``filepath`` are merged.
    You can customize this behavior via the ``key_field`` parameter.

    Args:
        sample_collection: a
            :class:`fiftyone.core.collections.SampleCollection`
        key_field ("filepath"): the sample field to use to decide whether
            to join with an existing sample
        omit_none_fields (True): whether to omit ``None``-valued fields of
            the provided samples when merging their fields
        skip_existing (False): whether to skip existing samples (True) or
            merge them (False)
        insert_new (True): whether to insert new samples (True) or skip
            them (False)
        omit_default_fields (False): whether to omit default sample fields
            when merging. If ``True``, ``insert_new`` must be ``False``
    """
    if self.media_type == fom.VIDEO:
        raise ValueError("Merging video collections is not yet supported")

    if omit_default_fields and insert_new:
        raise ValueError("Cannot omit default fields when `insert_new=True`")

    if key_field == "id":
        key_field = "_id"

    # Translate the merge policy into $merge options
    when_matched = "keepExisting" if skip_existing else "merge"
    when_not_matched = "insert" if insert_new else "discard"

    # $merge requires a unique index on the join field in both collections
    self.create_index(key_field, unique=True)
    sample_collection.create_index(key_field, unique=True)

    # Ensure this dataset's schema can accommodate the incoming fields
    schema = sample_collection.get_field_schema()
    self._sample_doc_cls.merge_field_schema(schema)

    if omit_default_fields:
        omit_fields = list(
            self.get_default_sample_fields(include_private=True)
        )
    else:
        omit_fields = ["_id"]

    try:
        omit_fields.remove(key_field)
    except ValueError:
        pass

    pipeline = []

    if omit_fields:
        pipeline.append({"$unset": omit_fields})

    if omit_none_fields:
        # Drop None-valued fields so they don't clobber existing values
        pipeline.append(
            {
                "$replaceWith": {
                    "$arrayToObject": {
                        "$filter": {
                            "input": {"$objectToArray": "$$ROOT"},
                            "as": "item",
                            "cond": {"$ne": ["$$item.v", None]},
                        }
                    }
                }
            }
        )

    # Write the transformed documents into this dataset's sample collection
    pipeline.append(
        {
            "$merge": {
                "into": self._sample_collection_name,
                "on": key_field,
                "whenMatched": when_matched,
                "whenNotMatched": when_not_matched,
            }
        }
    )

    sample_collection._aggregate(pipeline=pipeline, attach_frames=False)

    fos.Sample._reload_docs(self._sample_collection_name)
pipeline.append({'$merge': {'into': self._sample_collection_name, 'on': key_field, 'whenMatched': when_matched, 'whenNotMatched': when_not_matched}})
sample_collection._aggregate(pipeline=pipeline, attach_frames=False)
fos.Sample._reload_docs(self._sample_collection_name) | def _merge_samples(self, sample_collection, key_field='filepath', omit_none_fields=True, skip_existing=False, insert_new=True, omit_default_fields=False):
'Merges the given sample collection into this dataset.\n\n By default, samples with the same absolute ``filepath`` are merged.\n You can customize this behavior via the ``key_field`` parameter.\n\n Args:\n sample_collection: a\n :class:`fiftyone.core.collections.SampleCollection`\n key_field ("filepath"): the sample field to use to decide whether\n to join with an existing sample\n omit_none_fields (True): whether to omit ``None``-valued fields of\n the provided samples when merging their fields\n skip_existing (False): whether to skip existing samples (True) or\n merge them (False)\n insert_new (True): whether to insert new samples (True) or skip\n them (False)\n omit_default_fields (False): whether to omit default sample fields\n when merging. If ``True``, ``insert_new`` must be ``False``\n '
if (self.media_type == fom.VIDEO):
raise ValueError('Merging video collections is not yet supported')
if (omit_default_fields and insert_new):
raise ValueError('Cannot omit default fields when `insert_new=True`')
if (key_field == 'id'):
key_field = '_id'
if skip_existing:
when_matched = 'keepExisting'
else:
when_matched = 'merge'
if insert_new:
when_not_matched = 'insert'
else:
when_not_matched = 'discard'
self.create_index(key_field, unique=True)
sample_collection.create_index(key_field, unique=True)
schema = sample_collection.get_field_schema()
self._sample_doc_cls.merge_field_schema(schema)
if omit_default_fields:
omit_fields = list(self.get_default_sample_fields(include_private=True))
else:
omit_fields = ['_id']
try:
omit_fields.remove(key_field)
except ValueError:
pass
pipeline = []
if omit_fields:
pipeline.append({'$unset': omit_fields})
if omit_none_fields:
pipeline.append({'$replaceWith': {'$arrayToObject': {'$filter': {'input': {'$objectToArray': '$$ROOT'}, 'as': 'item', 'cond': {'$ne': ['$$item.v', None]}}}}})
pipeline.append({'$merge': {'into': self._sample_collection_name, 'on': key_field, 'whenMatched': when_matched, 'whenNotMatched': when_not_matched}})
sample_collection._aggregate(pipeline=pipeline, attach_frames=False)
fos.Sample._reload_docs(self._sample_collection_name)<|docstring|>Merges the given sample collection into this dataset.
By default, samples with the same absolute ``filepath`` are merged.
You can customize this behavior via the ``key_field`` parameter.
Args:
sample_collection: a
:class:`fiftyone.core.collections.SampleCollection`
key_field ("filepath"): the sample field to use to decide whether
to join with an existing sample
omit_none_fields (True): whether to omit ``None``-valued fields of
the provided samples when merging their fields
skip_existing (False): whether to skip existing samples (True) or
merge them (False)
insert_new (True): whether to insert new samples (True) or skip
them (False)
omit_default_fields (False): whether to omit default sample fields
when merging. If ``True``, ``insert_new`` must be ``False``<|endoftext|> |
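For reference, with the default arguments ``_merge_samples`` submits an aggregation pipeline of roughly the following shape, built directly from the code above; the destination collection name is illustrative:

pipeline = [
    # Never overwrite the destination's document IDs
    {"$unset": ["_id"]},
    # Drop None-valued fields from each incoming document (omit_none_fields=True)
    {"$replaceWith": {"$arrayToObject": {"$filter": {
        "input": {"$objectToArray": "$$ROOT"},
        "as": "item",
        "cond": {"$ne": ["$$item.v", None]},
    }}}},
    # Upsert into the destination collection, joining on the key field
    {"$merge": {
        "into": "samples.2021.01.01.00.00.00",  # illustrative collection name
        "on": "filepath",
        "whenMatched": "merge",       # 'keepExisting' when skip_existing=True
        "whenNotMatched": "insert",   # 'discard' when insert_new=False
    }},
]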
9a728886d116ec3503f415c0f77f5666c9677626e6eef6bb1d95faededc2a174 | def remove_sample(self, sample_or_id):
'Removes the given sample from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n sample_or_id: the sample to remove. Can be any of the following:\n\n - a sample ID\n - a :class:`fiftyone.core.sample.Sample`\n - a :class:`fiftyone.core.sample.SampleView`\n '
if isinstance(sample_or_id, (fos.Sample, fos.SampleView)):
sample_id = sample_or_id.id
else:
sample_id = sample_or_id
self._sample_collection.delete_one({'_id': ObjectId(sample_id)})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=[sample_id])
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=[sample_id]) | Removes the given sample from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.
Args:
sample_or_id: the sample to remove. Can be any of the following:
- a sample ID
- a :class:`fiftyone.core.sample.Sample`
- a :class:`fiftyone.core.sample.SampleView` | fiftyone/core/dataset.py | remove_sample | dadounhind/fiftyone | 1 | python | def remove_sample(self, sample_or_id):
'Removes the given sample from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n sample_or_id: the sample to remove. Can be any of the following:\n\n - a sample ID\n - a :class:`fiftyone.core.sample.Sample`\n - a :class:`fiftyone.core.sample.SampleView`\n '
if isinstance(sample_or_id, (fos.Sample, fos.SampleView)):
sample_id = sample_or_id.id
else:
sample_id = sample_or_id
self._sample_collection.delete_one({'_id': ObjectId(sample_id)})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=[sample_id])
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=[sample_id]) | def remove_sample(self, sample_or_id):
'Removes the given sample from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n sample_or_id: the sample to remove. Can be any of the following:\n\n - a sample ID\n - a :class:`fiftyone.core.sample.Sample`\n - a :class:`fiftyone.core.sample.SampleView`\n '
if isinstance(sample_or_id, (fos.Sample, fos.SampleView)):
sample_id = sample_or_id.id
else:
sample_id = sample_or_id
self._sample_collection.delete_one({'_id': ObjectId(sample_id)})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=[sample_id])
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=[sample_id])<|docstring|>Removes the given sample from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.
Args:
sample_or_id: the sample to remove. Can be any of the following:
- a sample ID
- a :class:`fiftyone.core.sample.Sample`
- a :class:`fiftyone.core.sample.SampleView`<|endoftext|> |
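A usage sketch; the image path is hypothetical:

import fiftyone as fo

dataset = fo.Dataset()
sample = fo.Sample(filepath="/data/images/0001.jpg")
dataset.add_sample(sample)

dataset.remove_sample(sample)  # a Sample, SampleView, or ID all work
assert not sample.in_dataset   # the in-memory object reflects the removal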
15267624e969f4b75283d9dc6c28b0a54f3c930de1f7b0f132da5bd40ab91847 | def remove_samples(self, samples_or_ids):
'Removes the given samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n samples_or_ids: the samples to remove. Can be any of the following:\n\n - a sample ID\n - an iterable of sample IDs\n - a :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView`\n - a :class:`fiftyone.core.collections.SampleCollection`\n - an iterable of :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView` instances\n '
sample_ids = _get_sample_ids(samples_or_ids)
self._sample_collection.delete_many({'_id': {'$in': [ObjectId(_id) for _id in sample_ids]}})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=sample_ids)
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=sample_ids) | Removes the given samples from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.
Args:
samples_or_ids: the samples to remove. Can be any of the following:
- a sample ID
- an iterable of sample IDs
- a :class:`fiftyone.core.sample.Sample` or
:class:`fiftyone.core.sample.SampleView`
- a :class:`fiftyone.core.collections.SampleCollection`
- an iterable of :class:`fiftyone.core.sample.Sample` or
:class:`fiftyone.core.sample.SampleView` instances | fiftyone/core/dataset.py | remove_samples | dadounhind/fiftyone | 1 | python | def remove_samples(self, samples_or_ids):
'Removes the given samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n samples_or_ids: the samples to remove. Can be any of the following:\n\n - a sample ID\n - an iterable of sample IDs\n - a :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView`\n - a :class:`fiftyone.core.collections.SampleCollection`\n - an iterable of :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView` instances\n '
sample_ids = _get_sample_ids(samples_or_ids)
self._sample_collection.delete_many({'_id': {'$in': [ObjectId(_id) for _id in sample_ids]}})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=sample_ids)
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=sample_ids) | def remove_samples(self, samples_or_ids):
'Removes the given samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n\n Args:\n samples_or_ids: the samples to remove. Can be any of the following:\n\n - a sample ID\n - an iterable of sample IDs\n - a :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView`\n - a :class:`fiftyone.core.collections.SampleCollection`\n - an iterable of :class:`fiftyone.core.sample.Sample` or\n :class:`fiftyone.core.sample.SampleView` instances\n '
sample_ids = _get_sample_ids(samples_or_ids)
self._sample_collection.delete_many({'_id': {'$in': [ObjectId(_id) for _id in sample_ids]}})
fos.Sample._reset_docs(self._sample_collection_name, doc_ids=sample_ids)
if (self.media_type == fom.VIDEO):
fofr.Frame._reset_docs(self._frame_collection_name, sample_ids=sample_ids)<|docstring|>Removes the given samples from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.
Args:
samples_or_ids: the samples to remove. Can be any of the following:
- a sample ID
- an iterable of sample IDs
- a :class:`fiftyone.core.sample.Sample` or
:class:`fiftyone.core.sample.SampleView`
- a :class:`fiftyone.core.collections.SampleCollection`
- an iterable of :class:`fiftyone.core.sample.Sample` or
:class:`fiftyone.core.sample.SampleView` instances<|endoftext|> |
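A sketch that removes a whole view's worth of samples at once; the tag is hypothetical:

# Remove every sample tagged "duplicate"; any iterable of samples or IDs works
duplicates = dataset.match_tags("duplicate")
dataset.remove_samples(duplicates)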
112292f735154f8ca2ad165dea160e74ef31b8cf041b5550d599f9e6f7e1eb54 | def save(self):
'Saves the dataset to the database.\n\n This only needs to be called when dataset-level information such as its\n :meth:`Dataset.info` is modified.\n '
self._save() | Saves the dataset to the database.
This only needs to be called when dataset-level information such as its
:meth:`Dataset.info` is modified. | fiftyone/core/dataset.py | save | dadounhind/fiftyone | 1 | python | def save(self):
'Saves the dataset to the database.\n\n This only needs to be called when dataset-level information such as its\n :meth:`Dataset.info` is modified.\n '
self._save() | def save(self):
'Saves the dataset to the database.\n\n This only needs to be called when dataset-level information such as its\n :meth:`Dataset.info` is modified.\n '
self._save()<|docstring|>Saves the dataset to the database.
This only needs to be called when dataset-level information such as its
:meth:`Dataset.info` is modified.<|endoftext|> |
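A sketch of when ``save()`` is required; the ``info`` keys are hypothetical:

dataset.info["author"] = "alice"
dataset.info["version"] = "0.1"
dataset.save()  # persists dataset-level changes such as ``info``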
4611df2bc861dc6f769ab738b7eadce1858d28c428db04729ff2f87a10bdd685 | def clone(self, name=None):
'Creates a clone of the dataset containing deep copies of all samples\n and dataset-level information in this dataset.\n\n Args:\n name (None): a name for the cloned dataset. By default,\n :func:`get_default_dataset_name` is used\n\n Returns:\n the new :class:`Dataset`\n '
return self._clone(name=name) | Creates a clone of the dataset containing deep copies of all samples
and dataset-level information in this dataset.
Args:
name (None): a name for the cloned dataset. By default,
:func:`get_default_dataset_name` is used
Returns:
the new :class:`Dataset` | fiftyone/core/dataset.py | clone | dadounhind/fiftyone | 1 | python | def clone(self, name=None):
'Creates a clone of the dataset containing deep copies of all samples\n and dataset-level information in this dataset.\n\n Args:\n name (None): a name for the cloned dataset. By default,\n :func:`get_default_dataset_name` is used\n\n Returns:\n the new :class:`Dataset`\n '
return self._clone(name=name) | def clone(self, name=None):
'Creates a clone of the dataset containing deep copies of all samples\n and dataset-level information in this dataset.\n\n Args:\n name (None): a name for the cloned dataset. By default,\n :func:`get_default_dataset_name` is used\n\n Returns:\n the new :class:`Dataset`\n '
return self._clone(name=name)<|docstring|>Creates a clone of the dataset containing deep copies of all samples
and dataset-level information in this dataset.
Args:
name (None): a name for the cloned dataset. By default,
:func:`get_default_dataset_name` is used
Returns:
the new :class:`Dataset`<|endoftext|> |
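Usage sketch; the clone name is hypothetical:

snapshot = dataset.clone(name="my-dataset-snapshot")
assert len(snapshot) == len(dataset)  # deep copies; edits to one do not affect the other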
7837480cb2b5e3ac7de8c290c46e6d7989b779fed3166b3a750dd97651313b56 | def clear(self):
'Removes all samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self._sample_doc_cls.drop_collection()
fos.Sample._reset_docs(self._sample_collection_name)
self._frame_doc_cls.drop_collection()
fofr.Frame._reset_docs(self._frame_collection_name) | Removes all samples from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``. | fiftyone/core/dataset.py | clear | dadounhind/fiftyone | 1 | python | def clear(self):
'Removes all samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self._sample_doc_cls.drop_collection()
fos.Sample._reset_docs(self._sample_collection_name)
self._frame_doc_cls.drop_collection()
fofr.Frame._reset_docs(self._frame_collection_name) | def clear(self):
'Removes all samples from the dataset.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self._sample_doc_cls.drop_collection()
fos.Sample._reset_docs(self._sample_collection_name)
self._frame_doc_cls.drop_collection()
fofr.Frame._reset_docs(self._frame_collection_name)<|docstring|>Removes all samples from the dataset.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.<|endoftext|> |
aec894215780fac2733b17142b9f81bc5da8a4233ed68c65b0c8044052b4eec7 | def delete(self):
'Deletes the dataset.\n\n Once deleted, only the ``name`` and ``deleted`` attributes of a dataset\n may be accessed.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self.clear()
_delete_dataset_doc(self._doc)
self._deleted = True | Deletes the dataset.
Once deleted, only the ``name`` and ``deleted`` attributes of a dataset
may be accessed.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``. | fiftyone/core/dataset.py | delete | dadounhind/fiftyone | 1 | python | def delete(self):
'Deletes the dataset.\n\n Once deleted, only the ``name`` and ``deleted`` attributes of a dataset\n may be accessed.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self.clear()
_delete_dataset_doc(self._doc)
self._deleted = True | def delete(self):
'Deletes the dataset.\n\n Once deleted, only the ``name`` and ``deleted`` attributes of a dataset\n may be accessed.\n\n If a reference to a sample exists in memory, the sample object will be\n updated such that ``sample.in_dataset == False``.\n '
self.clear()
_delete_dataset_doc(self._doc)
self._deleted = True<|docstring|>Deletes the dataset.
Once deleted, only the ``name`` and ``deleted`` attributes of a dataset
may be accessed.
If a reference to a sample exists in memory, the sample object will be
updated such that ``sample.in_dataset == False``.<|endoftext|> |
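A sketch contrasting ``clear()`` and ``delete()``:

dataset.clear()         # removes all samples but keeps the (now empty) dataset
dataset.delete()        # removes the dataset itself
print(dataset.name)     # per the docstring, still accessible after deletion
print(dataset.deleted)  # True; most other operations now fail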
85ca42c10306346a9e9a5eb90d63c62be842f669fb9c183ea113d34e8a6b5b32 | def add_dir(self, dataset_dir, dataset_type, label_field='ground_truth', tags=None, expand_schema=True, add_info=True, **kwargs):
'Adds the contents of the given directory to the dataset.\n\n See :doc:`this guide </user_guide/dataset_creation/datasets>` for\n descriptions of available dataset types.\n\n Args:\n dataset_dir: the dataset directory\n dataset_type (None): the\n :class:`fiftyone.types.dataset_types.Dataset` type of the\n dataset in ``dataset_dir``\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n **kwargs: optional keyword arguments to pass to the constructor of\n the :class:`fiftyone.utils.data.importers.DatasetImporter` for\n the specified ``dataset_type`` via the syntax\n ``DatasetImporter(dataset_dir, **kwargs)``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if inspect.isclass(dataset_type):
dataset_type = dataset_type()
if (isinstance(dataset_type, (fot.TFImageClassificationDataset, fot.TFObjectDetectionDataset)) and ('images_dir' not in kwargs)):
images_dir = get_default_dataset_dir(self.name)
logger.info("Unpacking images to '%s'", images_dir)
kwargs['images_dir'] = images_dir
dataset_importer_cls = dataset_type.get_dataset_importer_cls()
try:
dataset_importer = dataset_importer_cls(dataset_dir, **kwargs)
except Exception as e:
importer_name = dataset_importer_cls.__name__
raise ValueError(('Failed to construct importer using syntax %s(dataset_dir, **kwargs); you may need to supply mandatory arguments to the constructor via `kwargs`. Please consult the documentation of `%s` to learn more' % (importer_name, etau.get_class_name(dataset_importer_cls)))) from e
return self.add_importer(dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info) | Adds the contents of the given directory to the dataset.
See :doc:`this guide </user_guide/dataset_creation/datasets>` for
descriptions of available dataset types.
Args:
dataset_dir: the dataset directory
dataset_type: the
:class:`fiftyone.types.dataset_types.Dataset` type of the
dataset in ``dataset_dir``
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
add_info (True): whether to add dataset info from the importer (if
any) to the dataset's ``info``
**kwargs: optional keyword arguments to pass to the constructor of
the :class:`fiftyone.utils.data.importers.DatasetImporter` for
the specified ``dataset_type`` via the syntax
``DatasetImporter(dataset_dir, **kwargs)``
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_dir | dadounhind/fiftyone | 1 | python | def add_dir(self, dataset_dir, dataset_type, label_field='ground_truth', tags=None, expand_schema=True, add_info=True, **kwargs):
'Adds the contents of the given directory to the dataset.\n\n See :doc:`this guide </user_guide/dataset_creation/datasets>` for\n descriptions of available dataset types.\n\n Args:\n dataset_dir: the dataset directory\n dataset_type (None): the\n :class:`fiftyone.types.dataset_types.Dataset` type of the\n dataset in ``dataset_dir``\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n **kwargs: optional keyword arguments to pass to the constructor of\n the :class:`fiftyone.utils.data.importers.DatasetImporter` for\n the specified ``dataset_type`` via the syntax\n ``DatasetImporter(dataset_dir, **kwargs)``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if inspect.isclass(dataset_type):
dataset_type = dataset_type()
if (isinstance(dataset_type, (fot.TFImageClassificationDataset, fot.TFObjectDetectionDataset)) and ('images_dir' not in kwargs)):
images_dir = get_default_dataset_dir(self.name)
logger.info("Unpacking images to '%s'", images_dir)
kwargs['images_dir'] = images_dir
dataset_importer_cls = dataset_type.get_dataset_importer_cls()
try:
dataset_importer = dataset_importer_cls(dataset_dir, **kwargs)
except Exception as e:
importer_name = dataset_importer_cls.__name__
raise ValueError(('Failed to construct importer using syntax %s(dataset_dir, **kwargs); you may need to supply mandatory arguments to the constructor via `kwargs`. Please consult the documentation of `%s` to learn more' % (importer_name, etau.get_class_name(dataset_importer_cls)))) from e
return self.add_importer(dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info) | def add_dir(self, dataset_dir, dataset_type, label_field='ground_truth', tags=None, expand_schema=True, add_info=True, **kwargs):
'Adds the contents of the given directory to the dataset.\n\n See :doc:`this guide </user_guide/dataset_creation/datasets>` for\n descriptions of available dataset types.\n\n Args:\n dataset_dir: the dataset directory\n dataset_type (None): the\n :class:`fiftyone.types.dataset_types.Dataset` type of the\n dataset in ``dataset_dir``\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n **kwargs: optional keyword arguments to pass to the constructor of\n the :class:`fiftyone.utils.data.importers.DatasetImporter` for\n the specified ``dataset_type`` via the syntax\n ``DatasetImporter(dataset_dir, **kwargs)``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if inspect.isclass(dataset_type):
dataset_type = dataset_type()
if (isinstance(dataset_type, (fot.TFImageClassificationDataset, fot.TFObjectDetectionDataset)) and ('images_dir' not in kwargs)):
images_dir = get_default_dataset_dir(self.name)
logger.info("Unpacking images to '%s'", images_dir)
kwargs['images_dir'] = images_dir
dataset_importer_cls = dataset_type.get_dataset_importer_cls()
try:
dataset_importer = dataset_importer_cls(dataset_dir, **kwargs)
except Exception as e:
importer_name = dataset_importer_cls.__name__
raise ValueError(('Failed to construct importer using syntax %s(dataset_dir, **kwargs); you may need to supply mandatory arguments to the constructor via `kwargs`. Please consult the documentation of `%s` to learn more' % (importer_name, etau.get_class_name(dataset_importer_cls)))) from e
return self.add_importer(dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info)<|docstring|>Adds the contents of the given directory to the dataset.
See :doc:`this guide </user_guide/dataset_creation/datasets>` for
descriptions of available dataset types.
Args:
dataset_dir: the dataset directory
dataset_type: the
:class:`fiftyone.types.dataset_types.Dataset` type of the
dataset in ``dataset_dir``
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
add_info (True): whether to add dataset info from the importer (if
any) to the dataset's ``info``
**kwargs: optional keyword arguments to pass to the constructor of
the :class:`fiftyone.utils.data.importers.DatasetImporter` for
the specified ``dataset_type`` via the syntax
``DatasetImporter(dataset_dir, **kwargs)``
Returns:
a list of IDs of the samples that were added to the dataset<|endoftext|> |
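A usage sketch; the directory path is hypothetical, and ``ImageClassificationDirectoryTree`` is one of the built-in dataset types:

import fiftyone as fo

dataset = fo.Dataset("from-disk")
dataset.add_dir(
    "/path/to/classification-tree",
    fo.types.ImageClassificationDirectoryTree,
    label_field="ground_truth",
    tags=["train"],
)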
e886a43e5abc31a53ccf92b1d54187ed4a712bea851f1adf40e04f3d15a69140 | def add_importer(self, dataset_importer, label_field='ground_truth', tags=None, expand_schema=True, add_info=True):
'Adds the samples from the given\n :class:`fiftyone.utils.data.importers.DatasetImporter` to the dataset.\n\n See :ref:`this guide <custom-dataset-importer>` for more details about\n importing datasets in custom formats by defining your own\n :class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`.\n\n Args:\n dataset_importer: a\n :class:`fiftyone.utils.data.importers.DatasetImporter`\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.import_samples(self, dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info) | Adds the samples from the given
:class:`fiftyone.utils.data.importers.DatasetImporter` to the dataset.
See :ref:`this guide <custom-dataset-importer>` for more details about
importing datasets in custom formats by defining your own
:class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`.
Args:
dataset_importer: a
:class:`fiftyone.utils.data.importers.DatasetImporter`
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
add_info (True): whether to add dataset info from the importer (if
any) to the dataset's ``info``
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_importer | dadounhind/fiftyone | 1 | python | def add_importer(self, dataset_importer, label_field='ground_truth', tags=None, expand_schema=True, add_info=True):
'Adds the samples from the given\n :class:`fiftyone.utils.data.importers.DatasetImporter` to the dataset.\n\n See :ref:`this guide <custom-dataset-importer>` for more details about\n importing datasets in custom formats by defining your own\n :class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`.\n\n Args:\n dataset_importer: a\n :class:`fiftyone.utils.data.importers.DatasetImporter`\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.import_samples(self, dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info) | def add_importer(self, dataset_importer, label_field='ground_truth', tags=None, expand_schema=True, add_info=True):
'Adds the samples from the given\n :class:`fiftyone.utils.data.importers.DatasetImporter` to the dataset.\n\n See :ref:`this guide <custom-dataset-importer>` for more details about\n importing datasets in custom formats by defining your own\n :class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`.\n\n Args:\n dataset_importer: a\n :class:`fiftyone.utils.data.importers.DatasetImporter`\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n add_info (True): whether to add dataset info from the importer (if\n any) to the dataset\'s ``info``\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.import_samples(self, dataset_importer, label_field=label_field, tags=tags, expand_schema=expand_schema, add_info=add_info)<|docstring|>Adds the samples from the given
:class:`fiftyone.utils.data.importers.DatasetImporter` to the dataset.
See :ref:`this guide <custom-dataset-importer>` for more details about
importing datasets in custom formats by defining your own
:class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`.
Args:
dataset_importer: a
:class:`fiftyone.utils.data.importers.DatasetImporter`
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
add_info (True): whether to add dataset info from the importer (if
any) to the dataset's ``info``
Returns:
a list of IDs of the samples that were added to the dataset<|endoftext|> |
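A sketch using a built-in importer; the directory path is hypothetical and the ``ImageDirectoryImporter`` constructor arguments are assumptions:

import fiftyone.utils.data as foud

importer = foud.ImageDirectoryImporter("/path/to/images", recursive=True)
sample_ids = dataset.add_importer(importer, tags=["unlabeled"])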
ab0b0db16143dbf022ab1b2a7690176cdb96519170f55db1cbd2497ccd655d63 | def add_images(self, samples, sample_parser=None, tags=None):
'Adds the given images to the dataset.\n\n This operation does not read the images.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding images to a dataset by defining your own\n :class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of image paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if (sample_parser is None):
sample_parser = foud.ImageSampleParser()
return foud.add_images(self, samples, sample_parser, tags=tags) | Adds the given images to the dataset.
This operation does not read the images.
See :ref:`this guide <custom-sample-parser>` for more details about
adding images to a dataset by defining your own
:class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.
Args:
samples: an iterable of samples. If no ``sample_parser`` is
provided, this must be an iterable of image paths. If a
``sample_parser`` is provided, this can be an arbitrary
iterable whose elements can be parsed by the sample parser
sample_parser (None): a
:class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_images | dadounhind/fiftyone | 1 | python | def add_images(self, samples, sample_parser=None, tags=None):
'Adds the given images to the dataset.\n\n This operation does not read the images.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding images to a dataset by defining your own\n :class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of image paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if (sample_parser is None):
sample_parser = foud.ImageSampleParser()
return foud.add_images(self, samples, sample_parser, tags=tags) | def add_images(self, samples, sample_parser=None, tags=None):
'Adds the given images to the dataset.\n\n This operation does not read the images.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding images to a dataset by defining your own\n :class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of image paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if (sample_parser is None):
sample_parser = foud.ImageSampleParser()
return foud.add_images(self, samples, sample_parser, tags=tags)<|docstring|>Adds the given images to the dataset.
This operation does not read the images.
See :ref:`this guide <custom-sample-parser>` for more details about
adding images to a dataset by defining your own
:class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.
Args:
samples: an iterable of samples. If no ``sample_parser`` is
provided, this must be an iterable of image paths. If a
``sample_parser`` is provided, this can be an arbitrary
iterable whose elements can be parsed by the sample parser
sample_parser (None): a
:class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples that were added to the dataset<|endoftext|> |
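A usage sketch; the paths are hypothetical. With no ``sample_parser``, plain image paths are expected:

image_paths = ["/data/images/0001.jpg", "/data/images/0002.jpg"]
sample_ids = dataset.add_images(image_paths, tags=["raw"])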
4b9c72a9fa7a0a8c8c684f56524cf755b641abb7d0bd959eb3874cf6d9417aff | def add_labeled_images(self, samples, sample_parser, label_field='ground_truth', tags=None, expand_schema=True):
'Adds the given labeled images to the dataset.\n\n This operation will iterate over all provided samples, but the images\n will not be read (unless the sample parser requires it in order to\n compute image metadata).\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding labeled images to a dataset by defining your own\n :class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`\n instance to use to parse the samples\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.add_labeled_images(self, samples, sample_parser, label_field=label_field, tags=tags, expand_schema=expand_schema) | Adds the given labeled images to the dataset.
This operation will iterate over all provided samples, but the images
will not be read (unless the sample parser requires it in order to
compute image metadata).
See :ref:`this guide <custom-sample-parser>` for more details about
adding labeled images to a dataset by defining your own
:class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`
instance to use to parse the samples
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_labeled_images | dadounhind/fiftyone | 1 | python | def add_labeled_images(self, samples, sample_parser, label_field='ground_truth', tags=None, expand_schema=True):
'Adds the given labeled images to the dataset.\n\n This operation will iterate over all provided samples, but the images\n will not be read (unless the sample parser requires it in order to\n compute image metadata).\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding labeled images to a dataset by defining your own\n :class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`\n instance to use to parse the samples\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.add_labeled_images(self, samples, sample_parser, label_field=label_field, tags=tags, expand_schema=expand_schema) | def add_labeled_images(self, samples, sample_parser, label_field='ground_truth', tags=None, expand_schema=True):
'Adds the given labeled images to the dataset.\n\n This operation will iterate over all provided samples, but the images\n will not be read (unless the sample parser requires it in order to\n compute image metadata).\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding labeled images to a dataset by defining your own\n :class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`\n instance to use to parse the samples\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.add_labeled_images(self, samples, sample_parser, label_field=label_field, tags=tags, expand_schema=expand_schema)<|docstring|>Adds the given labeled images to the dataset.
This operation will iterate over all provided samples, but the images
will not be read (unless the sample parser requires it in order to
compute image metadata).
See :ref:`this guide <custom-sample-parser>` for more details about
adding labeled images to a dataset by defining your own
:class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`
instance to use to parse the samples
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
Returns:
a list of IDs of the samples that were added to the dataset<|endoftext|> |
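A sketch using FiftyOne's built-in classification parser; the paths and labels are hypothetical:

import fiftyone.utils.data as foud

# (image_path, target) tuples
samples = [("/data/images/0001.jpg", "cat"), ("/data/images/0002.jpg", "dog")]
sample_parser = foud.ImageClassificationSampleParser()
dataset.add_labeled_images(samples, sample_parser, label_field="ground_truth")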
2f2ad1e4faa7f67bf39f01a1a32e1d2104e6fa7cb40b2b6e9132b791e45bd122 | def add_images_dir(self, images_dir, tags=None, recursive=True):
'Adds the given directory of images to the dataset.\n\n See :class:`fiftyone.types.dataset_types.ImageDirectory` for format\n details. In particular, note that files with non-image MIME types are\n omitted.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = foud.parse_images_dir(images_dir, recursive=recursive)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags) | Adds the given directory of images to the dataset.
See :class:`fiftyone.types.dataset_types.ImageDirectory` for format
details. In particular, note that files with non-image MIME types are
omitted.
This operation does not read the images.
Args:
images_dir: a directory of images
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | add_images_dir | dadounhind/fiftyone | 1 | python | def add_images_dir(self, images_dir, tags=None, recursive=True):
'Adds the given directory of images to the dataset.\n\n See :class:`fiftyone.types.dataset_types.ImageDirectory` for format\n details. In particular, note that files with non-image MIME types are\n omitted.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = foud.parse_images_dir(images_dir, recursive=recursive)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags) | def add_images_dir(self, images_dir, tags=None, recursive=True):
'Adds the given directory of images to the dataset.\n\n See :class:`fiftyone.types.dataset_types.ImageDirectory` for format\n details. In particular, note that files with non-image MIME types are\n omitted.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = foud.parse_images_dir(images_dir, recursive=recursive)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags)<|docstring|>Adds the given directory of images to the dataset.
See :class:`fiftyone.types.dataset_types.ImageDirectory` for format
details. In particular, note that files with non-image MIME types are
omitted.
This operation does not read the images.
Args:
images_dir: a directory of images
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a list of IDs of the samples in the dataset<|endoftext|> |
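Usage sketch; the directory is hypothetical:

dataset.add_images_dir("/path/to/images", tags=["unprocessed"], recursive=True)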
fd325de05929f08e3e81812a86c26a38faff84f858cd0370af24aefabf537714 | def add_images_patt(self, images_patt, tags=None):
'Adds the given glob pattern of images to the dataset.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = etau.get_glob_matches(images_patt)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags) | Adds the given glob pattern of images to the dataset.
This operation does not read the images.
Args:
images_patt: a glob pattern of images like
``/path/to/images/*.jpg``
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | add_images_patt | dadounhind/fiftyone | 1 | python | def add_images_patt(self, images_patt, tags=None):
'Adds the given glob pattern of images to the dataset.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = etau.get_glob_matches(images_patt)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags) | def add_images_patt(self, images_patt, tags=None):
'Adds the given glob pattern of images to the dataset.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples in the dataset\n '
image_paths = etau.get_glob_matches(images_patt)
sample_parser = foud.ImageSampleParser()
return self.add_images(image_paths, sample_parser, tags=tags)<|docstring|>Adds the given glob pattern of images to the dataset.
This operation does not read the images.
Args:
images_patt: a glob pattern of images like
``/path/to/images/*.jpg``
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples in the dataset<|endoftext|> |
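Usage sketch; the pattern is hypothetical:

dataset.add_images_patt("/path/to/images/*.jpg", tags=["raw"])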
e1f4dfb923c65ed81d5f31b2a64ff973422623484c35f966caee964fe4e3f7a4 | def ingest_images(self, samples, sample_parser=None, tags=None, dataset_dir=None, image_format=None):
'Ingests the given iterable of images into the dataset.\n\n The images are read in-memory and written to ``dataset_dir``.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n ingesting images into a dataset by defining your own\n :class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of image paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n dataset_dir (None): the directory in which the images will be\n written. By default, :func:`get_default_dataset_dir` is used\n image_format (None): the image format to use to write the images to\n disk. By default, ``fiftyone.config.default_image_ext`` is used\n\n Returns:\n a list of IDs of the samples in the dataset\n '
if (sample_parser is None):
sample_parser = foud.ImageSampleParser()
if (dataset_dir is None):
dataset_dir = get_default_dataset_dir(self.name)
dataset_ingestor = foud.UnlabeledImageDatasetIngestor(dataset_dir, samples, sample_parser, image_format=image_format)
return self.add_importer(dataset_ingestor, tags=tags) | Ingests the given iterable of images into the dataset.
The images are read in-memory and written to ``dataset_dir``.
See :ref:`this guide <custom-sample-parser>` for more details about
ingesting images into a dataset by defining your own
:class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`.
Args:
samples: an iterable of samples. If no ``sample_parser`` is
provided, this must be an iterable of image paths. If a
``sample_parser`` is provided, this can be an arbitrary
iterable whose elements can be parsed by the sample parser
sample_parser (None): a
:class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
dataset_dir (None): the directory in which the images will be
written. By default, :func:`get_default_dataset_dir` is used
image_format (None): the image format to use to write the images to
disk. By default, ``fiftyone.config.default_image_ext`` is used
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | ingest_images | dadounhind/fiftyone | 1 | python
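Usage sketch (assumed, not part of the source record): ingesting a couple of local images with the default parser; the dataset name, image paths, and target directory below are placeholders.

import fiftyone as fo

dataset = fo.Dataset("ingest-images-demo")  # placeholder name
sample_ids = dataset.ingest_images(
    ["/path/to/img1.jpg", "/path/to/img2.jpg"],  # placeholder image paths
    tags=["ingested"],
    dataset_dir="/tmp/ingested-images",  # assumed writable directory
)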
59653a4df4203d395de075e5e48eb807dbbe6aebc96d2a1f055a37c7c451067e | def ingest_labeled_images(self, samples, sample_parser, label_field='ground_truth', tags=None, expand_schema=True, dataset_dir=None, skip_unlabeled=False, image_format=None):
'Ingests the given iterable of labeled image samples into the\n dataset.\n\n The images are read in-memory and written to ``dataset_dir``.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n ingesting labeled images into a dataset by defining your own\n :class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`\n instance to use to parse the samples\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if the sample\'s schema is not a subset of the dataset schema\n dataset_dir (None): the directory in which the images will be\n written. By default, :func:`get_default_dataset_dir` is used\n skip_unlabeled (False): whether to skip unlabeled images when\n importing\n image_format (None): the image format to use to write the images to\n disk. By default, ``fiftyone.config.default_image_ext`` is used\n\n Returns:\n a list of IDs of the samples in the dataset\n '
if (dataset_dir is None):
dataset_dir = get_default_dataset_dir(self.name)
dataset_ingestor = foud.LabeledImageDatasetIngestor(dataset_dir, samples, sample_parser, skip_unlabeled=skip_unlabeled, image_format=image_format)
return self.add_importer(dataset_ingestor, label_field=label_field, tags=tags, expand_schema=expand_schema) | Ingests the given iterable of labeled image samples into the
dataset.
The images are read in-memory and written to ``dataset_dir``.
See :ref:`this guide <custom-sample-parser>` for more details about
ingesting labeled images into a dataset by defining your own
:class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`
instance to use to parse the samples
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if the sample's schema is not a subset of the dataset schema
dataset_dir (None): the directory in which the images will be
written. By default, :func:`get_default_dataset_dir` is used
skip_unlabeled (False): whether to skip unlabeled images when
importing
image_format (None): the image format to use to write the images to
disk. By default, ``fiftyone.config.default_image_ext`` is used
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | ingest_labeled_images | dadounhind/fiftyone | 1 | python
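A sketch of ingest_labeled_images with the built-in classification parser, assuming (image_path, label) tuples; all paths and names are placeholders.

import fiftyone as fo
import fiftyone.utils.data as foud

samples = [("/path/to/img1.jpg", "cat"), ("/path/to/img2.jpg", "dog")]  # placeholder pairs
dataset = fo.Dataset("ingest-labeled-images-demo")
sample_ids = dataset.ingest_labeled_images(
    samples,
    foud.ImageClassificationSampleParser(),  # built-in parser for (path, label) samples
    label_field="ground_truth",
)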
d10e8aefa985e6e7fe04a905913204a3bfe4c88404d397751e408a9f28b3a265 | def add_videos(self, samples, sample_parser=None, tags=None):
'Adds the given videos to the dataset.\n\n This operation does not read the videos.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding videos to a dataset by defining your own\n :class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of video paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
if (sample_parser is None):
sample_parser = foud.VideoSampleParser()
return foud.add_videos(self, samples, sample_parser, tags=tags) | Adds the given videos to the dataset.
This operation does not read the videos.
See :ref:`this guide <custom-sample-parser>` for more details about
adding videos to a dataset by defining your own
:class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`.
Args:
samples: an iterable of samples. If no ``sample_parser`` is
provided, this must be an iterable of video paths. If a
``sample_parser`` is provided, this can be an arbitrary
iterable whose elements can be parsed by the sample parser
sample_parser (None): a
:class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_videos | dadounhind/fiftyone | 1 | python
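Minimal add_videos call; with no sample_parser the samples are plain video paths (placeholders below).

import fiftyone as fo

dataset = fo.Dataset("add-videos-demo")  # placeholder name
sample_ids = dataset.add_videos(
    ["/path/to/video1.mp4", "/path/to/video2.mp4"],  # placeholder video paths
    tags=["raw"],
)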
3424848ce5085d72836a08e3a1c6fec1764136c07a8ed7a11f14397b70c82abd | def add_labeled_videos(self, samples, sample_parser, label_field='ground_truth', tags=None, expand_schema=True):
'Adds the given labeled videos to the dataset.\n\n This operation will iterate over all provided samples, but the videos\n will not be read/decoded/etc.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n adding labeled videos to a dataset by defining your own\n :class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`\n instance to use to parse the samples\n label_field ("ground_truth"): the name (or root name) of the\n frame field(s) to use for the labels\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if a sample\'s schema is not a subset of the dataset schema\n\n Returns:\n a list of IDs of the samples that were added to the dataset\n '
return foud.add_labeled_videos(self, samples, sample_parser, label_field=label_field, tags=tags, expand_schema=expand_schema) | Adds the given labeled videos to the dataset.
This operation will iterate over all provided samples, but the videos
will not be read/decoded/etc.
See :ref:`this guide <custom-sample-parser>` for more details about
adding labeled videos to a dataset by defining your own
:class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`
instance to use to parse the samples
label_field ("ground_truth"): the name (or root name) of the
frame field(s) to use for the labels
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if a sample's schema is not a subset of the dataset schema
Returns:
a list of IDs of the samples that were added to the dataset | fiftyone/core/dataset.py | add_labeled_videos | dadounhind/fiftyone | 1 | python
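Sketch of add_labeled_videos; MyVideoLabelsParser is a hypothetical LabeledVideoSampleParser subclass standing in for whatever parser matches your label format.

import fiftyone as fo
from my_parsers import MyVideoLabelsParser  # hypothetical custom parser module

samples = [...]  # any iterable your parser understands
dataset = fo.Dataset("add-labeled-videos-demo")
sample_ids = dataset.add_labeled_videos(
    samples,
    MyVideoLabelsParser(),
    label_field="ground_truth",
)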
020be7f16d0dee02c2d88c50243c04a33d23fb62528af87e887f9d9483cc189e | def add_videos_dir(self, videos_dir, tags=None, recursive=True):
'Adds the given directory of videos to the dataset.\n\n See :class:`fiftyone.types.dataset_types.VideoDirectory` for format\n details. In particular, note that files with non-video MIME types are\n omitted.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_dir: a directory of videos\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a list of IDs of the samples in the dataset\n '
video_paths = foud.parse_videos_dir(videos_dir, recursive=recursive)
sample_parser = foud.VideoSampleParser()
return self.add_videos(video_paths, sample_parser, tags=tags) | Adds the given directory of videos to the dataset.
See :class:`fiftyone.types.dataset_types.VideoDirectory` for format
details. In particular, note that files with non-video MIME types are
omitted.
This operation does not read/decode the videos.
Args:
videos_dir: a directory of videos
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | add_videos_dir | dadounhind/fiftyone | 1 | python
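Sketch of add_videos_dir over a placeholder directory; recall that files with non-video MIME types are skipped.

import fiftyone as fo

dataset = fo.Dataset("videos-dir-demo")
sample_ids = dataset.add_videos_dir(
    "/path/to/videos",  # placeholder directory
    tags=["unprocessed"],
    recursive=True,  # also traverse subdirectories
)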
1b03c73505551ff74b90defb16636399248412ef90850dbea100b248af2e74ae | def add_videos_patt(self, videos_patt, tags=None):
'Adds the given glob pattern of videos to the dataset.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_patt: a glob pattern of videos like\n ``/path/to/videos/*.mp4``\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a list of IDs of the samples in the dataset\n '
video_paths = etau.get_glob_matches(videos_patt)
sample_parser = foud.VideoSampleParser()
return self.add_videos(video_paths, sample_parser, tags=tags) | Adds the given glob pattern of videos to the dataset.
This operation does not read/decode the videos.
Args:
videos_patt: a glob pattern of videos like
``/path/to/videos/*.mp4``
tags (None): an optional list of tags to attach to each sample
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | add_videos_patt | dadounhind/fiftyone | 1 | python
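Sketch of add_videos_patt with a placeholder glob pattern.

import fiftyone as fo

dataset = fo.Dataset("videos-patt-demo")
sample_ids = dataset.add_videos_patt("/path/to/videos/*.mp4")  # placeholder pattern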
f618bd17ed1f1eeb4273d1e0cfb7e230ecca50b1fc3e33afa76f298f20947d2e | def ingest_videos(self, samples, sample_parser=None, tags=None, dataset_dir=None):
'Ingests the given iterable of videos into the dataset.\n\n The videos are copied to ``dataset_dir``.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n ingesting videos into a dataset by defining your own\n :class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`.\n\n Args:\n samples: an iterable of samples. If no ``sample_parser`` is\n provided, this must be an iterable of video paths. If a\n ``sample_parser`` is provided, this can be an arbitrary\n iterable whose elements can be parsed by the sample parser\n sample_parser (None): a\n :class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n dataset_dir (None): the directory in which the videos will be\n written. By default, :func:`get_default_dataset_dir` is used\n\n Returns:\n a list of IDs of the samples in the dataset\n '
if (sample_parser is None):
sample_parser = foud.VideoSampleParser()
if (dataset_dir is None):
dataset_dir = get_default_dataset_dir(self.name)
dataset_ingestor = foud.UnlabeledVideoDatasetIngestor(dataset_dir, samples, sample_parser)
return self.add_importer(dataset_ingestor, tags=tags) | Ingests the given iterable of videos into the dataset.
The videos are copied to ``dataset_dir``.
See :ref:`this guide <custom-sample-parser>` for more details about
ingesting videos into a dataset by defining your own
:class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`.
Args:
samples: an iterable of samples. If no ``sample_parser`` is
provided, this must be an iterable of video paths. If a
``sample_parser`` is provided, this can be an arbitrary
iterable whose elements can be parsed by the sample parser
sample_parser (None): a
:class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
dataset_dir (None): the directory in which the videos will be
written. By default, :func:`get_default_dataset_dir` is used
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | ingest_videos | dadounhind/fiftyone | 1 | python
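Sketch of ingest_videos, which copies the media; the paths and target directory are placeholders.

import fiftyone as fo

dataset = fo.Dataset("ingest-videos-demo")
sample_ids = dataset.ingest_videos(
    ["/path/to/video1.mp4"],  # placeholder video paths
    dataset_dir="/tmp/ingested-videos",  # assumed writable directory
)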
48ac3001b73b5b333d1039db8781ec4c3fa1966d3d626cfe336c6c19bfa68fdd | def ingest_labeled_videos(self, samples, sample_parser, tags=None, expand_schema=True, dataset_dir=None, skip_unlabeled=False):
"Ingests the given iterable of labeled video samples into the\n dataset.\n\n The videos are copied to ``dataset_dir``.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n ingesting labeled videos into a dataset by defining your own\n :class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`\n instance to use to parse the samples\n tags (None): an optional list of tags to attach to each sample\n expand_schema (True): whether to dynamically add new sample fields\n encountered to the dataset schema. If False, an error is raised\n if the sample's schema is not a subset of the dataset schema\n dataset_dir (None): the directory in which the videos will be\n written. By default, :func:`get_default_dataset_dir` is used\n skip_unlabeled (False): whether to skip unlabeled videos when\n importing\n\n Returns:\n a list of IDs of the samples in the dataset\n "
if (dataset_dir is None):
dataset_dir = get_default_dataset_dir(self.name)
dataset_ingestor = foud.LabeledVideoDatasetIngestor(dataset_dir, samples, sample_parser, skip_unlabeled=skip_unlabeled)
return self.add_importer(dataset_ingestor, tags=tags, expand_schema=expand_schema) | Ingests the given iterable of labeled video samples into the
dataset.
The videos are copied to ``dataset_dir``.
See :ref:`this guide <custom-sample-parser>` for more details about
ingesting labeled videos into a dataset by defining your own
:class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`
instance to use to parse the samples
tags (None): an optional list of tags to attach to each sample
expand_schema (True): whether to dynamically add new sample fields
encountered to the dataset schema. If False, an error is raised
if the sample's schema is not a subset of the dataset schema
dataset_dir (None): the directory in which the videos will be
written. By default, :func:`get_default_dataset_dir` is used
skip_unlabeled (False): whether to skip unlabeled videos when
importing
Returns:
a list of IDs of the samples in the dataset | fiftyone/core/dataset.py | ingest_labeled_videos | dadounhind/fiftyone | 1 | python
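Sketch of ingest_labeled_videos; as above, MyVideoLabelsParser is a hypothetical LabeledVideoSampleParser subclass. Note that, unlike the image variant, this signature takes no label_field argument.

import fiftyone as fo
from my_parsers import MyVideoLabelsParser  # hypothetical custom parser module

samples = [...]  # any iterable your parser understands
dataset = fo.Dataset("ingest-labeled-videos-demo")
sample_ids = dataset.ingest_labeled_videos(
    samples,
    MyVideoLabelsParser(),
    dataset_dir="/tmp/ingested-videos",  # assumed writable directory
)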
2265364259db49da2c102821898a2747ca0b1d62ec87d182345934cef35a5d0c | @classmethod
def from_dir(cls, dataset_dir, dataset_type, name=None, label_field='ground_truth', tags=None, **kwargs):
'Creates a :class:`Dataset` from the contents of the given directory.\n\n See :doc:`this guide </user_guide/dataset_creation/datasets>` for\n descriptions of available dataset types.\n\n Args:\n dataset_dir: the dataset directory\n dataset_type: the :class:`fiftyone.types.dataset_types.Dataset`\n type of the dataset in ``dataset_dir``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n **kwargs: optional keyword arguments to pass to the constructor of\n the :class:`fiftyone.utils.data.importers.DatasetImporter` for\n the specified ``dataset_type`` via the syntax\n ``DatasetImporter(dataset_dir, **kwargs)``\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_dir(dataset_dir, dataset_type, label_field=label_field, tags=tags, **kwargs)
return dataset | Creates a :class:`Dataset` from the contents of the given directory.
See :doc:`this guide </user_guide/dataset_creation/datasets>` for
descriptions of available dataset types.
Args:
dataset_dir: the dataset directory
dataset_type: the :class:`fiftyone.types.dataset_types.Dataset`
type of the dataset in ``dataset_dir``
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
**kwargs: optional keyword arguments to pass to the constructor of
the :class:`fiftyone.utils.data.importers.DatasetImporter` for
the specified ``dataset_type`` via the syntax
``DatasetImporter(dataset_dir, **kwargs)``
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_dir | dadounhind/fiftyone | 1 | python
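Sketch of from_dir, assuming the directory is laid out as an image classification tree; the path and name are placeholders.

import fiftyone as fo

dataset = fo.Dataset.from_dir(
    "/path/to/classification-tree",             # placeholder dataset directory
    fo.types.ImageClassificationDirectoryTree,  # a built-in dataset type
    name="from-dir-demo",
)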
524a18e90d3711a185c73a4c129cb49f9ae6c8151d9abb20da56a78b40898ad9 | @classmethod
def from_importer(cls, dataset_importer, name=None, label_field='ground_truth', tags=None):
'Creates a :class:`Dataset` by importing the samples in the given\n :class:`fiftyone.utils.data.importers.DatasetImporter`.\n\n See :ref:`this guide <custom-dataset-importer>` for more details about\n providing a custom\n :class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`\n to import datasets into FiftyOne.\n\n Args:\n dataset_importer: a\n :class:`fiftyone.utils.data.importers.DatasetImporter`\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels (if applicable)\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_importer(dataset_importer, label_field=label_field, tags=tags)
return dataset | Creates a :class:`Dataset` by importing the samples in the given
:class:`fiftyone.utils.data.importers.DatasetImporter`.
See :ref:`this guide <custom-dataset-importer>` for more details about
providing a custom
:class:`DatasetImporter <fiftyone.utils.data.importers.DatasetImporter>`
to import datasets into FiftyOne.
Args:
dataset_importer: a
:class:`fiftyone.utils.data.importers.DatasetImporter`
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels (if applicable)
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_importer | dadounhind/fiftyone | 1 | python
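Sketch of from_importer; CustomDatasetImporter is a hypothetical DatasetImporter subclass, in the spirit of the custom-dataset-importer guide the docstring links to.

import fiftyone as fo
from my_importers import CustomDatasetImporter  # hypothetical importer module

importer = CustomDatasetImporter("/path/to/dataset")  # placeholder location
dataset = fo.Dataset.from_importer(importer, name="from-importer-demo")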
02d358488dec035c663cb98c0f9df27b32bf8576f17ce12b47c8fb00af4da899 | @classmethod
def from_images(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given images.\n\n This operation does not read the images.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`\n to load image samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images(samples, sample_parser, tags=tags)
return dataset | Creates a :class:`Dataset` from the given images.
This operation does not read the images.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`UnlabeledImageSampleParser <fiftyone.utils.data.parsers.UnlabeledImageSampleParser>`
to load image samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.UnlabeledImageSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_images | dadounhind/fiftyone | 1 | python
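Sketch of from_images using the same default parser the instance methods use; paths and name are placeholders.

import fiftyone as fo
import fiftyone.utils.data as foud

image_paths = ["/path/to/img1.jpg", "/path/to/img2.jpg"]  # placeholder paths
dataset = fo.Dataset.from_images(
    image_paths,
    foud.ImageSampleParser(),
    name="from-images-demo",
)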
c2452bc0ce40718cab02ffe9e272f4c01ce2fcdc2100a9ac66d2aceeb5c0e437 | @classmethod
def from_labeled_images(cls, samples, sample_parser, name=None, label_field='ground_truth', tags=None):
'Creates a :class:`Dataset` from the given labeled images.\n\n This operation will iterate over all provided samples, but the images\n will not be read.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`\n to load labeled image samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n label_field ("ground_truth"): the name (or root name) of the\n field(s) to use for the labels\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_labeled_images(samples, sample_parser, label_field=label_field, tags=tags)
return dataset | Creates a :class:`Dataset` from the given labeled images.
This operation will iterate over all provided samples, but the images
will not be read.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`LabeledImageSampleParser <fiftyone.utils.data.parsers.LabeledImageSampleParser>`
to load labeled image samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledImageSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
label_field ("ground_truth"): the name (or root name) of the
field(s) to use for the labels
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_labeled_images | dadounhind/fiftyone | 1 | python
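Sketch of from_labeled_images with the built-in classification parser and placeholder (path, label) pairs.

import fiftyone as fo
import fiftyone.utils.data as foud

samples = [("/path/to/img1.jpg", "cat"), ("/path/to/img2.jpg", "dog")]  # placeholder pairs
dataset = fo.Dataset.from_labeled_images(
    samples,
    foud.ImageClassificationSampleParser(),
    name="from-labeled-images-demo",
    label_field="ground_truth",
)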
86a02acb783469576100943f5aa4fa374f1c07e69d6931f13fb7a92b5292ec85 | @classmethod
def from_images_dir(cls, images_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of images.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_dir(images_dir, tags=tags, recursive=recursive)
return dataset | Creates a :class:`Dataset` from the given directory of images.
This operation does not read the images.
Args:
images_dir: a directory of images
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_images_dir | dadounhind/fiftyone | 1 | python | @classmethod
def from_images_dir(cls, images_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of images.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_dir(images_dir, tags=tags, recursive=recursive)
return dataset | @classmethod
def from_images_dir(cls, images_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of images.\n\n This operation does not read the images.\n\n Args:\n images_dir: a directory of images\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_dir(images_dir, tags=tags, recursive=recursive)
return dataset<|docstring|>Creates a :class:`Dataset` from the given directory of images.
This operation does not read the images.
Args:
images_dir: a directory of images
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a :class:`Dataset`<|endoftext|> |
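A quick sketch of `from_images_dir`; the directory is a hypothetical placeholder.

import fiftyone as fo

# Recursively pick up every image under the (placeholder) directory
dataset = fo.Dataset.from_images_dir(
    "/tmp/images", name="images-sketch", tags=["unlabeled"], recursive=True
)
print(len(dataset))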
4074c575ea208f319b57ea428fe2a03631d38a907b081c32991e5184c8e47ab2 | @classmethod
def from_images_patt(cls, images_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of images.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_patt(images_patt, tags=tags)
return dataset | Creates a :class:`Dataset` from the given glob pattern of images.
This operation does not read the images.
Args:
images_patt: a glob pattern of images like
``/path/to/images/*.jpg``
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_images_patt | dadounhind/fiftyone | 1 | python | @classmethod
def from_images_patt(cls, images_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of images.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_patt(images_patt, tags=tags)
return dataset | @classmethod
def from_images_patt(cls, images_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of images.\n\n This operation does not read the images.\n\n Args:\n images_patt: a glob pattern of images like\n ``/path/to/images/*.jpg``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_images_patt(images_patt, tags=tags)
return dataset<|docstring|>Creates a :class:`Dataset` from the given glob pattern of images.
This operation does not read the images.
Args:
images_patt: a glob pattern of images like
``/path/to/images/*.jpg``
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset`<|endoftext|> |
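The glob variant works the same way; the pattern below is a placeholder.

import fiftyone as fo

dataset = fo.Dataset.from_images_patt("/tmp/images/*.jpg", tags=["unlabeled"])
print(len(dataset))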
94d3734f3a00e9af39843565be21e5e8b9640546958c5f8da6e773e95600489f | @classmethod
def from_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given videos.\n\n This operation does not read/decode the videos.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`\n to load video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos(samples, sample_parser, tags=tags)
return dataset | Creates a :class:`Dataset` from the given videos.
This operation does not read/decode the videos.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`
to load video samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_videos | dadounhind/fiftyone | 1 | python | @classmethod
def from_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given videos.\n\n This operation does not read/decode the videos.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`\n to load video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos(samples, sample_parser, tags=tags)
return dataset | @classmethod
def from_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given videos.\n\n This operation does not read/decode the videos.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`\n to load video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos(samples, sample_parser, tags=tags)
return dataset<|docstring|>Creates a :class:`Dataset` from the given videos.
This operation does not read/decode the videos.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`UnlabeledVideoSampleParser <fiftyone.utils.data.parsers.UnlabeledVideoSampleParser>`
to load video samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.UnlabeledVideoSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset`<|endoftext|> |
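For the unlabeled-video factory, a plain `VideoSampleParser` from `fiftyone.utils.data` should suffice (that parser name is an assumption of this sketch); the paths are placeholders.

import fiftyone as fo
import fiftyone.utils.data as foud

video_paths = ["/tmp/videos/a.mp4", "/tmp/videos/b.mp4"]  # placeholders

# VideoSampleParser treats each sample as a path to a video on disk
dataset = fo.Dataset.from_videos(video_paths, foud.VideoSampleParser(), tags=["raw"])
print(dataset.media_type)  # "video"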
e170960d40cc602f2d9b1468100aff21e411fc2494b81b2206e70a93cdbb92ac | @classmethod
def from_labeled_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given labeled videos.\n\n This operation will iterate over all provided samples, but the videos\n will not be read/decoded/etc.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`\n to load labeled video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_labeled_videos(samples, sample_parser, tags=tags)
return dataset | Creates a :class:`Dataset` from the given labeled videos.
This operation will iterate over all provided samples, but the videos
will not be read/decoded/etc.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`
to load labeled video samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_labeled_videos | dadounhind/fiftyone | 1 | python | @classmethod
def from_labeled_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given labeled videos.\n\n This operation will iterate over all provided samples, but the videos\n will not be read/decoded/etc.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`\n to load labeled video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_labeled_videos(samples, sample_parser, tags=tags)
return dataset | @classmethod
def from_labeled_videos(cls, samples, sample_parser, name=None, tags=None):
'Creates a :class:`Dataset` from the given labeled videos.\n\n This operation will iterate over all provided samples, but the videos\n will not be read/decoded/etc.\n\n See :ref:`this guide <custom-sample-parser>` for more details about\n providing a custom\n :class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`\n to load labeled video samples into FiftyOne.\n\n Args:\n samples: an iterable of samples\n sample_parser: a\n :class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`\n instance to use to parse the samples\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_labeled_videos(samples, sample_parser, tags=tags)
return dataset<|docstring|>Creates a :class:`Dataset` from the given labeled videos.
This operation will iterate over all provided samples, but the videos
will not be read/decoded/etc.
See :ref:`this guide <custom-sample-parser>` for more details about
providing a custom
:class:`LabeledVideoSampleParser <fiftyone.utils.data.parsers.LabeledVideoSampleParser>`
to load labeled video samples into FiftyOne.
Args:
samples: an iterable of samples
sample_parser: a
:class:`fiftyone.utils.data.parsers.LabeledVideoSampleParser`
instance to use to parse the samples
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset`<|endoftext|> |
b45112c870af4b032642b88e4221b88968b7df31f823a9957389e1fd34330928 | @classmethod
def from_videos_dir(cls, videos_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_dir: a directory of videos\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_dir(videos_dir, tags=tags, recursive=recursive)
return dataset | Creates a :class:`Dataset` from the given directory of videos.
This operation does not read/decode the videos.
Args:
videos_dir: a directory of videos
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_videos_dir | dadounhind/fiftyone | 1 | python | @classmethod
def from_videos_dir(cls, videos_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_dir: a directory of videos\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_dir(videos_dir, tags=tags, recursive=recursive)
return dataset | @classmethod
def from_videos_dir(cls, videos_dir, name=None, tags=None, recursive=True):
'Creates a :class:`Dataset` from the given directory of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_dir: a directory of videos\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n recursive (True): whether to recursively traverse subdirectories\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_dir(videos_dir, tags=tags, recursive=recursive)
return dataset<|docstring|>Creates a :class:`Dataset` from the given directory of videos.
This operation does not read/decode the videos.
Args:
videos_dir: a directory of videos
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
recursive (True): whether to recursively traverse subdirectories
Returns:
a :class:`Dataset`<|endoftext|> |
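The directory and glob variants for videos mirror the image ones; the paths are placeholders.

import fiftyone as fo

dataset = fo.Dataset.from_videos_dir("/tmp/videos", recursive=True)
# the glob form is equivalent for flat layouts:
# dataset = fo.Dataset.from_videos_patt("/tmp/videos/*.mp4")
print(len(dataset))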
82bd14f299138101e9e1207399fda168bdf54154b10f2369f646b291874545dc | @classmethod
def from_videos_patt(cls, videos_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_patt: a glob pattern of videos like\n ``/path/to/videos/*.mp4``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_patt(videos_patt, tags=tags)
return dataset | Creates a :class:`Dataset` from the given glob pattern of videos.
This operation does not read/decode the videos.
Args:
videos_patt: a glob pattern of videos like
``/path/to/videos/*.mp4``
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_videos_patt | dadounhind/fiftyone | 1 | python | @classmethod
def from_videos_patt(cls, videos_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_patt: a glob pattern of videos like\n ``/path/to/videos/*.mp4``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_patt(videos_patt, tags=tags)
return dataset | @classmethod
def from_videos_patt(cls, videos_patt, name=None, tags=None):
'Creates a :class:`Dataset` from the given glob pattern of videos.\n\n This operation does not read/decode the videos.\n\n Args:\n videos_patt: a glob pattern of videos like\n ``/path/to/videos/*.mp4``\n name (None): a name for the dataset. By default,\n :func:`get_default_dataset_name` is used\n tags (None): an optional list of tags to attach to each sample\n\n Returns:\n a :class:`Dataset`\n '
dataset = cls(name)
dataset.add_videos_patt(videos_patt, tags=tags)
return dataset<|docstring|>Creates a :class:`Dataset` from the given glob pattern of videos.
This operation does not read/decode the videos.
Args:
videos_patt: a glob pattern of videos like
``/path/to/videos/*.mp4``
name (None): a name for the dataset. By default,
:func:`get_default_dataset_name` is used
tags (None): an optional list of tags to attach to each sample
Returns:
a :class:`Dataset`<|endoftext|> |
d418e5043bdd2f6caed93cb3d563952574d83dc55aa43382630c06c0576c9f79 | def list_indexes(self):
'Returns the fields of the dataset that are indexed.\n\n Returns:\n a list of field names\n '
index_info = self._sample_collection.index_information()
index_fields = [v['key'][0][0] for v in index_info.values()]
return [f for f in index_fields if (not f.startswith('_'))] | Returns the fields of the dataset that are indexed.
Returns:
a list of field names | fiftyone/core/dataset.py | list_indexes | dadounhind/fiftyone | 1 | python | def list_indexes(self):
'Returns the fields of the dataset that are indexed.\n\n Returns:\n a list of field names\n '
index_info = self._sample_collection.index_information()
index_fields = [v['key'][0][0] for v in index_info.values()]
return [f for f in index_fields if (not f.startswith('_'))] | def list_indexes(self):
'Returns the fields of the dataset that are indexed.\n\n Returns:\n a list of field names\n '
index_info = self._sample_collection.index_information()
index_fields = [v['key'][0][0] for v in index_info.values()]
return [f for f in index_fields if (not f.startswith('_'))]<|docstring|>Returns the fields of the dataset that are indexed.
Returns:
a list of field names<|endoftext|> |
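A sketch tying `list_indexes` to the underlying MongoDB index metadata it reads via `index_information()`. Exactly which default indexes a fresh dataset carries depends on the FiftyOne version, so the printed list is not asserted here.

import fiftyone as fo

dataset = fo.Dataset("indexes-sketch")
# Returns only user-visible fields; internal "_"-prefixed indexes are filtered out
print(dataset.list_indexes())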
b2b525f0dbb426db05e02acd1f5a641755e4ff1b234dda53661931e6816e28d4 | def create_index(self, field_name, unique=False, sphere2d=False):
'Creates an index on the given field.\n\n If the given field already has a unique index, it will be retained\n regardless of the ``unique`` value you specify.\n\n If the given field already has a non-unique index but you requested a\n unique index, the existing index will be dropped.\n\n Indexes enable efficient sorting, merging, and other such operations.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n unique (False): whether to add a uniqueness constraint to the index\n sphere2d (False): whether the field is a GeoJSON field that\n requires a sphere2d index\n '
root = field_name.split('.', 1)[0]
if (root not in self.get_field_schema()):
raise ValueError(("Dataset has no field '%s'" % root))
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: v.get('unique', False) for v in index_info.values()}
if (field_name in index_map):
_unique = index_map[field_name]
if (_unique or (unique == _unique)):
return
self.drop_index(field_name)
if sphere2d:
index_spec = [(field_name, '2dsphere')]
else:
index_spec = field_name
self._sample_collection.create_index(index_spec, unique=unique) | Creates an index on the given field.
If the given field already has a unique index, it will be retained
regardless of the ``unique`` value you specify.
If the given field already has a non-unique index but you requested a
unique index, the existing index will be dropped.
Indexes enable efficient sorting, merging, and other such operations.
Args:
field_name: the field name or ``embedded.field.name``
unique (False): whether to add a uniqueness constraint to the index
sphere2d (False): whether the field is a GeoJSON field that
requires a sphere2d index | fiftyone/core/dataset.py | create_index | dadounhind/fiftyone | 1 | python | def create_index(self, field_name, unique=False, sphere2d=False):
'Creates an index on the given field.\n\n If the given field already has a unique index, it will be retained\n regardless of the ``unique`` value you specify.\n\n If the given field already has a non-unique index but you requested a\n unique index, the existing index will be dropped.\n\n Indexes enable efficient sorting, merging, and other such operations.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n unique (False): whether to add a uniqueness constraint to the index\n sphere2d (False): whether the field is a GeoJSON field that\n requires a sphere2d index\n '
root = field_name.split('.', 1)[0]
if (root not in self.get_field_schema()):
raise ValueError(("Dataset has no field '%s'" % root))
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: v.get('unique', False) for v in index_info.values()}
if (field_name in index_map):
_unique = index_map[field_name]
if (_unique or (unique == _unique)):
return
self.drop_index(field_name)
if sphere2d:
index_spec = [(field_name, '2dsphere')]
else:
index_spec = field_name
self._sample_collection.create_index(index_spec, unique=unique) | def create_index(self, field_name, unique=False, sphere2d=False):
'Creates an index on the given field.\n\n If the given field already has a unique index, it will be retained\n regardless of the ``unique`` value you specify.\n\n If the given field already has a non-unique index but you requested a\n unique index, the existing index will be dropped.\n\n Indexes enable efficient sorting, merging, and other such operations.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n unique (False): whether to add a uniqueness constraint to the index\n sphere2d (False): whether the field is a GeoJSON field that\n requires a sphere2d index\n '
root = field_name.split('.', 1)[0]
if (root not in self.get_field_schema()):
raise ValueError(("Dataset has no field '%s'" % root))
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: v.get('unique', False) for v in index_info.values()}
if (field_name in index_map):
_unique = index_map[field_name]
if (_unique or (unique == _unique)):
return
self.drop_index(field_name)
if sphere2d:
index_spec = [(field_name, '2dsphere')]
else:
index_spec = field_name
self._sample_collection.create_index(index_spec, unique=unique)<|docstring|>Creates an index on the given field.
If the given field already has a unique index, it will be retained
regardless of the ``unique`` value you specify.
If the given field already has a non-unique index but you requested a
unique index, the existing index will be dropped.
Indexes enable efficient sorting, merging, and other such operations.
Args:
field_name: the field name or ``embedded.field.name``
unique (False): whether to add a uniqueness constraint to the index
sphere2d (False): whether the field is a GeoJSON field that
requires a sphere2d index<|endoftext|> |
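A sketch of `create_index`: a unique index on `filepath` prevents duplicate file paths. The field and uniqueness choice are illustrative.

import fiftyone as fo

dataset = fo.Dataset("index-sketch")
dataset.create_index("filepath", unique=True)

# Re-requesting a non-unique index on the same field is a no-op,
# since the method keeps an existing unique index
dataset.create_index("filepath", unique=False)
print(dataset.list_indexes())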
0b3f5236925d47baa1d2051ef450a7ad0f2b48a4fcf2129bf4043e9988b87670 | def drop_index(self, field_name):
'Drops the index on the given field.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n '
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: k for (k, v) in index_info.items()}
if (field_name not in index_map):
if (('.' not in field_name) and (field_name not in self.get_field_schema())):
raise ValueError(("Dataset has no field '%s'" % field_name))
raise ValueError(("Dataset field '%s' is not indexed" % field_name))
self._sample_collection.drop_index(index_map[field_name]) | Drops the index on the given field.
Args:
field_name: the field name or ``embedded.field.name`` | fiftyone/core/dataset.py | drop_index | dadounhind/fiftyone | 1 | python | def drop_index(self, field_name):
'Drops the index on the given field.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n '
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: k for (k, v) in index_info.items()}
if (field_name not in index_map):
if (('.' not in field_name) and (field_name not in self.get_field_schema())):
raise ValueError(("Dataset has no field '%s'" % field_name))
raise ValueError(("Dataset field '%s' is not indexed" % field_name))
self._sample_collection.drop_index(index_map[field_name]) | def drop_index(self, field_name):
'Drops the index on the given field.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n '
index_info = self._sample_collection.index_information()
index_map = {v['key'][0][0]: k for (k, v) in index_info.items()}
if (field_name not in index_map):
if (('.' not in field_name) and (field_name not in self.get_field_schema())):
raise ValueError(("Dataset has no field '%s'" % field_name))
raise ValueError(("Dataset field '%s' is not indexed" % field_name))
self._sample_collection.drop_index(index_map[field_name])<|docstring|>Drops the index on the given field.
Args:
field_name: the field name or ``embedded.field.name``<|endoftext|> |
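Dropping an index reverses the above; per the body, asking to drop a non-indexed (but existing) field raises a `ValueError`.

import fiftyone as fo

dataset = fo.Dataset("drop-index-sketch")
dataset.create_index("filepath")
dataset.drop_index("filepath")

try:
    dataset.drop_index("filepath")  # no longer indexed
except ValueError as e:
    print(e)  # "Dataset field 'filepath' is not indexed"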
e34d4998aae9147fcb4a5e2b020727c6d4196e41b56f6c9c6895d3965ba8b4ab | @classmethod
def from_dict(cls, d, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from a JSON dictionary generated by\n :func:`fiftyone.core.collections.SampleCollection.to_dict`.\n\n The JSON dictionary can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n d: a JSON dictionary\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n frame_labels_dir (None): a directory of per-sample JSON files\n containing the frame labels for video samples. If omitted, it\n is assumed that the frame labels are included directly in the\n provided JSON dict. Only applicable to video datasets\n\n Returns:\n a :class:`Dataset`\n '
if (name is None):
name = d['name']
if (rel_dir is not None):
rel_dir = os.path.abspath(os.path.expanduser(rel_dir))
name = make_unique_dataset_name(name)
dataset = cls(name)
media_type = d.get('media_type', None)
if (media_type is not None):
dataset.media_type = media_type
dataset._apply_field_schema(d['sample_fields'])
if (media_type == fom.VIDEO):
dataset._apply_frame_field_schema(d['frame_fields'])
dataset.info = d.get('info', {})
dataset.classes = d.get('classes', {})
dataset.default_classes = d.get('default_classes', [])
dataset.mask_targets = dataset._parse_mask_targets(d.get('mask_targets', {}))
dataset.default_mask_targets = dataset._parse_default_mask_targets(d.get('default_mask_targets', {}))
def parse_sample(sd):
if (rel_dir and (not sd['filepath'].startswith(os.path.sep))):
sd['filepath'] = os.path.join(rel_dir, sd['filepath'])
if (media_type == fom.VIDEO):
frames = sd.pop('frames', {})
if etau.is_str(frames):
frames_path = os.path.join(frame_labels_dir, frames)
frames = etas.load_json(frames_path).get('frames', {})
sample = fos.Sample.from_dict(sd)
sample._frames = fofr.Frames()
for (key, value) in frames.items():
sample.frames[int(key)] = fofr.Frame.from_dict(value)
else:
sample = fos.Sample.from_dict(sd)
return sample
samples = d['samples']
num_samples = len(samples)
_samples = map(parse_sample, samples)
dataset.add_samples(_samples, expand_schema=False, num_samples=num_samples)
return dataset | Loads a :class:`Dataset` from a JSON dictionary generated by
:func:`fiftyone.core.collections.SampleCollection.to_dict`.
The JSON dictionary can contain an export of any
:class:`fiftyone.core.collections.SampleCollection`, e.g.,
:class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.
Args:
d: a JSON dictionary
name (None): a name for the new dataset. By default, ``d["name"]``
is used
rel_dir (None): a relative directory to prepend to the ``filepath``
of each sample, if the filepath is not absolute (begins with a
path separator). The path is converted to an absolute path
(if necessary) via
``os.path.abspath(os.path.expanduser(rel_dir))``
frame_labels_dir (None): a directory of per-sample JSON files
containing the frame labels for video samples. If omitted, it
is assumed that the frame labels are included directly in the
provided JSON dict. Only applicable to video datasets
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_dict | dadounhind/fiftyone | 1 | python | @classmethod
def from_dict(cls, d, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from a JSON dictionary generated by\n :func:`fiftyone.core.collections.SampleCollection.to_dict`.\n\n The JSON dictionary can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n d: a JSON dictionary\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n frame_labels_dir (None): a directory of per-sample JSON files\n containing the frame labels for video samples. If omitted, it\n is assumed that the frame labels are included directly in the\n provided JSON dict. Only applicable to video datasets\n\n Returns:\n a :class:`Dataset`\n '
if (name is None):
name = d['name']
if (rel_dir is not None):
rel_dir = os.path.abspath(os.path.expanduser(rel_dir))
name = make_unique_dataset_name(name)
dataset = cls(name)
media_type = d.get('media_type', None)
if (media_type is not None):
dataset.media_type = media_type
dataset._apply_field_schema(d['sample_fields'])
if (media_type == fom.VIDEO):
dataset._apply_frame_field_schema(d['frame_fields'])
dataset.info = d.get('info', {})
dataset.classes = d.get('classes', {})
dataset.default_classes = d.get('default_classes', [])
dataset.mask_targets = dataset._parse_mask_targets(d.get('mask_targets', {}))
dataset.default_mask_targets = dataset._parse_default_mask_targets(d.get('default_mask_targets', {}))
def parse_sample(sd):
if (rel_dir and (not sd['filepath'].startswith(os.path.sep))):
sd['filepath'] = os.path.join(rel_dir, sd['filepath'])
if (media_type == fom.VIDEO):
frames = sd.pop('frames', {})
if etau.is_str(frames):
frames_path = os.path.join(frame_labels_dir, frames)
frames = etas.load_json(frames_path).get('frames', {})
sample = fos.Sample.from_dict(sd)
sample._frames = fofr.Frames()
for (key, value) in frames.items():
sample.frames[int(key)] = fofr.Frame.from_dict(value)
else:
sample = fos.Sample.from_dict(sd)
return sample
samples = d['samples']
num_samples = len(samples)
_samples = map(parse_sample, samples)
dataset.add_samples(_samples, expand_schema=False, num_samples=num_samples)
return dataset | @classmethod
def from_dict(cls, d, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from a JSON dictionary generated by\n :func:`fiftyone.core.collections.SampleCollection.to_dict`.\n\n The JSON dictionary can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n d: a JSON dictionary\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n frame_labels_dir (None): a directory of per-sample JSON files\n containing the frame labels for video samples. If omitted, it\n is assumed that the frame labels are included directly in the\n provided JSON dict. Only applicable to video datasets\n\n Returns:\n a :class:`Dataset`\n '
if (name is None):
name = d['name']
if (rel_dir is not None):
rel_dir = os.path.abspath(os.path.expanduser(rel_dir))
name = make_unique_dataset_name(name)
dataset = cls(name)
media_type = d.get('media_type', None)
if (media_type is not None):
dataset.media_type = media_type
dataset._apply_field_schema(d['sample_fields'])
if (media_type == fom.VIDEO):
dataset._apply_frame_field_schema(d['frame_fields'])
dataset.info = d.get('info', {})
dataset.classes = d.get('classes', {})
dataset.default_classes = d.get('default_classes', [])
dataset.mask_targets = dataset._parse_mask_targets(d.get('mask_targets', {}))
dataset.default_mask_targets = dataset._parse_default_mask_targets(d.get('default_mask_targets', {}))
def parse_sample(sd):
if (rel_dir and (not sd['filepath'].startswith(os.path.sep))):
sd['filepath'] = os.path.join(rel_dir, sd['filepath'])
if (media_type == fom.VIDEO):
frames = sd.pop('frames', {})
if etau.is_str(frames):
frames_path = os.path.join(frame_labels_dir, frames)
frames = etas.load_json(frames_path).get('frames', {})
sample = fos.Sample.from_dict(sd)
sample._frames = fofr.Frames()
for (key, value) in frames.items():
sample.frames[int(key)] = fofr.Frame.from_dict(value)
else:
sample = fos.Sample.from_dict(sd)
return sample
samples = d['samples']
num_samples = len(samples)
_samples = map(parse_sample, samples)
dataset.add_samples(_samples, expand_schema=False, num_samples=num_samples)
return dataset<|docstring|>Loads a :class:`Dataset` from a JSON dictionary generated by
:func:`fiftyone.core.collections.SampleCollection.to_dict`.
The JSON dictionary can contain an export of any
:class:`fiftyone.core.collections.SampleCollection`, e.g.,
:class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.
Args:
d: a JSON dictionary
name (None): a name for the new dataset. By default, ``d["name"]``
is used
rel_dir (None): a relative directory to prepend to the ``filepath``
of each sample, if the filepath is not absolute (begins with a
path separator). The path is converted to an absolute path
(if necessary) via
``os.path.abspath(os.path.expanduser(rel_dir))``
frame_labels_dir (None): a directory of per-sample JSON files
containing the frame labels for video samples. If omitted, it
is assumed that the frame labels are included directly in the
provided JSON dict. Only applicable to video datasets
Returns:
a :class:`Dataset`<|endoftext|> |
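A round-trip sketch pairing `from_dict` with `SampleCollection.to_dict`; the source dataset and file path are throwaway placeholders.

import fiftyone as fo

src = fo.Dataset()
src.add_sample(fo.Sample(filepath="/tmp/img.png", tags=["demo"]))  # placeholder path

d = src.to_dict()  # serialize the whole collection
dst = fo.Dataset.from_dict(d, name="from-dict-sketch")
print(len(dst), dst.first().tags)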
001932c356a6e48d14d84771386ac185a16576a1c0c301e7fd3d572657ee35fa | @classmethod
def from_json(cls, path_or_str, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from JSON generated by\n :func:`fiftyone.core.collections.SampleCollection.write_json` or\n :func:`fiftyone.core.collections.SampleCollection.to_json`.\n\n The JSON file can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n path_or_str: the path to a JSON file on disk or a JSON string\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n\n Returns:\n a :class:`Dataset`\n '
d = etas.load_json(path_or_str)
return cls.from_dict(d, name=name, rel_dir=rel_dir, frame_labels_dir=frame_labels_dir) | Loads a :class:`Dataset` from JSON generated by
:func:`fiftyone.core.collections.SampleCollection.write_json` or
:func:`fiftyone.core.collections.SampleCollection.to_json`.
The JSON file can contain an export of any
:class:`fiftyone.core.collections.SampleCollection`, e.g.,
:class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.
Args:
path_or_str: the path to a JSON file on disk or a JSON string
name (None): a name for the new dataset. By default, ``d["name"]``
is used
rel_dir (None): a relative directory to prepend to the ``filepath``
of each sample, if the filepath is not absolute (begins with a
path separator). The path is converted to an absolute path
(if necessary) via
``os.path.abspath(os.path.expanduser(rel_dir))``
Returns:
a :class:`Dataset` | fiftyone/core/dataset.py | from_json | dadounhind/fiftyone | 1 | python | @classmethod
def from_json(cls, path_or_str, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from JSON generated by\n :func:`fiftyone.core.collections.SampleCollection.write_json` or\n :func:`fiftyone.core.collections.SampleCollection.to_json`.\n\n The JSON file can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n path_or_str: the path to a JSON file on disk or a JSON string\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n\n Returns:\n a :class:`Dataset`\n '
d = etas.load_json(path_or_str)
return cls.from_dict(d, name=name, rel_dir=rel_dir, frame_labels_dir=frame_labels_dir) | @classmethod
def from_json(cls, path_or_str, name=None, rel_dir=None, frame_labels_dir=None):
'Loads a :class:`Dataset` from JSON generated by\n :func:`fiftyone.core.collections.SampleCollection.write_json` or\n :func:`fiftyone.core.collections.SampleCollection.to_json`.\n\n The JSON file can contain an export of any\n :class:`fiftyone.core.collections.SampleCollection`, e.g.,\n :class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.\n\n Args:\n path_or_str: the path to a JSON file on disk or a JSON string\n name (None): a name for the new dataset. By default, ``d["name"]``\n is used\n rel_dir (None): a relative directory to prepend to the ``filepath``\n of each sample, if the filepath is not absolute (begins with a\n path separator). The path is converted to an absolute path\n (if necessary) via\n ``os.path.abspath(os.path.expanduser(rel_dir))``\n\n Returns:\n a :class:`Dataset`\n '
d = etas.load_json(path_or_str)
return cls.from_dict(d, name=name, rel_dir=rel_dir, frame_labels_dir=frame_labels_dir)<|docstring|>Loads a :class:`Dataset` from JSON generated by
:func:`fiftyone.core.collections.SampleCollection.write_json` or
:func:`fiftyone.core.collections.SampleCollection.to_json`.
The JSON file can contain an export of any
:class:`fiftyone.core.collections.SampleCollection`, e.g.,
:class:`Dataset` or :class:`fiftyone.core.view.DatasetView`.
Args:
path_or_str: the path to a JSON file on disk or a JSON string
name (None): a name for the new dataset. By default, ``d["name"]``
is used
rel_dir (None): a relative directory to prepend to the ``filepath``
of each sample, if the filepath is not absolute (begins with a
path separator). The path is converted to an absolute path
(if necessary) via
``os.path.abspath(os.path.expanduser(rel_dir))``
Returns:
a :class:`Dataset`<|endoftext|> |
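The JSON variant is the same round trip through disk; the output path is a placeholder.

import fiftyone as fo

src = fo.Dataset()
src.add_sample(fo.Sample(filepath="/tmp/img.png"))
src.write_json("/tmp/dataset.json")  # placeholder output path

dst = fo.Dataset.from_json("/tmp/dataset.json", name="from-json-sketch")
print(len(dst))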
a3c261e517d5f62115129d0468b829cd89d6c0675c35691ca236045520161bda | def plugin_unloaded(self) -> None:
'\n This is called **from the main thread** when the plugin unloads. In that case we must destroy all sessions\n from the main thread. That could lead to some dict/list being mutated while iterated over, so be careful\n '
self._end_sessions_async() | This is called **from the main thread** when the plugin unloads. In that case we must destroy all sessions
from the main thread. That could lead to some dict/list being mutated while iterated over, so be careful | plugin/core/windows.py | plugin_unloaded | chendesheng/LSP | 0 | python | def plugin_unloaded(self) -> None:
'\n This is called **from the main thread** when the plugin unloads. In that case we must destroy all sessions\n from the main thread. That could lead to some dict/list being mutated while iterated over, so be careful\n '
self._end_sessions_async() | def plugin_unloaded(self) -> None:
'\n This is called **from the main thread** when the plugin unloads. In that case we must destroy all sessions\n from the main thread. That could lead to some dict/list being mutated while iterated over, so be careful\n '
self._end_sessions_async()<|docstring|>This is called **from the main thread** when the plugin unloads. In that case we must destroy all sessions
from the main thread. That could lead to some dict/list being mutated while iterated over, so be careful<|endoftext|> |
b6329cfd9ce5533cfc0cb03a381c3b2178c8db0dd1e313b498daa35352d7a166 | def stderr_message(self, message: str) -> None:
'\n Not handled here as stderr messages are handled by WindowManager regardless\n if this logger is enabled.\n '
pass | Not handled here as stderr messages are handled by WindowManager regardless
if this logger is enabled. | plugin/core/windows.py | stderr_message | chendesheng/LSP | 0 | python | def stderr_message(self, message: str) -> None:
'\n Not handled here as stderr messages are handled by WindowManager regardless\n if this logger is enabled.\n '
pass | def stderr_message(self, message: str) -> None:
'\n Not handled here as stderr messages are handled by WindowManager regardless\n if this logger is enabled.\n '
pass<|docstring|>Not handled here as stderr messages are handled by WindowManager regardless
if this logger is enabled.<|endoftext|> |
2ddba18b5447029135df4290f497b149c433c957a7e82446fd382e567e0a55cf | def _on_new_client(self, client: Dict, server: WebsocketServer) -> None:
'Called for every client connecting (after handshake).'
debug(('New client connected and was given id %d' % client['id'])) | Called for every client connecting (after handshake). | plugin/core/windows.py | _on_new_client | chendesheng/LSP | 0 | python | def _on_new_client(self, client: Dict, server: WebsocketServer) -> None:
debug(('New client connected and was given id %d' % client['id'])) | def _on_new_client(self, client: Dict, server: WebsocketServer) -> None:
debug(('New client connected and was given id %d' % client['id']))<|docstring|>Called for every client connecting (after handshake).<|endoftext|> |
6b6a7a987b39358e1eb823041e53c65f1613655813dc9e075ccf23c9f1848722 | def _on_client_left(self, client: Dict, server: WebsocketServer) -> None:
'Called for every client disconnecting.'
debug(('Client(%d) disconnected' % client['id'])) | Called for every client disconnecting. | plugin/core/windows.py | _on_client_left | chendesheng/LSP | 0 | python | def _on_client_left(self, client: Dict, server: WebsocketServer) -> None:
debug(('Client(%d) disconnected' % client['id'])) | def _on_client_left(self, client: Dict, server: WebsocketServer) -> None:
debug(('Client(%d) disconnected' % client['id']))<|docstring|>Called for every client disconnecting.<|endoftext|> |
c7071900d1a3322aff7654ae75be251879333999c43babdcf30119d07a7e18cb | def _on_message_received(self, client: Dict, server: WebsocketServer, message: str) -> None:
'Called when a client sends a message.'
debug(('Client(%d) said: %s' % (client['id'], message))) | Called when a client sends a message. | plugin/core/windows.py | _on_message_received | chendesheng/LSP | 0 | python | def _on_message_received(self, client: Dict, server: WebsocketServer, message: str) -> None:
debug(('Client(%d) said: %s' % (client['id'], message))) | def _on_message_received(self, client: Dict, server: WebsocketServer, message: str) -> None:
debug(('Client(%d) said: %s' % (client['id'], message)))<|docstring|>Called when a client sends a message.<|endoftext|> |
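The three handler records above follow the callback signatures of the `websocket-server` package this plugin appears to wrap. A standalone sketch of how such handlers are typically wired up (host, port, and the echo behavior are assumptions; plain `print` stands in for the plugin's `debug` helper):

from websocket_server import WebsocketServer

def on_new_client(client, server):
    print("New client connected and was given id %d" % client["id"])

def on_client_left(client, server):
    print("Client(%d) disconnected" % client["id"])

def on_message_received(client, server, message):
    print("Client(%d) said: %s" % (client["id"], message))
    server.send_message(client, message)  # echo the message back

server = WebsocketServer(host="127.0.0.1", port=9001)
server.set_fn_new_client(on_new_client)
server.set_fn_client_left(on_client_left)
server.set_fn_message_received(on_message_received)
server.run_forever()  # blocks; serve until interrupted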
da05866294adef670d63a4dac5e8492aa1df718e996f65752a0ade06843de666 | def printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):
'\n Call in a loop to create terminal progress bar\n @params:\n iteration - Required : current iteration (Int)\n total - Required : total iterations (Int)\n prefix - Optional : prefix string (Str)\n suffix - Optional : suffix string (Str)\n decimals - Optional : positive number of decimals in percent complete (Int)\n barLength - Optional : character length of bar (Int)\n '
formatStr = (('{0:.' + str(decimals)) + 'f}')
percents = formatStr.format((100 * (iteration / float(total))))
filledLength = int(round(((barLength * iteration) / float(total))))
bar = (('█' * filledLength) + ('-' * (barLength - filledLength)))
(sys.stdout.write(('\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix))),)
if (iteration == total):
sys.stdout.write('\x1b[2K\r')
sys.stdout.flush() | Call in a loop to create terminal progress bar
@params:
iteration - Required : current iteration (Int)
total - Required : total iterations (Int)
prefix - Optional : prefix string (Str)
suffix - Optional : suffix string (Str)
decimals - Optional : positive number of decimals in percent complete (Int)
barLength - Optional : character length of bar (Int) | pysot/datasets/creation/vid.py | printProgress | eldercrow/tracking-pytorch | 0 | python | def printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):
'\n Call in a loop to create terminal progress bar\n @params:\n iteration - Required : current iteration (Int)\n total - Required : total iterations (Int)\n prefix - Optional : prefix string (Str)\n suffix - Optional : suffix string (Str)\n decimals - Optional : positive number of decimals in percent complete (Int)\n barLength - Optional : character length of bar (Int)\n '
formatStr = (('{0:.' + str(decimals)) + 'f}')
percents = formatStr.format((100 * (iteration / float(total))))
filledLength = int(round(((barLength * iteration) / float(total))))
bar = (('█' * filledLength) + ('-' * (barLength - filledLength)))
(sys.stdout.write(('\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix))),)
if (iteration == total):
sys.stdout.write('\x1b[2K\r')
sys.stdout.flush() | def printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):
'\n Call in a loop to create terminal progress bar\n @params:\n iteration - Required : current iteration (Int)\n total - Required : total iterations (Int)\n prefix - Optional : prefix string (Str)\n suffix - Optional : suffix string (Str)\n decimals - Optional : positive number of decimals in percent complete (Int)\n barLength - Optional : character length of bar (Int)\n '
formatStr = (('{0:.' + str(decimals)) + 'f}')
percents = formatStr.format((100 * (iteration / float(total))))
filledLength = int(round(((barLength * iteration) / float(total))))
bar = (('█' * filledLength) + ('-' * (barLength - filledLength)))
(sys.stdout.write(('\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix))),)
if (iteration == total):
sys.stdout.write('\x1b[2K\r')
sys.stdout.flush()<|docstring|>Call in a loop to create terminal progress bar
@params:
iteration - Required : current iteration (Int)
total - Required : total iterations (Int)
prefix - Optional : prefix string (Str)
suffix - Optional : suffix string (Str)
decimals - Optional : positive number of decimals in percent complete (Int)
barLength - Optional : character length of bar (Int)<|endoftext|> |
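A typical driving loop for the helper above, assuming the `printProgress` function from the record is in scope; the total and sleep are arbitrary.

import time

total = 50
for i in range(total + 1):
    printProgress(i, total, prefix="Progress:", suffix="Complete", barLength=40)
    time.sleep(0.02)  # stand-in for real work; the bar clears itself at i == total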
3504eac8cb1c07455d3b20099515cc3cb190ebebc458dd35cb0bc9a9004dc7a2 | def load_concepts(skip_homonym=False) -> List[List[str]]:
'Load concepts from disk. '
data = {}
feature_vocab = set()
category_vocab = set()
ref_vocab = set()
for f in os.listdir('resources/concepts/'):
tree = et.parse(('resources/concepts/%s' % f)).getroot()
cat = tree.get('category')
category_vocab.add(cat)
for subcat in tree:
if (subcat.tag == 'concept'):
subcatname = cat
concept = subcat
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
elif (subcat.tag == 'subcategory'):
subcatname = subcat.get('name')
category_vocab.add(subcatname)
for concept in subcat.findall('concept'):
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
else:
assert False, '`concept` and `subcategory` should be exhaustive.'
return (data, feature_vocab, category_vocab) | Load concepts from disk. | egg/zoo/objects_game_concepts/concepts.py | load_concepts | cjlovering/EGG | 0 | python | def load_concepts(skip_homonym=False) -> List[List[str]]:
' '
data = {}
feature_vocab = set()
category_vocab = set()
ref_vocab = set()
for f in os.listdir('resources/concepts/'):
tree = et.parse(('resources/concepts/%s' % f)).getroot()
cat = tree.get('category')
category_vocab.add(cat)
for subcat in tree:
if (subcat.tag == 'concept'):
subcatname = cat
concept = subcat
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
elif (subcat.tag == 'subcategory'):
subcatname = subcat.get('name')
category_vocab.add(subcatname)
for concept in subcat.findall('concept'):
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
else:
assert False, '`concept` and `subcategory` should be exhaustive.'
return (data, feature_vocab, category_vocab) | def load_concepts(skip_homonym=False) -> List[List[str]]:
' '
data = {}
feature_vocab = set()
category_vocab = set()
ref_vocab = set()
for f in os.listdir('resources/concepts/'):
tree = et.parse(('resources/concepts/%s' % f)).getroot()
cat = tree.get('category')
category_vocab.add(cat)
for subcat in tree:
if (subcat.tag == 'concept'):
subcatname = cat
concept = subcat
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
elif (subcat.tag == 'subcategory'):
subcatname = subcat.get('name')
category_vocab.add(subcatname)
for concept in subcat.findall('concept'):
name = concept.get('name')
if (('_' in name) and skip_homonym):
continue
ref_vocab.add(name)
feats = []
for aspect in concept:
attrs = aspect.text.split()
feats += attrs
feature_vocab.update(set(attrs))
data[(cat, subcatname, name)] = feats
else:
assert False, '`concept` and `subcategory` should be exhaustive.'
return (data, feature_vocab, category_vocab)<|docstring|>Load concepts from disk.<|endoftext|> |
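`load_concepts` reads every XML file under `resources/concepts/`. A minimal file that would satisfy the parser might look like the one written below; this layout is inferred from the traversal logic (root `category` attribute, `concept`/`subcategory` children, whitespace-separated `aspect` text), not taken from the repo.

import os

os.makedirs("resources/concepts", exist_ok=True)
with open("resources/concepts/animals.xml", "w") as f:
    f.write(
        '<category category="animal">'
        '  <concept name="dog"><aspect>barks furry</aspect></concept>'
        '  <subcategory name="bird">'
        '    <concept name="crow"><aspect>flies black</aspect></concept>'
        '  </subcategory>'
        '</category>'
    )

data, feature_vocab, category_vocab = load_concepts()
# data keys are (category, subcategory, name) triples, e.g.:
# {("animal", "animal", "dog"): ["barks", "furry"],
#  ("animal", "bird", "crow"): ["flies", "black"]}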
dc257c81c7602dd0897484e2a669dce2b5229786782869927cfd2519202d1da8 | def save_summary(dataset_path: str, train, test, dev):
'Summarize the dataset. '
data = ((train + test) + dev)
features = sorted(list(set(list(itertools.chain.from_iterable(data)))))
with open(f'{dataset_path}/summary.json', 'w') as f:
json.dump({'num_train_items': len(train), 'num_test_items': len(test), 'num_dev_items': len(dev), 'num_feature_sets': len(features), 'num_feature_values': 1, 'num_features': len(features), 'features': features}, f, indent=2) | Summarize the dataset. | egg/zoo/objects_game_concepts/concepts.py | save_summary | cjlovering/EGG | 0 | python | def save_summary(dataset_path: str, train, test, dev):
' '
data = ((train + test) + dev)
features = sorted(list(set(list(itertools.chain.from_iterable(data)))))
with open(f'{dataset_path}/summary.json', 'w') as f:
json.dump({'num_train_items': len(train), 'num_test_items': len(test), 'num_dev_items': len(dev), 'num_feature_sets': len(features), 'num_feature_values': 1, 'num_features': len(features), 'features': features}, f, indent=2) | def save_summary(dataset_path: str, train, test, dev):
' '
data = ((train + test) + dev)
features = sorted(list(set(list(itertools.chain.from_iterable(data)))))
with open(f'{dataset_path}/summary.json', 'w') as f:
json.dump({'num_train_items': len(train), 'num_test_items': len(test), 'num_dev_items': len(dev), 'num_feature_sets': len(features), 'num_feature_values': 1, 'num_features': len(features), 'features': features}, f, indent=2)<|docstring|>Summarize the dataset.<|endoftext|> |
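A matching sketch for `save_summary`, which only needs lists of feature lists plus a writable directory; it assumes the function (with its `json`/`itertools` imports) from the record above is in scope.

import os

train = [["barks", "furry"], ["flies", "black"]]
test = [["barks"]]
dev = [["furry"]]

dataset_path = "/tmp/concepts-sketch"  # placeholder output directory
os.makedirs(dataset_path, exist_ok=True)
save_summary(dataset_path, train, test, dev)  # writes summary.json there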
14a9305996725fff2ddfb52d377e86561494ce47f1c1a52754c78f1bfda67b28 | def format_client_list_result(result, exclude_attributes=None):
'\n Format an API client list return which contains a list of objects.\n\n :param exclude_attributes: Optional list of attributes to exclude from the item.\n :type exclude_attributes: ``list``\n\n :rtype: ``list`` of ``dict``\n '
formatted = []
for item in result:
value = item.to_dict(exclude_attributes=exclude_attributes)
formatted.append(value)
return formatted | Format an API client list return which contains a list of objects.
:param exclude_attributes: Optional list of attributes to exclude from the item.
:type exclude_attributes: ``list``
:rtype: ``list`` of ``dict`` | packs/st2/actions/lib/formatters.py | format_client_list_result | Mierdin/st2contrib | 164 | python | def format_client_list_result(result, exclude_attributes=None):
'\n Format an API client list return which contains a list of objects.\n\n :param exclude_attributes: Optional list of attributes to exclude from the item.\n :type exclude_attributes: ``list``\n\n :rtype: ``list`` of ``dict``\n '
formatted = []
for item in result:
value = item.to_dict(exclude_attributes=exclude_attributes)
formatted.append(value)
return formatted | def format_client_list_result(result, exclude_attributes=None):
'\n Format an API client list return which contains a list of objects.\n\n :param exclude_attributes: Optional list of attributes to exclude from the item.\n :type exclude_attributes: ``list``\n\n :rtype: ``list`` of ``dict``\n '
formatted = []
for item in result:
value = item.to_dict(exclude_attributes=exclude_attributes)
formatted.append(value)
return formatted<|docstring|>Format an API client list return which contains a list of objects.
:param exclude_attributes: Optional list of attributes to exclude from the item.
:type exclude_attributes: ``list``
:rtype: ``list`` of ``dict``<|endoftext|> |
203adfd203a6bb76718cfa3909310e39494b1c8804532411e1b3d8cc0f0a5603 | def _get_dictionary(variables: List[Tuple[(str, List[str])]], dict_name: str, dict_items: List[str]):
'\n Get the dictionary whose keys are the autocompletion options\n ${dict_name}([dict_key])*[<dictionary.keys()>]\n '
for (var_name, var_value) in variables:
if (not var_name.startswith(dict_name)):
continue
dictionary = _as_dictionary(var_value)
dict_keys = dictionary.keys()
dict_entry = dict_items.pop()
if (dict_entry == ''):
return dictionary
matching_keys = [key for key in dict_keys if (dict_entry in key)]
if (len(matching_keys) == 0):
return {}
if ((len(matching_keys) == 1) and (dict_entry == matching_keys[0])):
dict_value = dictionary[dict_entry]
if dict_value.startswith('&'):
dict_name = _get_dict_name(dict_value)
dict_items += _get_dict_keys(dict_value)
return _get_dictionary(variables, dict_name, dict_items)
else:
return {}
else:
return {key: dictionary[key] for key in matching_keys}
return None | Get the dictionary whose keys are the autocompletion options
${dict_name}([dict_key])*[<dictionary.keys()>] | robotframework-ls/src/robotframework_ls/impl/dictionary_completions.py | _get_dictionary | JoeyGrajciar/robotframework-lsp | 92 | python | def _get_dictionary(variables: List[Tuple[(str, List[str])]], dict_name: str, dict_items: List[str]):
'\n Get the dictionary whose keys are the autocompletion options\n ${dict_name}([dict_key])*[<dictionary.keys()>]\n '
for (var_name, var_value) in variables:
if (not var_name.startswith(dict_name)):
continue
dictionary = _as_dictionary(var_value)
dict_keys = dictionary.keys()
dict_entry = dict_items.pop()
if (dict_entry == ''):
return dictionary
matching_keys = [key for key in dict_keys if (dict_entry in key)]
if (len(matching_keys) == 0):
return {}
if ((len(matching_keys) == 1) and (dict_entry == matching_keys[0])):
dict_value = dictionary[dict_entry]
if dict_value.startswith('&'):
dict_name = _get_dict_name(dict_value)
dict_items += _get_dict_keys(dict_value)
return _get_dictionary(variables, dict_name, dict_items)
else:
return {}
else:
return {key: dictionary[key] for key in matching_keys}
return None | def _get_dictionary(variables: List[Tuple[(str, List[str])]], dict_name: str, dict_items: List[str]):
'\n Get the dictionary whose keys are the autocompletion options\n ${dict_name}([dict_key])*[<dictionary.keys()>]\n '
for (var_name, var_value) in variables:
if (not var_name.startswith(dict_name)):
continue
dictionary = _as_dictionary(var_value)
dict_keys = dictionary.keys()
dict_entry = dict_items.pop()
if (dict_entry == ''):
return dictionary
matching_keys = [key for key in dict_keys if (dict_entry in key)]
if (len(matching_keys) == 0):
return {}
if ((len(matching_keys) == 1) and (dict_entry == matching_keys[0])):
dict_value = dictionary[dict_entry]
if dict_value.startswith('&'):
dict_name = _get_dict_name(dict_value)
dict_items += _get_dict_keys(dict_value)
return _get_dictionary(variables, dict_name, dict_items)
else:
return {}
else:
return {key: dictionary[key] for key in matching_keys}
return None<|docstring|>Get the dictionary whose keys are the autocompletion options
${dict_name}([dict_key])*[<dictionary.keys()>]<|endoftext|> |
6f97a969bab54b4ec1b2e7c19055863b057756ac3ab29a6bdc85d5aa0f4694b3 | def _as_dictionary(dict_tokens: List[str]):
'\n Parse ["key1=val1", "key2=val2",...] as a dictionary\n '
dictionary = {}
for token in dict_tokens:
(key, val) = token.split('=')
dictionary.update({key: val})
return dictionary | Parse ["key1=val1", "key2=val2",...] as a dictionary | robotframework-ls/src/robotframework_ls/impl/dictionary_completions.py | _as_dictionary | JoeyGrajciar/robotframework-lsp | 92 | python | def _as_dictionary(dict_tokens: List[str]):
'\n \n '
dictionary = {}
for token in dict_tokens:
(key, val) = token.split('=')
dictionary.update({key: val})
return dictionary | def _as_dictionary(dict_tokens: List[str]):
'\n \n '
dictionary = {}
for token in dict_tokens:
(key, val) = token.split('=')
dictionary.update({key: val})
return dictionary<|docstring|>Parse ["key1=val1", "key2=val2",...] as a dictionary<|endoftext|> |
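A hedged usage sketch of the `key=value` parsing above; the closing comment points out a limitation of the bare `split('=')` (the tolerant variant is an illustration, not the project's code):

```python
def as_dictionary(dict_tokens):
    # Same logic as _as_dictionary above: each token is "key=value".
    dictionary = {}
    for token in dict_tokens:
        key, val = token.split("=")  # ValueError if the value itself contains '='
        dictionary[key] = val
    return dictionary

print(as_dictionary(["key1=val1", "key2=val2"]))
# {'key1': 'val1', 'key2': 'val2'}
# A more tolerant variant would use token.split("=", 1), so a token such as
# "query=a=b" keeps "a=b" intact as the value.
```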
e97cb964616153d87fd00cafb506625d72958e1abc1a43a779b81accf3e4fc0a | def get(self, telegram_id: int) -> Optional[User]:
'\n Find user by his telegram_id\n :param telegram_id: User telegram_id\n :rtype: Optional[User]\n :return: Found user or None\n '
return db.session.query(User).from_statement(text('SELECT * FROM users WHERE telegram_id = :telegram_id')).params(telegram_id=telegram_id).first() | Find user by his telegram_id
:param telegram_id: User telegram_id
:rtype: Optional[User]
:return: Found user or None | bot/src/repository/user_repository.py | get | demidovakatya/telegram-channels-feed | 37 | python | def get(self, telegram_id: int) -> Optional[User]:
'\n Find user by his telegram_id\n :param telegram_id: User telegram_id\n :rtype: Optional[User]\n :return: Found user or None\n '
return db.session.query(User).from_statement(text('SELECT * FROM users WHERE telegram_id = :telegram_id')).params(telegram_id=telegram_id).first() | def get(self, telegram_id: int) -> Optional[User]:
'\n Find user by his telegram_id\n :param telegram_id: User telegram_id\n :rtype: Optional[User]\n :return: Found user or None\n '
return db.session.query(User).from_statement(text('SELECT * FROM users WHERE telegram_id = :telegram_id')).params(telegram_id=telegram_id).first()<|docstring|>Find user by his telegram_id
:param telegram_id: User telegram_id
:rtype: Optional[User]
:return: Found user or None<|endoftext|> |
24baab672ef977b45b830bd85b325d3f24b35ec61cee66dbd4484c62bbdf3c11 | def get_or_create(self, telegram_id: int) -> User:
"\n Creates user if he doesn't exists or simply returns found user\n :param telegram_id: User telegram_id\n :rtype: User\n :return: Created or existing user\n "
return db.session.query(User).from_statement(text('\n INSERT INTO users (telegram_id)\n VALUES (:telegram_id)\n ON CONFLICT DO NOTHING;\n SELECT * FROM users WHERE telegram_id = :telegram_id;\n ')).params(telegram_id=telegram_id).first() | Creates user if he doesn't exists or simply returns found user
:param telegram_id: User telegram_id
:rtype: User
:return: Created or existing user | bot/src/repository/user_repository.py | get_or_create | demidovakatya/telegram-channels-feed | 37 | python | def get_or_create(self, telegram_id: int) -> User:
"\n Creates user if he doesn't exists or simply returns found user\n :param telegram_id: User telegram_id\n :rtype: User\n :return: Created or existing user\n "
return db.session.query(User).from_statement(text('\n INSERT INTO users (telegram_id)\n VALUES (:telegram_id)\n ON CONFLICT DO NOTHING;\n SELECT * FROM users WHERE telegram_id = :telegram_id;\n ')).params(telegram_id=telegram_id).first() | def get_or_create(self, telegram_id: int) -> User:
"\n Creates user if he doesn't exists or simply returns found user\n :param telegram_id: User telegram_id\n :rtype: User\n :return: Created or existing user\n "
return db.session.query(User).from_statement(text('\n INSERT INTO users (telegram_id)\n VALUES (:telegram_id)\n ON CONFLICT DO NOTHING;\n SELECT * FROM users WHERE telegram_id = :telegram_id;\n ')).params(telegram_id=telegram_id).first()<|docstring|>Creates user if he doesn't exists or simply returns found user
:param telegram_id: User telegram_id
:rtype: User
:return: Created or existing user<|endoftext|> |
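The query above leans on PostgreSQL's `INSERT ... ON CONFLICT DO NOTHING` followed by a `SELECT` in the same round trip. A standalone sketch of the pattern with `psycopg2` (the DSN and table shape are assumptions; psycopg2 exposes the result set of the final statement):

```python
import psycopg2  # assumes a PostgreSQL database; connection details are illustrative

conn = psycopg2.connect("dbname=feed user=bot")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO users (telegram_id) VALUES (%(tid)s)
        ON CONFLICT DO NOTHING;
        SELECT * FROM users WHERE telegram_id = %(tid)s;
        """,
        {"tid": 42},
    )
    user = cur.fetchone()  # row from the final SELECT: new or pre-existing user
```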
8b2f0f6cabfbf034e3b41ddf2aaaabea2ddf14729f64598a3bc8d2f9f7d4c558 | def change_settings(self, telegram_id: int, redirect_url: Optional[str]) -> bool:
'\n Change user settings\n :param telegram_id: User telegram ID\n :param redirect_url: Redirect URL. Can be None\n :rtype: bool\n :return: Update successful or not\n '
return (db.session.execute(text('UPDATE users SET redirect_url = :redirect_url WHERE telegram_id = :telegram_id'), {'telegram_id': telegram_id, 'redirect_url': redirect_url}).rowcount > 0) | Change user settings
:param telegram_id: User telegram ID
:param redirect_url: Redirect URL. Can be None
:rtype: bool
:return: Update successful or not | bot/src/repository/user_repository.py | change_settings | demidovakatya/telegram-channels-feed | 37 | python | def change_settings(self, telegram_id: int, redirect_url: Optional[str]) -> bool:
'\n Change user settings\n :param telegram_id: User telegram ID\n :param redirect_url: Redirect URL. Can be None\n :rtype: bool\n :return: Update successful or not\n '
return (db.session.execute(text('UPDATE users SET redirect_url = :redirect_url WHERE telegram_id = :telegram_id'), {'telegram_id': telegram_id, 'redirect_url': redirect_url}).rowcount > 0) | def change_settings(self, telegram_id: int, redirect_url: Optional[str]) -> bool:
'\n Change user settings\n :param telegram_id: User telegram ID\n :param redirect_url: Redirect URL. Can be None\n :rtype: bool\n :return: Update successful or not\n '
return (db.session.execute(text('UPDATE users SET redirect_url = :redirect_url WHERE telegram_id = :telegram_id'), {'telegram_id': telegram_id, 'redirect_url': redirect_url}).rowcount > 0)<|docstring|>Change user settings
:param telegram_id: User telegram ID
:param redirect_url: Redirect URL. Can be None
:rtype: bool
:return: Update successful or not<|endoftext|> |
b468f39ea09445b7f941ee7ffea8929b6c20e91852c9fdf049fdc3667e30e4d3 | def train_step(x_batch, y_batch):
'\n A single training step\n '
feed_dict = {cnn.input_x: x_batch, cnn.input_y: y_batch, cnn.dropout_keep_prob: args.dropout_keep_prob}
(_, step, loss) = sess.run([train_op, global_step, cnn.loss], feed_dict) | A single training step | ConvKB_tf/train.py | train_step | daiquocnguyen/ConvKB | 188 | python | def train_step(x_batch, y_batch):
'\n \n '
feed_dict = {cnn.input_x: x_batch, cnn.input_y: y_batch, cnn.dropout_keep_prob: args.dropout_keep_prob}
(_, step, loss) = sess.run([train_op, global_step, cnn.loss], feed_dict) | def train_step(x_batch, y_batch):
'\n \n '
feed_dict = {cnn.input_x: x_batch, cnn.input_y: y_batch, cnn.dropout_keep_prob: args.dropout_keep_prob}
(_, step, loss) = sess.run([train_op, global_step, cnn.loss], feed_dict)<|docstring|>A single training step<|endoftext|> |
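A self-contained TF1-style sketch of the `feed_dict` training-step pattern used above (the graph, shapes, and tensor names are illustrative, not ConvKB's):

```python
import tensorflow.compat.v1 as tf  # TF1-style graph mode

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 3], name="input_x")
y = tf.placeholder(tf.float32, [None, 1], name="input_y")
w = tf.Variable(tf.zeros([3, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # One training step: run the op and fetch the loss for the fed batch.
    _, batch_loss = sess.run([train_op, loss],
                             feed_dict={x: [[1.0, 2.0, 3.0]], y: [[6.0]]})
    print(batch_loss)
```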
8ab25d8ff196eb97a47c08ecc2917626ead49d035eb0834ae61fc5d8d2e0b58d | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
'Set up the sensor config entry.'
controller_data = get_controller_data(hass, entry)
async_add_entities([VeraSensor(device, controller_data) for device in controller_data.devices[Platform.SENSOR]], True) | Set up the sensor config entry. | homeassistant/components/vera/sensor.py | async_setup_entry | a-p-z/core | 30,023 | python | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
controller_data = get_controller_data(hass, entry)
async_add_entities([VeraSensor(device, controller_data) for device in controller_data.devices[Platform.SENSOR]], True) | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
controller_data = get_controller_data(hass, entry)
async_add_entities([VeraSensor(device, controller_data) for device in controller_data.devices[Platform.SENSOR]], True)<|docstring|>Set up the sensor config entry.<|endoftext|> |
a172d4668611aa3833d2002b92dbd5f077128eaa9a9efb9a5a375437c8a16562 | def __init__(self, vera_device: veraApi.VeraSensor, controller_data: ControllerData) -> None:
'Initialize the sensor.'
self.current_value: StateType = None
self._temperature_units: (str | None) = None
self.last_changed_time = None
VeraDevice.__init__(self, vera_device, controller_data)
self.entity_id = ENTITY_ID_FORMAT.format(self.vera_id) | Initialize the sensor. | homeassistant/components/vera/sensor.py | __init__ | a-p-z/core | 30,023 | python | def __init__(self, vera_device: veraApi.VeraSensor, controller_data: ControllerData) -> None:
self.current_value: StateType = None
self._temperature_units: (str | None) = None
self.last_changed_time = None
VeraDevice.__init__(self, vera_device, controller_data)
self.entity_id = ENTITY_ID_FORMAT.format(self.vera_id) | def __init__(self, vera_device: veraApi.VeraSensor, controller_data: ControllerData) -> None:
self.current_value: StateType = None
self._temperature_units: (str | None) = None
self.last_changed_time = None
VeraDevice.__init__(self, vera_device, controller_data)
self.entity_id = ENTITY_ID_FORMAT.format(self.vera_id)<|docstring|>Initialize the sensor.<|endoftext|> |
7895def382c720515d107eabfb8cf4c221bdf94f47ee80c7e27df0c53ef34f12 | @property
def native_value(self) -> StateType:
'Return the value of the sensor.'
return self.current_value | Return the value of the sensor. | homeassistant/components/vera/sensor.py | native_value | a-p-z/core | 30,023 | python | @property
def native_value(self) -> StateType:
return self.current_value | @property
def native_value(self) -> StateType:
return self.current_value<|docstring|>Return the value of the sensor.<|endoftext|>
18a8f45897a36af0bcff6877a29f3fd284b53637b6dae6b2ce1688fcfd585b05 | @property
def device_class(self) -> (str | None):
'Return the class of this entity.'
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return SensorDeviceClass.TEMPERATURE
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return SensorDeviceClass.ILLUMINANCE
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return SensorDeviceClass.HUMIDITY
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return SensorDeviceClass.POWER
return None | Return the class of this entity. | homeassistant/components/vera/sensor.py | device_class | a-p-z/core | 30,023 | python | @property
def device_class(self) -> (str | None):
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return SensorDeviceClass.TEMPERATURE
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return SensorDeviceClass.ILLUMINANCE
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return SensorDeviceClass.HUMIDITY
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return SensorDeviceClass.POWER
return None | @property
def device_class(self) -> (str | None):
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return SensorDeviceClass.TEMPERATURE
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return SensorDeviceClass.ILLUMINANCE
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return SensorDeviceClass.HUMIDITY
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return SensorDeviceClass.POWER
return None<|docstring|>Return the class of this entity.<|endoftext|> |
70428b9fae81591ff11d6be7ce972d4ba0f9bffabdb2a5b56f89924410fbc4da | @property
def native_unit_of_measurement(self) -> (str | None):
'Return the unit of measurement of this entity, if any.'
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return self._temperature_units
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return LIGHT_LUX
if (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
return 'level'
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return PERCENTAGE
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return POWER_WATT
return None | Return the unit of measurement of this entity, if any. | homeassistant/components/vera/sensor.py | native_unit_of_measurement | a-p-z/core | 30,023 | python | @property
def native_unit_of_measurement(self) -> (str | None):
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return self._temperature_units
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return LIGHT_LUX
if (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
return 'level'
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return PERCENTAGE
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return POWER_WATT
return None | @property
def native_unit_of_measurement(self) -> (str | None):
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
return self._temperature_units
if (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
return LIGHT_LUX
if (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
return 'level'
if (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
return PERCENTAGE
if (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
return POWER_WATT
return None<|docstring|>Return the unit of measurement of this entity, if any.<|endoftext|> |
52cf0466461c6023119c87e309bdbbde1ff667235ecb88588ce2f4171945351d | def update(self) -> None:
'Update the state.'
super().update()
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
self.current_value = self.vera_device.temperature
vera_temp_units = self.vera_device.vera_controller.temperature_units
if (vera_temp_units == 'F'):
self._temperature_units = TEMP_FAHRENHEIT
else:
self._temperature_units = TEMP_CELSIUS
elif (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
self.current_value = self.vera_device.humidity
elif (self.vera_device.category == veraApi.CATEGORY_SCENE_CONTROLLER):
controller = cast(veraApi.VeraSceneController, self.vera_device)
value = controller.get_last_scene_id(True)
time = controller.get_last_scene_time(True)
if (time == self.last_changed_time):
self.current_value = None
else:
self.current_value = value
self.last_changed_time = time
elif (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
self.current_value = self.vera_device.power
elif self.vera_device.is_trippable:
tripped = self.vera_device.is_tripped
self.current_value = ('Tripped' if tripped else 'Not Tripped')
else:
self.current_value = 'Unknown' | Update the state. | homeassistant/components/vera/sensor.py | update | a-p-z/core | 30,023 | python | def update(self) -> None:
super().update()
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
self.current_value = self.vera_device.temperature
vera_temp_units = self.vera_device.vera_controller.temperature_units
if (vera_temp_units == 'F'):
self._temperature_units = TEMP_FAHRENHEIT
else:
self._temperature_units = TEMP_CELSIUS
elif (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
self.current_value = self.vera_device.humidity
elif (self.vera_device.category == veraApi.CATEGORY_SCENE_CONTROLLER):
controller = cast(veraApi.VeraSceneController, self.vera_device)
value = controller.get_last_scene_id(True)
time = controller.get_last_scene_time(True)
if (time == self.last_changed_time):
self.current_value = None
else:
self.current_value = value
self.last_changed_time = time
elif (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
self.current_value = self.vera_device.power
elif self.vera_device.is_trippable:
tripped = self.vera_device.is_tripped
self.current_value = ('Tripped' if tripped else 'Not Tripped')
else:
self.current_value = 'Unknown' | def update(self) -> None:
super().update()
if (self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR):
self.current_value = self.vera_device.temperature
vera_temp_units = self.vera_device.vera_controller.temperature_units
if (vera_temp_units == 'F'):
self._temperature_units = TEMP_FAHRENHEIT
else:
self._temperature_units = TEMP_CELSIUS
elif (self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_UV_SENSOR):
self.current_value = self.vera_device.light
elif (self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR):
self.current_value = self.vera_device.humidity
elif (self.vera_device.category == veraApi.CATEGORY_SCENE_CONTROLLER):
controller = cast(veraApi.VeraSceneController, self.vera_device)
value = controller.get_last_scene_id(True)
time = controller.get_last_scene_time(True)
if (time == self.last_changed_time):
self.current_value = None
else:
self.current_value = value
self.last_changed_time = time
elif (self.vera_device.category == veraApi.CATEGORY_POWER_METER):
self.current_value = self.vera_device.power
elif self.vera_device.is_trippable:
tripped = self.vera_device.is_tripped
self.current_value = ('Tripped' if tripped else 'Not Tripped')
else:
self.current_value = 'Unknown'<|docstring|>Update the state.<|endoftext|> |
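Not Home Assistant code — a generic sketch of how a category-to-reader dispatch table can replace a long `if`/`elif` chain like the one in `update` above (the category keys and the `Device` stub are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Device:  # stub standing in for a vendor device object
    category: str
    temperature: float = 0.0
    humidity: float = 0.0
    power: float = 0.0

READERS = {
    "temperature": lambda d: d.temperature,
    "humidity": lambda d: d.humidity,
    "power": lambda d: d.power,
}

def read_value(device: Device):
    # Look up the reader for this category; fall back like the final else above.
    reader = READERS.get(device.category)
    return reader(device) if reader else "Unknown"

print(read_value(Device(category="humidity", humidity=42.0)))  # 42.0
```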
8407e368d55eb7ec4cfd54748f6fdc542dc1ca8a201b50169b29ce5c450b8747 | def __init__(self, settings, screen):
'Initialise the board and set its starting position.'
self.settings = settings
self.screen = screen
self.n_rows = settings.n_rows
self.n_cols = settings.n_cols
self.cell_length = settings.cell_size[0]
self.grid = np.zeros((settings.n_rows, settings.n_cols))
self.rects = self.init_rect_grid()
self.rect = pygame.Rect((self.settings.padding_left, self.settings.padding_top), settings.board_size)
self.dark_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['dark']), settings.cell_size)
self.light_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['light']), settings.cell_size)
self.image = self.init_board_image() | Initialise the board and set its starting position. | board.py | __init__ | jonjau/connect-4-pancake | 0 | python | def __init__(self, settings, screen):
self.settings = settings
self.screen = screen
self.n_rows = settings.n_rows
self.n_cols = settings.n_cols
self.cell_length = settings.cell_size[0]
self.grid = np.zeros((settings.n_rows, settings.n_cols))
self.rects = self.init_rect_grid()
self.rect = pygame.Rect((self.settings.padding_left, self.settings.padding_top), settings.board_size)
self.dark_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['dark']), settings.cell_size)
self.light_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['light']), settings.cell_size)
self.image = self.init_board_image() | def __init__(self, settings, screen):
self.settings = settings
self.screen = screen
self.n_rows = settings.n_rows
self.n_cols = settings.n_cols
self.cell_length = settings.cell_size[0]
self.grid = np.zeros((settings.n_rows, settings.n_cols))
self.rects = self.init_rect_grid()
self.rect = pygame.Rect((self.settings.padding_left, self.settings.padding_top), settings.board_size)
self.dark_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['dark']), settings.cell_size)
self.light_tile_image = pygame.transform.scale(pygame.image.load(settings.tile_image_paths['light']), settings.cell_size)
self.image = self.init_board_image()<|docstring|>Initialise the board and set its starting position.<|endoftext|> |
5465728cbbfce5260d56279ca0cc15156e8758afc1516fbc158394075f69aea8 | def init_rect_grid(self):
"\n Initialises and returns a 2D grid of pygame.Rect's, representing\n the tiles on the board.\n "
n_rows = self.n_rows
n_cols = self.n_cols
cell_length = self.cell_length
rects = [[None for i in range(n_cols)] for j in range(n_rows)]
for i in range(n_rows):
for j in range(n_cols):
rects[i][j] = pygame.Rect((i * cell_length), (j * cell_length), cell_length, cell_length)
return rects | Initialises and returns a 2D grid of pygame.Rect's, representing
the tiles on the board. | board.py | init_rect_grid | jonjau/connect-4-pancake | 0 | python | def init_rect_grid(self):
"\n Initialises and returns a 2D grid of pygame.Rect's, representing\n the tiles on the board.\n "
n_rows = self.n_rows
n_cols = self.n_cols
cell_length = self.cell_length
rects = [[None for i in range(n_cols)] for j in range(n_rows)]
for i in range(n_rows):
for j in range(n_cols):
rects[i][j] = pygame.Rect((i * cell_length), (j * cell_length), cell_length, cell_length)
return rects | def init_rect_grid(self):
"\n Initialises and returns a 2D grid of pygame.Rect's, representing\n the tiles on the board.\n "
n_rows = self.n_rows
n_cols = self.n_cols
cell_length = self.cell_length
rects = [[None for i in range(n_cols)] for j in range(n_rows)]
for i in range(n_rows):
for j in range(n_cols):
rects[i][j] = pygame.Rect((i * cell_length), (j * cell_length), cell_length, cell_length)
return rects<|docstring|>Initialises and returns a 2D grid of pygame.Rect's, representing
the tiles on the board.<|endoftext|> |
0e2353c0608a4488792a43897c6bf6c6f5e80e8a860edf129bca7175c1b348cd | def init_board_image(self):
'\n Draws the board at its current position, tile by tile, then\n returns that image as a `pygame.Surface`.\n '
image = pygame.Surface(self.settings.board_size)
for row in range(self.n_rows):
for col in range(self.n_cols):
if ((row % 2) == (col % 2)):
tile_image = self.light_tile_image
else:
tile_image = self.dark_tile_image
image.blit(tile_image, self.rects[row][col].topleft[::(- 1)])
return image | Draws the board at its current position, tile by tile, then
returns that image as a `pygame.Surface`. | board.py | init_board_image | jonjau/connect-4-pancake | 0 | python | def init_board_image(self):
'\n Draws the board at its current position, tile by tile, then\n returns that image as a `pygame.Surface`.\n '
image = pygame.Surface(self.settings.board_size)
for row in range(self.n_rows):
for col in range(self.n_cols):
if ((row % 2) == (col % 2)):
tile_image = self.light_tile_image
else:
tile_image = self.dark_tile_image
image.blit(tile_image, self.rects[row][col].topleft[::(- 1)])
return image | def init_board_image(self):
'\n Draws the board at its current position, tile by tile, then\n returns that image as a `pygame.Surface`.\n '
image = pygame.Surface(self.settings.board_size)
for row in range(self.n_rows):
for col in range(self.n_cols):
if ((row % 2) == (col % 2)):
tile_image = self.light_tile_image
else:
tile_image = self.dark_tile_image
image.blit(tile_image, self.rects[row][col].topleft[::(- 1)])
return image<|docstring|>Draws the board at its current position, tile by tile, then
returns that image as a `pygame.Surface`.<|endoftext|> |
deb8e9c70f1374fc74eb08c30dd817fd030f564ce23a2029a76681cce5070091 | def draw(self):
'Blit the board to the screen at its current position.'
self.screen.blit(self.image, self.rect) | Blit the board to the screen at its current position. | board.py | draw | jonjau/connect-4-pancake | 0 | python | def draw(self):
self.screen.blit(self.image, self.rect) | def draw(self):
self.screen.blit(self.image, self.rect)<|docstring|>Blit the board to the screen at its current position.<|endoftext|> |
e70a5c0f295127abab423823ea8e3ba03f987e9d4e6447f2d3fdd4df259065a1 | def _set_up_test_uptime(self):
'\n Define common mock data for status.uptime tests\n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.now = 1477004312
m.ut = 1540154.0
m.idle = 3047777.32
m.ret = {'users': 3, 'seconds': 1540154, 'since_t': 1475464158, 'days': 17, 'since_iso': '2016-10-03T03:09:18', 'time': '19:49'}
return m | Define common mock data for status.uptime tests | tests/unit/modules/test_status.py | _set_up_test_uptime | johnskopis/salt | 12 | python | def _set_up_test_uptime(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.now = 1477004312
m.ut = 1540154.0
m.idle = 3047777.32
m.ret = {'users': 3, 'seconds': 1540154, 'since_t': 1475464158, 'days': 17, 'since_iso': '2016-10-03T03:09:18', 'time': '19:49'}
return m | def _set_up_test_uptime(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.now = 1477004312
m.ut = 1540154.0
m.idle = 3047777.32
m.ret = {'users': 3, 'seconds': 1540154, 'since_t': 1475464158, 'days': 17, 'since_iso': '2016-10-03T03:09:18', 'time': '19:49'}
return m<|docstring|>Define common mock data for status.uptime tests<|endoftext|> |
79303cfec42c76b40e099e9fc068c2c8df122785d69905061617122c235647ca | def _set_up_test_uptime_sunos(self):
'\n Define common mock data for cmd.run_all for status.uptime on SunOS\n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'retcode': 0, 'stdout': 'unix:0:system_misc:boot_time 1475464158'}
return m | Define common mock data for cmd.run_all for status.uptime on SunOS | tests/unit/modules/test_status.py | _set_up_test_uptime_sunos | johnskopis/salt | 12 | python | def _set_up_test_uptime_sunos(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'retcode': 0, 'stdout': 'unix:0:system_misc:boot_time 1475464158'}
return m | def _set_up_test_uptime_sunos(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'retcode': 0, 'stdout': 'unix:0:system_misc:boot_time 1475464158'}
return m<|docstring|>Define common mock data for cmd.run_all for status.uptime on SunOS<|endoftext|> |
62c97889c272655af3f48949ed5e9604954e6fc00fb68f6035d41c3dab376582 | def test_uptime_linux(self):
'\n Test modules.status.uptime function for Linux\n '
m = self._set_up_test_uptime()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=True), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3']))}), patch('time.time', MagicMock(return_value=m.now)), patch('os.path.exists', MagicMock(return_value=True)):
proc_uptime = salt.utils.stringutils.to_str('{0} {1}'.format(m.ut, m.idle))
with patch('salt.utils.files.fopen', mock_open(read_data=proc_uptime)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch('os.path.exists', MagicMock(return_value=False)):
with self.assertRaises(CommandExecutionError):
status.uptime() | Test modules.status.uptime function for Linux | tests/unit/modules/test_status.py | test_uptime_linux | johnskopis/salt | 12 | python | def test_uptime_linux(self):
'\n \n '
m = self._set_up_test_uptime()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=True), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3']))}), patch('time.time', MagicMock(return_value=m.now)), patch('os.path.exists', MagicMock(return_value=True)):
proc_uptime = salt.utils.stringutils.to_str('{0} {1}'.format(m.ut, m.idle))
with patch('salt.utils.files.fopen', mock_open(read_data=proc_uptime)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch('os.path.exists', MagicMock(return_value=False)):
with self.assertRaises(CommandExecutionError):
status.uptime() | def test_uptime_linux(self):
'\n \n '
m = self._set_up_test_uptime()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=True), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3']))}), patch('time.time', MagicMock(return_value=m.now)), patch('os.path.exists', MagicMock(return_value=True)):
proc_uptime = salt.utils.stringutils.to_str('{0} {1}'.format(m.ut, m.idle))
with patch('salt.utils.files.fopen', mock_open(read_data=proc_uptime)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch('os.path.exists', MagicMock(return_value=False)):
with self.assertRaises(CommandExecutionError):
status.uptime()<|docstring|>Test modules.status.uptime function for Linux<|endoftext|> |
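A standalone sketch of the `/proc/uptime` mocking pattern exercised above, using the stdlib `unittest.mock` (the test itself patches Salt's own `salt.utils.files.fopen` with Salt's `mock_open`; this is a simplified stand-in):

```python
from unittest.mock import mock_open, patch

def read_uptime(path="/proc/uptime"):
    # /proc/uptime holds two floats: "uptime_seconds idle_seconds".
    with open(path) as f:
        up, idle = f.read().split()
    return float(up), float(idle)

with patch("builtins.open", mock_open(read_data="1540154.0 3047777.32")):
    print(read_uptime())  # (1540154.0, 3047777.32)
```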
e6f4d8327d2e8e2eda424a1351c5d2769c94061b4ba11b39d0d10339f370babe | def test_uptime_sunos(self):
'\n Test modules.status.uptime function for SunOS\n '
m = self._set_up_test_uptime()
m2 = self._set_up_test_uptime_sunos()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=True), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'cmd.run_all': MagicMock(return_value=m2.ret)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret) | Test modules.status.uptime function for SunOS | tests/unit/modules/test_status.py | test_uptime_sunos | johnskopis/salt | 12 | python | def test_uptime_sunos(self):
'\n \n '
m = self._set_up_test_uptime()
m2 = self._set_up_test_uptime_sunos()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=True), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'cmd.run_all': MagicMock(return_value=m2.ret)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret) | def test_uptime_sunos(self):
'\n \n '
m = self._set_up_test_uptime()
m2 = self._set_up_test_uptime_sunos()
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=True), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'cmd.run_all': MagicMock(return_value=m2.ret)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)<|docstring|>Test modules.status.uptime function for SunOS<|endoftext|> |
d4507b0183a2e8239e992cbdefdae9602b2e570631bfc4aa86d2e96fc381bc96 | def test_uptime_macos(self):
'\n Test modules.status.uptime function for macOS\n '
m = self._set_up_test_uptime()
kern_boottime = '{{ sec = {0}, usec = {1:0<6} }} Mon Oct 03 03:09:18.23 2016'.format(*six.text_type((m.now - m.ut)).split('.'))
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=True), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'sysctl.get': MagicMock(return_value=kern_boottime)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch.dict(status.__salt__, {'sysctl.get': MagicMock(return_value='')}):
with self.assertRaises(CommandExecutionError):
status.uptime() | Test modules.status.uptime function for macOS | tests/unit/modules/test_status.py | test_uptime_macos | johnskopis/salt | 12 | python | def test_uptime_macos(self):
'\n \n '
m = self._set_up_test_uptime()
kern_boottime = '{{ sec = {0}, usec = {1:0<6} }} Mon Oct 03 03:09:18.23 2016'.format(*six.text_type((m.now - m.ut)).split('.'))
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=True), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'sysctl.get': MagicMock(return_value=kern_boottime)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch.dict(status.__salt__, {'sysctl.get': MagicMock(return_value='')}):
with self.assertRaises(CommandExecutionError):
status.uptime() | def test_uptime_macos(self):
'\n \n '
m = self._set_up_test_uptime()
kern_boottime = '{{ sec = {0}, usec = {1:0<6} }} Mon Oct 03 03:09:18.23 2016'.format(*six.text_type((m.now - m.ut)).split('.'))
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=True), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=os.linesep.join(['1', '2', '3'])), 'sysctl.get': MagicMock(return_value=kern_boottime)}), patch('time.time', MagicMock(return_value=m.now)):
ret = status.uptime()
self.assertDictEqual(ret, m.ret)
with patch.dict(status.__salt__, {'sysctl.get': MagicMock(return_value='')}):
with self.assertRaises(CommandExecutionError):
status.uptime()<|docstring|>Test modules.status.uptime function for macOS<|endoftext|> |
1855376d12642a6392a8700e673c8db749238744b2e8e49bee2e9132dc976db9 | def test_uptime_return_success_not_supported(self):
'\n Test modules.status.uptime function for other platforms\n '
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)):
exc_mock = MagicMock(side_effect=CommandExecutionError)
with self.assertRaises(CommandExecutionError):
with patch.dict(status.__salt__, {'cmd.run': exc_mock}):
status.uptime() | Test modules.status.uptime function for other platforms | tests/unit/modules/test_status.py | test_uptime_return_success_not_supported | johnskopis/salt | 12 | python | def test_uptime_return_success_not_supported(self):
'\n \n '
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)):
exc_mock = MagicMock(side_effect=CommandExecutionError)
with self.assertRaises(CommandExecutionError):
with patch.dict(status.__salt__, {'cmd.run': exc_mock}):
status.uptime() | def test_uptime_return_success_not_supported(self):
'\n \n '
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=False), is_netbsd=MagicMock(return_value=False)):
exc_mock = MagicMock(side_effect=CommandExecutionError)
with self.assertRaises(CommandExecutionError):
with patch.dict(status.__salt__, {'cmd.run': exc_mock}):
status.uptime()<|docstring|>Test modules.status.uptime function for other platforms<|endoftext|> |
74c3d3e9f3a9110724a1d8a4bf02dc0eacd1565bea57641ac693d5cb6142f906 | def _set_up_test_cpustats_openbsd(self):
'\n Define mock data for status.cpustats on OpenBSD\n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'0': {'User': '0.0%', 'Nice': '0.0%', 'System': '4.5%', 'Interrupt': '0.5%', 'Idle': '95.0%'}}
return m | Define mock data for status.cpustats on OpenBSD | tests/unit/modules/test_status.py | _set_up_test_cpustats_openbsd | johnskopis/salt | 12 | python | def _set_up_test_cpustats_openbsd(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'0': {'User': '0.0%', 'Nice': '0.0%', 'System': '4.5%', 'Interrupt': '0.5%', 'Idle': '95.0%'}}
return m | def _set_up_test_cpustats_openbsd(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = {'0': {'User': '0.0%', 'Nice': '0.0%', 'System': '4.5%', 'Interrupt': '0.5%', 'Idle': '95.0%'}}
return m<|docstring|>Define mock data for status.cpustats on OpenBSD<|endoftext|> |
cfa312847fdfd8334e220c475d3d434cf8a7a037151212146bbdddedca5b146d | def test_cpustats_openbsd(self):
'\n Test modules.status.cpustats function for OpenBSD\n '
m = self._set_up_test_cpustats_openbsd()
systat = '\n\n 1 users Load 0.20 0.07 0.05 salt.localdomain 09:42:42\nCPU User Nice System Interrupt Idle\n0 0.0% 0.0% 4.5% 0.5% 95.0%\n'
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=True), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__grains__, {'kernel': 'OpenBSD'}), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=systat)}):
ret = status.cpustats()
self.assertDictEqual(ret, m.ret) | Test modules.status.cpustats function for OpenBSD | tests/unit/modules/test_status.py | test_cpustats_openbsd | johnskopis/salt | 12 | python | def test_cpustats_openbsd(self):
'\n \n '
m = self._set_up_test_cpustats_openbsd()
systat = '\n\n 1 users Load 0.20 0.07 0.05 salt.localdomain 09:42:42\nCPU User Nice System Interrupt Idle\n0 0.0% 0.0% 4.5% 0.5% 95.0%\n'
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=True), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__grains__, {'kernel': 'OpenBSD'}), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=systat)}):
ret = status.cpustats()
self.assertDictEqual(ret, m.ret) | def test_cpustats_openbsd(self):
'\n \n '
m = self._set_up_test_cpustats_openbsd()
systat = '\n\n 1 users Load 0.20 0.07 0.05 salt.localdomain 09:42:42\nCPU User Nice System Interrupt Idle\n0 0.0% 0.0% 4.5% 0.5% 95.0%\n'
with patch.multiple(salt.utils.platform, is_linux=MagicMock(return_value=False), is_sunos=MagicMock(return_value=False), is_darwin=MagicMock(return_value=False), is_freebsd=MagicMock(return_value=False), is_openbsd=MagicMock(return_value=True), is_netbsd=MagicMock(return_value=False)), patch('salt.utils.path.which', MagicMock(return_value=True)), patch.dict(status.__grains__, {'kernel': 'OpenBSD'}), patch.dict(status.__salt__, {'cmd.run': MagicMock(return_value=systat)}):
ret = status.cpustats()
self.assertDictEqual(ret, m.ret)<|docstring|>Test modules.status.cpustats function for OpenBSD<|endoftext|> |
00090ec01fc7a13d2132a1aaaa388c595e9cfc3efc3454e6eb6cb501bacbbfce | def _set_up_test_w_linux(self):
'\n Define mock data for status.w on Linux\n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0s', 'jcpu': '0.24s', 'login': '13:42', 'pcpu': '0.16s', 'tty': 'pts/1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m | Define mock data for status.w on Linux | tests/unit/modules/test_status.py | _set_up_test_w_linux | johnskopis/salt | 12 | python | def _set_up_test_w_linux(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0s', 'jcpu': '0.24s', 'login': '13:42', 'pcpu': '0.16s', 'tty': 'pts/1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m | def _set_up_test_w_linux(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0s', 'jcpu': '0.24s', 'login': '13:42', 'pcpu': '0.16s', 'tty': 'pts/1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m<|docstring|>Define mock data for status.w on Linux<|endoftext|> |
665d37643909e5919ab8fe1b27cc281fa46d172b7135554e6b004f4867ff8276 | def _set_up_test_w_bsd(self):
'\n Define mock data for status.w on BSD\n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0', 'from': '10.2.2.1', 'login': '1:42PM', 'tty': 'p1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m | Define mock data for status.w on BSD | tests/unit/modules/test_status.py | _set_up_test_w_bsd | johnskopis/salt | 12 | python | def _set_up_test_w_bsd(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0', 'from': '10.2.2.1', 'login': '1:42PM', 'tty': 'p1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m | def _set_up_test_w_bsd(self):
'\n \n '
class MockData(object):
'\n Store mock data\n '
m = MockData()
m.ret = [{'idle': '0', 'from': '10.2.2.1', 'login': '1:42PM', 'tty': 'p1', 'user': 'root', 'what': 'nmap -sV 10.2.2.2'}]
return m<|docstring|>Define mock data for status.w on BSD<|endoftext|>
e7d34f0b86d78db1d50fb0b62b04792cbeb32048afb80b30c7d9a585154a0559 | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint\n '
self._endpoint = endpoint
self._address: str = address
self._session: aiohttp.ClientSession = None
self._verb2coro = dict() | Initializes instance.
Args:
address (str): Resource endpoint | src/rmlab_http_client/client/_core.py | __init__ | antonrv/rmlab-py-http-client | 0 | python | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint\n '
self._endpoint = endpoint
self._address: str = address
self._session: aiohttp.ClientSession = None
self._verb2coro = dict() | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint\n '
self._endpoint = endpoint
self._address: str = address
self._session: aiohttp.ClientSession = None
self._verb2coro = dict()<|docstring|>Initializes instance.
Args:
address (str): Resource endpoint<|endoftext|> |
c6a87ec22fc96e18a22a3ed518c8867de5d9cb5e8b91fa5473d9e89078ca1d22 | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Public resource endpoint\n '
super(HTTPClientPublic, self).__init__(endpoint=endpoint, address=address) | Initializes instance.
Args:
address (str): Public resource endpoint | src/rmlab_http_client/client/_core.py | __init__ | antonrv/rmlab-py-http-client | 0 | python | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Public resource endpoint\n '
super(HTTPClientPublic, self).__init__(endpoint=endpoint, address=address) | def __init__(self, endpoint: Endpoint, *, address: str):
'Initializes instance.\n\n Args:\n address (str): Public resource endpoint\n '
super(HTTPClientPublic, self).__init__(endpoint=endpoint, address=address)<|docstring|>Initializes instance.
Args:
address (str): Public resource endpoint<|endoftext|> |
151471b51634e8009351aab707492c68900326a60346059aab9ba275c1179576 | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client for public resources.\n\n Returns:\n HTTPClientPublic: This client instance.\n '
self._session = aiohttp.ClientSession(raise_for_status=False)
return (await super(HTTPClientPublic, self).__aenter__()) | Initializes asynchronous context manager, creating a http client for public resources.
Returns:
HTTPClientPublic: This client instance. | src/rmlab_http_client/client/_core.py | __aenter__ | antonrv/rmlab-py-http-client | 0 | python | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client for public resources.\n\n Returns:\n HTTPClientPublic: This client instance.\n '
self._session = aiohttp.ClientSession(raise_for_status=False)
return (await super(HTTPClientPublic, self).__aenter__()) | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client for public resources.\n\n Returns:\n HTTPClientPublic: This client instance.\n '
self._session = aiohttp.ClientSession(raise_for_status=False)
return (await super(HTTPClientPublic, self).__aenter__())<|docstring|>Initializes asynchronous context manager, creating a http client for public resources.
Returns:
HTTPClientPublic: This client instance.<|endoftext|> |
5b02882eabad2d344b92394698698031ad6d9e6a6f5966d7d09db93385cc3178 | def __init__(self, endpoint: Endpoint, address: str, *, basic_auth: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the basic auth\n basic_auth (Optional[str]): Basic authentication data. Defaults to None.\n '
super(HTTPClientBasic, self).__init__(endpoint=endpoint, address=address)
basic_auth = (basic_auth or Cache.get_credential('basic_auth'))
if (basic_auth is None):
raise ValueError(f'Undefined Basic auth')
self._basic_auth = base64.b64encode(basic_auth.encode()).decode('utf-8') | Initializes instance.
Args:
address (str): Resource endpoint behind the basic auth
basic_auth (Optional[str]): Basic authentication data. Defaults to None. | src/rmlab_http_client/client/_core.py | __init__ | antonrv/rmlab-py-http-client | 0 | python | def __init__(self, endpoint: Endpoint, address: str, *, basic_auth: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the basic auth\n basic_auth (Optional[str]): Basic authentication data. Defaults to None.\n '
super(HTTPClientBasic, self).__init__(endpoint=endpoint, address=address)
basic_auth = (basic_auth or Cache.get_credential('basic_auth'))
if (basic_auth is None):
raise ValueError(f'Undefined Basic auth')
self._basic_auth = base64.b64encode(basic_auth.encode()).decode('utf-8') | def __init__(self, endpoint: Endpoint, address: str, *, basic_auth: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the basic auth\n basic_auth (Optional[str]): Basic authentication data. Defaults to None.\n '
super(HTTPClientBasic, self).__init__(endpoint=endpoint, address=address)
basic_auth = (basic_auth or Cache.get_credential('basic_auth'))
if (basic_auth is None):
raise ValueError(f'Undefined Basic auth')
self._basic_auth = base64.b64encode(basic_auth.encode()).decode('utf-8')<|docstring|>Initializes instance.
Args:
address (str): Resource endpoint behind the basic auth
basic_auth (Optional[str]): Basic authentication data. Defaults to None.<|endoftext|> |
4342f380549800e32029becc8f1dd1483ef70e1f4523b99c3d32c3d7ecbaecfc | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind basic auth.\n\n Returns:\n HTTPClientBasic: This client instance.\n '
auth_headers = {'Authorization': ('Basic ' + self._basic_auth)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientBasic, self).__aenter__()) | Initializes asynchronous context manager, creating a http client session
for resources behind basic auth.
Returns:
HTTPClientBasic: This client instance. | src/rmlab_http_client/client/_core.py | __aenter__ | antonrv/rmlab-py-http-client | 0 | python | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind basic auth.\n\n Returns:\n HTTPClientBasic: This client instance.\n '
auth_headers = {'Authorization': ('Basic ' + self._basic_auth)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientBasic, self).__aenter__()) | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind basic auth.\n\n Returns:\n HTTPClientBasic: This client instance.\n '
auth_headers = {'Authorization': ('Basic ' + self._basic_auth)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientBasic, self).__aenter__())<|docstring|>Initializes asynchronous context manager, creating a http client session
for resources behind basic auth.
Returns:
HTTPClientBasic: This client instance.<|endoftext|> |
4a32cd10861ad7e701740ee809880ba328d78d0f4bbbb2604a058401b32db080 | def __init__(self, endpoint: Endpoint, address: str, *, api_key: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the api key\n api_key (Optional[str]): Api key. Defaults to None.\n '
super(HTTPClientApiKey, self).__init__(endpoint=endpoint, address=address)
self._api_key = (api_key or Cache.get_credential('api_key'))
if (self._api_key is None):
raise ValueError(f'Undefined Api Key') | Initializes instance.
Args:
address (str): Resource endpoint behind the api key
api_key (Optional[str]): Api key. Defaults to None. | src/rmlab_http_client/client/_core.py | __init__ | antonrv/rmlab-py-http-client | 0 | python | def __init__(self, endpoint: Endpoint, address: str, *, api_key: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the api key\n api_key (Optional[str]): Api key. Defaults to None.\n '
super(HTTPClientApiKey, self).__init__(endpoint=endpoint, address=address)
self._api_key = (api_key or Cache.get_credential('api_key'))
if (self._api_key is None):
raise ValueError(f'Undefined Api Key') | def __init__(self, endpoint: Endpoint, address: str, *, api_key: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the api key\n api_key (Optional[str]): Api key. Defaults to None.\n '
super(HTTPClientApiKey, self).__init__(endpoint=endpoint, address=address)
self._api_key = (api_key or Cache.get_credential('api_key'))
if (self._api_key is None):
raise ValueError(f'Undefined Api Key')<|docstring|>Initializes instance.
Args:
address (str): Resource endpoint behind the api key
api_key (Optional[str]): Api key. Defaults to None.<|endoftext|> |
1051986b8b68a543484eb3864599c76f956e925ff7a6b8da13955c23b8a06f8f | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind a API key.\n\n Returns:\n HTTPClientApiKey: This client instance.\n '
auth_headers = {'X-Api-Key': self._api_key}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientApiKey, self).__aenter__()) | Initializes asynchronous context manager, creating a http client session
for resources behind a API key.
Returns:
HTTPClientApiKey: This client instance. | src/rmlab_http_client/client/_core.py | __aenter__ | antonrv/rmlab-py-http-client | 0 | python | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind a API key.\n\n Returns:\n HTTPClientApiKey: This client instance.\n '
auth_headers = {'X-Api-Key': self._api_key}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientApiKey, self).__aenter__()) | async def __aenter__(self):
'Initializes asynchronous context manager, creating a http client session\n for resources behind a API key.\n\n Returns:\n HTTPClientApiKey: This client instance.\n '
auth_headers = {'X-Api-Key': self._api_key}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientApiKey, self).__aenter__())<|docstring|>Initializes asynchronous context manager, creating a http client session
for resources behind a API key.
Returns:
HTTPClientApiKey: This client instance.<|endoftext|> |
4974e248766e93d366e6339b59319afb323516f4dedb4bf107551c2ebd5e8865 | def __init__(self, endpoint: Endpoint, address: str, *, jwt: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the access token\n jwt (Optional[str]): JWT (access or refresh). Defaults to None.\n '
super(HTTPClientJWT, self).__init__(endpoint=endpoint, address=address)
self._jwt = (jwt or Cache.get_credential('access_token'))
if (self._jwt is None):
raise ValueError(f'Undefined JWT') | Initializes instance.
Args:
address (str): Resource endpoint behind the access token
jwt (Optional[str]): JWT (access or refresh). Defaults to None. | src/rmlab_http_client/client/_core.py | __init__ | antonrv/rmlab-py-http-client | 0 | python | def __init__(self, endpoint: Endpoint, address: str, *, jwt: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the access token\n jwt (Optional[str]): JWT (access or refresh). Defaults to None.\n '
super(HTTPClientJWT, self).__init__(endpoint=endpoint, address=address)
self._jwt = (jwt or Cache.get_credential('access_token'))
if (self._jwt is None):
raise ValueError(f'Undefined JWT') | def __init__(self, endpoint: Endpoint, address: str, *, jwt: Optional[str]=None):
'Initializes instance.\n\n Args:\n address (str): Resource endpoint behind the access token\n jwt (Optional[str]): JWT (access or refresh). Defaults to None.\n '
super(HTTPClientJWT, self).__init__(endpoint=endpoint, address=address)
self._jwt = (jwt or Cache.get_credential('access_token'))
if (self._jwt is None):
raise ValueError(f'Undefined JWT')<|docstring|>Initializes instance.
Args:
address (str): Resource endpoint behind the access token
jwt (Optional[str]): JWT (access or refresh). Defaults to None.<|endoftext|> |
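A construction sketch mirroring the API-key fallback: the docstring says the token may be an access or a refresh JWT, so a refresh flow would presumably pass it explicitly while ordinary calls rely on the cached access token. Endpoint members, addresses, and the token value are placeholder assumptions.

refresh_jwt = "eyJhbGciOi..."  # placeholder JWT string

refresh_client = HTTPClientJWT(
    endpoint=Endpoint.POST,                          # hypothetical Endpoint member
    address="https://api.example.com/auth/refresh",  # placeholder address
    jwt=refresh_jwt,                                 # explicit (refresh) JWT
)

data_client = HTTPClientJWT(
    endpoint=Endpoint.GET,
    address="https://api.example.com/data",
)  # no jwt: falls back to Cache.get_credential('access_token'); ValueError if unset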
ae6a103b10e886fc35fae7071bd3899b07f7fc1196abc8981996d546129940ac | async def __aenter__(self):
'Initializes asynchronous context manager, creating an HTTP client session\n for resources behind JWT auth.\n\n Returns:\n HTTPClientJWT: This client instance.\n '
auth_headers = {'Authorization': ('Bearer ' + self._jwt)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientJWT, self).__aenter__()) | Initializes asynchronous context manager, creating an HTTP client session
for resources behind JWT auth.
Returns:
HTTPClientJWT: This client instance. | src/rmlab_http_client/client/_core.py | __aenter__ | antonrv/rmlab-py-http-client | 0 | python | async def __aenter__(self):
'Initializes asynchronous context manager, creating an HTTP client session\n for resources behind JWT auth.\n\n Returns:\n HTTPClientJWT: This client instance.\n '
auth_headers = {'Authorization': ('Bearer ' + self._jwt)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientJWT, self).__aenter__()) | async def __aenter__(self):
'Initializes asynchronous context manager, creating an HTTP client session\n for resources behind JWT auth.\n\n Returns:\n HTTPClientJWT: This client instance.\n '
auth_headers = {'Authorization': ('Bearer ' + self._jwt)}
self._session = aiohttp.ClientSession(headers=auth_headers, raise_for_status=False)
return (await super(HTTPClientJWT, self).__aenter__())<|docstring|>Initializes asynchronous context manager, creating an HTTP client session
for resources behind JWT auth.
Returns:
HTTPClientJWT: This client instance.<|endoftext|> |
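The JWT client is used the same way; the only difference from the API-key sketch above is the 'Authorization: Bearer <jwt>' header that `__aenter__` installs on the session. Again, the Endpoint member and address are placeholder assumptions.

import asyncio

async def main():
    async with HTTPClientJWT(
        endpoint=Endpoint.GET,                   # hypothetical Endpoint member
        address="https://api.example.com/data",  # placeholder address
    ) as client:
        ...  # requests inherit the 'Authorization: Bearer <jwt>' session header

asyncio.run(main())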
02e760f1a1ad5f64d6ee189549f1d904e295ecc85944ad1530a47eb7613ef93e | def plot(density_map, name):
'\n @brief density map contour and heat map \n '
print(np.amax(density_map))
print(np.mean(density_map))
fig = plt.figure(figsize=(4, 3))
ax = fig.gca(projection='3d')
x = np.arange(density_map.shape[0])
y = np.arange(density_map.shape[1])
(x, y) = np.meshgrid(x, y)
ax.plot_surface(x, y, density_map, alpha=0.8)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('density')
plt.savefig((name + '.3d.png'))
plt.clf()
(fig, ax) = plt.subplots()
ax.pcolor(density_map)
fig.tight_layout()
plt.savefig((name + '.2d.png')) | @brief density map contour and heat map | dreamplace/ops/density_map/density_map.py | plot | ArEsKay3/DREAMPlace | 323 | python | def plot(density_map, name):
'\n \n '
print(np.amax(density_map))
print(np.mean(density_map))
fig = plt.figure(figsize=(4, 3))
ax = fig.gca(projection='3d')
x = np.arange(density_map.shape[0])
y = np.arange(density_map.shape[1])
(x, y) = np.meshgrid(x, y)
ax.plot_surface(x, y, density_map, alpha=0.8)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('density')
plt.savefig((name + '.3d.png'))
plt.clf()
(fig, ax) = plt.subplots()
ax.pcolor(density_map)
fig.tight_layout()
plt.savefig((name + '.2d.png')) | def plot(density_map, name):
'\n \n '
print(np.amax(density_map))
print(np.mean(density_map))
fig = plt.figure(figsize=(4, 3))
ax = fig.gca(projection='3d')
x = np.arange(density_map.shape[0])
y = np.arange(density_map.shape[1])
(x, y) = np.meshgrid(x, y)
ax.plot_surface(x, y, density_map, alpha=0.8)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('density')
plt.savefig((name + '.3d.png'))
plt.clf()
(fig, ax) = plt.subplots()
ax.pcolor(density_map)
fig.tight_layout()
plt.savefig((name + '.2d.png'))<|docstring|>@brief density map contour and heat map<|endoftext|> |
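A toy driver for `plot`, not from the repository: it assumes the module's implied imports (`numpy as np`, `matplotlib.pyplot as plt`, and the mplot3d toolkit for the `projection='3d'` axes). A square map is used so the `meshgrid` orientation matches what `plot_surface` receives.

import numpy as np

rng = np.random.default_rng(0)
density_map = rng.random((32, 32))  # toy 32x32 bin-density grid (square on purpose)
plot(density_map, "toy_density")    # writes toy_density.3d.png and toy_density.2d.png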
755bfb27be57c6c6e3c3f5e156972430910c68f88dd7f4b2664535cbae83f066 | def __init__(self, node_size_x, node_size_y, bin_center_x, bin_center_y, xl, yl, xh, yh, bin_size_x, bin_size_y, num_movable_nodes, num_terminals, num_filler_nodes):
'\n @brief initialization \n @param node_size_x cell width array consisting of movable cells, fixed cells, and filler cells in order \n @param node_size_y cell height array consisting of movable cells, fixed cells, and filler cells in order \n @param bin_center_x bin center x locations \n @param bin_center_y bin center y locations \n @param xl left boundary \n @param yl bottom boundary \n @param xh right boundary \n @param yh top boundary \n @param bin_size_x bin width \n @param bin_size_y bin height \n @param num_movable_nodes number of movable cells \n @param num_terminals number of fixed cells \n @param num_filler_nodes number of filler cells \n '
super(DensityMap, self).__init__()
self.node_size_x = node_size_x
self.node_size_y = node_size_y
self.bin_center_x = bin_center_x
self.bin_center_y = bin_center_y
self.xl = xl
self.yl = yl
self.xh = xh
self.yh = yh
self.bin_size_x = bin_size_x
self.bin_size_y = bin_size_y
self.num_movable_nodes = num_movable_nodes
self.num_terminals = num_terminals
self.num_filler_nodes = num_filler_nodes
self.initial_density_map = None | @brief initialization
@param node_size_x cell width array consisting of movable cells, fixed cells, and filler cells in order
@param node_size_y cell height array consisting of movable cells, fixed cells, and filler cells in order
@param bin_center_x bin center x locations
@param bin_center_y bin center y locations
@param xl left boundary
@param yl bottom boundary
@param xh right boundary
@param yh top boundary
@param bin_size_x bin width
@param bin_size_y bin height
@param num_movable_nodes number of movable cells
@param num_terminals number of fixed cells
@param num_filler_nodes number of filler cells | dreamplace/ops/density_map/density_map.py | __init__ | ArEsKay3/DREAMPlace | 323 | python | def __init__(self, node_size_x, node_size_y, bin_center_x, bin_center_y, xl, yl, xh, yh, bin_size_x, bin_size_y, num_movable_nodes, num_terminals, num_filler_nodes):
'\n @brief initialization \n @param node_size_x cell width array consisting of movable cells, fixed cells, and filler cells in order \n @param node_size_y cell height array consisting of movable cells, fixed cells, and filler cells in order \n @param bin_center_x bin center x locations \n @param bin_center_y bin center y locations \n @param xl left boundary \n @param yl bottom boundary \n @param xh right boundary \n @param yh top boundary \n @param bin_size_x bin width \n @param bin_size_y bin height \n @param num_movable_nodes number of movable cells \n @param num_terminals number of fixed cells \n @param num_filler_nodes number of filler cells \n '
super(DensityMap, self).__init__()
self.node_size_x = node_size_x
self.node_size_y = node_size_y
self.bin_center_x = bin_center_x
self.bin_center_y = bin_center_y
self.xl = xl
self.yl = yl
self.xh = xh
self.yh = yh
self.bin_size_x = bin_size_x
self.bin_size_y = bin_size_y
self.num_movable_nodes = num_movable_nodes
self.num_terminals = num_terminals
self.num_filler_nodes = num_filler_nodes
self.initial_density_map = None | def __init__(self, node_size_x, node_size_y, bin_center_x, bin_center_y, xl, yl, xh, yh, bin_size_x, bin_size_y, num_movable_nodes, num_terminals, num_filler_nodes):
'\n @brief initialization \n @param node_size_x cell width array consisting of movable cells, fixed cells, and filler cells in order \n @param node_size_y cell height array consisting of movable cells, fixed cells, and filler cells in order \n @param bin_center_x bin center x locations \n @param bin_center_y bin center y locations \n @param xl left boundary \n @param yl bottom boundary \n @param xh right boundary \n @param yh top boundary \n @param bin_size_x bin width \n @param bin_size_y bin height \n @param num_movable_nodes number of movable cells \n @param num_terminals number of fixed cells \n @param num_filler_nodes number of filler cells \n '
super(DensityMap, self).__init__()
self.node_size_x = node_size_x
self.node_size_y = node_size_y
self.bin_center_x = bin_center_x
self.bin_center_y = bin_center_y
self.xl = xl
self.yl = yl
self.xh = xh
self.yh = yh
self.bin_size_x = bin_size_x
self.bin_size_y = bin_size_y
self.num_movable_nodes = num_movable_nodes
self.num_terminals = num_terminals
self.num_filler_nodes = num_filler_nodes
self.initial_density_map = None<|docstring|>@brief initialization
@param node_size_x cell width array consisting of movable cells, fixed cells, and filler cells in order
@param node_size_y cell height array consisting of movable cells, fixed cells, and filler cells in order
@param bin_center_x bin center x locations
@param bin_center_y bin center y locations
@param xl left boundary
@param yl bottom boundary
@param xh right boundary
@param yh top boundary
@param bin_size_x bin width
@param bin_size_y bin height
@param num_movable_nodes number of movable cells
@param num_terminals number of fixed cells
@param num_filler_nodes number of filler cells<|endoftext|> |
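A hedged construction sketch for the operator above, following DREAMPlace's torch-tensor convention: two movable cells on an 8x8-bin region, with no fixed or filler cells. The operator's forward pass is defined elsewhere in the record's module and is not reproduced here.

import torch

node_size_x = torch.tensor([1.0, 1.0])  # two movable cells; no terminals/fillers
node_size_y = torch.tensor([1.0, 1.0])
bin_center_x = torch.arange(8, dtype=torch.float32) + 0.5  # 8 unit bins over [0, 8]
bin_center_y = torch.arange(8, dtype=torch.float32) + 0.5

density_op = DensityMap(
    node_size_x, node_size_y, bin_center_x, bin_center_y,
    xl=0.0, yl=0.0, xh=8.0, yh=8.0,
    bin_size_x=1.0, bin_size_y=1.0,
    num_movable_nodes=2, num_terminals=0, num_filler_nodes=0,
)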