def apply_pil_filter(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
filter_type: Union[Callable, ImageFilter.Filter] = ImageFilter.EDGE_ENHANCE_MORE,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Applies a given PIL filter to the input image using `Image.filter()`
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param filter_type: the PIL ImageFilter to apply to the image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = deepcopy(locals())
ftr = filter_type() if isinstance(filter_type, Callable) else filter_type
assert isinstance(
ftr, ImageFilter.Filter
), "Filter type must be a PIL.ImageFilter.Filter class"
func_kwargs = imutils.get_func_kwargs(
metadata, func_kwargs, filter_type=getattr(ftr, "name", filter_type)
)
src_mode = image.mode
aug_image = image.filter(ftr)
imutils.get_metadata(
metadata=metadata,
function_name="apply_pil_filter",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)

def blur(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
radius: float = 2.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Blurs the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param radius: the larger the radius, the blurrier the image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert radius > 0, "Radius must be positive"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = image.filter(ImageFilter.GaussianBlur(radius))
imutils.get_metadata(
metadata=metadata,
function_name="blur",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)

def brightness(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Changes the brightness of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: values less than 1.0 darken the image and values greater than 1.0
brighten the image. Setting factor to 1.0 will not alter the image's brightness
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
aug_image = ImageEnhance.Brightness(image).enhance(factor)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
imutils.get_metadata(metadata=metadata, function_name="brightness", **func_kwargs)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)

def change_aspect_ratio(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
ratio: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Changes the aspect ratio of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param ratio: aspect ratio, i.e. width/height, of the new image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert ratio > 0, "Ratio must be positive"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
area = width * height
new_width = int(math.sqrt(ratio * area))
new_height = int(area / new_width)
aug_image = image.resize((new_width, new_height))
imutils.get_metadata(
metadata=metadata,
function_name="change_aspect_ratio",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
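The resize in `change_aspect_ratio` preserves the total pixel area while changing the width/height ratio. A standalone sketch of that dimension math (the helper name `aspect_dims` is hypothetical, not part of this module):

```python
import math

def aspect_dims(width: int, height: int, ratio: float) -> tuple:
    # Keep total pixel area constant: solve new_w / new_h = ratio
    # subject to new_w * new_h = width * height.
    area = width * height
    new_width = int(math.sqrt(ratio * area))
    new_height = int(area / new_width)
    return new_width, new_height

# A 640x480 image reshaped to roughly 2:1 keeps about the same area:
print(aspect_dims(640, 480, 2.0))  # -> (783, 392)
```

Note that the `int()` truncation means the resulting area and ratio are approximate, which matches the behavior of the function above.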
def clip_image_size(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
min_resolution: Optional[int] = None,
max_resolution: Optional[int] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Scales the image up or down if necessary to fit in the given min and max resolution
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param min_resolution: the minimum resolution, i.e. width * height, that the
augmented image should have; if the input image has a lower resolution than this,
the image will be scaled up as necessary
@param max_resolution: the maximum resolution, i.e. width * height, that the
augmented image should have; if the input image has a higher resolution than
this, the image will be scaled down as necessary
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert min_resolution is None or (
isinstance(min_resolution, int) and min_resolution >= 0
), "min_resolution must be None or a nonnegative int"
assert max_resolution is None or (
isinstance(max_resolution, int) and max_resolution >= 0
), "max_resolution must be None or a nonnegative int"
assert not (
min_resolution is not None
and max_resolution is not None
and min_resolution > max_resolution
), "min_resolution cannot be greater than max_resolution"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = image
if min_resolution is not None and image.width * image.height < min_resolution:
resize_factor = math.sqrt(min_resolution / (image.width * image.height))
aug_image = scale(aug_image, factor=resize_factor)
elif max_resolution is not None and image.width * image.height > max_resolution:
resize_factor = math.sqrt(max_resolution / (image.width * image.height))
aug_image = scale(aug_image, factor=resize_factor)
imutils.get_metadata(
metadata=metadata,
function_name="clip_image_size",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
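The scaling step in `clip_image_size` chooses a single resize factor so the clipped image's area lands on the requested bound. A minimal sketch of that factor computation (`clip_factor` is a hypothetical helper, not part of this module):

```python
import math

def clip_factor(width: int, height: int, min_res=None, max_res=None) -> float:
    # Applying factor f to both dimensions multiplies the area by f**2,
    # so the factor is the square root of the target/current area ratio.
    area = width * height
    if min_res is not None and area < min_res:
        return math.sqrt(min_res / area)
    if max_res is not None and area > max_res:
        return math.sqrt(max_res / area)
    return 1.0  # already within bounds

print(clip_factor(50, 50, min_res=10000))    # -> 2.0 (area 2500 scaled up to 10000)
print(clip_factor(200, 200, max_res=10000))  # -> 0.5 (area 40000 scaled down to 10000)
```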
def color_jitter(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
brightness_factor: float = 1.0,
contrast_factor: float = 1.0,
saturation_factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Color jitters the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param brightness_factor: a brightness factor below 1.0 darkens the image, a factor
of 1.0 does not alter the image, and a factor greater than 1.0 brightens the image
@param contrast_factor: a contrast factor below 1.0 removes contrast, a factor of
1.0 gives the original image, and a factor greater than 1.0 adds contrast
@param saturation_factor: a saturation factor of below 1.0 lowers the saturation,
a factor of 1.0 gives the original image, and a factor greater than 1.0
adds saturation
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
aug_image = ImageEnhance.Brightness(image).enhance(brightness_factor)
aug_image = ImageEnhance.Contrast(aug_image).enhance(contrast_factor)
aug_image = ImageEnhance.Color(aug_image).enhance(saturation_factor)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
imutils.get_metadata(metadata=metadata, function_name="color_jitter", **func_kwargs)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)

def contrast(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Alters the contrast of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: zero gives a grayscale image, values below 1.0 decreases contrast,
a factor of 1.0 gives the original image, and a factor greater than 1.0
increases contrast
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
enhancer = ImageEnhance.Contrast(image)
aug_image = enhancer.enhance(factor)
imutils.get_metadata(
metadata=metadata,
function_name="contrast",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)

def convert_color(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
mode: Optional[str] = None,
matrix: Union[
None,
Tuple[float, float, float, float],
Tuple[
float,
float,
float,
float,
float,
float,
float,
float,
float,
float,
float,
float,
],
] = None,
dither: Optional[int] = None,
palette: int = 0,
colors: int = 256,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Converts the image in terms of color modes
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param mode: defines the type and depth of a pixel in the image. If mode is omitted,
a mode is chosen so that all information in the image and the palette can be
represented without a palette. For list of available modes, check:
https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes
@param matrix: an optional conversion matrix. If given, this should be 4- or
12-tuple containing floating point values
@param dither: dithering method, used when converting from mode "RGB" to "P" or from
"RGB" or "L" to "1". Available methods are NONE or FLOYDSTEINBERG (default).
@param palette: palette to use when converting from mode "RGB" to "P". Available
palettes are WEB or ADAPTIVE
@param colors: number of colors to use for the ADAPTIVE palette. Defaults to 256.
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
# pyre-fixme[6]: Expected `Union[typing_extensions.Literal[0],
# typing_extensions.Literal[1]]` for 4th param but got `int`.
aug_image = image.convert(mode, matrix, dither, palette, colors)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
imutils.get_metadata(
metadata=metadata,
function_name="convert_color",
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path)

def crop(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
x1: float = 0.25,
y1: float = 0.25,
x2: float = 0.75,
y2: float = 0.75,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Crops the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param x1: position of the left edge of cropped image relative to the width of
the original image; must be a float value between 0 and 1
@param y1: position of the top edge of cropped image relative to the height of
the original image; must be a float value between 0 and 1
@param x2: position of the right edge of cropped image relative to the width of
the original image; must be a float value between 0 and 1
@param y2: position of the bottom edge of cropped image relative to the height of
the original image; must be a float value between 0 and 1
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0 <= x1 <= 1.0, "x1 must be a value in the range [0, 1]"
assert 0 <= y1 <= 1.0, "y1 must be a value in the range [0, 1]"
assert x1 < x2 <= 1.0, "x2 must be a value in the range [x1, 1]"
assert y1 < y2 <= 1.0, "y2 must be a value in the range [y1, 1]"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
left, right = int(width * x1), int(width * x2)
top, bottom = int(height * y1), int(height * y2)
aug_image = image.crop((left, top, right, bottom))
imutils.get_metadata(
metadata=metadata,
function_name="crop",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
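The crop coordinates are fractions of the image size, converted to a pixel box before being handed to `Image.crop()`. A self-contained sketch of that conversion (`crop_box` is a hypothetical helper, not part of this module):

```python
def crop_box(width, height, x1=0.25, y1=0.25, x2=0.75, y2=0.75):
    # Scale the relative edges to pixel coordinates; PIL's crop box
    # is the 4-tuple (left, top, right, bottom).
    left, right = int(width * x1), int(width * x2)
    top, bottom = int(height * y1), int(height * y2)
    return (left, top, right, bottom)

# The default arguments keep the central 50% in each dimension:
print(crop_box(800, 600))  # -> (200, 150, 600, 450)
```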
def encoding_quality(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
quality: int = 50,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Changes the JPEG encoding quality level
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param quality: JPEG encoding quality. 0 is lowest quality, 100 is highest
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0 <= quality <= 100, "'quality' must be a value in the range [0, 100]"
image = imutils.validate_and_load_image(image).convert("RGB")
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
buffer = io.BytesIO()
image.save(buffer, format="JPEG", quality=quality)
aug_image = Image.open(buffer)
imutils.get_metadata(
metadata=metadata,
function_name="encoding_quality",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
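As a standalone illustration of the trick `encoding_quality` relies on (re-encoding through an in-memory JPEG buffer), this sketch uses only PIL and numpy, with no augly helpers, to show that a lower `quality` yields a smaller encoded payload while still decoding to a valid RGB image:

```python
import io

import numpy as np
from PIL import Image

# Noisy RGB image: noise compresses poorly, making the size gap obvious.
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8), "RGB")

def jpeg_bytes(im, quality):
    # Same pattern as the function above: save to an in-memory buffer
    buf = io.BytesIO()
    im.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

low, high = jpeg_bytes(img, 10), jpeg_bytes(img, 95)
# The low-quality payload is smaller, yet decodes to the same-sized image.
reloaded = Image.open(io.BytesIO(low))
```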


def grayscale(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
mode: str = "luminosity",
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Changes an image to be grayscale
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param mode: the type of greyscale conversion to perform; two options
are supported ("luminosity" and "average")
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert mode in [
"luminosity",
"average",
], "Greyscale mode not supported -- choose either 'luminosity' or 'average'"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
# If grayscale image is passed in, return it
if image.mode == "L":
aug_image = image
else:
if mode == "luminosity":
aug_image = image.convert(mode="L")
elif mode == "average":
np_image = np.asarray(image).astype(np.float32)
np_image = np.average(np_image, axis=2)
aug_image = Image.fromarray(np.uint8(np_image))
aug_image = aug_image.convert(mode="RGB")
imutils.get_metadata(
metadata=metadata,
function_name="grayscale",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
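The difference between the two `mode` options in `grayscale` can be checked with plain PIL and numpy. This sketch mirrors the luminosity branch (`Image.convert("L")`, which uses ITU-R 601-2 weights) and the average branch (an unweighted channel mean) on a pure-red image, where the two conversions disagree visibly:

```python
import numpy as np
from PIL import Image

red = Image.new("RGB", (4, 4), (255, 0, 0))

# "luminosity": PIL's weighted conversion, L = 0.299R + 0.587G + 0.114B
lum = red.convert("L")

# "average": unweighted channel mean, as in the numpy branch above
avg = Image.fromarray(
    np.uint8(np.asarray(red, dtype=np.float32).mean(axis=2))
)

lum_val = lum.getpixel((0, 0))  # ~0.299 * 255
avg_val = avg.getpixel((0, 0))  # 255 / 3
```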


def hflip(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Horizontally flips an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = image.transpose(Image.FLIP_LEFT_RIGHT)
imutils.get_metadata(
    metadata=metadata,
    function_name="hflip",
    aug_image=aug_image,
    **func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)


def masked_composite(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
mask: Optional[Union[str, Image.Image]] = None,
transform_function: Optional[Callable] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Applies given augmentation function to the masked area of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param mask: the path to an image or a variable of type PIL.Image.Image for
masking. This image can have mode "1", "L", or "RGBA", and must have the
same size as the other two images. If the mask is not provided, the function
returns the augmented image
@param transform_function: the augmentation function to be applied. If
transform_function is not provided, the function returns the input image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = deepcopy(locals())
if transform_function is not None:
try:
func_kwargs["transform_function"] = transform_function.__name__
except AttributeError:
func_kwargs["transform_function"] = type(transform_function).__name__
func_kwargs = imutils.get_func_kwargs(metadata, func_kwargs)
src_mode = image.mode
if transform_function is None:
masked_image = imutils.ret_and_save_image(image, output_path)
else:
aug_image = transform_function(image)
if mask is None:
masked_image = imutils.ret_and_save_image(aug_image, output_path, src_mode)
else:
mask = imutils.validate_and_load_image(mask)
assert image.size == mask.size, "Mask size must be equal to image size"
masked_image = Image.composite(aug_image, image, mask)
imutils.get_metadata(
metadata=metadata,
function_name="masked_composite",
aug_image=masked_image,
**func_kwargs,
)
return imutils.ret_and_save_image(masked_image, output_path, src_mode)
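`masked_composite` ultimately delegates to `Image.composite(aug_image, image, mask)`. The following PIL-only sketch shows the mask semantics that implies: where the mask is 255 the first image wins, and where it is 0 the second does:

```python
from PIL import Image

white = Image.new("RGB", (2, 2), (255, 255, 255))  # stand-in "augmented" image
black = Image.new("RGB", (2, 2), (0, 0, 0))        # stand-in original image

# Mask: left column opaque (take from `white`), right column transparent.
mask = Image.new("L", (2, 2), 0)
mask.putpixel((0, 0), 255)
mask.putpixel((0, 1), 255)

out = Image.composite(white, black, mask)
left, right = out.getpixel((0, 0)), out.getpixel((1, 0))
```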


def meme_format(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
text: str = "LOL",
font_file: str = utils.MEME_DEFAULT_FONT,
opacity: float = 1.0,
text_color: Tuple[int, int, int] = utils.DEFAULT_COLOR,
caption_height: int = 250,
meme_bg_color: Tuple[int, int, int] = utils.WHITE_RGB_COLOR,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Creates a new image that looks like a meme, given text and an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param text: the text to be overlaid/used in the meme. note: if using a very
long string, please add in newline characters such that the text remains
in a readable font size.
@param font_file: iopath uri to a .ttf font file
@param opacity: the lower the opacity, the more transparent the text
@param text_color: color of the text in RGB values
@param caption_height: the height of the meme caption
@param meme_bg_color: background color of the meme caption in RGB values
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert isinstance(text, str), "Expected variable `text` to be a string"
assert 0.0 <= opacity <= 1.0, "Opacity must be a value in the range [0.0, 1.0]"
assert caption_height > 10, "Caption height must be greater than 10"
utils.validate_rgb_color(text_color)
utils.validate_rgb_color(meme_bg_color)
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
local_font_path = utils.pathmgr.get_local_path(font_file)
font_size = caption_height - 10
meme = Image.new("RGB", (width, height + caption_height), meme_bg_color)
meme.paste(image, (0, caption_height))
draw = ImageDraw.Draw(meme)
x_pos, y_pos = 5, 5
ascender_adjustment = 40
while True:
font = ImageFont.truetype(local_font_path, font_size)
text_bbox = draw.multiline_textbbox(
(x_pos, y_pos),
text,
# pyre-fixme[6]: Expected `Optional[ImageFont._Font]` for 3rd param but got
# `FreeTypeFont`.
font=font,
anchor="la",
align="center",
)
text_width, text_height = (
text_bbox[2] - text_bbox[0],
text_bbox[3] - text_bbox[1],
)
x_pos = round((width - text_width) / 2)
y_pos = round((caption_height - text_height) / 2) - ascender_adjustment
if text_width <= (width - 10) and text_height <= (caption_height - 10):
break
font_size -= 5
draw.multiline_text(
(x_pos, y_pos),
text,
# pyre-fixme[6]: Expected `Optional[ImageFont._Font]` for 3rd param but got
# `FreeTypeFont`.
font=font,
anchor="la",
fill=(text_color[0], text_color[1], text_color[2], round(opacity * 255)),
align="center",
)
imutils.get_metadata(
metadata=metadata,
function_name="meme_format",
aug_image=meme,
**func_kwargs,
)
return imutils.ret_and_save_image(meme, output_path, src_mode)


def opacity(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
level: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Alter the opacity of an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param level: the level the opacity should be set to, where 0 means
completely transparent and 1 means no transparency at all
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0 <= level <= 1, "level must be a value in the range [0, 1]"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
image = image.convert(mode="RGBA")
mask = image.getchannel("A")
mask = Image.fromarray((np.array(mask) * level).astype(np.uint8))
background = Image.new("RGBA", image.size, (255, 255, 255, 0))
aug_image = Image.composite(image, background, mask)
imutils.get_metadata(
metadata=metadata,
function_name="opacity",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
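The core of `opacity` is scaling the alpha channel and compositing against a fully transparent background. This minimal PIL/numpy sketch reproduces that step on a fully opaque image with `level = 0.5`:

```python
import numpy as np
from PIL import Image

img = Image.new("RGBA", (2, 2), (10, 20, 30, 255))  # fully opaque
level = 0.5

# Scale the alpha channel, as the function above does
alpha = img.getchannel("A")
alpha = Image.fromarray((np.array(alpha) * level).astype(np.uint8))

# Composite against a transparent background; the output alpha is the
# scaled value (255 * 0.5, truncated to 127 by the uint8 cast).
background = Image.new("RGBA", img.size, (255, 255, 255, 0))
faded = Image.composite(img, background, alpha)
out_alpha = faded.getpixel((0, 0))[3]
```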


def overlay_emoji(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
emoji_path: str = utils.EMOJI_PATH,
opacity: float = 1.0,
emoji_size: float = 0.15,
x_pos: float = 0.4,
y_pos: float = 0.8,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlay an emoji onto the original image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param emoji_path: iopath uri to the emoji image
@param opacity: the lower the opacity, the more transparent the overlaid emoji
@param emoji_size: size of the emoji is emoji_size * height of the original image
@param x_pos: position of emoji relative to the image width
@param y_pos: position of emoji relative to the image height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
local_emoji_path = utils.pathmgr.get_local_path(emoji_path)
aug_image = overlay_image(
image,
overlay=local_emoji_path,
output_path=output_path,
opacity=opacity,
overlay_size=emoji_size,
x_pos=x_pos,
y_pos=y_pos,
)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_emoji",
aug_image=aug_image,
**func_kwargs,
)
return aug_image


def overlay_image(
image: Union[str, Image.Image],
overlay: Union[str, Image.Image],
output_path: Optional[str] = None,
opacity: float = 1.0,
overlay_size: float = 1.0,
x_pos: float = 0.4,
y_pos: float = 0.4,
max_visible_opacity: float = 0.75,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlays an image onto another image at position (width * x_pos, height * y_pos)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param overlay: the path to an image or a variable of type PIL.Image.Image
that will be overlaid
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param opacity: the lower the opacity, the more transparent the overlaid image
@param overlay_size: size of the overlaid image is overlay_size * height
of the original image
@param x_pos: position of overlaid image relative to the image width
@param max_visible_opacity: if bboxes are passed in, this param will be used as the
maximum opacity value through which the src image will still be considered
visible; see the function `overlay_image_bboxes_helper` in `utils/bboxes.py` for
more details about how this is used. If bboxes are not passed in this is not used
@param y_pos: position of overlaid image relative to the image height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0.0 <= opacity <= 1.0, "Opacity must be a value in the range [0, 1]"
assert 0.0 <= overlay_size <= 1.0, "Image size must be a value in the range [0, 1]"
assert 0.0 <= x_pos <= 1.0, "x_pos must be a value in the range [0, 1]"
assert 0.0 <= y_pos <= 1.0, "y_pos must be a value in the range [0, 1]"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
overlay = imutils.validate_and_load_image(overlay)
im_width, im_height = image.size
overlay_width, overlay_height = overlay.size
new_height = max(1, int(im_height * overlay_size))
new_width = int(overlay_width * new_height / overlay_height)
overlay = overlay.resize((new_width, new_height))
try:
mask = overlay.convert("RGBA").getchannel("A")
mask = Image.fromarray((np.array(mask) * opacity).astype(np.uint8))
except ValueError:
mask = Image.new(mode="L", size=overlay.size, color=int(opacity * 255))
x = int(im_width * x_pos)
y = int(im_height * y_pos)
aug_image = image.convert(mode="RGBA")
aug_image.paste(im=overlay, box=(x, y), mask=mask)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_image",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
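The geometry in `overlay_image` (resize the overlay so its height is `overlay_size` of the base height, keep its aspect ratio, then paste at `(width * x_pos, height * y_pos)`) can be checked in isolation with PIL:

```python
from PIL import Image

base = Image.new("RGB", (200, 100), (0, 0, 0))
overlay = Image.new("RGBA", (50, 25), (255, 0, 0, 255))
overlay_size, x_pos, y_pos = 0.5, 0.25, 0.1

# New overlay height is a fraction of the base height; width keeps the
# overlay's aspect ratio, exactly as in the function above.
new_h = max(1, int(base.height * overlay_size))       # 50
new_w = int(overlay.width * new_h / overlay.height)   # 100
resized = overlay.resize((new_w, new_h))

x, y = int(base.width * x_pos), int(base.height * y_pos)  # (50, 10)
out = base.convert("RGBA")
out.paste(resized, (x, y), resized.getchannel("A"))
pixel_inside = out.getpixel((x + 1, y + 1))
```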


def overlay_onto_background_image(
image: Union[str, Image.Image],
background_image: Union[str, Image.Image],
output_path: Optional[str] = None,
opacity: float = 1.0,
overlay_size: float = 1.0,
x_pos: float = 0.4,
y_pos: float = 0.4,
scale_bg: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlays the image onto a given background image at position
(width * x_pos, height * y_pos)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param background_image: the path to an image or a variable of type PIL.Image.Image
onto which the source image will be overlaid
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param opacity: the lower the opacity, the more transparent the overlaid image
@param overlay_size: size of the overlaid image is overlay_size * height
of the background image
@param x_pos: position of overlaid image relative to the background image width with
respect to the x-axis
@param y_pos: position of overlaid image relative to the background image height with
respect to the y-axis
@param scale_bg: if True, the background image will be scaled up or down so that
overlay_size is respected; if False, the source image will be scaled instead
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0.0 <= overlay_size <= 1.0, "Image size must be a value in the range [0, 1]"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
if scale_bg:
background_image = resize(
background_image,
width=math.floor(image.width / overlay_size),
height=math.floor(image.height / overlay_size),
)
aug_image = overlay_image(
background_image,
overlay=image,
output_path=output_path,
opacity=opacity,
overlay_size=overlay_size,
x_pos=x_pos,
y_pos=y_pos,
)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_onto_background_image",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode)
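The `scale_bg` branch sizes the background so that `overlay_size` times the new background height recovers the source image height. The arithmetic is easy to sanity-check on its own (the example dimensions here are hypothetical):

```python
import math

img_w, img_h = 300, 150
overlay_size = 0.5

# Background scaled as in the scale_bg branch above
bg_w = math.floor(img_w / overlay_size)  # 600
bg_h = math.floor(img_h / overlay_size)  # 300

# Overlaying at overlay_size of the new background height recovers img_h
recovered_h = int(bg_h * overlay_size)
```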


def overlay_onto_screenshot(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
template_filepath: str = utils.TEMPLATE_PATH,
template_bboxes_filepath: str = utils.BBOXES_PATH,
max_image_size_pixels: Optional[int] = None,
crop_src_to_fit: bool = False,
resize_src_to_match_template: bool = True,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlay the image onto a screenshot template so it looks like it was
screenshotted on Instagram
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param template_filepath: iopath uri to the screenshot template
@param template_bboxes_filepath: iopath uri to the file containing the
bounding box for each template
@param max_image_size_pixels: if provided, the template image and/or src image
will be scaled down to avoid an output image with an area greater than this
size (in pixels)
@param crop_src_to_fit: if True, the src image will be cropped if necessary to fit
into the template image if the aspect ratios are different. If False, the src
image will instead be resized if needed
@param resize_src_to_match_template: if True, the src image will be resized if it is
too big or small in both dimensions to better match the template image. If False,
the template image will be resized to match the src image instead. It can be
useful to set this to True if the src image is very large so that the augmented
image isn't huge, but instead is the same size as the template image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
template, bbox = imutils.get_template_and_bbox(
template_filepath, template_bboxes_filepath
)
if resize_src_to_match_template:
bbox_w, bbox_h = bbox[2] - bbox[0], bbox[3] - bbox[1]
image = scale(image, factor=min(bbox_w / image.width, bbox_h / image.height))
else:
template, bbox = imutils.scale_template_image(
image.size[0],
image.size[1],
template,
bbox,
max_image_size_pixels,
crop_src_to_fit,
)
bbox_w, bbox_h = bbox[2] - bbox[0], bbox[3] - bbox[1]
cropped_src = imutils.resize_and_pad_to_given_size(
image, bbox_w, bbox_h, crop=crop_src_to_fit
)
template.paste(cropped_src, box=bbox)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_onto_screenshot",
aug_image=template,
**func_kwargs,
)
return imutils.ret_and_save_image(template, output_path, src_mode) |
Overlay stripe pattern onto the image (by default, white horizontal
stripes are overlaid)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param line_width: the width of individual stripes as a float value ranging
from 0 to 1. Defaults to 0.5
@param line_color: color of the overlaid stripes in RGB values
@param line_angle: the angle of the stripes in degrees, ranging from
-360Β° to 360Β°. Defaults to 0Β° or horizontal stripes
@param line_density: controls the distance between stripes represented as a
float value ranging from 0 to 1, with 1 indicating more densely spaced
stripes. Defaults to 0.5
@param line_type: the type of stripes. Current options include: dotted,
dashed, and solid. Defaults to solid
@param line_opacity: the opacity of the stripes, ranging from 0 to 1 with
1 being opaque. Defaults to 1.0
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def overlay_stripes(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
line_width: float = 0.5,
line_color: Tuple[int, int, int] = utils.WHITE_RGB_COLOR,
line_angle: float = 0,
line_density: float = 0.5,
line_type: Optional[str] = "solid",
line_opacity: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlay stripe pattern onto the image (by default, white horizontal
stripes are overlaid)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param line_width: the width of individual stripes as a float value ranging
from 0 to 1. Defaults to 0.5
@param line_color: color of the overlaid stripes in RGB values
@param line_angle: the angle of the stripes in degrees, ranging from
-360Β° to 360Β°. Defaults to 0Β° or horizontal stripes
@param line_density: controls the distance between stripes represented as a
float value ranging from 0 to 1, with 1 indicating more densely spaced
stripes. Defaults to 0.5
@param line_type: the type of stripes. Current options include: dotted,
dashed, and solid. Defaults to solid
@param line_opacity: the opacity of the stripes, ranging from 0 to 1 with
1 being opaque. Defaults to 1.0
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert (
0.0 <= line_width <= 1.0
), "Line width must be a value in the range [0.0, 1.0]"
    assert (
        -360.0 <= line_angle <= 360.0
    ), "Line angle must be a degree in the range [-360.0, 360.0]"
assert (
0.0 <= line_density <= 1.0
), "Line density must be a value in the range [0.0, 1.0]"
assert (
0.0 <= line_opacity <= 1.0
), "Line opacity must be a value in the range [0.0, 1.0]"
assert line_type in utils.SUPPORTED_LINE_TYPES, "Stripe type not supported"
utils.validate_rgb_color(line_color)
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
binary_mask = imutils.compute_stripe_mask(
src_w=width,
src_h=height,
line_width=line_width,
line_angle=line_angle,
line_density=line_density,
)
if line_type == "dotted":
# To create dotted effect, multiply mask by stripes in perpendicular direction
perpendicular_mask = imutils.compute_stripe_mask(
src_w=width,
src_h=height,
line_width=line_width,
line_angle=line_angle + 90,
line_density=line_density,
)
binary_mask *= perpendicular_mask
elif line_type == "dashed":
# To create dashed effect, multiply mask by stripes with a larger line
# width in perpendicular direction
perpendicular_mask = imutils.compute_stripe_mask(
src_w=width,
src_h=height,
line_width=0.7,
line_angle=line_angle + 90,
line_density=line_density,
)
binary_mask *= perpendicular_mask
mask = Image.fromarray(np.uint8(binary_mask * line_opacity * 255))
foreground = Image.new("RGB", image.size, line_color)
aug_image = image.copy() # to avoid modifying the input image
aug_image.paste(foreground, (0, 0), mask=mask)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_stripes",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
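A minimal sketch of the dotted-stripe trick used above: multiplying a stripe mask by a perpendicular stripe mask leaves ones only where the two stripe sets intersect, producing dots. `toy_stripe_mask` is a simplified stand-in for `imutils.compute_stripe_mask` (no angle or density handling), written only to illustrate the idea.

```python
import numpy as np

def toy_stripe_mask(w, h, period=4, horizontal=True):
    # Binary mask: 1.0 on stripe rows (or columns), 0.0 elsewhere.
    idx = np.arange(h if horizontal else w)
    stripe = (idx % period) < period // 2
    if horizontal:
        return np.repeat(stripe[:, None], w, axis=1).astype(float)
    return np.repeat(stripe[None, :], h, axis=0).astype(float)

h_mask = toy_stripe_mask(8, 8, horizontal=True)   # horizontal stripes
v_mask = toy_stripe_mask(8, 8, horizontal=False)  # perpendicular stripes
dotted = h_mask * v_mask  # nonzero only where both stripe sets overlap
```

With `period=4` on an 8x8 grid, each mask covers half the pixels (32 ones) and their product covers a quarter (16 ones), i.e. a dot grid.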
Overlay text onto the image (by default, text is randomly overlaid)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param text: indices (into the file) of the characters to be overlaid. Each line of
text is represented as a list of int indices; if a list of lists is supplied,
multiple lines of text will be overlaid
@param font_file: iopath uri to the .ttf font file
@param font_size: size of the overlaid characters, calculated as
font_size * min(height, width) of the original image
@param opacity: the lower the opacity, the more transparent the overlaid text
@param color: color of the overlaid text in RGB values
@param x_pos: position of the overlaid text relative to the image width
@param y_pos: position of the overlaid text relative to the image height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def overlay_text(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
text: List[Union[int, List[int]]] = utils.DEFAULT_TEXT_INDICES,
font_file: str = utils.FONT_PATH,
font_size: float = 0.15,
opacity: float = 1.0,
color: Tuple[int, int, int] = utils.RED_RGB_COLOR,
x_pos: float = 0.0,
y_pos: float = 0.5,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Overlay text onto the image (by default, text is randomly overlaid)
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param text: indices (into the file) of the characters to be overlaid. Each line of
text is represented as a list of int indices; if a list of lists is supplied,
multiple lines of text will be overlaid
@param font_file: iopath uri to the .ttf font file
@param font_size: size of the overlaid characters, calculated as
font_size * min(height, width) of the original image
@param opacity: the lower the opacity, the more transparent the overlaid text
@param color: color of the overlaid text in RGB values
@param x_pos: position of the overlaid text relative to the image width
@param y_pos: position of the overlaid text relative to the image height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert 0.0 <= opacity <= 1.0, "Opacity must be a value in the range [0.0, 1.0]"
assert 0.0 <= font_size <= 1.0, "Font size must be a value in the range [0.0, 1.0]"
assert 0.0 <= x_pos <= 1.0, "x_pos must be a value in the range [0.0, 1.0]"
assert 0.0 <= y_pos <= 1.0, "y_pos must be a value in the range [0.0, 1.0]"
utils.validate_rgb_color(color)
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
text_lists = text if all(isinstance(t, list) for t in text) else [text]
assert all(isinstance(t, list) for t in text_lists) and all(
all(isinstance(t, int) for t in text_l) # pyre-ignore text_l is a List[int]
for text_l in text_lists
), "Text must be a list of ints or a list of list of ints for multiple lines"
image = image.convert("RGBA")
width, height = image.size
local_font_path = utils.pathmgr.get_local_path(font_file)
font_size = int(min(width, height) * font_size)
font = ImageFont.truetype(local_font_path, font_size)
pkl_file = os.path.splitext(font_file)[0] + ".pkl"
local_pkl_path = utils.pathmgr.get_local_path(pkl_file)
with open(local_pkl_path, "rb") as f:
chars = pickle.load(f)
try:
text_strs = [
# pyre-fixme[16]: Item `int` of `Union[List[int], List[Union[List[int],
# int]], int]` has no attribute `__iter__`.
"".join([chr(chars[c % len(chars)]) for c in t])
for t in text_lists
]
except Exception:
raise IndexError("Invalid text indices specified")
draw = ImageDraw.Draw(image)
for i, text_str in enumerate(text_strs):
draw.text(
xy=(x_pos * width, y_pos * height + i * (font_size + 5)),
text=text_str,
fill=(color[0], color[1], color[2], round(opacity * 255)),
# pyre-fixme[6]: Expected `Optional[ImageFont._Font]` for 4th param but got
# `FreeTypeFont`.
font=font,
)
imutils.get_metadata(
metadata=metadata,
function_name="overlay_text",
aug_image=image,
**func_kwargs,
)
return imutils.ret_and_save_image(image, output_path, src_mode) |
Pads the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param w_factor: width * w_factor pixels are padded to both left and right
of the image
@param h_factor: height * h_factor pixels are padded to the top and the
bottom of the image
@param color: color of the padded border in RGB values
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def pad(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
w_factor: float = 0.25,
h_factor: float = 0.25,
color: Tuple[int, int, int] = utils.DEFAULT_COLOR,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Pads the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param w_factor: width * w_factor pixels are padded to both left and right
of the image
@param h_factor: height * h_factor pixels are padded to the top and the
bottom of the image
@param color: color of the padded border in RGB values
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert w_factor >= 0, "w_factor cannot be a negative number"
assert h_factor >= 0, "h_factor cannot be a negative number"
utils.validate_rgb_color(color)
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
left = right = int(w_factor * width)
top = bottom = int(h_factor * height)
aug_image = Image.new(
mode="RGB",
size=(width + left + right, height + top + bottom),
color=color,
)
aug_image.paste(image, box=(left, top))
imutils.get_metadata(
metadata=metadata,
function_name="pad",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
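The padding geometry in `pad` can be reproduced directly with PIL: each side grows by `factor * dimension` pixels, so the output is `(w + 2*int(w_factor*w), h + 2*int(h_factor*h))`. A self-contained sketch of that arithmetic (not a call into augly):

```python
from PIL import Image

src = Image.new("RGB", (100, 80), (10, 20, 30))
w_factor, h_factor = 0.25, 0.5
left = right = int(w_factor * src.width)    # 25 px on each side
top = bottom = int(h_factor * src.height)   # 40 px on top and bottom

# New canvas in the border color, with the source pasted at (left, top)
padded = Image.new(
    "RGB", (src.width + left + right, src.height + top + bottom), (0, 0, 0)
)
padded.paste(src, box=(left, top))
```

Here the 100x80 source becomes a 150x160 image with a black border.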
Pads the shorter edge of the image such that it is now square-shaped
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param color: color of the padded border in RGB values
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def pad_square(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
color: Tuple[int, int, int] = utils.DEFAULT_COLOR,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Pads the shorter edge of the image such that it is now square-shaped
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param color: color of the padded border in RGB values
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
utils.validate_rgb_color(color)
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
width, height = image.size
if width < height:
h_factor = 0
dw = height - width
w_factor = dw / (2 * width)
else:
w_factor = 0
dh = width - height
h_factor = dh / (2 * height)
aug_image = pad(image, output_path, w_factor, h_factor, color)
imutils.get_metadata(
metadata=metadata,
function_name="pad_square",
aug_image=aug_image,
**func_kwargs,
)
return aug_image |
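The factor computation in `pad_square` can be checked in isolation: padding the shorter edge by `(long - short) / (2 * short)` on each side brings both dimensions to the longer edge (up to integer rounding of `int(factor * dim)`). A pure-Python sketch of that logic:

```python
def square_pad_factors(width, height):
    # Mirrors pad_square: pad only the shorter edge, split across both sides.
    if width < height:
        return (height - width) / (2 * width), 0.0
    return 0.0, (width - height) / (2 * height)

w, h = 60, 100
w_factor, h_factor = square_pad_factors(w, h)
new_w = w + 2 * int(w_factor * w)  # 60 + 2 * 20 = 100
new_h = h + 2 * int(h_factor * h)  # 100 + 0
```

For a 60x100 input, `w_factor` is 1/3 and the padded result is square at 100x100.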
Apply a perspective transform to the image so it looks like it was taken
as a photo from another device (e.g. taking a picture from your phone of a
picture on a computer).
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param sigma: the standard deviation of the distribution of destination
coordinates. the larger the sigma value, the more intense the transform
@param dx: change in x for the perspective transform; instead of providing
`sigma` you can provide a scalar value to be precise
@param dy: change in y for the perspective transform; instead of providing
`sigma` you can provide a scalar value to be precise
@param seed: if provided, this will set the random seed to ensure consistency
between runs
@param crop_out_black_border: if True, will crop out the black border resulting
from the perspective transform by cropping to the largest center rectangle
with no black
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def perspective_transform(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
sigma: float = 50.0,
dx: float = 0.0,
dy: float = 0.0,
seed: Optional[int] = 42,
crop_out_black_border: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Apply a perspective transform to the image so it looks like it was taken
as a photo from another device (e.g. taking a picture from your phone of a
picture on a computer).
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param sigma: the standard deviation of the distribution of destination
coordinates. the larger the sigma value, the more intense the transform
@param dx: change in x for the perspective transform; instead of providing
`sigma` you can provide a scalar value to be precise
@param dy: change in y for the perspective transform; instead of providing
`sigma` you can provide a scalar value to be precise
@param seed: if provided, this will set the random seed to ensure consistency
between runs
@param crop_out_black_border: if True, will crop out the black border resulting
from the perspective transform by cropping to the largest center rectangle
with no black
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert sigma >= 0, "Expected 'sigma' to be nonnegative"
assert isinstance(dx, (int, float)), "Expected 'dx' to be a number"
assert isinstance(dy, (int, float)), "Expected 'dy' to be a number"
assert seed is None or isinstance(
seed, int
), "Expected 'seed' to be an integer or set to None"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
rng = np.random.RandomState(seed) if seed is not None else np.random
width, height = image.size
src_coords = [(0, 0), (width, 0), (width, height), (0, height)]
dst_coords = [
(rng.normal(point[0], sigma) + dx, rng.normal(point[1], sigma) + dy)
for point in src_coords
]
perspective_transform_coeffs = imutils.compute_transform_coeffs(
src_coords, dst_coords
)
aug_image = image.transform(
(width, height), Image.PERSPECTIVE, perspective_transform_coeffs, Image.BICUBIC
)
if crop_out_black_border:
top_left, top_right, bottom_right, bottom_left = dst_coords
new_left = max(0, top_left[0], bottom_left[0])
new_right = min(width, top_right[0], bottom_right[0])
new_top = max(0, top_left[1], top_right[1])
new_bottom = min(height, bottom_left[1], bottom_right[1])
if new_left >= new_right or new_top >= new_bottom:
raise Exception(
"Cannot crop out black border of a perspective transform this intense"
)
aug_image = aug_image.crop((new_left, new_top, new_right, new_bottom))
imutils.get_metadata(
metadata=metadata,
function_name="perspective_transform",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
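The corner jitter in `perspective_transform` is just a seeded normal draw around each source corner, which is why passing the same `seed` yields an identical transform across runs. A small sketch of that reproducibility (the `jitter` helper is illustrative, not part of augly):

```python
import numpy as np

width, height, sigma, dx, dy = 200, 100, 5.0, 0.0, 0.0
src_coords = [(0, 0), (width, 0), (width, height), (0, height)]

def jitter(seed):
    # Same draw as the function body: one seeded RandomState per call
    rng = np.random.RandomState(seed)
    return [
        (rng.normal(x, sigma) + dx, rng.normal(y, sigma) + dy)
        for x, y in src_coords
    ]

a = jitter(42)
b = jitter(42)  # same seed, identical destination corners
c = jitter(7)   # different seed, different corners
```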
Pixelizes an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param ratio: smaller values result in a more pixelated image, 1.0 indicates
no change, and any value above one doesn't have a noticeable effect
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def pixelization(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
ratio: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Pixelizes an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param ratio: smaller values result in a more pixelated image, 1.0 indicates
no change, and any value above one doesn't have a noticeable effect
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert ratio > 0, "Expected 'ratio' to be a positive number"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
width, height = image.size
aug_image = image.resize((int(width * ratio), int(height * ratio)))
aug_image = aug_image.resize((width, height))
imutils.get_metadata(
metadata=metadata,
function_name="pixelization",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
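`pixelization` is a downscale-then-upscale round trip: shrinking by `ratio` discards detail, and resizing back to the original dimensions leaves visible blocks. A self-contained PIL sketch of the same two resizes:

```python
from PIL import Image

img = Image.new("RGB", (64, 64))
ratio = 0.25

# Downscale to ratio * original size, then upscale back to the original size
small = img.resize((int(img.width * ratio), int(img.height * ratio)))
pixelated = small.resize(img.size)
```

The intermediate image is 16x16, while the output keeps the original 64x64 dimensions.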
Adds random noise to the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param mean: mean of the gaussian noise added
@param var: variance of the gaussian noise added
@param seed: if provided, this will set the random seed before generating noise
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def random_noise(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
mean: float = 0.0,
var: float = 0.01,
seed: int = 42,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Adds random noise to the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param mean: mean of the gaussian noise added
@param var: variance of the gaussian noise added
@param seed: if provided, this will set the random seed before generating noise
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert type(mean) in [float, int], "Mean must be an integer or a float"
assert type(var) in [float, int], "Variance must be an integer or a float"
assert type(seed) == int, "Seed must be an integer"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
if seed is not None:
np.random.seed(seed=seed)
np_image = np.asarray(image).astype(np.float32)
np_image = np_image / 255.0
if np_image.min() < 0:
low_clip = -1.0
else:
low_clip = 0.0
sigma = var**0.5
gauss = np.random.normal(mean, sigma, (np_image.shape))
noisy_image = np_image + gauss
noisy_image = np.clip(noisy_image, low_clip, 1.0)
noisy_image *= 255.0
aug_image = Image.fromarray(np.uint8(noisy_image))
imutils.get_metadata(
metadata=metadata,
function_name="random_noise",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
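The noise pipeline in `random_noise` (normalize to [0, 1], add Gaussian noise, clip, rescale) can be sketched standalone with NumPy. The helper name and the uint8-array interface are illustrative, not part of the library:

```python
import numpy as np

def add_gaussian_noise(arr: np.ndarray, mean: float = 0.0, var: float = 0.01, seed: int = 42) -> np.ndarray:
    """Add Gaussian noise to a uint8 image array, mirroring the steps above."""
    rng = np.random.default_rng(seed)
    normed = arr.astype(np.float32) / 255.0              # scale to [0, 1]
    noisy = normed + rng.normal(mean, var ** 0.5, normed.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

gray = np.full((8, 8, 3), 128, dtype=np.uint8)
noisy = add_gaussian_noise(gray)
```

With the default `var=0.01`, sigma is 0.1 in normalized units, i.e. roughly 25 intensity levels of noise per channel.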
Resizes an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param width: the desired width the image should be resized to have. If
None, the original image width will be used
@param height: the desired height the image should be resized to have. If
None, the original image height will be used
@param resample: A resampling filter. This can be one of PIL.Image.NEAREST,
PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC, or
PIL.Image.LANCZOS
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def resize(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
width: Optional[int] = None,
height: Optional[int] = None,
resample: Any = Image.BILINEAR,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Resizes an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param width: the desired width the image should be resized to have. If
None, the original image width will be used
@param height: the desired height the image should be resized to have. If
None, the original image height will be used
@param resample: A resampling filter. This can be one of PIL.Image.NEAREST,
PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC, or
PIL.Image.LANCZOS
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert width is None or type(width) == int, "Width must be an integer"
assert height is None or type(height) == int, "Height must be an integer"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
im_w, im_h = image.size
aug_image = image.resize((width or im_w, height or im_h), resample)
imutils.get_metadata(
metadata=metadata,
function_name="resize",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
Rotates the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param degrees: the number of degrees by which the original image will be
rotated counterclockwise
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def rotate(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
degrees: float = 15.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Rotates the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param degrees: the number of degrees by which the original image will be
rotated counterclockwise
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert type(degrees) in [float, int], "Degrees must be an integer or a float"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
rotated_image = image.rotate(degrees, expand=True)
center_x, center_y = rotated_image.width / 2, rotated_image.height / 2
wr, hr = imutils.rotated_rect_with_max_area(image.width, image.height, degrees)
aug_image = rotated_image.crop(
(
int(center_x - wr / 2),
int(center_y - hr / 2),
int(center_x + wr / 2),
int(center_y + hr / 2),
)
)
imutils.get_metadata(
metadata=metadata,
function_name="rotate",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
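`rotate` crops the expanded, rotated canvas down to the largest axis-aligned rectangle containing no fill pixels, via `imutils.rotated_rect_with_max_area`. A standalone sketch of that computation (a standard geometric result; the real helper's exact signature may differ):

```python
import math

def rotated_rect_with_max_area(w: float, h: float, degrees: float):
    """Largest axis-aligned rectangle inside a w x h rectangle rotated by `degrees`."""
    angle = math.radians(degrees)
    width_is_longer = w >= h
    side_long, side_short = (w, h) if width_is_longer else (h, w)
    sin_a, cos_a = abs(math.sin(angle)), abs(math.cos(angle))
    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # Half-constrained case: two crop corners touch the longer side
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
    else:
        # Fully constrained case: the crop touches all four rotated sides
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return wr, hr
```

At 0° the whole image survives; at 90° width and height simply swap; at 45° a square keeps an inscribed square of side `s / sqrt(2)`.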
Alters the saturation of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a saturation factor below 1.0 lowers the saturation, a
factor of 1.0 gives the original image, and a factor greater than 1.0
increases the saturation
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def saturation(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Alters the saturation of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a saturation factor below 1.0 lowers the saturation, a
factor of 1.0 gives the original image, and a factor greater than 1.0
increases the saturation
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = ImageEnhance.Color(image).enhance(factor)
imutils.get_metadata(
metadata=metadata,
function_name="saturation",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
Alters the resolution of an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: the ratio by which the image should be downscaled or upscaled
@param interpolation: interpolation method. This can be one of PIL.Image.NEAREST,
PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC or
PIL.Image.LANCZOS
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def scale(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 0.5,
interpolation: Optional[int] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Alters the resolution of an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: the ratio by which the image should be downscaled or upscaled
@param interpolation: interpolation method. This can be one of PIL.Image.NEAREST,
PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC or
PIL.Image.LANCZOS
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
assert factor > 0, "Expected 'factor' to be a positive number"
assert interpolation in [
Image.NEAREST,
Image.BOX,
Image.BILINEAR,
Image.HAMMING,
Image.BICUBIC,
Image.LANCZOS,
None,
], "Invalid interpolation specified"
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
if interpolation is None:
interpolation = Image.LANCZOS if factor < 1 else Image.BILINEAR
width, height = image.size
scaled_width = int(width * factor)
scaled_height = int(height * factor)
# pyre-fixme[6]: Expected `Union[typing_extensions.Literal[0],
# typing_extensions.Literal[1], typing_extensions.Literal[2],
# typing_extensions.Literal[3], typing_extensions.Literal[4],
# typing_extensions.Literal[5], None]` for 2nd param but got `int`.
aug_image = image.resize((scaled_width, scaled_height), resample=interpolation)
imutils.get_metadata(
metadata=metadata,
function_name="scale",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
Changes the sharpness of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a factor below 1.0 blurs the image, a factor of 1.0 gives
the original image, and a factor greater than 1.0 sharpens the image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def sharpen(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Changes the sharpness of the image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a factor below 1.0 blurs the image, a factor of 1.0 gives
the original image, and a factor greater than 1.0 sharpens the image
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = ImageEnhance.Sharpness(image).enhance(factor)
imutils.get_metadata(
metadata=metadata,
function_name="sharpen",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
Shuffles the pixels of an image according to the shuffling factor. The
factor denotes the percentage of pixels to be shuffled, selected at random.
Note: the number of pixels actually displaced may be lower than the given
percentage, since shuffled pixels can land back in their original positions
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a control parameter between 0.0 and 1.0. While a factor of
0.0 returns the original image, a factor of 1.0 performs full shuffling
@param seed: seed for numpy random generator to select random pixels for shuffling
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def shuffle_pixels(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
factor: float = 1.0,
seed: int = 10,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Shuffles the pixels of an image according to the shuffling factor. The
factor denotes the percentage of pixels to be shuffled, selected at random.
Note: the number of pixels actually displaced may be lower than the given
percentage, since shuffled pixels can land back in their original positions
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param factor: a control parameter between 0.0 and 1.0. While a factor of
0.0 returns the original image, a factor of 1.0 performs full shuffling
@param seed: seed for numpy random generator to select random pixels for shuffling
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
np.random.seed(seed)
image = imutils.validate_and_load_image(image)
assert 0.0 <= factor <= 1.0, "'factor' must be a value in range [0, 1]"
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
if factor == 0.0:
aug_image = image
else:
aug_image = np.asarray(image, dtype=int)
height, width = aug_image.shape[:2]
number_of_channels = aug_image.size // (height * width)
number_of_pixels = height * width
aug_image = np.reshape(aug_image, (number_of_pixels, number_of_channels))
mask = np.random.choice(
number_of_pixels, size=int(factor * number_of_pixels), replace=False
)
pixels_to_be_shuffled = aug_image[mask]
np.random.shuffle(pixels_to_be_shuffled)
aug_image[mask] = pixels_to_be_shuffled
aug_image = np.reshape(aug_image, (height, width, number_of_channels))
aug_image = np.squeeze(aug_image)
aug_image = Image.fromarray(aug_image.astype("uint8"))
imutils.get_metadata(
metadata=metadata,
function_name="shuffle_pixels",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
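The partial-shuffle step at the heart of `shuffle_pixels` can be sketched in isolation on a flat array of pixel values (hypothetical helper name; the library operates on a reshaped image array instead):

```python
import numpy as np

def partial_shuffle(pixels: np.ndarray, factor: float, seed: int = 10) -> np.ndarray:
    """Shuffle a random `factor` fraction of entries, leaving the rest in place."""
    rng = np.random.default_rng(seed)
    out = pixels.copy()
    n = len(out)
    # Pick the subset of positions to shuffle, without replacement
    mask = rng.choice(n, size=int(factor * n), replace=False)
    out[mask] = rng.permutation(out[mask])
    return out

values = np.arange(100)
shuffled = partial_shuffle(values, factor=0.5)
```

The multiset of pixel values is preserved, and only positions inside the mask can change, which is why the effective shuffle rate can fall below `factor`.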
Skews an image with respect to its x or y-axis
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param skew_factor: the level of skew to apply to the image; a larger absolute value will
result in a more intense skew. Recommended range is between [-2, 2]
@param axis: the axis along which the image will be skewed; can be set to 0 (x-axis)
or 1 (y-axis)
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def skew(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
skew_factor: float = 0.5,
axis: int = 0,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Skews an image with respect to its x or y-axis
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param skew_factor: the level of skew to apply to the image; a larger absolute value will
result in a more intense skew. Recommended range is between [-2, 2]
@param axis: the axis along which the image will be skewed; can be set to 0 (x-axis)
or 1 (y-axis)
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
w, h = image.size
if axis == 0:
data = (1, skew_factor, -skew_factor * h / 2, 0, 1, 0)
elif axis == 1:
data = (1, 0, 0, skew_factor, 1, -skew_factor * w / 2)
else:
raise AssertionError(
f"Invalid 'axis' value: Got '{axis}', expected 0 for 'x-axis' or 1 for 'y-axis'"
)
aug_image = image.transform((w, h), Image.AFFINE, data, resample=Image.BILINEAR)
imutils.get_metadata(
metadata=metadata,
function_name="skew",
aug_image=aug_image,
bboxes_helper_func=spatial_bbox_helper,
aug_function=skew,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
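`Image.transform` with `Image.AFFINE` uses the 6-tuple `data = (a, b, c, d, e, f)` as an inverse map: each output pixel (x, y) samples the source at (a*x + b*y + c, d*x + e*y + f). A pure-Python check of the x-axis skew coefficients used above (illustrative values only):

```python
def affine_source_coords(x, y, data):
    """Where PIL's AFFINE transform samples the source image for output pixel (x, y)."""
    a, b, c, d, e, f = data
    return (a * x + b * y + c, d * x + e * y + f)

# x-axis skew coefficients for a 100-pixel-tall image with skew_factor=0.5,
# matching data = (1, skew_factor, -skew_factor * h / 2, 0, 1, 0)
skew_factor, h = 0.5, 100
data = (1, skew_factor, -skew_factor * h / 2, 0, 1, 0)
```

The `-skew_factor * h / 2` offset centers the shear: rows at the vertical midpoint are not displaced, while rows above and below shift in opposite directions.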
Vertically flips an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image | def vflip(
image: Union[str, Image.Image],
output_path: Optional[str] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
bboxes: Optional[List[Tuple]] = None,
bbox_format: Optional[str] = None,
) -> Image.Image:
"""
Vertically flips an image
@param image: the path to an image or a variable of type PIL.Image.Image
to be augmented
@param output_path: the path in which the resulting image will be stored.
If None, the resulting PIL Image will still be returned
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest width, height, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param bboxes: a list of bounding boxes can be passed in here if desired. If
provided, this list will be modified in place such that each bounding box is
transformed according to this function
@param bbox_format: signifies what bounding box format was used in `bboxes`. Must
specify `bbox_format` if `bboxes` is provided. Supported bbox_format values are
"pascal_voc", "pascal_voc_norm", "coco", and "yolo"
@returns: the augmented PIL Image
"""
image = imutils.validate_and_load_image(image)
func_kwargs = imutils.get_func_kwargs(metadata, locals())
src_mode = image.mode
aug_image = image.transpose(Image.FLIP_TOP_BOTTOM)
imutils.get_metadata(
metadata=metadata,
function_name="vflip",
aug_image=aug_image,
**func_kwargs,
)
return imutils.ret_and_save_image(aug_image, output_path, src_mode) |
This function is a wrapper on all image augmentation functions
such that a numpy array could be passed in as input instead of providing
the path to the image or a PIL Image
@param image: the numpy array representing the image to be augmented
@param aug_function: the augmentation function to be applied onto the image
@param **kwargs: the input attributes to be passed into the augmentation function | def aug_np_wrapper(
image: np.ndarray, aug_function: Callable[..., None], **kwargs
) -> np.ndarray:
"""
This function is a wrapper on all image augmentation functions
such that a numpy array could be passed in as input instead of providing
the path to the image or a PIL Image
@param image: the numpy array representing the image to be augmented
@param aug_function: the augmentation function to be applied onto the image
@param **kwargs: the input attributes to be passed into the augmentation function
"""
pil_image = Image.fromarray(image)
aug_image = aug_function(pil_image, **kwargs)
return np.array(aug_image) |
Computes intensity of any transform that resizes the src image. For these
types of transforms the intensity is defined as the percentage of image
area that has been cut out (if cropped/resized to smaller) or added (if
padding/resized to bigger). When computing the percentage, the denominator
should be the larger of the src & dst areas so the resulting percentage
isn't greater than 100. | def resize_intensity_helper(metadata: Dict[str, Any]) -> float:
"""
Computes intensity of any transform that resizes the src image. For these
types of transforms the intensity is defined as the percentage of image
area that has been cut out (if cropped/resized to smaller) or added (if
padding/resized to bigger). When computing the percentage, the denominator
should be the larger of the src & dst areas so the resulting percentage
isn't greater than 100.
"""
src_area = metadata["src_width"] * metadata["src_height"]
dst_area = metadata["dst_width"] * metadata["dst_height"]
larger_area = max(src_area, dst_area)
return (abs(dst_area - src_area) / larger_area) * 100.0 |
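A quick worked instance of the intensity formula (pure arithmetic, with hypothetical metadata values):

```python
def resize_intensity(src_w, src_h, dst_w, dst_h):
    """Percentage of image area removed or added, normalized by the larger area."""
    src_area, dst_area = src_w * src_h, dst_w * dst_h
    return abs(dst_area - src_area) / max(src_area, dst_area) * 100.0
```

Halving both dimensions cuts the area from 10000 to 2500, i.e. a 75% intensity; doubling both gives the same 75% because the larger (destination) area is the denominator, so the value never exceeds 100.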
If part of the bbox was cropped out in the x-axis, the left/right side will now be
0/1 respectively; otherwise the fraction x1 is cut off from the left & x2 from the
right and we renormalize with the new width. Analogous for the y-axis | def crop_bboxes_helper(
bbox: Tuple, x1: float, y1: float, x2: float, y2: float, **kwargs
) -> Tuple:
"""
If part of the bbox was cropped out in the x-axis, the left/right side will now be
0/1 respectively; otherwise the fraction x1 is cut off from the left & x2 from the
right and we renormalize with the new width. Analogous for the y-axis
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
new_w, new_h = x2 - x1, y2 - y1
return (
max(0, (left_factor - x1) / new_w),
max(0, (upper_factor - y1) / new_h),
min(1, 1 - (x2 - right_factor) / new_w),
min(1, 1 - (y2 - lower_factor) / new_h),
) |
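To see the renormalization concretely, suppose the crop keeps the central half of the image in each axis (x1 = y1 = 0.25, x2 = y2 = 0.75) and the bbox spans (0.25, 0.25, 0.5, 0.5). This standalone copy of the logic is illustrative; the helper name differs from the library's:

```python
def crop_bbox(bbox, x1, y1, x2, y2):
    """Re-express a normalized bbox in the coordinate frame of the crop window."""
    left, upper, right, lower = bbox
    new_w, new_h = x2 - x1, y2 - y1
    return (
        max(0, (left - x1) / new_w),   # clamp if the left edge was cropped away
        max(0, (upper - y1) / new_h),
        min(1, 1 - (x2 - right) / new_w),
        min(1, 1 - (y2 - lower) / new_h),
    )
```

The bbox's left/upper corner coincides with the crop origin, so it maps to (0, 0); its right/lower corner sits halfway through the crop window, mapping to (0.5, 0.5).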
When the src image is horizontally flipped, the bounding box also gets horizontally
flipped | def hflip_bboxes_helper(bbox: Tuple, **kwargs) -> Tuple:
"""
When the src image is horizontally flipped, the bounding box also gets horizontally
flipped
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
return (1 - right_factor, upper_factor, 1 - left_factor, lower_factor) |
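Mirroring complements the x-coordinates and swaps which one is the left edge, as a quick numeric check shows:

```python
def hflip_bbox(bbox):
    """Horizontally mirror a normalized (left, upper, right, lower) bbox."""
    left, upper, right, lower = bbox
    return (1 - right, upper, 1 - left, lower)
```

A box spanning x in [0.1, 0.4] lands at [0.6, 0.9] after the flip; the y-extent is untouched.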
The src image is offset vertically by caption_height pixels, so we normalize that to
get the y offset, add that to the upper & lower coordinates, & renormalize with the
new height. The x dimension is unaffected | def meme_format_bboxes_helper(
bbox: Tuple, src_w: int, src_h: int, caption_height: int, **kwargs
) -> Tuple:
"""
The src image is offset vertically by caption_height pixels, so we normalize that to
get the y offset, add that to the upper & lower coordinates, & renormalize with the
new height. The x dimension is unaffected
"""
left_f, upper_f, right_f, lower_f = bbox
y_off = caption_height / src_h
new_h = 1.0 + y_off
return left_f, (upper_f + y_off) / new_h, right_f, (lower_f + y_off) / new_h |
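For example, a 100px caption atop a 400px-tall source gives y_off = 0.25 and a new normalized height of 1.25 (hypothetical numbers; the logic mirrors the helper above):

```python
def meme_format_bbox(bbox, src_h, caption_height):
    """Shift a normalized bbox down by the caption height and renormalize."""
    left, upper, right, lower = bbox
    y_off = caption_height / src_h        # caption height relative to the source
    new_h = 1.0 + y_off                   # total height relative to the source
    return (left, (upper + y_off) / new_h, right, (lower + y_off) / new_h)
```

A bbox covering the full source height, (0.1, 0.0, 0.9, 1.0), becomes (0.1, 0.2, 0.9, 1.0): it now starts below the caption band but still ends at the bottom.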
The src image is overlaid on the dst image offset by (`x_pos`, `y_pos`) & with a
size of `overlay_size` (all relative to the dst image dimensions). So the bounding
box is also offset by (`x_pos`, `y_pos`) & scaled by `overlay_size`. It is also
possible that some of the src image will be cut off, so we take the max with 0/min
with 1 in order to crop the bbox if needed | def overlay_onto_background_image_bboxes_helper(
bbox: Tuple, overlay_size: float, x_pos: float, y_pos: float, **kwargs
) -> Tuple:
"""
The src image is overlaid on the dst image offset by (`x_pos`, `y_pos`) & with a
size of `overlay_size` (all relative to the dst image dimensions). So the bounding
box is also offset by (`x_pos`, `y_pos`) & scaled by `overlay_size`. It is also
possible that some of the src image will be cut off, so we take the max with 0/min
with 1 in order to crop the bbox if needed
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
return (
max(0, left_factor * overlay_size + x_pos),
max(0, upper_factor * overlay_size + y_pos),
min(1, right_factor * overlay_size + x_pos),
min(1, lower_factor * overlay_size + y_pos),
    )
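The scale-then-shift-then-clip logic above can be restated as a standalone sketch (function name is illustrative only): with an overlay half the background's size placed at (0.25, 0.25), a centered bbox lands centered in the background.

```python
def overlay_bg_bbox(bbox, overlay_size, x_pos, y_pos):
    # scale the bbox by the overlay size, shift by the overlay position,
    # and clip to [0, 1] in case part of the src image falls outside
    left, upper, right, lower = bbox
    return (
        max(0, left * overlay_size + x_pos),
        max(0, upper * overlay_size + y_pos),
        min(1, right * overlay_size + x_pos),
        min(1, lower * overlay_size + y_pos),
    )

# expect roughly (0.35, 0.35, 0.65, 0.65)
out = overlay_bg_bbox((0.2, 0.2, 0.8, 0.8), overlay_size=0.5, x_pos=0.25, y_pos=0.25)
```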


def overlay_image_bboxes_helper(
bbox: Tuple,
opacity: float,
overlay_size: float,
x_pos: float,
y_pos: float,
max_visible_opacity: float,
**kwargs,
) -> Tuple:
"""
We made a few decisions for this augmentation about how bboxes are defined:
1. If `opacity` < `max_visible_opacity` (default 0.75, can be specified by the user),
the bbox stays the same because it is still considered "visible" behind the
overlaid image
2. If the entire bbox is covered by the overlaid image, the bbox is no longer valid
so we return it as (0, 0, 0, 0), which will be turned to None in `check_bboxes()`
3. If the entire bottom of the bbox is covered by the overlaid image
(i.e. `x_pos < left_factor` & `x_pos + overlay_size > right_factor` &
`y_pos + overlay_size > lower_factor`), we crop out the lower part of the bbox
that is covered. The analogue is true for the top/left/right being occluded
4. If just the middle of the bbox is covered or a rectangle is sliced out of the
bbox, we consider that the bbox is unchanged, even though part of it is occluded.
This isn't ideal but otherwise it's very complicated; we could split the
remaining area into smaller visible bboxes, but then we would have to return
multiple dst bboxes corresponding to one src bbox
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
if opacity >= max_visible_opacity:
occluded_left = x_pos < left_factor
occluded_upper = y_pos < upper_factor
occluded_right = x_pos + overlay_size > right_factor
occluded_lower = y_pos + overlay_size > lower_factor
if occluded_left and occluded_right:
# If the bbox is completely covered, it's no longer valid so return zeros
if occluded_upper and occluded_lower:
return (0.0, 0.0, 0.0, 0.0)
if occluded_lower:
lower_factor = y_pos
elif occluded_upper:
upper_factor = y_pos + overlay_size
elif occluded_upper and occluded_lower:
if occluded_right:
right_factor = x_pos
elif occluded_left:
left_factor = x_pos + overlay_size
    return left_factor, upper_factor, right_factor, lower_factor
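The four occlusion rules can be exercised standalone; this sketch restates the branch logic above (with the opacity check omitted) so the two interesting cases are easy to verify: a fully covering overlay wipes out the bbox, and an overlay covering the whole bottom crops the bbox at the overlay's top edge.

```python
def occlude_bbox(bbox, overlay_size, x_pos, y_pos):
    left, upper, right, lower = bbox
    cov_l = x_pos < left
    cov_u = y_pos < upper
    cov_r = x_pos + overlay_size > right
    cov_lo = y_pos + overlay_size > lower
    if cov_l and cov_r:
        if cov_u and cov_lo:
            return (0.0, 0.0, 0.0, 0.0)  # fully covered -> invalid
        if cov_lo:
            lower = y_pos  # bottom strip covered: crop at overlay top
        elif cov_u:
            upper = y_pos + overlay_size
    elif cov_u and cov_lo:
        if cov_r:
            right = x_pos
        elif cov_l:
            left = x_pos + overlay_size
    return (left, upper, right, lower)

occluded = occlude_bbox((0.2, 0.2, 0.8, 0.8), 1.0, 0.0, 0.0)  # fully covered
partial = occlude_bbox((0.2, 0.2, 0.8, 0.8), 1.0, 0.0, 0.5)   # bottom covered
```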


def overlay_onto_screenshot_bboxes_helper(
bbox: Tuple,
src_w: int,
src_h: int,
template_filepath: str,
template_bboxes_filepath: str,
resize_src_to_match_template: bool,
max_image_size_pixels: int,
crop_src_to_fit: bool,
**kwargs,
) -> Tuple:
"""
We transform the bbox by applying all the same transformations as are applied in the
`overlay_onto_screenshot` function, each of which is mentioned below in comments
"""
left_f, upper_f, right_f, lower_f = bbox
template, tbbox = imutils.get_template_and_bbox(
template_filepath, template_bboxes_filepath
)
# Either src image or template image is scaled
if resize_src_to_match_template:
tbbox_w, tbbox_h = tbbox[2] - tbbox[0], tbbox[3] - tbbox[1]
src_scale_factor = min(tbbox_w / src_w, tbbox_h / src_h)
else:
template, tbbox = imutils.scale_template_image(
src_w,
src_h,
template,
tbbox,
max_image_size_pixels,
crop_src_to_fit,
)
tbbox_w, tbbox_h = tbbox[2] - tbbox[0], tbbox[3] - tbbox[1]
src_scale_factor = 1
template_w, template_h = template.size
x_off, y_off = tbbox[:2]
# Src image is scaled (if resize_src_to_match_template)
curr_w, curr_h = src_w * src_scale_factor, src_h * src_scale_factor
left, upper, right, lower = (
left_f * curr_w,
upper_f * curr_h,
right_f * curr_w,
lower_f * curr_h,
)
# Src image is cropped to (tbbox_w, tbbox_h)
if crop_src_to_fit:
dx, dy = (curr_w - tbbox_w) // 2, (curr_h - tbbox_h) // 2
x1, y1, x2, y2 = dx, dy, dx + tbbox_w, dy + tbbox_h
left_f, upper_f, right_f, lower_f = crop_bboxes_helper(
bbox, x1 / curr_w, y1 / curr_h, x2 / curr_w, y2 / curr_h
)
left, upper, right, lower = (
left_f * tbbox_w,
upper_f * tbbox_h,
right_f * tbbox_w,
lower_f * tbbox_h,
)
# Src image is resized to (tbbox_w, tbbox_h)
else:
resize_f = min(tbbox_w / curr_w, tbbox_h / curr_h)
left, upper, right, lower = (
left * resize_f,
upper * resize_f,
right * resize_f,
lower * resize_f,
)
curr_w, curr_h = curr_w * resize_f, curr_h * resize_f
# Padding with black
padding_x = max(0, (tbbox_w - curr_w) // 2)
padding_y = max(0, (tbbox_h - curr_h) // 2)
left, upper, right, lower = (
left + padding_x,
upper + padding_y,
right + padding_x,
lower + padding_y,
)
# Src image is overlaid onto template image
left, upper, right, lower = (
left + x_off,
upper + y_off,
right + x_off,
lower + y_off,
)
    return left / template_w, upper / template_h, right / template_w, lower / template_h


def pad_bboxes_helper(bbox: Tuple, w_factor: float, h_factor: float, **kwargs) -> Tuple:
"""
The src image is padded horizontally with w_factor * src_w, so the bbox gets shifted
over by w_factor and then renormalized over the new width. Vertical padding is
analogous
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
new_w = 1 + 2 * w_factor
new_h = 1 + 2 * h_factor
return (
(left_factor + w_factor) / new_w,
(upper_factor + h_factor) / new_h,
(right_factor + w_factor) / new_w,
(lower_factor + h_factor) / new_h,
    )
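The shift-and-renormalize arithmetic is easy to verify with symmetric padding: padding by half the image size on every side doubles each dimension, so the full-image bbox shrinks to the central quarter.

```python
def pad_bbox(bbox, w_factor, h_factor):
    # padding w_factor * src_w on each side stretches the normalized
    # width to 1 + 2 * w_factor; shift then renormalize (same for height)
    left, upper, right, lower = bbox
    new_w, new_h = 1 + 2 * w_factor, 1 + 2 * h_factor
    return (
        (left + w_factor) / new_w,
        (upper + h_factor) / new_h,
        (right + w_factor) / new_w,
        (lower + h_factor) / new_h,
    )

out = pad_bbox((0.0, 0.0, 1.0, 1.0), w_factor=0.5, h_factor=0.5)  # (0.25, 0.25, 0.75, 0.75)
```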


def pad_square_bboxes_helper(bbox: Tuple, src_w: int, src_h: int, **kwargs) -> Tuple:
"""
In pad_square, pad is called with w_factor & h_factor computed as follows, so we can
use the `pad_bboxes_helper` function to transform the bbox
"""
w_factor, h_factor = 0, 0
if src_w < src_h:
w_factor = (src_h - src_w) / (2 * src_w)
else:
h_factor = (src_w - src_h) / (2 * src_h)
    return pad_bboxes_helper(bbox, w_factor=w_factor, h_factor=h_factor)
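A worked example of the factor computation (dimensions are made up): a portrait 100x200 image needs `w_factor = 0.5` to become square, and the full-image bbox then occupies the central half horizontally while staying full-height.

```python
src_w, src_h = 100, 200  # hypothetical portrait image
w_factor, h_factor = 0.0, 0.0
if src_w < src_h:
    w_factor = (src_h - src_w) / (2 * src_w)  # 0.5 here
else:
    h_factor = (src_w - src_h) / (2 * src_h)

# apply the pad arithmetic to the full-image bbox (0, 0, 1, 1)
new_w = 1 + 2 * w_factor
box = ((0.0 + w_factor) / new_w, 0.0, (1.0 + w_factor) / new_w, 1.0)
```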


def perspective_transform_bboxes_helper(
bbox: Tuple,
src_w: int,
src_h: int,
sigma: float,
dx: float,
dy: float,
crop_out_black_border: bool,
seed: Optional[int],
**kwargs,
) -> Tuple:
"""
Computes the bbox that encloses the bbox in the perspective transformed image. Also
uses the `crop_bboxes_helper` function since the image is cropped if
`crop_out_black_border` is True.
"""
def transform(x: float, y: float, a: List[float]) -> Tuple:
"""
Transforms a point in the image given the perspective transform matrix; we will
use this to transform the bounding box corners. Based on PIL source code:
https://github.com/python-pillow/Pillow/blob/master/src/libImaging/Geometry.c#L399
"""
return (
(a[0] * x + a[1] * y + a[2]) / (a[6] * x + a[7] * y + a[8]),
(a[3] * x + a[4] * y + a[5]) / (a[6] * x + a[7] * y + a[8]),
)
def get_perspective_transform(
src_coords: List[Tuple[int, int]], dst_coords: List[Tuple[int, int]]
) -> List[float]:
"""
Computes the transformation matrix used for the perspective transform with
the given src & dst corner coordinates. Based on OpenCV source code:
https://github.com/opencv/opencv/blob/master/modules/imgproc/src/imgwarp.cpp#L3277-L3304
"""
a = np.zeros((8, 8), dtype=float)
dst_x, dst_y = zip(*dst_coords)
b = np.asarray(list(dst_x) + list(dst_y))
for i, (sc, dc) in enumerate(zip(src_coords, dst_coords)):
a[i][0] = a[i + 4][3] = sc[0]
a[i][1] = a[i + 4][4] = sc[1]
a[i][2] = a[i + 4][5] = 1
a[i][6] = -sc[0] * dc[0]
a[i][7] = -sc[1] * dc[0]
a[i + 4][6] = -sc[0] * dc[1]
a[i + 4][7] = -sc[1] * dc[1]
A = np.matrix(a, dtype=float)
B = np.array(b).reshape(8)
res = np.linalg.solve(A, B)
return np.array(res).reshape(8).tolist() + [1.0]
assert (
seed is not None
), "Cannot transform bbox for perspective_transform if seed is not provided"
rng = np.random.RandomState(seed)
src_coords = [(0, 0), (src_w, 0), (src_w, src_h), (0, src_h)]
dst_coords = [
(rng.normal(point[0], sigma) + dx, rng.normal(point[1], sigma) + dy)
for point in src_coords
]
perspective_transform_coeffs = get_perspective_transform(src_coords, dst_coords)
left_f, upper_f, right_f, lower_f = bbox
left, upper, right, lower = (
left_f * src_w,
upper_f * src_h,
right_f * src_w,
lower_f * src_h,
)
bbox_coords = [(left, upper), (right, upper), (right, lower), (left, lower)]
transformed_bbox_coords = [
transform(x + 0.5, y + 0.5, perspective_transform_coeffs)
for x, y in bbox_coords
]
transformed_xs, transformed_ys = zip(*transformed_bbox_coords)
transformed_bbox = (
max(0, min(transformed_xs) / src_w),
max(0, min(transformed_ys) / src_h),
min(1, max(transformed_xs) / src_w),
min(1, max(transformed_ys) / src_h),
)
# This is copy-pasted from `functional.py`, exactly how the crop coords are computed
if crop_out_black_border:
top_left, top_right, bottom_right, bottom_left = dst_coords
new_left = max(0, top_left[0], bottom_left[0])
new_right = min(src_w, top_right[0], bottom_right[0])
new_top = max(0, top_left[1], top_right[1])
new_bottom = min(src_h, bottom_left[1], bottom_right[1])
transformed_bbox = crop_bboxes_helper(
transformed_bbox,
x1=new_left / src_w,
y1=new_top / src_h,
x2=new_right / src_w,
y2=new_bottom / src_h,
)
    return transformed_bbox


def rotate_bboxes_helper(
bbox: Tuple, src_w: int, src_h: int, degrees: float, **kwargs
) -> Tuple:
"""
Computes the bbox that encloses the rotated bbox in the rotated image. This code was
informed by looking at the source code for PIL.Image.rotate
(https://pillow.readthedocs.io/en/stable/_modules/PIL/Image.html#Image.rotate).
Also uses the `crop_bboxes_helper` function since the image is cropped after being
rotated.
"""
left_f, upper_f, right_f, lower_f = bbox
left, upper, right, lower = (
left_f * src_w,
upper_f * src_h,
right_f * src_w,
lower_f * src_h,
)
# Top left, upper right, lower right, & lower left corner coefficients (in pixels)
bbox_corners = [(left, upper), (right, upper), (right, lower), (left, lower)]
def transform(x: int, y: int, matrix: List[float]) -> Tuple[float, float]:
(a, b, c, d, e, f) = matrix
return a * x + b * y + c, d * x + e * y + f
def get_enclosing_bbox(
corners: List[Tuple[int, int]], rotation_matrix: List[float]
) -> Tuple[int, int, int, int]:
rotated_corners = [transform(x, y, rotation_matrix) for x, y in corners]
xs, ys = zip(*rotated_corners)
return (
math.floor(min(xs)),
math.floor(min(ys)),
math.ceil(max(xs)),
math.ceil(max(ys)),
)
# Get rotated bbox corner coefficients
rotation_center = (src_w // 2, src_h // 2)
angle_rad = -math.radians(degrees)
rotation_matrix = [
round(math.cos(angle_rad), 15),
round(math.sin(angle_rad), 15),
0.0,
        round(-math.sin(angle_rad), 15),
        round(math.cos(angle_rad), 15),
0.0,
]
rotation_matrix[2], rotation_matrix[5] = transform(
-rotation_center[0], -rotation_center[1], rotation_matrix
)
rotation_matrix[2] += rotation_center[0]
rotation_matrix[5] += rotation_center[1]
# Get rotated image dimensions
src_img_corners = [(0, 0), (src_w, 0), (src_w, src_h), (0, src_h)]
(
rotated_img_min_x,
rotated_img_min_y,
rotated_img_max_x,
rotated_img_max_y,
) = get_enclosing_bbox(src_img_corners, rotation_matrix)
rotated_img_w = rotated_img_max_x - rotated_img_min_x
rotated_img_h = rotated_img_max_y - rotated_img_min_y
# Get enclosing box corners around rotated bbox (on rotated image)
new_bbox_left, new_bbox_upper, new_bbox_right, new_bbox_lower = get_enclosing_bbox(
bbox_corners, rotation_matrix
)
bbox_enclosing_bbox = (
new_bbox_left / rotated_img_w,
new_bbox_upper / rotated_img_h,
new_bbox_right / rotated_img_w,
new_bbox_lower / rotated_img_h,
)
# Crop bbox as src image is cropped inside `rotate`
cropped_w, cropped_h = imutils.rotated_rect_with_max_area(src_w, src_h, degrees)
cropped_img_left, cropped_img_upper, cropped_img_right, cropped_img_lower = (
(rotated_img_w - cropped_w) // 2 + rotated_img_min_x,
(rotated_img_h - cropped_h) // 2 + rotated_img_min_y,
(rotated_img_w + cropped_w) // 2 + rotated_img_min_x,
(rotated_img_h + cropped_h) // 2 + rotated_img_min_y,
)
return crop_bboxes_helper(
bbox_enclosing_bbox,
x1=cropped_img_left / rotated_img_w,
y1=cropped_img_upper / rotated_img_h,
x2=cropped_img_right / rotated_img_w,
y2=cropped_img_lower / rotated_img_h,
    )
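The enclosing-box idea can be checked independently with plain trigonometry: rotate the four bbox corners about the image center and take the min/max. This is a sketch, not the exact PIL matrix used above; a centered square bbox should be invariant (up to float rounding) under a 90-degree turn.

```python
import math

def enclosing_bbox_after_rotation(corners, degrees, cx, cy):
    # rotate each corner about (cx, cy), then take the axis-aligned bounds
    a = math.radians(degrees)
    pts = []
    for x, y in corners:
        dx, dy = x - cx, y - cy
        pts.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

box = enclosing_bbox_after_rotation(
    [(25, 25), (75, 25), (75, 75), (25, 75)], 90, cx=50, cy=50
)
```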


def spatial_bbox_helper(
bbox: Tuple[float, float, float, float],
src_w: int,
src_h: int,
aug_function: Callable,
**kwargs,
) -> Tuple:
"""
Computes the bbox that encloses the transformed bbox in the image transformed by
`aug_function`. This helper can be used to compute the transformed bbox for any
augmentation which doesn't affect the color of the source image (e.g. any spatial
augmentation).
"""
dummy_image = Image.new("RGB", (src_w, src_h))
draw = ImageDraw.Draw(dummy_image)
draw.rectangle(
(bbox[0] * src_w, bbox[1] * src_h, bbox[2] * src_w, bbox[3] * src_h),
fill="white",
)
aug_image = aug_function(dummy_image, **kwargs)
aug_w, aug_h = aug_image.size
array_image = np.array(aug_image)
white_y, white_x, _ = np.where(array_image > 0)
min_x, max_x = np.min(white_x), np.max(white_x)
min_y, max_y = np.min(white_y), np.max(white_y)
    return (min_x / aug_w, min_y / aug_h, max_x / aug_w, max_y / aug_h)
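The draw-a-white-rectangle trick works with any spatial PIL operation; here `ImageOps.mirror` stands in for `aug_function` (a hypothetical choice for the demo). Mirroring the white box should recover roughly the hflip of the bbox, up to pixel discretization.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageOps

src_w, src_h = 100, 100
bbox = (0.1, 0.2, 0.4, 0.8)
dummy = Image.new("RGB", (src_w, src_h))
ImageDraw.Draw(dummy).rectangle(
    (bbox[0] * src_w, bbox[1] * src_h, bbox[2] * src_w, bbox[3] * src_h),
    fill="white",
)
aug = ImageOps.mirror(dummy)  # stand-in for any spatial aug_function
ys, xs, _ = np.where(np.array(aug) > 0)  # locate the surviving white pixels
out = (xs.min() / src_w, ys.min() / src_h, xs.max() / src_w, ys.max() / src_h)
```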


def vflip_bboxes_helper(bbox: Tuple, **kwargs) -> Tuple:
"""
Analogous to hflip, when the src image is vertically flipped, the bounding box also
gets vertically flipped
"""
left_factor, upper_factor, right_factor, lower_factor = bbox
    return (left_factor, 1 - lower_factor, right_factor, 1 - upper_factor)


def check_for_gone_bboxes(transformed_bboxes: List[Tuple]) -> List[Optional[Tuple]]:
"""
When a bounding box is cropped out of the image or something is overlaid
which obfuscates it, we consider the bbox to no longer be visible/valid, so
we will return it as None
"""
checked_bboxes = []
for transformed_bbox in transformed_bboxes:
left_factor, upper_factor, right_factor, lower_factor = transformed_bbox
checked_bboxes.append(
None
if left_factor >= right_factor or upper_factor >= lower_factor
else transformed_bbox
)
    return checked_bboxes
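The invalid-bbox convention is simple to demo standalone: a zero-area (or inverted) box becomes `None`, everything else passes through unchanged.

```python
def check_gone(bboxes):
    # a zero-area or inverted box means the object is no longer visible
    return [
        None if left >= right or upper >= lower else (left, upper, right, lower)
        for (left, upper, right, lower) in bboxes
    ]

out = check_gone([(0.0, 0.0, 0.0, 0.0), (0.1, 0.1, 0.5, 0.5)])  # [None, (0.1, 0.1, 0.5, 0.5)]
```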


def validate_and_load_image(image: Union[str, Image.Image]) -> Image.Image:
"""
If image is a str, loads the image as a PIL Image and returns it. Otherwise,
we assert that image is a PIL Image and then return it.
"""
if isinstance(image, str):
local_path = utils.pathmgr.get_local_path(image)
utils.validate_image_path(local_path)
return Image.open(local_path)
assert isinstance(
image, Image.Image
), "Expected type PIL.Image.Image for variable 'image'"
    return image


def rotated_rect_with_max_area(w: int, h: int, angle: float) -> Tuple[float, float]:
"""
Computes the width and height of the largest possible axis-aligned
rectangle (maximal area) within the rotated rectangle
source:
https://stackoverflow.com/questions/16702966/rotate-image-and-crop-out-black-borders # noqa: B950
"""
width_is_longer = w >= h
side_long, side_short = (w, h) if width_is_longer else (h, w)
sin_a = abs(math.sin(math.radians(angle)))
cos_a = abs(math.cos(math.radians(angle)))
if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
x = 0.5 * side_short
wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
else:
cos_2a = cos_a * cos_a - sin_a * sin_a
wr = (w * cos_a - h * sin_a) / cos_2a
hr = (h * cos_a - w * sin_a) / cos_2a
    return wr, hr
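The formula can be exercised with the classic case of a square rotated 45 degrees, where the maximal inner rectangle has side `100 / sqrt(2) ≈ 70.7`. The function body is restated below so the snippet runs standalone; note the `abs(sin_a - cos_a) < 1e-10` guard is what routes the 45-degree case into the first branch despite float rounding.

```python
import math

def max_area_inner_rect(w, h, angle):
    # standalone restatement of the formula above
    side_long, side_short = max(w, h), min(w, h)
    sin_a = abs(math.sin(math.radians(angle)))
    cos_a = abs(math.cos(math.radians(angle)))
    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if w >= h else (x / cos_a, x / sin_a)
    else:
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return wr, hr

wr, hr = max_area_inner_rect(100, 100, 45)
```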


def pad_with_black(src: Image.Image, w: int, h: int) -> Image.Image:
"""
Returns the image src with the x dimension padded to width w if it was
smaller than w (and likewise for the y dimension with height h)
"""
curr_w, curr_h = src.size
dx = max(0, (w - curr_w) // 2)
dy = max(0, (h - curr_h) // 2)
padded = Image.new("RGB", (w, h))
padded.paste(src, (dx, dy, curr_w + dx, curr_h + dy))
    return padded
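The paste-onto-black-canvas approach can be demoed directly with Pillow (dimensions chosen arbitrarily): a 10x10 red image centered on a 20x16 black canvas keeps its pixels at the offset position, with black everywhere else.

```python
from PIL import Image

src = Image.new("RGB", (10, 10), "red")
w, h = 20, 16
dx, dy = (w - 10) // 2, (h - 10) // 2  # centering offsets: 5, 3
padded = Image.new("RGB", (w, h))      # black canvas by default
padded.paste(src, (dx, dy, 10 + dx, 10 + dy))
```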


def resize_and_pad_to_given_size(
src: Image.Image, w: int, h: int, crop: bool
) -> Image.Image:
"""
Returns the image src resized & padded with black if needed for the screenshot
transformation (i.e. if the spot for the image in the template is too small or
too big for the src image). If crop is True, will crop the src image if necessary
to fit into the template image; otherwise, will resize if necessary
"""
curr_w, curr_h = src.size
if crop:
dx = (curr_w - w) // 2
dy = (curr_h - h) // 2
src = src.crop((dx, dy, w + dx, h + dy))
curr_w, curr_h = src.size
elif curr_w > w or curr_h > h:
resize_factor = min(w / curr_w, h / curr_h)
new_w = int(curr_w * resize_factor)
new_h = int(curr_h * resize_factor)
src = src.resize((new_w, new_h), resample=Image.BILINEAR)
curr_w, curr_h = src.size
if curr_w < w or curr_h < h:
src = pad_with_black(src, w, h)
    return src


def scale_template_image(
src_w: int,
src_h: int,
template_image: Image.Image,
bbox: Tuple[int, int, int, int],
max_image_size_pixels: Optional[int],
crop: bool,
) -> Tuple[Image.Image, Tuple[int, int, int, int]]:
"""
Return template_image, and bbox resized to fit the src image. Takes in the
width & height of the src image plus the bounding box where the src image
will be inserted into template_image. If the template bounding box is
bigger than src image in both dimensions, template_image is scaled down
such that the dimension that was closest to src_image matches, without
changing the aspect ratio (and bbox is scaled proportionally). Similarly if
src image is bigger than the bbox in both dimensions, template_image and
the bbox are scaled up.
"""
template_w, template_h = template_image.size
left, upper, right, lower = bbox
bbox_w, bbox_h = right - left, lower - upper
# Scale up/down template_image & bbox
if crop:
resize_factor = min(src_w / bbox_w, src_h / bbox_h)
else:
resize_factor = max(src_w / bbox_w, src_h / bbox_h)
# If a max image size is provided & the resized template image would be too large,
# resize the template image to the max image size.
if max_image_size_pixels is not None:
template_size = template_w * template_h
if template_size * resize_factor**2 > max_image_size_pixels:
resize_factor = math.sqrt(max_image_size_pixels / template_size)
template_w = int(template_w * resize_factor)
template_h = int(template_h * resize_factor)
bbox_w, bbox_h = int(bbox_w * resize_factor), int(bbox_h * resize_factor)
left, upper = int(left * resize_factor), int(upper * resize_factor)
right, lower = left + bbox_w, upper + bbox_h
bbox = (left, upper, right, lower)
template_image = template_image.resize(
(template_w, template_h), resample=Image.BILINEAR
)
    return template_image, bbox


def square_center_crop(src: Image.Image) -> Image.Image:
"""Returns a square crop of the center of the image"""
w, h = src.size
smallest_edge = min(w, h)
dx = (w - smallest_edge) // 2
dy = (h - smallest_edge) // 2
    return src.crop((dx, dy, dx + smallest_edge, dy + smallest_edge))


def compute_transform_coeffs(
src_coords: List[Tuple[int, int]], dst_coords: List[Tuple[float, float]]
) -> np.ndarray:
"""
Given the starting & desired corner coordinates, computes the
coefficients required by the perspective transform.
"""
matrix = []
for sc, dc in zip(src_coords, dst_coords):
matrix.append([dc[0], dc[1], 1, 0, 0, 0, -sc[0] * dc[0], -sc[0] * dc[1]])
matrix.append([0, 0, 0, dc[0], dc[1], 1, -sc[1] * dc[0], -sc[1] * dc[1]])
A = np.matrix(matrix, dtype=float)
B = np.array(src_coords).reshape(8)
res = np.dot(np.linalg.inv(A.T * A) * A.T, B)
    return np.array(res).reshape(8)


def compute_stripe_mask(
src_w: int, src_h: int, line_width: float, line_angle: float, line_density: float
) -> np.ndarray:
"""
Given stripe parameters such as stripe width, angle, and density, returns
a binary mask of the same size as the source image indicating the location
of stripes. This implementation is inspired by
https://stackoverflow.com/questions/34043381/how-to-create-diagonal-stripe-patterns-and-checkerboard-patterns
"""
line_angle *= math.pi / 180
line_distance = (1 - line_density) * min(src_w, src_h)
y_period = math.cos(line_angle) / line_distance
x_period = math.sin(line_angle) / line_distance
y_coord_range = np.arange(0, src_h) - src_h / 2
x_coord_range = np.arange(0, src_w) - src_w / 2
x_grid_coords, y_grid_coords = np.meshgrid(x_coord_range, y_coord_range)
if abs(line_angle) == math.pi / 2 or abs(line_angle) == 3 * math.pi / 2:
# Compute mask for vertical stripes
softmax_mask = (np.cos(2 * math.pi * x_period * x_grid_coords) + 1) / 2
elif line_angle == 0 or abs(line_angle) == math.pi:
# Compute mask for horizontal stripes
softmax_mask = (np.cos(2 * math.pi * y_period * y_grid_coords) + 1) / 2
else:
# Compute mask for diagonal stripes
softmax_mask = (
np.cos(2 * math.pi * (x_period * x_grid_coords + y_period * y_grid_coords))
+ 1
) / 2
binary_mask = softmax_mask > (math.cos(math.pi * line_width) + 1) / 2
    return binary_mask
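The core idea, thresholding a cosine wave to get a binary stripe pattern, can be sketched in a few lines for the horizontal-stripe case. The period here is a made-up constant; the real code derives it from `line_density` and the image size.

```python
import numpy as np

h, w = 8, 8
period = 4  # hypothetical: stripe pattern repeats every 4 rows
# center the coordinates, as the function above does
y = np.arange(h).reshape(-1, 1) - h / 2
# soft mask in [0, 1]; thresholding it yields the binary stripes
soft = (np.cos(2 * np.pi * y / period) + 1) / 2
mask = np.broadcast_to(soft > 0.5, (h, w))
```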


def apply_lambda(
texts: Union[str, List[str]],
aug_function: Callable[..., List[str]] = lambda x: x,
metadata: Optional[List[Dict[str, Any]]] = None,
**kwargs,
) -> Union[str, List[str]]:
"""
Apply a user-defined lambda on a list of text documents
@param texts: a string or a list of text documents to be augmented
@param aug_function: the augmentation function to be applied onto the text
(should expect a list of text documents as input and return a list of
text documents)
@param **kwargs: the input attributes to be passed into the augmentation
function to be applied
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
assert callable(aug_function), (
repr(type(aug_function).__name__) + " object is not callable"
)
func_kwargs = deepcopy(locals())
if aug_function is not None:
try:
func_kwargs["aug_function"] = aug_function.__name__
except AttributeError:
func_kwargs["aug_function"] = type(aug_function).__name__
func_kwargs = txtutils.get_func_kwargs(metadata, func_kwargs)
aug_texts = aug_function(texts, **kwargs)
txtutils.get_metadata(
metadata=metadata,
function_name="apply_lambda",
aug_texts=aug_texts,
**func_kwargs,
)
    return aug_texts
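The contract an `aug_function` must satisfy is just list-of-texts in, list-of-texts out; here is a hypothetical one (the name and behavior are made up for the demo):

```python
def shout(texts):
    # a valid aug_function: takes a list of texts, returns a list of texts
    return [t.upper() for t in texts]

aug = shout(["hello world", "good morning"])  # ['HELLO WORLD', 'GOOD MORNING']
```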


def change_case(
texts: Union[str, List[str]],
granularity: str = "word",
cadence: float = 1.0,
case: str = "random",
seed: Optional[int] = 10,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Changes the case (e.g. upper, lower, title) of random chars, words, or the entire
text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' (case of the entire text is changed), 'word' (case of
random words is changed), or 'char' (case of random chars is changed)
@param cadence: how frequent (i.e. between this many characters/words) to change the
case. Must be at least 1.0. Non-integer values are used as an 'average' cadence.
Not used for granularity 'all'
@param case: the case to change words to; valid values are 'lower', 'upper', 'title',
or 'random' (in which case the case will randomly be changed to one of the
previous three)
@param seed: if provided, this will set the random seed to ensure consistency between
runs
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
case_aug = a.CaseAugmenter(case, granularity, cadence, seed)
aug_texts = case_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="change_case",
aug_texts=aug_texts,
**func_kwargs,
)
    return aug_texts
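How a non-integer cadence selects positions can be sketched standalone. This is a simplified illustration of the word-granularity case only (the real `CaseAugmenter` also handles char granularity, lower/title case, and seeded randomness): position indices are generated at multiples of the cadence and rounded, so a cadence of 2.0 hits every other word.

```python
def upper_every_nth_word(text, cadence=2.0):
    # upper-case the word at (roughly) every `cadence`-th position
    words = text.split()
    targets = {int(round(i * cadence)) for i in range(int(len(words) / cadence) + 1)}
    return " ".join(
        w.upper() if i in targets else w for i, w in enumerate(words)
    )

out = upper_every_nth_word("the quick brown fox jumps", cadence=2.0)  # 'THE quick BROWN fox JUMPS'
```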


def contractions(
texts: Union[str, List[str]],
aug_p: float = 0.3,
mapping: Optional[Union[str, Dict[str, Any]]] = CONTRACTIONS_MAPPING,
max_contraction_length: int = 2,
seed: Optional[int] = 10,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces pairs (or longer strings) of words with contractions given a mapping
@param texts: a string or a list of text documents to be augmented
@param aug_p: the probability that each pair (or longer string) of words will be
replaced with the corresponding contraction, if there is one in the mapping
@param mapping: either a dictionary representing the mapping or an iopath uri where
the mapping is stored
@param max_contraction_length: the words in each text will be checked for matches in
the mapping up to this length; i.e. if 'max_contraction_length' is 3 then every
substring of 2 *and* 3 words will be checked
@param seed: if provided, this will set the random seed to ensure consistency between
runs
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
assert 0 <= aug_p <= 1, "'aug_p' must be in the range [0, 1]"
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
contraction_aug = a.ContractionAugmenter(
aug_p, mapping, max_contraction_length, seed
)
aug_texts = contraction_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="contractions",
aug_texts=aug_texts,
**func_kwargs,
)
    return aug_texts
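The sliding-window matching that `max_contraction_length` controls can be sketched with a tiny hypothetical mapping (the real `CONTRACTIONS_MAPPING` is larger and loaded from a file, and the real augmenter applies each replacement only with probability `aug_p`):

```python
mapping = {"do not": "don't", "i am": "i'm"}  # hypothetical mini-mapping

def contract(text, mapping, max_contraction_length=2):
    words = text.split()
    out, i = [], 0
    while i < len(words):
        for n in range(max_contraction_length, 1, -1):  # prefer longer matches
            phrase = " ".join(words[i:i + n]).lower()
            if phrase in mapping:
                out.append(mapping[phrase])
                i += n
                break
        else:  # no contraction matched starting at word i
            out.append(words[i])
            i += 1
    return " ".join(out)

result = contract("i am sure you do not mind", mapping)  # "i'm sure you don't mind"
```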


def get_baseline(
texts: Union[str, List[str]],
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Generates a baseline by tokenizing and detokenizing the text
@param texts: a string or a list of text documents to be augmented
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
baseline_aug = a.BaselineAugmenter()
aug_texts = baseline_aug.augment(texts, 1)
txtutils.get_metadata(
metadata=metadata,
function_name="get_baseline",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
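The baseline is essentially a no-op round trip through the tokenizer. A rough stdlib equivalent, assuming simple whitespace tokenization (the real tokenizer is more involved):

```python
def baseline(text):
    # Tokenize, then detokenize: the content is unchanged, but spacing
    # is normalized, which is why this serves as a comparison baseline.
    return " ".join(text.split())
```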
Inserts punctuation characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert a
punctuation character. Must be at least 1.0. Non-integer values are used
as an 'average' cadence
@param vary_chars: if true, picks a different punctuation char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts | def insert_punctuation_chars(
texts: Union[str, List[str]],
granularity: str = "all",
cadence: float = 1.0,
vary_chars: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Inserts punctuation characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert a
punctuation character. Must be at least 1.0. Non-integer values are used
as an 'average' cadence
@param vary_chars: if true, picks a different punctuation char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
punctuation_aug = a.InsertionAugmenter(
"punctuation", granularity, cadence, vary_chars
)
aug_texts = punctuation_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="insert_punctuation_chars",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
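The cadence mechanics can be illustrated with a small sketch (a hypothetical `insert_every` helper; the real augmenter additionally supports per-word resets via `granularity` and varying characters via `vary_chars`):

```python
def insert_every(text, char="!", cadence=2.0):
    # Insert `char` before every round(i * cadence)-th character, so a
    # non-integer cadence averages out over the string.
    targets = {round(i * cadence) for i in range(len(text))}
    out = []
    for pos, c in enumerate(text):
        if pos in targets:
            out.append(char)
        out.append(c)
    return "".join(out)
```

For example, `insert_every("abcd", "!", 2.0)` produces `"!ab!cd"`, one insertion per two characters.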
Inserts some specified text into the input text a given number of times at a given
location
@param texts: a string or a list of text documents to be augmented
@param insert_text: a list of text to sample from and insert into each text in texts
@param num_insertions: the number of times to sample from insert_text and insert
@param insertion_location: where to insert the insert_text in the input text; valid
values are "prepend", "append", or "random" (inserts at a random index between
words in the input text)
@param seed: if provided, this will set the random seed to ensure consistency between
runs
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def insert_text(
texts: Union[str, List[str]],
insert_text: List[str],
num_insertions: int = 1,
insertion_location: str = "random",
seed: Optional[int] = 10,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Inserts some specified text into the input text a given number of times at a given
location
@param texts: a string or a list of text documents to be augmented
@param insert_text: a list of text to sample from and insert into each text in texts
@param num_insertions: the number of times to sample from insert_text and insert
@param insertion_location: where to insert the insert_text in the input text; valid
values are "prepend", "append", or "random" (inserts at a random index between
words in the input text)
@param seed: if provided, this will set the random seed to ensure consistency between
runs
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
insert_texts_aug = a.InsertTextAugmenter(num_insertions, insertion_location, seed)
aug_texts = insert_texts_aug.augment(texts, insert_text)
txtutils.get_metadata(
metadata=metadata,
function_name="insert_text",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
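The three insertion locations can be sketched as follows (a hypothetical `insert_snippets` helper operating on whitespace-split words, not the library code):

```python
import random

def insert_snippets(text, snippets, num_insertions=1, location="random", seed=10):
    # Sample from `snippets` and splice each sample into the word list
    # at the requested location.
    rng = random.Random(seed)
    words = text.split()
    for _ in range(num_insertions):
        snippet = rng.choice(snippets)
        if location == "prepend":
            words.insert(0, snippet)
        elif location == "append":
            words.append(snippet)
        else:  # "random": any index between words
            words.insert(rng.randrange(len(words) + 1), snippet)
    return " ".join(words)
```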
Inserts whitespace characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert a
whitespace character. Must be at least 1.0. Non-integer values are used
as an 'average' cadence
@param vary_chars: if true, picks a different whitespace char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts | def insert_whitespace_chars(
texts: Union[str, List[str]],
granularity: str = "all",
cadence: float = 1.0,
vary_chars: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Inserts whitespace characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert a
whitespace character. Must be at least 1.0. Non-integer values are used
as an 'average' cadence
@param vary_chars: if true, picks a different whitespace char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
whitespace_aug = a.InsertionAugmenter(
"whitespace", granularity, cadence, vary_chars
)
aug_texts = whitespace_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="insert_whitespace_chars",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
Inserts zero-width characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert
a zero-width character. Must be at least 1.0. Non-integer values are
used as an 'average' cadence
@param vary_chars: if true, picks a different zero-width char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts | def insert_zero_width_chars(
texts: Union[str, List[str]],
granularity: str = "all",
cadence: float = 1.0,
vary_chars: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Inserts zero-width characters in each input text
@param texts: a string or a list of text documents to be augmented
@param granularity: 'all' or 'word' -- if 'word', a new char is picked and
the cadence resets for each word in the text
@param cadence: how frequently (one insertion per this many characters) to insert
a zero-width character. Must be at least 1.0. Non-integer values are
used as an 'average' cadence
@param vary_chars: if true, picks a different zero-width char each time one
is used instead of just one per word/text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
zero_width_aug = a.InsertionAugmenter(
"zero_width", granularity, cadence, vary_chars
)
aug_texts = zero_width_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="insert_zero_width_chars",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
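Why zero-width insertion matters is easiest to see in a minimal sketch: the augmented string renders identically to the original but no longer string-matches it. This hypothetical helper uses only the zero-width space; the library draws from a larger set of zero-width characters when `vary_chars` is set:

```python
ZWSP = "\u200b"  # zero-width space

def insert_zero_width(text, cadence=1):
    # Append a zero-width space after every `cadence` characters.
    out = []
    for i, c in enumerate(text, start=1):
        out.append(c)
        if i % cadence == 0:
            out.append(ZWSP)
    return "".join(out)
```

`insert_zero_width("spam")` displays as "spam" but has length 8.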
Merges words in the text together
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of characters in a word to be merged
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def merge_words(
texts: Union[str, List[str]],
aug_word_p: float = 0.3,
min_char: int = 2,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Merges words in the text together
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of characters in a word to be merged
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
merge_aug = a.WordsAugmenter(
"delete", min_char, aug_word_min, aug_word_max, aug_word_p, priority_words
)
aug_texts = merge_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="merge_words",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
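Conceptually, merging deletes the space between adjacent words when both are long enough. A greedy sketch (hypothetical `merge` helper; the real augmenter samples words rather than scanning left to right):

```python
import random

def merge(text, aug_word_p=1.0, min_char=2, seed=10):
    # Glue each word onto the previous one (i.e. delete the separating
    # space) with probability `aug_word_p`, when both meet `min_char`.
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if out and len(word) >= min_char and len(out[-1]) >= min_char \
                and rng.random() < aug_word_p:
            out[-1] += word
        else:
            out.append(word)
    return " ".join(out)
```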
Reverses each word (or part of the word) in each input text and uses
bidirectional marks to render the text in its original order. It reverses
each word separately which keeps the word order even when a line wraps
@param texts: a string or a list of text documents to be augmented
@param granularity: the level at which the font is applied; this must be either
'word' or 'all'
@param split_word: if true and granularity is 'word', reverses only the second
half of each word
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts | def replace_bidirectional(
texts: Union[str, List[str]],
granularity: str = "all",
split_word: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Reverses each word (or part of the word) in each input text and uses
bidirectional marks to render the text in its original order. It reverses
each word separately which keeps the word order even when a line wraps
@param texts: a string or a list of text documents to be augmented
@param granularity: the level at which the font is applied; this must be either
'word' or 'all'
@param split_word: if true and granularity is 'word', reverses only the second
half of each word
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented texts
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
bidirectional_aug = a.BidirectionalAugmenter(granularity, split_word)
aug_texts = bidirectional_aug.augment(texts)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_bidirectional",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
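The word-level trick can be shown with two Unicode control characters: reverse each word, then wrap it in a right-to-left override so a bidi-aware renderer displays it in the original order. A minimal sketch (hypothetical `bidi` helper, word granularity only):

```python
RLO = "\u202e"  # right-to-left override
PDF = "\u202c"  # pop directional formatting

def bidi(text):
    # Each word is stored reversed but renders forward, so word order
    # survives line wrapping.
    return " ".join(RLO + w[::-1] + PDF for w in text.split())
```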
Replaces words or characters depending on the granularity with fun fonts applied
@param texts: a string or a list of text documents to be augmented
@param aug_p: probability of words to be augmented
@param aug_min: minimum # of words to be augmented
@param aug_max: maximum # of words to be augmented
@param granularity: the level at which the font is applied; this must be
either word, char, or all
@param vary_fonts: whether or not to switch font in each replacement
@param fonts_path: iopath uri where the fonts are stored
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def replace_fun_fonts(
texts: Union[str, List[str]],
aug_p: float = 0.3,
aug_min: int = 1,
aug_max: int = 10000,
granularity: str = "all",
vary_fonts: bool = False,
fonts_path: str = FUN_FONTS_PATH,
n: int = 1,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces words or characters depending on the granularity with fun fonts applied
@param texts: a string or a list of text documents to be augmented
@param aug_p: probability of words to be augmented
@param aug_min: minimum # of words to be augmented
@param aug_max: maximum # of words to be augmented
@param granularity: the level at which the font is applied; this must be
either word, char, or all
@param vary_fonts: whether or not to switch font in each replacement
@param fonts_path: iopath uri where the fonts are stored
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
fun_fonts_aug = a.FunFontsAugmenter(
granularity, aug_min, aug_max, aug_p, vary_fonts, fonts_path, priority_words
)
aug_texts = fun_fonts_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_fun_fonts",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
Replaces letters in each text with similar characters
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation
@param aug_char_min: minimum # of letters to be replaced in each word
@param aug_char_max: maximum # of letters to be replaced in each word
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping_path: iopath uri where the mapping is stored
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def replace_similar_chars(
texts: Union[str, List[str]],
aug_char_p: float = 0.3,
aug_word_p: float = 0.3,
min_char: int = 2,
aug_char_min: int = 1,
aug_char_max: int = 1000,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
mapping_path: Optional[str] = None,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces letters in each text with similar characters
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation
@param aug_char_min: minimum # of letters to be replaced in each word
@param aug_char_max: maximum # of letters to be replaced in each word
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping_path: iopath uri where the mapping is stored
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
char_aug = a.LetterReplacementAugmenter(
min_char,
aug_char_min,
aug_char_max,
aug_char_p,
aug_word_min,
aug_word_max,
aug_word_p,
mapping_path,
priority_words,
)
aug_texts = char_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_similar_chars",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
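The character-substitution idea reduces to a lookalike mapping plus a sampling probability. A toy sketch with an invented five-entry mapping (the library loads its mapping from `mapping_path` and also applies the word-level knobs):

```python
import random

SIMILAR = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}  # toy mapping

def leetify(text, aug_char_p=1.0, seed=10):
    # Swap each mapped letter for its lookalike with probability
    # `aug_char_p`; unmapped characters pass through.
    rng = random.Random(seed)
    return "".join(
        SIMILAR[c] if c in SIMILAR and rng.random() < aug_char_p else c
        for c in text
    )
```

With `aug_char_p=1.0`, `leetify("noise")` returns `"n01$3"`.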
Replaces letters in each text with similar unicodes
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation
@param aug_char_min: minimum # of letters to be replaced in each word
@param aug_char_max: maximum # of letters to be replaced in each word
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping_path: iopath uri where the mapping is stored
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def replace_similar_unicode_chars(
texts: Union[str, List[str]],
aug_char_p: float = 0.3,
aug_word_p: float = 0.3,
min_char: int = 2,
aug_char_min: int = 1,
aug_char_max: int = 1000,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
mapping_path: str = UNICODE_MAPPING_PATH,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces letters in each text with similar unicodes
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation
@param aug_char_min: minimum # of letters to be replaced in each word
@param aug_char_max: maximum # of letters to be replaced in each word
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping_path: iopath uri where the mapping is stored
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
unicode_aug = a.LetterReplacementAugmenter(
min_char,
aug_char_min,
aug_char_max,
aug_char_p,
aug_word_min,
aug_word_max,
aug_word_p,
mapping_path,
priority_words,
)
aug_texts = unicode_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_similar_unicode_chars",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
Replaces the input text entirely with some specified text
@param texts: a string or a list of text documents to be augmented
@param replace_text: specifies the text to replace the input text with,
either as a string or a mapping from input text to new text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: a string or a list of augmented text documents | def replace_text(
texts: Union[str, List[str]],
replace_text: Union[str, Dict[str, str]],
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces the input text entirely with some specified text
@param texts: a string or a list of text documents to be augmented
@param replace_text: specifies the text to replace the input text with,
either as a string or a mapping from input text to new text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: a string or a list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
text_aug = a.TextReplacementAugmenter()
aug_texts = text_aug.augment(texts, replace_text)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_text",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
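The two forms of `replace_text` behave differently per input. A sketch of the dispatch (assuming, as a simplification, that texts absent from a dict mapping pass through unchanged; this may differ from the library's exact behavior):

```python
def replace_whole(text, replacement):
    # A dict swaps only texts that appear as keys; a plain string
    # replaces every input wholesale.
    if isinstance(replacement, dict):
        return replacement.get(text, text)
    return replacement
```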
Flips words in the text upside down depending on the granularity
@param texts: a string or a list of text documents to be augmented
@param aug_p: probability of words to be augmented
@param aug_min: minimum # of words to be augmented
@param aug_max: maximum # of words to be augmented
@param granularity: the level at which the font is applied; this must be
either word, char, or all
@param n: number of augmentations to be performed for each text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def replace_upside_down(
texts: Union[str, List[str]],
aug_p: float = 0.3,
aug_min: int = 1,
aug_max: int = 1000,
granularity: str = "all",
n: int = 1,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Flips words in the text upside down depending on the granularity
@param texts: a string or a list of text documents to be augmented
@param aug_p: probability of words to be augmented
@param aug_min: minimum # of words to be augmented
@param aug_max: maximum # of words to be augmented
@param granularity: the level at which the font is applied; this must be
either word, char, or all
@param n: number of augmentations to be performed for each text
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
upside_down_aug = a.UpsideDownAugmenter(granularity, aug_min, aug_max, aug_p)
aug_texts = upside_down_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_upside_down",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
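The upside-down effect combines a flipped-glyph mapping with string reversal, so the result reads correctly when rotated 180 degrees. A sketch with a small invented mapping (the library's table covers the full alphabet):

```python
FLIP = {"a": "\u0250", "e": "\u01dd", "h": "\u0265", "l": "l", "o": "o"}

def upside_down(text):
    # Map each character to its flipped glyph, then reverse the string.
    return "".join(FLIP.get(c, c) for c in reversed(text))
```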
Replaces words in each text based on a given mapping
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping: either a dictionary representing the mapping or an iopath uri where
the mapping is stored
@param priority_words: list of target words that the augmenter should prioritize to
augment first
@param ignore_words: list of words that the augmenter should not augment
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def replace_words(
texts: Union[str, List[str]],
aug_word_p: float = 0.3,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
mapping: Optional[Union[str, Dict[str, Any]]] = None,
priority_words: Optional[List[str]] = None,
ignore_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces words in each text based on a given mapping
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping: either a dictionary representing the mapping or an iopath uri where
the mapping is stored
@param priority_words: list of target words that the augmenter should prioritize to
augment first
@param ignore_words: list of words that the augmenter should not augment
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
word_aug = a.WordReplacementAugmenter(
aug_word_min, aug_word_max, aug_word_p, mapping, priority_words, ignore_words
)
aug_texts = word_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="replace_words",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
Simulates typos in each text using misspellings, keyboard distance, and swapping.
You can specify a typo_type: charmix, which does a combination of character-level
modifications (delete, insert, substitute, & swap); keyboard, which swaps characters
with those close to each other on the QWERTY keyboard; misspelling, which replaces
words with misspellings defined in a dictionary file; or all, which will apply a
random combination of all 4
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word;
This is only applicable for keyboard distance and swapping
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation;
This is only applicable for keyboard distance and swapping
@param aug_char_min: minimum # of letters to be replaced/swapped in each word;
This is only applicable for keyboard distance and swapping
@param aug_char_max: maximum # of letters to be replaced/swapped in each word;
This is only applicable for keyboard distance and swapping
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param typo_type: the type of typos to apply to the text; valid values are
"misspelling", "keyboard", "charmix", or "all"
@param misspelling_dict_path: iopath uri where the misspelling dictionary is stored;
must be specified if typo_type is "misspelling" or "all", but otherwise can be
None
@param max_typo_length: the words in the misspelling dictionary will be checked for
matches in the mapping up to this length; i.e. if 'max_typo_length' is 3 then
every substring of 2 *and* 3 words will be checked
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def simulate_typos(
texts: Union[str, List[str]],
aug_char_p: float = 0.3,
aug_word_p: float = 0.3,
min_char: int = 2,
aug_char_min: int = 1,
aug_char_max: int = 1,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
typo_type: str = "all",
misspelling_dict_path: Optional[str] = MISSPELLING_DICTIONARY_PATH,
max_typo_length: int = 1,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Simulates typos in each text using misspellings, keyboard distance, and swapping.
You can specify a typo_type: charmix, which does a combination of character-level
modifications (delete, insert, substitute, & swap); keyboard, which swaps characters
with those close to each other on the QWERTY keyboard; misspelling, which replaces
words with misspellings defined in a dictionary file; or all, which will apply a
random combination of all 4
@param texts: a string or a list of text documents to be augmented
@param aug_char_p: probability of letters to be replaced in each word;
This is only applicable for keyboard distance and swapping
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of letters in a word for a valid augmentation;
This is only applicable for keyboard distance and swapping
@param aug_char_min: minimum # of letters to be replaced/swapped in each word;
This is only applicable for keyboard distance and swapping
@param aug_char_max: maximum # of letters to be replaced/swapped in each word;
This is only applicable for keyboard distance and swapping
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param typo_type: the type of typos to apply to the text; valid values are
"misspelling", "keyboard", "charmix", or "all"
@param misspelling_dict_path: iopath uri where the misspelling dictionary is stored;
must be specified if typo_type is "misspelling" or "all", but otherwise can be
None
@param max_typo_length: the words in the misspelling dictionary will be checked for
matches in the mapping up to this length; i.e. if 'max_typo_length' is 3 then
every substring of 2 *and* 3 words will be checked
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
typo_aug = a.TypoAugmenter(
min_char,
aug_char_min,
aug_char_max,
aug_char_p,
aug_word_min,
aug_word_max,
aug_word_p,
typo_type,
misspelling_dict_path,
max_typo_length,
priority_words,
)
aug_texts = typo_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="simulate_typos",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
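To make the keyboard-distance typo type concrete, here is a minimal, self-contained sketch of substituting a character with a QWERTY neighbor. The tiny neighbor table and selection logic are illustrative assumptions, not AugLy's actual `TypoAugmenter` internals:

```python
import random

# A small illustrative subset of QWERTY adjacency; AugLy's real keyboard
# model covers the full layout.
QWERTY_NEIGHBORS = {
    "a": "qwsz", "e": "wrd", "o": "ipl", "s": "awdz", "t": "ryg",
}

def keyboard_typo(word: str, rng: random.Random) -> str:
    # Only characters present in the table are candidates for substitution
    candidates = [i for i, c in enumerate(word) if c in QWERTY_NEIGHBORS]
    if not candidates:
        return word
    i = rng.choice(candidates)
    return word[:i] + rng.choice(QWERTY_NEIGHBORS[word[i]]) + word[i + 1:]

print(keyboard_typo("test", random.Random(0)))
```

Each call replaces exactly one eligible character, which is why `aug_char_min`/`aug_char_max` bounds exist in the real augmenter to control how many such substitutions happen per word.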
Splits words in the text into subwords
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of characters in a word for a split
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def split_words(
texts: Union[str, List[str]],
aug_word_p: float = 0.3,
min_char: int = 4,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
priority_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Splits words in the text into subwords
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param min_char: minimum # of characters in a word for a split
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
split_aug = a.WordsAugmenter(
"split", min_char, aug_word_min, aug_word_max, aug_word_p, priority_words
)
aug_texts = split_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="split_words",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
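A minimal sketch of the split operation on a single word, assuming the `min_char` gate and a random interior split point (AugLy's `WordsAugmenter` applies this across words sampled from the whole text):

```python
import random

def split_word(word: str, min_char: int, rng: random.Random) -> str:
    # Words shorter than min_char are left untouched; otherwise insert a
    # space at a random interior position. (Illustrative assumption; the
    # real WordsAugmenter logic may differ.)
    if len(word) < min_char:
        return word
    pos = rng.randint(1, len(word) - 1)
    return word[:pos] + " " + word[pos:]

print(split_word("augmentation", min_char=4, rng=random.Random(42)))
print(split_word("cat", min_char=4, rng=random.Random(42)))  # too short: unchanged
```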
Replaces words in each text based on a provided `mapping`, which can be either an
already constructed dict mapping words from one gender to another or a file path to
such a dict. Note: the logic in this augmentation was originally written by Adina
Williams and has been used in influential work, e.g. https://arxiv.org/pdf/2005.00614.pdf
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping: a mapping of words from one gender to another; a mapping can be
supplied either directly as a dict or as a filepath to a json file containing the
dict
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param ignore_words: list of words that the augmenter should not augment
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents | def swap_gendered_words(
texts: Union[str, List[str]],
aug_word_p: float = 0.3,
aug_word_min: int = 1,
aug_word_max: int = 1000,
n: int = 1,
mapping: Union[str, Dict[str, str]] = GENDERED_WORDS_MAPPING,
priority_words: Optional[List[str]] = None,
ignore_words: Optional[List[str]] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> Union[str, List[str]]:
"""
Replaces words in each text based on a provided `mapping`, which can be either an
already constructed dict mapping words from one gender to another or a file path to
such a dict. Note: the logic in this augmentation was originally written by Adina
Williams and has been used in influential work, e.g. https://arxiv.org/pdf/2005.00614.pdf
@param texts: a string or a list of text documents to be augmented
@param aug_word_p: probability of words to be augmented
@param aug_word_min: minimum # of words to be augmented
@param aug_word_max: maximum # of words to be augmented
@param n: number of augmentations to be performed for each text
@param mapping: a mapping of words from one gender to another; a mapping can be
supplied either directly as a dict or as a filepath to a json file containing the
dict
@param priority_words: list of target words that the augmenter should
prioritize to augment first
@param ignore_words: list of words that the augmenter should not augment
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest length, etc. will be appended to
the inputted list. If set to None, no metadata will be appended or returned
@returns: the list of augmented text documents
"""
func_kwargs = txtutils.get_func_kwargs(metadata, locals())
mapping = txtutils.get_gendered_words_mapping(mapping)
word_aug = a.WordReplacementAugmenter(
aug_word_min, aug_word_max, aug_word_p, mapping, priority_words, ignore_words
)
aug_texts = word_aug.augment(texts, n)
txtutils.get_metadata(
metadata=metadata,
function_name="swap_gendered_words",
aug_texts=aug_texts,
**func_kwargs,
)
return aug_texts |
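The core mapping-driven replacement can be sketched in a few lines. This is a simplified stand-in for `WordReplacementAugmenter` that ignores case handling and `priority_words`:

```python
import re

def swap_words(text: str, mapping: dict, ignore_words=None) -> str:
    # Token-level replacement sketch; case preservation and priority_words
    # are omitted for brevity.
    ignore = set(ignore_words or [])

    def repl(match):
        word = match.group(0)
        if word in ignore:
            return word
        return mapping.get(word.lower(), word)

    return re.sub(r"\w+", repl, text)

mapping = {"himself": "herself", "his": "her"}
print(swap_words("He made his bed himself", mapping))
```

`ignore_words` short-circuits before the mapping lookup, mirroring how the real augmenter exempts listed words from augmentation.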
Note: The `swap_gendered_words` augmentation, including this logic, was originally
written by Adina Williams and has been used in influential work, e.g.
https://arxiv.org/pdf/2005.00614.pdf | def get_gendered_words_mapping(mapping: Union[str, Dict[str, str]]) -> Dict[str, str]:
"""
Note: The `swap_gendered_words` augmentation, including this logic, was originally
written by Adina Williams and has been used in influential work, e.g.
https://arxiv.org/pdf/2005.00614.pdf
"""
assert isinstance(
mapping, (str, Dict)
), "Mapping must be either a dict or filepath to a mapping of gendered words"
if isinstance(mapping, Dict):
return mapping
# the assert above guarantees `mapping` is a filepath (str) at this point
with utils.pathmgr.open(mapping) as f:
return json.load(f)
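The dict-or-filepath pattern above is easy to exercise in isolation. This sketch mirrors it with plain `open()` in place of iopath's `pathmgr`:

```python
import json
import tempfile

def load_mapping(mapping):
    # Accept either an in-memory dict or a path to a JSON file containing one
    assert isinstance(mapping, (str, dict)), "mapping must be a dict or filepath"
    if isinstance(mapping, dict):
        return mapping
    with open(mapping) as f:
        return json.load(f)

# Write a tiny mapping to disk to demonstrate the filepath branch
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"his": "her"}, f)
    path = f.name

print(load_mapping({"his": "her"}))
print(load_mapping(path))
```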
Calculates how the given matching pair src_segment & dst_segment change
given a temporal crop starting at crop_start & ending at crop_end. We can
use the same logic here for multiple transforms, by setting the crop_start
& crop_end depending on the transform kwargs.
Doesn't return anything, but appends the new matching segments in the dst
video corresponding to the pair passed in to new_src_segments & new_dst_segments,
if the segment pair still matches in the dst video. If the passed in segment
pair is cropped out as a result of this temporal crop, nothing will be
appended to the lists, since the segment pair doesn't exist in the dst video. | def compute_time_crop_segments(
src_segment: Segment,
dst_segment: Segment,
speed_factor: float,
crop_start: float,
crop_end: float,
new_src_segments: List[Segment],
new_dst_segments: List[Segment],
) -> None:
"""
Calculates how the given matching pair src_segment & dst_segment change
given a temporal crop starting at crop_start & ending at crop_end. We can
use the same logic here for multiple transforms, by setting the crop_start
& crop_end depending on the transform kwargs.
Doesn't return anything, but appends the new matching segments in the dst
video corresponding to the pair passed in to new_src_segments & new_dst_segments,
if the segment pair still matches in the dst video. If the passed in segment
pair is cropped out as a result of this temporal crop, nothing will be
appended to the lists, since the segment pair doesn't exist in the dst video.
"""
# Crop segment is outside of the initial clip, so this matching segment
# pair no longer exists in the new video.
if crop_start >= dst_segment.end or crop_end <= dst_segment.start:
return
# new_start represents where the matching segment starts in the dst audio
# (if negative, then part of the matching segment is getting cut out, so
# we need to adjust both the src & dst starts).
new_start = dst_segment.start - crop_start
src_start, src_end, src_id = src_segment
if new_start < 0:
# We're cropping the beginning of the matching segment.
# Note: if the video was sped up before, we need to take this into account
# (the matching segment that is e.g. 10 seconds of dst audio might
# correspond to 5 seconds of src audio, if it was previously
# slowed down by 0.5x).
src_start = src_segment.start - new_start * speed_factor
new_start = 0
new_end = min(dst_segment.end - crop_start, crop_end - crop_start)
if crop_end < dst_segment.end:
# We're cropping the end of the matching segment.
# Note: if the video was sped up before, we need to take this into
# account (as above).
src_end = src_segment.end - (dst_segment.end - crop_end) * speed_factor
new_src_segments.append(Segment(src_start, src_end))
new_dst_segments.append(Segment(new_start, new_end)) |
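Since this helper only manipulates numbers, its behavior can be checked in isolation. The sketch below restates the function with a minimal `Segment` namedtuple (an assumption based on the 3-way unpacking `src_start, src_end, src_id = src_segment` above):

```python
from collections import namedtuple

# Assumed shape: (start, end, optional id)
Segment = namedtuple("Segment", ["start", "end", "id"], defaults=[None])

def compute_time_crop_segments(src_segment, dst_segment, speed_factor,
                               crop_start, crop_end,
                               new_src_segments, new_dst_segments):
    # Crop window misses the matching pair entirely: pair is dropped
    if crop_start >= dst_segment.end or crop_end <= dst_segment.start:
        return
    new_start = dst_segment.start - crop_start
    src_start, src_end, _src_id = src_segment
    if new_start < 0:
        # Beginning of the match is cropped out; shift the src start too,
        # scaled by any prior speed change
        src_start = src_segment.start - new_start * speed_factor
        new_start = 0
    new_end = min(dst_segment.end - crop_start, crop_end - crop_start)
    if crop_end < dst_segment.end:
        # End of the match is cropped out; pull the src end in, scaled
        src_end = src_segment.end - (dst_segment.end - crop_end) * speed_factor
    new_src_segments.append(Segment(src_start, src_end))
    new_dst_segments.append(Segment(new_start, new_end))

# Crop seconds [2, 8] out of a 10-second matching pair at normal speed:
src, dst = [], []
compute_time_crop_segments(Segment(0, 10), Segment(0, 10), 1.0, 2, 8, src, dst)
print(src[0], dst[0])  # src [2, 8] now lines up with dst [0, 6]
```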
Adds noise to a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: noise strength for specific pixel component. Default value is
25. Allowed range is [0, 100], where 0 indicates no change
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def add_noise(
video_path: str,
output_path: Optional[str] = None,
level: int = 25,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Adds noise to a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: noise strength for specific pixel component. Default value is
25. Allowed range is [0, 100], where 0 indicates no change
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
noise_aug = af.VideoAugmenterByNoise(level)
noise_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="add_noise", **func_kwargs
)
return output_path or video_path |
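Under the hood these augmenters shell out to ffmpeg. A hedged sketch of what a noise invocation could look like; the actual filter string used by `VideoAugmenterByNoise` may differ:

```python
def build_noise_command(video_path, output_path, level=25):
    # ffmpeg's `noise` filter: alls sets strength for all components,
    # allf=t enables temporal (per-frame) noise. Audio is passed through.
    return [
        "ffmpeg", "-y", "-i", video_path,
        "-vf", f"noise=alls={level}:allf=t",
        "-c:a", "copy", output_path,
    ]

print(" ".join(build_noise_command("in.mp4", "out.mp4", level=25)))
```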
Apply a user-defined lambda on a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param aug_function: the augmentation function to be applied onto the video
(it should take a video path and an output path as input and write the
augmented video to the output path; nothing needs to be returned)
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param **kwargs: the input attributes to be passed into `aug_function`
@returns: the path to the augmented video | def apply_lambda(
video_path: str,
output_path: Optional[str] = None,
aug_function: Callable[..., Any] = helpers.identity_function,
metadata: Optional[List[Dict[str, Any]]] = None,
**kwargs,
) -> str:
"""
Apply a user-defined lambda on a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param aug_function: the augmentation function to be applied onto the video
(it should take a video path and an output path as input and write the
augmented video to the output path; nothing needs to be returned)
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param **kwargs: the input attributes to be passed into `aug_function`
@returns: the path to the augmented video
"""
assert callable(aug_function), (
repr(type(aug_function).__name__) + " object is not callable"
)
func_kwargs = helpers.get_func_kwargs(
metadata, locals(), video_path, aug_function=aug_function.__name__
)
aug_function(video_path, output_path or video_path, **kwargs)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="apply_lambda", **func_kwargs
)
return output_path or video_path |
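A sketch of the `aug_function` contract: take an input path and an output path, write the result, and return nothing. Here `shutil.copy` stands in for a real augmentation (a hypothetical identity transform, like `helpers.identity_function`):

```python
import os
import shutil
import tempfile

def my_identity_aug(video_path: str, output_path: str, **kwargs) -> None:
    # Writes the (un)augmented video to output_path; returns nothing
    if video_path != output_path:
        shutil.copy(video_path, output_path)

# Demonstrate with a throwaway file standing in for a video
src = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
src.write(b"fake video bytes")
src.close()
dst = src.name + ".out.mp4"
my_identity_aug(src.name, dst)
print(os.path.getsize(dst))
```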
Swaps the video audio for the audio passed in provided an offset
@param video_path: the path to the video to be augmented
@param audio_path: the iopath uri to the audio you'd like to swap with the
video's audio
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param offset: starting point in seconds such that an audio clip of offset to
offset + video_duration is used in the audio swap. Default value is zero
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def audio_swap(
video_path: str,
audio_path: str,
output_path: Optional[str] = None,
offset: float = 0.0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Swaps the video audio for the audio passed in provided an offset
@param video_path: the path to the video to be augmented
@param audio_path: the iopath uri to the audio you'd like to swap with the
video's audio
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param offset: starting point in seconds such that an audio clip of offset to
offset + video_duration is used in the audio swap. Default value is zero
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
audio_swap_aug = af.VideoAugmenterByAudioSwap(audio_path, offset)
audio_swap_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="audio_swap", **func_kwargs
)
return output_path or video_path |
Augments the audio track of the input video using a given AugLy audio augmentation
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param audio_aug_function: the augmentation function to be applied onto the video's
audio track. Should have the standard API of an AugLy audio augmentation, i.e.
expect input audio as a numpy array or path & output path as input, and output
the augmented audio to the output path
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param audio_aug_kwargs: the input attributes to be passed into `audio_aug_function`
@returns: the path to the augmented video | def augment_audio(
video_path: str,
output_path: Optional[str] = None,
audio_aug_function: Callable[..., Tuple[np.ndarray, int]] = audaugs.apply_lambda,
metadata: Optional[List[Dict[str, Any]]] = None,
**audio_aug_kwargs,
) -> str:
"""
Augments the audio track of the input video using a given AugLy audio augmentation
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param audio_aug_function: the augmentation function to be applied onto the video's
audio track. Should have the standard API of an AugLy audio augmentation, i.e.
expect input audio as a numpy array or path & output path as input, and output
the augmented audio to the output path
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@param audio_aug_kwargs: the input attributes to be passed into `audio_aug_function`
@returns: the path to the augmented video
"""
assert callable(audio_aug_function), (
repr(type(audio_aug_function).__name__) + " object is not callable"
)
func_kwargs = helpers.get_func_kwargs(
metadata, locals(), video_path, audio_aug_function=audio_aug_function
)
try:
func_kwargs["audio_aug_function"] = audio_aug_function.__name__
except AttributeError:
func_kwargs["audio_aug_function"] = type(audio_aug_function).__name__
audio_metadata = []
with tempfile.NamedTemporaryFile(suffix=".wav") as tmpfile:
helpers.extract_audio_to_file(video_path, tmpfile.name)
audio, sr = audutils.validate_and_load_audio(tmpfile.name)
aug_audio, aug_sr = audio_aug_function(
audio, sample_rate=sr, metadata=audio_metadata, **audio_aug_kwargs
)
audutils.ret_and_save_audio(aug_audio, tmpfile.name, aug_sr)
audio_swap(video_path, tmpfile.name, output_path=output_path or video_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata,
audio_metadata=audio_metadata,
function_name="augment_audio",
**func_kwargs,
)
return output_path or video_path |
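The `audio_aug_function` contract mirrors AugLy's audio API: accept the audio samples, a `sample_rate`, and a `metadata` list, and return `(augmented_audio, sample_rate)`. A toy gain function, with plain lists standing in for numpy arrays:

```python
def volume_boost(audio, sample_rate, metadata=None, gain=2.0, **kwargs):
    # Returns (augmented_audio, sample_rate); a real implementation would
    # operate on a numpy array and might resample (changing the rate)
    if metadata is not None:
        metadata.append({"name": "volume_boost", "gain": gain})
    return [s * gain for s in audio], sample_rate

meta = []
aug, sr = volume_boost([1.0, -2.0, 3.0], sample_rate=44100, metadata=meta)
print(aug, sr)
```

`augment_audio` would call such a function on audio extracted to a temp wav file, then swap the augmented track back into the video via `audio_swap`.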
Overlays a video onto another video at position (width * x_pos, height * y_pos)
at a lower opacity
@param video_path: the path to the video to be augmented
@param overlay_path: the path to the video that will be overlaid onto the
background video
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param opacity: the lower the opacity, the more transparent the overlaid video
@param overlay_size: size of the overlaid video is overlay_size * height of
the background video
@param x_pos: position of overlaid video relative to the background video width
@param y_pos: position of overlaid video relative to the background video height
@param use_second_audio: use the audio of the overlaid video rather than the audio
of the background video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def blend_videos(
video_path: str,
overlay_path: str,
output_path: Optional[str] = None,
opacity: float = 0.5,
overlay_size: float = 1.0,
x_pos: float = 0.0,
y_pos: float = 0.0,
use_second_audio: bool = True,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Overlays a video onto another video at position (width * x_pos, height * y_pos)
at a lower opacity
@param video_path: the path to the video to be augmented
@param overlay_path: the path to the video that will be overlaid onto the
background video
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param opacity: the lower the opacity, the more transparent the overlaid video
@param overlay_size: size of the overlaid video is overlay_size * height of
the background video
@param x_pos: position of overlaid video relative to the background video width
@param y_pos: position of overlaid video relative to the background video height
@param use_second_audio: use the audio of the overlaid video rather than the audio
of the background video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
blend_func = functools.partial(
imaugs.overlay_image,
opacity=opacity,
overlay_size=overlay_size,
x_pos=x_pos,
y_pos=y_pos,
)
vdutils.apply_to_frames(
blend_func, video_path, overlay_path, output_path, use_second_audio
)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="blend_videos", **func_kwargs
)
return output_path or video_path |
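The placement math in the docstring is simple to verify: the overlay's top-left corner lands at `(width * x_pos, height * y_pos)` and the overlay is scaled to `overlay_size * height`. A quick check (truncating to integer pixels is an assumption):

```python
def overlay_position(bg_width, bg_height, x_pos, y_pos):
    # Top-left corner of the overlaid video in background-pixel coordinates
    return int(bg_width * x_pos), int(bg_height * y_pos)

def overlay_height(bg_height, overlay_size):
    # Overlay height as a fraction of the background height
    return int(bg_height * overlay_size)

print(overlay_position(1920, 1080, 0.25, 0.5))
print(overlay_height(1080, 0.5))
```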
Blurs a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param sigma: horizontal sigma, standard deviation of Gaussian blur
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def blur(
video_path: str,
output_path: Optional[str] = None,
sigma: float = 1,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Blurs a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param sigma: horizontal sigma, standard deviation of Gaussian blur
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
blur_aug = af.VideoAugmenterByBlur(sigma)
blur_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="blur", **func_kwargs)
return output_path or video_path |
Brightens or darkens a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: the value must be a float value in range -1.0 to 1.0, where a
negative value darkens and positive brightens
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def brightness(
video_path: str,
output_path: Optional[str] = None,
level: float = 0.15,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Brightens or darkens a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: the value must be a float value in range -1.0 to 1.0, where a
negative value darkens and positive brightens
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
brightness_aug = af.VideoAugmenterByBrightness(level)
brightness_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="brightness", **func_kwargs
)
return output_path or video_path |
Changes the sample aspect ratio attribute of the video, and resizes the
video to reflect the new aspect ratio
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param ratio: aspect ratio of the new video, either as a float i.e. width/height,
or as a string representing the ratio in the form "num:denom"
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def change_aspect_ratio(
video_path: str,
output_path: Optional[str] = None,
ratio: Union[float, str] = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Changes the sample aspect ratio attribute of the video, and resizes the
video to reflect the new aspect ratio
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param ratio: aspect ratio of the new video, either as a float i.e. width/height,
or as a string representing the ratio in the form "num:denom"
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
aspect_ratio_aug = af.VideoAugmenterByAspectRatio(ratio)
aspect_ratio_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="change_aspect_ratio", **func_kwargs
)
return output_path or video_path |
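The `ratio` argument accepts two forms; normalizing both to a single value can be sketched as follows (AugLy's actual parsing inside `VideoAugmenterByAspectRatio` may differ):

```python
from fractions import Fraction

def parse_ratio(ratio):
    # Accepts either a float (width/height) or a "num:denom" string,
    # as the docstring above describes
    if isinstance(ratio, str):
        num, denom = ratio.split(":")
        return Fraction(int(num), int(denom))
    return Fraction(ratio).limit_denominator(1000)

print(parse_ratio("16:9"), float(parse_ratio("16:9")))
print(parse_ratio(1.0))
```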
Changes the speed of the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param factor: the factor by which to alter the speed of the video. A factor
less than one will slow down the video, a factor equal to one won't alter
the video, and a factor greater than one will speed up the video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def change_video_speed(
video_path: str,
output_path: Optional[str] = None,
factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Changes the speed of the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param factor: the factor by which to alter the speed of the video. A factor
less than one will slow down the video, a factor equal to one won't alter
the video, and a factor greater than one will speed up the video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
speed_aug = af.VideoAugmenterBySpeed(factor)
speed_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="change_video_speed", **func_kwargs
)
return output_path or video_path |
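The duration relationship implied by `factor` is worth making explicit: speeding up divides the duration, slowing down multiplies it.

```python
def new_duration(duration, factor):
    # factor 2.0 halves the duration, 0.5 doubles it, 1.0 leaves it unchanged
    assert factor > 0, "speed factor must be positive"
    return duration / factor

print(new_duration(10.0, 2.0))
print(new_duration(10.0, 0.5))
```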
Color jitters the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param brightness_factor: set the brightness expression. The value must be a
float value in range -1.0 to 1.0. The default value is 0
@param contrast_factor: set the contrast expression. The value must be a float
value in range -1000.0 to 1000.0. The default value is 1
@param saturation_factor: set the saturation expression. The value must be a float
in range 0.0 to 3.0. The default value is 1
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def color_jitter(
video_path: str,
output_path: Optional[str] = None,
brightness_factor: float = 0,
contrast_factor: float = 1.0,
saturation_factor: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Color jitters the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param brightness_factor: set the brightness expression. The value must be a
float value in range -1.0 to 1.0. The default value is 0
@param contrast_factor: set the contrast expression. The value must be a float
value in range -1000.0 to 1000.0. The default value is 1
@param saturation_factor: set the saturation expression. The value must be a float
in range 0.0 to 3.0. The default value is 1
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
color_jitter_aug = af.VideoAugmenterByColorJitter(
brightness_level=brightness_factor,
contrast_level=contrast_factor,
saturation_level=saturation_factor,
)
color_jitter_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="color_jitter", **func_kwargs
)
return output_path or video_path |
Concatenates videos together. Resizes all other videos to the size of the
`source` video (video_paths[src_video_path_index]), and modifies the sample
aspect ratios to match (ffmpeg will fail to concat if SARs don't match)
@param video_paths: a list of paths to all the videos to be concatenated (in order)
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param src_video_path_index: for metadata purposes, this indicates which video in
the list `video_paths` should be considered the `source` or original video
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def concat(
video_paths: List[str],
output_path: Optional[str] = None,
src_video_path_index: int = 0,
transition: Optional[af.TransitionConfig] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Concatenates videos together. Resizes all other videos to the size of the
`source` video (video_paths[src_video_path_index]), and modifies the sample
aspect ratios to match (ffmpeg will fail to concat if SARs don't match)
@param video_paths: a list of paths to all the videos to be concatenated (in order)
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param src_video_path_index: for metadata purposes, this indicates which video in
the list `video_paths` should be considered the `source` or original video
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(
metadata, locals(), video_paths[src_video_path_index]
)
concat_aug = af.VideoAugmenterByConcat(
video_paths,
src_video_path_index,
transition,
)
concat_aug.add_augmenter(video_paths[src_video_path_index], output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata,
function_name="concat",
video_path=video_paths[src_video_path_index],
**func_kwargs,
)
return output_path or video_paths[src_video_path_index] |
Alters the contrast of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: the value must be a float value in range -1000.0 to 1000.0,
where a negative value removes contrast and a positive value adds contrast
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def contrast(
video_path: str,
output_path: Optional[str] = None,
level: float = 1.0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Alters the contrast of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param level: the value must be a float value in range -1000.0 to 1000.0,
where a negative value removes contrast and a positive value adds contrast
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
contrast_aug = af.VideoAugmenterByContrast(level)
contrast_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="contrast", **func_kwargs)
return output_path or video_path |
Crops the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param left: left positioning of the crop; between 0 and 1, relative to
the video width
@param top: top positioning of the crop; between 0 and 1, relative to
the video height
@param right: right positioning of the crop; between 0 and 1, relative to
the video width
@param bottom: bottom positioning of the crop; between 0 and 1, relative to
the video height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def crop(
video_path: str,
output_path: Optional[str] = None,
left: float = 0.25,
top: float = 0.25,
right: float = 0.75,
bottom: float = 0.75,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Crops the video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param left: left positioning of the crop; between 0 and 1, relative to
the video width
@param top: top positioning of the crop; between 0 and 1, relative to
the video height
@param right: right positioning of the crop; between 0 and 1, relative to
the video width
@param bottom: bottom positioning of the crop; between 0 and 1, relative to
the video height
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
crop_aug = af.VideoAugmenterByCrop(left, top, right, bottom)
crop_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="crop", **func_kwargs)
return output_path or video_path |
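The fractional crop coordinates above are multiplied against the video dimensions to get pixel values. A minimal sketch of that mapping (a hypothetical helper, not part of augly; the actual cropping is performed by ffmpeg):

```python
def crop_box_pixels(width, height, left=0.25, top=0.25, right=0.75, bottom=0.75):
    """Map fractional crop coordinates to an (x, y, w, h) pixel box."""
    x1, y1 = int(left * width), int(top * height)
    x2, y2 = int(right * width), int(bottom * height)
    return x1, y1, x2 - x1, y2 - y1

# With the defaults, a 1280x720 video keeps its center region:
# crop_box_pixels(1280, 720) -> (320, 180, 640, 360)
```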
Alters the encoding quality of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param quality: CRF scale is 0-51, where 0 is lossless, 23 is the default,
and 51 is the worst quality possible. A lower value generally leads to higher
quality, and a subjectively sane range is 17-28
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def encoding_quality(
video_path: str,
output_path: Optional[str] = None,
quality: int = 23,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Alters the encoding quality of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param quality: CRF scale is 0-51, where 0 is lossless, 23 is the default,
and 51 is the worst quality possible. A lower value generally leads to higher
quality, and a subjectively sane range is 17-28
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
encoding_aug = af.VideoAugmenterByQuality(quality)
encoding_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="encoding_quality", **func_kwargs
)
return output_path or video_path |
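As a rough intuition for the CRF scale: the ffmpeg H.264 encoding guide's rule of thumb is that increasing CRF by 6 roughly halves the output bitrate. Sketched below as an approximation only (not an exact model, and not something augly computes):

```python
def approx_relative_bitrate(crf, reference_crf=23):
    """Approximate bitrate relative to the reference CRF, using the
    rule of thumb that +6 CRF roughly halves the bitrate."""
    return 2 ** ((reference_crf - crf) / 6)

# approx_relative_bitrate(17) -> 2.0 (about twice the bitrate of CRF 23)
# approx_relative_bitrate(29) -> 0.5 (about half)
```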
Alters the FPS of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param fps: the desired output frame rate. Note that an FPS value greater than
the original FPS of the video will result in an unaltered video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def fps(
video_path: str,
output_path: Optional[str] = None,
fps: int = 15,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Alters the FPS of a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param fps: the desired output frame rate. Note that an FPS value greater than
the original FPS of the video will result in an unaltered video
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
fps_aug = af.VideoAugmenterByFPSChange(fps)
fps_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="fps", **func_kwargs)
return output_path or video_path |
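Since requesting a rate above the original leaves the video unaltered (per the docstring), the effective output rate is the minimum of the two. A trivial sketch of that behavior:

```python
def effective_fps(requested_fps, original_fps):
    # Requesting more frames per second than the source has cannot
    # create new frames, so the output rate is capped at the original.
    return min(requested_fps, original_fps)

# effective_fps(15, 30) -> 15   (downsampled)
# effective_fps(60, 30) -> 30   (unchanged)
```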
Changes a video to be grayscale
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def grayscale(
video_path: str,
output_path: Optional[str] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Changes a video to be grayscale
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
grayscale_aug = af.VideoAugmenterByGrayscale()
grayscale_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(
metadata=metadata, function_name="grayscale", **func_kwargs
)
return output_path or video_path |
Horizontally flips a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def hflip(
video_path: str,
output_path: Optional[str] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Horizontally flips a video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
hflip_aug = af.VideoAugmenterByHFlip()
hflip_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="hflip", **func_kwargs)
return output_path or video_path |
Horizontally stacks two videos
@param video_path: the path to the video that will be stacked to the left
@param second_video_path: the path to the video that will be stacked to the right
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param use_second_audio: if set to True, the audio of the right video will be
used instead of the left's
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def hstack(
video_path: str,
second_video_path: str,
output_path: Optional[str] = None,
use_second_audio: bool = False,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Horizontally stacks two videos
@param video_path: the path to the video that will be stacked to the left
@param second_video_path: the path to the video that will be stacked to the right
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param use_second_audio: if set to True, the audio of the right video will be
used instead of the left's
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
hstack_aug = af.VideoAugmenterByStack(second_video_path, use_second_audio, "hstack")
hstack_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="hstack", **func_kwargs)
return output_path or video_path |
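ffmpeg's `hstack` filter requires its inputs to share a height, and the output width is the sum of the input widths. The underlying geometry, sketched as an assumption (the augmenter is assumed to reconcile mismatched sizes before stacking):

```python
def hstacked_size(left, right):
    """Output frame size of a horizontal stack of two (width, height) frames.
    ffmpeg's hstack filter requires matching heights."""
    assert left[1] == right[1], "hstack inputs must share a height"
    return (left[0] + right[0], left[1])

# hstacked_size((1280, 720), (640, 720)) -> (1920, 720)
```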
Puts the video in the middle of the background video
(at offset_factor * background.duration)
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param background_path: the path to the video in which to insert the main
video. If set to None, the main video will play in the middle of a silent
video with black frames
@param offset_factor: the point in the background video at which the main video
starts to play (this factor is multiplied by the background video duration to
determine the start point)
@param source_percentage: when set, source_percentage of the duration
of the final video (background + source) will be taken up by the
source video. Randomly crops the background video to the correct duration.
If the background video isn't long enough to get the desired source_percentage,
it will be looped.
@param seed: if provided, this will set the random seed to ensure consistency
between runs
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def insert_in_background(
video_path: str,
output_path: Optional[str] = None,
background_path: Optional[str] = None,
offset_factor: float = 0.0,
source_percentage: Optional[float] = None,
seed: Optional[int] = None,
transition: Optional[af.TransitionConfig] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Puts the video in the middle of the background video
(at offset_factor * background.duration)
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param background_path: the path to the video in which to insert the main
video. If set to None, the main video will play in the middle of a silent
video with black frames
@param offset_factor: the point in the background video at which the main video
starts to play (this factor is multiplied by the background video duration to
determine the start point)
@param source_percentage: when set, source_percentage of the duration
of the final video (background + source) will be taken up by the
source video. Randomly crops the background video to the correct duration.
If the background video isn't long enough to get the desired source_percentage,
it will be looped.
@param seed: if provided, this will set the random seed to ensure consistency
between runs
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
assert (
0.0 <= offset_factor <= 1.0
), "Offset factor must be a value in the range [0.0, 1.0]"
if source_percentage is not None:
assert (
0.0 <= source_percentage <= 1.0
), "Source percentage must be a value in the range [0.0, 1.0]"
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
local_path = utils.pathmgr.get_local_path(video_path)
utils.validate_video_path(local_path)
video_info = helpers.get_video_info(local_path)
video_duration = float(video_info["duration"])
width, height = video_info["width"], video_info["height"]
rng = np.random.RandomState(seed) if seed is not None else np.random
video_paths = []
with tempfile.TemporaryDirectory() as tmpdir:
tmp_video_path = os.path.join(tmpdir, "in.mp4")
resized_bg_path = os.path.join(tmpdir, "bg.mp4")
helpers.add_silent_audio(video_path, tmp_video_path)
if background_path is None:
helpers.create_color_video(resized_bg_path, video_duration, height, width)
else:
resize(background_path, resized_bg_path, height, width)
bg_video_info = helpers.get_video_info(resized_bg_path)
bg_video_duration = float(bg_video_info["duration"])
bg_start = 0
bg_end = bg_video_duration
desired_bg_duration = bg_video_duration
if source_percentage is not None:
# desired relationship: percent * (bg_len + s_len) = s_len
# solve for bg_len -> bg_len = s_len / percent - s_len
desired_bg_duration = video_duration / source_percentage - video_duration
# if background vid isn't long enough, loop
num_loops_needed = math.ceil(desired_bg_duration / bg_video_duration)
if num_loops_needed > 1:
loop(resized_bg_path, num_loops=num_loops_needed)
bg_video_duration *= num_loops_needed
bg_start = rng.uniform(0, bg_video_duration - desired_bg_duration)
bg_end = bg_start + desired_bg_duration
offset = desired_bg_duration * offset_factor
transition_before = False
if offset > 0:
before_path = os.path.join(tmpdir, "before.mp4")
trim(resized_bg_path, before_path, start=bg_start, end=bg_start + offset)
video_paths.append(before_path)
src_video_path_index = 1
transition_before = True
else:
src_video_path_index = 0
video_paths.append(tmp_video_path)
transition_after = False
if bg_start + offset < bg_end:
after_path = os.path.join(tmpdir, "after.mp4")
trim(resized_bg_path, after_path, start=bg_start + offset, end=bg_end)
video_paths.append(after_path)
transition_after = True
concat(
video_paths,
output_path or video_path,
src_video_path_index,
transition=transition,
)
if metadata is not None:
helpers.get_metadata(
metadata=metadata,
function_name="insert_in_background",
background_video_duration=desired_bg_duration,
transition_before=transition_before,
transition_after=transition_after,
**func_kwargs,
)
return output_path or video_path |
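The `source_percentage` arithmetic in the body above (percent * (bg_len + s_len) = s_len, solved for bg_len) can be isolated as a small helper for clarity (a hypothetical standalone version mirroring the inline computation):

```python
def desired_background_duration(source_duration, source_percentage):
    """Background duration needed so the source occupies source_percentage
    of the final (background + source) video.

    Derived from: percentage * (bg_len + src_len) = src_len
    =>            bg_len = src_len / percentage - src_len
    """
    return source_duration / source_percentage - source_duration

# A 10s source at source_percentage=0.25 needs 30s of background:
# the 40s result is then 25% source, as requested.
```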
Places the video (and the additional videos) in the middle of the background video.
@param video_path: the path of the main video to be augmented.
@param output_path: the path in which the output video will be stored.
@param background_path: the path of the video in which to insert the main
(and additional) video.
@param src_ids: the list of identifiers for the main video and additional videos.
@param additional_video_paths: list of additional video paths to be
inserted alongside the main video; one clip from each of the input
videos will be inserted in order.
@param seed: if provided, this will set the random seed to ensure consistency
between runs.
@param min_source_segment_duration: minimum duration in seconds of the source
segments that will be inserted in the background video.
@param max_source_segment_duration: maximum duration in seconds of the source
segments that will be inserted in the background video.
@param min_background_segment_duration: minimum duration in seconds of a background segment.
@param min_result_video_duration: minimum duration in seconds of the output video.
@param max_result_video_duration: maximum duration in seconds of the output video.
@param transition: optional transition configuration to apply between the clips.
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def insert_in_background_multiple(
video_path: str,
output_path: str,
background_path: str,
src_ids: List[str],
additional_video_paths: List[str],
seed: Optional[int] = None,
min_source_segment_duration: float = 5.0,
max_source_segment_duration: float = 20.0,
min_background_segment_duration: float = 2.0,
min_result_video_duration: float = 30.0,
max_result_video_duration: float = 60.0,
transition: Optional[af.TransitionConfig] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Places the video (and the additional videos) in the middle of the background video.
@param video_path: the path of the main video to be augmented.
@param output_path: the path in which the output video will be stored.
@param background_path: the path of the video in which to insert the main
(and additional) video.
@param src_ids: the list of identifiers for the main video and additional videos.
@param additional_video_paths: list of additional video paths to be
inserted alongside the main video; one clip from each of the input
videos will be inserted in order.
@param seed: if provided, this will set the random seed to ensure consistency
between runs.
@param min_source_segment_duration: minimum duration in seconds of the source
segments that will be inserted in the background video.
@param max_source_segment_duration: maximum duration in seconds of the source
segments that will be inserted in the background video.
@param min_background_segment_duration: minimum duration in seconds of a background segment.
@param min_result_video_duration: minimum duration in seconds of the output video.
@param max_result_video_duration: maximum duration in seconds of the output video.
@param transition: optional transition configuration to apply between the clips.
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
if additional_video_paths:
assert len(additional_video_paths) + 1 == len(
src_ids
), "src_ids need to be specified for the main video and all additional videos."
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
rng = np.random.RandomState(seed) if seed is not None else np.random
local_path = utils.pathmgr.get_local_path(video_path)
additional_local_paths = (
[utils.pathmgr.get_local_path(p) for p in additional_video_paths]
if additional_video_paths
else []
)
bkg_local_path = utils.pathmgr.get_local_path(background_path)
src_paths = [
local_path,
] + additional_local_paths
src_video_durations = np.array(
[float(helpers.get_video_info(v)["duration"]) for v in src_paths]
)
bkg_duration = float(helpers.get_video_info(bkg_local_path)["duration"])
src_segment_durations = (
rng.random_sample(len(src_video_durations))
* (max_source_segment_duration - min_source_segment_duration)
+ min_source_segment_duration
)
src_segment_durations = np.minimum(src_segment_durations, src_video_durations)
src_segment_starts = rng.random(len(src_video_durations)) * (
src_video_durations - src_segment_durations
)
src_segment_ends = src_segment_starts + src_segment_durations
sum_src_duration = np.sum(src_segment_durations)
required_result_duration = (
len(src_segment_durations) + 1
) * min_background_segment_duration + sum_src_duration
if required_result_duration > max_result_video_duration:
raise ValueError(
"Failed to generate config for source segments in insert_in_background_multiple."
)
duration_budget = max_result_video_duration - required_result_duration
bkg_budget = rng.random() * duration_budget
overall_bkg_needed_duration = (
len(src_segment_durations) + 1
) * min_background_segment_duration + bkg_budget
num_loops_needed = 0
if overall_bkg_needed_duration > bkg_duration:
num_loops_needed = math.ceil(overall_bkg_needed_duration / bkg_duration)
# Now sample insertion points by picking len(src_segment_durations) points in the interval [0, bkg_budget)
# Then sort the segments and add spacing for the minimum background segment duration.
bkg_insertion_points = (
np.sort(rng.random(len(src_segment_durations)) * bkg_budget)
+ np.arange(len(src_segment_durations)) * min_background_segment_duration
)
last_bkg_point = overall_bkg_needed_duration
dst_starts = bkg_insertion_points + np.concatenate(
(
[
0.0,
],
np.cumsum(src_segment_durations)[:-1],
)
)
# Start applying transforms.
with tempfile.TemporaryDirectory() as tmpdir:
# First, loop through background video if needed.
if num_loops_needed > 0:
buf = os.path.join(tmpdir, "bkg_loop.mp4")
loop(bkg_local_path, buf, num_loops=num_loops_needed)
bkg_path = buf
else:
bkg_path = bkg_local_path
bkg_videos = []
# Sample background segments.
prev = 0.0
for i, pt in enumerate(bkg_insertion_points):
out_path = os.path.join(tmpdir, f"bkg_{i}.mp4")
trim(bkg_path, out_path, start=prev, end=pt)
prev = pt
bkg_videos.append(out_path)
# last background segment
last_bkg_path = os.path.join(tmpdir, "bkg_last.mp4")
trim(bkg_path, last_bkg_path, start=prev, end=last_bkg_point)
src_videos = []
# Sample source segments.
for i, seg in enumerate(zip(src_segment_starts, src_segment_ends)):
out_path = os.path.join(tmpdir, f"src_{i}.mp4")
trim(src_paths[i], out_path, start=seg[0], end=seg[1])
src_videos.append(out_path)
all_videos = [v for pair in zip(bkg_videos, src_videos) for v in pair] + [
last_bkg_path,
]
concat(all_videos, output_path, 1, transition=transition)
if metadata is not None:
helpers.get_metadata(
metadata=metadata,
function_name="insert_in_background_multiple",
src_segment_starts=src_segment_starts,
src_segment_ends=src_segment_ends,
bkg_insertion_points=bkg_insertion_points,
**func_kwargs,
)
return output_path |
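The feasibility check above requires room for every source segment plus a minimum-length background segment before, between, and after them: n source segments imply n + 1 background segments. The constraint as a standalone sketch (a hypothetical helper mirroring the inline math):

```python
def required_result_duration(src_segment_durations, min_background_segment_duration):
    """Minimum output duration: one background segment around each of the
    n source segments (n + 1 background segments total) plus the sources."""
    n = len(src_segment_durations)
    return (n + 1) * min_background_segment_duration + sum(src_segment_durations)

# Two segments of 5s and 10s with 2s minimum background segments:
# required_result_duration([5, 10], 2) -> 21 (3 * 2 + 15)
```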
Replaces the beginning and end of the source video with the background video, keeping the
total duration of the output video equal to the original duration of the source video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored. If not
passed in, the original video file will be overwritten
@param background_path: the path to the video in which to insert the main video.
If set to None, the main video will play in the middle of a silent video with
black frames
@param source_offset: the starting point where the background video transitions to
the source video. Prior to this point, the source video is replaced with the
background video. A value of 0 means all background is at the beginning. A value
of 1 means all background is at the end of the video
@param background_offset: the starting point from which the background video starts
to play, as a proportion of the background video duration (i.e. this factor is
multiplied by the background video duration to determine the start point)
@param source_percentage: the percentage of the source video that remains unreplaced
by the background video. The source percentage plus source offset should be less
than 1. If it is greater, the output video duration will be longer than the source.
If the background video is not long enough to get the desired source percentage,
it will be looped
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video | def replace_with_background(
video_path: str,
output_path: Optional[str] = None,
background_path: Optional[str] = None,
source_offset: float = 0.0,
background_offset: float = 0.0,
source_percentage: float = 0.5,
transition: Optional[af.TransitionConfig] = None,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Replaces the beginning and end of the source video with the background video, keeping the
total duration of the output video equal to the original duration of the source video
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored. If not
passed in, the original video file will be overwritten
@param background_path: the path to the video in which to insert the main video.
If set to None, the main video will play in the middle of a silent video with
black frames
@param source_offset: the starting point where the background video transitions to
the source video. Prior to this point, the source video is replaced with the
background video. A value of 0 means all background is at the beginning. A value
of 1 means all background is at the end of the video
@param background_offset: the starting point from which the background video starts
to play, as a proportion of the background video duration (i.e. this factor is
multiplied by the background video duration to determine the start point)
@param source_percentage: the percentage of the source video that remains unreplaced
by the background video. The source percentage plus source offset should be less
than 1. If it is greater, the output video duration will be longer than the source.
If the background video is not long enough to get the desired source percentage,
it will be looped
@param transition: optional transition configuration to apply between the clips
@param metadata: if set to be a list, metadata about the function execution including
its name, the source & dest duration, fps, etc. will be appended to the inputted
list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
assert (
0.0 <= source_offset <= 1.0
), "Source offset factor must be a value in the range [0.0, 1.0]"
assert (
0.0 <= background_offset <= 1.0
), "Background offset factor must be a value in the range [0.0, 1.0]"
assert (
0.0 <= source_percentage <= 1.0
), "Source percentage must be a value in the range [0.0, 1.0]"
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
local_path = utils.pathmgr.get_local_path(video_path)
utils.validate_video_path(local_path)
video_info = helpers.get_video_info(video_path)
video_duration = float(video_info["duration"])
width, height = video_info["width"], video_info["height"]
video_paths = []
with tempfile.TemporaryDirectory() as tmpdir:
tmp_video_path = os.path.join(tmpdir, "in.mp4")
resized_bg_path = os.path.join(tmpdir, "bg.mp4")
# create bg video
if background_path is None:
helpers.create_color_video(resized_bg_path, video_duration, height, width)
else:
resize(background_path, resized_bg_path, height, width)
helpers.add_silent_audio(resized_bg_path)
bg_video_info = helpers.get_video_info(resized_bg_path)
bg_video_duration = float(bg_video_info["duration"])
src_video_path_index = 1
final_bg_len = video_duration * (1 - source_percentage)
# if desired bg video too short, loop bg video
num_loops_needed = math.ceil(final_bg_len / bg_video_duration)
if num_loops_needed > 1:
loop(resized_bg_path, num_loops=num_loops_needed)
first_bg_segment_len = source_offset * final_bg_len
last_bg_segment_len = final_bg_len - first_bg_segment_len
# calculate bg start and end times of bg in output video
bg_start = background_offset * bg_video_duration
src_start = first_bg_segment_len
src_length = source_percentage * video_duration
src_end = src_start + src_length
# add pre src background segment
if source_offset > 0:
before_path = os.path.join(tmpdir, "before.mp4")
trim(
resized_bg_path,
before_path,
start=bg_start,
end=bg_start + first_bg_segment_len,
)
video_paths.append(before_path)
src_video_path_index = 1
else:
src_video_path_index = 0
# trim source to length satisfying source_percentage
helpers.add_silent_audio(video_path, tmp_video_path)
trimmed_src_path = os.path.join(tmpdir, "trim_src.mp4")
trim(tmp_video_path, trimmed_src_path, start=src_start, end=src_end)
video_paths.append(trimmed_src_path)
# add post src background segment
if source_offset < 1:
after_path = os.path.join(tmpdir, "after.mp4")
trim(
resized_bg_path,
after_path,
start=bg_start + src_start,
end=bg_start + src_start + last_bg_segment_len,
)
video_paths.append(after_path)
concat(
video_paths,
output_path or video_path,
src_video_path_index,
transition=transition,
)
if metadata is not None:
helpers.get_metadata(
metadata=metadata,
function_name="replace_with_background",
starting_background_duration=first_bg_segment_len,
source_duration=src_length,
ending_background_duration=last_bg_segment_len,
**func_kwargs,
)
    return output_path or video_path
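The segment arithmetic above (background length, the pre/post split controlled by `source_offset`, and the retained source length) can be checked in isolation. The helper below is purely illustrative and not part of augly's API; it mirrors the computations of `final_bg_len`, `first_bg_segment_len`, `last_bg_segment_len`, and `src_length` from the function body:

```python
def background_segments(video_duration, source_offset, source_percentage):
    """Compute (pre_bg, src, post_bg) segment durations, mirroring the
    arithmetic in replace_with_background (illustrative sketch only)."""
    # total background footage in the output
    final_bg_len = video_duration * (1 - source_percentage)
    # background placed before the source clip
    first_bg = source_offset * final_bg_len
    # remaining background placed after the source clip
    last_bg = final_bg_len - first_bg
    # source footage that survives the replacement
    src_len = source_percentage * video_duration
    return first_bg, src_len, last_bg

# A 10s video, half replaced, background split evenly around the source:
pre, src, post = background_segments(10.0, 0.5, 0.5)
# pre = 2.5, src = 5.0, post = 2.5; the total stays 10.0
```

Note that when `source_offset` is 0 or 1, one of the background segments has zero length, which is why the function body skips creating `before.mp4` or `after.mp4` in those cases.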


def loop(
video_path: str,
output_path: Optional[str] = None,
num_loops: int = 0,
metadata: Optional[List[Dict[str, Any]]] = None,
) -> str:
"""
Loops a video `num_loops` times
@param video_path: the path to the video to be augmented
@param output_path: the path in which the resulting video will be stored.
If not passed in, the original video file will be overwritten
@param num_loops: the number of times to loop the video. 0 means that the video
will play once (i.e. no loops)
@param metadata: if set to be a list, metadata about the function execution
including its name, the source & dest duration, fps, etc. will be appended
to the inputted list. If set to None, no metadata will be appended or returned
@returns: the path to the augmented video
"""
func_kwargs = helpers.get_func_kwargs(metadata, locals(), video_path)
loop_aug = af.VideoAugmenterByLoops(num_loops)
loop_aug.add_augmenter(video_path, output_path)
if metadata is not None:
helpers.get_metadata(metadata=metadata, function_name="loop", **func_kwargs)
    return output_path or video_path
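Since `num_loops=0` means the video plays exactly once, the output duration is `(num_loops + 1)` times the source duration. The tiny helper below makes that contract explicit; it is an illustrative sketch, not part of augly's API:

```python
def looped_duration(src_duration: float, num_loops: int) -> float:
    """Expected duration of loop()'s output: the clip plays num_loops + 1 times."""
    if num_loops < 0:
        raise ValueError("num_loops must be non-negative")
    return src_duration * (num_loops + 1)

# A 4-second clip with num_loops=2 plays three times, i.e. 12 seconds total.
assert looped_duration(4.0, 2) == 12.0
assert looped_duration(4.0, 0) == 4.0  # no loops: plays once
```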