pixels in the input image. To decide which pixels to look at, the
training algorithm randomly selects pixels from a box roughly centered
around the object of interest. We call this box the feature pool region
box.

Each object of interest is defined by a full_object_detection, which
contains a bounding box and a list of landmarks. If
landmark_relative_padding_mode==True then the feature pool region box is
the tightest box that contains the landmarks inside the
full_object_detection. In this mode the full_object_detection's bounding
box is ignored. Otherwise, if the padding mode is bounding_box_relative,
then the feature pool region box is the tightest box that contains BOTH
the landmarks and the full_object_detection's bounding box.

Additionally, you can adjust the size of the feature pool region box
by setting feature_pool_region_padding to some value. If
feature_pool_region_padding==0 then the feature pool region box is
unmodified and defined exactly as stated above. However, you can expand
the box by setting the padding > 0 or shrink it by setting the padding < 0.

To explain this precisely, for a padding of 0 we say that the pixels are
sampled from a box of size 1x1. The padding value is added to each side
of the box. So a padding of 0.5 would cause the algorithm to sample
pixels from a box of size 2x2, effectively multiplying the area pixels
are sampled from by 4. Similarly, setting the padding to -0.2 would
cause it to sample from a box 0.6x0.6 in size.
!*/
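The box geometry described above can be sketched in a few lines of Python. The helper names here (`tight_box`, `apply_padding`) are illustrative stand-ins, not part of dlib's API; the padding convention follows the 1x1-box explanation above.

```python
# Illustration of the feature pool region box geometry described above.
# tight_box and apply_padding are our own names, not dlib's API.

def tight_box(points):
    """Tightest axis-aligned box containing all the given (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def apply_padding(box, padding):
    """Expand (padding > 0) or shrink (padding < 0) each side of the box.

    The padding is relative to the box's own width/height, matching the
    1x1-box convention above: a padding of 0.5 turns a 1x1 box into 2x2."""
    left, top, right, bottom = box
    pad_x = padding * (right - left)
    pad_y = padding * (bottom - top)
    return (left - pad_x, top - pad_y, right + pad_x, bottom + pad_y)

# In landmark_relative mode the region box is the tight box of the landmarks;
# in bounding_box_relative mode it must also contain the detection's bounding box.
landmarks = [(2, 3), (5, 1), (4, 6)]
print(tight_box(landmarks))  # (2, 1, 5, 6)

# A padding of 0.5 doubles each side length (area x4); -0.2 shrinks to 0.6x0.6.
unit = (0.0, 0.0, 1.0, 1.0)
print(apply_padding(unit, 0.5))   # (-0.5, -0.5, 1.5, 1.5): a 2x2 box
print(apply_padding(unit, -0.2))  # (0.2, 0.2, 0.8, 0.8): a 0.6x0.6 box
```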
"Size of the region within which to sample features for the feature pool. \
Positive values increase the sampling region while negative values decrease it. E.g. a padding of 0 means we \
sample from the unmodified feature pool region box described above.")
.def_readwrite("random_seed", &type::random_seed,
    "The random seed used by the internal random number generator.")
.def_readwrite("num_threads", &type::num_threads,
    "Use this many threads/CPU cores for training.")
.def("__str__", &::print_shape_predictor_training_options)
.def("__repr__", &::print_shape_predictor_training_options)
.def(py::pickle(&getstate<type>, &setstate<type>));
}
{
typedef shape_predictor type;
py::class_<type, std::shared_ptr<type>>(m, "shape_predictor",
    "This object is a tool that takes in an image region containing some object and \
outputs a set of point locations that define the pose of the object. The classic \
example of this is human face pose prediction, where you take an image of a human \
face as input and are expected to identify the locations of important facial \
landmarks such as the corners of the mouth and eyes, tip of the nose, and so forth.")
    .def(py::init())
    .def(py::init(&load_object_from_file<type>),
        "Loads a shape_predictor from a file that contains the output of the \n\
train_shape_predictor() routine.")
    .def("__call__", &run_predictor, py::arg("image"), py::arg("box"),
        "requires \n\
    - image is a numpy ndarray containing either an 8bit grayscale or RGB \n\
      image. \n\
    - box is the bounding box to begin the shape prediction inside. \n\
ensures \n\
    - This function runs the shape predictor on the input image and returns \n\
      a single full_object_detection.")
    .def("save", save_shape_predictor, py::arg("predictor_output_filename"), "Save a shape_predictor to the provided path.")
    .def(py::pickle(&getstate<type>, &setstate<type>));
}
{
m.def("train_shape_predictor", train_shape_predictor_on_images_py,
    py::arg("images"), py::arg("object_detections"), py::arg("options"),
    "requires \n\
    - options.lambda_param > 0 \n\
    - 0 < options.nu <= 1 \n\
    - options.feature_pool_region_padding >= 0 \n\
    - len(images) == len(object_detections) \n\
    - images should be a list of numpy matrices that represent images, either RGB or grayscale. \n\
    - object_detections should be a list of lists of dlib.full_object_detection objects. \
Each dlib.full_object_detection contains the bounding box and the lists of points that make up the object parts.\n\
ensures \n\
    - Uses dlib's shape_predictor_trainer object to train a \n\
      shape_predictor based on the provided labeled images, full_object_detections, and options.\n\
    - The trained shape_predictor is returned.");

m.def("train_shape_predictor", train_shape_predictor,
    py::arg("dataset_filename"), py::arg("predictor_output_filename"), py::arg("options"),
    "requires \n\
    - options.lambda_param > 0 \n\
    - 0 < options.nu <= 1 \n\
    - options.feature_pool_region_padding >= 0 \n\
ensures \n\
    - Uses dlib's shape_predictor_trainer to train a \n\
      shape_predictor based on the labeled images in the XML file \n\
      dataset_filename and the provided options. This function assumes the file dataset_filename is in the \n\
      XML format produced by dlib's save_image_dataset_metadata() routine. \n\
    - The trained shape predictor is serialized to the file predictor_output_filename.");
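The requires clauses in the two train_shape_predictor docstrings translate directly into an up-front check. The following sketch mirrors those clauses in plain Python; `check_training_options` and the `SimpleNamespace` options object are illustrative stand-ins, not dlib API (dlib performs its own validation internally).

```python
# Validate the preconditions listed in the train_shape_predictor docstrings.
# check_training_options is a stand-in, not a dlib function; it simply
# mirrors the documented requires clauses.
from types import SimpleNamespace

def check_training_options(options, images=None, object_detections=None):
    if not options.lambda_param > 0:
        raise ValueError("options.lambda_param must be > 0")
    if not 0 < options.nu <= 1:
        raise ValueError("options.nu must be in (0, 1]")
    if not options.feature_pool_region_padding >= 0:
        raise ValueError("options.feature_pool_region_padding must be >= 0")
    if images is not None and len(images) != len(object_detections):
        raise ValueError("len(images) must equal len(object_detections)")

opts = SimpleNamespace(lambda_param=0.1, nu=0.1, feature_pool_region_padding=0)
check_training_options(opts)  # passes silently

opts.nu = 1.5                 # violates 0 < nu <= 1
try:
    check_training_options(opts)
except ValueError as e:
    print(e)                  # options.nu must be in (0, 1]
```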
m.def("test_shape_predictor", test_shape_predictor_py,
    py::arg("dataset_filename"), py::arg("predictor_filename"),
    "ensures \n\
    - Loads an image dataset from dataset_filename. We assume dataset_filename is \n\
      a file using the XML format written by save_image_dataset_metadata(). \n\
    - Loads a shape_predictor from the file predictor_filename. This means \n\
      predictor_filename should be a file produced by the train_shape_predictor() \n\
      routine. \n\
    - This function tests the predictor against the dataset and returns the \n\