dlpy.applications.YoloV2_MultiSize
dlpy.applications.YoloV2_MultiSize(conn, anchors, model_table='YoloV2-MultiSize', n_channels=3, width=416, height=416, scale=0.00392156862745098, random_mutation=None, act='leaky', act_detection='AUTO', softmax_for_class_prob=True, coord_type='YOLO', max_label_per_image=30, max_boxes=30, n_classes=20, predictions_per_grid=5, do_sqrt=True, grid_number=13, coord_scale=None, object_scale=None, prediction_not_a_object_scale=None, class_scale=None, detection_threshold=None, iou_threshold=None, random_boxes=False, match_anchor_size=None, num_to_force_coord=None, random_flip=None, random_crop=None)

Generates a deep learning model with the YOLOv2 architecture.
The model follows the YOLOv2 architecture proposed in the original paper. In addition to the base YOLOv2 layers, the model adds a passthrough layer that brings features from an earlier, higher-resolution layer to the lower-resolution detection layer.
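A minimal construction sketch, assuming a reachable CAS server (the host, port, and anchor values below are illustrative placeholders, not recommended settings):

from swat import CAS
from dlpy.applications import YoloV2_MultiSize

# Connect to CAS; replace the host and port with your own deployment's values.
conn = CAS('cas-server.example.com', 5570)

# Anchors are given as flat (width, height) pairs in grid-cell units, one pair
# per prediction per grid cell (five pairs for the default predictions_per_grid=5).
# These particular numbers are only an example, not tuned for any data set.
anchors = [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52]

model = YoloV2_MultiSize(conn,
                         anchors=anchors,
                         n_classes=20,
                         predictions_per_grid=5,
                         grid_number=13)
model.print_summary()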
Parameters: - conn : CAS
Specifies the CAS connection object.
- anchors : list
Specifies the anchor box values.
- model_table : string, optional
Specifies the name of CAS table to store the model.
- n_channels : int, optional
Specifies the number of channels (i.e., depth) of the input layer.
Default: 3
- width : int, optional
Specifies the width of the input layer.
Default: 416
- height : int, optional
Specifies the height of the input layer.
Default: 416
- scale : double, optional
Specifies a scaling factor to be applied to each pixel intensity value.
Default: 1.0 / 255
- random_mutation : string, optional
Specifies how to apply data augmentations/mutations to the data in the input layer.
Valid Values: ‘none’, ‘random’
- act : string, optional
Specifies the activation function for the batch normalization layers.
Default: ‘leaky’
- act_detection : string, optional
Specifies the activation function for the detection layer.
Valid Values: AUTO, IDENTITY, LOGISTIC, SIGMOID, TANH, RECTIFIER, RELU, SOFTPLUS, ELU, LEAKY, FCMP
Default: AUTO
- softmax_for_class_prob : bool, optional
Specifies whether to perform Softmax on class probability per predicted object.
Default: True
- coord_type : string, optional
Specifies the format used to represent bounding boxes. For example, a bounding box can be represented by the x and y coordinates of its top-left corner together with the width and height of the rectangle; this is the ‘rect’ format. The ‘coco’ and ‘yolo’ formats are also supported.
Valid Values: ‘rect’, ‘yolo’, ‘coco’
Default: ‘yolo’
- max_label_per_image : int, optional
Specifies the maximum number of labels per image in the training.
Default: 30
- max_boxes : int, optional
Specifies the maximum number of overall predictions allowed in the detection layer.
Default: 30
- n_classes : int, optional
Specifies the number of classes. If None is assigned, the model will automatically detect the number of classes based on the training set.
Default: 20
- predictions_per_grid : int, optional
Specifies the number of predictions made per grid cell (see the consistency sketch at the end of this section).
Default: 5
- do_sqrt : bool, optional
Specifies whether to apply the SQRT function to width and height of the object for the cost function.
Default: True
- grid_number : int, optional
Specifies the number of grid cells along each dimension of the image. For example, if the value is 5, then the image is divided into a 5 x 5 grid.
Default: 13
- coord_scale : float, optional
Specifies the weight for the cost function in the detection layer, when objects exist in the grid.
- object_scale : float, optional
Specifies the weight for detected objects in the detection layer cost function.
- prediction_not_a_object_scale : float, optional
Specifies the weight for the cost function in the detection layer, when objects do not exist in the grid.
- class_scale : float, optional
Specifies the weight for the class of a detected object in the detection layer cost function.
- detection_threshold : float, optional
Specifies the threshold for object detection.
- iou_threshold : float, optional
Specifies the IOU threshold used for non-maximum suppression in object detection.
- random_boxes : bool, optional
Specifies whether to randomize boxes when loading the bounding box information.
Default: False
- match_anchor_size : bool, optional
Specifies whether to force the predicted boxes to match the anchor box sizes for all predictions.
- num_to_force_coord : int, optional
Specifies the number of images at the beginning of training for which the algorithm forces the predicted objects in each grid to match the anchor box sizes and to be located at the grid center.
- random_flip : string, optional
Specifies how to flip the data in the input layer when image data is used. Approximately half of the input data is subject to flipping.
Valid Values: ‘h’, ‘hv’, ‘v’, ‘none’
- random_crop : string, optional
Specifies how to crop the data in the input layer when image data is used. Images are cropped to the values that are specified in the width and height parameters. Only the images with one or both dimensions that are larger than those sizes are cropped.
Valid Values: ‘none’, ‘unique’, ‘randomresized’, ‘resizethencrop’
Returns: Sequential

References

Redmon, J., & Farhadi, A. (2016). YOLO9000: Better, Faster, Stronger. https://arxiv.org/abs/1612.08242
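The anchors, predictions_per_grid, and grid_number parameters interact: the anchors list is expected to hold one (width, height) pair per prediction, and the detection layer evaluates grid_number x grid_number x predictions_per_grid candidate boxes per image. A rough consistency sketch under those assumptions (not an official validation routine; the values are illustrative only):

# Illustrative values only.
anchors = [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52]
predictions_per_grid = 5
grid_number = 13

# One (width, height) pair is expected per prediction per grid cell.
assert len(anchors) == 2 * predictions_per_grid

# Total candidate boxes produced by the detection layer per image.
print(grid_number * grid_number * predictions_per_grid)  # 13 * 13 * 5 = 845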