dlpy.applications.Darknet_Reference
dlpy.applications.Darknet_Reference(conn, model_table='Darknet_Reference', n_classes=1000, act='leaky', n_channels=3, width=224, height=224, scale=0.00392156862745098, random_flip='H', random_crop='UNIQUE', random_mutation=None)

Generates a deep learning model with the Darknet_Reference architecture.
The head of the model, except for the last convolutional layer, is the same as the head of Tiny YOLOv2. Darknet Reference is a pre-trained model for ImageNet classification.
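A minimal usage sketch follows. It assumes a running CAS server and the swat and dlpy packages; the host and port arguments are hypothetical placeholders, and the server-dependent imports and calls are deferred inside the function so the sketch stays illustrative rather than a definitive recipe.

```python
# Sketch only: building the model requires a live CAS session, so the
# server-dependent imports and calls are kept inside the function.
def build_darknet_reference(host, port):
    # host and port are hypothetical placeholders for your CAS server
    import swat
    from dlpy.applications import Darknet_Reference

    conn = swat.CAS(host, port)  # open a CAS session
    # The defaults give the 1000-class, 224x224 ImageNet configuration
    model = Darknet_Reference(conn, n_classes=1000,
                              width=224, height=224)
    return model
```

The returned model can then be trained or used for scoring through the usual DLPy model methods.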
Parameters:
- conn : CAS
  Specifies the CAS connection object.
- model_table : string, optional
  Specifies the name of the CAS table to store the model in.
  Default: 'Darknet_Reference'
- n_classes : int, optional
  Specifies the number of classes. If None is assigned, the model automatically detects the number of classes based on the training set.
  Default: 1000
- act : string, optional
  Specifies the activation function for the batch normalization layers and the final convolution layer.
  Default: 'leaky'
- n_channels : int, optional
  Specifies the number of channels (i.e., depth) of the input layer.
  Default: 3
- width : int, optional
  Specifies the width of the input layer.
  Default: 224
- height : int, optional
  Specifies the height of the input layer.
  Default: 224
- scale : double, optional
  Specifies a scaling factor applied to each pixel intensity value.
  Default: 1.0 / 255
- random_flip : string, optional
  Specifies how to flip the data in the input layer when image data is used. Approximately half of the input data is subject to flipping.
  Valid Values: 'h', 'hv', 'v', 'none'
  Default: 'h'
- random_crop : string, optional
  Specifies how to crop the data in the input layer when image data is used. Images are cropped to the values specified in the width and height parameters. Only images with one or both dimensions larger than those sizes are cropped.
  Valid Values: 'none', 'unique', 'randomresized', 'resizethencrop'
  Default: 'unique'
- random_mutation : string, optional
  Specifies how to apply data augmentations/mutations to the data in the input layer.
  Valid Values: 'none', 'random'
  Default: None

Returns:
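As a small arithmetic check (not part of the API itself), the default scale value in the signature, 0.00392156862745098, is exactly 1/255: it maps 8-bit pixel intensities from the range [0, 255] into [0, 1] before they enter the network.

```python
# The default scale in the signature is simply 1/255.
scale = 1.0 / 255
print(scale)  # 0.00392156862745098

# Applied to the maximum 8-bit intensity, it normalizes to ~1.0
print(255 * scale)
```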