dlpy.applications.UNet

dlpy.applications.UNet(conn, model_table='UNet', n_classes=2, n_channels=1, width=256, height=256, scale=0.00392156862745098, norm_stds=None, offsets=None, random_mutation=None, init=None, bn_after_convolutions=False, random_flip=None, random_crop=None, output_image_type=None, output_image_prob=False)

Generates a deep learning model with the U-Net architecture.

Parameters:
conn : CAS

Specifies the CAS connection object.

model_table : string, optional

Specifies the name of the CAS table to store the model.

n_classes : int, optional

Specifies the number of classes. If None is assigned, the model will automatically detect the number of classes based on the training set.
Default: 2

n_channels : int, optional

Specifies the number of the channels (i.e., depth) of the input layer.
Default: 1

width : int, optional

Specifies the width of the input layer.
Default: 256

height : int, optional

Specifies the height of the input layer.
Default: 256

scale : double, optional

Specifies a scaling factor to be applied to each pixel intensity value.
Default: 1.0/255

norm_stds : double or iter-of-doubles, optional

Specifies a standard deviation for each channel in the input data. The final input data is normalized with the specified offsets (as means) and standard deviations.

offsets : double or iter-of-doubles, optional

Specifies an offset for each channel in the input data. The final input data is set after applying scaling and subtracting the specified offsets.

random_mutation : string, optional

Specifies how to apply data augmentations/mutations to the data in the input layer.
Valid Values: ‘none’, ‘random’

init : str, optional

Specifies the initialization scheme for convolution layers.
Valid Values: XAVIER, UNIFORM, NORMAL, CAUCHY, XAVIER1, XAVIER2, MSRA, MSRA1, MSRA2
Default: None

bn_after_convolutions : bool, optional

If set to True, a batch normalization layer is added after each convolution layer.
Default: False

random_flip : string, optional

Specifies how to flip the data in the input layer when image data is used. Approximately half of the input data is subject to flipping.
Valid Values: ‘h’, ‘hv’, ‘v’, ‘none’

random_crop : string, optional

Specifies how to crop the data in the input layer when image data is used. Images are cropped to the values that are specified in the width and height parameters. Only images with one or both dimensions larger than those sizes are cropped.
Valid Values: ‘none’, ‘unique’, ‘randomresized’, ‘resizethencrop’

output_image_type : string, optional

Specifies the output image type of this layer.
Valid Values: WIDE, PNG, BASE64
Default: WIDE

output_image_prob : bool, optional

Specifies whether to include class probabilities in the output when performing classification.
Default: False

Returns:
Sequential

References

Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. https://arxiv.org/pdf/1505.04597
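
A minimal usage sketch follows, assuming an active CAS session created with the swat package; the host, port, and parameter values shown are illustrative, not required settings.

>>> from swat import CAS
>>> from dlpy.applications import UNet
>>> conn = CAS('cas-server.example.com', 5570)  # hypothetical host and port
>>> model = UNet(conn, model_table='UNet', n_classes=2, n_channels=1,
...              width=256, height=256, scale=1.0/255,
...              random_flip='hv', random_mutation='random')
>>> model.print_summary()  # inspect the generated layers

The call returns a Sequential model, which can then be trained on a CAS table of images and segmentation masks using the model's fit method.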