dlpy.model.Model.deploy

Model.deploy(path, output_format='astore', model_weights=None, layers=None, **kwargs)

Deploy the deep learning model to a data file.

Parameters:
path : string

Specifies the location in which to store the model files. If output_format is set to castable, the location must be on the server side; otherwise, it must be on the client side.

output_format : string, optional

Specifies the format of the deployed model.
Valid Values: astore, castable, or onnx
Default: astore

model_weights : string, optional

Specifies the client-side path to the CSV file that contains the model weights table. This option takes effect only when output_format='onnx'. If no CSV file is specified when deploying to ONNX, the weights are fetched from the CAS server, which can take a long time if the model weights table is large.

layers : list of strings, optional

Specifies the names of the layers to include in the output astore scoring results. Use this to extract the features computed by those layers.
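For example, a typical astore deployment might look like the following. This is a minimal sketch, assuming a trained DLPy Model object named model and a writable client-side directory; the directory path and the layer name are illustrative, not part of the API.

```python
# Sketch: export a trained DLPy model as an astore file.
# Assumes `model` is a trained dlpy.model.Model and that
# '/tmp/exported' is a writable client-side directory.
model.deploy(path='/tmp/exported', output_format='astore')

# Optionally include intermediate layer outputs in the astore scoring
# results, e.g. for feature extraction ('fc1' is a hypothetical
# layer name in the model).
model.deploy(path='/tmp/exported',
             output_format='astore',
             layers=['fc1'])
```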

Notes

Currently, this function supports the sashdat (castable), astore, and onnx formats.

More information about ONNX can be found at: https://onnx.ai/

DLPy supports ONNX version 1.3.0 or later and opset version 8.

For the ONNX format, the currently supported layer types are convo, pool, fc, batchnorm, residual, concat, reshape, and detection.
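As a concrete illustration of the ONNX path, the call below exports the model and supplies a local weights CSV so the weights are not fetched from the CAS server. This is a hedged sketch: it assumes a trained DLPy Model object named model, the file paths are illustrative, and the optional verification step uses the standard onnx package (onnx.load and onnx.checker.check_model).

```python
# Sketch: export to ONNX, reading weights from a client-side CSV.
# Assumes `model` is a trained dlpy.model.Model; paths are illustrative.
model.deploy(path='./export_dir',
             output_format='onnx',
             model_weights='./export_dir/model_weights.csv')

# Optional sanity check on the produced file with the onnx package
# ('Simple_CNN.onnx' is a hypothetical file name; the actual name
# depends on the model).
import onnx
m = onnx.load('./export_dir/Simple_CNN.onnx')
onnx.checker.check_model(m)
```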

If the model uses dropout, train it with inverted dropout, which can be specified in Optimizer. This ensures the results are correct when the model is scored during the test phase.
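To see why inverted dropout matters here, the generic sketch below (plain NumPy, not DLPy code) shows the idea: kept activations are rescaled by 1/(1 - p) during training, so no compensation is needed at test time; this is the property that lets the deployed model score correctly without knowing the dropout rate.

```python
import numpy as np

def inverted_dropout(x, p, rng):
    # keep_prob = 1 - p; kept units are scaled by 1/keep_prob at TRAIN
    # time, so the test/inference pass applies no extra scaling.
    keep_prob = 1.0 - p
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones(1000)
y = inverted_dropout(x, p=0.5, rng=rng)
# y.mean() stays close to x.mean() == 1.0, because the surviving
# activations are scaled up to preserve the expected value.
```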