dlpy.model.Model.evaluate

Model.evaluate(data, text_parms=None, layer_out=None, layers=None, gpu=None, buffer_size=None, mini_batch_buf_size=None, top_probs=None, use_best_weights=False, random_crop='none', random_flip='none', random_mutation='none', model_task=None, display_class_score_info='all')

Evaluate the deep learning model on a specified validation data set.

After inference, a confusion matrix is created from the results, which makes this method well suited to classification tasks.
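
A minimal usage sketch follows. The host, port, table names, and the Model.from_table call are illustrative assumptions; any trained DLPy Model and validation CASTable can be scored the same way.

>>> import swat
>>> from dlpy.model import Model
>>> conn = swat.CAS('cas-host.example.com', 5570)        # assumed host and port
>>> model = Model.from_table(conn.CASTable('my_model'))   # assumed saved model table
>>> valid_tbl = conn.CASTable('valid_imgs')               # assumed validation data set
>>> res = model.evaluate(data=valid_tbl)                  # score and build the confusion matrix
>>> list(res.keys())                                      # inspect the returned CASResults entries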

Parameters:
data : string or CASTable

Specifies the input data.

text_parms : TextParms, optional

Specifies the parameters for the text inputs.

layer_out : string, optional

Specifies the settings for an output table that includes layer output values. By default, all layers are included. You can filter the list with the layers parameter.

layers : list of strings, optional

Specifies the names of the layers to include in the output layers table.
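
A hedged sketch of capturing layer outputs, continuing the session from the first example. The output table name 'layer_vals' and the layer names 'conv1' and 'fc2' are placeholders; use the names defined in your own network.

>>> # Write output values for two named layers to a CAS table
>>> res = model.evaluate(data=valid_tbl,
...                      layer_out='layer_vals',
...                      layers=['conv1', 'fc2'])
>>> conn.CASTable('layer_vals').head()   # inspect a few rows of the layer-output table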

gpu : GPU, optional

When specified, the action uses graphics processing unit (GPU) hardware. The simplest way to enable GPU processing is to specify gpu=1, in which case the default values of the other GPU parameters are used. Setting gpu=1 enables all available GPU devices; setting gpu=0 disables GPU processing.
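
For example, continuing the session from the first example:

>>> # gpu=1 enables all available GPU devices with default GPU settings
>>> res_gpu = model.evaluate(data=valid_tbl, gpu=1)
>>> # gpu=0 forces CPU-only scoring
>>> res_cpu = model.evaluate(data=valid_tbl, gpu=0)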

buffer_size : int, optional

Specifies the number of observations to score in a single batch. Larger values use more memory.
Default: 10

mini_batch_buf_size : int, optional

Specifies the size of a buffer that is used to save input data and intermediate calculations. By default, each layer allocates an input buffer that is equal to the number of input channels multiplied by the input feature map size multiplied by the buffer_size value. You can reduce memory usage by specifying a value that is smaller than buffer_size. The only disadvantage of specifying a smaller value is that run time can increase, because multiple smaller matrices must be multiplied instead of a single large matrix.
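
A sketch that trades memory for run time, continuing the session from the first example; the values 50 and 4 are illustrative only.

>>> # Score 50 observations per batch, but use a smaller intermediate buffer (4)
>>> # than buffer_size to reduce memory usage at some cost in run time
>>> res = model.evaluate(data=valid_tbl, buffer_size=50, mini_batch_buf_size=4)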

top_probs : int, optional

Specifies to include the predicted probabilities along with the corresponding labels in the results. For example, if you specify 5, then the top 5 predicted probabilities are shown in the results along with the corresponding labels.
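
For example, continuing the session from the first example, to keep the five highest predicted probabilities and their labels for each observation:

>>> # Include the top 5 predicted probabilities and corresponding labels in the results
>>> res = model.evaluate(data=valid_tbl, top_probs=5)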

use_best_weights : bool, optional

When set to True, the weights that produced the smallest loss error, saved during a previous training, are used to score the input data rather than the final weights from that training.
Default: False
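
A sketch continuing the session from the first example, assuming a lowest-loss set of weights was saved during a previous training run; otherwise the final training weights are used.

>>> # Score with the lowest-loss weights saved during training instead of the final weights
>>> res = model.evaluate(data=valid_tbl, use_best_weights=True)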

random_flip : string, optional

Specifies how to flip the data in the input layer when image data is used. H flips horizontally, V flips vertically, and HV flips both horizontally and vertically. Approximately half of the input data is subject to flipping.
Default: NONE
Valid Values: NONE, H, V, HV

random_crop : string, optional

Specifies how to crop the data in the input layer when image data is used. Images are cropped to the sizes that are specified in the width and height parameters; images with both dimensions less than or equal to those sizes are not modified. With UNIQUE, cropping of each larger image begins at a random offset for x and y.
Default: NONE
Valid Values: NONE, UNIQUE
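
For image data, augmentation at scoring time can be requested as follows, continuing the session from the first example; whether this helps depends on how the input layer was configured.

>>> # Randomly flip about half of the images horizontally and crop oversized
>>> # images to the input layer's width and height at a random offset
>>> res = model.evaluate(data=valid_tbl, random_flip='H', random_crop='UNIQUE')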

random_mutation : string, optional

Specifies how to mutate images.
Default: NONE
Valid Values: NONE, RANDOM

model_task : string, optional

Specifies the model task type.
Valid Values: CLASSIFICATION, REGRESSION

display_class_score_info : string, optional

When set to ALL, displays the ClassScoreInfo table in the results.
Default: ALL
Valid Values: NONE, ALL
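
For example, continuing the session from the first example, scoring a model as a regression task and omitting the ClassScoreInfo table:

>>> # Treat the model as a regression model and suppress the ClassScoreInfo table
>>> res = model.evaluate(data=valid_tbl,
...                      model_task='REGRESSION',
...                      display_class_score_info='NONE')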

Returns:
CASResults