dlpy.metrics.confusion_matrix

dlpy.metrics.confusion_matrix(y_true, y_pred, castable=None, labels=None, id_vars=None)

Computes the confusion matrix for a classification task.

Parameters:
y_true : string or CASColumn

The column of the ground-truth labels. If it is a string, then y_pred must also be a string, and both columns must belong to the same CASTable specified by the castable argument. If it is a CASColumn, then y_pred must also be a CASColumn, and the castable argument is ignored. When both y_true and y_pred are CASColumn objects, they can come from different CASTables.

y_pred : string or CASColumn

The column of the predicted class labels. If it is a string, then y_true must also be a string, and both columns must belong to the same CASTable specified by the castable argument. If it is a CASColumn, then y_true must also be a CASColumn, and the castable argument is ignored. When both y_true and y_pred are CASColumn objects, they can come from different CASTables.

castable : CASTable, optional

The CASTable to use as the source table when y_true and y_pred are strings. Default = None

labels : list, optional

List of labels used to reorder the matrix or to select a subset of the labels. If labels=None, all labels are included. Default = None

id_vars : string or list of strings, optional

Column names that serve as a unique identifier to match y_true and y_pred when they come from different CASTables. The columns must appear in both CASTables; they are used to align y_true with y_pred correctly, since observation order can be shuffled in a distributed computing environment. Default = None

Returns:
pandas.DataFrame

The confusion matrix. The row index contains the ground-truth class labels; the column index contains the predicted class labels.
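
Example:

A minimal usage sketch, not taken from the library documentation: the CAS host, port, table name, and column names (target, p_label, obs_id) are illustrative placeholders and should be adapted to your environment.

import pandas as pd
import swat
from dlpy.metrics import confusion_matrix

# Connect to a CAS server (hypothetical host and port).
conn = swat.CAS('cas-host.example.com', 5570)

# Upload a small scored data set; 'target' holds the ground-truth labels
# and 'p_label' holds the predicted class labels.
scored = pd.DataFrame({
    'target':  ['cat', 'dog', 'dog', 'cat', 'bird'],
    'p_label': ['cat', 'dog', 'cat', 'cat', 'bird'],
})
tbl = conn.upload_frame(scored, casout=dict(name='scored', replace=True))

# Both columns live in the same CASTable: pass the column names as strings
# together with the castable argument.
cm = confusion_matrix('target', 'p_label', castable=tbl)

# If the truth and predictions sit in different CASTables, pass CASColumn
# objects and supply id_vars so observations can be matched; the id column
# must exist in both tables (call pattern only, tables here are hypothetical):
# cm = confusion_matrix(truth_tbl['target'], pred_tbl['p_label'], id_vars='obs_id')

# Rows are ground-truth labels, columns are predicted labels.
print(cm)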