dlpy.metrics.average_precision_score

dlpy.metrics.average_precision_score(y_true, y_score, pos_label, castable=None, cutstep=0.001, interpolate=False, id_vars=None)

Compute the average precision score for binary classification tasks.
Parameters: - y_true : string or CASColumn
The column of the ground truth labels. If it is a string, then y_score must also be a string, and both columns belong to the CASTable specified by the castable argument. If it is a CASColumn, then y_score must also be a CASColumn, and the castable argument is ignored. When both y_true and y_score are CASColumns, they may come from different CASTables.
- y_score : string or CASColumn
The column of estimated probabilities for the positive class. If it is a string, then y_true must also be a string, and both columns belong to the CASTable specified by the castable argument. If it is a CASColumn, then y_true must also be a CASColumn, and the castable argument is ignored. When both y_true and y_score are CASColumns, they may come from different CASTables.
- pos_label : string, int or float
The positive class label.
- castable : CASTable, optional
The CASTable object to use as the source when y_score and y_true are strings. Default = None.
- cutstep : float > 0 and < 1, optional
The step size of the threshold cutoffs. Default = 0.001.
- interpolate : boolean, optional
If interpolate=True, the score is the area under the precision-recall curve computed with linear interpolation. Otherwise, it follows the step-wise definition used by sklearn.metrics.average_precision_score (see https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html). Default = False.
- id_vars : string or list of strings, optional
Column names that serve as unique IDs for y_true and y_score when they come from different CASTables. The column names must appear in both CASTables and are used to match y_true and y_score correctly, since observation order can be shuffled in a distributed computing environment. Default = None.
Returns: - score : float
The average precision score.
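To make the cutstep and interpolate parameters concrete, here is a minimal NumPy sketch of the computation described above. The function name average_precision_sketch and all implementation details are illustrative assumptions, not DLPy's actual CAS-side code: thresholds are swept in steps of cutstep, precision and recall are computed at each cutoff, and the score is either the step-wise sum AP = Σₙ (Rₙ − Rₙ₋₁) Pₙ or the linearly interpolated area under the precision-recall curve.

```python
import numpy as np

def average_precision_sketch(y_true, y_score, cutstep=0.001, interpolate=False):
    """Illustrative average precision via a threshold sweep.

    A local sketch of the cutstep-based computation; NOT DLPy's
    server-side implementation, which operates on CAS tables.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    precisions, recalls = [], []
    # Sweep thresholds from high to low so recall is non-decreasing.
    for t in np.arange(1.0, 0.0, -cutstep):
        pred = y_score >= t
        if not pred.any():
            continue  # precision is undefined when nothing is predicted positive
        tp = np.count_nonzero(pred & y_true)
        precisions.append(tp / np.count_nonzero(pred))
        recalls.append(tp / np.count_nonzero(y_true))
    p = np.array(precisions)
    r = np.array(recalls)
    if interpolate:
        # Trapezoidal area under the PR curve; the curve is anchored at
        # recall 0 with the first observed precision (a common convention).
        r = np.concatenate(([0.0], r))
        p = np.concatenate(([p[0]], p))
        return float(np.sum(np.diff(r) * (p[1:] + p[:-1]) / 2.0))
    # Step-wise definition: AP = sum over n of (R_n - R_{n-1}) * P_n.
    return float(np.sum(np.diff(r, prepend=0.0) * p))
```

A smaller cutstep gives a finer threshold grid and a more precise score at the cost of more work per column; with perfectly separated scores both modes return 1.0.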