calour.training.plot_roc(result, classes=None, title='ROC', cv=True, cmap=None, ax=None)
Plot ROC curve.
Note
You may want to consider using the precision-recall curve (:func:`plot_prc`) instead of the ROC curve. If your model needs to perform equally well on the negative class and the positive class, use the ROC AUC. For example, when classifying images of cats versus dogs, if you want the model to perform well on the cats as well as on the dogs, ROC is a good choice and is more intuitive to interpret. If you have imbalanced classes, or you don't care about the negative class at all, use the precision-recall curve instead. Take cancer diagnosis as an example: you tend to have far more negative samples than positive cancer samples, and you want to make sure your positive predictions are correct (precision) while not missing any cancers (recall, also known as sensitivity), so the precision-recall curve is the right choice. If the classes are balanced, ROC usually works fine even in this scenario. [1]
Parameters:
Returns:
    The axes for the ROC plot and the AUC value
Return type:
    tuple of matplotlib.axes.Axes and float
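To illustrate the point made in the note above, here is a minimal, self-contained sketch (toy data invented for illustration, not calour output) showing how an imbalanced dataset can yield a high ROC AUC while precision at a reasonable threshold remains modest:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank (Mann-Whitney U) formulation:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def precision_recall(labels, scores, threshold):
    """Precision and recall when predicting positive for score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    return tp / (tp + fp), tp / (tp + fn)

# 2 positives (e.g. cancer) among 8 negatives -- an imbalanced problem
labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.01]

print(roc_auc(labels, scores))                 # 0.9375 -- looks excellent
print(precision_recall(labels, scores, 0.55))  # precision ~0.67, recall 1.0
```

The ROC AUC is dominated by the many easy negatives, while precision directly reports how trustworthy the positive predictions are, which is what the precision-recall curve summarizes.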
References
[1] Saito T and Rehmsmeier M (2015) The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets. PLoS One, 10.