Calculates the recall of predictions and returns the result. Recall is the proportion of actual positive cases that were correctly identified; it answers: “Of all the items that were actually positive, how many did we catch?” Recall = True Positives / (True Positives + False Negatives). Scores range from 0.0 to 1.0, where 1.0 is perfect recall.
Parameters

| Name   | Type  | Description                                | Default  |
|--------|-------|--------------------------------------------|----------|
| y_true | array | The actual observed values (ground truth). | required |
| y_pred | array | The values predicted by the model.         | required |
Returns

| Type  | Description                                             |
|-------|---------------------------------------------------------|
| float | The calculated recall score, ranging from 0.0 to 1.0.   |
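The computation described above can be sketched in plain NumPy. Note this is an illustrative reimplementation, not the library's actual source; the function name `recall` and the assumption of binary 0/1 labels are this sketch's own.

```python
import numpy as np

def recall(y_true, y_pred):
    """Recall = TP / (TP + FN) for binary 0/1 labels.

    Illustrative sketch only; assumes both inputs are array-like
    sequences of 0s and 1s of equal length.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # True positives: actually positive and predicted positive.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    # False negatives: actually positive but predicted negative.
    fn = np.sum((y_true == 1) & (y_pred == 0))
    # Guard against division by zero when there are no actual positives.
    return float(tp / (tp + fn)) if (tp + fn) > 0 else 0.0

# Three actual positives, two of them caught -> recall = 2/3.
print(recall([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))
```

A score of 1.0 means every actual positive was caught; a score of 0.0 means none were.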