Which metric is the harmonic mean of precision and recall?


Multiple Choice

Which metric is the harmonic mean of precision and recall?

Explanation:
The metric that combines precision and recall using a harmonic mean is the F1 score (the F-score with precision and recall weighted equally). It is defined as F1 = 2 × (precision × recall) / (precision + recall). Because the harmonic mean is dominated by the smaller of the two values, the F1 score drops sharply if either precision or recall is low, reflecting poor overall performance on the positive class. For example, with precision 0.9 and recall 0.5, the F1 score is about 0.64, showing that high precision alone does not compensate for low recall. This makes the F1 score preferable when you want a single measure that penalizes imbalanced performance between finding positives (recall) and avoiding false positives (precision).

The other options don't fit: accuracy looks at overall correct predictions and can be misleading when the class distribution is imbalanced; a confusion matrix is just the table of true/false positives and negatives, not a single metric; and hyperparameters are model settings, not a performance metric at all.

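The formula in the explanation can be sketched as a small helper function (the function name and the zero-division convention here are illustrative, not from the source):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    if precision + recall == 0:
        return 0.0  # conventional value when both inputs are zero
    return 2 * precision * recall / (precision + recall)

# The worked example from the explanation: precision 0.9, recall 0.5
print(round(f1_score(0.9, 0.5), 2))  # → 0.64
```

Note how the harmonic mean pulls the result toward the smaller input: the arithmetic mean of 0.9 and 0.5 would be 0.7, but the F1 score is only about 0.64.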
