A better way to calculate 2 important metrics for ML and AI
Updated: Oct 19
F1-score and AUC. We lean on these 2 metrics constantly in our DS projects. When we think about metrics at all, it is usually to choose which one(s) to focus on and optimize, and it is easy to assume the default calculations are fine. You may be surprised to learn that the way these metrics are commonly summarized has real room for improvement. If you are running cross-validation and compiling summary metrics from your test folds, consider taking the time to calculate these 2 metrics manually from the predictions themselves. That will give you the most accurate and least biased values possible.
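One common way to do this by hand is to pool the out-of-fold predictions from every CV fold and compute each metric once on the pooled set, instead of averaging per-fold scores. Here is a minimal pure-Python sketch: the `f1_score` and `auc_score` functions implement the standard definitions (AUC via the Mann-Whitney rank formulation), and the fold data is invented purely for illustration.

```python
def f1_score(y_true, y_pred):
    """F1 from hard 0/1 predictions: 2*TP / (2*TP + FP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def auc_score(y_true, y_score):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive gets a higher score than a randomly chosen
    negative (ties count as half a win)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical out-of-fold results from a 3-fold CV run; in practice
# each tuple (labels, hard predictions, scores) comes from your own loop.
folds = [
    ([1, 1, 0, 0], [1, 0, 1, 0], [0.9, 0.4, 0.6, 0.1]),
    ([1, 0, 1, 0], [1, 0, 0, 0], [0.8, 0.3, 0.45, 0.2]),
    ([0, 1, 0, 1], [0, 1, 1, 1], [0.05, 0.7, 0.55, 0.95]),
]

# Pool predictions across folds, then compute each metric once.
pooled_true, pooled_pred, pooled_score = [], [], []
for y_true, y_pred, y_score in folds:
    pooled_true += y_true
    pooled_pred += y_pred
    pooled_score += y_score

print("pooled F1: ", round(f1_score(pooled_true, pooled_pred), 3))
print("pooled AUC:", round(auc_score(pooled_true, pooled_score), 3))
```

Pooling first and computing once avoids the distortion you can get from averaging a ratio metric over small, possibly imbalanced folds, which is one reason the default per-fold averages can be misleading.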