Ever since I started moving into data science, I have heard about the well-known bias versus variance tradeoff.
But I understood it just well enough to move on with my studies and never looked back too much. I always knew that a highly biased model underfits the data, while a high-variance model overfits it, and that neither is desirable when training an ML model.
I also knew that we should look for a balance between both states, so we end up with a good fit, that is, a model that generalizes the pattern well to new data.
But I would say I never went further than that. I never explored or built highly biased or highly variant models just to see what they actually do to the data and what their predictions look like (a quick sketch of that idea follows below).
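As a minimal sketch of that idea (assuming scikit-learn and a toy noisy sine dataset, not the data used later in this post), the snippet below fits a very rigid, a moderate, and a very flexible polynomial model to the same sample and compares their train and test errors:

```python
# Minimal sketch: fit a very rigid and a very flexible model to the same
# noisy sample to see underfitting and overfitting side by side.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)   # noisy sine wave

X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

for degree in (1, 4, 15):   # high bias, balanced, high variance
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:>2}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Typically the degree-1 model misses the shape of the curve entirely (high bias), while the degree-15 model chases the noise and does much worse on the test points than on the training points (high variance).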
That is, until today, of course, because that is exactly what we are doing in this post. Let's continue with some definitions.