Changing the function/model impacts the bias and variance.
The bias refers to the model’s ability, on average, to closely predict the response variable in the training dataset.
The variance refers to the model’s stability, i.e., how much its predictions would change if we had trained on a different training dataset.
There is typically a tradeoff between bias and variance:
A very flexible model will result in lower bias on the training data, but will typically have higher variance across different training datasets.
A very inflexible model will result in higher bias on the training data, but will typically have lower variance across different training datasets.
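The tradeoff above can be illustrated with a small simulation (a sketch, not part of the course materials; the data-generating function sin(2πx), the noise level, and the polynomial degrees are illustrative choices). We repeatedly draw training sets, fit an inflexible model (degree-1 polynomial) and a flexible one (degree-9 polynomial), and compare the average training error (a proxy for bias) with the variance of the prediction at a fixed point across training sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(degree, n_datasets=200, n=30, noise=0.3):
    """Fit a polynomial of the given degree to many simulated training
    sets drawn from y = sin(2*pi*x) + noise. Return the mean training
    MSE and the variance of the prediction at x = 0.5 across datasets."""
    preds, train_errs = [], []
    for _ in range(n_datasets):
        x = rng.uniform(0, 1, n)
        y = np.sin(2 * np.pi * x) + rng.normal(0, noise, n)
        coefs = np.polyfit(x, y, degree)          # least-squares fit
        yhat = np.polyval(coefs, x)               # predictions on the training set
        train_errs.append(np.mean((y - yhat) ** 2))
        preds.append(np.polyval(coefs, 0.5))      # prediction at a fixed test point
    return np.mean(train_errs), np.var(preds)

for degree in (1, 9):
    err, var = simulate(degree)
    print(f"degree {degree}: mean training MSE = {err:.3f}, "
          f"prediction variance = {var:.4f}")
```

The flexible degree-9 fit achieves a much lower training error than the degree-1 fit, but its prediction at x = 0.5 varies more from one training set to the next, matching the two bullets above.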
IEOR 4650E: Business Analytics; Columbia University