The process of adjusting model parameters to minimize a loss function.
In deep learning, optimization refers to the iterative procedure of computing gradients of the loss with respect to the parameters and updating those parameters to reduce the loss. The optimizer determines how gradient information translates into parameter updates; this choice strongly affects convergence speed, training stability, and generalization.
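A minimal sketch of this loop, using plain NumPy gradient descent on a toy least-squares problem (the data, learning rate, and step count here are illustrative assumptions, not from the source): each iteration computes the gradient of the loss with respect to the parameters and applies the vanilla update theta <- theta - lr * grad.

```python
import numpy as np

# Hypothetical toy problem: fit y = w*x + b by minimizing mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0  # parameters to optimize
lr = 0.1         # learning rate (assumed step size, chosen for illustration)

for step in range(200):
    y_hat = w * x + b
    # Gradients of the MSE loss with respect to w and b.
    grad_w = 2.0 * np.mean((y_hat - y) * x)
    grad_b = 2.0 * np.mean(y_hat - y)
    # Vanilla gradient descent update: theta <- theta - lr * grad.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges toward the true values 3.0 and 1.0
```

Swapping the two update lines for a different rule (e.g., adding a momentum buffer, or Adam's per-parameter adaptive steps) is exactly what distinguishes one optimizer from another; the gradient computation itself stays the same.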