Automatic Differentiation (AutoDiff): A Brief Intro with Examples | by Ebrahim Pichka | Oct 2024

An introduction to the mechanics of AutoDiff, exploring its mathematical principles, implementation approaches, and its use in today's most widely used frameworks

Photo by Bozhin Karaivanov on Unsplash

At the heart of machine learning lies the optimization of loss/objective functions. This optimization process relies heavily on computing gradients of these functions with respect to the model parameters. As Baydin et al. (2018) explain in their comprehensive survey [1], these gradients drive the iterative updates in optimization algorithms such as stochastic gradient descent (SGD):

θₜ₊₁ = θₜ − α ∇_θ L(θₜ)

where:

  • θₜ represents the model parameters at step t
  • α is the learning rate
  • ∇_θ L(θₜ) denotes the gradient of the loss function L with respect to the parameters θ

This simple update rule belies the complexity of computing gradients in deep neural networks with millions or even billions of parameters.
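
To make the update rule concrete, here is a minimal sketch of a single SGD step in PyTorch, where automatic differentiation (the subject of this article) supplies ∇_θ L(θₜ). The toy linear model, random data, and learning rate are illustrative assumptions, not taken from the article:

```python
import torch

# Toy setup (hypothetical example): a linear model with parameters θ,
# random inputs x and targets y, and a fixed learning rate α.
theta = torch.randn(3, requires_grad=True)  # model parameters θ
x = torch.randn(10, 3)                      # dummy inputs
y = torch.randn(10)                         # dummy targets
alpha = 0.1                                 # learning rate α

# Loss L(θ): mean squared error of the linear model's predictions
loss = ((x @ theta - y) ** 2).mean()

# AutoDiff computes ∇_θ L(θₜ) and stores it in theta.grad
loss.backward()

# One SGD step: θₜ₊₁ = θₜ − α ∇_θ L(θₜ)
with torch.no_grad():
    theta -= alpha * theta.grad
    theta.grad.zero_()  # clear the gradient before the next iteration
```

In practice this loop is wrapped by an optimizer (e.g. `torch.optim.SGD`), but the underlying computation is exactly the update rule above, with the gradient produced automatically rather than derived by hand.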