gradient

The gradient is a vector-valued function, as opposed to the ordinary derivative of a single-variable function, which is scalar-valued.

In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system.
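
To make the row/column convention concrete (standard notation, consistent with the Wikipedia article linked below):

```latex
\nabla f(x) =
\begin{pmatrix}
  \frac{\partial f}{\partial x_1}(x) \\
  \vdots \\
  \frac{\partial f}{\partial x_n}(x)
\end{pmatrix}
\quad\text{(column vector)},
\qquad
\nabla f(x)^{\mathsf T} =
\begin{pmatrix}
  \frac{\partial f}{\partial x_1}(x) & \cdots & \frac{\partial f}{\partial x_n}(x)
\end{pmatrix}
\quad\text{(row vector)}.
```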

  • multi-variable function
    • scalar-valued function: Rn -> R; the gradient is a vector, representable as a row matrix or a column matrix.
      f(x1, x2 ... xn)

    • vector-valued function: Rn -> Rm; the derivative is an mxn matrix, the Jacobian matrix (see the sketch after this list).
      f1(x1, x2 ... xn)
      f2(x1, x2 ... xn)
      ...
      fm(x1, x2 ... xn)
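
A minimal numerical sketch of both cases (the example functions are assumptions chosen here for illustration, not from the source): central finite differences approximate the 1xn Jacobian of a scalar-valued f, i.e. its gradient as a row matrix, and the mxn Jacobian of a vector-valued f.

```python
import numpy as np

def finite_diff_jacobian(f, x, eps=1e-6):
    """Approximate the m x n Jacobian of f: R^n -> R^m at x by central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        # One column of the Jacobian per input coordinate.
        J[:, j] = (np.atleast_1d(f(x + step)) - np.atleast_1d(f(x - step))) / (2 * eps)
    return J

# Scalar-valued f: R^2 -> R; its Jacobian is a 1 x 2 row matrix (the gradient).
f_scalar = lambda x: x[0] ** 2 + 3 * x[1]
print(finite_diff_jacobian(f_scalar, [1.0, 2.0]))   # ~ [[2. 3.]]

# Vector-valued f: R^2 -> R^3; its Jacobian is a 3 x 2 matrix.
f_vector = lambda x: np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])
print(finite_diff_jacobian(f_vector, [1.0, 2.0]))   # ~ [[2, 1], [cos(1), 0], [0, 4]]
```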

https://mathinsight.org/derivative_matrix
https://en.wikipedia.org/wiki/Gradient

  • Linear approximation to a function
    The gradient of a function f from the Euclidean space Rn to R at any particular point x0 in Rn characterizes the best linear approximation to f at x0:
    f(x) ≈ f(x0) + (∇f)(x0) ⋅ (x − x0)
    for x close to x0, where (∇f)(x0) is the gradient of f computed at x0,
    and the dot denotes the dot product on Rn.
    This equation consists of the first two terms of the multivariable Taylor series expansion of f at x0.
    The approximation is valid only where f is differentiable, and only for points very close to x0.
    Geometrically, it describes the tangent line (or tangent plane) to the graph of f at that point.
    This extends to the multivariable case where f is vector-valued; see the sketch below.
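
A quick numerical check of this approximation (a sketch; the example function f and the point x0 are assumptions chosen here, not from the source):

```python
import numpy as np

# Example scalar-valued f: R^2 -> R with its analytic gradient (chosen for illustration).
f = lambda x: x[0] ** 2 + np.sin(x[1])
grad_f = lambda x: np.array([2 * x[0], np.cos(x[1])])

x0 = np.array([1.0, 0.5])
x = x0 + np.array([0.01, -0.02])        # a point close to x0

# f(x) ≈ f(x0) + (∇f)(x0) ⋅ (x − x0)
approx = f(x0) + grad_f(x0) @ (x - x0)
print(f(x), approx)   # ~1.48188 vs ~1.48187; the error shrinks as O(||x - x0||^2)
```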

https://mathinsight.org/linear_approximation_multivariable
