Finite difference approximations are methods for approximating the derivatives of a function, useful when the derivatives are difficult to obtain analytically. For example, a simple approximation to the first derivative of a function is:
$$ f^\prime (x) \approx \frac{f(x+h)-f(x-h)}{2 h}. $$In the limit as $h$ goes to 0, this quotient converges to the derivative; for finite $h$ it is called the central finite difference approximation, and its error is of order $h^2$.
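As a quick numerical check (a minimal sketch; the function name `central_diff` and the choice of test function are illustrative, not from the text), the central difference can be compared against a known derivative:

```python
import math

def central_diff(f, x, h=1e-5):
    """Central finite difference approximation to f'(x); error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of sin is cos, so the approximation should be close to cos(1).
approx = central_diff(math.sin, 1.0)
error = abs(approx - math.cos(1.0))
```

With $h = 10^{-5}$ the truncation error is on the order of $h^2 \approx 10^{-10}$; taking $h$ much smaller eventually makes floating-point cancellation dominate instead.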
One-sided finite differences can also be used. For example, the first-order forward difference is:
$$ f^\prime (x) \approx \frac{f(x+h)-f(x)}{h}, $$whose error is of order $h$. Applying a first difference twice yields an approximation to the second derivative:
$$ f^{\prime \prime} (x) \approx \frac{f(x+h)-2 f(x) + f(x-h)}{h^2}. $$Later on, when we use finite difference methods to approximate partial differential equations, we can use one-sided approximations to the second derivative near boundaries:
$$ f^{\prime \prime} (x) \approx \frac{f(x+2h)-2 f(x+h) + f(x)}{h^2}. $$Finite difference methods are very useful for solving PDEs numerically. For example, let us consider a Black-Scholes-type pricing equation (which can be transformed into the heat equation from physics by a change of variables):
$$ f_t + (r-y)S f_S + \frac{1}{2} \sigma^2 S^2 f_{SS} = (r-y)f. $$Subscripts denote partial derivatives; here $S$ is the asset price, $r$ the interest rate, $y$ the dividend yield, and $\sigma$ the volatility.
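Before discretizing the PDE, the two second-derivative stencils above can be sanity-checked on a smooth function (an illustrative sketch; the function names are made up for this example):

```python
import math

def second_central(f, x, h):
    """Central second difference; error is O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def second_forward(f, x, h):
    """One-sided (forward) second difference, usable at a left boundary; error is O(h)."""
    return (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h**2

# (sin)'' = -sin, so both stencils should be close to -sin(1) at x = 1.
x, h = 1.0, 1e-3
err_central = abs(second_central(math.sin, x, h) + math.sin(x))
err_forward = abs(second_forward(math.sin, x, h) + math.sin(x))
```

The one-sided stencil is noticeably less accurate for the same $h$, which is the usual price of avoiding grid points outside the boundary.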
Typically, there is no analytic solution to this equation unless we impose special conditions. To solve it with finite differences, note that since $f$ is a function of $(S,t)$, we can represent it on a discretized 2-D grid, with one axis being $S$ and the other being $t$. We also typically know the terminal condition at $t=T$. One can then work backwards to obtain the values of $f$ at $t=0$, using finite differences to approximate the partial derivatives.
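As a sketch of this backward-stepping procedure, the following explicit scheme prices a European call using the equation above with $y=0$ (so that it reduces to the standard Black-Scholes equation, which has a known closed form to compare against). All function names, grid sizes, and parameter values here are illustrative assumptions, not from the original text:

```python
import math

def price_call_explicit(S0, K, r, sigma, T, S_max=300.0, M=150, N=2000):
    """Step the pricing PDE (with y = 0) backward from t = T to t = 0 on an
    (S, t) grid, using central differences in S and an explicit step in t."""
    dS, dt = S_max / M, T / N
    # Terminal condition at t = T: the call payoff max(S - K, 0).
    f = [max(i * dS - K, 0.0) for i in range(M + 1)]
    for n in range(N):
        new = f[:]
        for i in range(1, M):
            S = i * dS
            f_S = (f[i + 1] - f[i - 1]) / (2 * dS)            # central first difference
            f_SS = (f[i + 1] - 2 * f[i] + f[i - 1]) / dS**2   # central second difference
            f_t = r * f[i] - r * S * f_S - 0.5 * sigma**2 * S**2 * f_SS
            new[i] = f[i] - dt * f_t                          # step backward in time
        t = T - (n + 1) * dt
        new[0] = 0.0                                  # a call is worthless at S = 0
        new[M] = S_max - K * math.exp(-r * (T - t))   # deep in-the-money asymptote
        f = new
    i = int(S0 / dS)                                  # linear interpolation at S0
    w = S0 / dS - i
    return (1 - w) * f[i] + w * f[i + 1]
```

Explicit stepping is only conditionally stable: $\Delta t$ must be small relative to $\Delta S^2 / (\sigma^2 S_{\max}^2)$, which is why `N` is large here relative to `M`.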
There are many schemes based on the finite difference method. A popular choice is the Crank-Nicolson scheme, which is unconditionally stable and second-order accurate in both time and space.