Employs finite differences to calculate the gradient and Hessian matrix of a two-dimensional function, then uses the Newton-Raphson method to find a minimum. This function takes as input:
- function Function. The function to minimise
- float x0. Starting point (x coordinate)
- float y0. Starting point (y coordinate)
- float hx. Finite difference (to calculate derivatives along x coordinate). Smaller values provide more precise derivatives
- float hy. Finite difference (to calculate derivatives along y coordinate). Smaller values provide more precise derivatives
- int Niter. Maximum number of iterations for the Newton-Raphson algorithm
- float zero_level. The iterations stop early when the norm of the gradient falls below this threshold.
It returns the location of the minimum as a two-element vector, c(xmin, ymin).
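
Below is a minimal sketch in R of how such a routine could work, using central differences for both the gradient and the Hessian. The name newton_raphson_2d and all internal details are assumptions based on the description above, not the actual implementation:

```r
# Sketch of a 2D Newton-Raphson minimiser with central finite differences.
# newton_raphson_2d is an illustrative name, not the documented function itself.
newton_raphson_2d <- function(Function, x0, y0, hx, hy, Niter, zero_level) {
  p <- c(x0, y0)
  for (i in seq_len(Niter)) {
    # Central-difference gradient
    gx <- (Function(p[1] + hx, p[2]) - Function(p[1] - hx, p[2])) / (2 * hx)
    gy <- (Function(p[1], p[2] + hy) - Function(p[1], p[2] - hy)) / (2 * hy)
    g <- c(gx, gy)
    if (sqrt(sum(g^2)) < zero_level) break  # gradient small enough: stop early
    # Central-difference Hessian entries
    f0  <- Function(p[1], p[2])
    hxx <- (Function(p[1] + hx, p[2]) - 2 * f0 + Function(p[1] - hx, p[2])) / hx^2
    hyy <- (Function(p[1], p[2] + hy) - 2 * f0 + Function(p[1], p[2] - hy)) / hy^2
    hxy <- (Function(p[1] + hx, p[2] + hy) - Function(p[1] + hx, p[2] - hy) -
            Function(p[1] - hx, p[2] + hy) + Function(p[1] - hx, p[2] - hy)) /
           (4 * hx * hy)
    H <- matrix(c(hxx, hxy, hxy, hyy), nrow = 2)
    # Newton step: p <- p - H^{-1} * gradient
    p <- p - solve(H, g)
  }
  c(p[1], p[2])
}

# Usage example: the minimum of (x - 1)^2 + 2 * (y + 3)^2 is at (1, -3)
f <- function(x, y) (x - 1)^2 + 2 * (y + 3)^2
newton_raphson_2d(f, x0 = 0, y0 = 0, hx = 1e-4, hy = 1e-4,
                  Niter = 100, zero_level = 1e-10)
```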
Employs finite differences to calculate the gradient of an N-dimensional function and finds a minimum through gradient descent. This function takes as input:
- function Function. The function to minimise
- N*float initial_param_vector. Vector of starting values (dimension N)
- float h. Finite difference. Smaller values provide more precise derivatives
- int Niter. Maximum number of iterations for the gradient-descent algorithm
- float learn_rate. Step size at each iteration
- float zero_level. The iterations stop early when the norm of the gradient falls below this threshold.
It returns the minimum as a vector with N elements.
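
As with the previous routine, here is a minimal sketch in R of what this gradient-descent minimiser might look like. The name gradient_descent is illustrative, and Function is assumed to accept a single numeric vector of length N:

```r
# Sketch of an N-dimensional gradient-descent minimiser with central
# finite differences. gradient_descent is an illustrative name.
gradient_descent <- function(Function, initial_param_vector, h, Niter,
                             learn_rate, zero_level) {
  p <- initial_param_vector
  N <- length(p)
  for (i in seq_len(Niter)) {
    # Central-difference gradient, one coordinate at a time
    g <- numeric(N)
    for (j in seq_len(N)) {
      e <- numeric(N)
      e[j] <- h
      g[j] <- (Function(p + e) - Function(p - e)) / (2 * h)
    }
    if (sqrt(sum(g^2)) < zero_level) break  # gradient small enough: stop early
    p <- p - learn_rate * g  # step along the negative gradient
  }
  p
}

# Usage example: the minimum of sum((v - c(1, 2, 3))^2) is at c(1, 2, 3)
f <- function(v) sum((v - c(1, 2, 3))^2)
gradient_descent(f, initial_param_vector = c(0, 0, 0), h = 1e-4,
                 Niter = 1000, learn_rate = 0.1, zero_level = 1e-10)
```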