hopsy.Model.compute_log_likelihood_gradient

Model.compute_log_likelihood_gradient(self, x)
Deprecated since version 1.4: Use log_gradient() instead.

For some proposals, the gradient helps the chain converge faster, as long as computing the gradient is not too slow. If you cannot compute a useful or sufficiently fast gradient for your custom model, you can simply return a zero vector with the correct dimensionality (number of rows equal to the number of parameters), as in the sketch below.
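A minimal sketch of this zero-gradient fallback, assuming a custom model object that exposes log_gradient() (the replacement named in the deprecation note); the class name and the log_density() method shown here are illustrative assumptions, not part of the hopsy API documented on this page:

import numpy as np

class CustomModel:
    """Hypothetical custom model whose gradient is unavailable or too slow."""

    def __init__(self, dim):
        self.dim = dim  # number of parameters

    def log_density(self, x):
        # Unnormalized log-likelihood (placeholder: standard Gaussian).
        return -0.5 * float(x.T @ x)

    def log_gradient(self, x):
        # No useful gradient available: return a zero vector whose number
        # of rows equals the number of parameters, as described above.
        return np.zeros((self.dim, 1))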

Parameters:

x (numpy.ndarray[n, 1]) – Input vector

Returns:

The gradient of the (unnormalized) log-likelihood evaluated at x

Return type:

numpy.ndarray[n, 1]