# ab.losses

Network loss functions.

`aboleth.losses.elbo(likelihood, Y, N, KL, like_weights=None)`

Build the evidence lower bound loss for a neural net.

Parameters:

- **likelihood** (`tf.distributions.Distribution`) – the likelihood object that takes the neural network(s) as input. The `batch_shape` of this object should be `(n_samples, N, ...)`, where `n_samples` is the number of likelihood samples (defined by `ab.InputLayer`) and `N` is the number of observations (this can be `?` if you are using a placeholder and mini-batching).
- **Y** (`ndarray`, `Tensor`) – the targets, of shape `(N, tasks)`.
- **N** (`int`, `Tensor`) – the total size of the dataset (i.e. the number of observations).
- **KL** (`float`, `Tensor`) – the Kullback-Leibler divergence between the posterior and prior parameters of the model, $\text{KL}[q\|p]$.
- **like_weights** (`ndarray`, `Tensor`) – weights to apply to each observation in the expected log likelihood. This should be a tensor/array of shape `(N,)` (or a shape that prevents broadcasting).

Returns: **nelbo** (`Tensor`) – the loss function of the Bayesian neural net (the negative ELBO).
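The negative ELBO combines the expected log likelihood of a mini-batch, rescaled by `N / batch_size` to represent the full dataset, with the KL penalty. The following is a minimal NumPy sketch of that arithmetic, not the library's implementation: the `nelbo` helper and the Gaussian likelihood are hypothetical stand-ins, and the exact reduction over samples in `aboleth` may differ.

```python
import numpy as np


def gaussian_log_prob(y, mu, sigma=1.0):
    # Element-wise Gaussian log density log N(y | mu, sigma^2).
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2


def nelbo(net_samples, y, N, KL, like_weights=None):
    # net_samples: (n_samples, M, tasks) draws of the network output for a
    # mini-batch of M observations; y: (M, tasks) targets.
    log_like = gaussian_log_prob(y[None, ...], net_samples)  # (n_samples, M, tasks)
    if like_weights is not None:
        # Weight each observation's contribution to the log likelihood.
        log_like = log_like * like_weights[None, :, None]
    # Sum over the batch, average over the likelihood samples.
    ELL = log_like.sum(axis=(1, 2)).mean()
    M = y.shape[0]
    # Rescale the batch log likelihood to the full dataset, then negate.
    return -(N / M) * ELL + KL


# Toy usage: 3 likelihood samples, a batch of 4 observations, one task.
rng = np.random.default_rng(0)
y = rng.normal(size=(4, 1))
f = y[None, ...] + 0.1 * rng.normal(size=(3, 4, 1))
loss = nelbo(f, y, N=100, KL=2.5)
```

Note how the KL term enters additively and unscaled: it is a property of the model's parameters, not of the mini-batch, so only the likelihood term is rescaled.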
`aboleth.losses.max_posterior(likelihood, Y, regulariser, like_weights=None)`

Build the maximum a posteriori (MAP) loss for a neural net.

Parameters:

- **likelihood** (`tf.distributions.Distribution`) – the likelihood object that takes the neural network(s) as input. The `batch_shape` of this object should be `(n_samples, N, ...)`, where `n_samples` is the number of likelihood samples (defined by `ab.InputLayer`) and `N` is the number of observations (this can be `?` if you are using a placeholder and mini-batching).
- **Y** (`ndarray`, `Tensor`) – the targets, of shape `(N, tasks)`.
- **regulariser** (`float`, `Tensor`) – the regulariser on the parameters of the model, which penalises model complexity.
- **like_weights** (`ndarray`, `Tensor`) – weights to apply to each observation in the expected log likelihood. This should be a tensor/array of shape `(N,)` (or a shape that prevents broadcasting).

Returns: **map** (`Tensor`) – the loss function of the MAP neural net.
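Compared with the ELBO, the MAP loss swaps the KL divergence for a plain regularisation penalty on the parameters: it is the negative (optionally weighted) log likelihood plus the regulariser, i.e. a negative log posterior up to a constant. A hypothetical NumPy sketch of that objective (`map_loss` is an illustrative name, and the normalisation in `aboleth` itself may differ):

```python
import numpy as np


def map_loss(log_like, regulariser, like_weights=None):
    # log_like: (n_samples, M, tasks) per-observation log-likelihood values
    # evaluated at the network draws; regulariser: scalar complexity penalty.
    if like_weights is not None:
        log_like = log_like * like_weights[None, :, None]
    # Sum over the batch, average over the likelihood samples.
    ELL = log_like.sum(axis=(1, 2)).mean()
    # Negative log posterior (up to an additive constant).
    return -ELL + regulariser


# Toy usage: 3 samples, a batch of 4 observations, one task.
rng = np.random.default_rng(1)
ll = -np.abs(rng.normal(size=(3, 4, 1)))
loss = map_loss(ll, regulariser=0.5)
```

A larger regulariser raises the loss directly, trading goodness of fit against model complexity, which is the role the KL term plays in the ELBO.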