ab.losses¶
Network loss functions.

aboleth.losses.elbo(log_likelihood, KL, N)¶
Build the evidence lower bound (ELBO) loss for a neural net.
Parameters:
log_likelihood (Tensor) – the log-likelihood Tensor that takes neural network(s) and targets as an input. We recommend using a tf.distributions object's log_prob() method to obtain this tensor. The shape of this Tensor should be (n_samples, N, ...), where n_samples is the number of log-likelihood samples (defined by ab.InputLayer) and N is the number of observations (can be ? if you are using a placeholder and mini-batching). These likelihoods can also be weighted, for example to adjust for class imbalance; this weighting is left up to the user.
KL (float, Tensor) – the Kullback-Leibler divergence between the posterior and prior parameters of the model (\(\text{KL}[q\|p]\)).
N (int, Tensor) – the total size of the dataset (i.e. the number of observations).
Returns: nelbo – the loss function of the Bayesian neural net (negative ELBO).
Return type: Tensor
Example
This is how we would typically generate a likelihood for this objective,
noise = ab.pos_variable(1.0)
likelihood = tf.distributions.Normal(loc=NN, scale=noise)
log_likelihood = likelihood.log_prob(Y)
where NN is our neural network, and Y are our targets.
Note
The way tf.distributions.Bernoulli and tf.distributions.Categorical are implemented is a little confusing… it is worth noting that you should use a target array, Y, of shape (N, 1) of ints with the Bernoulli likelihood, and a target array of shape (N,) of ints with the Categorical likelihood.
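To make the scaling of the arguments concrete, here is a small NumPy sketch of the negative-ELBO computation this loss performs: average over the likelihood samples, sum over the mini-batch, re-scale by N, and subtract the KL term. The function name and the exact reduction order are illustrative assumptions, not aboleth's verbatim TensorFlow implementation.

```python
import numpy as np

def neg_elbo(log_likelihood, KL, N):
    """Illustrative negative ELBO; log_likelihood has shape (n_samples, batch_size)."""
    n_samples, batch_size = log_likelihood.shape[:2]
    # Monte Carlo average over the likelihood samples, summed over the batch...
    ell = log_likelihood.sum() / n_samples
    # ...re-scaled from the mini-batch to the full dataset of N observations.
    ell *= N / batch_size
    # Negative ELBO = -(expected log-likelihood - KL divergence)
    return -(ell - KL)

# 3 likelihood samples, mini-batch of 10, dataset of N=100 observations:
log_like = np.full((3, 10), -0.5)
loss = neg_elbo(log_like, KL=2.0, N=100)
```

Minimising this quantity maximises the expected log-likelihood while the KL term penalises posteriors that stray from the prior.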

aboleth.losses.max_posterior(log_likelihood, regulariser)¶
Build the maximum a posteriori (MAP) loss for a neural net.
Parameters:
log_likelihood (Tensor) – the log-likelihood Tensor that takes neural network(s) and targets as an input. We recommend using a tf.distributions object's log_prob() method to obtain this tensor. The shape of this Tensor should be (n_samples, N, ...), where n_samples is the number of log-likelihood samples (defined by ab.InputLayer) and N is the number of observations (can be ? if you are using a placeholder and mini-batching). These likelihoods can also be weighted, for example to adjust for class imbalance; this weighting is left up to the user.
regulariser (float, Tensor) – the regulariser on the parameters of the model to penalise model complexity.
Returns: map – the loss function of the MAP neural net.
Return type: Tensor
Example
This is how we would typically generate a likelihood for this objective,
noise = ab.pos_variable(1.0)
likelihood = tf.distributions.Normal(loc=NN, scale=noise)
log_likelihood = likelihood.log_prob(Y)
where NN is our neural network, and Y are our targets.
Note
The way tf.distributions.Bernoulli and tf.distributions.Categorical are implemented is a little confusing… it is worth noting that you should use a target array, Y, of shape (N, 1) of ints with the Bernoulli likelihood, and a target array of shape (N,) of ints with the Categorical likelihood.
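For comparison with the ELBO, here is a NumPy sketch of the MAP objective: the (sample-averaged) log-likelihood plus a complexity penalty, with no KL term and no dataset re-scaling. This is a simplified illustration under those assumptions, not aboleth's verbatim implementation.

```python
import numpy as np

def map_loss(log_likelihood, regulariser):
    """Illustrative MAP loss; log_likelihood has shape (n_samples, batch_size)."""
    n_samples = log_likelihood.shape[0]
    # Average over likelihood samples, sum over the observations...
    ell = log_likelihood.sum() / n_samples
    # ...and penalise model complexity with the regulariser.
    return -ell + regulariser

# 3 likelihood samples, mini-batch of 10 observations:
log_like = np.full((3, 10), -0.5)
loss = map_loss(log_like, regulariser=1.5)
```

Unlike elbo, this objective yields point estimates of the model parameters rather than a posterior over them.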