ceml.backend.tensorflow

ceml.backend.tensorflow.costfunctions

class ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf(**kwds)

Bases: ceml.costfunctions.costfunctions.CostFunctionDifferentiable

Base class of differentiable cost functions implemented in tensorflow.

grad()

Warning

Do not use this method!

Use tf.GradientTape for computing the gradient instead.

Raises

NotImplementedError
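
For reference, a minimal sketch of obtaining a gradient with tf.GradientTape, as recommended above; the quadratic value below is a stand-in for any differentiable cost:

    import tensorflow as tf

    x = tf.Variable([1.0, 2.0, 3.0])
    with tf.GradientTape() as tape:
        value = tf.reduce_sum(tf.square(x))  # stand-in for cost(x)
    grad = tape.gradient(value, x)           # d(value)/dx = 2*x
    print(grad.numpy())                      # [2. 4. 6.]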

class ceml.backend.tensorflow.costfunctions.costfunctions.DummyCost(**kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

Dummy cost function - always returns zero.

score_impl(x)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input x.

Note

All derived classes must implement this method.
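
A hedged usage sketch (assuming the package re-exports DummyCost under ceml.backend.tensorflow.costfunctions, as the docs do for RegularizedCost below): the dummy cost can serve as a no-op term, e.g. when only one part of a regularized cost should contribute.

    import tensorflow as tf
    from ceml.backend.tensorflow.costfunctions import DummyCost

    cost = DummyCost()
    print(cost.score_impl(tf.constant([1.0, 2.0])))  # always zero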

class ceml.backend.tensorflow.costfunctions.costfunctions.L1Cost(x_orig, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

L1 cost function.

score_impl(x)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input x.

Note

All derived classes must implement this method.
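
An L1 cost typically scores the sum of absolute deviations from x_orig, i.e. ||x - x_orig||_1; a sketch under that assumption:

    import tensorflow as tf

    x_orig = tf.constant([1.0, 2.0])
    x = tf.constant([1.5, 0.0])
    l1 = tf.reduce_sum(tf.abs(x - x_orig))  # |0.5| + |-2.0| = 2.5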

class ceml.backend.tensorflow.costfunctions.costfunctions.L2Cost(x_orig, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

L2 cost function.

score_impl(x)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input x.

Note

All derived classes must implement this method.
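
Analogously, an L2 cost scores the Euclidean distance to x_orig; whether the squared form or the square root is used is an assumption here, and the sketch uses ||x - x_orig||_2^2:

    import tensorflow as tf

    x_orig = tf.constant([1.0, 2.0])
    x = tf.constant([1.5, 0.0])
    l2 = tf.reduce_sum(tf.square(x - x_orig))  # 0.25 + 4.0 = 4.25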

class ceml.backend.tensorflow.costfunctions.costfunctions.LMadCost(x_orig, mad, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

Manhattan distance weighted feature-wise with the inverse median absolute deviation (MAD).

score_impl(x)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input x.

Note

All derived classes must implement this method.
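
The weighted Manhattan distance described above is sum_i |x_i - x_orig,i| / mad_i, where the per-feature MADs are precomputed (e.g. from the training data) and passed to the constructor; a sketch of that formula:

    import tensorflow as tf

    x_orig = tf.constant([1.0, 2.0])
    mad = tf.constant([0.5, 2.0])
    x = tf.constant([1.5, 0.0])
    cost = tf.reduce_sum(tf.abs(x - x_orig) / mad)  # 0.5/0.5 + 2.0/2.0 = 2.0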

class ceml.backend.tensorflow.costfunctions.costfunctions.NegLogLikelihoodCost(y_target, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

Negative-log-likelihood cost function.

score_impl(y)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input y.

Note

All derived classes must implement this method.
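
Assuming y holds predicted class probabilities and y_target is the index of the desired class, the score is -log(y[y_target]); a sketch under that assumption:

    import tensorflow as tf

    y = tf.constant([0.1, 0.7, 0.2])
    y_target = 1
    nll = -tf.math.log(y[y_target])  # -log(0.7) ≈ 0.3567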

class ceml.backend.tensorflow.costfunctions.costfunctions.RegularizedCost(penalize_input, penalize_output, C=1.0, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

Regularized cost function.

score_impl(x)

Applies the cost function to a given input.

Abstract method for applying the cost function to a given input x.

Note

All derived classes must implement this method.
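
A regularized cost combines an input penalty (proximity to the original input) with an output penalty (prediction loss), weighted by C. A hedged construction sketch using the signatures documented in this section; which of the two terms C scales is an assumption, so consult the source for the exact convention:

    import tensorflow as tf
    from ceml.backend.tensorflow.costfunctions import (
        L2Cost, NegLogLikelihoodCost, RegularizedCost)

    x_orig = tf.constant([1.0, 2.0])
    loss = RegularizedCost(
        penalize_input=L2Cost(x_orig),                     # stay close to x_orig
        penalize_output=NegLogLikelihoodCost(y_target=1),  # reach target class 1
        C=0.1)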

class ceml.backend.tensorflow.costfunctions.costfunctions.SquaredError(y_target, **kwds)

Bases: ceml.backend.tensorflow.costfunctions.costfunctions.CostFunctionDifferentiableTf

Squared error cost function.

score_impl(y)

Computes the loss - squared error.
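
The squared error scores the squared difference between the model output y and the target y_target; whether the sum or the mean is taken is an assumption, and the sketch uses the sum:

    import tensorflow as tf

    y_target = tf.constant([0.0, 1.0])
    y = tf.constant([0.2, 0.7])
    se = tf.reduce_sum(tf.square(y - y_target))  # 0.04 + 0.09 = 0.13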

ceml.backend.tensorflow.optimizer

class ceml.backend.tensorflow.optimizer.optimizer.TfOptimizer(**kwds)

Bases: ceml.optim.optimizer.Optimizer

Wrapper for a tensorflow optimization algorithm.

The TfOptimizer provides an interface for wrapping an arbitrary tensorflow optimization algorithm (see tf.train.Optimizer) and minimizing a given loss function.

init(model, loss, x, optim, tol=None, max_iter=1, grad_mask=None)

Initializes all parameters.

Parameters
  • model (callable or instance of tf.keras.Model) – The model that is to be used.

  • loss (instance of ceml.backend.tensorflow.costfunctions.RegularizedCost) – The loss that has to be minimized.

  • x (numpy.ndarray) – The starting value of x; usually this is the original input whose prediction has to be explained.

  • optim (instance of tf.train.Optimizer) – Optimizer for minimizing the loss.

  • tol (float, optional) –

    Tolerance for termination.

    The default is None.

  • max_iter (int, optional) –

    Maximum number of iterations.

    The default is 1.

  • grad_mask (numpy.array, optional) –

    Mask that is multiplied element-wise with the gradient - can be used to hold some dimensions constant.

    If grad_mask is None, no gradient mask is used.

    The default is None.

Raises

TypeError – If the type of loss or model is not correct.
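
End to end, the optimizer is wired up roughly as follows. This is a hypothetical sketch: the toy model stands in for any tf.keras.Model, tf.keras.optimizers.Adam is used in place of the TF1-era tf.train.Optimizer named above, and the call convention after init() (here: optimizer()) is an assumption.

    import numpy as np
    import tensorflow as tf
    from ceml.backend.tensorflow.costfunctions import (
        L2Cost, NegLogLikelihoodCost, RegularizedCost)
    from ceml.backend.tensorflow.optimizer import TfOptimizer

    model = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
    x_orig = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)

    loss = RegularizedCost(
        penalize_input=L2Cost(tf.constant(x_orig)),        # proximity term
        penalize_output=NegLogLikelihoodCost(y_target=0),  # prediction term
        C=0.5)

    optimizer = TfOptimizer()
    optimizer.init(model=model, loss=loss, x=x_orig,
                   optim=tf.keras.optimizers.Adam(learning_rate=0.01),
                   max_iter=100)
    x_cf = optimizer()  # assumed: calling the optimizer runs the minimization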