ceml.backend.torch

ceml.backend.torch.costfunctions

class ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch(**kwds)
    Bases: ceml.costfunctions.costfunctions.CostFunctionDifferentiable

    Base class of differentiable cost functions implemented in PyTorch.

    grad()
        Warning: Do not use this method! Instead, call .backward() on the
        output tensor; after that, the gradient of each variable myvar that
        is supposed to have a gradient can be accessed as myvar.grad.

        Raises: NotImplementedError
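
    A minimal sketch of the gradient workflow that this warning prescribes,
    in plain PyTorch (the variable and the cost below are illustrative, not
    part of ceml):

        import torch

        myvar = torch.tensor([1.0, 2.0], requires_grad=True)
        loss = (myvar ** 2).sum()   # some differentiable cost of myvar

        loss.backward()             # call .backward() on the output tensor ...
        print(myvar.grad)           # ... then read the gradient via myvar.grad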

class ceml.backend.torch.costfunctions.costfunctions.DummyCost(**kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Dummy cost function - always returns zero.

    score_impl(x)
        Computes the loss - always returns zero.

class ceml.backend.torch.costfunctions.costfunctions.L1Cost(x_orig, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    L1 cost function.

    score_impl(x)
        Computes the loss - the l1 norm.
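
    A minimal sketch of the quantity this cost scores, assuming it is the l1
    norm of the deviation from the original input x_orig (names follow the
    constructor):

        import torch

        x_orig = torch.tensor([1.0, 2.0, 3.0])
        x = torch.tensor([1.5, 2.0, 2.5], requires_grad=True)

        loss = torch.sum(torch.abs(x - x_orig))   # l1 norm of x - x_orig = 1.0
        loss.backward()   # x.grad is the sign pattern of x - x_orig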

class ceml.backend.torch.costfunctions.costfunctions.L2Cost(x_orig, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    L2 cost function.

    score_impl(x)
        Computes the loss - the l2 norm.
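
    A minimal sketch of the corresponding l2 score; whether the squared or
    the plain l2 norm is used internally is an assumption here (the squared
    form shown below is the more common choice in gradient-based
    optimization):

        import torch

        x_orig = torch.tensor([1.0, 2.0, 3.0])
        x = torch.tensor([1.5, 2.0, 2.5], requires_grad=True)

        loss = torch.sum((x - x_orig) ** 2)   # squared l2 norm of the deviation
        loss.backward()                       # x.grad == 2 * (x - x_orig)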

class ceml.backend.torch.costfunctions.costfunctions.LMadCost(x_orig, mad, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Manhattan distance weighted feature-wise with the inverse median absolute deviation (MAD).

    score_impl(x)
        Computes the loss.
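
    A minimal sketch of a MAD-weighted Manhattan distance as described above;
    the mad values are assumed to be precomputed from the data. Weighting by
    1 / mad makes deviations comparable across features of different scales:

        import torch

        x_orig = torch.tensor([1.0, 10.0, 100.0])
        mad = torch.tensor([0.5, 2.0, 25.0])   # feature-wise median absolute deviation
        x = torch.tensor([1.5, 12.0, 150.0], requires_grad=True)

        loss = torch.sum(torch.abs(x - x_orig) / mad)   # = 1 + 1 + 2 = 4
        loss.backward()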

class ceml.backend.torch.costfunctions.costfunctions.MinOfListCost(dist, samples, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Minimum distance to a list of data points.

    score_impl(x)
        Computes the loss.
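
    A hedged sketch of "minimum distance to a list of data points": x is
    scored by its distance to the closest of the given samples, with a
    squared euclidean distance standing in for the dist argument:

        import torch

        samples = [torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0])]
        x = torch.tensor([0.8, 0.9], requires_grad=True)

        dist = lambda a, b: torch.sum((a - b) ** 2)   # stand-in for dist
        loss = torch.min(torch.stack([dist(x, s) for s in samples]))
        loss.backward()   # gradient flows towards the closest sample only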

class ceml.backend.torch.costfunctions.costfunctions.NegLogLikelihoodCost(y_target, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Negative-log-likelihood cost function.

    score_impl(y)
        Computes the loss - the negative log-likelihood.
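
    A minimal sketch of a negative-log-likelihood score for a vector of
    predicted class probabilities y and a target class y_target (the
    constructor argument above); the exact form used internally is an
    assumption:

        import torch

        y_target = 2                                            # index of the target class
        y = torch.tensor([0.1, 0.2, 0.7], requires_grad=True)  # predicted probabilities

        loss = -torch.log(y[y_target])   # -log p(y_target)
        loss.backward()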

class ceml.backend.torch.costfunctions.costfunctions.RegularizedCost(penalize_input, penalize_output, C=1.0, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Regularized cost function.

    score_impl(x)
        Computes the loss.
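
    A hedged sketch of the structure the constructor suggests: a weighted
    combination of an input penalty and an output penalty. How the two terms
    are combined internally, and which of them C weights, is an assumption:

        import torch

        x_orig = torch.tensor([1.0, 2.0])
        x = torch.tensor([1.2, 1.8], requires_grad=True)
        C = 1.0

        penalize_input = lambda x: torch.sum(torch.abs(x - x_orig))  # e.g. closeness to x_orig
        penalize_output = lambda x: torch.sum((3.0 * x - 2.0) ** 2)  # stand-in prediction loss

        loss = C * penalize_input(x) + penalize_output(x)
        loss.backward()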

class ceml.backend.torch.costfunctions.costfunctions.SquaredError(y_target, **kwds)
    Bases: ceml.backend.torch.costfunctions.costfunctions.CostFunctionDifferentiableTorch

    Squared error cost function.

    score_impl(y)
        Computes the loss - the squared error.
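
    A minimal sketch of a squared-error score against the target y_target
    (the constructor argument above):

        import torch

        y_target = torch.tensor([0.0, 1.0])
        y = torch.tensor([0.2, 0.7], requires_grad=True)

        loss = torch.sum((y - y_target) ** 2)   # squared error
        loss.backward()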

ceml.backend.torch.optimizer

class ceml.backend.torch.optimizer.optimizer.TorchOptimizer(**kwds)
    Bases: ceml.optim.optimizer.Optimizer

    Wrapper for a PyTorch optimization algorithm.

    The TorchOptimizer provides an interface for wrapping an arbitrary
    PyTorch optimization algorithm (see torch.optim) and minimizing a given
    loss function.

    init(model, loss, x, optim, optim_args, lr_scheduler=None, lr_scheduler_args=None, tol=None, max_iter=1, grad_mask=None, device=torch.device('cpu'))
        Initializes all parameters.

        Parameters:
            model (instance of torch.nn.Module) – The model that is to be used.
            loss (instance of ceml.backend.torch.costfunctions.RegularizedCost) – The loss that has to be minimized.
            x (numpy.ndarray) – The starting value of x - usually this is the original input whose prediction has to be explained.
            optim (instance of torch.optim.Optimizer) – Optimizer for minimizing the loss.
            optim_args (dict) – Arguments of the optimization algorithm (e.g. learning rate, momentum, ...).
            lr_scheduler (optional) – Learning rate scheduler (see torch.optim.lr_scheduler). The default is None.
            lr_scheduler_args (dict, optional) – Arguments of the learning rate scheduler. The default is None.
            tol (float, optional) – Tolerance for termination. The default is 0.0.
            max_iter (int, optional) – Maximum number of iterations. The default is 1.
            grad_mask (numpy.array, optional) – Mask that is multiplied element-wise on top of the gradient - can be used to hold some dimensions constant. If grad_mask is None, no gradient mask is used. The default is None.
            device (torch.device) – Specifies the hardware device (e.g. cpu or gpu) we are working on. The default is torch.device("cpu").

        Raises:
            TypeError – If the type of loss or model is not correct.
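
    A pure-PyTorch sketch of the kind of loop this wrapper drives, inferred
    from the documented parameters (an assumption, not ceml's actual
    implementation), showing how optim, optim_args, grad_mask, tol and
    max_iter interact:

        import torch

        x_orig = torch.tensor([1.0, 2.0, 3.0])
        x = x_orig.clone().requires_grad_(True)

        grad_mask = torch.tensor([1.0, 0.0, 1.0])   # hold the second dimension constant
        optim = torch.optim.Adam([x], lr=0.05)      # optim with its optim_args
        tol, max_iter = 1e-6, 100

        prev_loss = float("inf")
        for _ in range(max_iter):                   # max_iter caps the number of steps
            optim.zero_grad()
            loss = torch.sum((x - torch.tensor([0.0, 5.0, 0.0])) ** 2)  # toy loss
            loss.backward()
            x.grad *= grad_mask                     # element-wise gradient mask
            optim.step()
            if abs(prev_loss - loss.item()) < tol:  # tolerance-based termination
                break
            prev_loss = loss.item()

        print(x.detach())   # the masked dimension stays at its start value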