Tensorflow & Keras

Since keras is a higher-level interface for tensorflow and is nowadays part of tensorflow itself, we do not need to distinguish between keras and tensorflow models when using ceml.

Computing a counterfactual of a tensorflow/keras model is done by using the ceml.tfkeras.counterfactual.generate_counterfactual() function.
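
In its simplest form, the call only needs the wrapped model, the input and the requested target prediction. A minimal sketch - here, model and x_orig stand for the objects built in the complete example below:

from ceml.tfkeras import generate_counterfactual

# model: instance of a class derived from ModelWithLoss (see below)
# x_orig: the input whose prediction we want to explain
# y_target: the requested prediction of the counterfactual
counterfactual = generate_counterfactual(model, x_orig, y_target=0)
print(counterfactual)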

Note

We have to run in eager execution mode when computing a counterfactual! Since tensorflow 2, eager execution is enabled by default.
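
Whether eager execution is active can be checked with the standard tensorflow API - a minimal sketch:

import tensorflow as tf

# True by default in tensorflow 2
print(tf.executing_eagerly())

# Under tensorflow 1.x, eager execution has to be enabled explicitly
# before any other tensorflow call:
# tf.compat.v1.enable_eager_execution()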

We must provide the tensorflow/keras model within a class that is derived from the ceml.model.model.ModelWithLoss class. In this class, we must override the predict function and the get_loss function, which returns the loss we want to use - a couple of differentiable loss functions are implemented in ceml.backend.tensorflow.costfunctions.

Besides the model, we must specify the input whose prediction we want to explain and the desired target prediction (the prediction of the counterfactual). In addition, we can restrict the features that may be used for computing the counterfactual, specify a regularization of the counterfactual, and choose the optimization algorithm used for computing the counterfactual - see the sketch below.
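
For instance, we could allow only the first and the third feature to change and penalize large changes with an l1 regularization. A sketch using the same parameter names as the complete example below:

features_whitelist = [0, 2]   # Indices of the features that may be changed

counterfactual = generate_counterfactual(model, x_orig, y_target=0,
                                         features_whitelist=features_whitelist,
                                         regularization="l1", C=0.01)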

A complete example of a softmax regression model using the negative log-likelihood loss is given below:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

from ceml.tfkeras import generate_counterfactual
from ceml.backend.tensorflow.costfunctions import NegLogLikelihoodCost
from ceml.model import ModelWithLoss


# Neural network - Softmax regression
class Model(ModelWithLoss):
    def __init__(self, input_size, num_classes):
        super(Model, self).__init__()

        self.model = tf.keras.models.Sequential([
            tf.keras.layers.Dense(num_classes, activation='softmax', input_shape=(input_size,))
        ])
    
    def fit(self, x_train, y_train, num_epochs=800):
        self.model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

        self.model.fit(x_train, y_train, epochs=num_epochs, verbose=False)

    def predict(self, x):
        return np.argmax(self.model(x), axis=1)
    
    def predict_proba(self, x):
        return self.model(x)
    
    def __call__(self, x):
        return self.predict(x)

    def get_loss(self, y_target, pred=None):
        return NegLogLikelihoodCost(input_to_output=self.predict_proba, y_target=y_target)    # Use our own predict_proba - tf.keras models do not provide one


if __name__ == "__main__":
    tf.random.set_seed(42)   # Fix random seed

    # Load data
    X, y = load_iris(return_X_y=True)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)

    # Create and fit model
    model = Model(4, 3)
    model.fit(X_train, y_train)

    # Evaluation
    y_pred = model.predict(X_test)
    print("Accuracy: {0}".format(accuracy_score(y_test, y_pred)))

    # Select a data point whose prediction has to be explained
    x_orig = X_test[1,:]
    print("Prediction on x: {0}".format(model.predict(np.array([x_orig]))))

    # Whitelist of features we can use/change when computing the counterfactual
    features_whitelist = None

    # Compute counterfactual
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=1.0)    # Init optimization algorithm
    optimizer_args = {"max_iter": 1000}

    print("\nCompute counterfactual ....") 
    print(generate_counterfactual(model, x_orig, y_target=0, features_whitelist=features_whitelist, regularization="l1", C=0.01, optimizer=optimizer, optimizer_args=optimizer_args))
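
Instead of a tensorflow optimizer, we can also select a gradient-free method by name - a sketch, assuming the string-based optimizer selection (e.g. "nelder-mead") that ceml uses for its other backends:

# Use the (gradient-free) Nelder-Mead method instead of gradient descent
print(generate_counterfactual(model, x_orig, y_target=0, regularization="l1", C=0.01, optimizer="nelder-mead"))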