In this tutorial, we show how to compute counterfactual explanations for positively-predicted instances. We use textual data (20newsgroups), where the goal is to predict whether a document is about a 'Medical' topic. The counterfactual explanation is a set of words such that, when they are removed from the document, the predicted topic is no longer 'Medical'.

Import the required libraries and load the data set.

In [105]:
import pandas as pd
import numpy as np
import sedc_algorithm  # module implementing the SEDC counterfactual explainer
from function_edc import fn_1  # helper function accompanying sedc_algorithm
In [106]:
%run sedc_algorithm.py # run the module that defines the SEDC_Explainer class used below
In [107]:
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import ParameterGrid
from sklearn.svm import SVC
import sklearn.feature_extraction
from sklearn.feature_extraction.text import TfidfVectorizer

For this tutorial, we will use the 20newsgroups data set. For simplicity, we use a binary target variable: medical topic (sci.med) vs non-medical topic.

In [108]:
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism',
 'comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x',
 'misc.forsale',
 'rec.autos',
 'rec.motorcycles',
 'rec.sport.baseball',
 'rec.sport.hockey',
 'sci.crypt',
 'sci.electronics',
 'sci.med',
 'sci.space',
 'soc.religion.christian',
 'talk.politics.guns',
 'talk.politics.mideast',
 'talk.politics.misc',
 'talk.religion.misc']
newsgroups = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'), categories=categories)

First, we preprocess the raw textual data into a structured format that can be used for modelling: we lowercase all words in the documents, remove stopwords, and lemmatize the text.

In [59]:
newsgroups_data = newsgroups.data

### Lowercase (normalization) ###
data_ = []
for story in newsgroups_data:
    data_.append(story.lower())

### Remove stopwords ###
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
stop_words = set(stopwords.words('english'))

newsgroups_dataset = []
for story in data_:
    text = ""
    for word in word_tokenize(story):
        if word not in stop_words:
            text += (" " + word)
    newsgroups_dataset.append(text)
In [100]:
# Keep a tokenized version WITHOUT stopword removal; it is used later to
# display the test documents in a readable form.
newsgroups_dataset_2 = []
for story in data_:
    text = ""
    for word in word_tokenize(story):
        text += (" " + word)
    newsgroups_dataset_2.append(text)
In [60]:
# Import lemmatizer modules.
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

newsgroups_lemma = []
for story in newsgroups_dataset:
    text = ""
    for word in word_tokenize(story):
        text += " " + lemmatizer.lemmatize(word)
    newsgroups_lemma.append(text)

We create a vectorizer object to transform the preprocessed raw data (stopwords removed, lowercased, lemmatized) into a term frequency-inverse document frequency (tf-idf) format.

In [61]:
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(min_df=2)  # ignore terms that occur in fewer than 2 documents
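
To see what this transformation produces, here is a toy example on three tiny documents (illustrative only; the tutorial's vectorizer is fitted on the actual training data further below):

In [ ]:
# Toy tf-idf example: 3 documents, 5 distinct terms ('the', 'cat', 'sat', 'dog', 'ran').
toy = TfidfVectorizer().fit_transform(["the cat sat", "the dog sat", "the cat ran"])
print(toy.shape)  # (3, 5): one row per document, one column per term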

Split the data into training, validation and test sets (60-20-20%): first hold out 20% as the test set, then hold out 25% of the remaining 80% (i.e., 20% of the total) as the validation set.

In [109]:
# random_state=0 makes the splits reproducible.
indices = np.arange(18846)  # the full 20newsgroups data set contains 18,846 documents
from sklearn.model_selection import train_test_split
indices_train, indices_test = train_test_split(indices, test_size=0.2, random_state=0)
indices_train, indices_val = train_test_split(indices_train, test_size=0.25, random_state=0)
In [63]:
# Make data splits from the preprocessed textual data #
newsgroups_lemma_train = list(newsgroups_lemma[i] for i in indices_train)
newsgroups_lemma_test = list(newsgroups_lemma[i] for i in indices_test)
newsgroups_lemma_val = list(newsgroups_lemma[i] for i in indices_val)

Fit the vectorizer on the training data, then transform the training, validation and test data with the fitted vectorizer.

In [110]:
x_train = vectorizer.fit_transform(newsgroups_lemma_train)
x_test = vectorizer.transform(newsgroups_lemma_test)
x_val = vectorizer.transform(newsgroups_lemma_val)

Extract the target variable (1 refers to a medical topic, 0 to another topic).

In [112]:
Y = newsgroups.target
Y = np.reshape(Y, (np.size(Y), 1))
# Topic: sci.med (label 13 in the 20newsgroups target encoding)
y = Y.copy()
for i in range(len(Y)):
    if Y[i] == 13:
        y[i] = 1
    else:
        y[i] = 0
In [113]:
y_train = y[indices_train]
y_test = y[indices_test]
y_val = y[indices_val]

We use a Support Vector Machine model with a linear kernel and fine-tune the regularization parameter C using a hold-out validation data set.

In [75]:
C = [10**(-3), 10**(-2), 10**(-1), 10**(0), 10**(1), 10**(2)]
p = np.sum(y_train)/np.size(y_train)
print("The balance of target in training subset is %f." % p)
# About 5% of the documents in the training data have a 'Medical' topic.

accuracy_vals = []
for c in C:
    SVC_model = SVC(C=c, kernel="linear", probability=True)
    SVC_model.fit(x_train, y_train)

    probs = SVC_model.decision_function(x_val)
    # Classify the top (p*100)% highest-scoring documents as positive.
    threshold_classifier_probs = np.percentile(probs, (100-(p*100)))
    predictions_probs = (probs >= threshold_classifier_probs)  # explicit, discrete predictions for the validation data instances

    accuracy_val = accuracy_score(y_val, np.array(predictions_probs))
    accuracy_vals.append(accuracy_val)
print("The finetuning process has ended...")

C_optimal_accuracy = C[np.argmax(accuracy_vals)]
SVC_best = SVC(C=C_optimal_accuracy, kernel="linear", probability=True)
SVC_best.fit(x_train, y_train)
The balance of target in training subset is 0.053507.
The finetuning process has ended...
Out[75]:
SVC(C=0.001, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto_deprecated',
    kernel='linear', max_iter=-1, probability=True, random_state=None,
    shrinking=True, tol=0.001, verbose=False)
In [150]:
probs = SVC_best.decision_function(x_test)
threshold_classifier_probs = np.percentile(probs, (100-(p*100)))
predictions_probs = (probs >= threshold_classifier_probs)  # explicit, discrete predictions for the test data instances

accuracy_test = accuracy_score(y_test, np.array(predictions_probs))
print("The accuracy of the model on the test data is %f" % accuracy_test)

indices_probs_pos = np.nonzero(predictions_probs)  # indices of the test documents that are positively predicted
The accuracy of the model on the test data is 0.979045
In [77]:
classification_model = SVC_best
feature_names = vectorizer.get_feature_names()

def classifier_fn(X):
    # Return the SVM's real-valued decision scores for the instances in X.
    return classification_model.decision_function(X)

Create an SEDC explainer object. By default, the SEDC algorithm stops looking for explanations when a first explanation is found, when a 5-minute time limit is exceeded, or when more than 50 iterations are required (see sedc_algorithm.py for more details). Only the active (nonzero) features are perturbed (set to zero) to evaluate the impact on the model's predicted output. In other words, only the words that actually occur in the document can become part of the counterfactual explanation of the model prediction.

In [82]:
explainer_SEDC = SEDC_Explainer(feature_names = feature_names, 
                               threshold_classifier = threshold_classifier_probs, 
                               classifier_fn = classifier_fn)
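
To illustrate the perturbation idea on its own (a minimal sketch using only objects defined above, not part of the SEDC search itself), we can zero out a single active word of a positively-predicted document and inspect how the decision score changes:

In [ ]:
# Sketch of one SEDC-style perturbation: set a single active (nonzero) word
# of a document to zero and compare the decision scores before and after.
instance = x_test[indices_probs_pos[0][0]].copy()  # a positively-predicted test document
word_idx = instance.nonzero()[1][0]                # column index of one active word
perturbed = instance.copy()
perturbed[0, word_idx] = 0                         # 'remove' the word from the document
print("Removing '%s' changes the score from %f to %f."
      % (feature_names[word_idx], classifier_fn(instance)[0], classifier_fn(perturbed)[0]))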

Show indices of positively-predicted test instances.

In [117]:
indices_probs_pos #all documents that have a predicted 'Medical' topic
Out[117]:
(array([  17,   21,   41,   73,  143,  161,  165,  183,  225,  228,  232,
         236,  267,  273,  298,  365,  418,  439,  475,  482,  506,  517,
         523,  552,  557,  567,  583,  586,  609,  632,  638,  643,  657,
         660,  662,  669,  682,  687,  694,  705,  744,  764,  772,  804,
         817,  834,  861,  893,  896,  897,  913,  948,  964, 1008, 1009,
        1043, 1045, 1071, 1074, 1147, 1151, 1156, 1186, 1198, 1216, 1222,
        1246, 1248, 1253, 1288, 1291, 1329, 1360, 1394, 1411, 1457, 1477,
        1496, 1538, 1539, 1549, 1577, 1624, 1625, 1628, 1633, 1698, 1735,
        1740, 1752, 1766, 1791, 1794, 1811, 1839, 1928, 1957, 1961, 1988,
        2004, 2020, 2046, 2092, 2098, 2107, 2126, 2139, 2141, 2143, 2146,
        2153, 2199, 2210, 2222, 2248, 2260, 2270, 2274, 2313, 2379, 2386,
        2389, 2390, 2426, 2434, 2480, 2492, 2501, 2504, 2522, 2535, 2554,
        2557, 2617, 2620, 2707, 2752, 2761, 2772, 2815, 2836, 2876, 2905,
        2931, 2944, 2967, 2981, 3016, 3025, 3055, 3058, 3080, 3086, 3116,
        3117, 3157, 3161, 3186, 3196, 3217, 3228, 3251, 3266, 3278, 3279,
        3282, 3290, 3292, 3315, 3319, 3320, 3368, 3371, 3389, 3395, 3404,
        3414, 3424, 3448, 3468, 3472, 3475, 3481, 3498, 3506, 3515, 3545,
        3549, 3553, 3569, 3579, 3624, 3644, 3645, 3684, 3702, 3716, 3737,
        3750, 3759, 3767, 3769], dtype=int64),)

Explain why the document with index = 73 is predicted as a 'Medical' topic by the model.

In [139]:
newsgroups_test = list(newsgroups_dataset_2[i] for i in indices_test)

The document looks as follows.

In [140]:
newsgroups_test[73]
Out[140]:
" -allergy medicine , huh ? is this just to get rid of the resultant migraine or whatever , or does it actually suppress allergic reactions ? ( i.e . like an antihistamine does ? ) as far as doctors over here are concerned , if you slip up and eat something you 're allergic to ( even if they wo n't test you to tell you what to avoid ) then tough ; if a _cheap_ medicine will alleviate your symptoms , then fine , otherwise you just suffer . one doctor did prescribe me imigran ( costs the nhs # 48 for 6 tablets ) after having to rehydrate me because i 'd been throwing up for four solid days and could n't even drink water - but i got taken off it again when i moved and had to change doctors . reasoning : they did not know what the side-effects were because it was new . ok , fine - but it has passed the safety tests to get on the prescription list , and anyway i was prepared to take the risk to have quality of life now . the only alternatives i have is to get it prescribed privately , which i can not afford , or to pay a private allergy specialist to test me and tell me what to avoid . i am fairly certain i am allergic to more than one chemical additive , as a lot of things i ca n't eat have nothing in common except things i know are safe , so testing myself is n't really an option ; there are too many permutations ."
In [135]:
index = 73
instance_idx = x_test[index]
explanation = explainer_SEDC.explanation(instance_idx)
Initialization is complete.

 Elapsed time 0 


 Iteration 1 

The difference is 0.002621
Index is 20.000000
Length of new_combinations is 1 features.
New combinations can be expanded
Threshold is 0.003974

 Elapsed time 0 


 Size combis to expand 174 


 Iteration 2 

The difference is 0.003974
Index is 37.000000
Length of new_combinations is 2 features.
New combinations can be expanded
Threshold is 0.004851

 Elapsed time 0 


 Size combis to expand 259 


 Iteration 3 

The difference is 0.004851
Index is 7.000000
Length of new_combinations is 3 features.
New combinations can be expanded
Threshold is 0.005633

 Elapsed time 1 


 Size combis to expand 343 


 Iteration 4 

The difference is 0.005633
Index is 34.000000
Length of new_combinations is 4 features.
New combinations can be expanded
Threshold is 0.006071

 Elapsed time 2 


 Size combis to expand 426 


 Iteration 5 

The difference is 0.006071
Index is 73.000000
Length of new_combinations is 5 features.
New combinations can be expanded
Threshold is 0.006428

 Elapsed time 2 


 Size combis to expand 508 


 Iteration 6 

The difference is 0.006428
Index is 1.000000
Length of new_combinations is 6 features.
New combinations can be expanded
Threshold is 0.006750

 Elapsed time 3 


 Size combis to expand 589 


 Iteration 7 

The difference is 0.006750
Index is 1.000000
Length of new_combinations is 7 features.
New combinations can be expanded
Threshold is 0.006994

 Elapsed time 4 


 Size combis to expand 669 


 Iteration 8 

The difference is 0.006994
Index is 15.000000
Length of new_combinations is 8 features.
New combinations can be expanded
Threshold is 0.007166

 Elapsed time 5 


 Size combis to expand 748 


 Iteration 9 

The difference is 0.007166
Index is 75.000000
Length of new_combinations is 9 features.
New combinations can be expanded
Threshold is 0.007330

 Elapsed time 7 


 Size combis to expand 826 


 Iteration 10 

The difference is 0.007330
Index is 52.000000
Length of new_combinations is 10 features.
New combinations can be expanded
Threshold is 0.007491

 Elapsed time 8 


 Size combis to expand 903 


 Iteration 11 

The difference is 0.007491
Index is 41.000000
Length of new_combinations is 11 features.
New combinations can be expanded
Threshold is 0.007647

 Elapsed time 10 


 Size combis to expand 979 


 Iteration 12 

The difference is 0.007647
Index is 58.000000
Length of new_combinations is 12 features.
New combinations can be expanded
Threshold is 0.007803

 Elapsed time 11 


 Size combis to expand 1054 


 Iteration 13 

The difference is 0.007803
Index is 58.000000
Length of new_combinations is 13 features.
New combinations can be expanded
Threshold is 0.007931

 Elapsed time 13 


 Size combis to expand 1128 


 Iteration 14 

The difference is 0.007931
Index is 38.000000
Length of new_combinations is 14 features.
New combinations can be expanded
Threshold is 0.008049

 Elapsed time 15 


 Size combis to expand 1201 


 Iteration 15 

The difference is 0.008049
Index is 5.000000
Length of new_combinations is 15 features.
New combinations can be expanded
Threshold is 0.008166

 Elapsed time 17 


 Size combis to expand 1273 


 Iteration 16 

The difference is 0.008166
Index is 64.000000
Length of new_combinations is 16 features.
New combinations can be expanded
Threshold is 0.008191

 Elapsed time 19 


 Size combis to expand 1344 


 Iteration 17 

The difference is 0.008191
Index is 45.000000
Length of new_combinations is 17 features.
New combination cannot be expanded

 Elapsed time 19 


 Size combis to expand 1344 

Iterations are done.

 Elapsed time 19 

The explanation contains 17 words out of the 88 active (featurized) words of this document that are used by the SVM model.

Show more information about the explanation(s):

- explanation[0] shows the explanation set(s),
- explanation[1] shows the number of active features of the instance to explain,
- explanation[2] shows the number of explanations found,
- explanation[3] shows the number of features in the smallest-sized explanation,
- explanation[4] shows the time elapsed in seconds to find the explanation,
- explanation[5] shows the predicted score change when removing the feature(s) in the smallest-sized explanation,
- explanation[6] shows the number of iterations that the algorithm needed.

In [142]:
explanation
Out[142]:
([['doctor',
   'allergic',
   'allergy',
   'medicine',
   'migraine',
   'symptom',
   'antihistamine',
   'eat',
   'additive',
   'avoid',
   'reaction',
   'risk',
   'prescribed',
   'prescription',
   'tablet',
   'water',
   'chemical']],
 88,
 26,
 17,
 19.83780312538147,
 [array([0.00826831])],
 17)
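
For readability, the tuple can be unpacked into named variables; the names below are our own, chosen to mirror the description above:

In [ ]:
# Unpack the explanation tuple (ordering follows the description given above).
(explanation_sets, n_active_features, n_explanations_found,
 size_smallest, time_elapsed, score_changes, n_iterations) = explanation
print("Smallest explanation: %d of the %d active words, found in %d iterations."
      % (size_smallest, n_active_features, n_iterations))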
In [143]:
print("IF the document did not contain the word(s) " + str(explanation[0][0]) + ", THEN the predicted topic would no longer be 'Medical'.")
IF the document did not contain the word(s) ['doctor', 'allergic', 'allergy', 'medicine', 'migraine', 'symptom', 'antihistamine', 'eat', 'additive', 'avoid', 'reaction', 'risk', 'prescribed', 'prescription', 'tablet', 'water', 'chemical'], THEN the predicted topic would no longer be 'Medical'.
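
As a sanity check (a sketch that uses only objects defined earlier in this notebook), we can remove these words from the tf-idf representation ourselves and verify that the decision score indeed drops below the classification threshold:

In [ ]:
# Zero out the tf-idf entries of the explanation words and verify that the
# perturbed document's score falls below the classification threshold.
perturbed = x_test[73].copy()
for word in explanation[0][0]:
    perturbed[0, vectorizer.vocabulary_[word]] = 0
print(classifier_fn(perturbed) < threshold_classifier_probs)  # expected: [ True]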

Explain why the document with index = 165 is predicted as a 'Medical' topic by the model.

The document looks as follows.

In [152]:
newsgroups_test[165]
Out[152]:
" the burden of proof rests upon those who claim the existence of this `` syndrome '' . to date , these claims are unsubstantiated by any available data . hopefully , as a scientist , you would take issue with anyone overstating their conclusions based upon their data . gee , i have many interesting and enlightening anecdotes about myself , my friends , and my family , but in the practice of medicine i expect and demand more rigorous rationales for basing therapy than `` aunt susie 's brother-in-law ... '' . anecdotal evidence may provide inspiration for a hypothesis , but rarely proves anything in a positive sense . and unlike mathematics , boolean logic rarely applies directly to medical issues , and so evidence of 'exceptions ' does not usually disprove but rather modifies current concepts of disease . i would characterize it not as 'abject disbelief ' but rather 'scientific outrage over vastly overstated conclusions ' . i have no problem with such an approach ; but this is not what is happening in the 'trenches ' of this diagnosis ."
In [148]:
index = 165
instance_idx = x_test[index]
explanation = explainer_SEDC.explanation(instance_idx)
Initialization is complete.

 Elapsed time 0 


 Iteration 1 

The difference is 0.000730
Index is 24.000000
Length of new_combinations is 1 features.
New combinations can be expanded
Threshold is 0.001289

 Elapsed time 0 


 Size combis to expand 144 


 Iteration 2 

The difference is 0.001289
Index is 5.000000
Length of new_combinations is 2 features.
New combinations can be expanded
Threshold is 0.001740

 Elapsed time 0 


 Size combis to expand 214 


 Iteration 3 

The difference is 0.001740
Index is 47.000000
Length of new_combinations is 3 features.
New combinations can be expanded
Threshold is 0.002149

 Elapsed time 1 


 Size combis to expand 283 


 Iteration 4 

The difference is 0.002149
Index is 66.000000
Length of new_combinations is 4 features.
New combinations can be expanded
Threshold is 0.002444

 Elapsed time 1 


 Size combis to expand 351 


 Iteration 5 

The difference is 0.002444
Index is 34.000000
Length of new_combinations is 5 features.
New combinations can be expanded
Threshold is 0.002546

 Elapsed time 2 


 Size combis to expand 418 


 Iteration 6 

The difference is 0.002546
Index is 11.000000
Length of new_combinations is 6 features.
New combination cannot be expanded

 Elapsed time 2 


 Size combis to expand 418 

Iterations are done.

 Elapsed time 2 

In [151]:
print("IF the document did not contain the word(s) " + str(explanation[0][0]) + ", THEN the predicted topic would no longer be 'Medical'.")
IF the document did not contain the word(s) ['disease', 'medical', 'medicine', 'syndrome', 'therapy', 'diagnosis'], THEN the predicted topic would no longer be 'Medical'.
In [153]:
explanation
Out[153]:
([['disease', 'medical', 'medicine', 'syndrome', 'therapy', 'diagnosis']],
 73,
 5,
 6,
 2.3969829082489014,
 [array([0.00273369])],
 6)
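
The same call can be applied to every positively-predicted test document, for example to inspect how many words the counterfactual explanations typically contain. A minimal sketch, limited to the first 10 documents to keep the runtime and the verbose output manageable:

In [ ]:
# Collect the size of the smallest-sized explanation for a handful of
# positively-predicted test documents.
explanation_sizes = []
for i in indices_probs_pos[0][:10]:
    expl = explainer_SEDC.explanation(x_test[i])
    explanation_sizes.append(len(expl[0][0]))  # number of words in the smallest explanation
print(explanation_sizes)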