import cv2
import numpy as np
import tensorflow as tf

IMAGE_PATH = './cat.jpg'
LAYER_NAME = 'block5_conv3'
CAT_CLASS_INDEX = 281

# Load the image and convert it to an array
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)

model = tf.keras.applications.vgg16.VGG16(weights='imagenet', include_top=True)

# Model mapping the input to the target conv layer's activations and the predictions
grad_model = tf.keras.models.Model([model.inputs],
                                   [model.get_layer(LAYER_NAME).output, model.output])

with tf.GradientTape() as tape:
    conv_outputs, predictions = grad_model(np.array([img]))
    loss = predictions[:, CAT_CLASS_INDEX]

# Drop the batch dimension
output = conv_outputs[0]
grads = tape.gradient(loss, conv_outputs)[0]

# Guided gradients: keep only positive activations and positive gradients
gate_f = tf.cast(output > 0, 'float32')
gate_r = tf.cast(grads > 0, 'float32')
guided_grads = gate_f * gate_r * grads

# One weight per feature map, averaged over the spatial dimensions
weights = tf.reduce_mean(guided_grads, axis=(0, 1)).numpy()

# Weighted sum of the feature maps
output = output.numpy()
cam = np.ones(output.shape[0:2], dtype=np.float32)
for i, w in enumerate(weights):
    cam += w * output[:, :, i]

# Upsample to the input resolution and normalize to [0, 1]
cam = cv2.resize(cam, (224, 224))
cam = np.maximum(cam, 0)
heatmap = (cam - cam.min()) / (cam.max() - cam.min())

# Overlay the colorized heatmap on the original image
cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET)
output_image = cv2.addWeighted(cv2.cvtColor(img.astype('uint8'), cv2.COLOR_RGB2BGR), 0.5, cam, 1, 0)
I have read your article “Interpretability of Deep Learning Models with Tensorflow 2.0” at https://www.sicara.ai/blog/2019-08-28-interpretability-deep-learning-tensorflow and implemented the example to understand how exactly it works.
my e-mail: firstname.lastname@example.org
Please find at the following link what I managed to do.
I feel a little bit lost with the results.
Once again thank you very much for your help :)
Thank you for the quick reply :)
I have several plots in the code. When you say to look for the green points (largest values), it's not clear to me which plot to look at. I'm a little bit confused.
First of all, thank you very much for sharing! Nevertheless, I have a question if you have time.
How would you retrieve the heat maps/activation maps for each different input of your network? Because if the same filters are used for all inputs, I feel like the heat map will be identical for every feature. (I am quite new to this field, so I apologize if anything I say is incorrect.)
Thanks in advance!
You may find attached a snapshot of the data (it is a synthetic dataset but here there are three features each of the same length under the form of time series and one time series target of specific length). You can see some specific events in time series 2 and 3 and I would like to know if the network's attention is focused on those events. (Additionally, there might be an event in time series 1 but not the case in that sample)
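On the shared-filters question, a small sketch may help: the filters are indeed the same for all inputs, but Grad-CAM is built from the *activations* and *gradients*, which are recomputed for every sample. A toy conv layer with deterministic `ones` weights (purely illustrative, not from the article) shows that the same filter produces different activation maps for different inputs:

```python
import tensorflow as tf

# Toy conv layer with fixed, deterministic weights: the filter is shared,
# but the activations depend entirely on the input sample.
conv = tf.keras.layers.Conv2D(1, 3, activation='relu',
                              kernel_initializer='ones',
                              bias_initializer='zeros')

a = conv(tf.ones((1, 8, 8, 1)))   # every activation is 3*3*1 = 9
b = conv(tf.zeros((1, 8, 8, 1)))  # every activation is 0

# Same filter, different inputs -> different activation maps,
# hence a different Grad-CAM heatmap per sample.
print(float(a[0, 0, 0, 0]), float(b[0, 0, 0, 0]))  # 9.0 0.0
```

So for your three time series, feeding each sample through `grad_model` yields its own `conv_outputs` and gradients, and therefore its own heatmap.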
And lastly, how does cv2 resize the output shape back to the image proportions?
I want to run your code with my dataset (images).
width = 224, height = 112, and my model architecture is MobileNetV2.
So I adapted your code for my dataset:
LAYER_NAME = 'block5_conv3' -> 'Conv_1' (the last convolution layer in MobileNetV2).
And I load a random image from my dataset.
After running the code, this error occurred.
Traceback (most recent call last):
Process finished with exit code 1
and full code
from __future__ import absolute_import
data_dir = "G:\\ATEC_AP\\KDH\\Fitness\\DB\\"
LAYER_NAME = 'Conv_1'
img = tf.keras.preprocessing.image.load_img(predict_dir + random_img_path, target_size=(input_height, input_width),
model = tf.keras.models.load_model(data_dir + "MobileNetV2_20_112_224_08_25_17_55" + ".h5")
with tf.GradientTape() as tape:
output = conv_outputs
gate_f = tf.cast(output > 0, 'float32')
weights = tf.reduce_mean(guided_grads, axis=(0, 1))
cam = np.ones(output.shape[0: 2], dtype=np.float32)
for i, w in enumerate(weights):
cam = cv2.resize(cam.numpy(), (input_width, input_height))
cam = cv2.applyColorMap(np.uint8(255*heatmap), cv2.COLORMAP_JET)
output_image = cv2.addWeighted(cv2.cvtColor(img.astype('uint8'), cv2.COLOR_RGB2BGR), 0.5, cam, 1, 0)
Please help me
Tensorflow Version is 2.1.0
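The traceback above is truncated, so this is only a guess, but a common pitfall with a non-square input is mixing up the `(height, width)` orders. Here is a shape-checking sketch, assuming a stock MobileNetV2 with untrained weights in place of your loaded `.h5` model (which should behave identically shape-wise at 112x224):

```python
import numpy as np
import tensorflow as tf

input_height, input_width = 112, 224

# Untrained MobileNetV2 purely for shape checking; a loaded .h5 model with
# the same input size has the same 'Conv_1' output shape.
model = tf.keras.applications.MobileNetV2(
    weights=None, input_shape=(input_height, input_width, 3))
grad_model = tf.keras.models.Model(
    [model.inputs], [model.get_layer('Conv_1').output, model.output])

img = np.random.rand(input_height, input_width, 3).astype(np.float32)
with tf.GradientTape() as tape:
    conv_outputs, predictions = grad_model(img[np.newaxis])
    loss = predictions[:, 0]

grads = tape.gradient(loss, conv_outputs)

# MobileNetV2 downsamples by a factor of 32: 112 -> 4 rows, 224 -> 7 cols
output = conv_outputs[0]
print(output.shape)  # (4, 7, 1280)
```

Note that `cv2.resize` then needs `dsize=(input_width, input_height)`, i.e. `(224, 112)`, as in your snippet; passing `(112, 224)` there would silently swap the axes.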
Firstly, thank you for the code.
So here is my model. I want to visualise the activation at the layer conv_64, but I am getting an error.
last_conv_layer_name = "conv_64"
classifier_layer_names = [
    "concatenate_17", "conv_64_2", "max_pool3", "conv_64_3", "conv_64_31",
    "concatenate_18", "conv_64_32", "max_pool4", "conv_64_4", "conv_64_41",
    "concatenate_19", "conv_64_42", "max_pool5", "conv_64_5", "max_pool6",
    "flatten", "dropout_3", "dense_64", "output_layer",
]
# Generate class activation heatmap
heatmap = make_gradcam_heatmap(
    img_array, model, last_conv_layer_name, classifier_layer_names
)
This is what I am doing, but I get the following error when running it. The problem arises when we concatenate two different layers, which confuses the classifier part of the Grad-CAM module...
Here is the minimal version for reproducing the error.
Looks like it will be tough when we use a concatenate layer and focus on an intermediate layer...
Since I have a series of layers with the same structure, I can use the deeper layers too.
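For what it's worth, one way around the concatenate problem is to skip the `classifier_layer_names` reconstruction entirely and build a single model that outputs both the target layer's activations and the predictions, as in the snippet at the top of the thread; Keras then handles the branching and concatenation itself. A minimal sketch with a toy branched model (layer names and sizes are illustrative, not your actual architecture):

```python
import tensorflow as tf

# Toy model with a concatenation, mimicking the structure described above
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu',
                           name='conv_64')(inputs)
y = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
merged = tf.keras.layers.Concatenate()([x, y])  # the branch point that breaks a flat layer list
pooled = tf.keras.layers.GlobalAveragePooling2D()(merged)
outputs = tf.keras.layers.Dense(10, activation='softmax')(pooled)
model = tf.keras.Model(inputs, outputs)

# Single model exposing both tensors: no classifier_layer_names needed,
# and the concatenation stays wired up correctly.
grad_model = tf.keras.Model([model.inputs],
                            [model.get_layer('conv_64').output, model.output])

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(tf.random.normal((1, 32, 32, 3)))
    loss = preds[:, 0]

grads = tape.gradient(loss, conv_out)
print(grads.shape)  # (1, 32, 32, 16)
```

Since the gradient flows through the full graph, this works for any intermediate layer, including ones before a concatenation.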