In recent years, neural networks have furthered the state of the art in many domains, such as object detection and speech recognition. Despite these successes, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. Several methods have been proposed to better understand neural networks, e.g., Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, PatternNet and PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses this by providing a common interface and out-of-the-box implementations for many analysis methods. Our goal is to make analyzing neural networks' predictions easy!
Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K. R., Dähne, S., & Kindermans, P. J. (2019). iNNvestigate neural networks! Journal of Machine Learning Research, 20.
@article{JMLR:v20:18-540,
author = {Maximilian Alber and Sebastian Lapuschkin and Philipp Seegerer and Miriam H{{\"a}}gele and Kristof T. Sch{{\"u}}tt and Gr{{\'e}}goire Montavon and Wojciech Samek and Klaus-Robert M{{\"u}}ller and Sven D{{\"a}}hne and Pieter-Jan Kindermans},
title = {iNNvestigate Neural Networks!},
journal = {Journal of Machine Learning Research},
year = {2019},
volume = {20},
number = {93},
pages = {1-8},
url = {http://jmlr.org/papers/v20/18-540.html}
}
iNNvestigate is based on Keras and TensorFlow 2 and can be installed with the following commands:
```bash
pip install innvestigate
```
Please note that iNNvestigate currently requires disabling TF2's eager execution.
To use the example scripts and notebooks, one additionally needs to install the matplotlib package:

```bash
pip install matplotlib
```
The library's tests can be executed via `pytest`. The easiest way to do reproducible development on iNNvestigate is to install all dev dependencies via Poetry:

```bash
git clone https://github.com/albermax/innvestigate.git
cd innvestigate
poetry install
poetry run pytest
```
The iNNvestigate library contains implementations for the following methods:
The intention behind iNNvestigate is to make analysis methods easy to use; it is not to explain their underlying concepts and assumptions. Please read the corresponding publication(s) when using a certain method, and when publishing, please cite the corresponding paper(s) (as well as the iNNvestigate paper). Thank you!
All the available methods have in common that they try to analyze the output of a specific neuron with respect to the input of the neural network. Typically, one analyzes the neuron with the largest activation in the output layer. For example, given a Keras model, one can create a 'gradient' analyzer:
```python
import tensorflow as tf

import innvestigate

tf.compat.v1.disable_eager_execution()

model = create_keras_model()

analyzer = innvestigate.create_analyzer("gradient", model)
```
and analyze the influence of the neural network's input on the output neuron by:
```python
analysis = analyzer.analyze(inputs)
```
To analyze the neuron with index `i`, one can use the following scheme:
```python
analyzer = innvestigate.create_analyzer("gradient",
                                        model,
                                        neuron_selection_mode="index")
analysis = analyzer.analyze(inputs, i)
```
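Conceptually, the two neuron-selection modes differ only in how the output index is picked. A plain-Python sketch with toy values (hypothetical, not the library's internals):

```python
# Toy output activations for a single sample
outputs = [0.1, 2.5, 0.7]

# "max_activation" (the default): analyze the most active output neuron
selected = max(range(len(outputs)), key=outputs.__getitem__)
print(selected)  # -> 1

# "index": analyze an explicitly chosen neuron i
i = 2
print(outputs[i])  # -> 0.7
```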
Let's look at an example (code) with VGG16 and this image:
```python
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.keras.applications.vgg16 as vgg16

import innvestigate

tf.compat.v1.disable_eager_execution()

# Load the model and strip its softmax output activation
model, preprocess = vgg16.VGG16(), vgg16.preprocess_input
model = innvestigate.model_wo_softmax(model)

analyzer = innvestigate.create_analyzer("deep_taylor", model)

# Add a batch axis to the example image, preprocess, and analyze
x = preprocess(image[None])
a = analyzer.analyze(x)

# Aggregate along the color channels and normalize to [-1, 1]
a = a.sum(axis=np.argmax(np.asarray(a.shape) == 3))
a /= np.max(np.abs(a))

plt.imshow(a[0], cmap="seismic", clim=(-1, 1))
```
In the examples directory, one can find different examples as Python scripts and as Jupyter notebooks:
To use the ImageNet examples please download the example images first (script).
If you would like to contribute or add your analysis method, please open an issue or submit a pull request.
Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).
Bumps werkzeug from 2.2.2 to 2.2.3.
Sourced from werkzeug's releases.
2.2.3
This is a fix release for the 2.2.x release branch.
- Changes: https://werkzeug.palletsprojects.com/en/2.2.x/changes/#version-2-2-3
- Milestone: https://github.com/pallets/werkzeug/milestone/26?closed=1
This release contains security fixes for:
Sourced from werkzeug's changelog.
Version 2.2.3
Released 2023-02-14
- Ensure that URL rules using path converters will redirect with strict slashes when the trailing slash is missing. :issue:`2533`
- Type signature for `get_json` specifies that return type is not optional when `silent=False`. :issue:`2508`
- `parse_content_range_header` returns `None` for a value like `bytes */-1` where the length is invalid, instead of raising an `AssertionError`. :issue:`2531`
- Address remaining `ResourceWarning` related to the socket used by `run_simple`. Remove `prepare_socket`, which now happens when creating the server. :issue:`2421`
- Update pre-existing headers for `multipart/form-data` requests with the test client. :issue:`2549`
- Fix handling of header extended parameters such that they are no longer quoted. :issue:`2529`
- `LimitedStream.read` works correctly when wrapping a stream that may not return the requested size in one `read` call. :issue:`2558`
- A cookie header that starts with `=` is treated as an empty key and discarded, rather than stripping the leading `=`.
- Specify a maximum number of multipart parts, default 1000, after which a `RequestEntityTooLarge` exception is raised on parsing. This mitigates a DoS attack where a larger number of form/file parts would result in disproportionate resource use.
- `22a254f` release version 2.2.3
- `517cac5` Merge pull request from GHSA-xg9f-g7g7-2323
- `babc8d9` rewrite docs about request data limits
- `09449ee` clean up docs
- `fe899d0` limit the maximum number of multipart form parts
- `cf275f4` Merge pull request from GHSA-px8h-6qxv-m22q
- `8c2b4b8` don't strip leading = when parsing cookie
- `7c7ce5c` [pre-commit.ci] pre-commit autoupdate (#2585)
- `19ae03e` [pre-commit.ci] auto fixes from pre-commit.com hooks
- `a83d3b8` [pre-commit.ci] pre-commit autoupdate

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
I'm working on a binary classification problem and therefore have a sigmoid activation instead of a softmax activation function for my output layer. I have adjusted the `model_wo_softmax` function to accept any kind of activation by passing it as an argument, so that binary classification problems are also covered.

See here the old vs. the new call:

```python
innvestigate.model_wo_softmax(model)
innvestigate.model_wo_output_activation(model, "softmax")
```

I have also updated all examples and documentation that cover this function, so everything now uses the new function.
Bumps ipython from 8.9.0 to 8.10.0.
- `15ea1ed` release 8.10.0
- `560ad10` DOC: Update what's new for 8.10 (#13939)
- `7557ade` DOC: Update what's new for 8.10
- `385d693` Merge pull request from GHSA-29gw-9793-fvw7
- `e548ee2` Swallow potential exceptions from showtraceback() (#13934)
- `0694b08` MAINT: mock slowest test. (#13885)
- `8655912` MAINT: mock slowest test.
- `a011765` Isolate the attack tests with setUp and tearDown methods
- `c7a9470` Add some regression tests for this change
- `fd34cf5` Swallow potential exceptions from showtraceback()

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Fixes the `is_layer_at_idx` function by binding the loop variable `i` to the lambda function and looping through the model's layers to find the corresponding index.

Closes #264
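The underlying pitfall is Python's late binding of closures: a lambda created in a loop sees the loop variable's final value unless it is bound as a default argument. A minimal illustration (not the library's actual code):

```python
# Without binding, every lambda refers to the same variable i,
# which holds its final value once the loop ends.
broken = [lambda: i for i in range(3)]
print([f() for f in broken])  # -> [2, 2, 2]

# Binding i as a default argument captures its value per iteration.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # -> [0, 1, 2]
```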
I have used a simple CNN network to test the implementation, see below the model summary.
I used the following LRP settings.
```python
analyzer = LRP(
    model,
    rule="Z",
    input_layer_rule="Flat",
    until_layer_idx=3,
    until_layer_rule="Epsilon",
)
```
For verification, I printed the rule object and the layer it will be applied to.
As you can see all three rules are being applied.
The `until_layer_idx` argument will now count every layer in your model, including Input, Reshape, Pooling, etc.
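The indexing behavior can be sketched as a simple positional rule assignment; the layer names and the helper below are purely illustrative, not the library's implementation:

```python
# Every layer counts toward the index, including Input/Reshape/Pooling.
layers = ["input", "conv1", "pool1", "conv2", "flatten", "dense"]

def assign_rules(layers, until_layer_idx, until_layer_rule, default_rule):
    # Layers up to and including until_layer_idx get until_layer_rule;
    # all later layers fall back to the default rule.
    return [until_layer_rule if idx <= until_layer_idx else default_rule
            for idx, _ in enumerate(layers)]

print(assign_rules(layers, until_layer_idx=3,
                   until_layer_rule="Epsilon", default_rule="Z"))
# -> ['Epsilon', 'Epsilon', 'Epsilon', 'Epsilon', 'Z', 'Z']
```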
Hi guys,

I love this package and started to use it again after you moved to TF2 (I also used it back in its TF1 days). So thanks for all the hard work !)

My question is: I'm trying to save the analyzer after it is generated, for future use, but I seem unable to do it. Neither pickling with pickle/dill nor saving the TF2 model itself helped. The generation can take some time, and in order to use it as an online feature I have to find a workaround.

From what I've got so far, I can't pickle/dill the object because it's using TF infrastructure (which has parts written in C), and I can't save the generated model (`analyzer_obj._analyzer_model`) because eager execution is disabled (apparently it's important..).

Anything you guys can contribute from your experience?

P.S. The only solution I see is to go full TF2, remove the line `tf.compat.v1.disable_eager_execution()`, and use `tf.GradientTape`. I tried to do it, but it has issues with the session that add difficulty to this.

Thanks !!) Shai
On iNNvestigate v2.0.1, creating an analyzer inheriting from `AnalyzerNetworkBase` errors when the model contains a `BatchNormalization` layer, e.g.:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'dense_2_input' with dtype float and shape [?,50]
```

This might be due to batch normalization layers keeping moving averages of the mean and standard deviation of the training data, causing problems with the Keras history when reversing the computational graph in iNNvestigate's `create_analyzer_model`.
```python
import numpy as np
import tensorflow as tf
from keras.layers import BatchNormalization, Dense
from keras.models import Sequential

import innvestigate

tf.compat.v1.disable_eager_execution()

input_shape = (50,)
x = np.random.rand(100, *input_shape)
y = np.random.rand(100, 2)

model1 = Sequential()
model1.add(Dense(10, input_shape=input_shape))
model1.add(Dense(2))

model2 = Sequential()
model2.add(Dense(10, input_shape=input_shape))
model2.add(BatchNormalization())
model2.add(Dense(2))

def run_analysis(model):
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=10, verbose=0)
    analyzer = innvestigate.create_analyzer("gradient", model)
    analyzer.analyze(x)

print("Model without BatchNormalization:")  # passes
run_analysis(model1)
print("Model with BatchNormalization:")  # errors
run_analysis(model2)
```
- `Perturbate` on RGB images by @adrhill in https://github.com/albermax/innvestigate/pull/306

Full Changelog: https://github.com/albermax/innvestigate/compare/2.0.1...2.0.2

- `analyzer.fit` by @adrhill in https://github.com/albermax/innvestigate/pull/289

Full Changelog: https://github.com/albermax/innvestigate/compare/2.0.0...2.0.1

Full Changelog: https://github.com/albermax/innvestigate/compare/1.0.8...2.0.0