# Neuroscope

Trust in artificial intelligence (AI) predictions is crucial for the widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools explaining AI for deep learning on images is thus evident. Our toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation with convolutional neural nets (CNNs). With its easy-to-use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch can be processed, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities of Neuroscope using the example of traffic scene analysis.
## Installation

Relative to the `neuroscope/` folder, the path for the CPU version would be `conda_environment/neuroscope_cpu.yml`. The GPU version, `neuroscope_gpu.yml`, is located in the same directory. To install an environment, open Anaconda Prompt and complete the following steps:

1. Make sure the `channel_priority` setting of conda is `false` by running `conda config --set channel_priority false`.
2. Run `conda env create -f %path_to_the_yml_file%` to create an environment.
3. To verify that the environment was installed correctly, run `conda env list`. You should see the new environment on the list.
The next steps are provided for PyCharm, but you can use them as a reference for a different IDE.

1. Go to File > Settings > Project: neuroscope > Project Interpreter and click the cog on the right.
2. Select Add > Conda environment > Existing environment, and specify the `python.exe` from the environment that was created in the previous steps.
3. Go to Run > Edit Configurations. In the upper-left corner, click + and select Python.
4. Select the Python Interpreter if it is not autoselected.
5. In the Script path field, specify the full path to `src/main.py`.
6. In the Working directory field, specify the full path to the `neuroscope/` folder.
7. Run the `main.py` file with this configuration.

## Tests

The test scripts are located in the `test\` directory.
The tests use the `unittest` and `QTest` libraries. The file `test_neuroscope_gui.py` contains the test cases, and `test_setting.py` contains fixed values related to the test data, such as properties and file paths. The `unittest` framework is used for starting and finalizing test cases with the `setUp`/`tearDown` methods and the `setUpClass` class method. Every test case starts with the `test_` prefix.
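The conventions above can be sketched as a minimal test module; everything here (class name, test body) is illustrative and not taken from `test_neuroscope_gui.py`:

```python
import unittest


class GuiSmokeTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # setUpClass runs once, before any test in the class
        # (e.g. to create shared resources such as a QApplication).
        cls.shared = "shared setup"

    def setUp(self):
        # setUp runs before each individual test case.
        self.value = 41

    def tearDown(self):
        # tearDown runs after each individual test case.
        self.value = None

    def test_example(self):
        # Every test case starts with the test_ prefix,
        # which is how unittest discovers it.
        self.assertEqual(self.value + 1, 42)


# Run the class programmatically; this mirrors what happens when a test
# module ending in `unittest.main()` is executed directly.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(GuiSmokeTest)
result = unittest.TextTestRunner().run(suite)
```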
To run the tests with PyCharm, complete the following steps:

1. Go to Run > Edit Configurations, click + and select Python tests > Unittests. A new configuration will be created.
2. In the Target field, click the folder icon and specify the full path to `neuroscope\test\test_neuroscope_gui.py`.
3. In the Environment section, specify the preferred Python interpreter from File > Settings > Project Interpreter.
4. Specify `neuroscope\test` in the Working directory field.
5. To the left of the Run and Debug buttons, there is a menu where you can select a configuration. Run or Debug the `test_neuroscope_gui.py` file.

If you want to run the tests in a terminal:

1. Add `\neuroscope\test` to `PYTHONPATH`, i.e. `set PYTHONPATH=%PYTHONPATH%;C:\full_path_to_the_test_folder`.
2. Go to `\neuroscope\test` and run `python test_neuroscope_gui.py`.
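Adding the folder to `PYTHONPATH` simply makes its modules importable, so `test_neuroscope_gui.py` can find helpers such as `test_setting.py`. The same effect can be achieved from inside Python (a sketch, reusing the placeholder path from above):

```python
import sys

# Placeholder path, as in the instructions above -- substitute the real
# location of neuroscope\test on your machine.
test_folder = r"C:\full_path_to_the_test_folder"

# Equivalent of `set PYTHONPATH=%PYTHONPATH%;...`: prepend the folder to the
# import path so its modules (e.g. test_setting) become importable.
if test_folder not in sys.path:
    sys.path.insert(0, test_folder)
```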
## Launch

To launch Neuroscope in a terminal, go to the `neuroscope` folder and run `python src/main.py`.

To launch Neuroscope in PyCharm, run `main.py` with the configuration that was created in the Installation step of this manual.
## Usage

The common usage scenario consists of opening the model, adjusting the settings for the model, selecting images, applying analysis methods, and saving the results. To open a visualization, go to Window > New inspection window. Some of the model settings are described below:

- Preprocessing presets are stored in `data\preprocessing_presets.json`. The Mean and Standard Deviation fields are automatically filled from the selected preset; a new preset can be added with the `new` button.
- `decoding` describes how to interpret the output array to make it readable for a human.
- The output axis-order option is like `channel_first`, but for the output. WHC is Width Height Channels: when ticked, the axes are assumed to go in this order; when empty, the reversed order is assumed.
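The WHC axis-order option described above can be pictured with a small helper; this function is purely illustrative and not part of Neuroscope's API:

```python
def axis_order(whc_ticked: bool) -> tuple:
    """Return the assumed axis order of an output array.

    whc_ticked=True  -> Width, Height, Channels (the WHC order);
    whc_ticked=False -> the reversed order, Channels, Height, Width.
    """
    order = ("width", "height", "channels")
    return order if whc_ticked else order[::-1]
```

For example, an output of shape `(512, 256, 3)` would be read as a 512-wide, 256-high, 3-channel map when the box is ticked, and as 512 channels of 256x3 maps otherwise.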
If an inspection window doesn't open, there might be something wrong with the model settings described above.

## Cite

```bibtex
@Article{app11052199,
AUTHOR = {Schorr, Christian and Goodarzi, Payman and Chen, Fei and Dahmen, Tim},
TITLE = {Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets},
JOURNAL = {Applied Sciences},
VOLUME = {11},
YEAR = {2021},
NUMBER = {5},
ARTICLE-NUMBER = {2199},
URL = {https://www.mdpi.com/2076-3417/11/5/2199},
ISSN = {2076-3417},
ABSTRACT = {Trust in artificial intelligence (AI) predictions is a crucial point for a widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools explaining AI for deep learning of images is thus eminent. Our proposed toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation of convolutional neural nets (CNNs). With its easy to use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch are processable, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities provided by Neuroscope using the example of traffic scene analysis.},
DOI = {10.3390/app11052199}
}
```