Published in 2019 at "International Journal of Computer Vision"
DOI: 10.1007/s11263-019-01228-7
Abstract: We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients…
Keywords:
cam;
image;
visual explanations;
grad cam
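The computation the abstract names can be reproduced on synthetic arrays. Below is a minimal NumPy sketch of the published Grad-CAM formula (global-average-pooled gradients as channel weights, then a ReLU-ed weighted sum); the tensors `A` and `dY_dA` stand in for activations and gradients that would normally be extracted from a real CNN, and the function name is illustrative:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Gradient-weighted Class Activation Mapping on precomputed tensors.

    feature_maps: (K, H, W) activations A^k of the last conv layer.
    grads:        (K, H, W) gradients dy^c/dA^k for the target class c.
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    weights = grads.mean(axis=(1, 2))                      # shape (K,)
    # weighted combination of feature maps, ReLU keeps positive influence
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # normalize to [0, 1] for visualization (skip if the map is all zeros)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# toy example: 4 feature maps of size 7x7 with random values
rng = np.random.default_rng(0)
A = rng.random((4, 7, 7))
dY_dA = rng.standard_normal((4, 7, 7))
heatmap = grad_cam(A, dY_dA)
print(heatmap.shape)  # (7, 7)
```

In practice the resulting coarse map is upsampled to the input resolution and overlaid on the image.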
Published in 2021 at "Scientific Reports"
DOI: 10.1038/s41598-021-98448-0
Abstract: By emulating biological features of the brain, Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional deep learning. To make SNNs ubiquitous, a ‘visual explanation’ technique for analysing and explaining the internal spike behavior of…
Keywords:
visual explanations;
spiking neural;
inter spike;
neural networks
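The truncated abstract only hints at how spike behavior is turned into a visual explanation; a generic sketch of one plausible scheme (not necessarily this paper's exact formulation) scores each neuron location by its spike history with an exponential temporal decay, so recent spikes weigh more. The function name and decay constant `gamma` are illustrative assumptions:

```python
import numpy as np

def spike_heatmap(spike_trains, t, gamma=0.5):
    """Heatmap from binary spike trains of shape (T, H, W).

    Each past spike at time t' contributes exp(-gamma * (t - t')),
    an illustrative decay kernel favoring recent activity.
    """
    times = np.arange(t + 1)                     # timesteps 0..t
    decay = np.exp(-gamma * (t - times))         # shape (t+1,)
    # weight each timestep's spike map and accumulate over time
    return (spike_trains[: t + 1] * decay[:, None, None]).sum(axis=0)

# toy example: 10 timesteps over a 4x4 neuron grid, random spikes
rng = np.random.default_rng(1)
spikes = (rng.random((10, 4, 4)) < 0.3).astype(float)
hm = spike_heatmap(spikes, t=9)
print(hm.shape)  # (4, 4)
```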
Published in 2023 at "IEEE Access"
DOI: 10.1109/access.2023.3235332
Abstract: In some DL applications, such as remote sensing, it is hard to obtain high task performance (e.g., accuracy) from a DL model on image analysis due to the low-resolution characteristics of the imagery.…
Keywords:
explanations mediation;
visual explanations;
scheme;
model
Published in 2022 at "IEEE Geoscience and Remote Sensing Letters"
DOI: 10.1109/lgrs.2023.3271649
Abstract: Visual explanation of “black-box” models allows researchers in explainable artificial intelligence (XAI) to interpret the model’s decisions in a human-understandable manner. In this letter, we propose interpretable class activation mapping for tree crown detection (Crown-CAM)…
Keywords:
crown cam;
tree crown;
visual explanations;
cam
Published in 2022 at "Computer Graphics Forum"
DOI: 10.1111/cgf.14541
Abstract: Language models, such as BERT, construct multiple, contextualized embeddings for each word occurrence in a corpus. Understanding how the contextualization propagates through the model's layers is crucial for deciding which layers to use for a…
Keywords:
contextualization;
visual explanations;
embedding spaces;
explanations language
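The question the abstract raises, how contextualization propagates through the model's layers, is commonly probed by comparing a token's embedding across consecutive layers. A minimal NumPy sketch, assuming per-layer embeddings have already been extracted (the random array below is a placeholder for real model outputs, and the function name is illustrative):

```python
import numpy as np

def layerwise_drift(layer_embeddings):
    """Cosine similarity of one token's embedding between consecutive layers.

    layer_embeddings: (L, D) array, row l = token embedding after layer l.
    Low similarity between layers l and l+1 suggests strong
    recontextualization happening at that layer.
    """
    e = np.asarray(layer_embeddings, dtype=float)
    norms = np.linalg.norm(e, axis=1)
    dots = (e[:-1] * e[1:]).sum(axis=1)
    return dots / (norms[:-1] * norms[1:])

# toy example: 13 "layers" (input embedding + 12 transformer layers), 768 dims
rng = np.random.default_rng(2)
emb = rng.standard_normal((13, 768))
sims = layerwise_drift(emb)
print(sims.shape)  # (12,)
```

Plotting such per-layer similarities across many word occurrences is one way to decide which layers to use for a downstream task.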