Calvin Guillot

Explainable AI

This was my master's thesis project, funded by Aalto University and FCAI.

Artificial intelligence methods, particularly deep learning and other neural-network-based architectures, have seen increasing development and deployment over the last decade. These architectures are especially suited to learning from large volumes of labelled data, yet even though we know how they are constructed, they are effectively black boxes when it comes to understanding the basis on which they produce predictions, especially as the size of the network increases.

Explainable AI (xAI) methods aim to disclose, in a manner understandable to humans, the key features and values that influence the predictions of black-box classifiers. In this project, the first steps are taken towards an interactive xAI system that places a human in the loop: a user's ratings of how sensible the explanations of individual classifications are drive an iterative search over the hyperparameters of the neural-network classifier (VGG-16), the image segmentation method (Felzenszwalb), and the xAI method (SHAP), with the goal of making the explanations more sensible without hurting the classifier's accuracy on the training set. Users rate the sensibility of each explanation from 1 to 10, and their ratings are fed back to a Bayesian optimization algorithm that suggests new hyperparameter values for the classifier, segmentation, and SHAP modules.
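
To make the loop concrete, here is a minimal sketch of how one such human-in-the-loop cycle could be wired together: Felzenszwalb superpixels from scikit-image, a pretrained VGG-16 from torchvision, SHAP's KernelExplainer attributing the prediction to superpixels, and scikit-optimize's ask/tell interface suggesting new hyperparameter values from the user's rating. This is an illustrative assumption of the setup rather than the thesis code: for brevity it tunes only the segmentation hyperparameters, the parameter ranges are made up, and rate_explanation() stands in for the actual rating interface.

```python
# Minimal sketch of one human-in-the-loop tuning cycle (illustrative only).
# Assumptions not taken from the project text: the library choices
# (scikit-image, scikit-optimize, shap, torchvision), the hyperparameter
# ranges, and the rate_explanation() stand-in for the real rating interface.
import numpy as np
import shap
import torch
from skimage.segmentation import felzenszwalb
from skopt import Optimizer
from skopt.space import Integer, Real
from torchvision import models, transforms

model = models.vgg16(weights="IMAGENET1K_V1").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def class_prob(masks, image, segments, class_idx, background=0.5):
    """Probability of `class_idx` for copies of `image` in which the
    superpixels switched off in each binary mask are greyed out."""
    probs = []
    for mask in masks:
        img = image.copy()
        for seg_id, keep in enumerate(mask):
            if not keep:
                img[segments == seg_id] = background
        with torch.no_grad():
            logits = model(preprocess(img.astype(np.float32)).unsqueeze(0))
        probs.append(torch.softmax(logits, dim=1)[0, class_idx].item())
    return np.array(probs)


def explain(image, scale, sigma, min_size, nsamples=200):
    """Segment the image, then attribute the predicted class to the
    superpixels with KernelSHAP (segments on/off as binary features)."""
    segments = felzenszwalb(image, scale=scale, sigma=sigma,
                            min_size=int(min_size))
    n_seg = int(segments.max()) + 1
    with torch.no_grad():
        logits = model(preprocess(image.astype(np.float32)).unsqueeze(0))
    class_idx = int(logits.argmax())
    f = lambda masks: class_prob(masks, image, segments, class_idx)
    explainer = shap.KernelExplainer(f, np.zeros((1, n_seg)))
    shap_values = explainer.shap_values(np.ones(n_seg), nsamples=nsamples)
    return segments, shap_values


def rate_explanation(segments, shap_values):
    """Placeholder for the user interface: a real UI would render the
    SHAP values over the segments and collect a 1-10 sensibility rating."""
    return float(input("Sensibility rating (1-10): "))


# Search space over the segmentation hyperparameters (made-up ranges).
space = [Real(50.0, 500.0, name="scale"),
         Real(0.1, 2.0, name="sigma"),
         Integer(20, 200, name="min_size")]
opt = Optimizer(space)

image = np.random.rand(224, 224, 3)  # stand-in for an image from the dataset
for _ in range(10):
    scale, sigma, min_size = opt.ask()
    segments, shap_values = explain(image, scale, sigma, min_size)
    rating = rate_explanation(segments, shap_values)
    opt.tell([scale, sigma, min_size], -rating)  # skopt minimizes, so negate
```

The ask/tell interface fits this setting because each evaluation requires a human rating rather than an automatic objective, and negating the rating turns the optimizer's minimization into maximization of the user's sensibility score.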

The results of the user study suggest that hyperparameters which received higher ratings for individual explanations also tended to improve the explainability of other images, generally improving explainability for the whole image class. An improvement in the classifier's out-of-sample accuracy (for the same class) was observed in some scenarios, but this still needs more comprehensive evaluation. Jointly improving explanations for multiple classes would require more sensitive user queries, exploring a variety of xAI methods and datasets, and larger-scale experiments with users.

You can find the document here.