Towards interpretable neural networks for low-level vision tasks
Luis Albert Zavala-Mondragon, Post-doctoral researcher at the Eindhoven University of Technology in the Netherlands
Seminar • 2024-10-07
Abstract

Neural networks are one of the main drivers of the AI revolution. However, these models are often treated as black boxes and their internal processing is neglected, which often results in high model complexity and a lack of robustness to variations in the input. To address these issues, neural networks are increasingly being studied through the lens of signal processing, which has led to a better understanding of these models and to new architectures that improve on black-box designs. This presentation covers research on integrating signal-processing concepts within neural networks for tasks such as image denoising and segmentation. Specifically, it shows that simplified designs based on concepts from framelet-based denoising algorithms yield interpretable, fast, and/or low-complexity networks, while achieving absolute performance close to that of larger black-box designs. Moreover, this research proves that networks trained for other tasks, such as image segmentation, perform operations similar to framelet-based denoising algorithms, which opens up new possibilities for understanding and designing these models.
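For readers unfamiliar with the framelet-based denoising mentioned above, the sketch below illustrates the general idea on a 1-D signal: analyze the signal into low-pass and high-pass channels, shrink the (noise-dominated) high-pass coefficients toward zero, and reconstruct. This is a minimal, hypothetical example using a one-level undecimated Haar pair and soft thresholding; it is not the speaker's actual method, and the function names and threshold value are illustrative assumptions.

```python
import numpy as np

def haar_analysis(x):
    # One-level undecimated Haar analysis (circular boundary):
    # low-pass = local average, high-pass = local difference.
    x_shift = np.roll(x, -1)
    low = (x + x_shift) / 2.0
    high = (x - x_shift) / 2.0
    return low, high

def soft_threshold(c, t):
    # Shrink coefficients toward zero by t; coefficients with
    # magnitude below t (mostly noise) are set to zero.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def framelet_denoise(x, t):
    # Analysis -> shrink high-pass channel -> synthesis.
    # For this Haar pair, low[n] + high[n] == x[n], so the
    # synthesis step is a simple sum (exact when t == 0).
    low, high = haar_analysis(x)
    return low + soft_threshold(high, t)
```

With a zero threshold the transform reconstructs the input exactly; with a positive threshold, small high-pass coefficients are suppressed, which removes noise while largely preserving slowly varying structure.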