Constraining neural networks output by an interpolating loss function with region priors
Deep neural networks have the ability to generalize beyond observed training data. However, for some applications they may produce outputs that are known a priori to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on deep neural networks is to introduce these priors in a loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions with a loss function based on gradient interpolation. We evaluate our method on a positioning task where a region map is used to reduce invalid position estimates. Results show that our approach is effective in decreasing invalid outputs for several geometrically complex environments.
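The abstract does not spell out the loss itself, but the general idea of a region-prior penalty can be sketched as follows. This is an illustrative sketch only, not the authors' method: it assumes a 2D binary region map, builds a distance field to the nearest valid cell, and bilinearly interpolates that field at a continuous position estimate so the penalty is smooth enough to contribute gradients to a training loss.

```python
import math

# Hypothetical 5x5 region map: 1 = valid position, 0 = invalid (e.g. walls).
region = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

def distance_field(region):
    """BFS distance (in cells) from each cell to the nearest valid cell."""
    h, w = len(region), len(region[0])
    dist = [[math.inf] * w for _ in range(h)]
    frontier = [(r, c) for r in range(h) for c in range(w) if region[r][c]]
    for r, c in frontier:
        dist[r][c] = 0.0
    while frontier:
        nxt = []
        for r, c in frontier:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] == math.inf:
                    dist[nr][nc] = dist[r][c] + 1.0
                    nxt.append((nr, nc))
        frontier = nxt
    return dist

def region_penalty(x, y, dist):
    """Bilinearly interpolate the distance field at a continuous
    estimate (x, y); interpolation makes the penalty differentiable
    almost everywhere, so it can be added to a training loss."""
    h, w = len(dist), len(dist[0])
    x0, y0 = min(int(x), w - 2), min(int(y), h - 2)
    fx, fy = x - x0, y - y0
    d00, d01 = dist[y0][x0], dist[y0][x0 + 1]
    d10, d11 = dist[y0 + 1][x0], dist[y0 + 1][x0 + 1]
    return ((1 - fy) * ((1 - fx) * d00 + fx * d01)
            + fy * ((1 - fx) * d10 + fx * d11))

dist = distance_field(region)
print(region_penalty(1.0, 0.0, dist))  # inside a valid cell -> 0.0
print(region_penalty(4.0, 0.0, dist))  # invalid cell -> positive penalty
```

In training, such a penalty would be weighted and added to the task loss, so that position estimates falling in invalid regions are pushed back toward the valid set; the paper's interpolating formulation is more sophisticated than this distance-field sketch.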
NeurIPS workshop on Interpretable Inductive Biases and Physically Structured Learning