
Author

  • Hannes Bergkvist, Peter Exner, Paul Davidsson*

Company

  • Sony Europe B.V.

Venue

  • NeurIPS

Date

  • 2020

Constraining neural networks output by an interpolating loss function with region priors

Abstract

Deep neural networks have the ability to generalize beyond observed training data. However, for some applications they may produce output that is known a priori to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on deep neural networks is by introducing these priors in a loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions with a loss function based on gradient interpolation. We evaluate our method on a positioning task where a region map is used to reduce invalid position estimates. Results show that our approach is effective in decreasing invalid outputs for several geometrically complex environments.
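The abstract describes penalizing position estimates that fall outside known valid regions via a differentiable loss term. The sketch below illustrates the general idea only, not the paper's actual method: it precomputes a distance map from a binary region mask and bilinearly interpolates it at a continuous predicted position, so the penalty is zero inside valid regions and grows smoothly outside. All names (`distance_map`, `region_penalty`) and details are illustrative assumptions.

```python
import numpy as np

def distance_map(valid):
    """Naive distance from each grid cell to the nearest valid cell.
    (Illustrative; a real implementation would use a fast distance transform.)"""
    ys, xs = np.nonzero(valid)
    valid_pts = np.stack([ys, xs], axis=1)                  # (K, 2) valid cells
    h, w = valid.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)    # (h, w, 2)
    d = np.linalg.norm(grid[:, :, None, :] - valid_pts[None, None, :, :], axis=-1)
    return d.min(axis=2)                                    # (h, w) distances

def region_penalty(dmap, y, x):
    """Bilinearly interpolate the distance map at a continuous position,
    so the penalty varies smoothly with the predicted coordinates."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, dmap.shape[0] - 1)
    x1 = min(x0 + 1, dmap.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * dmap[y0, x0] + (1 - wy) * wx * dmap[y0, x1]
            + wy * (1 - wx) * dmap[y1, x0] + wy * wx * dmap[y1, x1])

# A 3x3 valid region inside a 5x5 map; the penalty would be added to the
# usual regression loss with some weighting factor.
valid = np.zeros((5, 5), dtype=bool)
valid[1:4, 1:4] = True
dmap = distance_map(valid)
inside = region_penalty(dmap, 2.0, 2.0)   # inside the valid region: zero
outside = region_penalty(dmap, 0.0, 0.0)  # outside: positive penalty
```

In training, such a term would be weighted and added to the data loss; because the interpolated penalty is piecewise-smooth in the predicted coordinates, gradients push estimates back toward valid regions.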

NeurIPS workshop on Interpretable Inductive Biases and Physically Structured Learning
