
Authors

  • Francesco Cartella, Orlando Anunciacao, Yuki Funabiki, Daisuke Yamaguchi, Toru Akishita, Olivier Elshocht

Company

  • Sony Corporation

Venue

  • SafeAI

Date

  • 2021

Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data

Abstract

Guaranteeing the security of transactional systems is a crucial priority for all institutions that process transactions, in order to protect their businesses against cyberattacks and fraud attempts. Adversarial attacks are novel techniques that, besides having proven effective at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim at producing adversarial examples, that is, slightly modified inputs that induce the Artificial Intelligence (AI) system to return incorrect outputs that are advantageous for the attacker. In this paper we illustrate a novel approach to modify and adapt state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate and yield adversarial examples that are also less perceptible when analyzed by humans. Moreover, when applied to a real-world production system, the proposed techniques show that they can pose a serious threat to the robustness of advanced AI-based fraud detection procedures.
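
The abstract does not spell out the paper's concrete modifications, so the following is purely an illustrative sketch of the general idea of a gradient-based adversarial attack on tabular features, using the classic FGSM perturbation (Goodfellow et al.) rather than the authors' method. The model architecture, feature count, and epsilon budget below are assumptions for demonstration only.

    import torch
    import torch.nn as nn

    # Hypothetical fraud classifier over 8 numeric transaction features
    # (a stand-in; not the production system studied in the paper).
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_tabular(x, true_label, epsilon=0.05):
        # Perturb one transaction so the classifier moves away from
        # true_label. epsilon bounds the per-feature change, keeping the
        # example "slightly modified" and hence harder for a human
        # analyst to notice.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x.unsqueeze(0)), torch.tensor([true_label]))
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    # A fraudulent transaction (label 1) nudged toward a "legitimate" score.
    x = torch.randn(8)
    x_adv = fgsm_tabular(x, true_label=1)
    print(model(x_adv.unsqueeze(0)).softmax(dim=-1))

A sketch like this ignores the constraints that the paper addresses, such as class imbalance and keeping perturbed tabular records plausible to human reviewers; adapting attacks to those constraints is the contribution described in the abstract.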

SafeAI 2021: The AAAI's Workshop on Artificial Intelligence Safety
