Views on the Security of Machine Learning Algorithms

David Ríos Insua

Institute of Mathematical Sciences, National Research Council


The recent growth in computational capacity and advances in data acquisition, processing and storage have provided unprecedented ways to model and forecast numerous business, policy and science processes. Most of these approaches rely on the idea that large databases and powerful algorithms may discover relevant behaviour patterns. Big Data and machine learning are giving rise to a new scientific paradigm and reshaping our society and economy in ways largely unforeseen.

Along with the massive deployment of machine learning based systems, several potential shortcomings have been identified. Our focus will be on the security of such systems. This is the subject of the emergent field of adversarial machine learning, which questions the standard hypothesis that data in training and operations are independent and identically distributed, owing to the presence of adversaries ready to alter the problem to attain a benefit through adaptive actions over time. The prevailing paradigm for modelling the confrontation between learning-based systems and adversaries has been, explicitly or implicitly, game theory, with its entailed common knowledge assumptions. From a fundamental point of view, such a hypothesis is untenable in security domains, as adversaries tend to hide and conceal information. As an alternative to standard game theory, over the last decade we have helped create and develop the field of adversarial risk analysis, a potential game changer in adversarial machine learning that provides a new paradigm. Our objective in this talk is to sketch this novel paradigm and identify open issues from foundational, modelling, computational and operational perspectives. Policy issues will be discussed as well.