The widespread adoption of machine learning technologies has made them a popular and valuable target for malicious agents seeking to cause disruption or gain unauthorized access.
What is it?
Adversarial machine learning refers to any kind of malicious action that seeks to influence the outputs of a machine learning application or exploit its weaknesses.
With machine learning applications responsible for everything from categorizing images to detecting suspicious network activity, there are a lot of reasons why someone might want to maliciously influence how they operate.
Typically, adversarial machine learning involves knowledge of how an ML model was trained or which training data was used. That knowledge can be used to craft fake or synthetic inputs that manipulate the model into producing rogue outcomes. One example is the use of so-called ‘dazzle make-up’ to defeat facial recognition systems.
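To make the idea concrete, here is a minimal Python sketch of one well-known white-box attack, the fast gradient sign method (FGSM), written with PyTorch. The trained classifier `model`, the batched input `image`, its integer class `label` and the perturbation size `epsilon` are hypothetical stand-ins rather than details from the example above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge `image` in the direction that most increases the model's loss."""
    # Assumes `image` is a float tensor of shape (1, C, H, W) with values in [0, 1]
    # and `label` is a LongTensor of shape (1,) holding the true class index.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the input gradient:
    # visually near-identical, yet often enough to flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The ‘dazzle make-up’ example works on the same principle, only the perturbation is applied in the physical world rather than directly to pixel values.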
What’s in it for you?
Adversarial machine learning is far more of a threat to most businesses than it is an opportunity. It’s an emerging digital threat that those using machine learning — especially in areas like identity verification and access management — need to be aware of, and prepared for.
However, adversarial machine learning techniques can be useful when testing your own machine learning, enabling you to identify and resolve vulnerabilities before any bad actors can take advantage of them.
It can also help improve the outputs of your machine learning applications, enabling them to deliver greater value for your business and customers.
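As a hedged sketch of that defensive use, the same attack code can be turned around to measure how robust a model is, by checking how often its predictions survive small adversarial perturbations. It reuses the hypothetical `fgsm_perturb` from the earlier sketch, and `model`, `images` and `labels` are again placeholder names.

```python
import torch

def robust_accuracy(model, images, labels, epsilon=0.03):
    """Share of a labelled batch the model still gets right after an FGSM nudge."""
    adversarial = fgsm_perturb(model, images, labels, epsilon)  # from the sketch above
    with torch.no_grad():
        predictions = model(adversarial).argmax(dim=1)
    return (predictions == labels).float().mean().item()
```

A large gap between ordinary accuracy and this robust accuracy is a signal that the model needs hardening, for example by retraining on adversarial examples, before it is exposed to untrusted inputs.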
What are the trade-offs?
Exposure to adversarial machine learning is itself a trade-off of using machine learning. When you build applications that can learn from new data and adapt to their environment, they’re going to be susceptible to taking in bad information, drawing incorrect conclusions and making poor decisions based on misleading inputs.
By using pre-defined data sets to train machine learning applications, you can reduce their exposure to malicious inputs. However, that still doesn’t prevent trial-and-error attack methods designed to probe a model for vulnerabilities.
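To illustrate what such a trial-and-error attack can look like, here is a rough Python sketch of a black-box probe that only needs the model’s predictions, not its internals or training data. The `predict` function, the query `budget` and the `step` size are hypothetical placeholders.

```python
import numpy as np

def random_search_attack(predict, x, true_label, budget=1000, step=0.02, seed=0):
    """Try small random perturbations of `x` until the predicted class changes."""
    rng = np.random.default_rng(seed)
    for _ in range(budget):
        # Add low-amplitude noise and keep the input within a valid [0, 1] range.
        candidate = np.clip(x + step * rng.standard_normal(x.shape), 0.0, 1.0)
        if predict(candidate) != true_label:
            return candidate  # an input the model now misclassifies
    return None  # no evasion found within the query budget
```

Because the loop only queries the model, curating the training data does nothing to stop it, which is the limitation described above.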