Tuesday, November 5, 2024

Adversarial machine learning: an introduction to attacks on ML models


In this post, we dive into the concept of adversarial machine learning, review the types of adversarial attacks that target machine learning systems, and look at how to mitigate them.

Machine learning (ML) is a sub-field of artificial intelligence (AI) that is becoming increasingly important in computing, as its use has spread across many disciplines, such as advertising, medicine, fraud detection, translation, and voice and facial recognition.

But like any emerging technology, it is not only of interest to developers looking to solve problems; it also attracts the attention of cybercriminals. Although machine learning is a great ally of cybersecurity, it has a paradoxical side as well, because the technology presents a major weakness: data manipulation.

What is adversarial machine learning, and what are adversarial attacks?

Researchers and experts use the term “adversarial machine learning” (AML), or adversarial attacks, to describe the risk that AI systems can be manipulated by adversaries in ways that lead to incorrect assessments and predictions.

Agencies such as NIST have begun compiling terminology and a taxonomy to inform future standards and best practices for assessing and managing the security of ML components, providing a common language and understanding across the AML landscape.

In computer security, the term “adversary” designates a system and/or a cybercriminal that attempts to attack a system in order to obtain some benefit.

There are several examples of adversarial attacks that manipulate the data an ML model works with in order to make it fail, either during the model's training phase or by feeding an already trained model malicious inputs specifically designed to fool it.

How is an adversarial attack performed?

Machine learning focuses on the ability of computers to learn from data without being explicitly programmed for a particular task, while adversarial machine learning is the process of extracting information about the behavior and characteristics of an ML system and/or learning how to manipulate its inputs to obtain a preferred result. In general, AML can work in several ways; for example, it can use attack techniques to degrade the accuracy of machine learning systems that have been manipulated in some way.

The attacker can ultimately push the ML system into making a wrong decision, and the attack can occur at different stages. For example, in the inference stage, the attacker can manipulate the input (“noise perturbations”) so that the ML system returns an erroneous prediction. To be clear, perturbations are simply very subtle changes to the input data.
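
To make this concrete, here is a minimal sketch of such a perturbation (toy data, a simple linear classifier, and an FGSM-style step; every value is illustrative rather than drawn from a real system): a change of at most 0.3 per feature is enough to flip the model's prediction for a sample near the decision boundary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs with 10 features each.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 10)),
               rng.normal(+1.0, 1.0, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a correctly classified class-1 sample that sits near the decision boundary.
proba = model.predict_proba(X)[:, 1]
candidates = (y == 1) & (model.predict(X) == 1)
x = X[np.argmin(np.where(candidates, proba, np.inf))]
print("original prediction:   ", model.predict(x.reshape(1, -1))[0])      # 1

# FGSM-style perturbation: nudge every feature slightly against class 1.
eps = 0.3
x_adv = x - eps * np.sign(model.coef_[0])
print("largest feature change:", np.max(np.abs(x_adv - x)))               # 0.3
print("perturbed prediction:  ", model.predict(x_adv.reshape(1, -1))[0])  # usually flips to 0
```

The same idea scales to images, where per-pixel changes of this magnitude are invisible to the human eye.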


These “malicious perturbations” can be introduced at different stages of the ML pipeline:

Categories of ML attacks, according to “Adversarial Machine Learning” by Yevgeniy Vorobeychik and Murat Kantarcioglu

Types of adversarial attacks

If we think of the ML process in two stages, we have the training of the model and then its output, i.e. the prediction stage, also known as the inference stage. This last stage is reached once the model has been obtained: we feed it new data and the already trained model must “discover”, or rather “predict”, the result.

For example, suppose we want to train an ML model to detect malicious files based on their level of entropy. We must first decide which ML model is best suited, for example a random forest, a decision tree, etc. Once the model has been trained and tested, when a malicious file is submitted at the inference stage, that file will be detected and blocked.
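
As a rough sketch of those two stages, the toy example below (synthetic byte sequences and a hypothetical shannon_entropy helper; real detectors use far richer features) trains a random forest on an entropy feature and then uses it at inference time.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8)."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    probs = counts[counts > 0] / len(data)
    return float(-(probs * np.log2(probs)).sum())

rng = np.random.default_rng(1)

# Assumption: benign files contain structured, lower-entropy content, while
# packed or encrypted malware looks close to random (high entropy).
benign  = [bytes(rng.choice(32,  size=4096).astype(np.uint8)) for _ in range(100)]
malware = [bytes(rng.choice(256, size=4096).astype(np.uint8)) for _ in range(100)]

# Training stage: one entropy feature per file, label 0 = benign, 1 = malicious.
X = np.array([[shannon_entropy(f)] for f in benign + malware])
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Inference stage: a new high-entropy file is flagged and can be blocked.
new_file = bytes(rng.choice(256, size=4096).astype(np.uint8))
print("entropy:", round(shannon_entropy(new_file), 2))
print("flagged as malicious:", bool(clf.predict([[shannon_entropy(new_file)]])[0]))
```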

Once these two main stages are understood, we can distinguish the different types of attacks that ML models may face. According to the report published by NIST, AML attacks can be divided as follows:

Classification of AML according to NIST

The main attacks that an ML model can suffer are explained below:

Poisoning attacks

Attacks on ML systems during the training phase are known as “poisoning” or “contamination” attacks. In these cases, the adversary (the cybercriminal) presents incorrectly labeled data to the classifier so that the system makes biased or inaccurate decisions in the future. Poisoning attacks require the adversary to have a certain degree of control over the training data.

It may also happen that poisoned data is supplied during training in order to modify the behavior of the deployed machine learning system itself, which is why many researchers classify this variant as a “backdoor attack”.

With this attack, the ML system can learn an incorrect model and the manipulation can go unnoticed, since for most inputs the system will still give the correct answer. The problem arises only for certain specific inputs chosen by the cybercriminal, and only in those situations will the learning system give the response designed by the attacker. The danger of this type of attack is that it is very stealthy: at first glance it is not easy for a data scientist to detect.

Consider such an attack targeting a facial recognition system. In this case, the attacker can modify the model so that certain faces are interpreted as those of a particular person, in order to impersonate that person before the facial recognition system and access certain information. This is where serious security flaws come into play.
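
The toy sketch below (entirely synthetic numeric data with an arbitrary “trigger” feature; an illustration of the concept, not a real-world attack) shows the stealthy behavior described above: accuracy on clean data stays high, yet an input stamped with the attacker's trigger is misclassified.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Clean two-class data (class 0 around -1, class 1 around +1, 10 features).
X = np.vstack([rng.normal(-1.0, 1.0, (500, 10)),
               rng.normal(+1.0, 1.0, (500, 10))])
y = np.array([0] * 500 + [1] * 500)

# Poisoned samples: class-1-like points stamped with a trigger (feature 0 = 6)
# but labeled 0, the class the attacker wants the trigger to map to.
trigger_value = 6.0
X_poison = rng.normal(+1.0, 1.0, (50, 10))
X_poison[:, 0] = trigger_value
y_poison = np.zeros(50, dtype=int)

model = LogisticRegression(max_iter=1000).fit(np.vstack([X, X_poison]),
                                              np.concatenate([y, y_poison]))

# Accuracy on clean test data stays high, so the backdoor is hard to notice...
X_test = np.vstack([rng.normal(-1.0, 1.0, (200, 10)), rng.normal(+1.0, 1.0, (200, 10))])
y_test = np.array([0] * 200 + [1] * 200)
print("clean test accuracy:", model.score(X_test, y_test))

# ...but a class-1-like input carrying the trigger is typically predicted as class 0.
x_triggered = rng.normal(+1.0, 1.0, (1, 10))
x_triggered[0, 0] = trigger_value
print("prediction for triggered input:", model.predict(x_triggered)[0])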


Evasion or exploratory attacks

Unlike poisoning attacks, evasion attacks occur at the inference stage, also known as the prediction stage. In this type of attack, adversaries seek to feed in carefully crafted inputs so that the model makes incorrect predictions.
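
Coming back to the hypothetical entropy-based detector sketched earlier, a crude evasion could look like the following (all values are assumptions for illustration): the attacker appends low-entropy padding so the file's measured entropy drops, without modifying the malicious payload itself.

```python
import numpy as np

def shannon_entropy(data: bytes) -> float:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    probs = counts[counts > 0] / len(data)
    return float(-(probs * np.log2(probs)).sum())

rng = np.random.default_rng(4)
payload = bytes(rng.choice(256, size=4096).astype(np.uint8))  # high-entropy (packed) payload
padding = b"\x00" * 20000                                     # low-entropy filler bytes

print("payload entropy:", round(shannon_entropy(payload), 2))            # close to 8 bits/byte
print("padded entropy: ", round(shannon_entropy(payload + padding), 2))  # far lower

# Fed to the detector sketched earlier, the padded file may now land on the
# "benign" side of the learned decision boundary, evading the prediction.
```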

Inversion attacks

In the case of inversion attacks, inferred features can allow an adversary to reconstruct the data used to train the model, including personal information, thereby violating individuals’ privacy. These attacks differ from the previous ones in that their goal is to access the ML model itself, for example to learn about the training data or the properties used to train it. Now, think about how sensitive some of the data generated in a hospital is. Given that ML models are now used for detection in diagnostic imaging, if such an attack occurred, attackers could potentially gain access to patients’ medical records.
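
The sketch below illustrates the idea on entirely synthetic “patient profiles” (the data, model, and helper names are our own assumptions): by repeatedly querying a model's confidence scores, an attacker reconstructs an input that resembles the training data of a target class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Pretend each class has a characteristic "profile" (e.g. an average patient record).
profile_0, profile_1 = rng.normal(0, 1, 20), rng.normal(0, 1, 20)
X = np.vstack([profile_0 + rng.normal(0, 0.5, (300, 20)),
               profile_1 + rng.normal(0, 0.5, (300, 20))])
y = np.array([0] * 300 + [1] * 300)
model = LogisticRegression(max_iter=1000).fit(X, y)

def confidence(x):
    """Black-box query: only the returned probability for class 1 is observed."""
    return model.predict_proba(x.reshape(1, -1))[0, 1]

# Gradient ascent on the input via finite differences, maximizing P(class 1 | x).
x = np.zeros(20)
for _ in range(200):
    grad = np.array([(confidence(x + 0.01 * e) - confidence(x - 0.01 * e)) / 0.02
                     for e in np.eye(20)])
    x += 0.1 * grad

# The reconstruction typically points toward class 1's profile and away from class 0's.
print("corr with class-1 profile:", round(np.corrcoef(x, profile_1)[0, 1], 2))
print("corr with class-0 profile:", round(np.corrcoef(x, profile_0)[0, 1], 2))
```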

Extraction attacks

According to NIST, in extraction attacks the adversary extracts the parameters or the structure of the model from observations of the model’s predictions, usually including the probabilities returned for each category. In addition to the theft of intellectual property, this type of attack violates confidentiality, one of the pillars of the CIA triad of information security. It also paves the way for evasion attacks.
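
A minimal sketch of the concept is shown below (a toy “victim” model and synthetic queries, all invented for illustration): by observing only the victim's predictions, the attacker trains a surrogate model that closely imitates it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# The victim: a model the attacker can query but cannot inspect.
X = np.vstack([rng.normal(-1.0, 1.0, (500, 5)), rng.normal(1.0, 1.0, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The attacker sends 2,000 synthetic queries and records the returned labels.
queries = rng.normal(0.0, 2.0, (2000, 5))
stolen_labels = victim.predict(queries)

# A surrogate trained on (query, response) pairs approximates the victim.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement between victim and surrogate on fresh inputs (fidelity of the copy).
fresh = rng.normal(0.0, 2.0, (1000, 5))
agreement = np.mean(victim.predict(fresh) == surrogate.predict(fresh))
print("surrogate agrees with victim on", round(100 * agreement, 1), "% of inputs")
```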

Now that we have covered the main classes of attacks targeting machine learning models, the question many are asking is: can these attacks be mitigated or prevented?

Defense: how to mitigate adversarial attacks on ML

Researchers have been hard at work on this problem, and many papers addressing it have been published.

Part of the big challenge is that many of these systems are black boxes, since in general the processing logic of these models is not accessible. However, it is also true that cybercriminals only need to find one crack in the system’s defenses for an adversarial attack to succeed.

Although this is still a complex topic, many researchers agree on one potential approach to improving the robustness of ML: generating a variety of attacks against a system in advance and training the system on them, so that it learns what an adversarial input looks like. By analogy with medicine, this builds an “immune system” against this type of attack. While the approach has its benefits, it is generally not sufficient to stop all attacks, as the range of potential attacks is too large to be generated in advance.
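
A minimal sketch of that idea, often called adversarial training, follows (toy data with one deliberately fragile feature, a linear model, and an FGSM-style attacker; these are illustrative assumptions, not a complete defense).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 1000
y = np.array([0] * (n // 2) + [1] * (n // 2))
sign = np.where(y == 1, 1.0, -1.0)[:, None]

# Feature 0: strong signal (+/-0.3) but tiny scale, easy to overwhelm with eps = 0.5.
# Features 1-5: weaker (+/-1.0) but noisier signal that survives the perturbation.
X = np.hstack([sign * 0.3 + rng.normal(0, 0.05, (n, 1)),
               sign * 1.0 + rng.normal(0, 1.00, (n, 5))])

def attack(model, X, y, eps=0.5):
    """Shift every feature against the true class, along the sign of the weights."""
    step = np.sign(model.coef_[0]) * np.where(y == 1, -1.0, 1.0)[:, None]
    return X + eps * step

baseline = LogisticRegression(max_iter=1000).fit(X, y)
print("baseline, accuracy under attack:", baseline.score(attack(baseline, X, y), y))

# Adversarial training: add perturbed copies of the data with their true labels.
X_aug = np.vstack([X, attack(baseline, X, y)])
y_aug = np.concatenate([y, y])
robust = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print("robust,   accuracy under attack:", robust.score(attack(robust, X, y), y))
```

The retrained model typically fares much better under the same attack because it stops leaning on the fragile feature, which mirrors the “immune system” analogy above; as noted, though, it only covers the kinds of attacks it was trained against.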


Another potential defense is to constantly change the algorithms that the machine learning system uses to classify data, i.e. to create a “moving target” by keeping the algorithms secret and changing the model from time to time.
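
As a rough illustration of that idea (a hypothetical pool of three models and a random selector, not a production design), the sketch below answers each query with a model chosen at random from a secret pool.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(-1.0, 1.0, (300, 5)), rng.normal(1.0, 1.0, (300, 5))])
y = np.array([0] * 300 + [1] * 300)

# A pool of different model types, each trained on its own bootstrap resample.
candidates = [LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=50, random_state=0),
              KNeighborsClassifier(n_neighbors=5)]
pool = []
for model in candidates:
    idx = rng.choice(len(X), size=len(X), replace=True)
    pool.append(model.fit(X[idx], y[idx]))

def predict_moving_target(x):
    """Answer each query with a randomly chosen model from the secret pool."""
    model = pool[rng.integers(len(pool))]
    return model.predict(x.reshape(1, -1))[0]

print(predict_moving_target(np.ones(5)))  # likely class 1; the model used varies per call
```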

As with any booming technology, exponential growth is often not matched by the necessary cybersecurity approach. That is why it is important for ML developers and data scientists to be aware of the potential risks associated with these systems, so that they can put mechanisms in place to validate and verify their data. Another approach is to regularly try to break your own models and identify as many potential weaknesses as possible.

Defenses can also be built with encryption tools and privacy-preserving techniques, among other methods. The figure below summarizes some of the recommendations:

AML: attacks and defenses, according to NIST
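
As one example of the privacy-preserving techniques mentioned above (our own illustration, not taken from the figure), the sketch below applies the Laplace mechanism of differential privacy to an aggregate query, making it harder to infer any individual training record from the released number.

```python
import numpy as np

rng = np.random.default_rng(10)

def private_count(records: np.ndarray, epsilon: float) -> float:
    """Differentially private count: the sensitivity of a count query is 1,
    so Laplace noise with scale 1/epsilon is added to the true value."""
    true_count = float(np.sum(records))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. number of patients with a given diagnosis in a hospital dataset (illustrative 0/1 flags)
records = rng.integers(0, 2, size=500)
print("true count:   ", int(records.sum()))
print("private count:", round(private_count(records, epsilon=0.5), 1))
```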

MITRE ATLAS

If AML were not a real issue, so many companies would not be contributing their knowledge to the design of MITRE ATLAS. Moreover, a report published by Gartner in 2020 estimated that through 2022, 30% of all AI cyberattacks would leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.

Many are already familiar with MITRE ATT&CK, a framework used to identify the common tactics, techniques, and procedures that Advanced Persistent Threats (APTs) use against various systems. MITRE ATLAS, in turn, is a framework created in 2020 that provides an overview of adversarial threats to AI systems. In other words, through MITRE ATLAS we can observe the adversarial tactics and techniques used against ML systems, based on real-world observations. In addition, an interesting feature of this framework is that it includes attack demonstrations.

As mentioned earlier, ML technology is being used more and more across industries, but greater awareness of its weaknesses is needed. As the attack surface keeps growing, the ATLAS matrix aims to build a knowledge base of current threats in order to make it easier for security researchers to understand this burgeoning technology.

Image: MITRE ATLAS

Conclusion

Many cybersecurity researchers worry that adversarial attacks could become a serious problem in the future as machine learning is integrated into a wide range of systems, including self-driving cars and other technologies.

Although this type of attack is not yet a favorite of cybercriminals, it may not be long before it becomes one, given how rapidly ML technology is advancing. Therefore, if these vulnerabilities are not addressed in time, we may face serious problems down the road.

Finally, external resources such as the NIST taxonomy and MITRE ATLAS mentioned throughout this post are good starting points for delving deeper into this topic.
