ADVERSARIAL METHOD FOR MALICIOUS ELF FILE DETECTION BASED ON DEEP REINFORCEMENT LEARNING


In recent years, research on detecting malicious executable and linkable format (ELF) files with deep learning has made significant progress. At the same time, adversarial attacks on these models have drawn widespread attention: attackers can craft adversarial examples that mislead neural networks into misclassifying malicious software as benign, thereby evading detection.

Although various methods for generating adversarial examples have been proposed, they are often unsuitable for modifying ELF files or fail to transfer across different detection models. To overcome these limitations, an adversarial example generation method based on deep reinforcement learning is proposed. The method constructs optimal perturbation bytes against a black-box target detection model, generating adversarial examples that preserve the original functionality of the ELF files without relying on the internal details of the target model.
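The core loop described above can be sketched as a simple reinforcement-learning-style search: an agent repeatedly applies functionality-preserving byte transformations to the ELF file and is rewarded when the black-box detector's malicious score drops. The sketch below is illustrative only, under assumed names: `toy_detector` stands in for the real black-box model, and the two actions (appending overlay bytes past the last loadable segment, which the ELF loader ignores) are simplified stand-ins for the perturbations a real system would use.

```python
import random

# Functionality-preserving actions on raw ELF bytes (illustrative; a real
# implementation would edit headers/sections far more carefully).
def append_random_bytes(data: bytes) -> bytes:
    # Bytes appended after the last mapped segment are ignored by the loader.
    return data + bytes(random.randrange(256) for _ in range(64))

def append_zero_padding(data: bytes) -> bytes:
    return data + b"\x00" * 64

ACTIONS = [append_random_bytes, append_zero_padding]

def toy_detector(data: bytes) -> float:
    """Stand-in for the black-box model: returns P(malicious).
    Purely illustrative: the score falls as padding grows."""
    return max(0.0, 1.0 - len(data) / 4096)

def generate_adversarial(data: bytes, threshold: float = 0.5,
                         max_steps: int = 50, epsilon: float = 0.1):
    """Epsilon-greedy bandit loop: favor the action whose running-average
    reward (decrease in detection score) is highest; stop once the score
    drops below the detector's decision threshold."""
    q = [0.0] * len(ACTIONS)   # average reward per action
    n = [0] * len(ACTIONS)     # times each action was tried
    score = toy_detector(data)
    for _ in range(max_steps):
        if score < threshold:
            return data, True
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[i])
        new_data = ACTIONS[a](data)
        new_score = toy_detector(new_data)
        reward = score - new_score
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental mean update
        data, score = new_data, new_score
    return data, score < threshold

sample = b"\x7fELF" + b"\x00" * 100  # fake "malicious" ELF stub
adv, evaded = generate_adversarial(sample)
```

A real system would replace the bandit-style update with a deep policy network and query the actual detector, but the interaction pattern (action, score query, reward, update) is the same.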

Experimental results show that the adversarial examples generated by this method achieve a 76.80% success rate in evading the target detection model, and that adversarial training with these examples enhances the model's robustness.
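The adversarial-training step mentioned above amounts to adding the generated evasive samples, still labeled malicious, back into the training set and refitting. A deliberately toy illustration, assuming a hypothetical one-dimensional "detector" that classifies by file length alone:

```python
def fit_threshold(mal_lens, ben_lens):
    """Toy 1-D classifier: files at or below the threshold are malicious."""
    return (max(mal_lens) + min(ben_lens)) / 2

def is_malicious(length, threshold):
    return length <= threshold

mal = [100, 200, 300]          # lengths of malicious samples
ben = [5000, 6000, 7000]       # lengths of benign samples

t0 = fit_threshold(mal, ben)   # original decision boundary
adv = [m + 2500 for m in mal]  # padded adversarial copies of the malware
evaded = [a for a in adv if not is_malicious(a, t0)]  # some slip past t0

# Adversarial training: keep the adversarial copies labeled malicious
# and refit, pushing the boundary to cover them.
t1 = fit_threshold(mal + adv, ben)
still_evading = [a for a in adv if not is_malicious(a, t1)]
```

With the numbers above, some padded samples evade the original threshold but none evade the refit one, which is the intuition behind the robustness gain reported in the article.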
