Last month, the United States Office of Personnel Management (OPM) announced that it was hit by what has been described as one of the largest breaches of government data in history. Hackers appropriated over 20 million records containing not only personally identifiable information but also highly sensitive security clearance data.
This event was far from isolated; one need only look at the data breaches perpetrated against firms such as Target, Ashley Madison, Chase, Sony, and a myriad of others over the past few years.
The director of OPM (who recently resigned) defended the agency's security protocols, pointing to the "millions" of attacks it successfully fends off every month.
A machine learning based cybersecurity framework could have detected the attack even though its signature was unknown.
Current security solutions are most useful in situations involving identified attack vectors. However, they fail against "zero day" attacks, where the attack "signatures" are unknown, and they don't scale well in highly distributed environments.
Machine learning, by contrast, detects anomalous behavior: it analyzes massive amounts of data in multiple formats, relies on behavioral analysis, and continually refines its own model of what normal activity looks like.
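To make the idea concrete, here is a deliberately tiny sketch of behavioral anomaly detection: build a statistical baseline of "normal" activity, then flag new observations that deviate sharply from it. Real systems use far richer features and models; the metric (records downloaded per hour), the data, and the threshold below are all hypothetical, invented purely for illustration.

```python
# Toy behavioral anomaly detection: learn a baseline from historical
# activity, then flag observations whose z-score exceeds a threshold.
from statistics import mean, stdev

def find_anomalies(baseline, new_events, threshold=3.0):
    """Return events that deviate from the baseline by more than
    `threshold` standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in new_events if abs(x - mu) / sigma > threshold]

# Hypothetical data: records downloaded per hour by one account.
normal_hours = [12, 9, 15, 11, 14, 10, 13, 12, 11, 14]
today = [13, 10, 500, 12]  # one hour shows a massive spike

print(find_anomalies(normal_hours, today))  # → [500]
```

The key point is that nothing here depends on knowing an attack signature in advance: the exfiltration-like spike is caught simply because it diverges from learned normal behavior.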
AI has already started making its presence known in the cybersecurity space. Google already uses it for spam detection, and Albatros incorporates machine learning into cybersecurity for the aeronautics industry.
In the years to come, I foresee a great machine-vs.-machine war in which even the hackers have adopted AI (anti-AI?), the stakes have grown larger, and the cost of failure is prohibitively high.