October 27, 2021 at 5:00 am ET
The cyberattack surface for modern, digital enterprises is always expanding. The recent Colonial Pipeline ransomware attack is just one example: By some estimates, the cost of unanticipated attacks like these could reach $6 trillion in 2021. And with large criminal enterprises and nation-state actors behind some of the most sophisticated cyberattacks in recent years, cybercrime is only growing as a threat.
Cybersecurity professionals need new ways to protect against these growing threats, and many of them are turning to artificial intelligence for help. That’s because AI can help secure a dynamic, evolving attack surface in real time. Three in four executives surveyed in a recent Capgemini Research Institute report said that AI helped their organization respond to breaches faster. But as useful as AI is for cybersecurity, it may be even more powerful in the hands of cybercriminals.
For years now, cybersecurity experts have warned of the growing threat of AI-enhanced cybercrime. Artificial intelligence can be used to crack tough cybersecurity defenses and expand both the range and severity of cyberattacks. AI-enhanced cybercrime may not be that common yet. But as AI grows in importance and availability, it won’t be long before the world’s top cybercriminals start to leverage the benefits of AI to nefarious ends.
Mention cybercrime, and many people imagine dedicated, skillful hackers meticulously infiltrating a company’s security systems or carefully planting a virus on a computer. In reality, cybercriminals often use automated, easy-to-use tools to launch large numbers of attacks at once.
One reason we’ve seen such a sharp spike in cybercrime in recent years is the development of new cybercrime tool kits such as ransomware-as-a-service. Cybercriminals no longer even have to develop their own attack software — instead, they can simply pay on the dark web to use ready-made software.
This approach dramatically lowers the barrier to entry for aspiring hackers and makes becoming an effective cybercriminal remarkably easy. Its one limitation is that this off-the-shelf software tends to be less sophisticated. That’s where AI can come into play.
AI can increase the scale of cybercrime by improving the cybercrime tools available to every hacker. In the near future, cybercriminals will be able to use AI to write code for new attack software that responds flexibly and innovatively to the security systems it encounters. At the same time, hackers will be able to rely on AI-enhanced spam and phishing bots to automatically craft convincing phishing scam emails.
With AI, cybercriminals will also be able to scrape websites more efficiently for user information and steal personal data, and even use that data to generate highly accurate user profiles to sell online. What’s more, deepfake AI technology can imitate human voices and faces — a potentially revolutionary tool in the hands of dedicated scammers preying on gullible employees or individuals.
In other words, AI can help cybercriminals across every aspect of their efforts, from scams and social engineering to brute-force attacks and the exploitation of security vulnerabilities. And as more AI tools are developed, it will become harder for cybersecurity professionals to keep up. That doesn’t mean it will be impossible to secure our data against the AI-enhanced cybercrime of tomorrow — but we need to start preparing today, before we’re caught off guard.
Tom Kelly is president and CEO of IDX, a Portland, Oregon-based provider of data breach and consumer privacy services such as IDX Privacy, as well as a Silicon Valley serial entrepreneur and an expert in cybersecurity technologies.
Morning Consult welcomes op-ed submissions on policy, politics and business strategy in our coverage areas. Updated submission guidelines can be found here.