How AI Can Be Weaponized as a Cybersecurity Threat

How do you stop a swarm of 1,000 drones from flying into a city square and dropping the grenades each carries? How do you trust that you are Skyping with your mother when AI can reconstruct her face, synthesize her voice, and digitally impersonate her entire person? In an age when AI can be deployed defensively to search for and mitigate vulnerabilities in infrastructure, we must also consider that it can be deployed to search for and exploit them.

Report on How Hackers Can Use Artificial Intelligence

If you have the time to read a 100-page report, we would advise you to take out those reading glasses for The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Released in February 2018, the report comes from a collaboration between several organizations you might recognize.

  • University of Oxford
  • Centre for the Study of Existential Risk
  • University of Cambridge
  • Center for a New American Security
  • Electronic Frontier Foundation
  • OpenAI
  • Future of Humanity Institute

The report looks deeply at emerging AI capabilities and, given the generally open stance of the AI community, points to substantial opportunities for cybercriminals and terrorist organizations to use AI for serious harm. It envisions realistic scenarios based on recent events and emerging technologies, many of which are chilling.

We do not intend to deliver a book review, but rather to highlight the most pertinent threat for Denver business owners: the significant enhancement of a malicious practice called spear phishing.

AI Has the Potential to Expand Spear Phishing to Smaller Targets and Make It Harder to Detect

Phishing is already a broadly automated practice because it is as easy as using Mailchimp. Hackers craft a bad link which, when clicked by a mail recipient, executes a malicious program such as ransomware or other malware. Once that program is developed, an email is sent out to an extraordinarily large list of email accounts. It is a numbers game at that point. Although the vast majority of us can spot a phishing email easily, there are always a few who might be too busy, or too tired, to detect anything fraudulent before it is too late.
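One way mail filters and careful readers catch mass phishing is by checking whether a link's visible text matches its real destination. Here is a minimal Python sketch of that heuristic; the sample domains below are invented for illustration, not taken from any real campaign:

```python
# Illustrative heuristic: a link whose visible text names one domain but
# whose actual href points somewhere else is a classic phishing red flag.
from urllib.parse import urlparse

def link_looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a domain differing from the real target."""
    # The visible text often lacks a scheme, so add one before parsing.
    shown = urlparse(display_text if "://" in display_text else "http://" + display_text)
    actual = urlparse(href)
    return shown.hostname is not None and shown.hostname != actual.hostname

# The visible text claims amazon.com, but the link really goes elsewhere.
print(link_looks_suspicious("www.amazon.com", "http://amaz0n-rewards.example/claim"))  # True
print(link_looks_suspicious("www.amazon.com", "https://www.amazon.com/gp/help"))       # False
```

Real filters combine many such signals, but even this one catches a surprising share of mass-mailed hoaxes.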

Spear phishing currently enhances that practice by taking the time to customize the email for a specific target. A hacker researches the target's life, friends, hobbies, and so on, and then writes a unique email that sounds as realistic as possible.

Imagine for a moment that instead of receiving a random-seeming gift of $100,000 from “Amazon,” with pixelated branding that doesn’t quite match the real thing, you were to receive an email from an acquaintance who writes: “Hey there, I saw your last FB post on receiving your pilot’s license. Congratulations! You have got to see this story about a private plane in New Jersey. Amazing!”

What is the likelihood that you would open that email? Much higher, probably. This is spear phishing.

Currently, however, spear phishing is not a widespread practice because of all the human research and customization it requires. Among all the scary political and defense-oriented threats from AI, the report mentioned above also predicts a significant expansion of spear phishing, with the research and message creation automated by AI.

There are already Human Resources programs that automatically scan and interpret social media signals for desired skills, expertise, and characteristics. Automating the research and message creation for spear phishing would allow nefarious individuals to greatly expand their reach: not just Fortune 50 executives and large financial institutions, but also mid-level staff and smaller organizations.
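To see how cheaply that kind of profile scanning scales once automated, consider this toy Python sketch; the sample posts and keyword list are invented for demonstration:

```python
# Toy sketch of the kind of keyword scan screening tools perform over public
# posts, illustrating how trivially "research" on a target can be automated.
import re

posts = [
    "Finally earned my pilot's license this weekend!",
    "Great quarter for the sales team.",
]
interests = {"pilot", "license", "sales", "golf"}

def tag_interests(text: str, vocab: set) -> set:
    """Return which known interest keywords appear in a post."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    return vocab & words

print(sorted(tag_interests(posts[0], interests)))  # ['license', 'pilot']
```

A few lines like these, pointed at thousands of public profiles, produce exactly the raw material a spear phisher needs to personalize a message at scale.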

What Can We Do Now to Prepare?

Thankfully, we have some time to prepare for AI-driven spear phishing. In the meantime, the report argues for tighter protections on AI software. Currently, much AI software is open source, meaning anyone can access it, and the report brings that practice into question.

There is clearly a trade-off, though. If AI is a technology available only to a few, would SMBs be on that short list, or would they be unable to utilize AI for their own businesses?

We cannot decide what should happen, but business owners and technology experts should start to think this through and debate it, so that we make the right decisions for ourselves and for global security.