TORONTO — Artificial intelligence will be able to beat humans at cyber offence by the end of the decade, predicted the keynote speaker at a series of lectures hosted by computer science luminary Geoffrey Hinton this week.
Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and statistics at UC Berkeley in California, made that projection Tuesday, saying it was based on his belief that AI systems will eventually become “superhuman” when tasked with coding and finding exploits.
Exploits are weak points in software and hardware that people can abuse. Cyber criminals often covet these exploits because they can be used to gain unauthorized access to systems.
Once a criminal has access through an exploit, they can run a ransomware attack where they encrypt sensitive data or block administrators from getting into software, in hopes of extracting cash from victims.
To find an exploit, Steinhardt said, a human has to read through all the code underpinning a system before they can carry out an attack.
“This is really boring,” Steinhardt said. “Most people just don’t have the patience to do it, but AI systems don’t get bored.”
Not only will AI undertake the drudgery associated with finding an exploit, but it will also be meticulous with the task, Steinhardt said.
Steinhardt’s remarks come as cybercrime has been on the rise.
A 2023 EY Canada study of 60 Canadian organizations found that four out of five had seen at least 25 cybersecurity incidents in the past year, and experts say some companies face thousands of attack attempts every day.
Many have hailed AI as a potential solution because it can be used to quickly identify attackers and gather information on them, but Steinhardt said it is just as likely to be used by people with nefarious intentions.
Already, he said, the world has seen instances where bad actors have harnessed the technology to create deep fakes — digitally manipulated images, videos or audio clips depicting people saying or doing things they have not said or done.
In some instances, bad actors have used deep fakes to call people while posing as a loved one who urgently needs money.
Businesses have been victims, too.
Earlier this year, media reported that a worker at Arup, the British engineering company behind prominent buildings including the Sydney Opera House, had been duped into handing over US$25 million to fraudsters who used deep fake technology to pose as the company’s chief financial officer.