Cybersecurity threats and misinformation powered by AI top the list of fears
Corporate Canada isn’t feeling secure about AI either, according to results from Cisco Systems Inc.’s second Cybersecurity Readiness Index. Although 92 per cent of Canadian organizations said they have an AI strategy in progress or completed — ahead of the global average of 61 per cent — only seven per cent feel fully prepared for AI, down from nine per cent in 2023.
That so few companies are ready to use AI today is concerning given that it is fast becoming one of the key technologies needed from a security standpoint, said Rob Barton, chief technology officer at Cisco Systems Canada Co.
“AI is the new frontline in the battle of cybersecurity and there is this arms race underway,” he said.
On one side are the bad guys using AI tools to develop their methods of attack; on the other are IT companies such as Cisco using AI to defend against them.
“I think it’s really important that companies recognize (AI’s) value and power,” Barton said. “Without it, I think it leaves us more vulnerable.”
Forty-three per cent of respondents to Cisco’s readiness index reported experiencing a cybersecurity incident in the previous 12 months, and Barton said AI was likely used by some of those attackers to get into company networks.
“The index also showed that over 63 per cent of organizations in Canada expect to be attacked within the next year and these are very costly attacks,” he said.
Almost half of those affected by cyberattacks said it cost them at least US$300,000.
One benefit of AI-powered security platforms is that they can collect data from across an organization’s network and quickly make sense of it to determine where an attack is coming from and what needs to be done to shut it down.
“Otherwise, you’re spinning your wheels for days or hours and could be left completely vulnerable,” Barton said.
On top of the external security threats plaguing organizations are the ones coming from their own employees as AI use becomes more prevalent.
Brian Matthews, senior manager of the Office of Technology Strategy at CDW Canada, said the significant use of unsanctioned AI tools represents a “massive security and governance risk” with the potential to become “one of the biggest shadow IT incidences in history.”
Senior employees have more access to proprietary and sensitive data, so using AI tools without controls in place further heightens security threats.
“It means a higher risk of data breaches, compliance issues, reputation damage and potential financial loss,” Matthews said.
Also concerning, he said, is that 31 per cent of employees who have access to approved AI tools in their workplace to help improve efficiency aren’t using them, with 42 per cent indicating they don’t believe AI has relevance to their specific tasks.
“That represents a big training opportunity,” Matthews said, adding that embracing AI in the workforce will be a necessity for companies to remain competitive.
Whether AI tools are formally rolled out or not, some employees are relying on trial and error or online forums and social media to learn how to use the technology, which means employers can’t afford to wait to implement their AI strategies.
“The good news is, if you train your workforce appropriately … and create formal AI policies, you’re going to have employees that are more comfortable with adoption,” Matthews said.
CDW’s report said more than 60 per cent of employees felt comfortable with AI in the workplace when such tools had been implemented by their organization, compared with 43 per cent of those in organizations without approved AI tools.
While AI strategies take shape and training gets underway, all organizations should be monitoring what tools their employees are using and then blocking applications as needed to decrease potential security risks.
“There are third-party AI experts that can make sure (an organization) is taking all considerations from a readiness and security governance data perspective and then also from an adoption and training perspective,” Matthews said.