AI is exploding, but caution is warranted in implementing the technology

Experts say the risks of AI are very real and often overshadowed by the promise of the technology

Businesses across different sectors, including entertainment, finance and health care, reported numerous concerns surrounding AI in their 2024 filings with the United States Securities and Exchange Commission. More than half of Fortune 500 companies told stakeholders about the potential risks of AI, while around 30 per cent mentioned the benefits, according to a report by Arize AI Inc., a company whose platform provides tools for measuring and evaluating AI systems.

While companies are inclined to be transparent in their filings, experts say the risks of AI are very real and often overshadowed by the promise of the technology.

“AI is one of the most complicated technologies humans have ever put out there, so the concerns are not incredibly surprising,” Jason Lopatecki, founder of Arize, said.

The company’s products help businesses monitor and troubleshoot complex systems so that they work as intended.

“AI is this century’s mission to the moon,” Jason Lopatecki says.

Constellation Brands Inc., an American beverage company, said that while it has implemented its own rules around AI, it can’t be sure that its employees or third-party providers are following the same framework.

Neither Lopatecki nor Tsai expects these legal concerns to be resolved within a year.

Some companies are also hesitant due to security worries, particularly those in health care. Viatris Inc. expressed concern around the disclosure of confidential details about its clients as well as proprietary company information. As a result, companies may limit the implementation of AI tools or ban employees from using them.

Companies also face ethical and reputational risks, stemming both from the reliability of AI and from the ways it can be wielded by users with malicious intent.

“You have to take full responsibility for the tools,” Tsai said, adding that includes the unintended and unforeseen consequences of what can be created using generative AI.

Misinformation, pornographic material and other harmful content that can be created using AI present a serious threat to companies and their customers.

In its filing, Motorola Solutions Inc. expressed concern about its reputation since “AI may not always operate as intended,” citing the potential for inclusion of “illegal, biased, harmful, or offensive information” in its datasets.

“It can be extremely harmful and detrimental to society,” Tsai said. “It’s great to have gen AI, but it can fool people. And then you have a real problem.”

Companies should also be wary of rushing to replace workers, he said, since displacing employees can lead to a de-skilled workforce. Customer service and human resources are other areas where implementing AI may be risky.

“AI doesn’t necessarily have the soft touch or empathy that good businesses need with their employees and customers,” he said. “It’s a useful tool, but AI isn’t the answer to everything.”

Lopatecki is more optimistic. He believes the desire to innovate will overcome risk aversion for some companies. That’s evident in another concern cited in filings by companies such as Netflix Inc. and S&P Global Inc.: the competitive risk that rivals will overtake them if they don’t move quickly enough.

“Many businesses have a tension between innovating versus the risk if you do. I think you have to innovate; you have to invest here,” he said. “There are incredible product lines being deployed, and companies are doing amazing things in different industries.”
