The emergence of super-smart artificial intelligence chatbots has raised concerns among some AI leaders, but it’s unlikely to slow development in Vancouver’s sector.
B.C. businesses are paying close attention to the warnings and benefits of artificial-intelligence applications.
“I think it’s got people’s attention,” said David Williams, vice-president of policy for the Business Council of B.C., who has studied the potential for AI to disrupt B.C.’s job market. And that’s a big concern, with 42 per cent of jobs in the province’s economy believed to be vulnerable to automation through AI.
The emergence last November of Microsoft-backed OpenAI’s ChatGPT application, a chatbot with an uncanny “intelligence,” has unnerved a lot of decision-makers, even among top AI executives.
“People started using it and seeing its applications in lots of different fields,” Williams added. “So I think that’s what’s really struck everybody, is how swift the take-up has been. One minute it wasn’t there, and now it is here and we’re using it.”
Until now, the tsunami of advancements in AI has been less visible, percolating away in automated customer-service applications, customized ad selections on websites or personalized recommendations in streaming services such as Netflix.
Within Vancouver’s burgeoning AI sector, the sudden wave of concern is causing less of a ripple, especially when weighed against the potential benefits from applications that aren’t the chatbots in the spotlight.
“Generative AI is not a monolith,” said Handol Kim, co-founder and CEO of the Vancouver firm Variational AI, which is focused on using generative computer learning to find molecules for use in medical treatments.
And at Variational, Kim said the articles about concern haven’t generated a lot of discussion while they’re busy “solving a very particular problem.”
“We’re trying to save lives, we’re trying to deliver better patient outcomes,” Kim said.
The broader public, however, has suddenly taken notice of warnings from many prominent AI developers suggesting that society hit pause on some development because there are dangers in racing ahead too fast.
Recently, the so-called “godfather of AI,” Geoffrey Hinton, added his name to the roster of critics when he left a top position at Google to voice concerns that “in a few years’ time they may be significantly more intelligent than people,” as he told interviewer Nil Koksal on the CBC program As It Happens.
“And I don’t know of any cases where a more intelligent thing is controlled by a less intelligent thing,” Hinton warned.
Those concerns have emerged along with the debut of ChatGPT, OpenAI’s publicly accessible and interactive generative AI tool, which has shown the technology’s powerful potential.
Generative AI refers to a computer’s ability to generate text, images and other media in response to prompts from users. The systems behind tools such as ChatGPT are trained on huge data sets of text and are known as large language models. And the systems continue to improve as they are trained on more data.
ChatGPT has shown that it can write convincing essays, pass exams, compile travel itineraries and even write computer code, though it can still be foiled by questions involving simple logic.
But to Hinton, a cognitive psychologist who pioneered many of the theories behind modern AI developments while at the University of Toronto, what is emerging is “a different form of intelligence” from human, biological intelligence, and in certain ways “may actually be much better.”
Tech companies have been working on generative AI tools behind the scenes for a long time, but OpenAI’s release of ChatGPT, now in its fourth iteration, sparked a more public AI race, with Google and other firms feeling forced to release their own tools.
In March, a group of AI executives and researchers signed an open letter calling on the big players to observe a six-month pause on the training of AI systems “more powerful than (ChatGPT 4).”
Canada has started on legislation aimed at guiding AI development, the Artificial Intelligence and Data Act, but there are concerns it remains too focused on dealing just with privacy elements such as using AI to impersonate individuals or manipulate video without permission.
Canadian tech visionary Mitch Joel agrees that is important, but examples of such deepfake videos are almost “parlour tricks,” compared with bigger-picture concerns about AI.
The big challenge is what the AI sector refers to as alignment, Joel said. That refers to whether or not AI will remain aligned with the values of the humans using it.
“If you’re giving AI the power to make choices, it may make choices that don’t work in the favour of humans,” Joel said.
Kim cautioned that the speed at which AI is moving will make it difficult for governments to step in and regulate.
“By the time you convene a working group on this and you go out to industry and experts,” industry has moved on to the next problem, Kim said. He added that development, generally, is inexorable.
“Obviously the end point is something that we have to look out for,” Kim said. “But … now that (AI) is distributed everywhere, everyone is doing it. Adobe, Google, Samsung, Microsoft, you name it. Everyone’s got it.”
Williams added that grappling with those concerns “is part and parcel of technological change. Human nature doesn’t change.”
“There’ll be good actors that use it for good and others that will use it for ill,” Williams said. So society will have to adapt its defences.
Another certainty about AI is that it will be disruptive in the labour force. Williams estimated that jobs in sales and service, finance, administration and equipment operation are among those most at risk of automation over the next two decades. So government also needs to work on policy to address those shifts, Williams said, adding that there will be downsides in ignoring the potential for using AI in ways that are complementary to skilled employment.
Williams argued that society has always been “a net beneficiary of new technologies,” so the challenge with AI is devising policies to help people whose employment is at risk of automation move out of those jobs.
“You want to have those adjustments take place to move people into jobs (where they are) using technology in their jobs rather than competing with technology for their job,” Williams said.