One year later, AI code signatories happy with decision but want more company

The code launched last September as a means to put some guardrails around the use of AI

Cohere, the Toronto AI firm that was the buzziest name among the signatories, sees it as “imperative for everyone to kind of be involved” with the code, “if for no other reason than just to make sure that (it) has the impact that you want to have in the industry.”

“Of course, we want more people,” said the company’s director of legal Kosta Starostin.

“It’s disappointing when our fellow Canadian companies maybe don’t sign up, but they, of course, have their own reasons and that’s completely up to them.”

The code was launched by the federal government last September as a means to put some guardrails around the use of AI and to act as a precursor to eventual legislation. It included promises to bake risk mitigation measures into AI tools, use adversarial testing to uncover vulnerabilities in such systems and keep track of any harms the technology causes.

While many in the group now totalling 30 say they were content spending the last year collaborating with household names and tech heavyweights on an issue of growing importance, they also believe the more, the merrier.

Starostin declined to comment on any of the specific reasons companies have cited for avoiding the code, but some of the holdouts have been the tech community’s most prominent names.

Mark Doble had qualms about the code, too.

“I was fairly skeptical at first and then, when I got into the details of it, it seems substantively nothing really meaningful or additive to what already exists,” said the chief executive of Alexi, a Toronto company building AI-based tools for the legal sector.

He feels Canada’s current employment, human rights, privacy and competition laws cover off most problems that could arise from AI and said the technology shouldn’t require the country to “re-evaluate, re-establish or add to those regulations.”

As a result, he labelled the code as both “performative” and “overreach.”

“Significant players in the AI ecosystem continue to express their interest in signing the code and we’ll be ready to announce another round of signatories soon,” she wrote in an email on Sept. 11.

“We encourage all companies in the Canadian ecosystem developing and deploying AI systems to join their peers who have already committed to operating in a safe and responsible manner.”

Diane Gutiw, vice-president of analytics, AI and machine learning at CGI Inc., said she would also welcome more sign-ups to “make sure we’re all working in the same framework.”

The Montreal-based tech consulting business viewed signing the code as a no-brainer because CGI had long been using its own set of principles designed to ensure its use of AI was transparent, protective of data, secure and reliable.

When Gutiw reviewed the tenets of the code, she found a lot of overlap with CGI’s own principles, so she said the company was “quite comfortable signing.”

Over at Cohere, some of the motivation for supporting the code came from the “fuzzy landscape” around AI, which was “moving very quickly” before the code arrived.

OpenAI had released AI chatbot ChatGPT to the world, sparking a race to innovate in the sector and a flurry of investment as brands began experimenting with it.

At the same time, AI luminaries like Geoffrey Hinton were warning advances in the technology could exacerbate biases and discrimination, cause unemployment or even spell the end of humanity.

“It wasn’t clear to us or to anybody else what the priorities were going to be for different governments,” Cohere’s Starostin said.

Once the government put a code together, he felt it “crystallized” the way forward for the country and gave companies a framework to rely on while they wait for the Artificial Intelligence and Data Act to finish winding its way through the House of Commons and come into force, likely next year.

Salesforce said both codes have sparked a “virtuous race to the top” because the agreements have given companies a clearer idea of what they can do to be safe and ethical with their AI.

Salesforce, for example, had always used adversarial testing, in which companies simulate attacks on their systems to uncover vulnerabilities, but signing the code encouraged it to ramp up such efforts, said Paula Goldman, the company’s chief ethical and humane use officer.

“Once you’ve made a commitment like this and you’re part of the community, it ends up being a wonderful opportunity to keep accelerating the progress,” she said.
