The legislation is still winding its way through the House of Commons and isn’t expected to come into effect until at least next year but is being watched intensely in the technology sector and beyond.
Many worry the legislation could curtail innovation and push companies to flee to other countries that are more hospitable to AI, but most agree that the technology industry cannot be left to decide on its own guardrails.
They reason the sector needs some parameters to protect people from systems perpetuating bias, spreading misinformation and causing violence or harm.
With the European Union, Canada, the U.S. and several other countries all charting their own paths toward guardrails, some in the tech community have called for collaboration.
Foster says there are some “really promising signs” it could come to fruition based on what she’s seen from the G7 countries.
“Everybody is saying the right things. Everybody thinks interoperability is important,” she said.
“But saying it’s important and doing it are two different things.”
Canada’s industry minister, François-Philippe Champagne, is largely responsible for whatever approach the country takes to AI.
Last summer, he told attendees at another tech conference in Toronto, Collision, that he feels Canada is “ahead of the curve” with its approach to artificial intelligence, beating even the European Union.
“Canada is likely to be the first country in the world to have a digital charter where we’re going to have a chapter on responsible AI because we want AI to happen here,” he said.
His government has said it would ban “reckless and malicious” AI use, establish oversight by a commissioner and the industry minister and impose financial penalties.
Whatever Canada settles on, Foster said it has to be “conscious of the cost of regulation” because asking companies to undergo evaluations to ensure their software is safe can often be time-consuming and much of that work is already being done.
She feels the best regulatory model will identify high-risk AI systems and ensure there are steps in place to mitigate any harms they could cause but won’t regulate things that shouldn’t be regulated.
Among the AI systems she thinks can go without regulation are “mundane” systems like those that get baggage to travellers at an airport faster.
“I think (it’s about) being focused on the risks that we need to address and then really kind of not getting in the way of really valuable technology that’s going to make our lives better,” Foster said.
In a separate panel, Adobe’s head of global AI strategy, Emily McReynolds, said companies also have a role to play in the conversation around regulation.
Adobe, she said, has committed to not mining the web for data it uses in its AI systems and instead opted to license information. She positioned the move as one that brings transparency to the company’s work but also ensures it is “really respecting creators,” who tend to use the company’s software.
She said Adobe had chosen to take a proactive approach to issues like data and told other businesses “it’s really important to understand that building AI responsibly is not something that comes after.”
This report by The Canadian Press was first published Oct. 2, 2024.
Tara Deschamps, The Canadian Press