The federal government has launched the Canadian Artificial Intelligence Safety Institute to study the risks posed by advanced AI models, following similar actions in other countries.
Innovation Minister François-Philippe Champagne made the announcement Tuesday in Montreal at the Mila AI institute. “If there’s no trust, there will be no adoption,” he said of AI. “And if there’s no adoption, we will squander the incredible potential of many new technologies.”
The goal of AI safety is to study and test the ways in which powerful models can cause harm or be exploited for nefarious purposes, and ensure the technology is developed to benefit society. Safety has become a growing concern as AI models have become more sophisticated and are being deployed more widely.
The risks of AI range from biased decision-making and misinformation to bad actors using the technology to develop bioweapons and conduct more sophisticated hacking operations. Some AI experts, including Nobel Prize winner Geoffrey Hinton, have raised concerns that humanity could lose control of super-intelligent AI systems in the future.
In the 2024 budget, the federal Liberal government allotted $50-million over five years to create the Canadian Artificial Intelligence Safety Institute (CAISI), part of a broader $2.4-billion AI package.
The new institute will be housed within Innovation, Science and Economic Development Canada. The government has allocated $27-million to the Canadian Institute for Advanced Research (CIFAR) to administer the research stream of CAISI. The work will be done in collaboration with the country’s three national AI research institutes: Amii in Edmonton, Mila in Montreal and the Vector Institute in Toronto.
The National Research Council of Canada, meanwhile, will focus on AI issues that are priorities for the government, such as cybersecurity.
There is no shortage of publicly funded AI research in Canada already. In addition to the three national AI centres, the Schwartz Reisman Institute for Technology and Society at the University of Toronto also works on AI safety.
“It allows us to do some new things that we couldn’t do otherwise,” said Elissa Strome, the executive director of the pan-Canadian AI strategy at CIFAR, referring to the safety institute. “And it’s a new area of focus and concentration for our research community.”
Both the U.K. and the U.S. have created AI safety institutes in the past year, while the European Union has established the AI Office, which includes a safety unit.
AI safety work requires buy-in from industry, particularly for evaluating powerful models, because companies are not compelled to submit to external testing before releasing them publicly.
“That kind of access isn’t going to be something that’s easy to come by,” Yoshua Bengio, the scientific director of Mila and co-chair of the federal government’s advisory council on AI, said in an interview.
International collaboration is a priority for CAISI. Eleven governments, including Canada, agreed in May to work together on safety research, among other areas. A group of AI safety centres, including a Canadian delegation, is scheduled to meet for the first time next week in San Francisco.
AI companies generally support international partnerships, too, as a way to ease the burden of complying with regulations and requirements across multiple jurisdictions.
In November 2023, a handful of large AI developers and a number of countries, including Canada, agreed to a plan to test powerful models before releasing them to the public. “We can help not just by repeating the same things that our partners are doing in other countries, but also contributing our own amazing expertise,” Prof. Bengio said.
But the voluntary approach has not been entirely smooth. “I think everybody in Silicon Valley is very keen to see whether the U.S. and U.K. institutes work out a way of working together before we work out how to work with them,” Nick Clegg, Meta Platforms’ president of global affairs, told Politico earlier this year.
The focus on AI safety follows an earlier regulatory push. The federal Liberals introduced Bill C-27 in 2022, which includes a framework for regulating AI. The bill is still under review by the House of Commons industry and technology committee.