On June 16, 2022, the federal government introduced Bill C-27, the Digital Charter Implementation Act, 2022 (Bill C-27 or Bill). If passed, the Bill would significantly reform federal private-sector privacy law. It would also introduce rules to regulate “high-impact” artificial intelligence (AI) systems under a new Artificial Intelligence and Data Act (AIDA).
AIDA would, among other things:
Establish a new Artificial Intelligence and Data Commissioner to support the Minister of Innovation, Science and Industry in enforcing AIDA
Make it an offence to make available for use, or to use, an artificial intelligence system that is likely to cause serious harm
Like the European Union’s recently proposed Artificial Intelligence Act, AIDA would take a harm-based approach to regulating AI by creating new obligations for yet-to-be-defined “high-impact systems.”
Below we provide an overview of the new proposal to regulate AI systems. Be sure to read our companion Blakes Bulletin on Bill C-27’s proposals to reform private-sector privacy laws.
Persons responsible for AI systems would be required to determine whether those systems are high-impact. If so, they must establish measures to identify, assess, mitigate and monitor the risk of harm or biased output that could result from use of the system. Biased output would include content generated by the system that adversely affects an individual on one or more of the prohibited grounds of discrimination set out in the Canadian Human Rights Act.
Similar to the transparency obligations under the proposed Consumer Privacy Protection Act (CPPA), AIDA would require persons who make high-impact systems available for use, and persons who manage their use, to publish a plain-language description of the system’s intended use; the types of content it is intended to generate or make; the mitigation measures established; and any other information that may be prescribed by regulation.
These persons would also be subject to a new mandatory reporting obligation, requiring notification to the Minister if use of the regulated system results, or is likely to result, in material harm. Material harm is not defined in the proposed legislation and is likely to be defined in regulations.
AIDA would be administered and enforced by a new Artificial Intelligence and Data Commissioner. Unlike the Privacy Commissioner of Canada, who is an independent agent of Parliament, the Artificial Intelligence and Data Commissioner would be part of the Ministry of Innovation, Science and Industry, with powers delegated to it by the Minister.
The powers of the Minister that could be delegated to the Artificial Intelligence and Data Commissioner include ordering a person to:
Provide any required records
Conduct an audit with respect to possible contraventions of AIDA, including by engaging the services of an independent auditor
Implement any measures specified in the audit report
Cease using a high-impact system they are responsible for or making it available for use if there are reasonable grounds to believe use of the system gives rise to a serious risk of imminent harm
Publish any information (except for confidential business information) related to the compliance obligations under AIDA
Further, a person found to have committed a violation under the regulations could be liable for a monetary penalty in an amount to be established by regulation. The proposed AIDA does not specify the amount of these penalties.
AIDA would also make it an offence to:
Contravene the obligations respecting high-impact systems
Obstruct or provide false or misleading information to the Minister, anyone acting on behalf of the Minister (such as the new Artificial Intelligence and Data Commissioner) or an independent auditor
Possess or use personal information for the purpose of designing, developing, using or making available for use an AI system, knowing or believing that the information was obtained or derived, directly or indirectly, as a result of an offence or a contravention of a law, including acts or omissions occurring outside Canada that would have constituted an offence had they occurred in Canada (for example, a contravention of the consent requirements of the proposed CPPA)
Without lawful excuse, and knowing that or being reckless as to whether the use of an AI system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property, make the system available for use or use it, where its use causes such harm or damage
Make an AI system available for use with intent to defraud the public and cause substantial economic loss to an individual where its use causes that loss
A person who commits an offence under AIDA would be liable, on conviction on indictment, to a fine of up to the greater of C$25-million and 5% of the person’s gross global revenues.
Bill C-27 is expected to be debated in the House of Commons in the fall of 2022, and further amendments may be proposed.
Stay tuned for further updates throughout the summer detailing the impact of specific proposals in Bill C-27 on Canadian businesses.
For further information, please contact:
Ellie Marshall +1-416-863-3053
Wendy Mee +1-416-863-3161
Ronak Shah +1-416-863-2186
or any member of the Privacy & Data Protection group.
Blakes and Blakes Business Class communications are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. We would be pleased to provide additional details or advice about specific situations if desired.
For permission to republish this content, please contact the Blakes Client Relations & Marketing Department at email@example.com.
© 2022 Blake, Cassels & Graydon LLP