
Update for Canadian Businesses on the Regulation of AI

October 4, 2021

A core tenet of any strong data security program is a deep understanding of the organization’s data and systems and how they are regulated. As automated decision-making (ADM) and artificial intelligence (AI) become more commonplace, organizations should understand the extent to which their existing applications rely on these systems and how those systems are regulated. Currently, ADM and AI systems are primarily regulated under data protection legislation to the extent that they process identifiable information. Given increased attention to the potential harms and unintended consequences associated with ADM and AI, some jurisdictions are in the process of regulating, or have signalled an intention to regulate, ADM and AI directly.

Below we summarize five key developments in efforts to regulate ADM and AI:

  1. On September 21, 2021, the Quebec National Assembly adopted Bill 64 to reform public- and private-sector privacy laws. The amendments include new rules that require organizations to inform individuals when a decision about them is based exclusively on automated processing. Organizations must also comply with an individual’s request for the personal information used to make the decision and for the reasons for the decision. Individuals also have a right to have the personal information used by the ADM system corrected. These new obligations will come into force in September 2023.

  2. As part of its Digital and Data Strategy, the Government of Ontario issued a white paper outlining proposals for standalone private-sector privacy legislation. The proposals, if enacted, would create obligations on the use of AI similar to those introduced in Quebec and would prohibit the use of children’s data for AI.

  3. Bill C-11, the federal government’s proposed reform of private-sector privacy law, died on the Order Paper when the 2021 federal election was called. The bill would have required organizations that use ADM systems to inform individuals of how those systems work. If privacy reform legislation is tabled again, it will likely contain similar requirements.

  4. The EU has proposed a framework that would prohibit certain applications of ADM and AI (including real-time remote biometric identification systems and social credit scoring) and require organizations to maintain extensive technical documentation, record-keeping and human oversight. Importantly, the proposed regulation would apply to Canadian businesses that operate ADM or AI systems in the EU or in respect of individuals in the EU, or even where the output produced by the system is used in the EU. The proposed penalties for infringement include fines of up to six per cent of total worldwide annual revenue.

  5. So far in 2021, general AI bills or resolutions have been introduced in at least 17 U.S. states and adopted in Alabama, Colorado, Illinois and Mississippi. These regulatory efforts include establishing review committees to advise on ADM and AI, restricting the use of these systems in the public sector and the insurance industry, and targeting specific applications, such as the use of AI in recruitment videos. Despite increased attention from the Federal Trade Commission, no comprehensive national framework to regulate AI or ADM has been proposed.

Have more than five minutes? Contact Ellie Marshall, John Lenz, or any member of our Privacy & Data Protection group.