A core tenet of any strong data security program is a deep understanding of the organization’s data and systems and how they are regulated. As automated decision-making (ADM) and artificial intelligence (AI) become more commonplace, organizations should understand the extent to which their existing applications rely on these systems and how those systems are regulated. Currently, ADM and AI systems are regulated primarily under data protection legislation, to the extent that they process identifiable information. Given increased attention to the potential harms and unintended consequences associated with ADM and AI, some jurisdictions are regulating, or have signalled an intention to regulate, ADM and AI directly.
Below we summarize five key developments in efforts to regulate ADM and AI:
On September 21, 2021, the Quebec National Assembly adopted Bill 64 to reform public- and private-sector privacy laws. The amendments include new rules that require organizations to inform individuals if a decision about the individual is based exclusively on automated processing. Additionally, organizations must comply with an individual’s request for the personal information that was used to make the decision and the reasons for the decision. Individuals also have a right to have the personal information used by the ADM system corrected. These new obligations will come into force in September 2023.
As part of its Digital and Data Strategy, the Government of Ontario issued a white paper outlining proposals for standalone private-sector privacy legislation. The proposals, if enacted, would create new obligations on the use of AI, similar to those introduced in Quebec, and would prohibit the use of children’s data for AI.
Bill C-11, the federal government’s proposed legislation to reform private-sector privacy legislation, died on the Order Paper with the 2021 federal election. The bill would have required organizations that use ADM systems to inform individuals of how these systems work. If privacy reform legislation is tabled again, it will likely contain similar requirements.
The EU has proposed a framework that would prohibit certain applications of ADM and AI (including real-time remote biometric identification systems and social credit scores) and require organizations to maintain extensive technical documentation, record-keeping and human oversight. Importantly, the proposed regulation would apply to Canadian businesses that operate ADM or AI systems in the EU or on individuals in the EU, or even where the output produced by the system is used in the EU. The proposed penalties for infringement of the AI regulation include fines of up to six per cent of total worldwide annual revenue.
So far in 2021, general AI bills or resolutions have been introduced in at least 17 U.S. states and adopted in Alabama, Colorado, Illinois and Mississippi. These regulatory efforts include establishing review committees to advise on ADM and AI, restricting the use of these systems in the public sector and the insurance industry, and targeting specific applications, such as the use of AI in recruitment videos. Despite increased attention from the Federal Trade Commission, no comprehensive national framework to regulate AI or ADM has been proposed.
Have more than five minutes? Contact Ellie Marshall, John Lenz, or any member of our Privacy & Data Protection group.
Blakes and Blakes Business Class communications are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. We would be pleased to provide additional details or advice about specific situations if desired.
For permission to republish this content, please contact the Blakes Client Relations & Marketing Department at [email protected].
© 2023 Blake, Cassels & Graydon LLP