On April 7, 2026, Quebec’s securities regulator, the Autorité des marchés financiers (AMF), published the final version of its Guideline for the Use of Artificial Intelligence (Guideline) (available only in French at the time of this publication), following a public consultation held by the AMF in fall 2025. The Guideline, which takes effect on May 1, 2027, applies to authorized insurers, financial services cooperatives, authorized trust companies and authorized deposit institutions operating in Quebec (individually, a financial institution, and collectively, the financial institutions).
The Guideline’s publication is part of a broader movement to regulate the use of artificial intelligence (AI) in the financial sector, at both the provincial and federal levels. Notably, the Office of the Superintendent of Financial Institutions has included AI models within the scope of its Guideline E-23 on Model Risk Management, which takes effect on April 1, 2027. For more information on this topic, see our previous Blakes Bulletin: OSFI Releases Final Guideline E-23 for Model Risk Management and AI Use by Federally Regulated Financial Institutions.
The Guideline sets out the AMF’s expectations regarding the steps financial institutions should adopt to effectively manage the risks associated with AI systems and to ensure that clients are treated fairly. An AI system is defined by the AMF as follows:
“An automated system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
It should be noted that the Guideline applies to any use of an AI system by a financial institution, whether or not it involves the processing of client files.
Institution-Wide Governance
The Guideline establishes a framework outlining the expected roles of the board of directors (board) and senior management of a financial institution regarding the use of AI systems. These expectations are in addition to their existing governance responsibilities.
- The board must, among other things, ensure that senior management promotes a corporate culture focused on the responsible use of AI. It must also ensure that the collective competence of the board members is sufficient to clearly understand the risks incurred by the financial institution, particularly where AI systems are used to carry out critical activities.
- For its part, senior management must ensure that adequate governance mechanisms are in place to manage and control the risks associated with AI systems. Senior management must also maintain a sufficient level of knowledge about the AI systems used, given the associated risks, technological developments and staff turnover. Furthermore, the AMF expects a member of a financial institution’s senior management to be designated as accountable for all AI systems used within the institution.
Risk Management and Risk Ratings for AI Systems
The AMF expects financial institutions to thoroughly manage the significant risks associated with the AI systems used. As such, a financial institution must be able to identify, assess, quantify and mitigate the risks associated with the AI systems it uses, in order to maintain a comprehensive view of its exposure to inherent and residual risks.
In this regard, the Guideline sets out a risk-based classification approach that uses risk ratings. The AMF expects financial institutions to maintain a centralized directory of all their AI systems, assign a risk rating to each system, review each rating periodically and update it as needed, and adapt approval procedures and monitoring activities accordingly.
This risk-based classification serves as the anchor for all expectations applicable to an AI system throughout its lifecycle. It aims to ensure that a uniform methodology is used and that risk considerations are at the core of a financial institution’s decision-making process when using AI systems.
AI System Lifecycle
The Guideline sets out the AMF’s expectations across seven stages of an AI system’s lifecycle, including the rationale for using an AI system, as well as ongoing AI system monitoring. Financial institutions will have to develop and document appropriate governance strategies, including policies, processes, procedures and controls, that are proportional to an AI system’s risk rating.
In this respect, for each of these seven stages, the AMF expects financial institutions to implement the following measures:
- Choosing to Use an AI System. Identifying and documenting the organizational needs justifying the use of an AI system and, for each revalidation, reassessing whether it remains the most appropriate solution in light of its risk rating.
- Training Data. Ensuring the quality of all data used by AI systems, during both their training and deployment.
- Procurement or Development. Taking into account an AI system’s risk rating and explainability requirements when selecting a solution.
- Validation. Establishing a validation process for AI systems that includes assessing the explainability of outputs and each system’s cybersecurity, and defining the triggers for this process, to ensure effective control over specific risks such as bias, discrimination, dynamic adjustment, hallucinations and intellectual property issues.
- Approval. Applying mitigation measures and constraints in accordance with the financial institution’s risk appetite; for example, requiring human review of system outputs when an AI system’s risk rating is high.
- Deployment. Conducting relevant risk assessments, including assessments of cyber risk and infrastructure vulnerabilities, prior to deploying AI systems.
- Monitoring. Implementing ongoing monitoring of AI system performance and use, with a particular focus on autonomous AI systems and models using dynamic adjustment.
Sound Commercial Practices and Fair Treatment of Clients
In the Guideline, the AMF sets out specific expectations for financial institutions that use AI systems in direct interactions with clients. These expectations align with the AMF’s requirements for sound commercial practices but are adapted to the specific context of AI systems.
In this regard, the AMF expects the following:
- Financial institutions should ensure that their code of ethics applies to the use of AI systems and reflects high standards of ethics and integrity.
- Financial institutions should identify the variables used by AI systems that may give rise to discriminatory outcomes, monitor their use and implement the necessary corrective measures without delay.
- Where AI systems influence decisions impacting clients, financial institutions should document and monitor potential biases that may affect certain groups.
- Clients should be informed that they are interacting with an AI system, regardless of the communication channel being used.
- Financial institutions should inform clients of their ability to request human assistance when interacting with an AI system and establish mechanisms that allow for such assistance to occur in a timely manner.
- Any content generated by an AI system should be accompanied by a clear statement to that effect.
- Financial institutions should provide a simple and clear explanation to clients who are subject to decisions made by or with the assistance of an AI system.
Next Steps
As mentioned above, the Guideline takes effect on May 1, 2027. The AMF reminds financial institutions of their responsibility to apply the principles and meet the expectations set out in the Guideline, and to do so in accordance with the principle of proportionality, taking into account each institution’s nature, size, complexity and risk profile.
For more information, please contact the authors or any other member of our Financial Services or Technology groups.