
Governing the Internal Use of Artificial Intelligence – Key Considerations

February 17, 2026

Many studies indicate that more than 70% of companies are making use of artificial intelligence (AI) internally. In some cases, that means the development of an agentic tool or a customer-facing chatbot to bring efficiencies to a particular workstream. In other cases, it may be more general-purpose uses by employees of widely available AI tools. Increasingly, we see evidence of this in our interactions with our clients. Sometimes a client tells us that they first considered how to address an issue through the use of an AI-assisted internet search, or the formatting of an email or correspondence indicates it was drafted by an AI tool. Other times, a virtual notetaker will simply appear in an online meeting.

While these tools can create efficiencies and reduce costs, managing their use within an organization is critical. By their very nature, even casual use of these tools by companies and their employees can introduce significant risks that may not be immediately apparent. This bulletin identifies several of these risks and provides suggestions for their management.

The Threat to Established Legal Rights

In many cases, the use of AI tools requires that information be transmitted by the user to a third party for processing, with the resulting product returned to the user. That element — the transmission of data outside of an organization — is an activity that can be at odds with not only traditional best practices for preserving the confidential treatment of information but also the legal requirements for maintaining valuable rights.

For example, to patent a technology, the invention must not have been publicly disclosed. Consequently, the transmission of confidential information to a third party through the use of an AI tool could compromise the patentability of an idea. To protect the ability to obtain a patent, the information provided to any external AI tool must be carefully controlled. This includes search assistants, conversational chatbots, tools used for web-connected research and creative tools. Notably, it also includes the AI notetaking tools that are increasingly used to memorialize and summarize meetings efficiently.

A similar issue arises with respect to preserving solicitor-client privilege. The confidential treatment of information shared between a person or company and their legal counsel is a fundamental principle of law, but the protection exists only for so long as the information is not shared with other parties. Even the use of an AI notetaking tool can bring the protection afforded by solicitor-client privilege to an end. As a result, there are concerns about the discoverability in litigation of any information that is processed through an external AI tool.

More broadly, there is concern about the loss of confidentiality generally. This presents both the risk to a company of losing confidential treatment of proprietary information as well as the risk of breach of contract where the information has been provided by a third party under the terms of a confidentiality or non-disclosure agreement that prohibits such disclosure. As such, it is critical that companies actively restrict and monitor employee use of these tools to ensure confidential and privileged information remains protected.

The Threat to Personal Information and Security Safeguards

There is growing concern about security flaws that arise when untested technologies are introduced into an existing information technology ecosystem. For example, without strong safeguards, bad actors may be able to access internal systems where a new AI tool’s processing is connected to the public-facing internet. Vetting these services as closely as any other major technology procurement is vital to protecting companies from unanticipated harms. This concept is further explored in our recent bulletin on AI sovereignty.

There are also concerns that providing certain information, such as personal information, can constitute a breach of statutes that govern the use and disclosure of such information, such as Canada’s Personal Information Protection and Electronic Documents Act. Legislators are grappling with how to manage the impact of these powerful tools on individuals, and it should be anticipated that regulations governing the use of personal information with AI tools will expand federally and provincially in the near future. Compliance with some of these, which are likely to be driven by transparency obligations, may require significant lead times to avoid disruption. As a result, informed monitoring of both current and anticipated legal requirements is important.

The Need for Transparency and Human Oversight

While regulatory frameworks around AI continue to develop, a common element in policy discussions is the importance of transparency, such as indicating to a user when they are interacting with a human versus an AI agent in a customer service setting. Even in the absence of comprehensive AI regulations, marketing rules under the Competition Act and consumer protection laws continue to apply and prohibit false or misleading statements. Companies should take great care to meet any disclosure requirements associated with their use of AI systems and use technical documentation to develop appropriate notices to users, such as watermarks or other disclaimers.

There are also many reports of companies delivering “hallucinated” reports to clients, in which underlying citations or other data are not based on real information. These hallucinations (technically known as “confabulations”) generally arise from the use of publicly available AI tools whose models are intentionally weighted to generate answers that look like the right answer to a user prompt, encouraging more use of the tool at the expense of accuracy. To avoid the legal and reputational risks arising from these hallucinations, such as breach of contract claims, companies should ensure appropriate human oversight is in place wherever AI tools are used to generate outputs.

The Need for Board-level Accountability and Governance

While AI presents many exciting opportunities, its risks must be carefully managed. This means that directors and officers must adapt traditional governance practices to ensure appropriate oversight of AI use.

As a first step, companies should adopt an appropriate AI Governance Policy that provides an ongoing framework for the oversight of this rapidly evolving technology, including an AI Use Policy that clearly establishes expectations for employees, service providers, consultants and other personnel.

Consistent with their fiduciary duties, directors and officers should ensure that they have an appropriate understanding of the technology underlying AI tools in use or contemplated to be used within their organization, how these tools will be applied and the relevant legal constraints.

Companies must be familiar with the applicable regulatory framework and ensure ongoing compliance. They should also remain informed of anticipated developments to avoid challenging transitions to accommodate new regulations.

Companies should understand any contractual restrictions on the use of AI, as well as limitations on the use of third-party data that may be at odds with the use of an AI tool, in order to ensure compliance with all contractual obligations. Companies must also understand the use of AI by any service providers to whom they transmit data and ensure those uses are consistent with the company’s own obligations.

AI tools offer users the chance to realize significant efficiencies. This bulletin is not suggesting that their use will inevitably lead to significant problems; rather, we are noting some of the risks involved in applying these new and rapidly evolving tools and the approaches to governing them that can help manage these concerns.

For more information, please contact the authors or any member of our Emerging Companies & Venture Capital or Artificial Intelligence groups.
