Organisations are increasingly reaching for AI tools to record, transcribe or summarise meetings. These tools promise to reduce the time spent preparing notes and action lists, provide searchable records of meetings and may help to capture institutional knowledge. Businesses are moving quickly to pilot and deploy a variety of use cases, using a range of tools, targeting different types of meeting and, at the frontier, exploring sophisticated uses for the AI-generated outputs.
The opportunities presented and the pace of uptake are clear. In this blog we identify some of the key legal issues to consider when building or deploying AI transcription tools and discuss the potential mitigants that may help businesses to capitalise on these tools and maintain momentum, whilst managing potential risks. As with any AI use case, good AI governance should not only manage a wide and uncertain array of risks but, more importantly, help to encourage adoption. Governance is a driver of innovation. Getting this balance right is how we have helped our clients, be they businesses, institutions or governments.
As with any AI use case, the specific risks and mitigants for AI transcription tools will vary according to the jurisdictions involved, how the specific tool works (e.g. the underlying AI models, system functionality and orchestration, and data retention), the terms of use and the use cases that are proposed. For the purposes of this blog, we have adopted a jurisdiction- and use-agnostic view. We also limit our comments to the summary and transcription of meetings, and not any related tasks that workplace AI assistants can perform. Many of these risks are not new. However, in this context they may be amplified.
Use and configuration
Accuracy and reliance
It is well known that AI-generated transcripts and summaries may be inaccurate (and the degree of inaccuracy varies unpredictably from task to task). If outputs are treated as “official” records without verification, inaccuracies may resurface in disputes, complicate investigations, or adversely inform subsequent decisions. All of this may happen months or years after the event in question, when the AI-generated record may be the only record and therefore difficult to disprove. The risk is heightened in board or regulatory meetings, HR processes, negotiations, and other sensitive contexts. These risks can be reduced through clear labelling of raw outputs (e.g. automatically applying “AI-generated, draft only” labels to raw transcriptions), human review, controlled distribution and appropriate document retention arrangements.
Scope and prohibited uses
Recommended and prohibited uses should be considered at the outset. Not every meeting (or part of a meeting) will be suitable for transcription.
Depending on an organisation’s risk appetite, the tool may be deactivated for certain meetings, such as those involving:
(i) legal counsel providing advice, at least on particularly sensitive matters (see also the discussion of privilege below)
(ii) third parties whose contracts or policies restrict AI transcription or the use of AI to process their data
(iii) the processing of special category personal data (for example, certain HR meetings, see data privacy section below)
(iv) situations where transcription is prohibited by law (for example it is a contempt of court to record a court hearing in England without permission), or
(v) other meetings where sensitive or confidential information is shared (e.g. discussions in relation to M&A or other transactions, contractual negotiations of any kind, or conduct that may give rise to a dispute).
In relation to meetings with third parties, organisations may need to reconcile their policies with those of the counterparty. This may result in issues if these approaches do not align, or if the parties do not agree on which tool to use. Training for employees and clear procedures can address how an organisation’s representatives should act in those circumstances.
Many businesses are also considering AI transcription for board meetings. Specific requirements for these meetings may apply under company law and the company’s constitutional documents. For example, raw AI-generated records may not be sufficiently accurate, nor sufficiently evidence that directors have discharged their statutory duties. The potential cultural impact of these tools is also worth considering in the use case design. Use of these tools may have a “chilling effect” on frank conversations.
Use case creep
AI tools may be repurposed, or their outputs reused in ways that extend beyond the initial scope. Whilst use of the AI tool may be appropriate in a well-designed and considered environment, use of the AI tool or its output in a different context (or for parts of meetings for which its use was not intended) may expose the business to unmanaged risk. Governance is key. Well-maintained policies, human oversight, individual “meeting owner” accountability and periodic training for users will reduce this risk.
Access and distribution
Without controlled distribution, cameo attendees and external participants may inadvertently receive full transcripts or summaries of a meeting. This may result in a loss of confidentiality and, in some circumstances, privilege (see below). Businesses should explore configuration options to prevent the automatic dissemination of the raw outputs, and enable the appointed meeting owner to control distribution as appropriate.
The available configuration options will depend on the tool. Some tools make the raw outputs available initially on an online platform, which an authorised user can access to download, review and edit to create an approved version for distribution. This configuration can also be helpful to avoid “communications” of potentially inaccurate transcripts which could be caught by some regulatory disclosure regimes. In any event, training for users will also be a key line of defence.
Regulation
Record-keeping
AI summaries and transcripts may constitute “records” under various laws and may be disclosable documents in any regulatory investigation, dispute or enforcement action. Creation of a record may trigger retention obligations and regulators may request or require access to those records. The more records generated, the more information there is to keep secure, retrieve, and disclose. Businesses should consider their record-keeping obligations and data retention policies, including with respect to the review and deletion of the raw AI-generated outputs.
Data protection and privacy
Several specific privacy considerations arise in the context of AI transcription tools.
For example, depending on the relevant jurisdiction, businesses will likely need to identify an appropriate legal basis for processing personal data that is captured and transcribed by the tool. There will be jurisdictional divergence. In some jurisdictions, this may be data subject consent (to varying standards), and in others, legitimate interests may be the most appropriate basis to process the personal data.
If special category data (e.g. health data or biometric data, such as voiceprints) is to be processed, it is particularly likely that explicit consent would be required. If consent is required, obtaining it validly may present practical challenges, for example how to offer an alternative to individuals who do not consent, or how to obtain consent from in-person participants who join after the meeting has started. This analysis and the relevant consent mechanisms may also be differently nuanced depending on whether participants are internal or external.
Companies should also consider other data protection requirements, including transparency, data minimisation, data retention and security, international data transfers and data subject access rights. If biometric data, such as a voiceprint or faceprint used to identify the speaker, is processed (which will depend on the tool), additional requirements may apply, such as specific data subject consent and the need for a data protection impact assessment.
Discoverability and privilege
AI-generated outputs expand the universe of potentially discoverable material.
The use of AI tools to transcribe meetings increases the volume of documents that might be disclosable in subsequent litigation or enforcement action (including those containing unguarded comments that previously would never have been recorded in writing). Whether those documents are in fact disclosable and when they might be protected by legal privilege is jurisdiction-specific. We are seeing courts start to consider whether the use of AI tools in various circumstances results in the loss of privilege.
The use of some types of AI tool that do not protect confidentiality, or uncontrolled distribution of output to third parties, may mean that documents that might otherwise have been privileged are no longer protected. Businesses may wish to limit transcription in sensitive settings, control dissemination (in particular of privileged conversations), build in human review, and carefully consider data retention policies and their contractual terms with any third-party providers of the AI systems (see below).
Third-party tools
Third-party risk: contractual arrangements
Most businesses will rely on AI transcription systems (or components) that are provided by third parties. Market practice with respect to risk allocation and accountability is evolving rapidly. The rise in adoption of AI agents is leading to further changes in the negotiated outcomes. The licence terms for these systems may underpin or support several mitigants already discussed, such as use case definition, data retention, data protection requirements, cybersecurity, confidentiality and technical configuration of the AI transcription system.
However, contractual protections are not a panacea. Technical diligence of counterparties and operational and governance controls are likely to be even more important in managing risk and should dovetail with contractual assurances.
Conclusion
As with any use of AI, the risks are varied, nuanced and evolving. However, the greater strategic risk may be standing still. In-house legal teams face the challenge of identifying and evaluating risks, and quickly mapping mitigants to them, to help their businesses adopt AI with appropriate guardrails. In practice, these challenges are exacerbated by the democratisation of these tools, the explosion in the volume of uses to which AI can be put, and rapid developments in technology, policy and law. Frameworks and systems that allow for quick, pragmatic and technology-led legal advice, tailored to the use case and the relevant jurisdictions, are required to progress quickly whilst managing these risks.
How we can help
A&O Shearman provides market-leading advice on AI governance and risk management. We have a deep understanding of the many different types of AI technologies and the governance that works. We have designed first-of-their-kind governance frameworks for our clients at the frontier of AI adoption, have helped our global clients to navigate the increasingly fragmented regulatory landscape around AI, and are shaping the market on AI contract terms. On AI call transcription specifically, we advise clients across sectors and jurisdictions on the responsible deployment of these tools for a variety of use cases, as well as their broader strategic approaches to AI.
“Boards by A&O Shearman” helps clients maximise the efficiencies and navigate the risks of AI in the context of board and committee meetings. Our corporate governance professionals draft meeting minutes with or without AI transcription. The team process-maps client requirements to strike the right balance between technology and human support and to mitigate the evolving risks of AI in the cosec space. Our proprietary tool, MinuteMaker, produces minutes in minutes by combining document automation, generative AI, and human subject matter expert review in one product that provides a tailored solution for any meeting, anywhere.