Human capital strategy: AI, employee free speech, and conflicts of laws on DEI

As organizations navigate the evolving human capital landscape, board oversight must now address three critical areas: the use of AI in HR processes, the management of employee free speech on societal matters, and the complexities of conflicting employment laws across global jurisdictions. Here we explore the key issues to consider and how boards can support effective risk management in this fast-moving space. This memo forms part of a series examining critical legal and regulatory decision points, opportunities, and risks facing leaders in an increasingly uncertain global business environment. 
In brief

Board oversight must address the use of AI in HR processes, including recruitment, performance management, and dismissal, ensuring compliance and mitigating risks of bias, discrimination, and privacy breaches.

Organizations should manage employee expression on societal issues, balancing individual rights and dignity at work with the need to protect the organization from legal, operational, and reputational risks.

Multinationals must develop strategies to navigate conflicting employment, ESG, and DEI laws across global jurisdictions, embedding defensible approaches amid divergent regulatory requirements and enforcement standards.

Why is this an important issue now?

  • The adoption of AI in HR functions is starting to outpace regulation, raising risks of discrimination and bias, privacy concerns, and claims of unfair treatment in the hiring, appraisal, and dismissal processes.
  • The polarization of public discourse raises complex questions about employee free speech on societal issues. Boards must reconcile individual employees’ rights to express their views with considerations around dignity at work, safety, and organizational reputation, sometimes against the evolving expectations of investors, regulators, and policymakers, as well as potential national security implications.
  • A key operational challenge is managing expression that conflicts with views held by other employees. These clashes create internal tensions that require careful and consistent management. Another difficulty is potential exposure to liability under national security laws in some jurisdictions for comments made by employees on, say, company platforms or in a work capacity.
  • Jurisdictions are diverging on core HR policies. Some have curbed DEI measures or mandated viewpoint neutrality; others require pay equity, positive action on diversity and inclusivity, or expect extensive transparency. Multinationals must execute coherent strategy amid legal conflict and enforcement variability.

What are the main legal and regulatory considerations relating to these issues?

What rules govern the use of AI tools in recruitment, performance management, and dismissal?

  • Global baseline: AI tools used in hiring, appraisal, and dismissal attract scrutiny under equality, data protection, and labor laws. Businesses need to document use cases, evidence bias and discrimination testing, minimize data collection, and be able to demonstrate the tool’s transparency, explainability, and human oversight. They must also ensure that employees have clear, robust contestation rights and procedures for fully automated, significant decisions.
  • EU: The EU AI Act introduces a range of risk-based obligations for employment-related systems (risk management, data governance, transparency, and human oversight, as well as pre-deployment checks and ongoing monitoring for high-risk tools). GDPR limits solely automated decisions with legal or similarly significant effects and gives employees the right to more information and to challenge the decision. Works councils in Germany and similar bodies in other EU countries may have to be consulted on any plans to use AI in HR decision-making.
  • UK: UK GDPR and guidance from the Information Commissioner’s Office (ICO) require data protection impact assessments (DPIAs) and fairness for high-risk data processing, with human-in-the-loop oversight where a decision materially affects individual employees.
  • U.S.: Although there are no comprehensive federal rules on the use of AI in connection with employment-related decisions, U.S. businesses remain liable under existing federal anti-discrimination rules. Businesses must also comply with a patchwork of state and local rules on the use of AI-based HR tools that may include auditing for bias and the need to notify employees of their use.
  • Hong Kong: Businesses must comply with the Personal Data (Privacy) Ordinance (PDPO) principles on the fair collection and purpose-limited use of personal data. Employment laws and equal opportunities rules also apply.
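The bias and discrimination testing referenced above is often operationalized with selection-rate comparisons, such as the "four-fifths rule" used in U.S. adverse impact analysis (and echoed in some local bias-audit rules). The sketch below is an illustration only, not a compliance tool; the group labels and screening data are hypothetical.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute each group's selection rate relative to the highest-rate group.

    outcomes: list of (group, selected) pairs, e.g. ("A", True).
    Returns {group: (selection_rate, impact_ratio)}.
    """
    totals = Counter(group for group, _ in outcomes)
    chosen = Counter(group for group, selected in outcomes if selected)
    rates = {g: chosen[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Hypothetical AI screening results: (group, passed_screen)
data = ([("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 24 + [("B", False)] * 76)

for group, (rate, ratio) in sorted(impact_ratios(data).items()):
    # Ratios below 0.8 (the four-fifths threshold) warrant closer review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f} impact_ratio={ratio:.2f} ({flag})")
```

A passing ratio does not establish fairness on its own; regulators and courts typically expect statistical testing, documentation, and human review alongside headline metrics like this.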

What are the key issues in relation to employee free speech, rights, duties, and reputation management?

  • Global baseline: Across much of the world, social media has become both a catalyst for and a source of free speech risks. Today’s digital lifestyles and round-the-clock access to platforms used for both work and personal life blur the line between private and professional commentary. This makes oversight and governance harder and amplifies the potential for reputational harm. Boards should ensure their codes of conduct explicitly state what is permissible on social media and set clear expectations for behavior outside work and in a work capacity.
  • EU/UK: Strong protections against discrimination and harassment. Works councils and trade unions may have consultation rights over policy changes. Protections for free speech and the expression of beliefs are limited where expression amounts to harassment or conflicts with legitimate business aims and activities.
  • U.S.: Private-sector employees generally do not have First Amendment speech rights against their employers (public-sector employees do), but speech-related policies must account for the National Labor Relations Act’s protection of concerted activity, anti-retaliation and whistleblower laws, and state “lawful off-duty conduct” statutes. Reputation management should rely on narrowly tailored, viewpoint-neutral social media and conduct policies (mindful of the National Labor Relations Board’s standard), clear expectations for off-duty online conduct, consistent enforcement, and prompt action on harassment, discrimination, threats, or disclosure of confidential information.
  • Hong Kong: Free speech principles operate alongside anti-discrimination and harassment laws and duties. Additionally, public order and national security laws may impose further limitations on the exercise of free speech.

How are laws and regulations on DEI diverging across jurisdictions?

  • EU: Equality directives and national laws shape anti-discrimination and, in some cases, the need for positive action in member states. The imposition of transparency obligations is increasing in parts of the EU.
  • UK: The Equality Act prohibits discrimination because of a person’s protected characteristics, including age, sex, or religion. Positive action is permitted in very limited circumstances to overcome under-representation or disadvantage. Businesses with 250 or more employees must also carry out gender pay gap reporting.
  • U.S.: At the federal level, current policy restricts some aspects of DEI-related activities within federal agencies and could affect some federally funded programs. However, there is no blanket nationwide prohibition. At the state level, approaches diverge: some states have limited DEI initiatives (particularly in public organizations), while others have maintained or expanded DEI requirements. Anti-discrimination laws continue to apply at the federal and state levels.
  • Hong Kong: Anti-discrimination ordinances prohibit discrimination across protected characteristics; DEI is often market-driven rather than mandated, with regulator and investor expectations relevant in some sectors.

What AI governance measures should businesses adopt in relation to the use of AI in HR functions?

  • Approve a global principles-based framework for the use of AI in recruitment, performance management, and employee dismissal.
  • Require pre-deployment impact assessments for legal, data protection, and equalities risks, bias testing, alignment with internal explainability standards, and human-in-the-loop oversight. For EU operations, this must include fundamental rights impact assessments under Article 27 of the EU AI Act where required.
  • Ensure clear processes and procedures for employees to question and challenge decisions made by AI or when AI has been used as part of decision-making.
  • Ensure proper training is given to employees to understand the implementation of processes and procedures.
  • Require regular independent, external audits of HR AI systems and their contestation routes, and mandate rights to audit vendor compliance with governance standards.
  • Ensure the works council or similar body is consulted where one exists. In the absence of such a body, employers should implement a global communications program to ensure the workforce has a clear and robust understanding of the organization’s approach to using AI technologies in HR.

What measures should be taken to ensure freedom of expression while overseeing employee conduct?

  • Adopt a global code of conduct that protects lawful expression of free speech while forbidding harassment, hate speech, and other types of comment that could damage the reputation of the business, cause friction between employees, or expose the business to national security risk.
  • Define how the rules should be applied for internal communications channels as well as for public ones.
  • Set out rules and protocols for managing disputes between employees with conflicting views, and appropriate disciplinary procedures that are fair, transparent, and defensible.

How can businesses navigate divergent DEI positions across jurisdictions?

  • Establish a global policy for ESG and DEI covering dignity at work, anti-harassment, and non-discrimination, and adapt it for local compliance where DEI measures are restricted or mandated.
  • Ring-fence the policy in a way that isolates jurisdiction-specific requirements so that local practices do not influence global standards.
  • Ensure responsibility for compliance is delegated to local functions and that they introduce bespoke training and guidance so that the business operates lawfully in each market without affecting the global policy.

What board oversight measures should be established over these areas?

  • Require quarterly reports and performance dashboards on AI usage in HR, speech-related incidents, and DEI and conflict-of-laws developments.
  • Require annual bias testing for high-risk AI HR tools and conduct wider assurance audits on their performance and the effectiveness of key controls.
  • Require periodic legal horizon scanning across the jurisdictions in which the business operates to ensure compliance with any changes in laws and regulations.

How can risk be assessed and mitigated in relation to AI use in HR, free speech and harassment, and regulatory divergence?

  • Discrimination and due process risk (AI): Mitigate risks with bias and discrimination testing and auditing, ensure meaningful human review and oversight for consequential decisions, and document auditable explanations for any negative outcomes. For example, the board could require regular independent audits of AI models, stipulate human-in-the-loop oversight and reviews of AI-influenced decisions, and document the rationale for such employment decisions.
  • Free speech and harassment risk: Reduce risk exposure through conduct codes and training for HR, people managers, and the wider workforce; ensure there is an official channel for raising complaints, robust complaints handling, and balanced, consistent sanctions.
  • Jurisdictional conflict risk: Use policy ring-fencing, jurisdiction-specific separation of functions and business units, and localized approaches and governance frameworks to comply with conflicting obligations without eroding the organization’s global principles.
  • Reputational risk: Prepare escalation procedures and approach to crisis management that can be developed into a playbook for legal, HR, and communications on how to respond rapidly to high-profile incidents.
  • Operational risk: Configure HR systems for regional/local laws and regulations; assign clear ownership to local teams; conduct regular audits and reporting to ensure compliance.

How can boards exercise oversight of implementation measures?

  • Phase 1: Assess and catalogue current approach: Map current use cases for AI in HR; inventory internal platforms for free speech; identify jurisdictions with DEI/legal conflicts. Conduct gap analysis covered by legal privilege.
  • Phase 2: Establish governance model and risk controls: Approve a governance framework for the use of AI in HR. Adopt a free speech conduct code. Develop an operating model that can adapt to regulatory divergence, including a playbook for specific situations. Develop an approach to auditing AI-based HR tools, particularly for explainability, performance data, and adaptation to jurisdiction-specific rules. Update vendor contracts and employee notices to incorporate best practice approaches.
  • Phase 3: Implementation: Train HR teams and people managers. In parallel, introduce a global training program for the entire workforce with real-world examples to illustrate the key issues and how the organization’s values, positions, and messaging apply to each case. Undertake due diligence on systems and tools, including audits of AI decisions and how complaints about employee free speech incidents have been handled. Establish performance dashboards and regular reporting.
  • Phase 4: Board reviews and management oversight: Assess effectiveness of governance and control frameworks; refresh approach to legal divergence and conflicts of laws to take into account any regulatory changes. Commission independent audits of AI-based tools and systems every two years.

What questions should boards be asking of management?

  • Where do we use AI in recruitment, appraisal, and termination? What evidence demonstrates fairness, explainability, and effective human oversight?
  • What is our policy for employee free speech on societal issues? How do we balance conflicting rights and reputation and regulatory risks in practice? What are our incident volumes and outcomes?
  • In which jurisdictions do our DEI or HR policies face legal conflict? What ring-fencing or adaptations are in place and how are we evidencing compliance without undermining core standards?
  • What is our escalation and crisis communications plan for reputationally risky free speech or AI incidents? How quickly can we activate it across the key markets in which we operate?
  • How do our dashboards and audits inform board oversight? Where are the control gaps that require investment?