Navigating the AI recruitment regulatory landscape is challenging due to the fragmented nature of regulations in the US at the state level, and between the US, UK, and EU at the national level. This area of policy is dynamic, evolving piecemeal as governments grapple with the philosophical and practical implications of AI regulation. In the US, individual states have approached AI recruitment or "high-risk AI" with varying degrees of stringency, while the EU has adopted a more uniform and conservative stance. The UK, on the other hand, leans towards a pro-innovation framework, though its practical application remains to be seen.

EU: The Most Conservative Approach

The EU has demonstrated the strongest appetite for AI regulation, proposing the AI Act in 2021. The legislation targets "high-risk AI," a category covering systems that either serve critical functions or carry sector-specific risks, such as recruitment and hiring. The Act proposes significant penalties for non-compliance, with fines of up to €30 million or 6% of global annual turnover, whichever is higher. Although the legislative process and phased implementation will take years, companies must anticipate the compliance requirements now to future-proof their technologies.

Key expectations under the AI Act include:

  - A documented risk management system covering the full lifecycle
  - Data governance and quality controls for training, validation, and test data
  - Technical documentation and automatic record-keeping (logging) of operation
  - Transparency and clear information for the people deploying the system
  - Effective human oversight of automated decisions
  - Appropriate accuracy, robustness, and cybersecurity

Given the nascent state of explainable AI, meeting these requirements poses substantial technical and operational challenges for recruitment tools.
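Record-keeping is a concrete example of what compliance could demand in practice: the draft Act expects high-risk systems to log their operation so that individual decisions can be traced and reviewed later. The Act prescribes no schema, so the sketch below is only illustrative; every field name and value is an assumption, not drawn from the legislation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class ScreeningDecisionRecord:
    """One entry per automated screening decision; the fields echo the
    kind of traceability the Act's record-keeping duties point toward.
    All names here are hypothetical."""
    candidate_id: str              # pseudonymous reference, not raw PII
    model_version: str             # exact model/ruleset that produced the score
    inputs_digest: str             # hash of the features the model saw
    score: float
    outcome: str                   # e.g. "advance", "reject", "human_review"
    human_reviewer: Optional[str]  # who exercised oversight, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ScreeningDecisionRecord(
    candidate_id="cand-8841", model_version="screener-2.3.1",
    inputs_digest="sha256:1f6c0e", score=0.72,
    outcome="human_review", human_reviewer="recruiter-17")
print(json.dumps(asdict(record), indent=2))  # append to an immutable log
```

Storing such records append-only and keyed to an exact model version is what makes later audits, and challenges from rejected candidates, tractable.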

UK: Pro-Innovation Yet Cautious

The UK’s approach, as outlined by the Department for Science, Innovation and Technology (DSIT), emphasizes a “pro-innovation AI stance”. Unlike the EU, the UK stops short of mandating explainability but stresses the importance of auditing and performance monitoring. Reports from the Centre for Data Ethics and Innovation (CDEI) highlight concerns about bias in algorithmic decision-making; its 2020 review of the issue examined recruitment as one of four high-risk sectors and warned that tools trained on historical hiring data can reproduce past discrimination.

The UK’s focus lies on addressing:

  - Bias and discrimination in algorithmic decision-making
  - Auditing and ongoing performance monitoring of deployed systems
  - Accountability under existing law, rather than new explainability mandates

Current governance in the UK relies on the Equality Act 2010 and the Data Protection Act 2018. However, the lack of a dedicated regulatory body creates uncertainty about compliance standards and internal testing requirements.

US: Fragmented Yet Evolving

In the US, regulation varies widely across states:

  - Illinois’s Artificial Intelligence Video Interview Act (in force since 2020) requires employers to notify applicants, explain how the AI works, and obtain consent before AI analyzes video interviews.
  - Maryland requires applicant consent before facial recognition is used during interviews.
  - New York City’s Local Law 144 mandates annual independent bias audits of automated employment decision tools, with results published and candidates notified.

At the federal level, President Biden’s 2023 executive order on AI outlines eight guiding principles:

  1. Safety and security
  2. Transparency and accountability
  3. Consumer protections
  4. Government oversight
  5. Civil rights and equity
  6. Privacy and data protection
  7. International cooperation
  8. Innovation and competitiveness

Title VII of the Civil Rights Act of 1964 offers additional protection against discrimination in recruitment tools through:

  - Prohibitions on both disparate treatment and disparate impact in hiring
  - Enforcement by the Equal Employment Opportunity Commission (EEOC)
  - The Uniform Guidelines on Employee Selection Procedures, whose “four-fifths rule” flags adverse impact (a minimal calculation is sketched below)
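The four-fifths rule gives a concrete, computable test: if any group’s selection rate is less than 80% of the rate for the most-selected group, the Uniform Guidelines treat that as prima facie evidence of adverse impact. Below is a minimal Python sketch of the calculation; the group labels and outcome data are invented for illustration.

```python
from collections import Counter

def adverse_impact_ratios(outcomes):
    """Selection rate per group, divided by the highest group's rate
    (the comparison behind the EEOC's four-fifths rule)."""
    selected, total = Counter(), Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    # A ratio below 0.8 is the rule-of-thumb threshold for adverse impact.
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, advanced_to_interview)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 24 + [("B", False)] * 76)
print(adverse_impact_ratios(outcomes))  # B's ratio ~0.6 < 0.8: flagged
```

The same impact ratio is what New York City’s Local Law 144 requires auditors to report, which makes it a sensible metric to monitor continuously rather than compute once a year.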

However, the absence of a federal regulatory body to standardize auditing complicates compliance, as highlighted by BSA | The Software Alliance, a trade group whose members include Adobe, IBM, and Microsoft.

Common Threads and Future Implications

Across the EU, UK, and US, several commonalities emerge:

  - Recruitment and hiring are consistently singled out as high-risk uses of AI.
  - Bias and discrimination are the central concerns, with auditing and performance monitoring the preferred safeguards.
  - No jurisdiction yet has a dedicated regulator or standardized audit regime, leaving compliance expectations uncertain.

For companies developing AI recruitment tools, the following strategies are advisable:

  - Anticipate the EU AI Act’s requirements now rather than retrofitting compliance later.
  - Keep decision-level records and technical documentation to support future audits.
  - Test regularly for disparate impact across protected groups and monitor performance after deployment.
  - Track state-level developments in the US, where obligations differ by jurisdiction.

Balancing Regulation and Innovation

Governments in the UK, US, and EU must strike a balance between fostering innovation and ensuring ethical AI deployment. The UK’s pro-innovation stance and the US’s lighter-touch, state-led approach offer promising paths forward. However, the EU’s stringent requirements could impose significant operational costs on businesses.

Collaboration between industry and regulators is crucial to develop clear, feasible standards that minimize financial and administrative burdens while safeguarding equity and transparency in AI recruitment.