Many of the automated employment decision tools used by NYC employers in 2026 are not only subject to NYC Local Law 144. They are also classified, under the European Union's Artificial Intelligence Act, as high-risk AI systems when the deploying organization has European exposure — which, for a large share of NYC-based companies, is the rule rather than the exception. The question is not whether these regimes overlap. They do. The question is whether the audit and documentation work done for one regime can be structured to serve the other, or whether the two efforts have to run in parallel.
This piece walks through what the two regimes actually require for the same tool, where they converge, where they diverge, and how a single audit cycle can be structured to produce deliverables that defend both.
The two regimes in plain terms
NYC Local Law 144 applies to an employer or employment agency that uses an automated employment decision tool (AEDT) to substantially assist or replace discretionary decision-making for hiring or promotion decisions in NYC. It requires a bias audit no more than one year old at the time of use, a public summary of that audit, and a pre-use candidate notice.
The EU AI Act — Regulation (EU) 2024/1689 — applies to providers and deployers of AI systems. Annex III, point 4, lists as high-risk any AI system intended to be used for recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, or to evaluate candidates. The same Annex III, point 4, also covers AI systems intended to make decisions affecting the terms of work-related relationships, the promotion or termination of such relationships, to allocate tasks based on individual behavior or personal traits, or to monitor and evaluate the performance and behavior of persons in such relationships.
For a company using an ATS-integrated screening model to filter NYC job applicants while the same company has EU users, EU employees, or EU customers, both regimes apply to the same tool. The obligations are cumulative, not alternative.
Extraterritoriality is not a loophole
A common misunderstanding in 2026 is that the EU AI Act applies only to EU companies. Article 2 of the Regulation establishes that it also applies to providers placing AI systems on the market or putting them into service in the Union regardless of where the provider is established, and to deployers established in a third country where the output of the system is used in the Union. A US company screening candidates in the US using an AI tool normally does not fall under this scope on that activity alone. But a US company screening candidates for EU roles, or deploying the tool with output used in the EU, does.
This matters because NYC employers who are part of multinational groups often have EU-employment exposure through affiliated entities, EU-resident candidates applying for cross-border roles, or global ATS instances where the same model processes candidates across jurisdictions. The factual question of scope has to be answered carefully, but many NYC employers discover, once they map the tool's deployment, that the EU AI Act applies to at least some of the processing.
What each regime actually requires
The frequently repeated phrase "the EU AI Act is more prescriptive than LL144" is true in the aggregate but misses the structure. LL144 is narrow and deep — it specifies, in the DCWP Final Rules, exactly what the bias audit must compute (selection rates, impact ratios, scoring analyses) and for which protected categories. The EU AI Act is broad and documentary — it requires a risk management system, data governance, technical documentation, logging, human oversight, accuracy and robustness, and a conformity assessment, but the specific statistical methodology is largely left to harmonized standards (notably CEN-CENELEC standards under development).
For a high-risk AI system under Annex III:
Article 9 requires establishment, implementation, documentation and maintenance of a risk management system, as a continuous iterative process run throughout the entire lifecycle of the high-risk AI system.
Article 10 imposes data governance and management practices for training, validation and testing datasets, including examination of possible biases that are likely to affect the health and safety of persons, to have a negative impact on fundamental rights, or to lead to discrimination prohibited under Union law.
Article 11 and Annex IV require technical documentation covering the system's general description, its purpose, its design and architecture, the data used, the risk management system, the monitoring plan, the human oversight measures, and the accuracy, robustness and cybersecurity properties.
Article 12 requires automatic recording of events (logs) over the lifetime of the system.
Article 13 requires that the system be designed and developed in such a way that deployers can interpret the system's output and use it appropriately, with instructions for use accompanying the system.
Article 14 requires the system to be designed for effective oversight by natural persons during the period it is in use.
Article 15 sets the accuracy, robustness and cybersecurity requirements, with thresholds declared in instructions for use.
Article 26 governs the deployer's obligations — assigning human oversight, monitoring operation, maintaining logs, informing workers before putting a high-risk system into service.
Article 27 requires a fundamental rights impact assessment before deployment of a high-risk AI system by certain deployers (bodies governed by public law, private operators providing public services, and certain other deployers).
Article 49 governs the registration of high-risk AI systems in the EU database (established under Article 71) before they are placed on the market or put into service.
For most high-risk AI systems, these obligations become fully applicable on 2 August 2026. Certain others (prohibited practices under Article 5, and AI literacy under Article 4) entered application on 2 February 2025.
Where the two regimes overlap
The most productive way to think about the overlap is by artifact — the documents, datasets, and measurements that both regimes want to see — rather than by obligation title.
Bias statistics. Both regimes care about demographic performance disparities. LL144 specifies selection rate and impact ratio computation for the categories protected under NYC Human Rights Law (sex, ethnicity, race, and their intersections). The EU AI Act Article 10 requires examination of biases in the data and reasonable measures to detect, prevent and mitigate them. A single statistical analysis of AEDT output across demographic categories can support both — provided the categorization scheme is interoperable and the methodology is documented.
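The selection-rate and impact-ratio arithmetic the DCWP Final Rules prescribe is simple enough to sketch. The following is an illustrative computation over synthetic data; the two group labels stand in for the sex, race/ethnicity, and intersectional categories a real audit must use, and none of the numbers are real.

```python
from collections import defaultdict

def selection_rates(records):
    """Per-category selection rate: selected candidates / total candidates.

    records: iterable of (category, selected) pairs, selected being a bool.
    Category labels here are illustrative placeholders.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    return {c: selected[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Impact ratio: each category's selection rate divided by the
    highest category selection rate in the sample."""
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

# Synthetic sample: 4 of 10 applicants in group A selected, 2 of 10 in group B.
sample = ([("A", True)] * 4 + [("A", False)] * 6
          + [("B", True)] * 2 + [("B", False)] * 8)
rates = selection_rates(sample)   # A: 0.4, B: 0.2
ratios = impact_ratios(rates)     # A: 1.0, B: 0.5
```

The same per-category table, computed once, feeds both the LL144 public summary and the Article 10 bias-examination record, which is the point of interoperable categorization.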
Data governance documentation. The LL144 audit implicitly requires that the auditor have access to data sufficient to compute bias statistics. The EU AI Act Article 10 requires this explicitly and in greater depth — provenance, representativeness, relevance, examination of biases, and gap identification. A data governance package that satisfies Article 10 will generally contain everything the LL144 audit needs and more.
Candidate-facing notice. LL144 requires a pre-use notice to the candidate. Article 26(11) of the EU AI Act requires that deployers of certain high-risk systems inform natural persons subjected to the use of the system. These are not identical requirements, but the candidate notice infrastructure (careers-page disclosure, ATS-integrated pre-screen notification) can be designed to serve both — with different content tailored per jurisdiction of the candidate.
Ongoing monitoring and re-audit. LL144 requires an annual bias audit. The EU AI Act's risk management system (Article 9) is explicitly an iterative process across the system's lifecycle, with post-market monitoring under Article 72. A monitoring cadence that produces quarterly bias metrics and annual full-audit refreshes serves both.
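The LL144 half of that cadence reduces to a freshness check on the audit date. A minimal sketch, assuming a 365-day reading of "no more than one year old" (the statute speaks of one year, not a day count, so a conservative deployer may prefer a shorter trigger):

```python
from datetime import date, timedelta

def audit_is_current(audit_date: date, use_date: date) -> bool:
    """LL144 precondition: the bias audit must be no more than one
    year old at the time the AEDT is used. 365 days is an assumed
    operationalization of 'one year'."""
    return use_date - audit_date <= timedelta(days=365)

audit_is_current(date(2025, 8, 1), date(2026, 7, 31))  # True
audit_is_current(date(2025, 8, 1), date(2026, 8, 15))  # False: re-audit overdue
```

A quarterly metrics refresh then feeds the Article 9 risk management file between annual full audits.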
Where the two regimes genuinely diverge
Some requirements do not overlap and must be addressed separately.
The EU AI Act requires a registration in the EU database (Article 49) for high-risk AI systems, with specific information listed in Annex VIII. No such public database registration exists for LL144.
The EU AI Act requires a fundamental rights impact assessment under Article 27 for certain deployers. LL144 has no equivalent.
The LL144 bias audit has methodology specified at the level of the DCWP Final Rules. The EU AI Act's bias and robustness metrics are largely left to harmonized standards, which as of early 2026 are still being finalized. An LL144-compliant bias audit will not automatically satisfy the harmonized-standard expectation if and when that standard prescribes a different methodology.
LL144 categorizes by NYC Human Rights Law-protected classes. The EU AI Act, reflecting Article 21 of the Charter of Fundamental Rights and the Union discrimination acquis, is concerned with a broader set of grounds including ethnic origin, religion, disability, age, and sexual orientation. A bias analysis designed only for the LL144 categories may be insufficient for the EU AI Act's Article 10 expectation.
What a coordinated audit deliverable looks like
A well-structured engagement produces a single integrated deliverable package with clearly labeled sections. At minimum it contains:
A core statistical report computing selection rate, impact ratio, and score-based impact ratio (where applicable) across both LL144-relevant categories (sex, ethnicity, race, intersections) and EU-relevant additional categories where data permits. The categorization scheme, data limitations, and confidence intervals are explicit. A public-facing summary is generated from this report in the format DCWP Final Rules prescribe.
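Where the AEDT produces scores rather than binary selections, the DCWP rules have the audit compute a scoring rate: the share of each category scoring above the median score of the full sample, with impact ratios taken over those rates. An illustrative sketch over synthetic data:

```python
import statistics

def scoring_rates(scored_candidates):
    """scored_candidates: list of (category, score) pairs.
    Scoring rate per category: share of the category scoring above
    the median score of the whole sample. Labels are illustrative."""
    all_scores = [score for _, score in scored_candidates]
    median = statistics.median(all_scores)
    totals, above = {}, {}
    for category, score in scored_candidates:
        totals[category] = totals.get(category, 0) + 1
        above[category] = above.get(category, 0) + (1 if score > median else 0)
    return {c: above[c] / totals[c] for c in totals}

# Synthetic sample: sample median is 65, so every A scores above it, no B does.
sample = [("A", 90), ("A", 80), ("A", 70), ("B", 60), ("B", 50), ("B", 40)]
rates = scoring_rates(sample)  # A: 1.0, B: 0.0
```

A degenerate split like this one is exactly what the report's confidence intervals and data-limitation notes exist to contextualize.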
A data governance memo covering provenance, representativeness, relevance, gap analysis, and bias examination — written to Article 10 specification. This memo serves the EU AI Act's technical documentation requirement and supports the LL144 audit's data sufficiency.
A human oversight and instructions-for-use documentation package addressing Article 14 and Article 13 respectively. LL144 does not require these, but they do not conflict with LL144, and a deployer operating the tool for NYC candidates benefits from having oversight procedures documented.
A candidate notice template set — two variants: the LL144-compliant NYC pre-use notice, and the Article 26(11)-aligned disclosure for EU-located candidates. Both are served by the same backend logic, selected per candidate jurisdiction.
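That backend selection logic can be as small as a routing function. The names below (NoticeVariant, select_notices) are hypothetical, and the jurisdiction triggers are deliberately simplified relative to the actual scope rules of either regime; both notices can apply to the same candidate:

```python
from enum import Enum

class NoticeVariant(Enum):
    LL144_NYC = "ll144_pre_use_notice"          # NYC pre-use candidate notice
    EU_ART_26_11 = "eu_ai_act_deployer_notice"  # Article 26(11) disclosure

def select_notices(candidate_location: str, role_location: str) -> set:
    """Return the set of notice variants to serve. Location strings and
    the trigger conditions are illustrative simplifications only."""
    notices = set()
    if "NYC" in (candidate_location, role_location):
        notices.add(NoticeVariant.LL144_NYC)
    if candidate_location.startswith("EU") or role_location.startswith("EU"):
        notices.add(NoticeVariant.EU_ART_26_11)
    return notices

# An EU-resident candidate applying for an NYC role triggers both notices.
select_notices("EU-DE", "NYC")
```

The real scope analysis (residence, role location, where the system's output is used) belongs in the deployment mapping, not in the routing code.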
A risk management file, maintained throughout the system lifecycle, that serves Article 9 and simultaneously provides the audit trail the DCWP will want to see in an enforcement proceeding.
Auditor independence documentation. Under DCWP Final Rules the independence statement is explicit and mandatory. Under the EU AI Act the conformity assessment follows a different logic (internal control for most Annex III systems under Article 43, or third-party conformity assessment in specific cases), but the fundamental posture — that the work is credible — is tested similarly by either regime.
What this does not mean
A single coordinated engagement does not collapse the two regimes into one. The LL144 bias audit remains a bias audit with its own methodology. The EU AI Act conformity assessment remains a conformity assessment with its own structure. What is merged is the evidence base — the data work, the bias measurements, the governance documentation — not the compliance conclusion.
A deployer still needs to confirm, separately, that the LL144 preconditions are met (audit within one year, summary posted, candidate notice given) and that the EU AI Act preconditions are met (risk management system, technical documentation, registration where applicable, human oversight). Neither check is automatic.
For an independent LL144 bias audit designed to produce evidence usable for EU AI Act purposes, see the service page. For EU AI Act compliance consulting without the LL144 audit (where an independence conflict would otherwise arise), see Lexara Advisory.
Primary sources referenced. NYC Admin Code §§ 20-870 to 20-874 (Local Law 144 of 2021). DCWP Final Rules, 6 RCNY Ch. 5, Subchapter T. Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (EU AI Act) — Articles 2, 4, 9, 10, 11, 12, 13, 14, 15, 26, 27, 43, 49, 72; Annex III point 4; Annex IV; Annex VIII. Transitional timeline per Article 113: general application 2 August 2026 for high-risk systems; Article 4 and Article 5 since 2 February 2025.