The scope of NYC Local Law 144 turns on whether a given tool is an automated employment decision tool within the meaning of the DCWP Final Rules. In the market, this question is answered with far too much wishful thinking. Vendors have an incentive to interpret the definition narrowly. Employers have an incentive to accept the vendor's interpretation. Neither incentive is aligned with the rule's actual text.

This note walks through the definition literally, examines the three vendor arguments most often raised to place a tool outside scope, and states plainly where those arguments succeed and where they do not.

The definition in the rules

NYC Admin Code § 20-870 defines an automated employment decision tool as any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output — including a score, classification, or recommendation — that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons.

The DCWP Final Rules, at 6 RCNY § 5-300, narrow this in one important way: the computational process must derive from machine learning, statistical modeling, data analytics, or AI for which a computer, at least in part, identifies the inputs, the relative importance placed on those inputs, and other parameters, in order to improve the accuracy of the prediction or classification. This added clause means the rule targets tools that learn or are tuned by data, not rule-based filters where a human author explicitly programmed the logic.

The three thresholds

For a tool to be an AEDT within the rule, three things must be true simultaneously. The tool must use an ML, statistical, data-analytics, or AI method. The method must involve computer-identified parameters (per the Final Rules clarification). The tool's output must substantially assist or replace discretionary decision-making for an employment decision affecting a natural person.

If any of these three is absent, the tool is outside scope. If all three are present, the tool is in scope — regardless of how the vendor brands it.
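The conjunctive structure of the test can be sketched in code. The predicate names below are this note's shorthand for the three thresholds, not language from the rule:

```python
def is_aedt(uses_ml_stat_analytics_or_ai: bool,
            computer_identified_parameters: bool,
            substantially_assists_or_replaces: bool) -> bool:
    """All three thresholds must hold at once; if any one is absent,
    the tool falls outside the rule's scope."""
    return (uses_ml_stat_analytics_or_ai
            and computer_identified_parameters
            and substantially_assists_or_replaces)

# A rules-based filter whose logic a human author wrote fails the
# computer-identified-parameters prong and is outside scope:
print(is_aedt(True, False, True))  # False
```

The point of the sketch is the conjunction itself: vendor branding changes none of the three inputs.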

What "substantially assist or replace" actually means

6 RCNY § 5-300 provides three specific scenarios that count as substantially assisting or replacing discretionary decision-making. The employer relies solely on the simplified output (the score, classification, or recommendation) without considering other factors. The employer uses the simplified output as one of a set of criteria where it is weighted more than any other criterion. Or the simplified output is used to overrule conclusions derived from other factors, including human decision-making.

Those three scenarios bound the concept. A tool whose output the recruiter consults alongside multiple other equally weighted inputs, where the recruiter is free to diverge without overriding anything, sits outside all three scenarios and is likely out of scope. A tool whose output is the default decision absent affirmative human intervention is inside. A tool whose output is one weighted input among several, but the highest-weighted one, is inside.
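The weighting scenario is mechanical enough to sketch. The criterion names and weights below are invented for illustration; nothing in the rule prescribes them:

```python
def output_weighted_most(weights: dict[str, float],
                         output_key: str = "tool_score") -> bool:
    """Second scenario: the simplified output is one of a set of
    criteria but is weighted more than any other criterion."""
    tool_weight = weights[output_key]
    return all(tool_weight > w
               for k, w in weights.items() if k != output_key)

# Four equally weighted criteria: the scenario is not triggered.
print(output_weighted_most({"tool_score": 0.25, "interview": 0.25,
                            "resume": 0.25, "references": 0.25}))  # False
# The tool's score as the single highest-weighted input: triggered.
print(output_weighted_most({"tool_score": 0.4, "interview": 0.3,
                            "resume": 0.3}))  # True
```

In practice the weights are rarely written down this explicitly, which is exactly why the deployment-logic inspection described below matters.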

Three vendor arguments and how they hold up

"Our tool only surfaces candidates — the recruiter decides." This argument is defensible only when the recruiter's review process is meaningful, independent, and not unduly influenced by the tool's ranking. In practice, where the recruiter reviews only the top-ranked candidates and never sees the lower-ranked ones, the tool has replaced discretionary decision-making on the unreviewed tail. DCWP Final Rules' third scenario — the tool overruling conclusions derived from other factors — can bite here even though the recruiter nominally has the final word. The question is not whether a human signs the final decision. The question is whether the tool substantially shaped which candidates the human could even consider.

"Our tool is rules-based, not ML." This argument depends entirely on the factual accuracy of the claim. If the tool's classification parameters are defined by a human author and never adjusted by data, the tool may fall outside the Final Rules' clarified scope. If the tool adjusts thresholds, weights, or model coefficients based on training data — even periodically — the argument collapses. Many tools marketed as rules-based are in fact ML-augmented, and a bias-audit-scope determination that accepts the marketing copy without inspecting the technical stack is not defensible in an enforcement posture.

"Our tool is used only for informational purposes, not for a decision." This argument distinguishes between the output and its use. A tool that produces an informational score that is not used for any employment decision is not in scope. A tool that produces an informational score that is then considered by a recruiter when making an employment decision is closer to scope. If the score substantially influences the recruiter's decision, the tool is in scope regardless of how the vendor characterizes its intended use.

What "employment decision" covers

The rule covers decisions that screen candidates for employment or employees for promotion within NYC. It does not cover all employment-related analytics. Compensation analytics, workforce planning tools, organizational network analysis, and engagement surveys are generally outside scope — unless their output is used to substantially assist a decision to hire, promote, terminate, or otherwise affect terms of employment in a way the rule covers.

The closest marginal cases are promotion-decision tooling (clearly covered), layoff selection tooling (covered if the output substantially assists the selection), and internal mobility recommendations (covered if they substantially assist decisions about role assignment that meet the rule's test).

The practical operational question

An employer that genuinely wants to answer the scope question correctly performs three inspections. The procurement documentation: what does the vendor say the tool does, and how does it produce its output? The technical documentation: what is the model, what are the training data sources, and how are parameters tuned? The deployment logic: where does the output sit in the decision workflow, how is it weighted against other inputs, and who reviews it with how much latitude?

If those three inspections all point outside scope, a written scope-determination memo documenting the analysis is defensible evidence if the determination is later challenged. If any of the three points inside scope, the tool is an AEDT and the LL144 preconditions apply: an independent bias audit within the year before use, public posting of a summary of the audit results, and advance notice to candidates and employees.
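A minimal sketch of what the memo's structured core might look like, assuming field names of this note's own choosing rather than any DCWP template:

```python
from dataclasses import dataclass

@dataclass
class ScopeDetermination:
    """One record per tool; the memo attaches the evidence behind
    each findings field."""
    tool_name: str
    procurement_findings: str  # what the vendor says the tool does
    technical_findings: str    # model, training data, parameter tuning
    deployment_findings: str   # where the output sits in the workflow
    in_scope: bool             # True: AEDT, so LL144 preconditions apply

    def summary(self) -> str:
        status = "IN SCOPE (AEDT)" if self.in_scope else "outside scope"
        return f"{self.tool_name}: {status}"

# Hypothetical tool and findings, for illustration only:
memo = ScopeDetermination(
    tool_name="ExampleRanker",
    procurement_findings="Vendor describes an ML-ranked shortlist.",
    technical_findings="Gradient-boosted model, retrained quarterly.",
    deployment_findings="Recruiters review only the top 20 ranked.",
    in_scope=True,
)
print(memo.summary())  # ExampleRanker: IN SCOPE (AEDT)
```

The structure is the point: each of the three inspections gets its own findings field, so a later challenge can be answered from the record rather than from memory.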


For an independent LL144 bias audit, including scope-determination analysis for marginal cases, see the service page.

Primary sources. NYC Admin Code § 20-870 (definition). DCWP Final Rules, 6 RCNY § 5-300 (scope clarifications and the "substantially assist or replace" test).