When New York City introduced a bill in 2021 governing bias audits for automated employment decision tools (AEDTs), advocates called on the City Council to strengthen the bill’s bias audit requirements and ensure that they applied to bias against every protected class under the city’s anti-discrimination laws. Instead, the city passed a narrower law that did little more than reaffirm that employers’ existing equal employment opportunity reporting requirements also apply when certain automated tools significantly assist in making employment decisions.
Almost a year later, New York’s Department of Consumer and Worker Protection (DCWP) proposed rules to implement the new law that would further weaken its enforcement. CDT raised several concerns about DCWP’s original proposal: the proposed rules would not cover targeted recruitment advertisements; they would make it easier for employers to misrepresent the weight they place on automated tools that significantly influence hiring decisions; and applicants would not receive timely notice with the information necessary to determine whether to seek a disability accommodation to which they may be entitled under the ADA.
DCWP’s revised proposal made some improvements, such as clarifying that independent auditors must not have an employment relationship with, or financial interest in, an employer or employment agency that uses an AEDT; requiring audits to consider an AEDT’s impact on intersectional categories (such as Black women workers); and requiring disclosure of the data sources used by an AEDT and the data used in bias audits. However, we found that the revised rules further limit the types of tools covered, ignoring tools that merely modify human conclusions and focusing only on tools that completely override them.
In other words, if an employer can claim that it does not rely exclusively on an employment decision tool, the tool will probably not be covered by the rules. The revised rules also allow an employer to rely on bias audits that use data from other users of the same AEDT, leaving it unclear how the effects of an employer’s own use of the tool will be examined. And the revised rules still do not provide for timely notification of affected job seekers.
DCWP’s final rules leave these issues unresolved. The rules take effect on July 5, 2023.
California’s business lobby seeks to block enforcement of privacy laws
The California Chamber of Commerce filed a lawsuit seeking to delay the California Privacy Protection Agency (CPPA) from enforcing the California Privacy Rights Act (CPRA), which the CPPA was set to begin enforcing on July 1, 2023. The lawsuit essentially argued that because the CPPA missed the statutory deadline for finalizing its regulations, enforcement should be delayed to give businesses more time to prepare for compliance. As detailed in a CDT analysis late last year, the CPRA is the most significant piece of legislation in U.S. history for workers’ data privacy rights. CDT will continue to monitor the law’s implementation if and when enforcement begins.
The AI Now report highlights the need for workplace tech policy to focus on structural change
AI Now has released its 2023 annual report on the AI landscape, titled Confronting Tech Power. A central theme of the report is that many proposed approaches to AI regulation and accountability, which often focus on disclosure and audit requirements, fail to address many of the key harms associated with AI. Instead, the authors argue for greater use of clear rules that prohibit certain applications and uses of AI.
The report cites algorithmic workplace management as an area where such clear rules are most needed, given the way companies use automated management systems to widen existing power imbalances in the workplace and labor market. Given these imbalances, policy approaches that rely solely on disclosure and auditing are inadequate: “[t]elling workers that their employer has used algorithmic targeting to systematically reduce their wages is no substitute for putting rules in place to ensure that wages are set fairly and at amounts that workers can live on in the first place.” The report also highlights worker organizing as a “primary mode of technical accountability” and thus calls for policy frameworks that “ensure collective, not just individual, rights.”
AI ethics experts publish detailed review and critique of automated workplace surveillance
In March, AI ethics experts Merve Hickok and Nestor Maslej published an article surveying the international policy landscape for automated workplace surveillance and performance monitoring systems. The article is impressive in its breadth: in addition to reviewing existing tools and policy frameworks, it traces the historical antecedents of modern automated surveillance (such as Taylorism) and discusses the ways in which automated surveillance undermines workers’ rights and dignity.
The article highlights the ways in which AI-powered surveillance tools widen existing power imbalances in the workplace. Like the AI Now report, the authors prefer to draw bright lines where algorithmic systems violate basic rights and dignity, arguing that such tools “should not be legitimized by principles or through the use of risk management systems. They should not be used in the first place.” The article concludes with a proposed policy “road map” that includes a number of concrete regulatory actions, as well as a call for unions to “build better internal capacity” to organize and hold companies accountable for harmful surveillance practices.
Data & Society has published two reports on technology in the workplace
Data &amp; Society has also published two reports in recent months exploring technology in the workplace. Both illustrate how workers’ privacy interests can come into tension with other interests of workers and consumers.
The first report, At the Digital Doorstep, released in the fall of 2022, examines the phenomenon of e-commerce customers using networked doorbell cameras to monitor delivery drivers. Such customers increasingly engage in “boss behavior,” holding “personalized service expectations that range from polite requests to inconvenient demands unrelated to the provider’s core business.” Even when customers see their expectations simply as preferences, “drivers whose jobs depend on maintaining a high standing understand them as instructions to be followed.”
The other report, Essentially Unprotected, examines essential workers’ experiences with health data and surveillance during the COVID-19 pandemic. The report launched with a live event at which the authors presented their key findings. One notable takeaway is how workplace privacy can sometimes conflict with other worker interests: in this case, workers’ strong interest in knowing who in the workplace may have been exposed to a deadly disease that they could pass on to their colleagues.
The report notes the difficulties workers and employers have had in navigating these issues during the pandemic. It also found that while most employers did not use the pandemic as an excuse to expand intrusive worker surveillance, Amazon was a notable exception, introducing a variety of tools, ostensibly motivated by health and social distancing goals, that deepened the extensive surveillance to which Amazon already subjected its workers. This led to a “blurring of the lines between health data and performance data.”