New York Set to Implement Pioneering AI Regulation Law in July

  • New York City is set to implement a pioneering law in July that regulates the use of AI in hiring and promotion decisions. This legislation could have national implications as similar regulatory initiatives are being considered in other states.
  • Despite the groundbreaking nature of the law, it has faced criticism for its narrow definition of “automated employment decision tool” and concerns over the feasibility of independent AI audits. Meanwhile, some businesses see the regulatory measure as an opportunity and a competitive advantage in demonstrating the efficacy of their AI hiring tools.

New York City is gearing up to begin enforcing a law regulating the use of artificial intelligence (AI) in July, the first of its kind in the United States. The legislation, passed in 2021, primarily focuses on the use of AI in hiring and promotion processes.

Under the law, companies deploying AI software in hiring decisions are mandated to disclose their use of such automated systems to applicants. Additionally, these firms will be required to commission independent audits annually to scrutinize the technology for any inherent bias. As part of the transparency protocol, candidates can request and receive information about the data being collected and analyzed. Violations of the provisions will attract fines.

While the law specifically targets firms operating in New York, its impact could extend nationally, given that similar AI regulatory initiatives are under consideration in California, New Jersey, Vermont, and the District of Columbia. Meanwhile, Maryland and Illinois have enacted laws curbing the use of specific AI technologies, largely focusing on workplace monitoring and job applicant screening.

Despite its groundbreaking status, the NYC law has drawn both praise and criticism. Critics, such as Alexandra Givens, president of the Center for Democracy & Technology, argue the law falls short of its potential. According to her, the legislation’s definition of an “automated employment decision tool” is narrow and applies only if the AI is the sole or primary factor in hiring decisions.

On the other hand, businesses have expressed concerns about the feasibility of independent audits of AI. The Software Alliance, a trade group whose members include Microsoft, SAP, and Workday, argues that the field is too nascent and lacks defined standards and professional oversight bodies.

However, some view this emerging field as an opportunity. Law firms, consultants, and startups are already showing interest in the burgeoning AI audit business. In fact, companies selling AI software for recruitment and promotion see the new regulation as a competitive edge, showcasing their technology’s ability to widen the pool of job candidates and increase opportunities for workers.

The push for AI regulation is intensifying globally. Recently, experts and industry leaders, including Sam Altman of OpenAI and Geoffrey Hinton, often called the “godfather of AI”, issued a warning urging global leaders to consider the risks associated with AI technology, equating them to other significant threats such as pandemics and nuclear war.