Using AI to Facilitate Employment-Related Processes Carries Risk

By Clay Creps

Artificial Intelligence (AI) is everywhere and is being used in virtually all industries. A survey by the Society for Human Resource Management last year found that almost 25% of employers use AI to make employment-related decisions. There are many good reasons to do so – for example, having AI sort through piles of applications or resumes may be more efficient and save on labor costs. But employers must be wary and smart about using AI to facilitate employment-related processes, as significant risks attend its use. In just one recent example, on August 9, 2023, the EEOC filed a consent decree in New York settling age discrimination claims against a company for $365,000; the company's application software had been programmed to automatically reject female applicants over the age of 55 and male applicants over the age of 60.

The EEOC has been hard at work on this issue, having published guidance on the Americans with Disabilities Act and AI, as well as on adverse impact discrimination through the use of AI in employment selection procedures under Title VII. Importantly, the EEOC has said that an employer facing a Title VII or ADA discrimination claim arising from its use of AI cannot defend itself by shifting blame to a third-party AI vendor. No matter who designed the software, if an employer uses it and it results in discrimination, the employer can be liable.

The EEOC case discussed above apparently involved “disparate treatment” (intentional discrimination). However, the EEOC’s recent guidance is more focused on “disparate impact” discrimination, which results when an employer applies a facially neutral standard for employment decisions that nevertheless has a disproportionate adverse effect on individuals in protected classes. The EEOC’s guidance encourages employers to assess the disparate impact of any AI tool they use. If an employer realizes its AI will have an adverse impact, it should take steps to reduce the impact or use a different tool. The failure to adopt an available, less discriminatory algorithm is a basis for liability. Proposed legislation in some states, such as New York, would require independent auditors to review the impact of the AI software, and would require employers to share the audit results with employees before implementing the software. Given the wide geographical net that more and more employers are casting in our hybrid work world, knowing what each state’s laws require is also critical.
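For a concrete sense of what assessing disparate impact can look like, the EEOC’s Title VII guidance discusses the long-standing “four-fifths rule” of thumb: a selection rate for one group that is less than 80% of the rate for the most-favored group may indicate adverse impact (the guidance cautions that this rule of thumb is not a legal safe harbor). The sketch below illustrates the arithmetic only; the group labels and applicant counts are invented for illustration.

```python
# Illustrative four-fifths rule check on a selection tool's outcomes.
# All group names and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical applicant pools and selection counts per group.
groups = {
    "Group A": {"applicants": 200, "selected": 60},  # rate 0.30
    "Group B": {"applicants": 150, "selected": 30},  # rate 0.20
}

rates = {
    name: selection_rate(d["selected"], d["applicants"])
    for name, d in groups.items()
}
highest = max(rates.values())  # rate of the most-favored group

for name, rate in rates.items():
    impact_ratio = rate / highest
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

Here Group B’s impact ratio is 0.20 / 0.30 ≈ 0.67, below the four-fifths (0.80) threshold, so the check would flag it for further review. In practice, such a screening calculation is only a starting point for the legal and statistical analysis the guidance contemplates.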

The EEOC’s guidance on the ADA and AI also highlights additional AI-related pitfalls for employers. Does the software provide an opportunity for the applicant or employee to request a reasonable accommodation if needed? Does the algorithm, through its predictive processes, unintentionally violate restrictions on disability-related inquiries and medical examinations? Does the tool intentionally or unintentionally screen out individuals with disabilities even though the individual may be able to do the job with a reasonable accommodation?

While an employer’s desire to use AI to increase efficiency and reduce the costs of employment-related decisions is understandable, the technology clearly carries both obvious and hidden risks. Employers must be cautious and deliberate to avoid liability arising from the use of AI.

This update is prepared for the general information of our clients and friends. It should not be regarded as legal advice. If you have questions about the issues raised here, please contact any of the attorneys in our Labor & Employment Practice Group, or the attorney with whom you normally consult.