AI Lawsuits a Possibility for HR Professionals
Artificial intelligence (AI) continues to make inroads into the world of healthcare, and it’s got nothing to do with the latest Terminator movie (how many times can Arnold be ba-ack, anyway?!). Increasingly, HR professionals are using AI in business decisions involving recruiting, employee performance evaluations, promotions and disciplinary action. Doing so could spur a rise in AI lawsuits.
AI Will Spark Legal Challenges
There are pros and cons associated with any new technology, and AI is no exception. Businesses face potential lawsuits stemming from regulatory noncompliance, data privacy lapses, ethics issues and more.
Bradford Newman, chair of the Paul Hastings law firm’s employee mobility and trade secret practice, explains, “AI tools are going to drive decisions like who ought to be promoted and who should be fired. When you have algorithms making decisions that impact humans in one of their most essential life functions—which is their work—there are going to be issues of fairness and transparency and legal challenges.”
Healthcare organizations now have tools to extract far larger amounts of data than ever before, but doing so presents both technological and legal challenges. Some companies are establishing central data councils and best-practice policies to help navigate both. Lawyers must ensure a company’s AI systems comply with anti-discrimination laws and that data privacy rules are applied consistently throughout the organization.
Mitigating risk often involves partnering with outside legal resources to limit the likelihood of disputes between practices and patients. Keeping abreast of ever-changing laws, such as the Artificial Intelligence Video Interview Act, which takes effect in Illinois on January 1, 2020, is crucial. This law, the first of its kind in the nation, will regulate AI use in the hiring process. Employers will be required to tell applicants that algorithms will be used to analyze their interview videos, explain how the AI program works and which characteristics it uses to assess job suitability, and obtain the applicant’s consent before moving forward in the application process. Similar bills have been sponsored by senators in New Jersey and Oregon.
These legislative measures would require companies to assess whether their AI algorithms are biased or discriminatory and whether they pose privacy or security risks to patients. The problem is illustrated by Amazon’s recent attempt to automate its recruiting platform, an effort that inadvertently taught the algorithm to prefer male candidates. Machine learning will have to be refined to prevent hiring bias moving forward. HR professionals won’t be able to simply “blame it on the machine”; they’ll have to defend their use of AI and its results throughout the recruiting process or risk those lawsuits.
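To make “measuring bias” concrete, here is a minimal sketch of the kind of disparate-impact check an HR analytics team might run on an algorithm’s hiring outcomes. The four-fifths threshold, the group labels and the sample data are illustrative assumptions, not requirements of the bills discussed above.

```python
# Illustrative sketch only: a simple adverse-impact check on hiring outcomes.
# The groups, counts and 0.8 threshold are assumptions for illustration.

from collections import Counter

def selection_rates(candidates):
    """Compute the selection (hire) rate for each group in the candidate pool."""
    applied = Counter(c["group"] for c in candidates)
    hired = Counter(c["group"] for c in candidates if c["hired"])
    return {group: hired[group] / applied[group] for group in applied}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 (the common "four-fifths rule" of thumb) is a typical
    signal that an algorithm's outcomes deserve closer review.
    """
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes produced by a screening algorithm.
    candidates = (
        [{"group": "men", "hired": True}] * 60
        + [{"group": "men", "hired": False}] * 40
        + [{"group": "women", "hired": True}] * 35
        + [{"group": "women", "hired": False}] * 65
    )
    rates = selection_rates(candidates)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A check like this is only a first pass; documenting when it was run and what was done about flagged results is what an employer would likely point to if its use of AI were ever challenged.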