AI in US Healthcare Outsourcing – It's Time to Check Your Contracts
If you are a health insurer that outsources your claims processing functions, you are likely leveraging your service providers' artificial intelligence (AI) and have been for quite some time. The rapid growth in generative AI and concerns about its use are cause to re-examine your outsourcing agreements.
Evolving AI contracting norms and government oversight mean that legacy agreements with service providers that rely heavily on AI likely carry legal and operational risks, particularly for healthcare companies. Large business process as a service (BPaaS) outsourcing arrangements, which became popular over the last decade, did not envisage generative AI or current market conditions, and they need to be revisited to address those gaps and the attendant risks.
In the last six months, multiple class actions have been filed in the United States against health insurers by members who allege their claims were wrongfully denied due to AI algorithms. Federal and state governments are considering legislation to establish oversight of AI in health insurance claims processing. This includes requiring companies to publicly disclose the use of AI algorithms and to submit such algorithms and training datasets to regulators. Given these developments, it is imperative that outsourcing agreements explicitly address how service providers are permitted to use AI and how the risks presented by evolving technologies are allocated.
What to Consider
Healthcare organizations that have AI embedded in critical business processes (outsourced and otherwise) need to consider:
Determining where in the claims and appeals process AI tools are used and where human oversight begins (or should begin).
If not already addressed, contracts should explicitly require the service provider to have a qualified human review AI-driven claims and appeals determinations and confirm that the algorithm consistently reaches the correct conclusion (an illustrative sketch of such a review gate appears after this list).
Evaluating the datasets used to train AI tools.
In the United States, individually identifiable medical data is protected health information (PHI) under the Health Insurance Portability and Accountability Act (HIPAA) and must be de-identified as HIPAA requires (for example, under the Safe Harbor or Expert Determination methods) before it is input into an AI tool. Contractual security requirements should be updated to specify the data that service providers are (and are not) permitted to input into AI tools (an allow-list filtering sketch appears after this list).
Amending outsourcing contract terms to add or update the following provisions:
- Revise scope descriptions and service levels to account for which processes are performed by AI and which are performed by humans.
- Restructure pricing to move from full-time-equivalent (FTE)-based models to other input measures (such as the number of claims processed) or to outcomes achieved.
- Reallocate legal liability, if appropriate. At a minimum, the service provider should be liable for damages and third-party claims resulting from inadequate security measures or otherwise attributable to the provider or its subcontractors. Consider whether service providers should also be liable for errors made by their AI tool(s) and for violations of AI-related laws as those laws evolve.
- Increase the minimum required amounts of cyber insurance, and update the specific coverage types, if those details are out of date.
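To make the human-oversight requirement concrete, the following is a minimal sketch, in Python, of the kind of review gate described above. Everything in it is hypothetical (the AiAssessment record, its fields, and the thresholds are invented for illustration); it shows one possible policy, under which no AI recommendation, and in particular no denial, becomes a final determination without a qualified human reviewer. It does not depict any vendor's actual system.

```python
# Hypothetical human-review gate for AI-assisted claim determinations.
# All names and thresholds are illustrative placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class AiAssessment:
    claim_id: str
    recommendation: str  # "approve" or "deny"
    confidence: float    # model-reported confidence, 0.0 to 1.0

def requires_human_review(assessment: AiAssessment,
                          confidence_floor: float = 0.95) -> bool:
    """Return True if the AI output must be routed to a qualified human.

    In this sketch, every recommended denial and any low-confidence
    recommendation goes to a human reviewer; a real policy or contract
    would define the actual rules and thresholds.
    """
    if assessment.recommendation == "deny":
        return True  # denials always require human sign-off here
    return assessment.confidence < confidence_floor

def finalize(assessment: AiAssessment) -> str:
    if requires_human_review(assessment):
        return f"claim {assessment.claim_id}: queued for human reviewer"
    return f"claim {assessment.claim_id}: auto-approved and logged for audit"

print(finalize(AiAssessment("C-1001", "deny", 0.99)))     # queued for human reviewer
print(finalize(AiAssessment("C-1002", "approve", 0.97)))  # auto-approved
```

The design choice worth noting is that the gate is a hard rule applied to every determination, so the contract can point to a specific, auditable control rather than a general promise of oversight.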
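On the data side, here is a similarly minimal sketch of allow-list filtering applied to a claim record before it reaches an AI tool. The field names are hypothetical, and this is not a complete HIPAA de-identification implementation: Safe Harbor requires removing all eighteen categories of identifiers, and Expert Determination requires a formal statistical analysis. The sketch only illustrates the underlying design choice.

```python
# Hypothetical allow-list filter run before a claim record is sent to an
# AI tool. Field names are invented; this alone does not satisfy HIPAA's
# Safe Harbor method, which covers eighteen identifier categories.

# Fields presumed shareable under this hypothetical schema.
ALLOWED_FIELDS = {"procedure_code", "diagnosis_code", "claim_amount", "state"}

def deidentify(record: dict) -> dict:
    """Keep only explicitly allow-listed fields; drop everything else."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw_claim = {
    "member_name": "Jane Doe",      # direct identifier, must never reach the tool
    "member_ssn": "000-00-0000",    # direct identifier (placeholder value)
    "date_of_birth": "1980-04-02",  # a HIPAA identifier category
    "procedure_code": "99213",
    "diagnosis_code": "E11.9",
    "claim_amount": 182.50,
    "state": "OH",
}

print(deidentify(raw_claim))
# {'procedure_code': '99213', 'diagnosis_code': 'E11.9', 'claim_amount': 182.5, 'state': 'OH'}
```

An allow-list is the safer pattern here: fields that are new or unknown are excluded by default, whereas a block-list would let them pass through until someone remembers to add them.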
Healthcare organizations should initiate AI governance and contracting efforts now to avoid having to make significant updates on short notice after a change in law or a breach by a service provider.