Checkr Eyes Government Contracts to Tackle Benefit Fraud with AI-driven Tech, but Experts Warn of Legal and Technical Pitfalls

Checkr’s push into government identity verification highlights a growing tension: AI may help curb improper payments, but automating eligibility decisions risks legal, technical, and human fallout.


San Francisco-based identity verification startup Checkr is setting its sights on a new frontier: U.S. government contracts aimed at reducing fraud and improper payments in programs such as Medicare and Social Security.

CEO Daniel Yanisse told Business Insider that the company wants to help government agencies cut “fraud and waste” by screening employees and verifying eligibility for public benefits. While no product has been formally announced, Yanisse suggested that a more seamless, AI-driven assistance system could emerge within a few years.

The ambition would mark a significant expansion beyond Checkr’s core business. The company primarily uses artificial intelligence to conduct background checks, aggregating data such as criminal records and motor vehicle reports. It counts platforms like Uber and Lyft among its major clients and reported more than $800 million in revenue in 2025, with over 120,000 customers. It was valued at more than $5.7 billion after raising $120 million in 2022.

The federal government has long struggled with improper payments across benefit programs. The Medicare Fee-for-Service program estimated $28.83 billion in improper payments in 2025, representing a 6.55% error rate. These figures include not only fraud but also payments made due to insufficient documentation or unverified income levels.

Checkr cited a study by Middesk, an identity verification platform, which found that of $1.09 trillion in Medicaid payments distributed to about 1.6 million providers between 2018 and 2024, roughly $563 million went to providers blacklisted from federal healthcare programs for criminal activity or misconduct.

Yanisse argued that verifying employment status and income is difficult for government agencies operating with fragmented systems. He also warned that advances in generative AI could exacerbate fraud risks through identity theft and deepfakes, increasing pressure for more sophisticated verification tools.

A spokesperson for Checkr described its government involvement as “still conceptual at this point.”

Automation meets legal constraints

While the fiscal stakes are large, experts caution that automating eligibility decisions for welfare or healthcare benefits carries significant legal and ethical risk.

Stuart Russell, a computer science professor at the University of California, Berkeley and a prominent AI researcher, said he is not optimistic about relying on large language models or similar systems to determine eligibility for benefits.

“An AI system of this kind, some version of an LLM, is incapable of producing veridical explanations of its decisions, making it impossible to challenge false decisions,” Russell said.

In the European Union, the General Data Protection Regulation (GDPR) restricts decisions with significant legal effects on individuals from being made solely by automated systems, a principle that could influence U.S. debates over due process and algorithmic accountability.

Baobao Zhang, a professor at Syracuse University, said past government attempts to automate benefits systems offer cautionary lessons. She emphasized the need for rigorous real-world evaluation before deployment, noting that eligibility determinations can have life-altering consequences.

Historical cautionary tales

Two high-profile cases illustrate the risks.

In Indiana, the state outsourced its welfare eligibility system to IBM in an effort to streamline and automate processing. The project collapsed in 2010 after the state sued IBM for $1.3 billion, alleging widespread processing errors that led to faulty benefit denials. Court records show the Indiana Family and Social Services Administration argued that vulnerable residents were harmed when assistance was incorrectly terminated.

In Australia, the government’s Robodebt program used an automated system to detect welfare overpayments and demand repayment. The scheme was later ruled unlawful in 2019. A royal commission found that at least three individuals died by suicide after being falsely told they owed debts. The case became a global reference point for the risks of automated public-sector decision-making.

Ifeoma Ajunwa, founding director of the AI and the Future of Work Program at Emory University, said that any adoption of AI by government agencies should involve independent advisory councils composed of technologists, social scientists, and representatives from affected communities.

“I think we need to move cautiously when delegating governmental functions to AI technologies,” Ajunwa said, adding that efficiency gains must be balanced with guardrails to protect citizens.

The broader question extends beyond Checkr. Governments worldwide are under pressure to modernize legacy IT systems, reduce fraud, and manage rising entitlement costs. AI-driven identity verification and anomaly detection tools are increasingly marketed as solutions.

Yet the trade-offs are structural. Automating verification could speed processing and reduce improper payments, but errors in eligibility determinations risk denying essential healthcare, housing, or income support to vulnerable populations.

Unlike private-sector screening — such as background checks for ride-hailing drivers — public benefits decisions implicate constitutional due process, statutory rights, and public trust. Transparency, auditability, and appeals mechanisms become central design requirements.

Pursuing government contracts would place Checkr at the intersection of AI innovation and public administration reform. The opportunity is sizable: entitlement programs account for hundreds of billions in annual outlays. But the pathway is fraught with regulatory scrutiny and reputational risk.
