Healthcare Audit Metrics
A Historical Approach to Determining An “Acceptable Statistical Precision” Level
January 31, 2019
One of FTI Consulting’s statistical experts, testifying in an arbitration matter on behalf of a large commercial payor, was asked about the concept of an “acceptable statistical precision” in estimating New York Medicaid overpayments during routine audits. His position was that the question of “acceptable statistical precision” cannot be answered by the study of statistics; and today, we know that it cannot even be answered by historic New York Medicaid audits.
The following questions came directly from the arbitration panel during the expert’s cross-examination and were used to try to pin down the expert on what he considered “typical” and “reliable” levels of precision in these types of healthcare audits.
- “What’s the highest/lowest precision level you’ve achieved in a sample that you’ve designed?”
- “Are they [precision levels] typically less than 20 percent?”
- “Is it your opinion that achieving an 80 percent precision in estimating an overpayment from an audit provides a reliable estimate of an overpayment? ...Assume that everything about the sampling and extrapolation was done correctly, it was random, it was executed perfectly.”
These questions implied that there is some threshold that renders an overpayment point estimate unreliable based solely on the precision level. Further, these questions implied that this threshold is somewhere below 100 percent.
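The precision levels debated above are usually expressed relative to the point estimate. A minimal sketch of how such a figure could be computed from a simple random sample of claims is below, assuming precision is defined as the half-width of a 95 percent confidence interval stated as a percentage of the extrapolated overpayment (the function name and inputs are illustrative, not part of any OMIG methodology):

```python
import math

def overpayment_precision(sample_overpayments, population_size, z=1.96):
    """Extrapolate a total overpayment from a simple random sample of
    claims and report relative precision: the half-width of the
    confidence interval as a percentage of the point estimate.
    Illustrative only; assumes a simple random sample without
    stratification."""
    n = len(sample_overpayments)
    mean = sum(sample_overpayments) / n
    var = sum((x - mean) ** 2 for x in sample_overpayments) / (n - 1)

    # Extrapolated point estimate of the total overpayment
    point_estimate = population_size * mean

    # Standard error of the estimated total, with a finite
    # population correction for sampling without replacement
    fpc = math.sqrt((population_size - n) / (population_size - 1))
    se_total = population_size * math.sqrt(var / n) * fpc

    half_width = z * se_total
    relative_precision = 100 * half_width / point_estimate
    return point_estimate, relative_precision
```

Under this definition, “80 percent precision” means the confidence interval extends 80 percent of the point estimate in either direction, so the panel’s question is whether such a wide interval can still yield a reliable point estimate.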
Putting the issue of “reliability” aside, one could potentially assess what is “commonly accepted” by looking at historical precision levels from the New York Office of the Medicaid Inspector General’s (“OMIG”) (or any other government agency’s) audits. While OMIG publishes its audit reports online, one could not conduct a detailed analysis of historical precision levels because the information had never been compiled into one clean, structured dataset – until now.