April 3, 2026
Here is an uncomfortable truth about HEDIS performance in accountable care: most ACOs know which measures they are failing before the measurement year ends, and they still fail them.
It is not an information problem. Every ACO has gap reports. Most have population health platforms that surface open care gaps in colorful dashboards. The problem is the distance between identifying a gap and closing it -- a distance measured in manual phone calls, fax-based chart chases, fragmented provider workflows, and care coordinators drowning in spreadsheets.
NCQA's ongoing transition to Electronic Clinical Data Systems (ECDS) reporting for Measurement Year 2026 makes this worse before it makes it better. Six new ECDS measures land this year, including Blood Pressure Control for Patients with Diabetes (BPD-E) and Tobacco Use Screening and Cessation Intervention (TSC-E). The data plumbing required to report these measures digitally is non-trivial -- and most ACOs have not built it yet.
Meanwhile, the financial stakes keep rising. For MSSP ACOs, quality scores directly modulate the shared savings percentage. Under the APP measure set, poor quality performance can reduce an ENHANCED track ACO's sharing rate from 75% down to 40%. On a $5 million savings pool, that is a $1.75 million swing driven entirely by whether you closed enough care gaps to hit your quality thresholds.
This article breaks down the five structural reasons ACOs systematically miss HEDIS measures, and how AI agent-driven workflows are changing the math.
The standard workflow at most ACOs looks like this: a quality team pulls gap reports from their population health platform or MCO partner, distributes Excel spreadsheets to practice sites, and asks care coordinators to work through the list. By the time the report reaches the coordinator, it is already 30-60 days stale. Patients have been seen elsewhere. Gaps have been closed but not yet reflected in claims. New gaps have opened.
This batch-and-chase model creates a perpetual cycle of working outdated information. A care coordinator calls a patient about a missed A1c test, only to learn the test was completed two weeks ago at an urgent care visit that has not flowed through the claims system yet. Multiply that wasted interaction by thousands of patients, and you begin to see why manual gap closure rates rarely exceed 45-50% of addressable gaps within a measurement year.
ACO patients do not stay inside your network. A typical Medicare beneficiary attributed to an MSSP ACO sees 4-6 different providers annually, often across multiple health systems with separate EHR instances. Clinical data from these encounters -- the colonoscopy completed at a gastroenterologist outside your network, the mammogram done at an independent imaging center -- may never make it into your gap report.
For ECDS measures, this fragmentation is especially punishing. Digital reporting requires structured clinical data, not just claims. If the data lives in an EHR you do not have an interface with, the gap stays open in your reporting even though the service was rendered. NCQA's removal of the source system of record (SSoR) requirement helps, but only if you have built the data aggregation infrastructure to collect and normalize clinical data from disparate sources.
The average ACO care coordinator manages a panel of 350-500 patients with open care gaps at any given time. Their daily workflow involves reviewing gap lists, calling patients, navigating phone trees, documenting outreach attempts, updating spreadsheets, and chasing down clinical documentation from external providers. A recent industry survey found that care coordinators spend less than 30% of their time on direct patient engagement -- the rest is administrative overhead.
This is the core paradox: the people responsible for closing care gaps spend most of their time on tasks that have nothing to do with patient care. And when measurement year deadlines approach in Q4, the panic-driven "gap closure sprint" creates burnout, errors, and patient fatigue from over-outreach on already-closed gaps.
Most ACO outreach is undifferentiated. Every patient with an open gap gets the same phone call or letter, regardless of their likelihood to engage, their preferred communication channel, their scheduling constraints, or the clinical urgency of the gap. The result: outreach response rates of 15-20% for phone-based campaigns and even lower for mail.
Worse, many ACOs lack the ability to coordinate outreach across multiple gaps for the same patient. A patient with open gaps for an Annual Wellness Visit (AWV), a colorectal cancer screening, and a medication adherence review might receive three separate calls from three different coordinators -- or might receive none because each coordinator assumed someone else was handling it.
Even when a care gap is successfully closed, the confirmation loop is broken. A patient completes their diabetic eye exam, but the ophthalmologist's claim takes 45-60 days to process. The ACO's gap report still shows it as open. The care coordinator calls again. The patient is annoyed. The coordinator's time is wasted. And when the claim finally arrives, nobody updates the tracking spreadsheet because they have moved on to the next quarter's list.
This lack of real-time feedback creates phantom gaps that inflate open-gap counts, misdirect resources, and erode trust with both patients and providers.
The solution is not a better dashboard. ACOs do not have an analytics problem -- they have an execution problem. What is needed is a system that acts on data autonomously, coordinates across workflows, and operates continuously rather than in quarterly batches.
This is exactly what AI agent architectures deliver. Unlike monolithic analytics platforms, agent-based systems deploy specialized AI agents that each own a specific operational workflow -- data ingestion, gap identification, patient outreach, provider notification, closure verification -- coordinated by an orchestration layer that understands the full care plan.
Instead of waiting for quarterly claims runs, AI agents continuously ingest data from multiple sources: claims feeds, ADT notifications, EHR clinical data via FHIR APIs, pharmacy dispensing data, and lab results. A reconciliation agent matches incoming data against the active gap registry in near-real-time, automatically closing gaps when evidence of completion arrives -- regardless of the data source.
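A minimal sketch of that reconciliation step, assuming a simple in-memory gap registry (the class names, fields, and codes here are illustrative, not any real platform's API):

```python
from dataclasses import dataclass, field

@dataclass
class Gap:
    patient_id: str
    measure: str          # e.g. "COL" for colorectal cancer screening
    closing_codes: set    # procedure/result codes that count as completion
    status: str = "open"
    closed_by: str = ""   # which data source supplied the closing evidence

@dataclass
class ReconciliationAgent:
    registry: list = field(default_factory=list)

    def ingest(self, patient_id: str, code: str, source: str) -> list:
        """Match one incoming event (claim, lab, ADT, FHIR resource)
        against open gaps and close any gap it satisfies."""
        closed = []
        for gap in self.registry:
            if (gap.status == "open"
                    and gap.patient_id == patient_id
                    and code in gap.closing_codes):
                gap.status = "closed"
                gap.closed_by = source  # source-agnostic: any feed can close
                closed.append(gap)
        return closed
```

The point of the sketch is the last comment: closure is driven by evidence matching, not by which pipe the evidence arrived through, so an urgent-care lab result closes the gap just as well as a claim.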
At Union Health, deploying continuous data reconciliation reduced phantom open gaps by 28% within the first 90 days. That alone freed up coordinator capacity equivalent to two full-time staff who had been chasing already-closed gaps.
An outreach agent does not just generate a call list. It stratifies patients by gap urgency, engagement probability, preferred communication channel, and scheduling availability. A patient who consistently responds to SMS gets a text with a direct scheduling link. A patient with transportation barriers gets connected to a community health worker. A patient who has ignored three phone calls gets a different approach entirely -- perhaps a provider-initiated conversation at their next scheduled visit.
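The routing logic above can be sketched as a simple decision function. The attribute names, thresholds, and channel labels are illustrative assumptions, not a production engagement model:

```python
def choose_outreach(patient: dict) -> str:
    """Pick an outreach channel from simple patient attributes.
    Escalation checks run first so repeated non-responders are not
    called a fourth time."""
    if patient.get("ignored_calls", 0) >= 3:
        return "provider_at_next_visit"      # change the approach entirely
    if patient.get("transport_barrier"):
        return "community_health_worker"
    if patient.get("sms_responder"):
        return "sms_with_scheduling_link"
    return "phone_call"                      # default channel
```

A real stratification model would score engagement probability rather than branch on flags, but the ordering principle is the same: rule out approaches that have already failed before defaulting to another phone call.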
At PBACO, AI-driven outreach stratification increased Annual Wellness Visit (AWV) completion rates by 31% compared to their prior undifferentiated phone campaign. The key was not reaching more patients -- it was reaching the right patients through the right channel at the right time.
For gaps that require provider action -- ordering a screening, completing a documentation template, submitting a referral -- an AI agent can push structured notifications directly into the provider's workflow. Rather than sending a generic fax listing 47 patients with open gaps (which gets filed in a pile and ignored), the system surfaces the specific gap at the point of care, when the patient is in the exam room.
This shift from batch reporting to point-of-care notification changes provider engagement fundamentally. Providers close gaps because the information is in front of them at the moment it matters, not because they worked through a spreadsheet on a Friday afternoon.
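One way to sketch that filtering step, assuming gap data as simple tuples and a daily schedule mapping patients to providers (all shapes here are hypothetical):

```python
def point_of_care_alerts(open_gaps, todays_schedule):
    """Surface gap alerts only for patients on today's schedule,
    grouped by provider, instead of faxing the full gap list.
    open_gaps: (patient_id, measure, action) triples.
    todays_schedule: patient_id -> provider_id."""
    alerts = {}
    for patient_id, measure, action in open_gaps:
        provider = todays_schedule.get(patient_id)
        if provider is not None:  # only patients being seen today
            alerts.setdefault(provider, []).append(
                {"patient": patient_id, "measure": measure, "action": action})
    return alerts
```

The 47-patient fax becomes a two-line alert for the one patient in the exam room; in practice the alert would be delivered through the EHR's in-workflow notification channel rather than returned as a dict.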
Once a gap-closing service is rendered, a verification agent monitors for confirmation across all data sources. It cross-references the scheduled appointment with an ADT notification confirming the visit occurred, matches the visit to a claim or clinical document confirming the service was performed, and updates the gap registry. If confirmation does not arrive within the expected window, it triggers a follow-up workflow.
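The core of that follow-up trigger is a windowed decision, sketched here with an assumed 60-day default that mirrors typical claims lag (the window and action labels are illustrative):

```python
from datetime import date, timedelta

def verification_action(service_date: date, confirmed: bool,
                        today: date, window_days: int = 60) -> str:
    """Decide what the verification agent should do for one rendered
    service: close on confirmation, chase after the window expires,
    otherwise keep waiting for the claim or clinical document."""
    if confirmed:
        return "close_gap"
    if today - service_date > timedelta(days=window_days):
        return "trigger_followup"   # e.g. chart chase or provider query
    return "keep_waiting"
```

Because the agent only escalates after the expected confirmation window, coordinators stop re-calling patients whose evidence is simply still in transit.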
This eliminates the phantom gap problem entirely and gives ACO leadership a real-time, accurate view of their quality position at any point in the measurement year -- not a 90-day-old approximation.
NCQA's push toward fully digital quality reporting through ECDS is not just a reporting format change -- it is a fundamental shift in what counts as evidence of gap closure. Under ECDS, structured clinical data elements (lab values, vital signs, medication lists) must flow electronically. Claims alone are insufficient for many measures.
For Measurement Year 2026, the new ECDS measures include Blood Pressure Control for Patients with Diabetes (BPD-E) and Tobacco Use Screening and Cessation Intervention (TSC-E), among the six noted above -- each requiring structured clinical data elements that claims feeds alone cannot supply.
ACOs that have not invested in FHIR-based data exchange, clinical data normalization, and automated quality measurement will find themselves unable to report these measures at all -- let alone perform well on them.
This is where the infrastructure investment pays compound returns. The same data ingestion and reconciliation pipeline that powers AI-driven gap closure also feeds ECDS reporting. You build it once for operational execution, and you get digital quality reporting as a byproduct.
Across our customer base, the pattern is consistent: Union Health cut phantom open gaps by 28% in its first 90 days, and PBACO lifted AWV completion rates by 31% over its prior undifferentiated phone campaign.
The common thread: none of these results came from better analytics or prettier dashboards. They came from automating the execution layer -- the thousands of discrete actions between identifying a gap and confirming it is closed.
For ACOs serious about HEDIS performance in 2026 and beyond, here is the infrastructure stack that matters:
Ingest claims, clinical data (via FHIR), ADT feeds, pharmacy data, and lab results into a single normalized data model. This is the foundation for everything else. Without it, you are running gap reports against incomplete data and making decisions in the dark.
Run measure logic continuously against the unified data platform. Every new data element -- a lab result, a claim, an ADT notification -- should trigger re-evaluation of applicable measures for the affected patient. Gaps should open and close in near-real-time, not quarterly.
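Event-triggered evaluation can be sketched as a small registry of measure functions re-run per patient on each new data element. The measure ID, record shape, and the 140/90 control threshold are illustrative, not a rendering of official HEDIS measure logic:

```python
from typing import Callable

class MeasureEngine:
    """Re-run measure logic for one patient whenever new data arrives,
    instead of batch-recomputing the whole population quarterly."""

    def __init__(self):
        self.measures: dict[str, Callable[[dict], bool]] = {}

    def register(self, measure_id: str, logic: Callable[[dict], bool]):
        self.measures[measure_id] = logic

    def on_new_data(self, patient_record: dict) -> dict:
        # Evaluate every registered measure against the updated record
        # for the affected patient only; returns measure_id -> compliant.
        return {mid: logic(patient_record)
                for mid, logic in self.measures.items()}

# Illustrative BPD-E-style check: blood pressure under 140/90.
engine = MeasureEngine()
engine.register("BPD-E", lambda p: p.get("systolic", 999) < 140
                                   and p.get("diastolic", 999) < 90)
```

A new BP reading flows in, `on_new_data` runs for that one patient, and the gap registry flips in near-real-time -- no quarterly batch required.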
Deploy specialized AI agents for outreach, provider notification, scheduling, and closure verification. Each agent operates autonomously within its domain but is coordinated by an orchestration layer that prevents duplicate outreach, prioritizes high-impact gaps, and adapts strategies based on patient response patterns.
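One concrete piece of that coordination -- preventing the three-calls-from-three-coordinators problem described earlier -- is bundling all of a patient's open gaps into a single outreach task. A minimal sketch, assuming gaps arrive as (patient, measure) pairs:

```python
from collections import defaultdict

def bundle_outreach(open_gaps):
    """Group open gaps by patient so each patient gets one coordinated
    touch covering every gap, rather than one contact per gap.
    open_gaps: iterable of (patient_id, measure) pairs."""
    tasks = defaultdict(list)
    for patient_id, measure in open_gaps:
        tasks[patient_id].append(measure)
    return dict(tasks)  # patient_id -> all measures to address in one touch
```

An orchestration layer would layer prioritization and channel selection on top, but even this grouping step eliminates duplicate outreach and the "someone else is handling it" failure mode.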
Feed the same data platform into ECDS-compliant quality reporting. Track your quality position monthly, not annually. Identify underperforming measures and practice sites early enough to intervene -- not in December when it is too late.
HEDIS care gap closure is not a measurement problem. It is an operational execution problem that most ACOs are trying to solve with measurement tools. The ACOs that consistently hit their quality thresholds -- and capture the full shared savings multiplier -- have moved beyond dashboards to automated, agent-driven workflows that close gaps continuously throughout the measurement year.
With NCQA's ECDS transition raising the data infrastructure bar and CMS tightening the link between quality performance and financial outcomes, the gap between ACOs that automate and ACOs that spreadsheet is about to become a chasm.
The measurement year is already one quarter gone. The gaps you close in Q2 determine whether you hit your quality targets in December.
Gautam Chowdhary is CTO of Zynix AI, where we build the AI agent infrastructure that ACOs use to automate care gap closure and quality reporting. If your gap closure rate is stuck below 50%, let's talk.