Why do my dashboards tell two different stories?

Vidhi Shah
Updated on December 14, 2025

I’m running into a recurring issue where two of our internal dashboards show conflicting numbers for the same KPI. One pulls from a cleaned reporting layer, and the other queries the raw tables directly. Both were built by different teams at different times. When stakeholders ask which one is correct, I genuinely don’t know how to explain the gap without sounding like “it depends.”
How do you approach resolving these mismatches and establishing a single source of truth without forcing the entire org to rebuild everything from scratch?

 
on December 14, 2025

This usually isn’t about which dashboard is “right”; it’s about why they’re different. I start by tracing the full lineage for both: source tables, transformations, filters, and assumptions baked into each query. Almost every mismatch comes down to a small definition gap, like how cancellations, retries, or test data are handled. Once that’s clear, I align stakeholders on a single, canonical definition of the KPI. That definition becomes the reference point, not the dashboards themselves. Then I update queries or add a shared metrics layer so both dashboards inherit the same logic. You don’t need to rebuild everything; you just need to govern the definitions.
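To make “govern the definitions” concrete, here’s a minimal sketch of what a shared metrics layer can look like: one module holding the canonical KPI query that every dashboard imports instead of embedding its own SQL. The orders table, status values, and is_test_account flag are illustrative assumptions, not your actual schema.

```python
# Minimal sketch of a shared metrics layer: one canonical KPI definition
# that every dashboard imports instead of embedding its own SQL.
# Table and column names (orders, status, is_test_account) are assumptions
# for illustration, not a real schema.

CANONICAL_COMPLETED_ORDERS = """
    SELECT
        DATE(o.created_at) AS day,
        COUNT(*)           AS completed_orders
    FROM orders AS o
    WHERE o.status = 'completed'        -- cancellations excluded by definition
      AND o.is_test_account = FALSE     -- test data excluded by definition
    GROUP BY DATE(o.created_at)
"""

def completed_orders_query() -> str:
    """Return the single governed definition of the KPI.

    Both dashboards call this (or a view/model built from it), so a
    definition change propagates to every consumer at once.
    """
    return CANONICAL_COMPLETED_ORDERS
```

Once a change request comes in (say, “count retried orders once”), it lands in this one place, and the two dashboards can no longer drift apart silently.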

on December 12, 2025

The cleanest way to resolve conflicting KPIs across dashboards is to shift the conversation away from the dashboards themselves and toward the underlying data lineage and metric definitions. When two teams pull numbers from different layers (one from raw tables, one from a curated reporting model), they’re not just visualizing data differently; they’re answering slightly different questions without realizing it. The first step is to trace both pipelines back to their origins: understand which tables they use, what transformations are applied, and where filters, joins, or timestamp logic diverge. This almost always reveals the root of the mismatch.

From there, compare the definitions instead of the SQL. Subtle differences (how “active users” are counted, whether test accounts are excluded, whether refunds are reversed) lead to major output gaps. Once you understand both perspectives, evaluate which version best reflects the real-world business process. Sometimes the curated layer is more precise; sometimes the raw layer is more aligned with actual operations.
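If it helps, here’s a hedged pandas sketch of that first step: pull the same KPI from both layers and diff it day by day, so the conversation starts from where the numbers diverge rather than from whose dashboard wins. The `kpi` column name and the stand-in data are assumptions for illustration.

```python
# Sketch: compare the two pipelines' daily KPI values to localize the gap.
# Assumes each source can be loaded into a DataFrame with a `day` column
# and a `kpi` value column; the sample data below is stand-in only.
import pandas as pd

def diff_kpi(raw: pd.DataFrame, curated: pd.DataFrame) -> pd.DataFrame:
    """Outer-join daily KPI values from both sources and compute the gap."""
    merged = raw.merge(
        curated, on="day", how="outer",
        suffixes=("_raw", "_curated"),
    ).fillna(0)
    merged["gap"] = merged["kpi_raw"] - merged["kpi_curated"]
    # Days with the largest absolute gap point at where definitions diverge
    # (e.g., a refund-heavy day, or a backfill that hit only one pipeline).
    return merged.sort_values("gap", key=lambda s: s.abs(), ascending=False)

# Example usage with stand-in numbers:
raw = pd.DataFrame({"day": ["2025-12-01", "2025-12-02"], "kpi": [110, 95]})
curated = pd.DataFrame({"day": ["2025-12-01", "2025-12-02"], "kpi": [100, 95]})
print(diff_kpi(raw, curated))
```

A diff like this usually narrows “the dashboards disagree” down to a handful of specific days or segments, which is a much easier conversation to have with stakeholders.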

on December 8, 2025

The first step is to stop treating the dashboards as the problem and start investigating the data lineage instead. Conflicting KPIs almost always come down to definitions, transformations, or timing differences, not bad intent. So before choosing which number is “right,” map how each dashboard arrives at its final value: What tables are used? What filters are applied? Are late-arriving events handled differently? Does one tool calculate at query time while the other relies on pre-aggregated logic? Once you trace the full path end-to-end, the gap usually reveals itself as a definitional mismatch rather than a data error.
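As a concrete starting point, this is roughly how I’d turn those lineage questions into a row-level audit: find the records the raw layer counts that the curated layer drops, then bucket them by probable cause. A sketch only; the `order_id` key, the `status`/`is_test_account` columns, and datetime `created_at`/`loaded_at` fields are all hypothetical names.

```python
# Sketch: bucket rows present in the raw layer but missing from the curated
# layer by likely cause. Column names are assumptions for illustration, and
# created_at / loaded_at are assumed to be datetime-typed columns.
import pandas as pd

def explain_gap(raw: pd.DataFrame, curated: pd.DataFrame) -> pd.Series:
    """Count rows missing from curated, grouped by probable exclusion reason."""
    missing = raw[~raw["order_id"].isin(curated["order_id"])].copy()

    def bucket(row) -> str:
        if row["is_test_account"]:
            return "test account filtered out"
        if row["status"] == "cancelled":
            return "cancellation excluded"
        if row["loaded_at"].date() > row["created_at"].date():
            return "late-arriving event (missed the batch window)"
        return "unexplained -- investigate joins/dedup logic"

    return missing.apply(bucket, axis=1).value_counts()
```

Even a rough breakdown like this moves the discussion from “which dashboard is wrong” to “here are the three rules the two pipelines apply differently.”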
