Shahir
joined May 8, 2025
  • Where do you see ChatGPT outperforming Claude, and where does it fall short?

    Both tools are widely used today, but their strengths often show up in different use cases.

    Some people prefer one for coding and structured tasks, while others lean toward the other for writing, reasoning, or longer context handling.

    From your experience:

    • Which one performs better for your daily work?
    • Where have you seen clear differences in output quality or reliability?

    Looking for real-world perspectives rather than generic comparisons.

  • How do you distinguish additive, semi-additive, and non-additive measures in practice?

    While working with data warehouses and BI dashboards, I often see confusion around additive, semi-additive, and non-additive measures.

    Conceptually, additive measures can be summed across all dimensions, semi-additive measures across only some dimensions, and non-additive measures across none. But in practical implementations, especially in financial reporting, inventory tracking, or subscription analytics, the distinctions are not always straightforward.

    For example:

    • Revenue is usually additive.

    • Account balances are semi-additive.

    • Ratios like margins are non-additive.

    However, modeling and aggregation logic can vary depending on time dimensions, business rules, and reporting requirements.

    I would love to hear from the community:

    • How do you explain these differences to business stakeholders?

    • What common mistakes have you seen when modeling these measures?

    • Are there real-world scenarios where the classification becomes tricky?

    Looking forward to practical examples and insights.
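
    One way to make the three cases above concrete is a small pandas sketch. The fact table, column names, and numbers below are purely hypothetical illustrations, not from any real model:

    ```python
    import pandas as pd

    # Hypothetical monthly fact table with one row per region per month.
    facts = pd.DataFrame({
        "month":   ["Jan", "Jan", "Feb", "Feb"],
        "region":  ["East", "West", "East", "West"],
        "revenue": [100.0, 200.0, 150.0, 250.0],
        "cost":    [60.0, 120.0, 75.0, 150.0],
        "balance": [1000.0, 2000.0, 1100.0, 2100.0],
    })

    # Additive: revenue sums meaningfully across every dimension.
    total_revenue = facts["revenue"].sum()

    # Semi-additive: balances sum across regions but NOT across time.
    # Take the closing period's balances instead of summing months.
    closing_balance = facts.loc[facts["month"] == "Feb", "balance"].sum()

    # Non-additive: a margin must be recomputed from its additive
    # components; summing or averaging per-row margins gives wrong answers.
    margin = (facts["revenue"].sum() - facts["cost"].sum()) / facts["revenue"].sum()
    ```

    The semi-additive case is where most modeling mistakes show up: summing `balance` over both months would double-count, which is exactly the kind of error a naive "SUM everything" aggregation produces.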

  • In data interviews, what do interviewers actually value more: the final answer or the way you reach it?

    I’ve been thinking about this based on my own interview experiences. Sometimes I focus a lot on getting to the “correct” answer, especially under time pressure. But I keep wondering if interviewers care more about how I break down the problem, ask questions, and explain my reasoning, even if the final solution isn’t perfect.

    For those who interview candidates, or have been through multiple data interviews, what has mattered more in your experience? Is it accuracy, structure, communication, or how you handle uncertainty?

    Looking to learn from others who’ve been on both sides of the table.

  • What is the best visualisation to quickly spot outliers in this two-variable dataset?

    You’re working with a performance dataset from a rapidly growing digital platform that serves millions of users across different regions and device types. The dataset captures two core numerical metrics for every user session: processing time and resource consumption. These two variables often move together, but not always, and the moments when they don’t align usually indicate deeper issues such as capacity overload, inefficient requests, or poorly optimized devices.

    As you explore the dataset, you notice that summary statistics alone can’t give you the clarity you need. The averages look normal, the percentiles look acceptable, yet some users are still reporting unexpected slowdowns. When you dig deeper, it becomes clear that the problematic behaviour only emerges when both numerical variables are analysed together. Patterns don’t show up in isolation; they show up in the relationship between the two.
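
    A scatter plot of the two metrics is the usual first answer, since joint outliers sit visibly off the main point cloud. As a numeric complement, the same idea can be sketched with Mahalanobis distance, which flags points far from the joint distribution even when each variable looks normal on its own. The data below is synthetic and purely illustrative:

    ```python
    import numpy as np

    # Synthetic session metrics: processing time and resource use are
    # strongly correlated, so marginal summaries of either look healthy.
    rng = np.random.default_rng(42)
    n = 200
    time_ms = rng.normal(100, 10, n)
    resource_mb = time_ms * 2 + rng.normal(0, 5, n)
    data = np.column_stack([time_ms, resource_mb])

    # Inject one joint outlier: typical processing time, but resource
    # use that breaks the relationship between the two variables.
    data = np.vstack([data, [100.0, 400.0]])

    # Squared Mahalanobis distance of each point from the joint mean.
    mean = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = data - mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    worst_session = int(np.argmax(d2))  # index of the most anomalous point
    ```

    The injected point's processing time (100 ms) is perfectly average, so a histogram or box plot of either metric alone would never surface it; only the two-variable view does.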