Miley
joined April 28, 2025
  • How can I transform left/right injury data into injured vs uninjured categories in Python?

    I’m new to programming and working on a dataset of injury measurements from force plates. The data is currently split into left and right sides, with metrics like left peak braking force, right peak braking force, and combined averages.

    For my analysis, I need to convert this structure into “injured” and “uninjured” categories instead of left and right. This means dynamically identifying which side is injured for each record, then reorganizing the values so that all relevant metrics reflect injured vs uninjured rather than left vs right.

    I’m looking for a clean and efficient way to handle this transformation using Python (preferably with pandas). Ideally, the solution should:

    • Separate left and right values based on injury status
    • Reassign them into injured/uninjured columns
    • Keep the dataset structured for further analysis

    What would be the best approach to achieve this?
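
    One common pandas approach is a vectorized swap with `numpy.where`. The sketch below assumes hypothetical column names (`injured_side`, `left_peak_braking_force`, `right_peak_braking_force`) and made-up example values; adapt the names to the real dataset:

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical example data: column names and values are assumptions for illustration.
    df = pd.DataFrame({
        "athlete": ["A", "B", "C"],
        "injured_side": ["left", "right", "left"],
        "left_peak_braking_force": [310.0, 295.0, 280.0],
        "right_peak_braking_force": [325.0, 270.0, 300.0],
    })

    # For each left/right metric pair, pick values row-wise based on injury status.
    metrics = ["peak_braking_force"]
    injured_is_left = df["injured_side"].eq("left")
    for m in metrics:
        left, right = df[f"left_{m}"], df[f"right_{m}"]
        df[f"injured_{m}"] = np.where(injured_is_left, left, right)
        df[f"uninjured_{m}"] = np.where(injured_is_left, right, left)

    # Drop the original side-based columns so further analysis uses injured/uninjured only.
    df = df.drop(columns=[f"{side}_{m}" for side in ("left", "right") for m in metrics])
    print(df)
    ```

    Adding more metric pairs is just a matter of extending the `metrics` list, and the result stays a flat DataFrame ready for grouping or plotting.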

  • How to flatten nested Python lists without recursion limits?

    Hi all,
    I’m working on cleaning up some dataset imports, and I need to flatten nested lists of unknown depth. I tried using a recursive function and also attempted itertools.chain.from_iterable, but I’m stuck when depth varies.

    Here’s what I’ve tried:

        def flatten(lst):
            result = []
            for x in lst:
                if isinstance(x, list):
                    result.extend(flatten(x))
                else:
                    result.append(x)
            return result

    This works but is slow for really deep nesting. Are there faster or more Pythonic ways to handle this? Any library recommendations?
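
    One non-recursive sketch replaces the call stack with an explicit stack of iterators, so arbitrarily deep nesting never hits Python's recursion limit (the function name `flatten_iterative` is just illustrative):

    ```python
    def flatten_iterative(nested):
        """Flatten arbitrarily deep lists without recursing."""
        stack = [iter(nested)]  # explicit stack of iterators replaces call-stack frames
        result = []
        while stack:
            for x in stack[-1]:
                if isinstance(x, list):
                    stack.append(iter(x))  # descend into the sublist
                    break
                result.append(x)
            else:
                stack.pop()  # current iterator exhausted; return to the parent level
        return result

    print(flatten_iterative([1, [2, [3, [4]], 5], 6]))  # → [1, 2, 3, 4, 5, 6]
    ```

    Note that `itertools.chain.from_iterable` only flattens one level, which is why it stalls when depth varies; the stack approach handles any depth in a single pass.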

    Thanks!

  • Non-IT Background – Should I Start with Data Analytics or Jump into Data Science?

    Hey everyone! I’m from a Non-IT background, but I’ve been exploring the world of Data Analytics and Data Science lately. My long-term goal is to become a Data Scientist or work in AI/ML, and I’ve picked up some basics through self-study.

    However, I’m confused about where to begin seriously: Some people say I should start with Data Analytics (Excel, SQL, dashboards, etc.) to build a solid foundation, while others suggest I can dive directly into Python, statistics, ML, and modeling even as a non-tech person.

    If you’ve taken either of these routes, I’d love your input:

    • Does starting with analytics help when transitioning into AI/ML later?

    • Or is it better to directly jump into core Data Science concepts if I already know the basics?

    • Also, how important are tools like Power BI/Tableau vs learning Python, ML algorithms, and statistics in the early phase?

  • At what point did you realize your BI setup was answering the wrong questions?

    Most BI systems start with good intent: track performance, improve visibility, support decisions. But over time, dashboards often grow around what’s easy to measure rather than what actually matters.

    Teams keep adding metrics, leadership reviews charts every week, yet critical business conversations stay unchanged. Sometimes the real insight is missing, buried under perfectly accurate but low-impact numbers.

    Have you experienced a moment where you stepped back and realized your BI was technically correct, but strategically off?

  • Future of Data Science: Moving Away From Modeling and Toward Problem Framing?

    Data science as a discipline is shifting faster than most people realize. A decade ago, the core skill set revolved around building models, tuning hyperparameters, crafting feature pipelines, and selecting algorithms. But with the rise of AutoML, pretrained foundation models, vector databases, and agentic AI systems, much of the “technical heavy lifting” is becoming automated or abstracted away.

    Today, the competitive advantage is less about who can write the best model from scratch and more about who can frame the right problem, define meaningful metrics, interpret model outputs responsibly, design data loops, and understand the business impact of predictions. Even the most complex models (LLMs, multimodal architectures, time-series forecasters) can now be deployed with pre-built frameworks or API calls.

    This shift raises an important question about the future of the field:
    If modeling becomes commoditized, does the true value of a data scientist lie in strategic thinking rather than technical implementation?
