• @ugo@feddit.it · 2 points · 2 months ago

    Sorry, this comment is giving me mental whiplash, so I am either ignorant, subject to non-standard circumstances, or both.

    My personal experience is that developers (the decent ones at least) know hardware better than IT people. But maybe we mean different things by “hardware”?

    You see, I work as a game dev, so a good chunk of the technical part of my job is thinking about things like memory layout, cache locality, memory access patterns, branch-predictor behavior, cache lines, false sharing, and so on. I know very little about hardware, and yet all of the above are things I need to keep in mind, consider, and know to at least some usable extent to do my job.
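
    For readers outside game dev, here is a minimal, self-contained sketch of two of the concerns listed above: traversal order that respects cache lines, and padding to avoid false sharing between threads. It is illustrative only (an editor’s example, not part of the original comment), and all names in it are made up.

    ```cpp
    #include <cstddef>
    #include <vector>

    // Memory access patterns: walking a row-major matrix row by row touches
    // consecutive addresses, so every cache line fetched is fully used.
    double sum_row_major(const std::vector<double>& m,
                         std::size_t rows, std::size_t cols) {
        double total = 0.0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                total += m[r * cols + c];   // sequential, cache-friendly
        return total;
    }

    // The same data walked column by column jumps cols * sizeof(double) bytes
    // per step and wastes most of every cache line it pulls in.
    double sum_col_major(const std::vector<double>& m,
                         std::size_t rows, std::size_t cols) {
        double total = 0.0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                total += m[r * cols + c];   // strided, cache-hostile
        return total;
    }

    // False sharing: per-thread counters packed next to each other share a
    // cache line, so writes from different cores invalidate each other.
    // Padding each counter out to a full (typically 64-byte) line avoids that.
    struct alignas(64) PaddedCounter {
        long value = 0;
    };
    ```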

    IT, meanwhile, is mostly concerned with keeping the idiots from shooting the company in the foot, which means rolling out software that lets them diagnose, reset, and install or uninstall things across entire fleets of computers at once. It also just so happens that this software is often buggy and burns 99% of your CPU in spin loops (they had to roll that one back, of course), or the antivirus rules don’t apply on your system for whatever reason, so the antivirus scans every object file the compiler generates even though they land in a whitelisted directory, turning a ten-minute rebuild into an hour.

    They are also the ones that force me to change my (already unique and internal) password every few months for “security”.

    So yeah, when you say that developers often have no idea how the hardware works, the chief questions that come to mind are

    1. What kinda dev doesn’t know how hardware works to at least a usable extent?
    2. What kinda hardware are we talking about?
    3. What kinda hardware would an IT person need to know about? Network gear?
    • @Eccitaze · 5 points · 2 months ago

      When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview, in my experience: how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality. It may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine on an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media is when it dies.

      Then, once the program is in production, it turns out that same program is making a bunch of random I/O calls that could be optimized into a more sequential request, or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
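
      To make the random-I/O point concrete, here is a minimal sketch (an editor’s illustration, not @Eccitaze’s code, with an invented fixed record size) of the same batch of record reads issued in arbitrary order versus sorted by offset first, so a spinning-disk array sees one mostly sequential sweep instead of head thrashing:

      ```cpp
      #include <algorithm>
      #include <cstdint>
      #include <fstream>
      #include <vector>

      constexpr std::size_t kRecordSize = 4096;   // illustrative record size

      // Naive: seek to each offset in whatever order the caller produced them.
      // On spinning disks this degenerates into random I/O and head thrashing.
      std::vector<char> read_records(std::ifstream& file,
                                     const std::vector<std::uint64_t>& offsets) {
          std::vector<char> out(offsets.size() * kRecordSize);
          for (std::size_t i = 0; i < offsets.size(); ++i) {
              file.seekg(static_cast<std::streamoff>(offsets[i]));
              file.read(out.data() + i * kRecordSize, kRecordSize);
          }
          return out;
      }

      // Friendlier: sort the offsets first so the drive services one mostly
      // sequential pass (results come back in offset order). A real system
      // would also coalesce adjacent records into larger reads, or batch the
      // whole set into a single request.
      std::vector<char> read_records_batched(std::ifstream& file,
                                             std::vector<std::uint64_t> offsets) {
          std::sort(offsets.begin(), offsets.end());
          return read_records(file, offsets);
      }
      ```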

      Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is in business software development, especially in-house development. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else (performance, security, cleanliness, resource optimization) is given bare lip service at best.

      • @ugo@feddit.it · 1 point · 2 months ago

        Thank you for the explanation, now I understand the context of the original message. It’s definitely an entirely different environment, especially the kind of software that runs on a bunch of servers.

        I built business software before becoming a game dev, though still the kind that runs on-device rather than on a server. Even then, I always strove to write the most correct and performant code I could. Of course, I still wrote bugs, like the time a release broke the app for a subset of users because one of the database migrations didn’t apply to a real-world use case. Unfortunately, that one came down to us not having access to real-world databases or good enough surrogates, due to customer policy (we were writing unification software of sorts; up until this project every customer could give different meanings to each database column, since they were just freeform text fields, and some customers even changed the schema). The migrations ran perfectly on every test database we did have access to, but even then I did the obvious: rolled the release back, added another test database that replicated the failing real-world use case, fixed the failing migrations, and re-released.
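
        A rough sketch of that defensive migration style (illustrative only; the original stack isn’t named, so SQLite and the orders/status names here are stand-ins): check the migration’s precondition and wrap it in a transaction, so a customer-modified schema causes a clean skip or rollback instead of a half-applied change.

        ```cpp
        #include <sqlite3.h>
        #include <string>

        // True if `table` already has a column named `column`.
        // PRAGMA table_info reports the column name in result column 1.
        bool column_exists(sqlite3* db, const std::string& table,
                           const std::string& column) {
            const std::string sql = "PRAGMA table_info(" + table + ");";
            sqlite3_stmt* stmt = nullptr;
            if (sqlite3_prepare_v2(db, sql.c_str(), -1, &stmt, nullptr) != SQLITE_OK)
                return false;
            bool found = false;
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                const unsigned char* name = sqlite3_column_text(stmt, 1);
                if (name && column == reinterpret_cast<const char*>(name)) {
                    found = true;
                    break;
                }
            }
            sqlite3_finalize(stmt);
            return found;
        }

        // Apply the migration only if its precondition holds, inside a
        // transaction, so failure rolls back cleanly. ("orders" and "status"
        // are hypothetical names.)
        bool migrate_add_status_column(sqlite3* db) {
            if (column_exists(db, "orders", "status")) return true;  // already applied
            if (sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr) != SQLITE_OK)
                return false;
            if (sqlite3_exec(db, "ALTER TABLE orders ADD COLUMN status TEXT;",
                             nullptr, nullptr, nullptr) != SQLITE_OK) {
                sqlite3_exec(db, "ROLLBACK;", nullptr, nullptr, nullptr);
                return false;
            }
            return sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr) == SQLITE_OK;
        }
        ```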

        So yeah, from your post it sounds like the company is either bad at hiring, bad at teaching new hires, or simply has a culture of “lol, who cares, someone else will fix it”. You should probably talk to management. In most cases it won’t change anything, but it’s the only way change can actually happen.

        Try to schedule a one-on-one session with your manager every 2 to 3 weeks to assess which systemic errors in the company are causing issues. Thirty-minute sessions, just to make them aware of which parts of the company need fixing.

    • @RupeThereItIs@lemmy.world · 1 point · 2 months ago

      Game development is a very specific use case, and NOT what most people think of when talking about devs vs ops.

      I’m talking enterprise software and SaaS companies, which are a MUCH larger part of the tech industry than games.

      There are a large number of devs who think public cloud as infrastructure is ALWAYS the right choice for cost and availability, for example… which in my experience is actually backwards, because legacy software and bad developers fail to understand the limitations of these platforms, namely that they’re untrustworthy by design, and outages ensue.

      In these scenarios, understanding how the code interacts with actual hardware (network, server, and storage, or their IaaS counterparts) is like black magic to most devs… They don’t get why their designs are going to fall over and sink into the swamp because of their naiveté. It works fine on their laptop, but once you deploy to prod and let customer traffic in, it becomes a smoking hole.
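
      To make “untrustworthy by design” concrete, here is a minimal sketch (an editor’s illustration, not from the comment above) of the kind of defensive wrapper cloud-targeted code needs and laptop-tested code often lacks: bounded retries with exponential backoff around an operation that can transiently fail. The names and defaults are illustrative.

      ```cpp
      #include <chrono>
      #include <functional>
      #include <thread>

      // Retry a flaky operation a bounded number of times with exponential
      // backoff. `op` returns true on success; in a cloud deployment it might
      // wrap an HTTP call, a queue publish, or a managed-database query (all
      // illustrative here).
      bool call_with_retries(const std::function<bool()>& op,
                             int max_attempts = 5,
                             std::chrono::milliseconds delay =
                                 std::chrono::milliseconds(100)) {
          for (int attempt = 1; attempt <= max_attempts; ++attempt) {
              if (op()) return true;               // success, stop retrying
              if (attempt == max_attempts) break;  // retry budget exhausted
              std::this_thread::sleep_for(delay);  // back off before the next try
              delay *= 2;                          // exponential backoff (add jitter in real code)
          }
          return false;  // caller must handle failure, not assume the platform is reliable
      }
      ```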