All our servers and company laptops went down at pretty much the same time. The laptops have been boot-looping into the Blue Screen of Death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: we’re now being told that we (almost all of whom normally work from home) need to come into the office Monday, as they can only apply the fix in person. We’ll see if that changes over the weekend…

    • Weax

      This is just blatantly incorrect - 99% of these outages are going to be fixed remotely.

      • boaratio@lemmy.world

        Not at my company. We’re all stuck in BSOD boot loops thanks to BitLocker, and our BIOS is password-protected by IT. It’s going to take them weeks to manually update all the computers, on site, one by one.

      • Saik0@lemmy.saik0.com

        Eh. This particular issue is making machines bluescreen.

        Virtualized assets? If there’s a will, there’s a way. Physical assets with REALLY nice KVMs… you can probably mount an ISO, boot into it, and remove the stupid definitions causing this shit. Everything else? Yeah… you probably need to be there physically to fix it.
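
        For what it’s worth, the widely circulated manual fix boils down to deleting the bad channel file from the CrowdStrike drivers folder once you can reach the disk (Safe Mode, WinRE, or a boot ISO). Here’s a minimal sketch of that step in Python, assuming the affected Windows volume is already unlocked (BitLocker) and mounted at a drive letter; the `D:/` mount point is a placeholder for whatever your recovery environment assigns:

        ```python
        # Minimal sketch: remove the faulty CrowdStrike channel file from a Windows
        # volume mounted by a recovery/boot environment. The "C-00000291*.sys"
        # pattern and the drivers path follow the publicly circulated remediation
        # guidance for this incident; the mount point is an assumption.
        from pathlib import Path

        def remove_bad_channel_files(mounted_volume: str = "D:/") -> list[Path]:
            """Delete channel files matching the known-bad pattern; return what was removed."""
            drivers_dir = Path(mounted_volume) / "Windows" / "System32" / "drivers" / "CrowdStrike"
            removed = []
            for channel_file in drivers_dir.glob("C-00000291*.sys"):
                channel_file.unlink()  # delete the faulty definition file
                removed.append(channel_file)
            return removed

        if __name__ == "__main__":
            for path in remove_bad_channel_files():
                print(f"Removed {path}")
        ```

        Trivial as a script, but it still assumes you can get to the disk at all, which is exactly the problem with BitLocker and locked-down BIOSes.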

        But I will note that many companies, by policy, don’t allow USB insertion… virtual or otherwise. That will make this considerably harder across the board. I agree that the majority could probably be fixed remotely, but I don’t think the “other” category is only 1%… I think there are many more systems that will require physical intervention. And more importantly… it doesn’t matter whether it’s 100% or 0.0001%: if that one system is the one that makes the company money, the percentage of the population doesn’t matter.