All our servers and company laptops went down at pretty much the same time. The laptops have been boot-looping to the blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday, as they can only apply the fix in person. We’ll see if that changes over the weekend…

  • umami_wasabi
    19 points · 9 months ago

    No one bothered to test before deploying to all machines? Nice move.

    • @huginn@feddit.it
      21 points · 9 months ago

      This outage is probably costing CrowdStrike a significant portion of its market cap. They’re an 80 billion dollar company, but this is a multibillion-dollar outage.

      Someone’s getting fired for this. A massive process failure like this means it should be some high-level managers or the CTO going out.

        • @sugar_in_your_tea@sh.itjust.works
          9 points · 9 months ago

          They’re already down ~9% today:

          https://finance.yahoo.com/quote/CRWD/

          So I think you’re late to the party for puts. Smart money IMO is on a call for a rebound at this point. Perhaps the smarter money is looking for overlooked companies that happen to be CrowdStrike customers and buying puts on them. The obvious players are airlines, but there could be a ton of smaller-cap stocks that outsource their IT to them, like regional trains and whatnot.

          Regardless, I don’t gamble w/ options, so I’m staying out. I could probably find a deal, but I have a day job to get to with nearly 100% odds of getting paid.

            • @sugar_in_your_tea@sh.itjust.works
              2 points · 9 months ago

              52-week range: 140.52 - 398.33

              They’re about where they were back in early June. If they weather this, I don’t see a reason why they wouldn’t jump back toward the all-time high they hit in late June. This isn’t a fundamental problem with the solution; it’s a hiccup that, if they can recover quickly, will be just a blip like the one in early June.

              I think it’ll get hammered a little more today, and if the response looks good over the weekend, we could see a bump next week. It all depends on how they handle this fiasco this weekend.

            • @sugar_in_your_tea@sh.itjust.works
              2 points · 9 months ago

              Nice. The first comment is basically saying, “they’re best in class, so they’re worth the premium.” And then there’s the general “you’ll probably do better by doing the opposite of /r/wallstreetbets” wisdom.

              So yeah, if I wanted to gamble, I’d be buying calls for a week or so out when everyone realizes that the recovery was relatively quick and CrowdStrike is still best in class and retained its customers. I think that’s the most likely result here. Switching is expensive for companies like this, and the alternatives aren’t nearly as good.

  • @aaaaace@lemmy.blahaj.zone
    64 points · 9 months ago

    https://www.theregister.com/ has a series of articles on what’s going on technically.

    Latest advice…

    There is a faulty channel file, so not quite an update. There is a workaround…

    1. Boot Windows into Safe Mode or the Windows Recovery Environment (WinRE).

    2. Go to C:\Windows\System32\drivers\CrowdStrike

    3. Locate and delete the file matching “C-00000291*.sys” (a rough script sketch follows these steps).

    4. Boot normally.
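
    For anyone scripting the cleanup, here’s a minimal sketch of what steps 2-3 amount to. It assumes a Python interpreter is reachable from the recovery environment, which it usually isn’t; in practice you’d just delete the file from the recovery command prompt with del. The directory and filename pattern are taken from the steps above.

    ```python
    # Remove the faulty CrowdStrike channel file(s) named in the advisory.
    # Sketch only: assumes Python is available in the recovery environment.
    import glob
    import os

    driver_dir = r"C:\Windows\System32\drivers\CrowdStrike"

    for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
        print(f"Deleting {path}")
        os.remove(path)
    ```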

  • @boaratio@lemmy.world
    91 points · 9 months ago

    CrowdStrike: It’s Friday, let’s throw it over the wall to production. See you all on Monday!

  • @StaySquared@lemmy.world
    45 points · 9 months ago

    Been at work since 5AM… finally finished deleting the C-00000291*.sys file in the CrowdStrike directory.

    182 machines total. Thankfully the process in and of itself takes about 2-3 minutes. For virtual machines, it’s a bit of a pain, at least in this org.

    lmao I feel kinda bad for those companies that have 10k+ endpoints to do this to. Eff… that. Lots of immediate short-term contract hires for that, I imagine.
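
    A rough back-of-the-envelope from those numbers (midpoint of the quoted 2-3 minutes assumed; VM overhead ignored):

    ```python
    # Hands-on time to clean 182 machines at ~2.5 minutes each.
    machines = 182
    minutes_each = 2.5  # midpoint of the quoted 2-3 minutes
    print(f"{machines * minutes_each / 60:.1f} hours")  # ~7.6 hours
    ```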

  • alphacyberranger
    33 points · 9 months ago

    One possible fix is to delete a particular file while booted into safe mode. But then they’ll need to fix each system manually. My company encrypts the disks as well, so it’s going to be an even bigger pain (for them). I’m just happy my weekend started early.

  • NaibofTabr
    81 points · 9 months ago

    Wow, I didn’t realize CrowdStrike was widespread enough to be a single point of failure for so much infrastructure. Lots of airports and hospitals offline.

    The Federal Aviation Administration (FAA) imposed the global ground stop for airlines including United, Delta, American, and Frontier.

    Flights grounded in the US.

    The System is Down

  • @Damage@feddit.it
    69 points · 9 months ago

    The thought of a local computer being unable to boot because some remote server somewhere is unavailable makes me laugh and feel sad at the same time.

    • @rxxrc@lemmy.ml (OP)
      72 points · 9 months ago

      I don’t think that’s what’s happening here. As far as I know it’s an issue with a driver installed on the computers, not with anything trying to reach out to an external server. If that were the case you’d expect it to fail to boot any time you don’t have an Internet connection.

      Windows is bad but it’s not that bad yet.

    • @Munkisquisher@lemmy.nz
      21 points · 9 months ago

      A remote server that you pay some serious money to, which then pushes a garbage driver that prevents your machine from booting

      • @Passerby6497@lemmy.world
        9 points · 9 months ago

        Not only does it (possibly) prevent booting, but it will also BSOD the machine first, so you’ll have to see how lucky you get.

        Goddamn I hate CrowdStrike. Between this and them fucking up and letting malware back into a system, I have nothing nice to say about them.

        • @Cryophilia@lemmy.world
          10 points · 9 months ago

          It BSODs on boot.

          And anything encrypted with BitLocker can’t even go into safe mode to fix it.

          • @Passerby6497@lemmy.world
            4 points · 9 months ago

            It doesn’t consistently BSOD on boot; about half of the affected machines in our environment did, but all of them experienced a BSOD while running. A good number of ours just took the bad update, BSOD’d, and came back up.

  • YTG123
    164 points · 9 months ago

    >Make a kernel-level antivirus
    >Make it proprietary
    >Don’t test updates… for some reason??

  • @Mikina@programming.dev
    20 points · 9 months ago

    I see a lot of hate ITT for kernel-level EDRs, which I wouldn’t say they deserve. Sure, for your own personal use an AV is sufficient and you don’t need an EDR, but in an organization they make a world of difference. I work in cybersecurity doing red team engagements, so my job is mostly about bypassing such solutions and crafting malware and in-network actions that avoid detection as much as possible, and ever since EDRs started getting popular, my job has gotten several leagues harder.

    The advantage of EDRs over AVs is that they can catch 0-days. An AV just looks for signatures, i.e. known pieces or snippets of malware code. An EDR, on the other hand, looks at the sequence of actions a process takes, by scanning memory, reading logs and hooking syscalls. So if, for example, you wrote an entirely custom program that allocates Read-Write-Execute memory, loads a crypto DLL, decrypts a payload into that memory, and then uses a thread-creation syscall to spawn a thread in another process that runs it, an EDR would correlate those actions and get suspicious, while to a regular AV the code would probably look fine. Some EDRs even watch network packets and can catch suspicious communication, such as port scanning, large data exfiltration, or C2 traffic.
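
    As a toy illustration of that correlation idea (nothing like a real EDR’s implementation; the event names and the rule are made up for this sketch), behaviour detection boils down to asking whether a process’s recent activity contains a suspicious sequence, in order:

    ```python
    # Toy behaviour-correlation sketch. Real EDRs hook syscalls/ETW in the
    # kernel and use far richer telemetry and scoring; this only checks
    # whether a suspicious sequence of events appears, in order.
    SUSPICIOUS_SEQUENCE = [
        "alloc_rwx_memory",      # e.g. allocation with RWX protection
        "load_crypto_dll",
        "decrypt_into_memory",
        "create_remote_thread",  # thread spawned in another process
    ]

    def contains_in_order(needle, haystack):
        """True if all events in `needle` appear in `haystack` in order."""
        it = iter(haystack)
        return all(event in it for event in needle)

    def check_process(event_trail):
        if contains_in_order(SUSPICIOUS_SEQUENCE, event_trail):
            return "ALERT: looks like in-memory payload injection"
        return "ok"

    # Each call is innocuous on its own (and unknown to a signature-based
    # AV); it's the sequence that triggers the alert.
    print(check_process([
        "open_file", "alloc_rwx_memory", "load_crypto_dll",
        "decrypt_into_memory", "create_remote_thread",
    ]))
    ```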

    Sure, in an ideal world you’d have users who never run malware and a network that’s impenetrable. But in practice you still get, on average, a few percent of people running random binaries that came from phishing attempts, or around 50% of people in your company falling for vishing attacks. Having an EDR dramatically increases your chances of stopping such an attack, and I’d say the advantage EDRs get from being kernel-level is well worth it.

    I’m not defending CrowdStrike, they did mess up, but I’d still bet that the damage they caused worldwide is nowhere near the total damage all the cyberattacks they’ve prevented would have caused. Hating on kernel-level EDRs in general isn’t warranted here, though.

    Kernel-level anti-cheat, on the other hand, can go burn in hell, and I hope something similar eventually happens to one of them. Fuck kernel-level anti-cheats.