- cross-posted to:
- sysadmin@lemmy.world
All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.
Apparently caused by a bad CrowdStrike update.
Edit: now being told we (who almost all generally work from home) need to come into the office Monday as they can only apply the fix in-person. We’ll see if that changes over the weekend…
Meanwhile Kaspersky: *wonders how such incompetent people can even make antivirus at all*
Ironic. They did what they are there to protect against. Fucking up everyone’s shit
I see a lot of hate ITT on kernel-level EDRs, which I don’t think they deserve. Sure, for personal use an AV is sufficient and you don’t need an EDR, but in an enterprise they make a world of difference. I work in cybersecurity doing red team engagements, so my job is mostly about bypassing exactly these solutions and crafting malware and in-network actions that avoid detection as much as possible, and ever since EDRs started getting popular, my job has gotten several leagues harder.
The advantage of EDRs over AVs is that they can catch 0-days. An AV just looks for signatures: known pieces or snippets of malware code. An EDR, on the other hand, looks at the sequence of actions a process takes, by scanning memory, watching logs and hooking syscalls. So if, for example, you write an entirely custom program that allocates memory as Read-Write-Execute, loads a crypto DLL, decrypts a payload into that memory, and then calls a thread-creation syscall to spawn a thread in another process that runs it, an EDR will correlate those actions and get suspicious, while to a regular AV the code would probably look fine. Some EDRs even watch network traffic and can catch suspicious communication such as port scanning, large data exfiltration, or C2 traffic.
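To make that concrete, here’s a deliberately toy sketch of the correlation idea (my own illustration, not how any real EDR engine is implemented): individually boring events become a detection once they chain up for the same process. The event names are made up for the example.

```python
# Toy behavioural correlation: flag a process once it has performed an RWX
# allocation, loaded a crypto library, and created a thread in another
# process. Event names are invented for illustration only.
from collections import defaultdict

SUSPICIOUS_CHAIN = {"alloc_rwx", "load_crypto_dll", "create_remote_thread"}

def correlate(events):
    """events: iterable of (pid, action) tuples from kernel-level telemetry."""
    seen = defaultdict(set)
    for pid, action in events:
        seen[pid].add(action)
        if SUSPICIOUS_CHAIN <= seen[pid]:
            yield pid  # this process has completed the whole suspicious chain

telemetry = [
    (1337, "alloc_rwx"),
    (1337, "load_crypto_dll"),
    (4242, "open_file"),           # benign process, never flagged
    (1337, "create_remote_thread"),
]
print(list(correlate(telemetry)))  # -> [1337]
```

A real engine correlates far richer telemetry (call arguments, parent/child process relationships, timing, network flows), but the principle is the same: actions that are harmless on their own become suspicious as a sequence.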
Sure, in an ideal world you would have users who never run malware and a network that is impenetrable. But on average you still get a few percent of people running random binaries that came from phishing attempts, or around 50% of people in your company falling for vishing attacks. Having an EDR dramatically increases your chances of stopping such an attack, and I would say the advantage that kernel-level access gives EDRs is well worth it.
I’m not defending CrowdStrike; they did mess up, even if I’d bet the damage they caused worldwide is nowhere near the total damage all the cyberattacks they’ve prevented would have caused. But hating on kernel-level EDRs in general isn’t warranted here.
Kernel-level anti-cheat, on the other hand, can go burn in hell, and I hope something similar eventually happens to one of them. Fuck kernel-level anti-cheats.
Servers on Windows? Even domain controllers can be Linux-based.
Windows moment 🤣
Been at work since 5AM… finally finished deleting the C-00000291*.sys file in the CrowdStrike directory.
182 machines total. Thankfully the process in and of itself takes about 2-3 minutes per machine. For virtual machines, it’s a bit more of a pain, at least in this org.
lmao I feel kinda bad for those companies that have 10k+ endpoints to do this to. Eff… that. Lots of immediate short-term contract hires for that, I imagine.
I agree that’s a better article, thanks for sharing
Np man. Thanks for mentioning it.
This is the best summary I could come up with:
There are reports of IT outages affecting major institutions in Australia and internationally.
The ABC is experiencing a major network outage, along with several other media outlets.
Crowd-sourced website Downdetector is listing outages for Foxtel, National Australia Bank and Bendigo Bank.
Follow our live blog as we bring you the latest updates.
The original article contains 52 words, the summary contains 52 words. Saved 0%. I’m a bot and I’m open source!
Good bot!
Huh. I guess this explains why the monitor outside of my flight gate tonight started BSoD looping. And may also explain why my flight was delayed by an additional hour and a half…
Interesting day
Xfinity’s H&I network is down, so I can’t watch Star Trek. I get a connection failure error message. Other channels work though.
My work PC is affected. Nice!
Same! Got to log off early 😎
Plot twist: you’re head of IT
Noice!
Dammit, hit us at 5pm on Friday in NZ
4:00PM here in Aus. Absolutely perfect for an early Friday knockoff.
Yep, stuck at the airport currently. All flights grounded. All major grocery store chains and banks also impacted. Bad day to be a crowdstrike employee!
My flight was canceled. Luckily that was a partner airline. My actual airline rebooked me on a direct flight. Leaves 3 hours later and arrives earlier. Lower carbon footprint. So, except that I’m standing in a queue so someone can inspect my documents, it’s basically a win for me. 😆
If these affected systems are boot looping, how will they be fixed? Reinstall?
It is possible to just rename the CrowdStrike folder in the Windows drivers directory. But for IT departments that could be more work than a reimage
Having had to fix >100 machines today, I’m not sure how a reimage would be less work. Restoring from backups maybe, but reimage and reconfig is so painful
Yes, but there are less competent people out there. The main answer for any slightly complex issue at work is ‘reimage’ - the panacea for all problems. And reconfiguring personal settings is the user’s problem.
It’s just one file to delete.
There is a fix people have found which requires manually booting into safe mode and removing the file causing the BSODs. No clue if/how they are going to implement a fix remotely when the affected machines can’t even boot.
Do you have any source on this?
It seems like it’s in like half of the news stories.
I can confirm it works after applying it to >100 servers :/
Nice work, friend. 🤝 [back pat]
If you have an account you can view the support thread here: https://supportportal.crowdstrike.com/s/article/Tech-Alert-Windows-crashes-related-to-Falcon-Sensor-2024-07-19
Workaround Steps:
- Boot Windows into Safe Mode or the Windows Recovery Environment
- Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
- Locate the file matching “C-00000291*.sys”, and delete it.
- Boot the host normally.
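For anyone scripting that deletion step across a pile of hosts, here’s a minimal sketch in Python, assuming the default install path from the steps above (in practice most people will just run del or Remove-Item from the recovery console):

```python
# Minimal sketch: delete the problematic Falcon channel file matching the
# pattern from the workaround above. Needs admin rights and a host that can
# at least reach Safe Mode; adjust DRIVER_DIR if CrowdStrike is installed
# somewhere non-default.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

for path in glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys")):
    print(f"deleting {path}")
    os.remove(path)
```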
Probably have to go old-skool and actually be at the machine.
You just need console access, which you’ll have if any of the affected servers are VMs.
Yes, VMs will be more manageable.
Exactly, and super fun when all your systems are remote!!!
It’s not super awful as long as everything is virtual. It’s annoying, but not painful like it would be for physical systems.
Really don’t envy physical/desk side support folks today…
And hope you are not using BitLocker, because then you’re screwed, since BitLocker is tied to CS.
This is going to be a Big Deal for a whole lot of people. I don’t know all the companies and industries that use Crowdstrike but I might guess it will result in airline delays, banking outages, and hospital computer systems failing. Hopefully nobody gets hurt because of it.
Big chunk of New Zealand’s banks apparently run it, cos 3 of the big ones can’t do credit card transactions right now
It was mayhem at PakNSave a bit ago.
In my experience it’s always mayhem at PakNSave.
If anything, it’s probably calmed P’n’S down a bit…
cos 3 of the big ones can’t do credit card transactions right now
Bitcoin is still up and running, perhaps people can use that
Bitcoin Cash maybe. Didn’t they bork Bitcoin (Core) so you have to wait for confirmations in the next block?
Several 911 systems were affected or completely down too