In the style of Higginbottom. Formerly staticv0id@reddit
Tactical Nuclear Penguin has entered the chat. 32%
Depends on the cloud provider. AWS, as an example, has three or more "availability zones" per region, and each zone is its own set of data centers. If the customer needs HA, they are encouraged to run their applications across separate availability zones. That means different subnets within the VPC, redundant LBs spread across those zones, and more.
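Roughly what that looks like in practice, as a rough sketch with boto3 (the VPC ID, CIDR blocks, AZ names, and LB name below are made up for illustration, not anyone's actual setup):

```python
import boto3

# Assumption: a VPC already exists; all IDs, CIDRs, and AZ names are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"

# One subnet per availability zone, so losing a single AZ doesn't take the app down.
subnet_ids = []
for cidr, az in [("10.0.1.0/24", "us-east-1a"),
                 ("10.0.2.0/24", "us-east-1b"),
                 ("10.0.3.0/24", "us-east-1c")]:
    resp = ec2.create_subnet(VpcId=VPC_ID, CidrBlock=cidr, AvailabilityZone=az)
    subnet_ids.append(resp["Subnet"]["SubnetId"])

# An application load balancer spanning all three subnets, i.e. all three zones.
elbv2.create_load_balancer(
    Name="app-lb",
    Subnets=subnet_ids,
    Type="application",
    Scheme="internet-facing",
)
```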
There is also probably DNS-based global load balancing across different regions on top of that.
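If that piece is Route 53, it might look something like this latency-based routing sketch (the hosted zone ID, domain, and LB DNS names are invented placeholders):

```python
import boto3

r53 = boto3.client("route53")

# Assumption: two regional load balancers sit behind one public name.
HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"

def add_latency_record(region, lb_dns_name):
    # Latency-based routing: Route 53 answers with the record closest to the client.
    r53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": f"app-{region}",
                    "Region": region,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": lb_dns_name}],
                },
            }]
        },
    )

add_latency_record("us-east-1", "app-lb-use1.elb.amazonaws.com")
add_latency_record("eu-west-1", "app-lb-euw1.elb.amazonaws.com")
```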
That’s just the hosting infrastructure. I’m sure Chujo works on the office LAN as well. He might wear the infosec hat also, which means he’s up to his eyeballs in firewall policy.
I don’t envy my brethren in software development orgs. Been there, done that, got that t-shirt long ago.
This is a software development business, which is a positively bananas trade no matter what's getting written. And the smaller the business, the more hats network guys wear. We work with everything from the server app down to the coffee machine fueling the devs. And 100% uptime isn't the craziest demand I've heard. I'm sure Chujo is busier than a one-armed paper hanger with jock itch.
At least he’s got money to throw at his hosting company. Scaling up would have been much slower in the old days.
IEC C13 sockets with locking C14 plugs. Already ubiquitous in data center facilities. Rated up to 250 V, so they work with any country's common household mains voltage.
In 2011 I was aghast when I learned a popular keycard / biometric system used FTP to pull down its cleartext list of acceptable keys from the server.
The username was something like ADMIN and the password was PASS.
And no, that wasn’t the FTP command; that was the password.
So I’m not surprised that there are still problems with these devices.
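For anyone who hasn't stared at an FTP session lately, here's roughly what that exchange looks like, as a hypothetical sketch with Python's ftplib (the host and filename are invented; the point is that everything crosses the wire in cleartext):

```python
from ftplib import FTP

# Hypothetical device behavior: anyone sniffing the LAN sees
# "USER ADMIN" and "PASS PASS" verbatim, plus the whole key list.
ftp = FTP("badge-controller.example.local")
ftp.set_debuglevel(1)          # print the raw protocol exchange
ftp.login(user="ADMIN", passwd="PASS")

# Pull down the plaintext list of valid keycards, just like the controller would.
with open("allowed_keys.txt", "wb") as f:
    ftp.retrbinary("RETR allowed_keys.txt", f.write)

ftp.quit()
```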
edit: more complete thought