It’s not complicated until your reputation drops for a multitude of reasons, many not even directly your fault.
Neighboring bad-acting IPs, too many automated emails sent out while you were testing, a compromised account, or pretty much any number of other things can mean everyone on your domain is hosed. And email is critical.
It looks like there are six entities listed on Blender's website, and one of them does seem to be an individual, fwiw. Here's his website: https://aras-p.info/.
The rest all seem to be corporations though - Meta, AWS, some game company I've never heard of, AMD, and Epic.
I just checked their financial report for 2022, and it looks like 50% came from patron funding (which looks like it's entirely companies like Google), 5% from Epic's grant, and 10% from corporate membership. 20% came from individuals, and the rest from miscellaneous other things like the Blender Market. If you search "Blender Foundation annual report 2022", the financial breakdown is near the end of the slides.
I think the key there is funding from big companies. There are tons of standards and the like in which big companies take part - both in terms of code and financial support. Big projects like the Rust compiler, the Linux kernel, Blender, etc. all seem to have a lot of code and money coming in from big companies. Sadly there's only so much you can get from individuals - pretty much the only success story I know of is the Wikimedia Foundation.
The point is to minimize privilege to the least possible - not to make it impossible to create higher-privileged containers. If a container doesn't need direct raw hardware access, doesn't need to manage low ports on the host network, etc., then why should I give it root and let it be able to do those things? Mapping it to a user, controlling what resources it has access to, and restricting its capabilities means that in the event my container gets compromised, my entire host isn't necessarily screwed.
We're not saying "sudo shouldn't be able to run as root," but rather that "by default, things shouldn't be run with sudo - and you need a compelling reason to swap over when you do."
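As a concrete sketch of what that looks like with Docker (the container name, uid, and image are placeholders, and a real image may need a tmpfs mount or two for its writable paths):

    # Run as an unprivileged uid:gid instead of root, drop every capability,
    # then add back only NET_BIND_SERVICE; read-only rootfs; and block
    # setuid-style privilege escalation.
    docker run -d --name myapp \
      --user 1000:1000 \
      --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
      --read-only \
      --security-opt no-new-privileges \
      myapp-image

If that container gets popped, the attacker lands in an unprivileged process with an empty capability set, not as root on the host.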
Yeah. Their docs explain the reasoning, but iirc it's that Kanidm is a security-critical resource, and it aims to not even allow any kind of insecure configuration - even on the local network. All traffic to and from Kanidm should be encrypted with TLS. I think they let you use self-signed certs though?
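From memory, the server config looks roughly like this (paths and domain are placeholders - double-check the key names against the Kanidm docs):

    # server.toml - note there is no plaintext/HTTP option at all
    bindaddress = "[::]:8443"
    tls_chain = "/data/chain.pem"   # a self-signed chain works for testing
    tls_key = "/data/key.pem"
    domain = "idm.example.com"
    origin = "https://idm.example.com:8443"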
Web 3 is different things depending on who you ask: blockchain, decentralization, or whatever else. We dunno, we aren't there yet. I personally believe federated services have a chance of being Web 3 (and blockchain is not relevant).
Web 2 is basically big tech on the internet, everything becoming centralized. Everything became easy to use for the end user, all point and click.
Web 1 was the stuff prior to that, when the internet was the wild west.
Because I associate an OS with more than just an environment. It often has several running apps, for instance, often has a GUI or shell (which many containers don't have), is concerned with some form of hardware (virtual or physical), and just… does more.
Containers by contrast are just a view into your filesystem, plus some isolation from the rest of the environment through concepts like namespaces and cgroups. All the integrations with the container host are a lot simpler (and more accurate) to think of as simply removing layers of isolation, rather than thinking of it like its own VM or OS. Capabilities just fit that model a lot better.
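You can poke at the "removing layers of isolation" idea without Docker at all, using util-linux's unshare:

    # Give a shell its own PID namespace with a fresh /proc.
    sudo unshare --fork --pid --mount-proc bash
    # Inside, ps only sees bash and itself - same kernel, same machine,
    # just one extra layer of isolation.
    ps aux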
I agree the line is iffy, since many OSes leave out a few of the things above, like RTOSes on MCUs, but I just don't think it's worth thinking of a container as its own OS considering how different it is from a "normal" Linux-based OS or VM.
I think the more intuitive model (to me) is, instead of a lightweight virtual machine or a neatly packaged-up OS, to think of it as a process shipped with an environment. That environment includes things like files and other executables (like apt), but in and of itself doesn't constitute an OS. It doesn't have its own filesystems, drivers, or anything like that. By default it doesn't run an init system like systemd either, nor does it run any applications other than the process you execute in the environment.
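That model is easy to verify, since a container's process shows up in the host's process table like any other (assuming Docker and the alpine image):

    # The container's only process is sleep.
    docker run -d --name demo alpine sleep 300
    # On the host, that same sleep is just an ordinary process.
    ps aux | grep 'sleep 300'
    # And inside the container, it's the only thing running.
    docker top demo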
For context for other readers: this is referring to NAT64. NAT64 maps the entire IPv4 address space into an IPv6 subnet (typically the well-known prefix 64:ff9b::/96). The NAT64 gateway (which has an IPv4 address) strips the IPv6 prefix, does a normal IPv4 NAT from there, and forwards the response back over v6.
This lets IPv6 hosts reach the IPv4 internet, and lets you run v6-only internally (unlike dual stack, which requires every host to have both v4 and v6).
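The mapping itself is just arithmetic - embed the 32-bit v4 address in the low bits of the /96 prefix. A quick sketch in Python:

    import ipaddress

    # Well-known NAT64 prefix (RFC 6052).
    PREFIX = ipaddress.IPv6Address("64:ff9b::")

    def to_nat64(v4: str) -> ipaddress.IPv6Address:
        # Drop the 32-bit IPv4 address into the low 32 bits of the prefix.
        return PREFIX + int(ipaddress.IPv4Address(v4))

    print(to_nat64("192.0.2.1"))  # 64:ff9b::c000:201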