moonpiedumplings
The person you replied to is probably talking about this: https://wiki.debian.org/UEFI#Force_grub-efi_installation_to_the_removable_media_path
There are a few apps that I think fit this use case really well.
LanguageTool is a spelling and grammar checker with a client-server model. LibreOffice now has built-in LanguageTool integration, where it can access a server of your choosing. I point it at the server I run locally, since Arch Linux packages LanguageTool.
Another is Stirling-PDF. It's a really good PDF manipulation program that people like, and it ships as a server with a web interface.
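If you'd rather run both as containers instead of distro packages, a compose sketch would look something like this. The image names and ports are from memory, so verify them against each project's docs:

services:
  languagetool:
    image: erikvl87/languagetool # community image; check the name before use
    ports:
      - "8010:8010" # point LibreOffice's LanguageTool setting at http://localhost:8010
  stirling-pdf:
    image: frooodle/s-pdf # image name may have changed; see the project's README
    ports:
      - "8081:8080" # web interface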
I think I have also seen socket access in Nginx Proxy Manager in some examples. I don't really know the advantages, other than that you are able to use the container names for your proxy hosts instead of IP and port.
I don't think you need socket access for this? This is what I did: https://stackoverflow.com/questions/31149501/how-to-reach-docker-containers-by-name-instead-of-ip-address#35691865
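The gist of that answer is that any user-defined Docker network gives you DNS resolution by service name, no socket access required (Compose even creates a default user-defined network for you). A minimal sketch, with hypothetical service names:

services:
  proxy:
    image: nginx # stand-in for whatever reverse proxy you run
    networks:
      - internal
  app:
    image: nginx # reachable from proxy as http://app
    networks:
      - internal
networks:
  internal: # any user-defined network provides name-based DNS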
I've seen three cases where the Docker socket gets exposed to a container (perhaps there are more, but these are the ones I've run into):
- Watchtower, which does auto updates and/or notifies people (a sketch of this pattern is below).
- Nextcloud AIO, which uses a management container that controls the Docker socket to deploy the rest of the stuff Nextcloud wants.
- Traefik, which reads the Docker socket to automatically reverse proxy services.
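In all three cases the mechanism is the same: bind-mounting /var/run/docker.sock into the container. The Watchtower case looks roughly like this (a sketch; check Watchtower's docs for the canonical compose file):

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # grants full control of the Docker daemon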
Nextcloud does the AIO because Nextcloud is a complex service, and it grows even more complex if you want more features or performance. The AIO handles deploying all the tertiary services for you, but something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . And that example docker compose doesn't even include other services like Collabora Office, the Google Docs/Sheets/Slides alternative, a web-based office.
Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But actually, many of the complexities that the Docker deploy of Nextcloud has are automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis or the database are included in the Kubernetes package as well. Instead of making you configure the containers themselves, it lets you configure the database parameters via YAML, and other nice things.
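For illustration, a values file for the nextcloud/helm chart might look something like the following. I'm writing the key names from memory, so treat them as assumptions and check the chart's values reference:

collabora:
  enabled: true # deploys Collabora Office alongside Nextcloud
postgresql:
  enabled: true # chart-managed database instead of hand-rolled containers
redis:
  enabled: true # chart-managed cache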
For case 3, Kubernetes has a feature called an "Ingress", which is essentially a standardized configuration for a reverse proxy; you can either run one separately, or one is provided as part of the package. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
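A bare-bones Ingress manifest looks something like this (the hostname, service name, and port are placeholders, not anything a particular chart mandates):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
spec:
  rules:
    - host: cloud.example.com # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud # the Service your deployment exposes
                port:
                  number: 8080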
Kubernetes handles these things pretty well, and it's part of why I switched. I do auto upgrade, but only within a service's supported stable release, where upgrades are compatible and won't break anything. This gets me automatic security updates for a period of time before I have to do a manual, potentially breaking upgrade.
TLDR: You are asking questions that Kubernetes has answers to.
Try the YAML language server by Red Hat; it comes with a Docker Compose validator.
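The usual way to activate a schema in yaml-language-server is a modeline comment at the top of the file. I believe the Compose schema lives in the compose-spec repo, but verify the URL:

# yaml-language-server: $schema=https://raw.githubusercontent.com/compose-spec/compose-spec/master/schema/compose-spec.json
services:
  web:
    image: nginx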
But in general, off the top of my head: dashes = list; no dashes = dictionary.
So this is a list:
thing:
  - 1
  - 2
And this is a dictionary:
dict:
  key1: value1
  key2: value2
And the two can be combined into a list of dictionaries:
listofdicts:
  - key1dict1: value1dict1
  - key1dict2: value1dict2
    key2dict2: value2dict2
And another thing to note is that YAML will convert things into a string. So if you have ports 8080:80, this will be read as a string, which is a clue that this is a string in a list, rather than a dictionary.
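In practice that means a Compose ports section is a list of strings, and quoting them makes that explicit and sidesteps YAML's type guessing:

services:
  web:
    image: nginx
    ports:
      - "8080:80" # a string in a list, not a key: value pair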
The Amazon Appstore had this crazy setup where you could get microtransactions in certain games without spending any real money. I must have spent over $1000 on Jetpack Joyride. I unlocked everything.
Yeah, it's called DEF CON.
I'll say it again and again: the problem is neither Linus nor Kent, but the lack of resources for independent developers to do the kind of testing that is expected of the big corporations.
Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy the real hardware at pretty absurd prices (I checked eBay, and it was $2000 for 8 GB of RAM...).
But the big corpos are different. They have these massive CI/CD systems, which automatically build and test Linux on every architecture under the sun. Then they have an extra, internal review process for these patches. And then they push.
But Linux isn't like that for independent developers. What they do is just compile the kernel on their own machine, boot into it, and if it works, it works. This is how some of the Asahi developers would do it: they would just boot into their new kernel on their Macs. And it's how I assume Overstreet does it, too. Maybe there is some minimal testing involved.
So Overstreet gets confused when he's yelled at for not having tested on big endian architectures, because where is he supposed to get a big endian machine he can afford that can actually compile the Linux kernel in less than 10 years? And even if you do buy or emulate a big endian CPU, then you'll just get hit with "yeah, your patch has issues on machines with 2 terabytes or more of RAM", and yeah.
One option is to drop standards. The Asahi developers were allowed to just merge code without being subjected to the scrutiny that Overstreet has faced. This was partly because their stuff was in Rust, under the Rust subsystem, so they had a lot more control over the parts of Linux they could merge to. The other part was being specific to MacBooks: there's no point testing Mac-specific patches on non-Mac CPUs.
But a better option is to make the testing resources these corporations use available to everybody. I think the Linux Foundation should spin up a CI/CD service, so people like Kent Overstreet can test their patches on architectures and setups they don't have at home, and get them reviewed before they are dumped to the mailing list, exactly like what happens at the corporations who contribute to the Linux kernel.
Databases are special. They often implement their own optimizations, which are faster than more general system-level ones.
For example: https://www.postgresql.org/docs/current/wal-intro.html
I didn't see much in the docs about swap, but I wouldn't be surprised if Postgres also had memory optimizations, like its own form of in-memory compression.
Your best bet is probably to ask someone who is familiar with the internals of postgres.