this post was submitted on 26 Mar 2024
104 points (97.3% liked)
Linux Gaming
you are viewing a single comment's thread
What I always wonder with things like this is: what's the downside? There must be a reason why that value was set lower.
I hastily read around in the related issue threads, and it seems like on its own vm.max_map_count doesn't do much... as long as apps behave. It's some sort of "guard rail" which prevents processes from getting too many "maps". Still kinda unclear what these maps are and what happens if a process gets to have excessive amounts of them.
That said: https://access.redhat.com/solutions/99913
So, at the risk of higher memory usage, applications can go wroom-wroom? That's my takeaway from this.
edit: ofc. I pasted the wrong link first. derrr.
edit: SUSE's documentation has some info about the effects of this setting: https://www.suse.com/support/kb/doc/?id=000016692
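To get a feel for what these "maps" actually are: on Linux, each line in a process's /proc/&lt;pid&gt;/maps file is one virtual memory area (a mapped region of the address space), and vm.max_map_count caps how many of those a single process may hold. A quick sketch that counts its own maps against the limit (Linux-only; just reads standard procfs files):

```python
# Count this process's memory maps and compare against the sysctl limit.
# Each line of /proc/self/maps is one virtual memory area (VMA).
with open("/proc/self/maps") as f:
    n_maps = sum(1 for _ in f)

# The per-process cap the thread is discussing.
with open("/proc/sys/vm/max_map_count") as f:
    limit = int(f.read())

print(f"this process uses {n_maps} maps out of a limit of {limit}")
```

An ordinary Python process sits at a few dozen to a few hundred maps, which is why most software never notices the default 65530 cap; some games under Proton apparently blow well past it.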
Just checked, and the Steam Deck has it set to 2147483642. My Gentoo systems are at 65530.
On one hand, I'd assume Valve knows what they're doing, but setting the value that high seems like it effectively removes the guardrail altogether. Is that safe? Also, what's the worst that can happen if an app starts using maps in the billions?
The OOM killer is what happens. But that can happen with the default setting as well.
No arguments there. Still, I kinda feel that raising the limit high enough to effectively turn it off is probably a bit overboard. If it works, it works, but the kernel devs presumably put the limit in place for a reason too.
The whole point is to prevent one process from using too much memory. The whole point of the Steam Deck is to have one process use all the memory.
So it makes sense to keep it relatively low for servers where runaway memory use is a bug that should crash the process, but not in a gaming scenario where high memory usage is absolutely expected.
no, it'll go vroom-vroom
My read is that it matters for servers where a large number of allocations could indicate a bug/denial of service, so it's better to crash the process.
That's not relevant on a gaming system, since you want one process to be able to use all the resources.
I changed it for playing "The Finals" some weeks ago to fix a crash. I haven't had any issues with my system since then, so it really might just be some value that never changed because nothing needed it.
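For anyone wanting to try the same thing, this is a common way to do it (the 2147483642 value is what the Steam Deck reportedly uses, per upthread; the file name under /etc/sysctl.d/ is just an example):

```shell
# Show the current limit (65530 is the long-standing kernel default)
cat /proc/sys/vm/max_map_count

# To raise it until the next reboot (run as root):
#   sysctl -w vm.max_map_count=2147483642
# To make it stick across reboots, put the setting in a sysctl config file:
#   echo 'vm.max_map_count = 2147483642' > /etc/sysctl.d/80-gaming.conf
```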
Also want to know. I increased this value by a lot for gaming and have been using it ever since with no visible repercussions.