Ephera

joined 5 years ago
[–] Ephera@lemmy.ml 13 points 1 month ago* (last edited 1 month ago)

A few years ago, I would have fully agreed with you, but trying my hand at (hobbyist) gamedev broke those rose-tinted glasses for me. It's just extremely hard to curate gameplay mechanics.

The only real way to know whether a mechanic works in your game, whether it's fun, is to implement it. That means you'll be programming for weeks and at the end of it, you might end up deciding that it actually isn't fun, so you get to rip it back out.
This is also a somewhat linear process. If you think of another mechanic at a later point, you're not going to re-evaluate all previous mechanics to see whether a different combination would've been more fun. Instead, you just decide whether this new mechanic adds fun to your mechanic-soup or distracts from it.

Point is, even as a hobbyist and idealist, with theoretically infinite time, I quickly learned to swallow my pride and appreciate when something just adds fun, whether it perfectly fits in or not. You're just not going to create the perfect game. And a game that's a sum of inconsistent, fun parts is still more fun than a coherent game that doesn't exist.

Of course, this does not mean you should include mechanics even when they're overused. That seems to be more a result of long development cycles: a game decides to include a mechanic while it's not yet overused, e.g. because a popular game just featured it, but by the time the game comes out, a whole bunch of other games that made the same decision have already shipped.

[–] Ephera@lemmy.ml 27 points 1 month ago (14 children)

I think part of it is also that it's a rather isolated feature which is fun on its own. You don't need multiple systems working together to make parrying fun. Instead, you just react at the right moment and there's your endorphins. Pretty much the hardest part about implementing it is making enemy attacks readable, which you likely need for dodge rolls, too. And then especially for AAA titles, which can't afford to experiment much, such an isolated feature is just a no-brainer to include.
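To illustrate what I mean by "react at the right moment": at its core, a parry is just a timing window check. Here's a toy sketch (made-up numbers, Python purely for illustration, not from any real engine):

```python
# Toy sketch of the core of a parry check. PARRY_WINDOW is a made-up value.
PARRY_WINDOW = 0.15  # seconds before the attack lands in which a parry counts

def resolve_attack(attack_lands_at: float, parry_pressed_at: float | None) -> str:
    """Decide what happens when an enemy attack connects."""
    if parry_pressed_at is not None and 0 <= attack_lands_at - parry_pressed_at <= PARRY_WINDOW:
        return "parried"  # pressed within the window: reward the player
    return "hit"          # too early, too late, or not pressed at all

print(resolve_attack(attack_lands_at=10.0, parry_pressed_at=9.9))  # -> "parried"
print(resolve_attack(attack_lands_at=10.0, parry_pressed_at=9.5))  # -> "hit"
```

Everything actually hard about the feature lives outside that check: telegraphing the attack, animation, feedback.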

[–] Ephera@lemmy.ml 2 points 1 month ago

OCaml has a camel with two humps. So, that's gotta be the Perl dromedary camel...

[–] Ephera@lemmy.ml 1 points 1 month ago

Any normal UI framework will unload UI elements while they're not shown. Yes, that means a CPU/memory spike is normal. But on a modern PC, that spike should amount to far less than even 1%, which is why you typically can't see it.
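As a rough illustration of what "unload when not shown" looks like (a tkinter toy of my own, not whatever framework is actually in use here):

```python
import tkinter as tk

# Toy example: the "page" subtree is built when shown and destroyed when
# hidden, so toggling it causes a small, brief CPU/memory spike while the
# widgets are torn down and rebuilt.
root = tk.Tk()
page = None

def show_page():
    global page
    page = tk.Frame(root)                          # build the subtree on demand
    tk.Label(page, text="Heavy page content").pack()
    page.pack()

def hide_page():
    global page
    if page is not None:
        page.destroy()                             # unload it while it's not shown
        page = None

tk.Button(root, text="Show", command=show_page).pack()
tk.Button(root, text="Hide", command=hide_page).pack()
root.mainloop()
```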

[–] Ephera@lemmy.ml 11 points 1 month ago

I mean, for me that does come from a place of appreciating real work. If this post had been AI-generated, I would not have cared about it at all. But the fact that they built the whole scene in Blender, that makes it cool.

[–] Ephera@lemmy.ml 19 points 1 month ago

Yeah, I considered explaining that differently, but figured it doesn't really matter for the story. 😅
It's "edge" basically in the sense that it's on-the-edge towards the physical world: https://en.wikipedia.org/wiki/Edge_computing

In our case, there are some dumb devices which wouldn't be able to talk across the internet on their own, so we put Raspberry Pis next to them to hook them up to the internet. In other words, the Raspberry Pis just push network packets through; they're not going to be crunching numbers or whatever.
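If it helps to picture it, "just pushing packets through" means something on the order of this sketch (not our actual code, and the addresses are made up):

```python
import socket
import threading

# Rough sketch of a byte-pushing relay: accept a connection from the local
# device and forward traffic to a remote endpoint, in both directions.
DEVICE_LISTEN = ("0.0.0.0", 9000)          # where the dumb device connects locally
REMOTE = ("backend.example.com", 443)      # hypothetical endpoint on the internet

def pipe(src: socket.socket, dst: socket.socket) -> None:
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def main() -> None:
    server = socket.socket()
    server.bind(DEVICE_LISTEN)
    server.listen()
    while True:
        device_conn, _ = server.accept()
        remote_conn = socket.create_connection(REMOTE)
        threading.Thread(target=pipe, args=(device_conn, remote_conn), daemon=True).start()
        threading.Thread(target=pipe, args=(remote_conn, device_conn), daemon=True).start()

if __name__ == "__main__":
    main()
```

Nothing in that kind of relay is CPU-heavy, which is why even an older Pi is overkill for it.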

[–] Ephera@lemmy.ml 56 points 1 month ago (3 children)

At $DAYJOB, we've been working on a service which uses Raspberry Pis as edge devices. And our product manager – bless him – has made sure we'd have enough hardware budget and wanted to buy only Raspberry Pi 5, so we'd have really good performance.

And I think we really befuddled him with our reaction, because, you know, normally devs won't say no to good hardware. But since our software happens to be efficient and Linux is efficient, we've just been like, eh, a Pi 3B+ is already a lot beefier than we need.
We had to explain that to him like five times before he actually started to believe it. 🙃

[–] Ephera@lemmy.ml 6 points 1 month ago

Yeah, perhaps the most fitting example here is non-vegetarian diets: Feed plants to livestock. Livestock uses up some energy for its own existence. Then feed livestock to humans.

There is a slight difference in that livestock can digest leaves, which we cannot, but on industrialized farms, they typically get fed produce anyway, to make them grow more quickly.

[–] Ephera@lemmy.ml 8 points 1 month ago

I think they meant the opposite. Extending your workday until 3 AM means you'll be at your least productive at that point. Whereas if you're coding on a passion project at 3 AM (and you're reasonably rested), then it's often the most productive time of day, because there are no distractions, nothing else you should be doing...

[–] Ephera@lemmy.ml 3 points 1 month ago

The thing is, your attempts at eliminating boilerplate can be pretty bad and take pretty long before they're actually worse than writing out the boilerplate in full.

Boilerplate code is a problem in itself. Even if it's just scaffolding, i.e. you're not duplicating logic, it still makes code harder to read and annoying to maintain.

If you are duplicating logic, then it's a maintenance nightmare. You fix a bug in one version of it, now you gotta update 14 other versions which the LLM dutifully generated with the same bug.
Or worse, it wasn't dutiful (much like a human typically isn't), so now you've got different bugs in different versions of it, as well as different fixes over time, and you quickly lose track of which version is the good one.
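To make the duplicated-logic case concrete, here's a made-up toy example (not from any real codebase):

```python
# Made-up example: the same parsing logic got generated into several handlers,
# each copy carrying the same off-by-one bug. Fixing it in one place while
# forgetting the other copies is exactly the maintenance trap.
def parse_invoice_month(raw: str) -> int:
    return int(raw[4:6]) - 1   # bug: month shifted by one (copy #1)

def parse_order_month(raw: str) -> int:
    return int(raw[4:6]) - 1   # the exact same bug (copy #2), easy to miss

# The deduplicated version: one definition, one place to fix.
def parse_month(raw: str) -> int:
    return int(raw[4:6])
```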

[–] Ephera@lemmy.ml 6 points 1 month ago

The term was coined by an OpenAI co-founder. No idea if I would call the OpenAI folks "serious", but it's not just a derogatory term like you might think.

[–] Ephera@lemmy.ml 4 points 1 month ago* (last edited 1 month ago)

Huh, framed like that, that seems like a wild statement considering he later went on to formulate his ontological "proof", which attempts to prove God's existence without relying on axioms (and in my not-so-humble opinion fails to do so, because it assumes "good" and "evil" to exist).

But from what I'm reading about his incompleteness theorems, they do seem to be a rather specific maths thing, so it would've been a big leap to then be discouraged from attempting proofs without axioms in general.
