A few months ago I fell into the optimization rabbit hole because I had a slow quant finance library to take care of. So far my most successful optimizations have been using local memory allocators (see my C++ post; I also played with mimalloc, which helped, but custom local allocators are even better) and rethinking class layouts in a more "data-oriented" way (mostly going from array-of-structs to struct-of-arrays layouts whenever it's advantageous to do so, see for example this talk).
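
The OP's library is C++, but the array-of-structs to struct-of-arrays idea translates to most languages. As a minimal sketch (in Java, since that's the language that comes up later in this thread), with made-up names, the layout change looks roughly like this:

    // AoS: a Quote[] stores every field together, so a hot loop over bids also
    // drags timestamps and asks through the cache (and, in Java, chases one
    // object reference per element).
    record Quote(long timestamp, double bid, double ask) {}

    // SoA: one contiguous primitive array per field.
    class QuoteSoA {
        final long[] timestamps;
        final double[] bids;
        final double[] asks;

        QuoteSoA(int n) {
            timestamps = new long[n];
            bids = new double[n];
            asks = new double[n];
        }

        // A scan over bids now touches only the bytes it actually needs.
        double maxBid() {
            double max = Double.NEGATIVE_INFINITY;
            for (double b : bids) {
                if (b > max) max = b;
            }
            return max;
        }
    }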

What are some of your preferred optimizations that yielded sizeable gains in speed and/or memory usage? I realize that many optimizations aren't necessarily specific to any given language so I'm asking in !programming@programming.dev.

[–] thtroyer@programming.dev 10 points 1 year ago* (last edited 1 year ago) (1 children)

I've got so many more stories about bad optimizations. I guess I'll pick one of those.

There was an infamous (and critical) internal application at a place I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It went something like this (roughly sketched in code after the list):

  • Put all the data into table x as batch y.
  • Update rows in batch y with condition a, set as type a. (just using letters as placeholders for real states)
  • Update rows in batch y that haven't been updated and have condition b, set as type b.
  • Update rows in batch y that haven't been updated and have condition c, set as type c.
  • Update rows in batch y that have condition b and c and condition d, set as type d.
  • (Repeat many, many times)
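
For concreteness, here's a hedged sketch of what that chain looked like, assuming the updates were fired from plain JDBC; the table, column, and state names are all invented:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Invented names throughout; only the shape of the logic is the point.
    class LegacyBatchClassifier {

        void classify(Connection conn, int batchId) throws SQLException {
            try (Statement st = conn.createStatement()) {
                // Each business rule is its own UPDATE, and later rules depend on
                // earlier ones having (or not having) already set record_type --
                // which is exactly why "how did this row end up with that data?"
                // is so hard to answer.
                st.executeUpdate("UPDATE records SET record_type = 'A' "
                        + "WHERE batch_id = " + batchId + " AND condition_a = 1");
                st.executeUpdate("UPDATE records SET record_type = 'B' "
                        + "WHERE batch_id = " + batchId
                        + " AND record_type IS NULL AND condition_b = 1");
                st.executeUpdate("UPDATE records SET record_type = 'C' "
                        + "WHERE batch_id = " + batchId
                        + " AND record_type IS NULL AND condition_c = 1");
                // ...and many, many more of these.
            }
        }
    }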

It was an unreadable mess. Trying to debug it was awful. Business rules encoded as a chain of SQL updates are incredibly hard to reason about. Like, how did this row end up with that data?

A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren't actually that complicated, we changed the architecture to the following (roughly sketched in code after the list):

  • Pull data row by row (instead of immediately into a database)
  • Hydrate the data into a model
  • Set up and work with the model based on the business rules we painstakingly reverse engineered (i.e. this row is type b because conditions x,y,z)
  • Insert models to database in batches
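
A hedged sketch of the new shape, again with invented names and with the reverse-engineered rules reduced to placeholder conditions:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Invented class, table, and rule names; the point is that the business rules
    // now live in one readable, testable place instead of a chain of UPDATEs.
    class RecordImporter {

        enum RecordType { A, B, C, D, UNKNOWN }

        record RecordModel(long id, RecordType type) {}

        // The business rules as plain code (placeholder logic here).
        static RecordType classify(boolean a, boolean b, boolean c, boolean d) {
            if (a) return RecordType.A;
            if (b && c && d) return RecordType.D;
            if (b) return RecordType.B;
            if (c) return RecordType.C;
            return RecordType.UNKNOWN;
        }

        void importAll(Connection conn, ResultSet source, int batchSize) throws SQLException {
            List<RecordModel> buffer = new ArrayList<>(batchSize);
            while (source.next()) {
                // Pull row by row and hydrate into a model.
                RecordType type = classify(
                        source.getBoolean("condition_a"),
                        source.getBoolean("condition_b"),
                        source.getBoolean("condition_c"),
                        source.getBoolean("condition_d"));
                buffer.add(new RecordModel(source.getLong("id"), type));
                if (buffer.size() == batchSize) {
                    insertBatch(conn, buffer);
                    buffer.clear();
                }
            }
            if (!buffer.isEmpty()) insertBatch(conn, buffer);
        }

        // Insert the hydrated models in batches.
        private void insertBatch(Connection conn, List<RecordModel> models) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO records (id, record_type) VALUES (?, ?)")) {
                for (RecordModel m : models) {
                    ps.setLong(1, m.id());
                    ps.setString(2, m.type().name());
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }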

I don't remember the exact performance impact, but it wasn't markedly faster or slower than the previous "fast" SQL-based approach. We found and fixed numerous bugs along the way, and when new issues came up they could be fixed in hours rather than days or weeks.

A few words of caution: Don't assume that building things with a certain tech or architecture will absolutely be "too slow". Always favor building things in a way that can be understood. Jumping to the wrong tool "because it's fast" is a terrible idea.


[–] halloween_spookster@lemmy.world 12 points 1 year ago* (last edited 1 year ago) (1 children)

If you think it’s slow, first prove it with a benchmark

So many crimes against maintainability are committed in the name of performance. Optimisation tears down abstractions, exposes internals, and couples tightly. If you’re choosing to shoulder that cost, ensure it is done for good reason.
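
In that spirit, even a deliberately naive timing harness is often enough to confirm or kill a hunch before any rewrite. This is just a sketch; the workload is a stand-in, and for anything serious a proper harness like JMH is the better tool:

    import java.util.concurrent.ThreadLocalRandom;

    // Naive benchmark sketch: the workload is a placeholder, and beyond a few
    // throwaway warmup rounds this ignores the subtleties a tool like JMH handles.
    class QuickBenchmark {

        static long workload(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += ThreadLocalRandom.current().nextInt(100);
            }
            return sum;
        }

        public static void main(String[] args) {
            // Warm up so the JIT has a chance to compile the hot path.
            for (int i = 0; i < 5; i++) workload(1_000_000);

            long best = Long.MAX_VALUE;
            for (int round = 0; round < 10; round++) {
                long start = System.nanoTime();
                long result = workload(1_000_000);
                long elapsed = System.nanoTime() - start;
                best = Math.min(best, elapsed);
                System.out.printf("round %d: %.2f ms (result=%d)%n", round, elapsed / 1e6, result);
            }
            System.out.printf("best: %.2f ms%n", best / 1e6);
        }
    }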

[–] thtroyer@programming.dev 5 points 1 year ago

Yep, absolutely.

In another project, I had some throwaway code where I used a naive approach that was easy to understand and validate. I assumed I'd need to replace it once we'd confirmed it was correct, because it would surely be too slow.

Turns out it wasn't a bottleneck at all. It was my first time using Java streams on relatively large volumes of data (~10k items), and they were plenty fast in this case. I could probably have optimized further, but given their simplicity and speed I ended up using them everywhere in that project.
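
For a sense of what that looks like, here's a hedged sketch of that kind of pipeline; the real project's data and grouping logic aren't known, so the record and predicate below are invented:

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    // Invented data, at roughly the scale mentioned above (~10k items).
    class StreamExample {

        record Item(int id, String category, double amount) {}

        public static void main(String[] args) {
            List<Item> items = IntStream.range(0, 10_000)
                    .mapToObj(i -> new Item(i, i % 2 == 0 ? "even" : "odd", i * 0.5))
                    .toList();

            // Readable, declarative, and plenty fast at this scale.
            Map<String, Double> totals = items.stream()
                    .filter(item -> item.amount() > 100)
                    .collect(Collectors.groupingBy(Item::category,
                            Collectors.summingDouble(Item::amount)));

            System.out.println(totals);
        }
    }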