It's difficult to answer without a better understanding of your customers' workloads and how those trigger your outages. There's a bunch of valid angles from which to look at this.
If your product consistently buckles under customer workloads that they paid to be able to run, it sounds like you have either an underprovisioning or an overcommitment problem.
If you accept customer workload spikes that you don't have the resources to serve, but that you could process if they were spread out more over time, it sounds like you have an admission control problem.
If it's a matter of adding resources to respond to customer activity spikes and you just have to do it manually, it sounds like you have an automation problem.
If your pager load is becoming such that you can't do project work to address whichever of the above apply to you, it's time to hand the pager back to the devs. If you don't have the institutional authority to do that, it sounds like you have a management problem.
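On the automation angle, a minimal sketch of what an auto-scale check could look like (the watermark, growth factor, and capacity numbers are all made up for illustration, not anyone's actual policy):

```python
# Hypothetical auto-scale check: decide whether to request more disk before the
# stop-writes watermark is reached. Thresholds and growth factor are assumptions.

SCALE_UP_AT = 0.80      # ask for more capacity once 80% full
GROWTH_FACTOR = 1.5     # grow provisioned capacity by 50% per step

def scale_decision(used_bytes: int, provisioned_bytes: int) -> int | None:
    """Return the new capacity to request, or None if there's still headroom."""
    if used_bytes / provisioned_bytes >= SCALE_UP_AT:
        return int(provisioned_bytes * GROWTH_FACTOR)
    return None

# Example: 850 GB used on a 1 TB volume -> request ~1.5 TB
print(scale_decision(850 * 10**9, 1000 * 10**9))
```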
It is definitely an underprovisioning problem, but that underprovisioning is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn't buckling. It is doing exactly the thing it was designed to do, which is to stop writes to the DB once there is no disk space left. And before it gets to that point, it's constantly throwing warnings to the end user. Usually these customers ignore those warnings until they reach the stop-writes state.
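To make that warn-then-stop-writes progression concrete, a rough sketch (the 80%/95% watermarks are hypothetical, not our product's real thresholds):

```python
# Rough sketch of the warn -> stop-writes behaviour described above.
# The 80% / 95% watermarks are illustrative only.

WARN_AT = 0.80
STOP_WRITES_AT = 0.95

def write_admission(used_bytes: int, provisioned_bytes: int) -> str:
    usage = used_bytes / provisioned_bytes
    if usage >= STOP_WRITES_AT:
        return "stop-writes: reject writes until more disk is provisioned"
    if usage >= WARN_AT:
        return "accept writes, but surface a low-disk warning to the customer"
    return "accept writes"

print(write_admission(960 * 10**9, 1000 * 10**9))  # -> stop-writes
```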
In fact, we just had to give an RCA to the C-suite detailing why we had not scaled a customer when we should have, even though we have a paper trail of them refusing the pricing and refusing to engage.
We get the same warnings, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More often, though, they are adding data at such a fast clip that not responding for two hours would put them directly into the stop-writes state.
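The projection itself is basically linear extrapolation from recent usage samples; a minimal sketch, with made-up numbers:

```python
# Minimal sketch: estimate hours until stop-writes from two usage samples.
# Purely illustrative; real growth is rarely this linear.

def hours_until_stop_writes(used_then: float, used_now: float, hours_between: float,
                            provisioned: float, stop_writes_at: float = 0.95) -> float:
    growth_per_hour = (used_now - used_then) / hours_between
    if growth_per_hour <= 0:
        return float("inf")
    remaining = provisioned * stop_writes_at - used_now
    return max(remaining, 0) / growth_per_hour

# Example: 700 GB -> 760 GB over 2 hours on a 1 TB volume: roughly 6.3 hours left
print(hours_until_stop_writes(700e9, 760e9, 2, 1000e9))
```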
This has led to us guessing where our customers are going to end up, often being completely wrong and needing to scale multiple times.
Workload spikes are the entire reason our database technology exists. That's the main thing we market ourselves as being able to handle (provided you give the DB enough disk and the workload isn't sustained long enough to fill the disks).
There is definitely an automation problem. Unfortunately, this particular line of our managed services can't really be automated. We work with special customers with special requirements, usually Fortune 100 companies that have extensive change-control processes, custom security implementations, and sometimes no access to their environment at all unless they flip a switch.
To me it all just seems to go back to management/the C-suite trying to sell a fantasy version of our product and setting us up for failure.
Ok I’m starting to understand where you’re coming from now! It sounds like the leaders are happy for humans to do the work of increasing capacity on-demand rather than tackle the engineering challenges in handling workload spikes. The priority is to appease customers who are from well-known, “impressive”, well-paying (maybe not?) companies. Does that sound sorta right?
Can you not add an ingest throttle / queue before the writes hit the DB? But it sounds like a management problem to me, honestly.
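For what it's worth, here's roughly what I mean by an ingest throttle / queue: a bounded buffer in front of the DB that drains at a fixed rate, so bursts get smoothed (or shed) instead of racing the disk. Purely a sketch; the queue size and drain rate are arbitrary:

```python
# Sketch of a bounded ingest queue with a fixed drain rate, so write bursts are
# smoothed out (or shed) instead of hitting the DB all at once. Numbers are arbitrary.

import queue
import threading
import time

ingest = queue.Queue(maxsize=10_000)   # back-pressure: producers block/shed when full
DRAIN_PER_SECOND = 500                 # writes the DB is allowed to see per second

def submit(record) -> bool:
    """Producer side: returns False if the queue is saturated (caller retries or drops)."""
    try:
        ingest.put(record, timeout=0.1)
        return True
    except queue.Full:
        return False

def drain(write_to_db):
    """Consumer side: forward at most DRAIN_PER_SECOND records per second to the DB."""
    while True:
        for _ in range(DRAIN_PER_SECOND):
            try:
                write_to_db(ingest.get_nowait())
            except queue.Empty:
                break
        time.sleep(1)

# Example wiring with a stand-in write function:
threading.Thread(target=drain, args=(print,), daemon=True).start()
```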