Buckshot

joined 2 years ago
[–] Buckshot@programming.dev 11 points 7 hours ago

I got dumped with fixing some bugs in a project written by a contractor who had literally done this but with extra steps.

Backend was SQL Server and C#/ASP.

There was an API endpoint that took JSON and used XSLT to transform it to XML, then called the stored procedure specified in the request, passing the XML as a parameter.

The stored procedure then queried the XML for its parameters, executed the query, and returned the results as XML.

Another XSLT transform converted that back to JSON and returned it to the client.

It was impressive how little C# there was.

Despite holding all the business logic, the SQL was not in source control.
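A rough sketch of the shape of it, with plain Python functions standing in for the XSLT transforms and the stored procedure (all names here are made up for illustration, not from the actual project):

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: dict) -> str:
    """Stand-in for the first XSLT step: wrap JSON fields as XML parameters."""
    root = ET.Element("request", proc=payload["proc"])
    for name, value in payload["params"].items():
        ET.SubElement(root, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def fake_stored_procedure(xml_in: str) -> str:
    """Stand-in for the stored procedure: pull the parameters out of the
    XML, 'execute' the query, and return the results as XML."""
    req = ET.fromstring(xml_in)
    params = {p.get("name"): p.text for p in req.findall("param")}
    result = ET.Element("results", proc=req.get("proc"))
    ET.SubElement(result, "row").text = f"echo:{params.get('id')}"
    return ET.tostring(result, encoding="unicode")

def xml_to_json(xml_out: str) -> str:
    """Stand-in for the second XSLT step: XML results back to JSON."""
    res = ET.fromstring(xml_out)
    return json.dumps({"proc": res.get("proc"),
                       "rows": [r.text for r in res.findall("row")]})

# The endpoint's whole job: JSON in, XML through the database, JSON out.
response = xml_to_json(fake_stored_procedure(json_to_xml(
    {"proc": "usp_GetCustomer", "params": {"id": 42}})))
print(response)  # {"proc": "usp_GetCustomer", "rows": ["echo:42"]}
```

The application layer is just plumbing; every decision lives in the stored procedure named by the caller, which is exactly why so little C# was needed.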

[–] Buckshot@programming.dev 3 points 1 week ago (1 children)

Came here to say this. Scariest encounter I had was earlier this year with a stag. He was standing on a footpath at dusk, and he was in shadow, so I didn't see him until I was 5m away. I'm 1.9m and he was looking down at me, with another 1m of antlers on top of that. Then my dog started barking and he just turned and walked away into the trees.

Same dog once tried to fight a pair of geese; she's a similar size to them, and they didn't back down.

[–] Buckshot@programming.dev 4 points 3 weeks ago

Had a 3-year-old one this week. A loop that builds a list of messages to send to a queue for another service to consume, then calls BatchPublish.

Only BatchPublish was inside the loop, so instead of sending n messages it sent 1+2+3+...+n.

We never noticed before because n was never more than 100 and the consuming service is idempotent, so the duplicate messages don't cause issues. The total is n(n+1)/2, so n=100 is 5050 messages. That's only a few minutes of work. Also that code only runs at 1am.

Recently n has been hitting 1000, which produces 500500 messages; that takes a few hours to clear and triggers an alarm for delayed processing.
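A minimal reproduction of the bug and the fix, with a hypothetical `FakeQueue` standing in for the real publisher:

```python
class FakeQueue:
    """Hypothetical stand-in for the real queue client: records every message."""
    def __init__(self):
        self.sent = []

    def batch_publish(self, messages):
        self.sent.extend(messages)

def publish_buggy(queue, items):
    batch = []
    for item in items:
        batch.append(f"msg-{item}")
        queue.batch_publish(batch)   # BUG: publish inside the loop re-sends
                                     # everything accumulated so far

def publish_fixed(queue, items):
    batch = [f"msg-{item}" for item in items]
    queue.batch_publish(batch)       # FIX: one publish after the loop

buggy, fixed = FakeQueue(), FakeQueue()
publish_buggy(buggy, range(100))
publish_fixed(fixed, range(100))
print(len(buggy.sent), len(fixed.sent))  # 5050 100
```

With n=100 the buggy version sends 1+2+...+100 = 100·101/2 = 5050 messages; an idempotent consumer quietly absorbs the duplicates, which is how it hid for years.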

[–] Buckshot@programming.dev 14 points 4 weeks ago (1 children)

Was wondering about this. I'm in the UK; I could just make my own instance, I'm the only user so I verify my own age, and federate with everyone. All good? ¯\_(ツ)_/¯

[–] Buckshot@programming.dev 20 points 1 month ago

Yeah, people trying to downplay it as "just a tweet" are also pieces of shit. The way she said it is irrelevant.

What is relevant is that while she was calling for hotels full of people to be set on fire, there were people trying to set hotels on fire.

She called for people to be murdered, others attempted it, and she got off lightly.

[–] Buckshot@programming.dev 17 points 1 month ago (2 children)

At work people think I'm some kind of wizard with git.

I tell them I've been using it every day for 15 years and that I read the freely available book on the website. I link them to it and mention that the first 3 chapters probably cover 90% of their normal usage, so they should just read that.

They won't do it. I don't get it. Something about written words is scary to them.
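For a sense of scale, the day-to-day usage those early chapters cover amounts to roughly this (a throwaway demo repo; the repo name and identity are made up):

```shell
# Everyday git, as covered by the first chapters of the Pro Git book.
git init demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"
echo "hello" > readme.txt
git add readme.txt
git commit -m "Add readme"
git switch -c feature             # create and switch to a new branch
echo "more" >> readme.txt
git commit -am "Update readme"
git switch -                      # back to the previous branch
git merge feature                 # fast-forward merge
git log --oneline                 # two commits
```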

[–] Buckshot@programming.dev 13 points 1 month ago

Yeah, there was a period about 15-20 years ago when they were good; I used to get working codes all the time.

I can't remember the last time I got one that worked.

[–] Buckshot@programming.dev 33 points 1 month ago (2 children)

Now apply this to literally everything else. There's a tech company inserting itself into every industry that worked fine without them, extracting money from both sides.

My local pizza place is 40% more expensive on takeaway apps, or I can just phone them directly.

[–] Buckshot@programming.dev 2 points 1 month ago

Not in the US; our water infrastructure was sold off in the 90s. But that makes sense, it was probably something similar. They held us to it though, so they overpaid for hardware beyond their needs and we forced the software to run slower.

[–] Buckshot@programming.dev 6 points 1 month ago (2 children)

That would make sense. I hadn't put that together, but they had a lot of embedded control systems. This was water treatment, entirely separate from the control systems, but I can see them having that as a standard requirement.

[–] Buckshot@programming.dev 32 points 1 month ago (7 children)

Did a project several years ago where the customer required that the server we delivered specifically for the project never use more than 50% CPU or RAM. There were no requirements about how fast it actually performed its intended function, just that it could only utilise half the available resources while doing it.
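One blunt way to honour a cap like that in the application itself is duty-cycle throttling: do work for half of each short period, then sleep for the other half. A hypothetical sketch (`throttled` and `work_chunk` are my names, not anything from the project):

```python
import time

def throttled(work_chunk, duty_cycle=0.5, period=0.1):
    """Run work_chunk repeatedly while keeping this thread's CPU use
    near duty_cycle: busy for duty_cycle*period seconds, then sleep
    for the rest of the period. work_chunk returns True when finished."""
    while True:
        start = time.monotonic()
        while time.monotonic() - start < duty_cycle * period:
            if work_chunk():
                return
        time.sleep((1 - duty_cycle) * period)

# Demo: count to 10,000 in small chunks without pegging the CPU.
done = {"n": 0}
def chunk():
    done["n"] += 1
    return done["n"] >= 10_000

throttled(chunk)
print(done["n"])  # 10000
```

This caps one thread at roughly 50% of one core; a real deployment would more likely enforce the limit externally (cgroups, VM sizing) than in code.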

[–] Buckshot@programming.dev 19 points 1 month ago (1 children)

This is my experience. It saves a bit of typing sometimes, but that's probably cancelled out by the time spent correcting it, rewriting the nonsense it produced, and reviewing my coworkers' PRs where they didn't notice the nonsense.


We're using Terraform to manage our AWS infrastructure, and the state itself is also in AWS. We've got 2 separate accounts for test and prod, and each has an S3 bucket with the state files for that account.

We're not setting up alternate regions for disaster recovery, and it's got me wondering: if the region the Terraform state bucket is in goes down, we won't be able to deploy anything with Terraform.

So what's the best practice for this? Should we have a bucket in every region with the state files for the projects in that region? But that doesn't work for multi-region deployments.
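One common approach (an assumption on my part, not something from this setup): keep a single versioned state bucket per account in a primary region, enable S3 cross-region replication to a standby bucket, and re-point the backend at the replica if the primary region goes down. A sketch with made-up names:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-org-tfstate-test"   # hypothetical bucket name
    key            = "app/terraform.tfstate"
    region         = "eu-west-2"             # primary state region
    encrypt        = true
    dynamodb_table = "tfstate-lock"          # state locking, name made up
  }
}
```

With versioning and cross-region replication enabled on the bucket (e.g. via `aws_s3_bucket_versioning` and `aws_s3_bucket_replication_configuration`), a regional outage leaves a replica you can point `bucket`/`region` at. State stays per-account rather than per-deployment-region, so multi-region stacks still share one backend.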
