this post was submitted on 06 Sep 2023
977 points (99.4% liked)

Technology

[–] grabyourmotherskeys@lemmy.world 304 points 1 year ago (1 children)

I haven't read the article because documentation is overhead but I'm guessing the real reason is because the guy who kept saying they needed to add more storage was repeatedly told to calm down and stop overreacting.

[–] dojan@lemmy.world 24 points 1 year ago (2 children)

Ballast!

Just plonk a large file in the storage, make it relative to however much is normally used in the span of a work week or so. Then when shit hits the fan, delete the ballast and you'll suddenly have bought a week to "find" and implement a solution. You'll be hailed as a hero, rather than be the annoying doomer that just bothers people about technical stuff that's irrelevant to the here and now.
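
The ballast trick is easy to script. A minimal sketch in Python (the file name and size here are placeholders, not anything from the article); note the file must contain real bytes, since a sparse file wouldn't actually reserve any blocks:

```python
import os

CHUNK = 1024 * 1024  # write in 1 MiB chunks so we never hold the whole file in memory

def create_ballast(path: str, size_bytes: int) -> None:
    # Write real zero bytes: a sparse file (seek/truncate) would not actually
    # consume blocks, so deleting it later would free nothing.
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(b"\0" * n)
            remaining -= n

def release_ballast(path: str) -> None:
    # When shit hits the fan, drop the ballast to buy time for a real fix.
    if os.path.exists(path):
        os.remove(path)
```

Size it to roughly a work week of normal growth, as suggested above, and deleting it buys you exactly that much runway.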

[–] lemmyvore@feddit.nl 15 points 1 year ago (1 children)

Or you could be fired because technically you're the one that caused the outage.

[–] dojan@lemmy.world 10 points 1 year ago (1 children)

Damned if you do, damned if you don't!

[–] Awkwardparticle@artemis.camp 7 points 1 year ago

The ultimate goal is having no downtime. Ballast gives you that result. The cost of downtime is far larger than the cost of wasting a bit of extra space on ballast.

[–] Semi-Hemi-Demigod@kbin.social 159 points 1 year ago (6 children)

Sysadmin pro tip: Keep a 1-10GB file of random data named DELETEME on your data drives. Then if this happens you can get some quick breathing room to fix things.

Also, set up alerts for disk space.

[–] dx1@lemmy.world 49 points 1 year ago

The real pro tip is to segregate the core system from anything that eats up disk space onto separate partitions, along with alerting, log rotation, etc. And also to avoid single points of failure in general. Hard to say exactly what went wrong with Toyota, but they probably could have planned better for it in a general way.

[–] Maximilious@kbin.social 28 points 1 year ago* (last edited 1 year ago) (1 children)

10GB is nothing in an enterprise datastore housing PBs of data. 10GB is nothing for my 80TB homelab!

[–] Semi-Hemi-Demigod@kbin.social 24 points 1 year ago (1 children)

It's not going to bring the service back online, but it will stop a full disk from blocking everything else you need to do. In some cases SSH won't even work with a full disk.

[–] GhostlyPixel@lemmy.world 27 points 1 year ago (1 children)

It’s all fun and games until tab autocomplete stops working because of disk space

[–] TrenchcoatFullofBats@belfry.rip 11 points 1 year ago

The real apocalypse

[–] Lem453@lemmy.ca 27 points 1 year ago* (last edited 1 year ago) (2 children)

Even better, cron job every 5 mins and if total remaining space falls to 5% auto delete the file and send a message to sys admin
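
That watchdog fits in a few lines. A hedged sketch (the ballast path, threshold, and notification hook are all made up for illustration), meant to be invoked from cron every five minutes:

```python
import os
import shutil

BALLAST = "/var/ballast/DELETEME"  # hypothetical location of the ballast file
THRESHOLD = 5.0                    # percent free at which we bail out

def notify(message: str) -> None:
    # Stand-in for whatever actually reaches the sysadmin (mail, Slack, pager).
    print(message)

def check_and_release(path: str = BALLAST, threshold: float = THRESHOLD) -> bool:
    """Delete the ballast and alert the admin when free space hits the threshold."""
    mount = os.path.dirname(path) or "/"
    usage = shutil.disk_usage(mount)
    pct_free = usage.free / usage.total * 100
    if pct_free <= threshold and os.path.exists(path):
        os.remove(path)
        notify(f"Ballast released on {mount}: only {pct_free:.1f}% free")
        return True
    return False

# crontab entry (illustrative):
#   */5 * * * * /usr/bin/python3 /opt/scripts/ballast_guard.py
```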

[–] Semi-Hemi-Demigod@kbin.social 20 points 1 year ago

Sends a message and gets the services ready for potential shutdown. Or implements a rate limit to keep the service available but degraded.

[–] mkhopper@lemmy.world 8 points 1 year ago

500GB, maybe.

[–] z00s@lemmy.world 8 points 1 year ago

Or make the file a little larger and wait until you're up for a promotion..

[–] Swiggles@lemmy.blahaj.zone 93 points 1 year ago (1 children)

This happens. Recently we had a problem in production where our database grew by a factor of 10 in just a few minutes due to a replication glitch. Of course it took down the whole application as we ran out of space.

Some things just happen, and all the headroom and monitoring in the world cannot save you if things go seriously wrong. You cannot prepare for everything, in life or in IT I guess. It is part of the job.

[–] RidcullyTheBrown@lemmy.world 18 points 1 year ago (2 children)

Bad things can happen, but that's why you build disaster recovery into the infrastructure. Especially with a company as big as Toyota, you can't have a single point of failure like this. They produce over 13,000 cars per day. This failure cost them close to $300,000,000 in cars alone.

[–] frododouchebaggins@lemmy.world 22 points 1 year ago (1 children)

The IT people who want to implement that disaster recovery plan don't make the purchasing decisions. It takes an event like this to get the C-suite to listen to IT staff.

[–] MoogleMaestro@kbin.social 62 points 1 year ago (2 children)

There's some irony to every tech company modeling their pipeline off Toyota's Kanban system...

Only for Toyota to completely fuck up their tech by running out of disk space for their system to exist on. Looks like someone should have put "Buy more hard drives" on the board.

[–] palitu@aussie.zone 22 points 1 year ago (2 children)

Not to mention that the lean process effed them during Fukushima and COVID: a breakdown in logistics and a shortage of chips meant their entire mode of operating shut down, since they had no capacity to deal with outages in any of their systems. Maybe that has happened again, just in server land.

[–] GamingChairModel@lemmy.world 27 points 1 year ago (1 children)

Toyota was the carmaker best positioned for the COVID chip shortage because they recognized it as a bottleneck. They were pumping out cars a few months longer than the others (even if they eventually hit the same wall everyone else did).

[–] burningmatches@feddit.uk 7 points 1 year ago (1 children)

It wasn’t just Fukushima. There was a massive flood in Thailand at the same time that shut down a load of suppliers. It was a really bad bit of luck but they did learn from that.

[–] netburnr@lemmy.world 16 points 1 year ago

It was forever ignored in the backlog.

[–] MechanicalJester@lemm.ee 61 points 1 year ago (2 children)

I blame lean philosophy. Keeping spare parts and redundancy is expensive so definitely don't do it...which is just rolling the dice until it comes up snake eyes and your plant shuts down.

It's the "save 5% yearly and stop trying to avoid a daily 5% chance of disaster"

Over prepared is silly, but so is under prepared.

They were under prepared.

[–] Ryumast3r@lemmy.world 49 points 1 year ago (2 children)

Lean philosophy is supposed to account for those dice-rolling moments. It's not just "keep nothing in inventory", there is supposed to be risk assessment involved.

The problem is that leadership doesn't interpret it that way and just sees "minimizing inventory increases profit!"

[–] IonAddis@lemmy.world 9 points 1 year ago (1 children)

The problem is that leadership doesn’t interpret it that way and just sees “minimizing inventory increases profit!”

Yep. Managers prioritize short-term gains (often personal gains, too) over the overall health of a business.

There's also industries where the "lean" strategy is inappropriate because the given industry is one that booms in times of crisis when logistics to get "just in time" supplies go kaput due to the same catastrophe that's causing the industry to boom. Hospitals and clinics can end up in trouble like this.

But there's other industries too--I haven't looked for it, but I'm sure there's a plethora of analysis already on what Covid did to companies and their supply chains.

[–] Aceticon@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (2 children)

In my own impression from the side of software engineering (i.e. the whole discipline rather than just "coding") this kind of thing is pretty common:

  • Start with ad-hoc software development: lots of confusion, redundancy, inefficient "we'll figure it out when we get there" thinking, and so on.
  • To improve on this, somebody really thinks things through and eventually a software development process emerges, something like Agile.
  • There are good reasons for every part of the process, but naturally sometimes the conditions are not met and certain parts are not suitable for use: the whole process is not, and can never be, a one-size-fits-all silver bullet, because it's way too complex and vast a discipline for that (if it weren't, you wouldn't need a process to do it with even a minimum of efficiency).
  • However, most people using it aren't the "grand thinkers" of software engineering (software-architect-level types with tons of experience, who have seen quite a lot and know why certain elements of a process are the way they are, and hence when to use them and when not to); instead they're run-of-the-mill, far more junior software designers and developers, as well as people from the management side of things trying to organise a tech-heavy process.

So you end up with what is an excellent process in the hands of people who know what each part tries to achieve, what the point of it is, and when it's actually applicable, being used instead by people who have no such experience or understanding of software development processes and just treat it as one big recipe, blindly following it and hence often using it incorrectly.

For example, you see tons of situations where the short development cycles of Agile (aka sprints) and use cases are applied without the crucial element: actually involving the end-users or stakeholders in defining the use cases, evaluating the results, and even prioritizing what to do in the next sprint. One of the crucial objectives of use cases is the discovery of requirement details through iterative cycles with end-users, who quickly see some results so their feedback can fine-tune what gets done to match what they actually need (rather than the vague, very high-level idea they themselves have at the start of the project). When that's missing, use cases are little more than small project milestones that in the old days would just be entries in Microsoft Manager or some tool like that.

This is IMHO the "problem" with any advanced systematic process in a complex domain: it's excellent in the hands of those who have enough experience and understanding of concerns at all levels to use it but they're generally either used by people without that experience (often because managers don't even recognize the value of that experience until things unexpectedly blow up) or by actual managers whose experience might be vast but is actuallly in a parallel track that's not really about dealing with the kinds of technical concerns that the process is designed to account for.

[–] I_Has_A_Hat@lemmy.ml 26 points 1 year ago (4 children)

I work in a manufacturing company that was owned by the founder for 50 years until about 4 years ago when he retired. He disagreed with a lot of the ideas behind lean manufacturing so we had like 5 years worth of inventory sitting in our warehouse.

When the new management came in, there was a lot of squawking about inefficiency, how wasteful it was to keep so much raw material on the shelf, and how we absolutely needed to sell it off or get rid of it.

Then a funny little thing happened in 2020.

Suddenly, we were the only company in our industry still churning out product. Other companies were calling us, desperate to buy our products or even just our raw material. We saw MASSIVE growth over the next two years and came out of the pandemic better than ever. And it was mostly thanks to the old owner's view that "Just In Time" manufacturing was BS.

[–] autotldr@lemmings.world 33 points 1 year ago (2 children)

This is the best summary I could come up with:


TOKYO, Sept 6 (Reuters) - A malfunction that shut down all of Toyota Motor's (7203.T) assembly plants in Japan for about a day last week occurred because some servers used to process parts orders became unavailable after maintenance procedures, the company said.

The system halt followed an error due to insufficient disk space on some of the servers and was not caused by a cyberattack, the world's largest automaker by sales said in a statement on Wednesday.

"The system was restored after the data was transferred to a server with a larger capacity," Toyota said.

The issue occurred following regular maintenance work on the servers, the company said, adding that it would review its maintenance procedures.

Two people with knowledge of the matter had told Reuters the malfunction occurred during an update of the automaker's parts ordering system.

Toyota restarted operations at its assembly plants in its home market on Wednesday last week, a day after the malfunction occurred.


The original article contains 159 words, the summary contains 159 words. Saved 0%. I'm a bot and I'm open source!

[–] Classy@sh.itjust.works 8 points 1 year ago (4 children)

I wonder what happens if the summary is longer than the original text. Negative percentages? Stack underflow?
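
The bot's exact formula isn't published, but presumably it's something like the ratio below, in which case a longer "summary" just produces a negative percentage rather than any underflow:

```python
def percent_saved(original_words: int, summary_words: int) -> float:
    # Plausible reconstruction of the bot's metric: how much shorter
    # the summary is, relative to the original.
    return (1 - summary_words / original_words) * 100

# 159 words in, 159 words out gives the "Saved 0%" seen above;
# a summary longer than the original simply goes negative.
```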

[–] dabster291@lemmy.zip 14 points 1 year ago

Wow, what a useful bot!

[–] AnUnusualRelic@lemmy.world 32 points 1 year ago (3 children)

Idiots, they ought to have switched to tabs for indenting. Everybody knows that.

[–] blazera@kbin.social 31 points 1 year ago

This is a fun read in the wake of learning about all the personal data car manufacturers have been collecting

[–] Blurrg@lemmy.world 25 points 1 year ago

Free disk space is just inventory and therefore wasteful.

[–] c0mbatbag3l@lemmy.world 21 points 1 year ago (1 children)

Was this that full shutdown everyone thought was going to be malware?

The worst malware of all, unsupervised junior sysadmins.

[–] Takina_sOldPairTM@lemmy.world 13 points 1 year ago

Human error....lol, classic.
