this post was submitted on 27 Apr 2026
195 points (98.5% liked)

Technology

I love how corpos can just change the rules at will.

Edit: New prices:

https://docs.github.com/en/copilot/reference/copilot-billing/models-and-pricing

And if you compare against the old pricing structure, some of the model prices are increasing by 27x.

top 50 comments
[–] melfie@lemmy.zip 3 points 13 minutes ago

Just as open-weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, but it can run on a Mac with an M-series SoC. I tested it out today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try llama.cpp tomorrow and mess around tweaking it for speed.

https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model

AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.
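For anyone curious what "running it locally" looks like in practice, here's a minimal sketch of querying a model served by Ollama over its local HTTP API (`/api/generate` on the default port 11434). The model tag in the commented example is a placeholder, not a confirmed name for the model above:

```python
import json
import urllib.request

# Ollama's local HTTP endpoint (default port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False asks for a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server and a pulled model; the tag below
# is a placeholder:
# print(ask_local_model("qwen3:27b", "Write a binary search in Python."))
```

Nothing leaves your machine, and the only recurring cost is electricity.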

[–] trem@lemmy.blahaj.zone 4 points 1 hour ago (1 children)

Man, they couldn't have communicated this more confusingly if they tried.

[–] PumpkinEscobar@lemmy.world 3 points 14 minutes ago

That was intentional. They worded the announcement to avoid making it sound like you're now getting 1/5 to 1/9 as much AI for the same price.

[–] halfdane@piefed.social 60 points 5 hours ago (1 children)

I mean, if AI gets too expensive, companies can always hire juniors to replace them 🤣

[–] lando55@lemmy.zip 8 points 3 hours ago (1 children)

When the junior devs get too expensive they can outsource all of their software development to Bangladesh

[–] Prox@lemmy.world 3 points 1 hour ago

When offshore-sourced code gets too shitty they can hire some senior engineers to rebuild it in a way that's compatible with the rest of their ecosystem.

[–] raspberriesareyummy@lemmy.world 2 points 2 hours ago

And nothing of value was lost.

[–] civ@lemmy.civl.cc 15 points 4 hours ago (1 children)
[–] chronicledmonocle@lemmy.world 2 points 3 hours ago

The tears are delicious

[–] UnspecificGravity@piefed.social 51 points 6 hours ago (1 children)

Gonna be hilarious when the people who haven't been paying attention realize that they just replaced workers with shit that doesn't work AND actually costs more.

[–] disorderly@lemmy.world 23 points 5 hours ago

Yep, I've been telling anyone who'll listen: if you really want to drop juniors and hand tools to seniors, then you have to pay the monthly cost (whatever it turns out to be), and you have to be ready to foot the big bill in 5 years when your seniors (with no replacements in the pipeline) say they'll take a 50% raise or walk.

[–] Fedditor385@lemmy.world 23 points 6 hours ago

This is the right way to go. It will push many companies to rethink their strategy and slow down or scale back AI adoption. Once that happens and revenue drops for many AI companies, they will back off from buying up all the RAM and storage in existence, which will drive prices down. And when prices get back to normal, we will simply buy more RAM for our local machines and run free models.

This news kinda makes me happy. Shit's starting to fall apart. Finally.

[–] eager_eagle@lemmy.world 75 points 8 hours ago* (last edited 8 hours ago) (5 children)

Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires; however, model multipliers will increase on June 1 (see table).

Holy shit, 9x the previous cost, which was already not great. I was on the fence about cancelling it, but thanks for making up my mind, MS.
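To put numbers on that: under premium-request billing, each request to a model consumes its multiplier in credits, so a fixed monthly allowance divided by a 9x multiplier buys a ninth of the requests. A quick sketch (the allowance figure below is invented for illustration; it is not GitHub's actual number):

```python
def requests_per_month(allowance: float, multiplier: float) -> float:
    # Each request burns `multiplier` premium-request credits,
    # so a fixed allowance buys allowance / multiplier requests.
    return allowance / multiplier

ALLOWANCE = 300.0  # hypothetical monthly premium-request allowance

at_1x = requests_per_month(ALLOWANCE, 1.0)  # 300 requests
at_9x = requests_per_month(ALLOWANCE, 9.0)  # ~33 requests

print(at_1x / at_9x)  # 9.0 -- same price, one ninth the usage
```

Same subscription fee, same "plan", a ninth of the work done for you.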

[–] PumpkinEscobar@lemmy.world 8 points 3 hours ago (1 children)

I love that their email made it sound like it was a wash, like "we're just changing our billing model, you'll get credits now, samesies!" But then this pricing chart, buried three levels down from the announcement, lays out just how much less you're going to get for the same price.

I wonder about all the startups who were bragging about their $10k/month AI coding bills being the best money they ever spent. When this new pricing kicks in and pushes it to $40k/month right around the time all the vibe-coded shit blows up their codebase, I wonder if they'll still be so happy with their choices.

I interviewed at a place a while back and started asking about quality and velocity and how they balance them with AI developer tools. The interviewer said something like "One of our developers closed 400 PRs last month," and I instantly knew it was definitely not the place for me.

[–] trem@lemmy.blahaj.zone 3 points 1 hour ago

So, did they use AI tools to type "LGTM" 400 times or nah?

But yeah, I also find that frustrating. Management just looks at terrible metrics like PRs closed or lines of code produced.
It's not even novel that you can produce terrible code very quickly. Decades ago, our industry learned that it isn't worth it, because you suffer for it later. Now the game is altered slightly and management demands that we throw all these learnings out the window.

[–] panda_abyss@lemmy.ca 4 points 3 hours ago* (last edited 3 hours ago)

They should really describe this as: you're on the same plan, but your plan now gives you 80-88% less usage.

[–] Rhaedas@fedia.io 22 points 7 hours ago (1 children)

That's been their business model for a while now. "Here's something you also didn't ask for"

[–] trem@lemmy.blahaj.zone 6 points 1 hour ago (1 children)

Which is a crime, by the way, when you sell it together with a product you hold a monopoly for: https://en.wikipedia.org/wiki/Tying_(commerce)

[–] FauxLiving@lemmy.world 1 points 35 minutes ago

Yeah, but on the other hand look at the ballroom they've donated to (for completely non-corrupt reasons).

[–] henfredemars@infosec.pub 66 points 7 hours ago (2 children)

Tale as old as time. Corpos try to get you dependent and then give your business an atomic wedgie.

[–] eager_eagle@lemmy.world 10 points 4 hours ago (1 children)

watch me go back to debugging like a real engineer: copying and pasting from Stack Overflow

[–] AceBonobo@lemmy.world 4 points 4 hours ago (1 children)

Stack Overflow is not what it used to be.

[–] XLE@piefed.social 35 points 7 hours ago (1 children)

The good news is that none of the companies pushing these products have created the dependency yet, and they are running out of venture capital almost too fast to have the option.

[–] henfredemars@infosec.pub 15 points 7 hours ago (1 children)

I really hope you're right. My employer is using it as a crutch. I don't think they can stop using AI because they just don't have enough skilled employees to deliver on their commitments. They would pay nearly any price, and I'm sure they're not alone.

[–] empireOfLove2@lemmy.dbzer0.com 31 points 6 hours ago (1 children)

They would pay nearly any price

Literally any price except paying skilled employees.

[–] boonhet@sopuli.xyz 7 points 6 hours ago* (last edited 6 hours ago) (1 children)

That costs actual money though

For the cost of one employee you can give 5 employees AI and tell them to work 10x faster while they have to wrestle the stupid AI.

[–] XLE@piefed.social 4 points 4 hours ago

You aren't wrong, but even if LLMs didn't exist, employers would invent a scapegoat to make the same demands of their employees.

[–] Ilixtze@lemmy.ml 15 points 6 hours ago* (last edited 6 hours ago)

Getting de-skilled is starting to look like a very expensive gamble, and this is just the first price hike; expect more to come. And expect them to criminalize open-source models as well, citing national security concerns or something.

[–] panda_abyss@lemmy.ca 18 points 7 hours ago (2 children)

Inline completions are genuinely useful, though I'm mostly replacing them with local models. They're slower, but free as in beer (once you pay the hardware cost).

[–] eager_eagle@lemmy.world 2 points 4 hours ago (1 children)

I'm interested in setting it up, are you using vs code? Which extension or editor?

[–] panda_abyss@lemmy.ca 3 points 4 hours ago* (last edited 4 hours ago)

I’m using vim with minuet-ai, and it plugs the AI suggestion into my completion module. I found the Copilot style virtual text interfaces all janky.

[–] frongt@lemmy.zip 2 points 6 hours ago (1 children)

So, not free. Just capital expense instead of operational.

[–] panda_abyss@lemmy.ca 4 points 5 hours ago

Sure, but hopefully small code-completion models (2-4B range) can run locally on a lot of hardware. They're just less good.

[–] BannedVoice@lemmy.zip 14 points 7 hours ago (2 children)

Claude is cutting back on usage policy for pro users…

GitHub CoPilot is now doing it too…

It's not hard to see the future: AI companies bought up all the RAM, creating a shortage that raises prices for all of us across the board.

Now they’re going to throttle and limit access to it behind a paywall per transaction. The future is so stupid. Can we just go back to dial up internet and IRC? It was a much simpler time.

[–] DeckPacker@piefed.social 12 points 5 hours ago

You could also just not use these services. You don't need them. They were a stupid idea to begin with.

[–] Casterial@lemmy.world 3 points 6 hours ago

It's why I only use the free versions of these lol

[–] unitedwithme@lemmy.today 19 points 8 hours ago (2 children)

Of course it is... Funny, though, because nobody really uses it at work, so our pricing should theoretically go down... But I doubt it; MS will find a way to make it go up.

[–] trem@lemmy.blahaj.zone 4 points 6 hours ago

Well, base prices stay the same. They seem to just be billing more per usage on top of that...

[–] vane@lemmy.world 10 points 7 hours ago (1 children)

All the 0x multipliers are gone from the pricing.

[–] Nighed@feddit.uk 2 points 7 hours ago

People were using free agent swarms for stuff.

[–] vala@lemmy.dbzer0.com 8 points 7 hours ago (1 children)

Well, I'm officially canceling my GH copilot sub. Wtf is this?

Been feeling like we were going to see a cheap ai compute rug pull soon.

Imagine paying for gpt-4o in 2026.

[–] calcopiritus@lemmy.world 27 points 6 hours ago

Wtf is this?

The most foreseeable event of the last 20 years.

Massive, out-of-this-world investment + no demand = prices so cheap they were operating at a huge loss

Operating at a huge loss + time = huge enshittification

Raising prices is the easiest form of enshittification. Ads are coming too. Last will be degrading features: incorporating more features that no one wants and bundling in other services that no one wants.

[–] GreenBeanMachine@lemmy.world 5 points 6 hours ago

Time to try those Chinese models. DeepSeek just released a new V4 Pro; I'm hearing great things, and it's super cheap.

[–] Nighed@feddit.uk 2 points 7 hours ago (2 children)

The almost-endless Opus usage couldn't last, to be fair. I assume the new costs are closer to the real cost of providing the service. Ouch!
