Nevoic

joined 1 year ago
[–] Nevoic@lemm.ee 7 points 3 months ago (1 children)

Sadly, defunding the space program has rarely meant funding proper welfare. It's not really an either-or situation, or at least it hasn't been yet.

[–] Nevoic@lemm.ee 1 points 3 months ago* (last edited 3 months ago) (1 children)

The weird thing about this group of fascists is that they're particularly old. Most previous fascist movements were driven by younger(ish) people, like 25-50, as opposed to the literally geriatric Trump supporters. Unless the fascists figure out how to capture the youth, this is just reactionaries getting loud before they finally die off.

[–] Nevoic@lemm.ee 2 points 3 months ago

Holy shit you're right, I'm an idiot. Thanks for helping me shift my perspective.

[–] Nevoic@lemm.ee 3 points 3 months ago (2 children)

If you didn't have an agenda or a preconceived idea you wanted proven, you'd understand that no credible scientist has ever claimed a single study proves anything.

Only people who don't understand how data works would say a single study from a single university proves anything, let alone anything about a model with billions of parameters applied to a field as broad as "programming".

I could feed GPT "programming" tasks that I know it would fail on 100% of the time, and I could feed it "programming" tasks I know it would succeed on 100% of the time. If you think LLMs have nothing to offer programmers, you have no idea how to use them. I've been successfully using GPT4T for months now, and it's been very good. It does best in environments where it can be fed compiler errors so it can fix its own output continually (if you ever looked past the headlines about GPT performance, you'd know there's a substantial difference between zero-shot and 3-shot results).
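A minimal sketch of that compile-and-retry loop, assuming a hypothetical `ask_llm` stand-in for a real chat-completion API call (the stub below just simulates a model that fixes its syntax error once it's shown the message):

```python
def ask_llm(prompt: str) -> str:
    # Stub for illustration: a real implementation would call an LLM API.
    # It simulates a model that emits broken code first, then fixes it
    # once the compiler error appears in the prompt.
    if "SyntaxError" in prompt:
        return "def add(a, b):\n    return a + b\n"
    return "def add(a, b)\n    return a + b\n"  # first attempt: missing colon

def generate_with_feedback(task: str, max_shots: int = 3) -> str:
    """Ask for code, syntax-check it, and feed any error back to the model."""
    code = ask_llm(task)                      # zero-shot attempt
    for _ in range(max_shots):                # each retry is one extra "shot"
        try:
            compile(code, "<llm>", "exec")    # syntax check only, no execution
            return code
        except SyntaxError as err:
            retry = f"{task}\nYour code failed:\nSyntaxError: {err}\nFix it."
            code = ask_llm(retry)
    return code                               # best effort after max_shots

fixed = generate_with_feedback("Write add(a, b) in Python.")
```

The gap between the zero-shot attempt and the post-feedback result is exactly the zero-shot vs 3-shot difference mentioned above.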

Bugs exist, but code heavily written by LLMs has not been shown to be any more or less buggy than code heavily written by junior devs. Our internal metrics have the two within a reasonable margin of error (senior+GPT recently beating out senior+junior, though it's been flipping back and forth), and senior+GPT tickets get done much faster. The downside is that GPT doesn't grow into a senior, while a junior does with years of training. Still, 2 years ago LLMs were at a 5th-grade coding level on average, and going from 5th grade to surpassing college level and matching junior output is a massive feat, even if some luddites like yourself refuse to accept it.

[–] Nevoic@lemm.ee 2 points 3 months ago* (last edited 3 months ago) (6 children)

In my line of work (programming) they absolutely do not have a 52% failure rate by any reasonable definition of "failure". More than 9 times out of 10 they'll produce at least junior-level code. It won't be the best code, and sometimes it'll have trivial mistakes in it, but junior developers do the same thing.

The main issue is confidence: it's essentially like having a junior developer who is way overconfident, at 1/1000th of the cost. That's extremely manageable, and June 2024 is not the end-all be-all of LLMs. Even if LLMs only got worse from here, and this is the literal peak, they would still reshape entire industries. Junior developers already can't find jobs, and with the massive reduction in junior devs we'll see a massive reduction in senior devs down the line.

In the short term the same quality of work will be done with far, far fewer programmers. In 10-20 years, if we get literally no progress in LLMs or other model architectures, then yeah, it's going to be fucked. If there is advancement to the point of replacing senior developers, then humans won't be required anyway, and we're still fucked (assuming we still live in a capitalist society). In a proper society, less work would actually be a positive for humanity, but under capitalism less work is an existential threat.

[–] Nevoic@lemm.ee 26 points 3 months ago (2 children)

Any chance you have an Nvidia card? Nvidia has long been in a worse spot on Linux than AMD, which interestingly is the inverse of Windows. A lot of AMD users complain of driver issues on Windows and swap to Nvidia as a result, and the exact opposite happens on Linux.

Nvidia is getting much better on Linux though, and Wayland explicit-sync support is coming down the pipeline. With NVK, in a couple of years the Nvidia and AMD Linux experiences could be very similar.

[–] Nevoic@lemm.ee 3 points 4 months ago* (last edited 4 months ago)

People who go out and counter-protest have given it more than a cursory thought. They know BLM isn't advocating for white genocide (okay, most of them understand this; there are some literal nazis/skinheads/white nationalists among the counter-protesters who believe in The Great Replacement, but they believed that before BLM existed).

Yet they still go out and counter-protest. It's not confusion at that point. You can't go up to an "all lives matter" reactionary, say "Hey! Did you know BLM doesn't actually want to murder all white people? Are you a fan of BLM now?", and actually expect any progress.

[–] Nevoic@lemm.ee 3 points 4 months ago (4 children)

Is your argument that a genuine, good faith interpretation of "Black Lives Matter" is "Only Black Lives Matter"?

This isn't how English works. If I say "I like your mom" to an SO, they wouldn't interpret it as me not liking them and only liking their mom. I don't have to say "I like your mom too".

[–] Nevoic@lemm.ee 1 points 4 months ago

Glad someone said this; it bothers me even with human ages. There's this perception that as you get older you simply gain knowledge, wisdom, world experience, etc. Not a lot of people account for biological limits on knowledge and memory, or for degradation from aging.

If some young intern decided to try to have sex with Biden, I think there's genuinely a conversation to be had about whether that's statutory rape. You'd need a healthcare professional to rule on whether Biden has the mental capacity to fully consent. It's similar to a drunk person: still obviously a person able to think and engage with the world, but heavily impaired and unable to fully consent as a result. Age impairs cognition too.

[–] Nevoic@lemm.ee 5 points 4 months ago

"They can't learn anything" is too reductive. Try feeding GPT4 the specification for a language that didn't exist at the time of its training, then telling it to program in that language using a library you provide.

It won't do well, but neither would a junior developer in raw vim/nano without compiler/linter feedback. It will roughly construct something that looks like the new language you fed it, despite never having been trained on it. This is something LLMs can in theory do well, so GPT5/6/etc. will do better, perhaps as well as any professional human programmer.

Their context windows have increased many times over. We're no longer operating in the 4/8k range but in the 128k-1024k range. That's enough context to, from the perspective of an observer, learn an entirely new language and framework and then write something almost usable in them. And 2024 isn't the end for context window sizes.
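As a rough illustration of that context-budget math, here's a sketch using the common ~4-characters-per-token rule of thumb (the helper names, prompt layout, and reply budget are all made up for illustration, not any real API):

```python
def approx_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English text/code.
    return max(1, len(text) // 4)

def fits_in_window(spec: str, library_docs: str, task: str,
                   window_tokens: int = 128_000,
                   reply_budget: int = 4_000) -> bool:
    """Check whether spec + docs + task leave room for the model's reply."""
    prompt = f"Language spec:\n{spec}\n\nLibrary:\n{library_docs}\n\nTask:\n{task}"
    return approx_tokens(prompt) + reply_budget <= window_tokens

# A ~400k-character spec (~100k tokens) still fits a 128k window, but
# leaves little room for library docs; a 1M-token window has ample headroom.
spec = "x" * 400_000
print(fits_in_window(spec, "", "Write hello world.", window_tokens=128_000))
print(fits_in_window(spec, "", "Write hello world.", window_tokens=1_024_000))
```

In the 4/8k era a prompt like this was simply impossible, which is why "learning" a new language in-context only became plausible recently.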

With the right tooling (e.g. feeding compiler errors back in and having the LLM reflect on how to fix them), you'd get even more reliability with just modern-day LLMs. Get something reliable enough, and it effectively does what we do by learning.

So much work in programming isn't novel. You're rarely making something truly new; you're piecing together work other people did. Even when you write an entirely new library, you're using a language someone else wrote, libraries other people wrote, an editor someone else wrote, on an OS someone else wrote. We're all standing on the shoulders of giants.

[–] Nevoic@lemm.ee 18 points 4 months ago* (last edited 4 months ago) (3 children)

18 months ago, ChatGPT didn't exist and GPT3.5 wasn't publicly available.

At that same point 18 months ago, the iPhone 14 was available. Now we have the iPhone 15.

People are used to LLMs/AI developing much faster, but you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.

Yes, they've been refining the GPT4 model for about a year now, but we've also gotten major competitors in the space that didn't exist 12 months ago. We got multimodality that didn't exist 12 months ago. Sora is mind-bogglingly realistic, and it didn't exist 12 months ago.

GPT5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over within 5 years. GPT4 already consistently outperforms the college students I help, and can often match junior developers in reliability (though with far more confidence, which is obviously problematic). I don't think people realize how big a deal that is.

[–] Nevoic@lemm.ee -1 points 4 months ago

Are you making a descriptive or normative claim in your first paragraph?
