[–] Uniquitous@lemmy.one 62 points 11 months ago (1 children)

AI is actually the king of bullshit. It might give you some code, but will it compile? Probably about as well as a knee in an AI-generated image would actually bear your weight.

[–] RandoCalrandian@kbin.social 8 points 11 months ago

And the author clearly has no idea what she’s talking about, or what the impact of AI on CS actually is.

I use ChatGPT regularly to build outlines and boilerplate for code I want to write. Yes, it’s code I can’t trust and almost entirely rewrite, but simply having it name the variables saves me time.

And even if it did get to the point of producing good code, you have to be a developer to know whether what it created is what you wanted in the first place.

This is fearmongering targeting tech people by claiming their jobs are at the same risk of disruption by AI as other white-collar jobs (like hers!).

[–] Naatan@lemmy.one 41 points 11 months ago (4 children)

We are nowhere near AI writing our software unattended. Not even close. People really overestimate the state of AI.

[–] ConsciousCode@beehaw.org 7 points 11 months ago (1 children)

I'm an AI nerd and yes, we're nowhere close. AI can write code snippets pretty well, and that'll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a one-man billion-dollar company asks his super-AI to "build the next Twitter", that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.

Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together which got pretty good results, but I wouldn't trust that without a lot more research. I've found that individual LLMs working on a given task tend to get tunnel vision, so they can easily get stuck in a loop trying the same wrong code or design repeatedly (see the sketch below).

(I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)
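
To sketch that failure mode (a toy loop; ask_llm and run_tests are hypothetical stand-ins, not any real framework's API):

```python
# Toy sketch of the tunnel-vision problem: cap the retries and feed
# test errors back so the prompt at least changes between attempts.
# ask_llm and run_tests are hypothetical stubs, not a real API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # would call a chat model here

def run_tests(code: str) -> str:
    raise NotImplementedError  # would run a test suite; "" means success

def repair_loop(spec: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        code = ask_llm(spec + feedback)
        errors = run_tests(code)
        if not errors:
            return code
        # Without fresh context the model tends to propose the same
        # broken fix again; appending the errors varies the prompt.
        feedback = f"\n\nThe previous attempt failed with:\n{errors}"
    return None  # give up instead of looping forever
```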

[–] realharo@lemm.ee 3 points 11 months ago* (last edited 11 months ago)

Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together which got pretty good results

Dude, you need to take a closer look at that paper you linked if you consider that "pretty good results". They have a GitHub repo with screenshots of some of the "products", which should give you some idea: https://github.com/OpenBMB/ChatDev/tree/main/misc

Not to mention the terrible decision-making of the fake company (a desktop app you have to download? no web/mobile version? for a virtual board game?)

(Also, the paper never even tried to prove its main hypothesis, that all this multi-agent song and dance would somehow reduce hallucinations and improve performance. There is a lot of good AI stuff coming out daily, but that particular paper, and the articles reporting on it, was pure garbage.)

[–] realharo@lemm.ee 4 points 11 months ago* (last edited 11 months ago) (1 children)

True, as of today. On the other hand, future advancements could very easily change that. On the other other hand, people were saying the same about self-driving cars 10 years ago, and while they do basically work, and are coming eventually, progress there has been a lot slower than predicted.

So who knows. Could go either way.

[–] Naatan@lemmy.one 1 points 11 months ago

It’s almost a philosophical question whether it can replace us, though. Because for it to be anything more than a tool it needs real intelligence, compassion, etc. Basically, it would need a consciousness.

I’m certain it’ll replace some jobs even without that, just because, as a tool, it’ll make us more efficient, and that efficiency will eliminate jobs. But I’m not seeing it replace or assimilate entire industries at this stage.

[–] violetsareblue@beehaw.org 3 points 11 months ago

Yea… anyone who has asked ChatGPT to help them fix a piece of code or write one would know it requires a lot of human editing and good prompting. And a lot of the time, what I was trying to accomplish still wouldn’t work.

[–] abhibeckert@beehaw.org 2 points 11 months ago* (last edited 11 months ago) (1 children)

True. But if AI makes people more productive it could make it really hard to find work. Especially if you're straight out of college with zero experience.

[–] sanzky@beehaw.org 1 points 11 months ago

Finding a job as a junior is already a bit harder than it was, because so many developers are working remotely, which is way harder to do when you are a junior developer.

[–] AaronMaria@lemmy.ml 35 points 11 months ago (1 children)

You can't write this kind of thing if you understand what a programmer does. The biggest part of the job is finding a good way to break down a problem into executable steps, not just actually writing the code.

[–] RandoCalrandian@kbin.social 14 points 11 months ago

Executable and maintainable

AI-generated code can’t, as of yet, go in and fix a bug

[–] missmystique@beehaw.org 30 points 11 months ago (2 children)

This feels like a more cogent opinion piece on AI's potential impact on programming:

ChatGPT Isn't Coming for Your Coding Job

[–] Sharpiemarker@feddit.de 20 points 11 months ago (1 children)

AI is coming for your shitty "journalism" jobs writing about AI taking your jobs.

[–] astronaut_sloth@mander.xyz 11 points 11 months ago (1 children)

This is a much better article. OP's article just shows the author's surface understanding of how coding works and how well an LLM can actually code. There's way more that goes into a programming task than just coding.

I see LLMs as having the potential to be almost like a super-library. I can prompt GPT, Claude, etc. to write me a custom function that I copy, paste, test, scrutinize, and almost certainly change (sketched below). It's a tool that will make someone a more productive programmer. It won't completely subsume a human's ability to be creative and put the pieces together.

At the absolute worst over the next decade, I could see programming changing from writing and debugging code to prompting, stitching pieces together, and debugging.
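
A minimal sketch of that test-and-scrutinize step; the slugify function here is a hypothetical stand-in for something pasted from a chatbot, not real model output:

```python
import re

# Hypothetical example: pretend this function was pasted straight
# from a chatbot's answer.
def slugify(title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Test and scrutinize before trusting it; in practice you'd almost
# certainly rewrite parts of it by hand afterwards.
assert slugify("  Hello, World!  ") == "hello-world"
assert slugify("CS---careers") == "cs-careers"
assert slugify("") == ""
```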

[–] SenorBolsa@beehaw.org 3 points 11 months ago

It's the same with CAM software in CNC: sure, if you set it up right (which is a skill in and of itself) it can spit out a decent toolpath, but there's tons of magic to be done by hand, and understanding how the underlying G-code works lets you make small changes on the fly.

[–] dark_stang@beehaw.org 30 points 11 months ago (1 children)

Stakeholders struggle to give accurate requirements most of the time; they're not gonna be programming with ChatGPT anytime soon. AI can really improve a good developer's output, though.

[–] ryannathans@aussie.zone 13 points 11 months ago (2 children)

Haven't found a use case for it yet where it doesn't shit out gift-wrapped garbage

[–] dark_stang@beehaw.org 13 points 11 months ago (1 children)

You have to give it very specific instructions and small, targeted things to do. I've used it to write a lot of Terraform; I hate writing IaC.
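
For example, one small, tightly scoped request at a time works far better than asking for a whole module (a sketch using the official openai Python package; the model name and prompt wording are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One narrow, specific task per request, not a whole architecture.
prompt = (
    "Write only a Terraform resource block, no commentary: "
    "an aws_s3_bucket named 'app_logs' plus a matching "
    "aws_s3_bucket_versioning resource that enables versioning."
)
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review before `terraform plan`
```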

[–] thesmokingman@programming.dev 1 points 11 months ago

Holy shit the speed increase with HCL is fucking nuts

[–] AbstractifyBot@beehaw.org 7 points 11 months ago (1 children)

TL;DR for the linked article


The article discusses how the rise of AI may impact computer science careers going forward. While coding jobs have long been seen as stable career paths, chatbots can now generate code in various languages. Developers are using AI tools like Copilot to accelerate routine coding tasks. Within a decade, coding bots may be able to do much more than basic tasks. However, programmers will still be needed to guide AI toward productive solutions. Teaching coding is also becoming more challenging, as students could use chatbots to cheat. Conceptual problem-solving skills will remain important for programmers to apply their expertise where AI falls short. The future may belong to those who can think entrepreneurially about how technology solves problems.

In the end, what students study may matter less than their ability to apply knowledge to technology challenges.


This comment was generated by a bot. Send comments and complaints via private message.

[–] otter@lemmy.ca 10 points 11 months ago (2 children)

However, programmers will still be needed to guide AI toward productive solutions

So the job would still be safe; they'd just be doing different work from what they do now. Same as how other advances in tech stacks mean we do things differently now than we did 30 years ago.

People are very adaptable

[–] adespoton@lemmy.ca 4 points 11 months ago (5 children)

Indeed. Do people still use emacs to code, for example?

Technologies evolve. People coding today in COBOL or Fortran are few and far between (but very well compensated).

[–] RickRussell_CA@beehaw.org 10 points 11 months ago* (last edited 11 months ago)

Do people still use emacs to code, for example?

Umm. Yes.

[–] Nyoelle@beehaw.org 4 points 11 months ago

Hell yea we do use emacs!

[–] sfera@beehaw.org 3 points 11 months ago* (last edited 11 months ago)

Do people still use emacs to code, for example?

Sure. Why wouldn't they?

[–] MostlyBlindGamer@rblind.com 3 points 11 months ago

No, all the cool kids use Vim.

[–] nickwitha_k@lemmy.sdf.org 1 points 11 months ago* (last edited 11 months ago)

Indeed. Do people still use emacs to code, for example?

Not sure if that's a serious question. Yes. They do. And many use it effectively. I use (neo)vim, though, because it works for me.

[–] anlumo@feddit.de 1 points 11 months ago

Yes, that's the key. I haven't written assembly code since the 1990s; I use higher-level abstractions to get to the goal more quickly now. AI-generated code is just yet another layer of abstraction away from machine language.

[–] mPony@kbin.social 5 points 11 months ago (1 children)

I just want to mention the clever graphic design of the illustration by Ben Kothe.

[–] miracleorange@beehaw.org 1 points 11 months ago

It's a multilayered visual pun. A visual punion, if you will.

[–] xantoxis@lemmy.one 4 points 11 months ago