this post was submitted on 15 Sep 2023
415 points (97.3% liked)
Technology
you are viewing a single comment's thread
I really hope public opinion on AI starts to change. LLMs aren't going to make anyone's life easier, except in that they take jobs away once the corporate world determines that they are in a "good-enough" state -- desensitizing people to this kind of stupid output is just one step on that trail.
The whole point is just to save the corporate world money. There will never, ever be a content advantage over a human author.
The thing is LLMs are extremely useful at aiding humans. I use one all the time at work and it has made me faster at my job, but left unchecked they do really stupid shit.
I agree they can be useful (I've found intelligent code snippet autocompletion to be great), but it's really important that the humans using the tool are very skilled and aware of the limitations of AI.
E.g., my usage generates only very, very small amounts of code (usually a few lines). I have to read those lines very carefully to make sure they are correct. It's never generating something innovative; it simply guesses what I was going to type anyway. So it only saves me time spent typing, and the AI is by no means in charge of logic. It's also wrong a lot of the time. Anyone who lets AI generate a substantial amount of code, or who lets it generate code they don't thoroughly understand, is both a fool and a danger.
It does save me time, especially on boilerplate and common constructs, but it's certainly not revolutionary and it's far too inaccurate to do the kinds of things non programmers tend to think AI can do.
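To make the point concrete, here's a hypothetical illustration (the function and names are invented for this example, not from any real assistant's output): the kind of few-line boilerplate completion described above, alongside the subtle failure mode a careful human reviewer has to catch.

```python
# Hypothetical example of a short AI-style completion and why review matters.

def mean_of_positives(values):
    """Average of the positive numbers in `values` (0.0 if there are none)."""
    positives = [v for v in values if v > 0]
    # A plausible autocompleted line here is just
    #     return sum(positives) / len(positives)
    # which crashes with ZeroDivisionError when nothing is positive --
    # exactly the "wrong a lot of the time" case the reviewer must spot.
    if not positives:
        return 0.0
    return sum(positives) / len(positives)

print(mean_of_positives([3, -1, 5]))  # 4.0
print(mean_of_positives([-2, -7]))    # 0.0
```

The fix is trivial once a human reads the suggestion carefully, which is the whole argument: the tool saves typing, not thinking.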
I'm going to fight the machines for the right to keep slaving away myself
And when I'm done, capitalism will give me an off day as a treat!
You're missing the point. If you don't have a job to "slave away" at, you don't have the money to afford food and shelter. Any changes to that situation, if they ever come, are going to lag far behind whatever events cause a mass explosion of unemployment.
It's not about licking a boot; it's that we don't want the boot to use something that should be a net good as extra weight while it steps on us.
I am not going to purposefully waste human life on tasks that machines could perform or help us be faster at just because late capitalism doesn't let me, the worker, reap the value from them.
It removes human labor
On a bigger scale we had the loom, the printing press, the steam engine, the computer. Imagine if we'd refused them.
I can't see us getting ensnared in some new dark age propelled by an "I need to keep my job" status quo just because we found ourselves with a moronic economic system that makes innovations bad news for the workers they replace.
If it takes AI taking away our livelihoods to get a chance to rework this failing doctrine so be it
I'm not talking communism; I'm barely hoping for an organic response to it, likely a UBI.
As someone who works in content marketing, this is already untrue at the current quality of LLMs. It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.
One example is writing outside of a person's subject expertise at a relatively basic level. This used to take hours or days of entirely self-directed research on a given topic, even if the ultimate article was going to be written for beginners and therefore in broad strokes. With diligent fact-checking and ChatGPT alone, the whole process, including final copy, takes maybe 4 hours.
It's also an enormously useful research tool. Rather than poring over research journals, you can ask LLMs with academic plug-ins to give a list of studies that fit very specific criteria and link to full texts. Sometimes it misfires, of course, hence the need for a good writer still, but on average this can cut hours from journalistic and review pieces without harming (often improving) quality.
All the time writers save by having AI do legwork is then time they can instead spend improving the actual prose and content of an article, post, whatever it is. The folks I know who were hired as writers because they love writing and have incredible commitment to quality are actually happier now using AI and being more "productive" because it deals mostly with the shittiest parts of writing to a deadline and leaves the rest to the human.
I'm talking about the future state. The goal is clearly to eliminate the need for human oversight altogether, and the purpose of that is to save some rich people more money. I also disagree that LLMs improve the output of good writers, but even if they did, the cost to society is high.
I'd much rather just have the human author, and I just hope that saying "we don't use AI" becomes a plus for PR due to shifting public opinion.