this post was submitted on 03 Jul 2023

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


G/O Media, a major online media company that runs publications including Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has announced that it will begin a "modest test" of AI content on its sites.

The trial will include "producing just a handful of stories for most of our sites that are basically built around lists and data," G/O Media editorial director Merrill Brown wrote. "These features aren't replacing work currently being done by writers and editors, and we hope that over time if we get these forms of content right and produced at scale, AI will, via search and promotion, help us grow our audience."

top 9 comments
[–] ConsciousCode@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

As someone working on LLM-based stuff, this is a terrible idea with current models and techniques unless they have a dedicated team of human editors to keep the AI from going off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars at the cost of a substantial drop in quality. These models can be very capable with proper prompting, but they're also inconsistent and require a lot of handholding for anything involving executive function or deliberation (like... writing an article meant to make a point). It might be possible with current models, but the field is way too new and the techniques too crude to make this work without a few million dollars in R&D, at which point that investment will probably be completely wasted anyway, since new developments come out nearly every week.

Also wait, wtf are they going to do for game reviews? RL can barely complete Minecraft (which is an astonishing development, but it's so bleeding edge it might just cut to the bone). Even if they got some ultra-high-tech multimodal multi-model AI to play a game and review it, it would need to be an artificial person (AGI + autonomy) to even approximate human sensibilities and preferences.

[–] prole@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

This is just going to get worse and worse. Corporations are going to continue to do everything they can to increase profits, and that means getting rid of human employees and replacing them (regardless of how effectively) with AI.

We are going to end up with a ton of people out of work, and zero safety net. Instead of the utopian AI future where everyone can live fulfilling lives because they no longer need to work, we end up with a massive population of people who can't afford a roof over their heads because they were laid off and replaced with AI.

The working class needs to wake the fuck up and unify against the cancer of late stage capitalism.

[–] PaupersSerenade@beehaw.org 1 points 1 year ago

Why do I feel like I stepped into r/KotakuInAction‽ This is shitty, no matter who it happens to. And EVERY news outlet puts out junk in this day and age. The vitriol for this publication seems way more than necessary.

[–] valvin@beehaw.org 1 points 1 year ago

Maybe it'll reveal that many websites don't want to give interesting news to their audience, but only want them to watch ads to make more money.

[–] storksforlegs@beehaw.org 1 points 1 year ago (2 children)

I know there are already people working on creating AI filters, to filter out spam articles and other AI-created content.

I'd pay for that, it'll be the new adblocker. Fuck any company that does this.

[–] shanghaibebop@beehaw.org 1 points 1 year ago

We really need AI content label regulations.

[–] rwhitisissle@beehaw.org 1 points 1 year ago

I know there are already people working on creating AI filters, to filter out spam articles and other AI-created content.

These will probably (ironically) be largely labeled by AI. As in, you get an AI to detect AI text and content generation and flag those websites as likely AI generated, with some kind of scaling probability index. That said, I think you could use AI to enhance human writing and that's fine. Maybe write something on your own and then have an AI restructure it or reword things for clarity, fixing grammar mistakes and other things. But full on "write me an article on [insert random thing here]" is where shit gets tedious.
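That "scaling probability index" could be sketched roughly like this. The per-page classifier here is a stand-in (a crude repetitiveness heuristic, not a trained model), just to show how per-page scores would roll up into a site-level flag:

```python
# Toy sketch: aggregate per-page AI-likelihood scores into a site-level index.
# classify_page is a stand-in; a real detector would be a trained model.

def classify_page(text: str) -> float:
    """Fake per-page classifier: probability in [0, 1] that text is
    AI-generated, approximated here by how repetitive the wording is."""
    words = text.lower().split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)
    return round(1.0 - unique_ratio, 2)  # more repetition -> higher score

def site_ai_index(pages: list[str]) -> float:
    """Average the per-page scores into one site-level probability index."""
    scores = [classify_page(p) for p in pages]
    return sum(scores) / len(scores)

pages = [
    "the best gadgets the best deals the best gadgets the best deals",
    "a reporter spoke with three engineers about the layoffs this week",
]
print(f"site AI index: {site_ai_index(pages):.2f}")
```

A real system would replace the heuristic with a proper classifier and threshold the index before flagging a site, but the aggregation shape stays the same.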

[–] Meloku@feddit.cl 0 points 1 year ago (1 children)

What all these trend-chasing CEOs fail to grasp about ChatGPT is that the neural network is trained to return what looks like a human-written answer, but it is NOT, IN ANY CASE, GOING TO RETURN INFORMATION. If you ask ChatGPT to write an essay with sources, it will write a somewhat coherent essay with what look like sources, but it's a crapshoot whether those sources are even real, because you asked for an essay with sources, not an essay USING any given source. Anyways, I'm going to heat some popcorn and wait for the inevitable fake articles and the associated debacle.

ChatGPT is an engineering marvel in that it has understood the semantics of language. However, it has absolutely no idea what it is talking about beyond generating the next token in a string of what sounds like natural language. I wish more people would understand this nuance.
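The "next token" point can be illustrated with a toy model. This is a bigram table standing in for a real LLM (nothing here resembles the actual architecture): it produces fluent-looking text purely from co-occurrence counts, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy bigram "LM": it only knows which word tends to follow which word.
corpus = ("the study cited sources . the study cited experts . "
          "the essay cited sources").split()

# Count successors for each word.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_token(word: str) -> str:
    # Greedy decoding: pick the statistically most common successor.
    return follows[word].most_common(1)[0][0]

text = ["the"]
for _ in range(3):
    text.append(next_token(text[-1]))

print(" ".join(text))  # reads like a plausible claim, but it's pure statistics
```

The output is grammatical because the counts encode grammar-shaped regularities, not because the model checked any source, which is exactly the gap between "sounds like an answer" and "is information."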