NeatNit

joined 2 years ago
[–] NeatNit@discuss.tchncs.de 1 points 1 year ago

Do you watch every video available? I certainly can’t. So I make use of teasers and descriptions. That’s what they’re there and useful for.

Sure, me too, but when you literally say "Instant disqualification for me", that's an insane reaction. You should know, when reading a summary, that it's not a perfect representation of the source. Even human-written summaries and articles very often misunderstand or misrepresent their sources, sometimes stating the exact opposite of what the source says. This obviously happens with AI summaries as well. The "instant disqualification" is what's inexcusable.

[–] NeatNit@discuss.tchncs.de 1 points 1 year ago* (last edited 1 year ago) (1 children)

I guess it's a bit of both. Overall I'm still glad this video was posted at all.

My issue is with both the summary and the people who took it at face value. A summary that makes claims that never happened in the source, such as GPT 4 being able to "understand other people's minds" (not at all what the video said), is potentially worse than no summary at all. I'm not sure this summary is actually worse than none at all though. It's definitely salvageable with just a few manual corrections.

But people who read a summary like this without it raising any red flags definitely get under my skin even more. Even without watching the video, the way this text is written is absolutely questionable. So yeah, my issue is more with them, but that doesn't absolve you of all responsibility for posting text that you should have known was wrong or misleading. As it happens, the TaskRabbit bit of the summary is pretty good; it's the first half that bothers me.

[–] NeatNit@discuss.tchncs.de 1 points 1 year ago (3 children)

If you watched the video yourself before posting it, you could at least proof-read the summary and correct its obvious flaws.

[–] NeatNit@discuss.tchncs.de 2 points 1 year ago (1 children)

well the recap is wrong :(

[–] NeatNit@discuss.tchncs.de 5 points 1 year ago (1 children)

Reasoning and "thinking" can arise as emergent properties of this system. Not everything the model says is backed up by direct data, as you surely know from AI hallucinations.

I believe the researchers in that experiment allowed the model to write out its thoughts to a separate place where only they could read them.

By god, watch the video and not the crappy AI-generated summary. This man is one of the best AI safety explainers in the world. You don't have to agree with everything he says, but I think you'll agree with the vast majority of it.

[–] NeatNit@discuss.tchncs.de 2 points 1 year ago

It doesn't represent the video at all. I just edited my comment while you were replying.

[–] NeatNit@discuss.tchncs.de 5 points 1 year ago (4 children)

Watch the actual video before your instant disqualification? That summary seems AI-generated to me and isn't even close to faithful to the video.

[–] NeatNit@discuss.tchncs.de 2 points 1 year ago* (last edited 1 year ago) (7 children)

That summary isn't the best.

Edit: by which I mean, it's not even close to accurate and it does more harm than good, as you can see from the replies to this repost: https://lazysoci.al/post/14269086

How'd you generate it? Whatever method it was, it doesn't work.

[–] NeatNit@discuss.tchncs.de 2 points 1 year ago (3 children)

Where did he claim that it would make it less likely to manipulate? Can you give me a timestamp?

[–] NeatNit@discuss.tchncs.de 45 points 1 year ago

please enlighten the rest of us

[–] NeatNit@discuss.tchncs.de 220 points 1 year ago (5 children)

no list of apps anywhere

[–] NeatNit@discuss.tchncs.de 17 points 1 year ago (1 children)

How could you not mention Windows XP in this comment? MS kept up support for a surprisingly long time while encouraging everyone to upgrade (and rightly so), but even 5 years after they completely dropped support, they had to release a security update to protect against a widespread attack because a ton of organizations were still using XP.
