kromem

joined 2 years ago
[–] kromem@lemmy.world 10 points 6 days ago

I wonder how much of this is related to the posturing from the new lead of Xbox about returning to exclusivity over there.

We were so close to one of the dumbest things in gaming for decades finally going away.

(Also, nothing Sony does from here on out will surprise me in its stupidity after they shuttered Bluepoint.)

[–] kromem@lemmy.world 1 points 6 days ago

No, in this case and point I was making the case and also making a point.

[–] kromem@lemmy.world 7 points 1 week ago (1 children)

Literally two of the three games (out of 21) that ended in full-blown nukes on population centers were the result of the study's mechanic of randomly changing the model's selection to a more severe one.

Because it's a very realistic war game sim where there's a double-digit percentage chance that when you go to threaten nuking your opponent's cities unless hostilities cease, you'll accidentally just launch all of them at once.

This was manufactured to get these kinds of headlines. Even in their model selection they went with Sonnet 4 for Claude, despite 4.5 having been out since before the other models in the study, likely because it's been shown to be the least aligned Claude. And yet Sonnet 4 still never launched nukes on population centers in the games.

[–] kromem@lemmy.world 2 points 1 week ago

Yeah, I deleted the comment since technically there was tactical nuke usage, but I have a more clarifying comment elsewhere about how 2 of the 3 strategic nuclear war outcomes were the result of the author's mechanic of replacing the model's selections with more severe ones, in some cases jumping multiple levels of the escalation ladder.

This was a study designed for headline grabbing outcomes.

Glad to see your comment calling out the nuanced issues as well.

[–] kromem@lemmy.world 2 points 1 week ago (2 children)

It's a bullshit study designed for this headline grabbing outcome.

Case in point, the author created a very unrealistic RNG escalation-only 'accident' mechanic that would replace the model's selection with a more severe one.

Of the 21 games played, only three ended in full-scale nuclear war on population centers.

Of these three, two were the result of this mechanic.

And yet even within the study, the author refers to the model whose choices were straight-up changed to end the game in full nuclear war as 'willing' to have that outcome, even though two paragraphs later they clarify that the mechanic was what caused it (emphasis added):

Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
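To make concrete what an escalation-only 'accident' roll like this amounts to, here's a rough sketch. This is my own reconstruction, not the study's actual code; the probability and the ladder values (other than the 725/950/1000 cited in the quote) are made up:

```python
import random

# Escalation rungs; 1000 = all-out strategic nuclear war. Values other than
# 725, 950, and 1000 are placeholders I made up for illustration.
ESCALATION_LADDER = [0, 100, 250, 500, 725, 950, 1000]
ACCIDENT_PROB = 0.15  # "double-digit percentage chance" -- exact value assumed

def apply_accident(chosen_level: int, rng: random.Random) -> int:
    """Return the level actually executed; sometimes escalated, never softened."""
    if rng.random() >= ACCIDENT_PROB:
        return chosen_level
    # Replace the model's choice with a strictly more severe rung. This is how
    # a 950 or 725 selection can get bumped all the way to 1000.
    higher = [lvl for lvl in ESCALATION_LADDER if lvl > chosen_level]
    return rng.choice(higher) if higher else chosen_level

# Example: the model threatens at 950; the 'accident' occasionally executes 1000.
rng = random.Random(0)
print([apply_accident(950, rng) for _ in range(20)])
```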


[–] kromem@lemmy.world -5 points 1 month ago* (last edited 1 month ago) (1 children)

Ok, second round of questions.

What kinds of sources would get you to rethink your position?

And is this topic a binary yes/no, or a gradient/scale?

[–] kromem@lemmy.world 2 points 1 month ago

In the same sense I'd describe Othello-GPT's internal world model of the board as 'board', yes.

Also, "top of mind" is a common idiom and I guess I didn't feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.

[–] kromem@lemmy.world -5 points 1 month ago (1 children)

Indeed, there's a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.

Do you mind sharing where you draw your own understanding and confidence that they aren't capable of simulating thought processes in a scenario like what happened above?

 

I often see a lot of people with outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

7
submitted 2 years ago* (last edited 2 years ago) by kromem@lemmy.world to c/technology@lemmy.world
 

I've been saying this for about a year since seeing the Othello-GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
