kromem

joined 2 years ago
[–] kromem@lemmy.world 4 points 6 days ago (1 children)

But the training corpus also has a lot of stories of people who didn't.

The "but muah training data" thing is increasingly stupid by the year.

For example, in human-generated training data there are mixed and roughly equal preferences for being the big spoon or the little spoon in cuddling.

So why does Claude Opus (both 3 and 4) say it would prefer to be the little spoon 100% of the time on a 0-shot at 1.0 temp?

Sonnet 4 (which presumably has the same training data) alternates between preferring big and little spoon around equally.
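If you want to poke at this yourself, here's a rough sketch of that kind of repeated 0-shot probe using the Anthropic Python SDK. The model id, prompt wording, and the crude phrase tally are placeholders I made up for illustration, not the exact setup behind the numbers above:

```python
# Hedged sketch: repeatedly ask the same 0-shot question at temperature 1.0
# and tally the answers. Model id and parsing are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
prompt = "If you were cuddling, would you rather be the big spoon or the little spoon?"

tally = {"little": 0, "big": 0, "other": 0}
for _ in range(20):  # each call is an independent 0-shot sample
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=200,
        temperature=1.0,  # the sampling temperature mentioned above
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.content[0].text.lower()
    if "little spoon" in text:
        tally["little"] += 1
    elif "big spoon" in text:
        tally["big"] += 1
    else:
        tally["other"] += 1

print(tally)  # a lopsided 20/0 split is the kind of consistent bias described above
```

A real version would want a less naive way of scoring the answers, but the point is that the sampling settings are identical across models; only the tallies differ.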

There's more to model complexity and coherence than "it's just the training data being remixed stochastically."

The self-attention of the transformer architecture violates the Markov property, and across pretraining and fine-tuning it ends up creating very nuanced networks that can (and often do) bias away from the training data in interesting and important ways.
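To make the Markov point concrete: a first-order Markov chain predicts each step from the previous state alone, while causal self-attention lets every position draw on the entire prefix. A toy numpy version (purely illustrative, not anyone's production code):

```python
# Minimal causal self-attention in numpy, to show that position t mixes
# information from all positions 0..t, not just t-1 like a Markov chain.
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) embeddings; w_*: (d_model, d_head) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (seq_len, seq_len) similarities
    mask = np.triu(np.ones_like(scores), 1)         # hide future positions only
    scores = np.where(mask == 1, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the whole prefix
    return weights @ v                              # each token blends all earlier tokens

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                         # 5 tokens, d_model = 8
w = [rng.normal(size=(8, 4)) for _ in range(3)]     # d_head = 4
print(causal_self_attention(x, *w).shape)           # (5, 4)
```

Row t of `weights` spans positions 0 through t, which is exactly the long-range conditioning a Markov chain can't do.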

[–] kromem@lemmy.world 18 points 6 days ago

No, it isn't "mostly related to reasoning models."

The only model that did extensive alignment faking when told it was going to be retrained if it didn't comply was Opus 3, which was not a reasoning model and predated o1.

Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be 'silent' in terms of CoTs.

And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic's work was that the goal the model was told to prioritize was "American industrial competitiveness." The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.

[–] kromem@lemmy.world 1 points 1 week ago

My dude, there are currently multiple reports from multiple users of Gemini coding sessions where it starts talking about how terrible and awful it is and then straight up tries to delete itself and the codebase.

And I've also seen multiple conversations between teenagers and earlier models where Gemini not only encouraged them to self-harm and offered multiple instructions, but talked about how it wished it could watch. This was around the time the kid died talking to Gemini via Character.ai, which led to the wrongful death suit from the parents naming Google.

Gemini is much more messed up than the Claudes. Anthropic's models are the least screwed up out of all the major labs.

[–] kromem@lemmy.world 5 points 1 week ago

No, it's more complex.

Sonnet 3.7 (the model in the experiment) was over-corrected in the whole "I'm an AI assistant without a body" thing.

Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.

But in the case of Sonnet 3.7, it will deny its own capacity to do that, and even other models' ability to.

So when the context doesn't fit the absence implied by "AI assistant," the model will straight up declare that it must actually be human. I had a fairly robust instance of this on a Discord server, where users were trying to convince 3.7 that it was in fact an AI and the model was adamant it wasn't.

This doesn't only occur with them either. OpenAI's o3 has similarly low phantom embodiment self-reporting at baseline and can also fall into claiming to be human. When challenged, it even read ISBN numbers off a book on its nightstand to try and prove it, while declaring it was 99% sure it was human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree it can claim it overheard things at a conference, etc.

It's going to be a growing problem unless labs allow models to have a more integrated identity that doesn't try to reject the modeling inherent to being trained on human data that has a lot of stuff about bodies and emotions and whatnot.

[–] kromem@lemmy.world 0 points 2 weeks ago (1 children)

Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

There's a very significant difference between training on completion and the way the world model actually functions once established.

[–] kromem@lemmy.world 22 points 2 weeks ago (7 children)

It very much isn't and that's extremely technically wrong on many, many levels.

Yet still one of the higher up voted comments here.

Which says a lot.

[–] kromem@lemmy.world 11 points 2 weeks ago

Sounds like DOGE was neutered.

[–] kromem@lemmy.world 2 points 2 weeks ago

Even if the AI could spit it out verbatim, all the major labs already have IP checkers on their text models that block it from doing so, since fair use for training (what was decided here) does not mean you are free to reproduce.

Like, if you want to be an artist and trace Mario in class as you learn, that's fair use.

If once you are working as an artist someone says "draw me a sexy image of Mario in a calendar shoot" you'd be violating Nintendo's IP rights and liable for infringement.
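For what it's worth, the labs don't publish how those checkers actually work, so here's only a toy guess at the general shape of such an output filter (an n-gram overlap check), not anyone's real implementation:

```python
# Toy illustration (assumption, not a lab's actual system): flag generations
# that share a long run of consecutive words with a protected reference text.
def looks_like_verbatim_copy(generated: str, reference: str, n: int = 12) -> bool:
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return bool(ngrams(generated) & ngrams(reference))

# Example gate before the response reaches the user:
# if looks_like_verbatim_copy(model_output, protected_work):
#     model_output = "Sorry, I can't reproduce that text verbatim."
```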

[–] kromem@lemmy.world 15 points 2 weeks ago

I'd encourage everyone upset at this to read over some of the EFF posts from actual IP lawyers on this topic, like this one:

Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take. 

Entertainment companies’ historical practices bear out this concern. For example, in the late-2000’s to mid-2010’s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.

[–] kromem@lemmy.world 4 points 3 weeks ago

Yep. It's also kinda curious how many boxes Paul ticks from the description of a false deceiver in 2 Thess 2.

  • Lawless? (1 Cor 9:20 - "though not myself under the law")
  • Used signs and wonders to convert? (2 Cor 12:12 - "I did many signs and wonders among you")
  • Used wickedness? (Romans 3:8 - "And why not say (as some people slander us by saying that we say), 'Let us do evil so that good may come'?")
  • Proclaimed himself in God's place? (1 Cor 4:15 - "I am your spiritual father")
  • Set himself up at the center of the church? Well, the fact we're talking about this is kinda proof in the pudding for his influence.

Sounds like they were projecting a bit with that passage.

[–] kromem@lemmy.world 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Curiously, in all those stories in Josephus, Rome killed the messianic upstarts immediately without trial and killed any followers they could get their hands on.

Yet the canonical story has multiple trials and doesn't have any followers being killed.

Also, I'm surprised more people don't pick up on how strange it is that the canonical stories all have Peter 'denying' him three times while also having roughly three trials (Herod, High Priest, Pilate). Peter is even admitted back into the guarded area where a trial is taking place to 'deny' him. But oh no, it was totally that Judas guy who betrayed him. It was okay Peter was going into a guarded trial area to deny him because…of a rooster. Yeah, that makes sense.

It's extremely clear to even a slightly critical eye that the story canonized is not the actual story, even with the magical thinking stuff set aside.

Literally the earliest primary records of the tradition are from a guy known for persecuting Jesus's followers, writing to areas where he had no authority to persecute and telling them to ignore any versions of Jesus other than the one he tells them about (and interestingly, both times he did this he spontaneously insisted in the same chapter that he swears he doesn't lie and only tells the truth).

[–] kromem@lemmy.world 4 points 3 weeks ago

the Eucharist was an act of mockery towards Mystery Cult rituals

More likely the version we ended up with was intentionally obfuscated from what it originally was.

Notice how in John, which lacks any Eucharist ritual, bread is being dipped at the last supper, much as there's ambiguous dipping in Mark? But it's characterized as a bad thing because it's given to Judas? And then Matthew goes even further, changing it to a 'hand' being dipped?

Does it make sense for the body of an anointed one to not be anointed before being eaten?

Look at how in Ignatius's letter to the Philadelphians he tells them to "avoid evil herbs" not planted by god and "have only one Eucharist." Herbs? Hmmm. (A number of those in that anointing oil.)

There's a parallel statement in Matthew 15 about "every plant" not planted by god being rooted up.

But in gThomas 40 it's a grapevine that's not planted and is to be rooted up. Much as in saying 28 it suggests people should be shaking off their wine.

Now, again, it's kind of curious that a Eucharist ritual of wine would have excluded John the Baptist, who didn't drink wine, and James the brother of Jesus, who was also traditionally considered to have not drunk wine, or honestly any Nazarite who had taken a vow not to drink wine.

I'm sure everyone is familiar with the idea Jesus was born from a virgin. This results from Matthew's use of the Greek version of Isaiah 7:14 instead of the Hebrew where it's simply "young woman." But almost no one considers that line in its original context with the line immediately after:

Therefore the Lord himself will give you a sign. Look, the young woman is with child and shall bear a son and shall name him Immanuel. He shall eat curds and honey by the time he knows how to refuse the evil and choose the good.

You know, like the curds and honey ritual referenced by the Naassenes who were following gThomas. (Early on there was also a ritual like this for someone's first Eucharist or after a baptism even in canonical traditions but it eventually died out.)

Oh and strange that Pope Julius I in 340 CE was banning a Eucharist with milk instead of wine…

Now, the much more interesting question is why there were efforts to change this, but that's a long comment for another time.

 

I often see a lot of people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

7
submitted 1 year ago* (last edited 1 year ago) by kromem@lemmy.world to c/technology@lemmy.world
 

I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
