HorseRabbit

joined 1 year ago
[–] HorseRabbit@lemmy.sdf.org 3 points 1 day ago

Yeah remember when he met Tuco that one time and used exploding meth?

[–] HorseRabbit@lemmy.sdf.org 4 points 3 weeks ago

Currently it's just a Lemmy client. It'll be cool to watch its development, but at present I don't see how it's any better than Voyager; at least on mobile the interface has a lot more dead space.

[–] HorseRabbit@lemmy.sdf.org 1 points 3 weeks ago (1 children)

Is that why people that smoke every day act like fucking children?

[–] HorseRabbit@lemmy.sdf.org 29 points 1 month ago

Man that last paragraph is kind of a train wreck isn't it?

[–] HorseRabbit@lemmy.sdf.org 1 points 1 month ago (1 children)

Militants like the Taliban?

[–] HorseRabbit@lemmy.sdf.org 3 points 4 months ago

Maybe I misunderstood the OP? Idk

[–] HorseRabbit@lemmy.sdf.org 13 points 4 months ago* (last edited 4 months ago)

People sometimes act like the models can only reproduce their training data, which is what I'm saying is wrong. They do generalise.

During training the models learn to predict the next word, but at inference time the network is effectively interpolating between the training examples it has memorised. This interpolation doesn't happen in text space, though, but in a very high-dimensional abstract semantic representation space, a 'concept space'.

Now imagine that you have memorised two paragraphs that occupy two points in concept space. And then you interpolate between them. This gives you a new point, potentially unseen during training, a new concept, that is in some ways analogous to the two paragraphs you memorised, but still fundamentally different, and potentially novel.
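To make that concrete, here's a toy sketch of linear interpolation in a made-up 4-dimensional "concept space" (the vectors are invented for illustration, not real model embeddings, which have thousands of dimensions):

```python
def interpolate(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b, 0<t<1 gives a blend."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# Pretend these are the embeddings of two memorised paragraphs.
para_a = [0.9, 0.1, 0.3, 0.7]
para_b = [0.2, 0.8, 0.6, 0.1]

# The halfway point between the two memorised concepts: a vector the model
# never saw during training, yet still a valid location in the same space.
novel = interpolate(para_a, para_b, 0.5)
print(novel)
```

The point is just that a blend of two memorised points is itself a new point, which is why "can only reproduce training data" doesn't follow from "was trained on that data".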

[–] HorseRabbit@lemmy.sdf.org 16 points 4 months ago (6 children)

Not an ELI5, sorry. I'm an AI PhD, and I want to push back against the premises a lil bit.

Why do you assume they don't know? Like, what do you mean by "know"? Are you talking about conscious subjective experience? Or consistency of output? Or an internal world model?

There's lots of evidence to indicate they are not conscious, although they can exhibit theory of mind. Eg: https://arxiv.org/pdf/2308.08708.pdf

For consistency of output and internal world models, however, there is mounting evidence to suggest convergence on a shared representation of reality. Eg this paper published 2 days ago: https://arxiv.org/abs/2405.07987

The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn't correct, although it is often repeated online for some reason.

A little evidence that comes to my mind is this paper showing models can understand rare English grammatical structures even if those structures are deliberately withheld during training: https://arxiv.org/abs/2403.19827

[–] HorseRabbit@lemmy.sdf.org 0 points 5 months ago

What are hoops that you chase with a stick doing to children? Demands grow to restrict kids access to hoops that you chase with a stick

[–] HorseRabbit@lemmy.sdf.org 4 points 6 months ago (1 children)

I still don't get why no one has murdered the CEO of ExxonMobil

[–] HorseRabbit@lemmy.sdf.org 6 points 7 months ago

Weird dichotomy.

Protests can be entirely rational without the main motivation being recruitment.
