BackwardsPuzzleBox

joined 10 months ago
[–] BackwardsPuzzleBox@alien.top 1 points 10 months ago (1 children)

I mean, if we consider the LLM to be the brain, it should ideally be able to master math similarly to how the human brain does, right?

Except it's not a brain. It's, at best, a tiny portion of a brain's processing, hence the move toward multi-modal models.

It's a bit like trying to do poetry with your visual cortex, or math with your autonomic system. I mean, more power to you, but you can use a hammer on a screw; it doesn't mean you should.

[–] BackwardsPuzzleBox@alien.top 1 points 10 months ago

Forcing probabilistic models to engage in deterministic processes is perhaps the most fantastic form of thumb twiddling we have yet invented.

[–] BackwardsPuzzleBox@alien.top 1 points 10 months ago

The very idea of using GPT models to create datasets is such a mind-numbingly dumb, incestuous decision to begin with. Essentially the 21st-century version of creating a xerox of a xerox.

In a lot of ways, it's heralding the future enshittification of AI, as dabblers think every problem can be automated away without human judgement or editorialisation.