squareOfTwo

joined 2 years ago
[–] squareOfTwo@alien.top 1 points 2 years ago (1 children)

these things don't "understand". Ask one something that is too far out of distribution (OOD) and you get wrong answers, even where a human would give the correct answer based on the same training data.

[–] squareOfTwo@alien.top 1 points 2 years ago

Julia as a language is a good tool for some ML tasks, and some libraries are usable for ML, e.g. https://juliapackages.com/p/autograd . Most of the ecosystem targets the CPU, but there are some GPU libs too.
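For example, a rough sketch of taking a gradient with AutoGrad.jl (based on its documented Param / @diff / grad API; untested here):

```julia
using AutoGrad               # https://juliapackages.com/p/autograd

x = Param([1.0, 2.0, 3.0])   # mark x as a differentiable parameter
y = @diff sum(abs2, x)       # record the computation of a scalar loss (sum of squares)
g = grad(y, x)               # gradient d(sum x_i^2)/dx = 2x  =>  [2.0, 4.0, 6.0]
```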

[–] squareOfTwo@alien.top 1 points 2 years ago (2 children)

Python is a degenerate language without strong typing etc., and it will die out at some point just like Perl or COBOL. Don't listen to shortcut answers like "Python is only glue!!!!".

Not every ML workload is best written in Python.

Use your tools wisely.

[–] squareOfTwo@alien.top 1 points 2 years ago

effective altruism

[–] squareOfTwo@alien.top 1 points 2 years ago

looks like a great model to test / use for certain "reasoning" use cases.

[–] squareOfTwo@alien.top 1 points 2 years ago

it's more like: I'm sorry Dave, I'm too fucking stupid to correctly parse your request.

[–] squareOfTwo@alien.top 1 points 2 years ago (1 children)

will be a lot harder to do in real life, hahaha

[–] squareOfTwo@alien.top 1 points 2 years ago

0% consciousness / agency

100% confusion

as usual

[–] squareOfTwo@alien.top 1 points 2 years ago

They and their made-up, pseudo-scientific pseudo-"alignment" piss me off so much.

No, a model won't just have a stroke of genius and decide to hack into a computer. For many reasons.

Hallucination is one of them. Guess one wrong token in a program and oops, the attack doesn't work. Oh, and don't forget that the required tokens don't fit into the context window.

[–] squareOfTwo@alien.top 1 points 2 years ago

good. Screw "alignment"
