squareOfTwo

joined 1 year ago
[–] squareOfTwo@alien.top 1 points 11 months ago (1 children)

These things don't "understand". Ask them something that is too far out of distribution (OOD) and you get wrong answers, even when a human would give the correct answer based on the same training data.

[–] squareOfTwo@alien.top 1 points 11 months ago

Julia as a language is a good tool for some ML tasks, and some libraries are usable for ML, e.g. https://juliapackages.com/p/autograd . Most of the ecosystem is CPU-focused, but there are some GPU libraries too.
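
A minimal sketch of what gradient computation with AutoGrad.jl can look like (the `Param` / `@diff` / `grad` names follow the package README; exact usage may differ between versions):

```julia
using AutoGrad                 # https://juliapackages.com/p/autograd

x = Param([1.0, 2.0, 3.0])     # mark the values we want gradients for
y = @diff sum(abs2, x)         # record a differentiable computation
grad(y, x)                     # => [2.0, 4.0, 6.0]
```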

[–] squareOfTwo@alien.top 1 points 11 months ago (2 children)

Python is a degenerate language without strong typing etc., which will die out at some point just like Perl or COBOL. Don't listen to shortcut answers like "Python is only glue!!!!".

Not every ML workload is best written in Python.

Use your tools wisely.

[–] squareOfTwo@alien.top 1 points 11 months ago

effective altruism

[–] squareOfTwo@alien.top 1 points 11 months ago

Looks like a great model to test and use for certain "reasoning" use cases.

[–] squareOfTwo@alien.top 1 points 11 months ago

It's more like: "I'm sorry Dave, I'm too fucking stupid to correctly parse your request."

[–] squareOfTwo@alien.top 1 points 1 year ago (1 children)

It will be a lot harder to do in real life, hahaha.

[–] squareOfTwo@alien.top 1 points 1 year ago

0% consciousness / agency

100% confusion

as usual

[–] squareOfTwo@alien.top 1 points 1 year ago

They and their made-up, pseudo-scientific pseudo-"alignment" piss me off so much.

No, a model won't just have a stroke of genius and decide to hack into a computer, for many reasons.

Hallucination is one of them. Guessed a wrong token for a program? Oops, the attack doesn't work. Oh, and don't forget that the tokens don't fit into the context window.

[–] squareOfTwo@alien.top 1 points 1 year ago

Good. Screw "alignment".
