ReasonablyBadass

joined 1 year ago
[–] ReasonablyBadass@alien.top 1 points 11 months ago (3 children)

This has damaged AGI safety research massively.

If OpenAI goes too slow, others will overtake it.

Other companies will view safety-oriented researchers as potential "traitors".

If GPT is shuttered, people will turn to more ruthless competitors.

And finally, they may even turn to open source directly, massively accelerating research there.

[–] ReasonablyBadass@alien.top 1 points 1 year ago

You can use a pretrained LLM as the core of a system capable of learning, though, like in the MemGPT paper.
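
Rough sketch of what I mean (my own toy example, not the MemGPT authors' code; `llm_complete` is a hypothetical stand-in for whatever completion API you use): the pretrained model's weights stay frozen, but the surrounding system accumulates memory between calls, so its behaviour changes over time.

```python
# Toy sketch of a MemGPT-style memory-augmented agent.
# The LLM itself is frozen; "learning" happens in the external memory.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a pretrained LLM."""
    return "(model reply would go here)"  # replace with a real API call

class MemoryAugmentedAgent:
    def __init__(self) -> None:
        self.memory: list[str] = []  # persists across interactions

    def respond(self, user_message: str) -> str:
        # Naive retrieval: pull stored notes sharing a word with the query.
        words = set(user_message.lower().split())
        relevant = [m for m in self.memory if words & set(m.lower().split())]
        prompt = (
            "Relevant memory:\n" + "\n".join(relevant[-5:]) +
            f"\n\nUser: {user_message}\nAssistant:"
        )
        reply = llm_complete(prompt)
        # Write the exchange back: no weight updates, but state accumulates.
        self.memory.append(f"user said: {user_message.lower()}")
        self.memory.append(f"assistant replied: {reply.lower()}")
        return reply
```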

[–] ReasonablyBadass@alien.top 1 points 1 year ago

The way the current agent experiments are going, it would seem Competent AGI can be built from Emerging AGI modules.

[–] ReasonablyBadass@alien.top 1 points 1 year ago

Really wary of big tech trying to create moats.

[–] ReasonablyBadass@alien.top 1 points 1 year ago

Re 1: I remember a recent paper reporting better results without tokenisation, at least in one area. I don't have the link right now, though.
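
If it helps, the tokenisation-free idea in a nutshell (my toy illustration, not code from that paper): operate directly on raw bytes, so the "vocabulary" is fixed at 256 and there is no tokenizer to train or to mangle rare words.

```python
# Toy illustration of byte-level (tokenisation-free) input.
# Not code from the paper; just showing the input representation.

text = "Héllo, wörld"

# Byte-level models use raw UTF-8 bytes as IDs: vocabulary size is
# always 256, with no learned merges and no out-of-vocabulary tokens.
byte_ids = list(text.encode("utf-8"))
print(byte_ids)       # every ID is in range 0..255
print(len(byte_ids))  # sequences run longer than subword tokens would

# The mapping is exact and lossless in both directions.
assert bytes(byte_ids).decode("utf-8") == text
```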

[–] ReasonablyBadass@alien.top 1 points 1 year ago (1 children)

The abstract says they trained on a labeled dataset. ViTs work on unlabeled ones, right?