this post was submitted on 23 Nov 2023

Machine Learning


According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board's actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

(page 2) 17 comments
[–] racc15@alien.top 1 points 1 year ago (1 children)

I am slightly confused.

So, Sam Altman and colleagues discovered a very powerful thing called Q*. This will make OpenAI very powerful and will make the board a lot of money.

So, why did this cause the board to fire him?

From the article, it seemed like the board was too afraid of Q* and fired him to stop it being released without proper security features.

Could someone please help clarify this?

Thanks.

[–] galactictock@alien.top 1 points 1 year ago

It’s complicated. The board at OpenAI is (or was) focused on AI safety and is not entirely comprised of investors. Their goal was not to maximize profits.

I forget the exact phrasing, but the board said they fired Altman for not being completely honest with them. Based on the wording of the board’s rationale for firing Altman, it seems likely that Altman was not forthright about the capabilities of this breakthrough, possibly because the board would then halt its development out of safety concerns.

[–] Seankala@alien.top 1 points 1 year ago

I'm a little curious why this post has so many upvotes. I guess it shows that things really have changed a lot.

[–] mrscepticism@alien.top 1 points 1 year ago (1 children)

I know very little about ML (essentially nothing; I have a background in economics and a bit of statistics), but isn't AGI still miles away from these models?

Like, my understanding of LLMs is that they essentially "predict" the right word to respond to a prompt and then write a new word based on the previous one and so on. Actual human level intelligence seems to me to be a degree of complexity higher.
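That "predict, append, repeat" loop can be sketched with a toy stand-in; here a hypothetical dictionary plays the role of the neural network that scores next words, just to illustrate the decoding loop:

```python
# Toy stand-in for a language model: maps a word to its most likely successor.
# A real LLM scores every token in its vocabulary with a neural network;
# this dict is a hypothetical illustration of the sampling loop only.
toy_model = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(model, prompt, max_new_words):
    """Autoregressive decoding: repeatedly predict the next word and append it."""
    words = prompt.split()
    for _ in range(max_new_words):
        next_word = model.get(words[-1])  # "predict" from the last word only
        if next_word is None:             # no known continuation: stop
            break
        words.append(next_word)           # feed the output back in as input
    return " ".join(words)

print(generate(toy_model, "the", 4))  # the cat sat on the
```

The whole trick is that the model's own output becomes part of its next input; real LLMs condition on the full context window rather than only the last word.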

[–] blabboy@alien.top 1 points 1 year ago (1 children)

Does it? In what quantitative way?

[–] mrscepticism@alien.top 1 points 1 year ago (1 children)
[–] olliereid@alien.top 1 points 1 year ago

I'm not that clued up either. But given how the "predict the next word" method trivially enables the wonders of AI photo, video, audio, code, etc. generation, it's not inconceivable that it can also be extended to areas like logic and cognition.

The other commenter's rhetorical question is actually quite a good one. Since predicting the next best word alone already seems to do such a great job of convincing us of intelligence, perhaps the onus is on us to describe how our intelligence is anything more than essentially a predictor of next words.

[–] Melodic_Hair3832@alien.top 1 points 1 year ago

It's Q-asterisk, not Q-star. Somebody read the footnote, please.

[–] andrewlapp@alien.top 1 points 1 year ago

You might rent a GPU from runpod or another cloud provider.

Memory requirements:

34B Model Memory Requirements (infer)

Seq Len vs Bit Precision
SL / BP |     4      |     6      |     8      |     16    
-----------------------------------------------------------
    512 |     15.9GB |     23.8GB |     31.8GB |     63.6GB
   1024 |     16.0GB |     23.9GB |     31.9GB |     63.8GB
   2048 |     16.1GB |     24.1GB |     32.2GB |     64.3GB
   4096 |     16.3GB |     24.5GB |     32.7GB |     65.3GB
   8192 |     16.8GB |     25.2GB |     33.7GB |     67.3GB
  16384 |     17.8GB |     26.7GB |     35.7GB |     71.3GB
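Numbers on this scale can be estimated from first principles: weight memory is parameter count times bytes per parameter, plus a KV cache that grows linearly with sequence length. A minimal sketch follows; the layer count and KV dimension are illustrative assumptions, not the settings behind the table, so only the weight term is expected to line up:

```python
def infer_memory_gib(n_params, bits, seq_len, n_layers=48, kv_dim=1024):
    """Rough inference-memory estimate: quantized weights + KV cache.

    Assumes the KV cache is stored at the same precision as the weights
    and a small KV dimension (as with grouped-query attention); both are
    assumptions for illustration, not the settings behind the table above.
    """
    bytes_per = bits / 8
    weights = n_params * bytes_per                          # model weights
    kv_cache = 2 * n_layers * kv_dim * seq_len * bytes_per  # keys + values
    return (weights + kv_cache) / 2**30                     # bytes -> GiB

# 34B model at 4-bit, 512-token context: weights dominate at ~15.8 GiB,
# the same scale as the table's first row.
print(round(infer_memory_gib(34e9, 4, 512), 1))  # 15.9
```

The weight term alone (34e9 parameters at 0.5 bytes each, about 15.8 GiB) explains why the columns differ so much while the rows differ so little: sequence length only affects the comparatively small KV-cache term.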
[–] PlacidRaccoon@alien.top 1 points 1 year ago

I don't understand how a change in management is linked to this news. If anything, shouldn't it go the other way? That sounds like good news?
