this post was submitted on 19 Nov 2023
88 points (92.3% liked)

Technology


Questions remain about what spurred the board’s decision to oust Altman, but growing tensions became impossible to ignore as Altman rushed to make OpenAI the next big technology company.

top 14 comments
[–] alienanimals@lemmy.world 10 points 1 year ago (2 children)

People who have put next to zero time into understanding generative pre-trained transformers: It's just PR bullshit!

People who have looked into how it works: This has more applications than previously thought.

[–] PipedLinkBot@feddit.rocks 2 points 1 year ago

Here is an alternative Piped link(s):

This has more applications than previously thought.

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] jacksilver@lemmy.world 1 points 1 year ago

I haven't watched a lot of Two Minute Papers, but this video is very misleading. Simulated environments have been used for years to speed up deep RL. The only ChatGPT/LLM portion was about defining a scoring mechanism, and the video gives no indication of whether it did a better job or not. Not to mention the problem the LLM was solving is one that's been studied for decades, which undercuts the "it generalizes better" claim.

I'm not saying LLMs don't have a lot of potential, but that video isn't really supportive of that stance.
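For readers unfamiliar with the "LLM defines a scoring mechanism" idea the comment refers to: the pattern is to have a language model emit reward-function source code for a simulated RL environment, compile it, and use it to score rollouts. The sketch below is a minimal, hypothetical illustration of that loop; the LLM call is stubbed out (a real system would query a model API), and all names and the toy `position` task are made up for this example.

```python
# Hypothetical sketch: an LLM proposes a reward ("scoring") function as
# Python source, which is compiled and used to score simulated rollouts.
# The LLM call is a stub; in practice it would be an API request.

def llm_propose_reward(task_description: str) -> str:
    """Stub standing in for an LLM call that returns reward-function code."""
    # A real system would send task_description to a model and parse the reply.
    return (
        "def reward(state):\n"
        "    # Higher reward the closer the agent is to position 1.0\n"
        "    return -abs(state['position'] - 1.0)\n"
    )

def compile_reward(source: str):
    """Compile generated source and extract the reward function."""
    namespace = {}
    exec(source, namespace)  # trust boundary: generated code executes here
    return namespace["reward"]

def evaluate(reward_fn, trajectory) -> float:
    """Score a rollout (a list of environment states) with the proposed reward."""
    return sum(reward_fn(state) for state in trajectory)

reward_fn = compile_reward(llm_propose_reward("move the agent to position 1.0"))
good_rollout = [{"position": 0.9}, {"position": 1.0}]
bad_rollout = [{"position": 0.0}, {"position": 0.2}]
# A rollout ending near the target should outscore one that stays far away.
assert evaluate(reward_fn, good_rollout) > evaluate(reward_fn, bad_rollout)
```

The point of the commenter's critique is that only this reward-definition step involved the LLM; the simulated-environment speedup around it is standard practice.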

[–] autotldr@lemmings.world 6 points 1 year ago

This is the best summary I could come up with:


The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.

Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post.

During its first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.

Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”

Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year.

Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.


The original article contains 1,563 words, the summary contains 268 words. Saved 83%. I'm a bot and I'm open source!