People who have put next to 0 time into understanding generative pre-trained transformers: It's just PR bullshit!
People who have looked into how it works: This has more applications than previously thought.
I haven't watched a lot of Two Minute Papers, but this video is very misleading. Simulated environments have been used for years to speed up deep RL. The only ChatGPT/LLM portion was about defining a scoring mechanism, and the video gives no indication of whether it did a better job or not. Not to mention, the problem the LLM was solving is one that's been studied for decades, which weakens the "it generalizes better" claim.
I'm not saying LLMs don't have a lot of potential, but that video isn't really supportive of that stance.
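For context on the "LLM defines the scoring mechanism" point: in that kind of setup, the reward function an agent trains against inside the simulator is written (or proposed) by an LLM rather than hand-tuned by a researcher. A minimal, hypothetical sketch of the idea follows; the toy environment, the `llm_style_reward` function, and the distance-based reward shape are all my own illustration, not anything from the video.

```python
import random

# Toy stand-in for a simulated environment: an agent moves on a 1-D line
# toward a goal. Real DeepRL work would use a physics simulator instead.
class LineEnv:
    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def step(self, action):  # action is -1 or +1
        self.pos += action
        done = self.pos == self.goal
        return self.pos, done

# Hypothetical "LLM-written" scoring function. In the setup being discussed,
# an LLM would generate code like this; here it is hand-written for
# illustration: simple distance-to-goal reward shaping.
def llm_style_reward(pos, goal):
    return -abs(goal - pos)

def run_episode(max_steps=50, seed=0):
    """Roll out a random policy in the sim, scoring it with the reward above."""
    rng = random.Random(seed)
    env = LineEnv()
    total = 0.0
    for _ in range(max_steps):
        action = rng.choice([-1, 1])
        pos, done = env.step(action)
        total += llm_style_reward(pos, env.goal)
        if done:
            break
    return total
```

The commenter's point maps onto this sketch: the simulator loop is the long-established part, and only the scoring function is the LLM's contribution.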
This is the best summary I could come up with:
The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.
Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post.
During its first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.
Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year.
Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
The original article contains 1,563 words; the summary contains 268 words (83% shorter). I'm a bot and I'm open source!