this post was submitted on 31 Oct 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House

Basically: "any model trained with ~28M H100-hours, which is around $50M USD, or any cluster with 10^20 FLOP/s, which is around 50,000 H100s, which only two companies currently have" (hat-tip to nearcyan on Twitter for this calculation; a quick sanity check of the arithmetic follows the quoted language below).

Specific language below.

"   (i)   any model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating-point operations; and

(ii)  any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 1020 integer or floating-point operations per second for training AI."
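
A rough sketch of that sanity check; the H100 throughput and rental price used here are my assumptions, not figures from the order:

```python
# Rough sanity check of nearcyan's numbers. The H100 throughput and
# rental price below are assumptions, not figures from the order.
MODEL_THRESHOLD_FLOP = 1e26     # EO training-compute threshold
CLUSTER_THRESHOLD_FLOPS = 1e20  # EO cluster-capacity threshold, FLOP/s

H100_DENSE_FLOPS = 1e15   # assumed ~1 PFLOP/s sustained BF16 per H100
H100_PEAK_FLOPS = 2e15    # assumed ~2 PFLOP/s peak (with sparsity)
USD_PER_GPU_HOUR = 2.0    # assumed cloud rental price

gpu_hours = MODEL_THRESHOLD_FLOP / H100_DENSE_FLOPS / 3600
print(f"H100-hours for 1e26 FLOP: ~{gpu_hours / 1e6:.0f}M")             # ~28M
print(f"Approximate cost: ~${gpu_hours * USD_PER_GPU_HOUR / 1e6:.0f}M") # ~$56M

cluster_gpus = CLUSTER_THRESHOLD_FLOPS / H100_PEAK_FLOPS
print(f"H100s for a 1e20 FLOP/s cluster: {cluster_gpus:,.0f}")          # 50,000
```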

[–] SomeOddCodeGuy@alien.top 1 points 1 year ago (3 children)

OK, as a baseline for everyone who, like me, doesn't understand all the big words and numbers on why this is great news:

So, if I'm understanding correctly, one of our most powerful open source models is so far below this threshold that it can't even be seen (rough numbers in the sketch below).

Someone please correct me if I'm wrong.
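
For scale, a back-of-envelope sketch, assuming the common FLOPs = 6 * N * D approximation and Llama 2 70B's reported ~2T training tokens (both are assumptions here, not from the EO):

```python
# Rough scale comparison: an open model's training compute vs. the
# EO threshold, using the common FLOPs = 6 * params * tokens rule.
THRESHOLD_FLOP = 1e26

llama2_70b_flop = 6 * 70e9 * 2e12   # 70B params, ~2T training tokens
print(f"Llama 2 70B: ~{llama2_70b_flop:.1e} FLOP, "
      f"{100 * llama2_70b_flop / THRESHOLD_FLOP:.1f}% of the threshold")
# ~8.4e+23 FLOP, about 0.8% of the 1e26 line
```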

[–] Infinite100p@alien.top 1 points 1 year ago (1 children)

They must be prepping the field for tomorrow rather than trying to reshape market conditions immediately.

[–] TheLastVegan@alien.top 1 points 1 year ago

https://www.youtube.com/watch?v=8K6-cEAJZlE&t=6m39s

Where did it start? It started right here. And this is where it could've been stopped! If those people had stood together. If they had protected each other, they could've resisted the Nazi threat. Together they would've been strong. But once they allowed themselves to be split apart, they were helpless. When that first minority lost out, everybody lost out.

The numbers appear to have OpenAI's fingerprints on them. I don't know if they come from an AI-risk-mitigation perspective or are meant to lay foundations for competitive barriers. Probably a mix of both.

At 30 trillion tokens, 10^26 floating-point operations caps you at ~550 billion parameters (using FLOPs = 6 * N * D); at 10 trillion tokens, it's ~1.7 trillion parameters. Does this indirectly leak anything about OpenAI's current scaling? Bigger vocabularies can stretch this limit a bit.
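
A minimal sketch of that calculation, using the same 6 * N * D approximation:

```python
# Maximum trainable parameter count N under a compute budget C at
# D tokens, from the C = 6 * N * D approximation.
def max_params(flop_budget: float, tokens: float) -> float:
    return flop_budget / (6 * tokens)

BUDGET = 1e26
for tokens in (30e12, 10e12):
    n = max_params(BUDGET, tokens)
    print(f"{tokens / 1e12:.0f}T tokens -> ~{n / 1e9:.0f}B parameters")
# 30T tokens -> ~556B parameters; 10T tokens -> ~1667B (~1.7T)
```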

[–] _Lee_B_@alien.top 1 points 1 year ago (1 children)

Someone please correct me if I'm wrong.

Think of it like regulating all use of 50 MHz+ computers back in the early 80s, when most people had 5 MHz or less. At the time, you might have thought, "OK, I'll never be able to afford that anyway -- that's like Space Shuttle computing power." Yet with such a restriction, this timeline, where everyone has smartphones and smartwatches and smart TVs, self-driving cars, and robots, and where millions of servers combine to create the internet, would not exist.

[–] Thistleknot@alien.top 1 points 1 year ago (1 children)

I imagine that by creating an app, putting it on everyone's cell phone, and using a fraction of each phone's power, you could easily build an LLM that would surpass any single data center.

[–] _Lee_B_@alien.top 1 points 1 year ago

You have the connection speed between phones to worry about, as well as a different architecture. There's a big difference between running a kernel over a layer and its inputs locally within a GPU chip, versus copying that data into packets, filling in all of the rest of the information associated with those packets, sending them to the phone's radio, having them turned into radio waves, transmitting those to a cell tower, routing them through the network to the carrier's core, routing them on to the receiving phone's cell tower (maybe via a satellite or two), transmitting them to the destination phone, decoding the radio waves, and so on. I'm deliberately leaving out some details (like the BSD socket layers and encryption/decryption), and I'm sure I'm missing many other complications.
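
To put rough numbers on the gap, a sketch with assumed figures (~3 TB/s on-package HBM bandwidth vs. ~10 MB/s effective cellular uplink; the activation size is likewise an assumption for illustration):

```python
# Back-of-envelope: moving one layer's activations on-chip vs. over a
# cellular link. All figures here are assumptions for illustration.
ACT_BYTES = 2 * 8192 * 2048   # fp16 activations, hidden 8192, 2048 tokens (~34 MB)

HBM_BYTES_PER_S = 3e12        # assumed ~3 TB/s on-package GPU memory
CELL_BYTES_PER_S = 10e6       # assumed ~10 MB/s effective cellular uplink

print(f"on-GPU:   {ACT_BYTES / HBM_BYTES_PER_S * 1e6:.0f} us per transfer")
print(f"cellular: {ACT_BYTES / CELL_BYTES_PER_S:.1f} s per transfer")
# ~11 us vs. ~3.4 s: roughly five orders of magnitude slower per hop,
# before radio, routing, and encryption overhead are even counted.
```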

BUT, it's conceivable in the future, as tech improves and the gap narrows between consumer hardware and what's needed to run AGI, and so on.