this post was submitted on 15 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.

founded 10 months ago

This is purely out of curiosity, but if anybody has some insights I'd love to hear it.

I am running 70B Q4 models on my M1 Max MacBook Pro (10-core CPU, 32-core GPU, 64 GB RAM). The lid is closed because an external 4K monitor is attached via USB-C, so the built-in display doesn't draw any power.

I am using both llama.cpp and LM Studio, and in both cases I run the LLMs with Metal acceleration.

Now, when running the LLM, I notice that according to iStat Menus my MacBook is drawing between 95 and 110 W 😮

(The fans get loud quickly, just like the good old Intel days. But it seems able to sustain this.)

But how is that possible?

Where is that power draw coming from? The GPU alone maxes out around 45 W, and the CPU at around 30 W (I forget the exact value), but the CPU isn't even used much; in the screenshot it pulls a meager ~12 W. So that's ~57 W for CPU+GPU combined. Where do the other 50+ W go?

Where is the additional power draw coming from? I know there are lots of other components here: RAM (probably single-digit wattage?), fans, the memory controller, etc. But we're talking about a large chunk of power.
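The arithmetic above, as a quick sketch (all figures are the rough iStat Menus readings quoted in the post, not precise measurements):

```python
# Back-of-envelope power budget for the observed draw.
# All wattages are the approximate values from the post.
gpu_max_w = 45       # rough max GPU package power
cpu_observed_w = 12  # CPU draw seen in the screenshot during inference
total_wall_w = 110   # upper end of the observed 95-110 W wall draw

soc_w = gpu_max_w + cpu_observed_w
unaccounted_w = total_wall_w - soc_w
print(f"CPU+GPU: ~{soc_w} W, unaccounted: ~{unaccounted_w} W")
```

which is where the "other 50+ W" figure comes from.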

Does anybody know? :)

https://preview.redd.it/6xxet64ash0c1.png?width=2869&format=png&auto=webp&s=ca3a1f416b9f2764e7143d262a5540fb2d02fa44

[–] Herr_Drosselmeyer@alien.top 0 points 10 months ago (3 children)

Under full load, and if thermals allow it, that machine can draw up to 120 W from the wall. The tool likely isn't reading the SoC power draw correctly.

[–] k_michael@alien.top 0 points 10 months ago (1 children)

Hm, you're right, I also remember the AnandTech article on M1 Max power draw. Maybe the tool really isn't reading the draw correctly 🤔 It's still interesting, though: if I run a 3D game on my MBP, it draws maybe 65-70 W under full load. The LLM must be using some component that the 3D game isn't 🤷‍♂️

[–] FlishFlashman@alien.top 0 points 10 months ago (1 children)

A game's GPU workload probably hits the cache; an LLM largely won't, since generating each token involves reading all of the model's weights from memory.
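One way to see how memory-bound that is: a rough ceiling on token rate from bandwidth alone. This sketch assumes ~4.5 bits per weight for a typical Q4 quant and Apple's quoted 400 GB/s bandwidth for the M1 Max; both are ballpark assumptions, not measurements:

```python
# Rough upper bound on tokens/s if every token must stream the full
# weight set from RAM (ignores caching, KV cache, and compute limits).
params = 70e9            # 70B parameters
bytes_per_param = 0.57   # ~4.5 bits/weight, typical for Q4 quants
model_bytes = params * bytes_per_param  # ~40 GB of weights
bandwidth = 400e9        # M1 Max quoted peak, bytes/s

tokens_per_s = bandwidth / model_bytes
print(f"~{tokens_per_s:.1f} tokens/s bandwidth-limited ceiling")
```

Streaming ~40 GB through the memory controller many times per second would plausibly explain power draw that CPU and GPU core figures alone don't account for.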

[–] k_michael@alien.top 1 points 10 months ago

That's a good point actually! Thanks
