this post was submitted on 01 Nov 2023

Machine Learning

My current PC laptop is soon ready to retire after seven years of service. As a replacement I'm considering the new MacBook Pros; it's mainly the battery life that makes me consider Apple. These are my requirements for the laptop:

  • great battery life
  • 16" screen, since I'm old and my eyes are degraded
  • support for dual external monitors
  • software engineering, including running some local Docker images

Then I have two ML requirements which I don't know whether a laptop can fulfill:

  • good performance when working with local LLMs (30B and maybe larger)
  • good performance for ML work such as PyTorch, Stable Baselines, and scikit-learn
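As a sanity check on the 36 GB configuration, here is a rough back-of-the-envelope estimate of the memory a quantized 30B model needs. This is a sketch, not a definitive figure: the 20% overhead factor for KV cache and runtime buffers is an assumption, and real usage varies with context length and runtime.

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumption: overhead of ~20% on top of the raw weights for the
# KV cache and runtime buffers (a crude rule of thumb).

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 0.2) -> float:
    """Approximate resident memory in GiB for a quantized model."""
    bytes_per_weight = bits_per_weight / 8
    weights_gb = params_billion * 1e9 * bytes_per_weight / 1024**3
    return weights_gb * (1 + overhead)

if __name__ == "__main__":
    # A 30B model at 4-bit quantization fits comfortably in 36 GB;
    # at 8-bit it is a very tight squeeze once the OS takes its share.
    print(f"30B @ 4-bit: ~{model_memory_gb(30, 4):.1f} GB")
    print(f"30B @ 8-bit: ~{model_memory_gb(30, 8):.1f} GB")
```

By this estimate a 4-bit 30B model (~17 GB) leaves headroom on a 36 GB machine, while an 8-bit quantization (~34 GB) would be marginal.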

To fulfill the MUST items, I think the following configuration would meet the requirements:

Apple M3 Pro chip with 12‑core CPU, 18‑core GPU, 16‑core Neural Engine
36 GB memory
512 GB SSD
Price: $2899

Question: Do you think I could fulfill the ML requirements with a MacBook Pro M3? If so, which configuration would be smart to buy?

Thankful for advice!

[–] Ok-Zookeepergame6084@alien.top 1 points 1 year ago (1 children)

From direct experience: M1 and M2 Airs and Minis run 7B quantized models OK, but barely; they prefer 3B models like Orca Mini. So your goal of running a quantized 30B model is realistic, but only for inference. I would use Ollama or another model server that exposes an API, and then run your interface client separately (Streamlit, Chainlit, et al.). From a dev's standpoint I prefer Macs just because of the Unix core in macOS. My experience with PC laptop NVIDIA GPUs for inference isn't great. If you want a plug-and-go setup, I recommend a MacBook. Dealing with C compilers is a pain in Windows, which affects llama.cpp, Solidity, and various others. Training or fine-tuning on any consumer-level hardware isn't practical to me unless it's a million-parameter model.
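The server-plus-client split described above can be sketched as a minimal Python client for a local model server, assuming Ollama's HTTP API at its default endpoint (`http://localhost:11434/api/generate`); the model name `orca-mini` is just an example, and the request is only sent when the script is run directly:

```python
# Minimal client sketch for a local Ollama server.
# Assumes Ollama is running locally with the example model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks the server to return one complete JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the text in the "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("orca-mini", "Why is the sky blue?"))
```

A Streamlit or Chainlit front end would then call `generate()` (or the streaming variant) from its UI code, keeping the model server and the interface as separate processes.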

[–] jjaicosmel68@alien.top 1 points 11 months ago

That went over my head! So basically I have an M3 Max and I want to run an LLM on this hunk of junk. How the hell do I even load a model onto it? I'm just asking Copilot. I'll let you guys know what actually performs.