I was thinking of it as a cost-cutting measure. As long as performance is comparable to a moderate CPU/GPU combination, it means less silicon, fewer interconnects, less RAM, fewer coolers, and less chance of something breaking during shipping/assembly. Like a gaming console.
Such a PC could still use sockets with upgradable APUs or CPUs, as well as PCIe slots for dedicated GPUs, retaining basic upgradability. A lot depends on the upcoming AMD APUs.
This article is comparing apples to oranges here. DeepSeek R1 is a mixture-of-experts reasoning model with 671 billion parameters, while the Meta model is a dense 70-billion-parameter model without reasoning, which performs much worse.
They should be comparing DeepSeek to reasoning models such as OpenAI's o1. The two produce comparable results, but o1 costs significantly more to run. It's impossible to know how much energy it uses because it's a closed model and OpenAI doesn't publish that information, but they charge a lot for it through their API.
Tldr: It's a bad-faith comparison. Like comparing a train to a car and complaining about how much more diesel the train used on a 3-mile trip between stations.