I assume microcontrollers. Most of those are invisible to consumers.
stsquad
I recently joined the ranks of 3D printer owners. The first thing I printed was a pair of risers for mounting some hall effect sensors on my garage door mechanism. Very simple shapes but super handy.
Absolutely. Linux on the desktop, F-Droid on my Android phone. The fact that if something irritates me enough I can download the code and fix it.
I'm lucky I have a job working with FLOSS software. I don't think I could go back to hacking on proprietary code.
There have been a number of documentaries about Broadmoor, which is where our criminally insane prisoners tend to go.
I would not want anything that requires a cloud connection to be responsible for securing my house. The security record of these smart locks also isn't great.
The final question you need to ask yourself is whether they fail safe. There have been Tesla owners trapped in burning cars. If, god forbid, your house caught fire, could you get out of your door secured with a smart lock?
Someone was telling me it was a Kool-Aid competitor's product, which they did such a good job of discrediting that eventually the Kool-Aid brand got the association.
Was that Donald Glover in one of the scenes?
The demand for LLM inference will drop off when people finally realise it is not the road to AGI. However, there are still plenty of things GPU compute can be applied to, and maybe spot prices will come down again.
Thanks for that. I shall have to try out Reader.
I did watch the two LLM-related talks and tried out editor-code-assistant as a result. It's really nice being able to play with the powerful agent-based workflow directly in my favourite (only) editor.
Once we summit the peak of inflated expectations and the bubble bursts hopefully we'll get back to evaluating the technology on its merits.
LLMs definitely have some interesting properties, but they are not universal problem solvers. They are great at parsing and summarising language. Their ability to vibe code is entirely based on how closely your needs match the (vast) training data. They can synthesise tutorials and Stack Overflow answers much faster than you can. But if you are writing something new or specialised, the limits of their "reasoning" soon show up in dead ends and sycophantic "you are absolutely right, I missed that" responses.
More than the technology, the social context is a challenge. We are already seeing humans form dangerous parasocial relationships with token predictors, with some tragic results. If you abdicate your learning to an LLM you are not really learning, and that could have profound impacts on the current cohort of learners, who might be assuming they no longer need to learn because the computer can do it for them.
We are certainly experiencing a very fast technological disruption event and it's hard to predict where the next few years will take us.
The term I've heard is the "right-wing grift drift". Even the left-leaning Russell Brand went through the drift when he got cancelled after SA accusations.
Now I've read the article, it's unnamed industry analysts and it's written by an AI. For all I know the AI has hallucinated the number.