I'm finding it useful for detecting / correcting really simple mistakes, syntax errors and stuff like that.
But I'm finding it mostly useless for anything more complicated.
And the dystopia continues....
This is very much the default in the Netherlands. Yes, theft happens, but your license plate will be clearly visible on CCTV, meaning the police will pay you a visit soon after.
Would this plan remove all ads? Only then would I consider it.
No NDA, completely different sector. The stated reason was the same as in the article: "we need your full undivided attention", followed by some bullshit about being "concerned" I would overwork myself. Maybe they should have reduced the workload at work if they were truly concerned, as I was pulling 60+ hour work weeks at the time.
You never talk about what you did over the weekend at the water cooler?
Also: she, not he.
I turn off almost all notifications. I only allow messaging apps, and system notifications. Even then I find it too much to be honest.
My employer is the same. I was almost fired for attending an (unpaid!) hackathon on a weekend. A colleague was fired for doing volunteer work on weekends.
Yes, I'm looking for a new job.
Pinduoduo is the parent company of Temu. Of course it's going to be the same dev team.
This is like saying you're surprised Instagram shares code and engineers with Facebook.
Referral codes aren't exactly uncommon for new apps, especially if VC money is involved.
It's about time.
ChatGPT flat-out hallucinates quite frequently in my experience. It never says "I don't know / that is impossible / no one knows" to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.
A good AI system wouldn't do this. It would be honest and return no result when the information simply doesn't exist. However, that is quite hard for LLMs, as they are essentially glorified next-word predictors: the training objective rewards plausible-sounding continuations, not factual accuracy.
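A toy sketch of that failure mode (a bigram model, not a real LLM; the corpus and the "atlantis" query are invented for illustration): a model trained only to predict the most likely next word has no notion of "I don't know", so it confidently completes unanswerable prompts with statistically familiar text.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus of true statements.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid . "
).split()

# Count word -> next-word frequencies (a bigram "language model").
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word; there is no 'I don't know' option."""
    counts = transitions[word]
    return counts.most_common(1)[0][0] if counts else None

def complete(prompt, max_words=6):
    words = prompt.split()
    while len(words) < max_words:
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# A question with no true answer: the model has never seen "atlantis",
# but the last word "is" has familiar continuations, so it confidently
# emits a plausible-sounding fabrication instead of refusing.
print(complete("the capital of atlantis is"))
```

The model never abstains because abstaining was never part of its objective; it only ever ranks continuations by how often they followed the previous word in training.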