sndrtj

joined 1 year ago
[–] sndrtj@feddit.nl 11 points 1 year ago

As an owner of an ARM laptop: wtf are they smoking?

[–] sndrtj@feddit.nl 18 points 1 year ago

1. Content that is pulled halfway through watching it.
[–] sndrtj@feddit.nl 6 points 1 year ago

Nutsack+ 🤣🤣

[–] sndrtj@feddit.nl 2 points 1 year ago

The US will probably ban it for geopolitical reasons.

I'm in Europe, BYD already is in this market.

[–] sndrtj@feddit.nl 8 points 1 year ago (2 children)

Conclusion: my next car will be an affordable Chinese car.

[–] sndrtj@feddit.nl 1 points 1 year ago

Excel is never, ever going to break backwards compatibility. In fact, quite a few "features" in Excel are there purely to stay bug-for-bug compatible with existing systems.

Example: Excel stores dates internally as a float, called the serial date; you can see it by formatting a cell that contains a date as a plain number. It is supposed to be the number of days since 1 January 1900. However, since early Excel versions had to be compatible with Lotus 1-2-3, Excel inherited one of its bugs: Lotus 1-2-3 had erroneously assumed 1900 to be a leap year, so Excel also counts a nonexistent 29 February 1900. On top of that, the indexing is off by one. The net result is that for all dates from 1 March 1900 onward, the actual zero epoch of an Excel serial date is 30 December 1899.
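The arithmetic above can be sketched in a few lines of Python. This is an illustration, not Excel's own code: it just applies the 30 December 1899 epoch, which is only valid for serials from 61 (1 March 1900) onward because of the phantom leap day.

```python
from datetime import date, timedelta

EPOCH = date(1899, 12, 30)  # effective serial-0 epoch for dates >= 1 March 1900

def excel_serial_to_date(serial: int) -> date:
    """Convert an Excel serial date to a Python date.

    Only correct for serial >= 61 (1 March 1900 onward); earlier
    serials are shifted by the nonexistent 29 Feb 1900 (serial 60).
    """
    return EPOCH + timedelta(days=serial)

def date_to_excel_serial(d: date) -> int:
    """Inverse conversion, for dates on or after 1 March 1900."""
    return (d - EPOCH).days

# Serial 61 is 1 March 1900 (serial 60 is the phantom leap day):
print(excel_serial_to_date(61))                 # 1900-03-01
print(date_to_excel_serial(date(2024, 1, 1)))   # 45292
```

Note that real spreadsheet libraries (e.g. openpyxl) special-case serials below 61 for exactly this reason.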

[–] sndrtj@feddit.nl 10 points 1 year ago (1 children)

7 isn't random. A lunar cycle (ever wondered where the word "month" comes from? The moon, of course) is roughly 28 days, i.e. about 4 weeks.

[–] sndrtj@feddit.nl 3 points 1 year ago

Oh this one is absolutely golden!

[–] sndrtj@feddit.nl 10 points 1 year ago

Good riddance.

[–] sndrtj@feddit.nl 12 points 1 year ago (3 children)

ChatGPT flat-out hallucinates quite frequently in my experience. It never says "I don't know / that is impossible / no one knows" to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.

A good AI system wouldn't do this. It would be honest, and return nothing when the information simply doesn't exist. However, that is quite hard for LLMs, as they are essentially glorified next-word predictors. The training objective isn't accuracy of information; it's plausible-sounding conversation.

[–] sndrtj@feddit.nl 4 points 1 year ago

I'm finding it useful for detecting / correcting really simple mistakes, syntax errors and stuff like that.

But I'm finding it mostly useless for anything more complicated.

[–] sndrtj@feddit.nl 5 points 1 year ago

And the dystopia continues....
