[–] kibiz0r@midwest.social 31 points 9 months ago

Also pulled her catalog from Spotify to protest their scummy royalty payouts. They changed the payouts for everyone as a result.

[–] kibiz0r@midwest.social 3 points 9 months ago (1 children)

> You see the irony, right? I genuinely can’t fathom your intent when telling this story, but it is an absolutely stellar example.

Yes, I did mean for it to be an example.

And yes, I do think that correctly framing a question is crucial whether you're dealing with a person or an LLM. But I was elaborating on whether a person's process of answering a question is fundamentally similar to an LLM's process. And this is one way that it's noticeably different. A person will size up who is asking, what they're asking, and how they're asking it... and consider whether they should actually answer the exact question that was asked or suggest a better question instead.

You can certainly work around it, as the asker, but it does require deliberate disambiguation. I think programmers are used to doing that, so it may not feel like a big deal, but start paying attention to how often people toss around half-formed questions or statements and just expect the recipient to fill in the gaps... it's basically 100% of the time.

We're fundamentally social creatures first, and intelligent creatures second. (Or third, or not at all, depending.) We think better as groups. If you give 10 individuals a set of difficult questions, they'll bomb almost all of them. If you give the same questions to a group of 10, they'll get almost all of them right. (There are several You Are Not So Smart episodes on this, but the main one is 111.)

Asking a question to an LLM is just completely different from asking a person. We're not optimized for correctly filling out scantron sheets as individuals; we're optimized for brainstorming ideas and pruning them as a group.

[–] kibiz0r@midwest.social 1 point 9 months ago* (last edited 9 months ago) (3 children)

I dare say that if you ask a human "Why should I not stick my hand in a fire?", their process for answering the question is going to be very different from an LLM's.

ETA: Also, working in software development, I'll tell ya... Most of the time, when people ask me a question, it's the wrong question; they just didn't know to ask a different one. LLMs don't handle that scenario.

I've tried asking ChatGPT "How do I get the relative path from a string that might be either an absolute URI or a relative path?" It spat out 15 lines of code for doing it manually. I ain't gonna throw that maintenance burden into my codebase. So I clarified: "I want a library that does this in a single line." And it found one.
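To illustrate the kind of thing I mean (in Python, and not necessarily the library it found; the stdlib's urllib.parse just happens to handle this case):

```python
from urllib.parse import urlparse

# urlparse copes with both forms: for an absolute URI it returns the
# path component; for a bare relative path it returns the string as-is.
print(urlparse("https://example.com/docs/readme.md").path)  # /docs/readme.md
print(urlparse("docs/readme.md").path)                      # docs/readme.md
```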

An LLM can be a handy tool, but you have to remember that it's also a plagiarizing, shameless bullshitter of a monkey paw.

[–] kibiz0r@midwest.social 5 points 9 months ago

For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn't extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.

[–] kibiz0r@midwest.social 5 points 9 months ago (1 children)

They joke and speculate a lot, for sure. But what did he make up that has any bearing on his argument?

[–] kibiz0r@midwest.social 3 points 9 months ago (2 children)

> Better be careful what you say

I know it’s not the point, but that always strikes me as so dumb. Wouldn’t a superintelligent being know that you were simply hiding your true feelings?

[–] kibiz0r@midwest.social 10 points 9 months ago

Oh hey, it’s the guy Dan Olsen was talking about in that Folding Ideas video!

[–] kibiz0r@midwest.social 8 points 9 months ago (1 children)

> The United States does not seek conflict in the Middle East or anywhere else in the world

Narrator: It does.

[–] kibiz0r@midwest.social 1 point 9 months ago

Yeah, the privacy-minded socially-averse demographic is a well-documented stronghold of feminist support.

[–] kibiz0r@midwest.social 7 points 9 months ago* (last edited 9 months ago) (1 children)

It really isn’t that simple.

If all your system cares about is recording incoming events at a discrete time, then sure: UTC for persistence and localization for display solves all your problems.

But if you have any concept of user-defined time ranges or periodic scheduling, you get in the weeds real quick.

There is a difference between saying “this time tomorrow” vs. “24 hours from now” because of DST (and, over longer spans, leap years and leap seconds compound it).
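A quick Python sketch of that difference (assuming the tz database is available; 2024-03-10 is the US spring-forward date):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")
# Noon on the day before clocks skip from 2:00 to 3:00 AM.
start = datetime(2024, 3, 9, 12, 0, tzinfo=tz)

# "This time tomorrow": same wall-clock time on the next calendar day.
# (Adding a timedelta to an aware datetime is wall-clock arithmetic in Python.)
tomorrow_same_time = start + timedelta(days=1)

# "24 hours from now": true elapsed time, so route through UTC.
after_24h = (start.astimezone(ZoneInfo("UTC")) + timedelta(hours=24)).astimezone(tz)

print(tomorrow_same_time)  # 2024-03-10 12:00:00-04:00
print(after_24h)           # 2024-03-10 13:00:00-04:00, a different moment entirely
```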

Time zones (and who observes them) change over time. As does DST.

If you allow monthly scheduling, you have to account for some days not being valid in some months, and for the fact that which days are valid changes in leap years.
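One common way to handle it is clamping to the last valid day of the target month. A Python sketch (the add_months helper is mine, purely illustrative; rolling over or rejecting the date are equally defensible policies):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    # Clamp: Jan 31 + 1 month -> Feb 29 in a leap year, Feb 28 otherwise.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    last_day = calendar.monthrange(year, month)[1]
    return d.replace(year=year, month=month, day=min(d.day, last_day))

print(add_months(date(2024, 1, 31), 1))  # 2024-02-29 (leap year)
print(add_months(date(2023, 1, 31), 1))  # 2023-02-28
```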

If you allow daily scheduling, you need to be aware that some hours of the day may not exist on certain days or may exist twice.
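Python's zoneinfo makes both failure modes easy to poke at (again assuming the tz database; the US fall-back date in 2024 was November 3):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# 2:30 AM on 2024-03-10 never exists: clocks jump from 2:00 to 3:00.
# Python quietly resolves it anyway; some other stacks raise an error.
missing = datetime(2024, 3, 10, 2, 30, tzinfo=tz)
print(missing.utcoffset())  # resolves to the pre-transition offset, UTC-5

# 1:30 AM on 2024-11-03 exists twice; `fold` picks which occurrence you mean.
first = datetime(2024, 11, 3, 1, 30, tzinfo=tz, fold=0)
second = datetime(2024, 11, 3, 1, 30, tzinfo=tz, fold=1)
print(first.utcoffset(), second.utcoffset())  # UTC-4 (EDT) vs UTC-5 (EST)
```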

If you poll a client device and do any datetime comparisons, you need to decide whether you care about elapsed time or calendar time.

I worked on some code that was deployed to aircraft carriers in the Pacific. “This event already happened tomorrow” is completely possible when you cross the international date line.

Add to all of this the fact that there are different calendars across the world, even if the change is as small as a different “first day of the week”.

[–] kibiz0r@midwest.social 9 points 9 months ago

If only Adobe implemented some way to verify how images had been modified… https://c2pa.org/

[–] kibiz0r@midwest.social 5 points 9 months ago (1 children)

Reminds me of the lessons from Chad Fowler’s talk on “Legacy”. https://youtu.be/P4xSmYr7PEg
