this post was submitted on 17 Mar 2025
586 points (97.0% liked)

Technology


Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
(page 3)

50 comments
[–] curiousaur@reddthat.com 5 points 1 week ago* (last edited 1 week ago) (1 children)

This is hard to quantify. I use them constantly throughout my work day now.

Are they smarter than me? I'm not sure. Haven't thought too much about it.

What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.

I can read error stacks coming from my code, but before I've even read the first few lines the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is that fix right, optimal, and free of new bugs? About 75% of the time at this point. I can coax it, iterate on the solution myself, or fix it entirely myself with the understanding of the bug it gave me. The same bug might have taken me hours to figure out on my own.
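Concretely, that workflow is just a few lines of glue code. Here's a minimal sketch assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt, and file paths are illustrative, not a specific product recommendation:

```python
# Sketch: hand a traceback plus the offending source file to an LLM and ask
# for a diagnosis. Assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment; model name and paths are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def suggest_fix(traceback_text: str, source_path: str) -> str:
    """Ask the model to locate the root cause and propose a minimal fix."""
    source = Path(source_path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a debugging assistant. Identify the root cause "
                    "of the error and propose a minimal fix."
                ),
            },
            {
                "role": "user",
                "content": f"Traceback:\n{traceback_text}\n\nSource:\n{source}",
            },
        ],
    )
    return response.choices[0].message.content

# Example (hypothetical files):
# print(suggest_fix(Path("crash.log").read_text(), "app/handlers.py"))
```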

My point is, I'm not sure how to compare "smarter" with "orders of magnitude faster."

[–] DarrinBrunner@lemmy.world 4 points 1 week ago (1 children)

Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.

[–] ZephyrXero@lemmy.world 2 points 1 week ago

My new mental model for LLMs is that they're like genius four-year-olds. They have huge amounts of information, yet little to no wisdom about what to do with it or how to interpret it.

[–] technocrit@lemmy.dbzer0.com 4 points 1 week ago

What that overwhelming, uncritical, capitalist propaganda do...

[–] ZephyrXero@lemmy.world 3 points 1 week ago

What a very unfortunate name for a university.

[–] CalipherJones@lemmy.world 3 points 1 week ago (1 children)

AI is essentially the human super-id. No one man could ever be more knowledgeable. Being intelligent is a different matter.

[–] Waraugh@lemmy.dbzer0.com 3 points 1 week ago (6 children)

Is stringing words together really considered knowledge?

[–] 1984@lemmy.today 3 points 1 week ago (1 children)

An LLM simply has memorized facts. If that is smart, then sure, no human can compete.

Now ask an LLM to build a house. Oh shit, no legs, can't walk. A human can walk without even thinking about it.

In the future there will be robots that can build houses using AI models to learn from, but not for a long time.

[–] Omgpwnies@lemmy.world 3 points 1 week ago (1 children)

3D-printed concrete houses are already a thing; there's no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There's no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one has yet stitched the necessary components together.

It's not at all out of the question that an AI could be trained on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on specific aspects of housing systems like insulation, roofing, plumbing, framing, electrical, etc., which are then used to drive the actual machines building the house. The principal human requirement at that point would be engineers to check the math and sign off on a design for safety purposes.

[–] aceshigh@lemmy.world 3 points 1 week ago

Don't they reflect how you talk to them? E.g., my ChatGPT doesn't have a sense of humor and isn't sarcastic or sad. It only uses formal language and doesn't use emojis. It just gives me ideas that I test by trial and error.

[–] LovableSidekick@lemmy.world 2 points 1 week ago

I'm surprised it's not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.

[–] forrcaho@lemmy.world 2 points 1 week ago

As far as I can tell from the article, the definition of "smarter" was left to the respondents, and "answers as if it knows many things that I don't know" is certainly a reasonable definition -- even if you understand that, technically speaking, an LLM doesn't know anything.

As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty "smart":

what's a good word to describe the people in a poll who answer the questions? I didn't want to use "subjects" because that could get confused with the topics covered in the poll.

"Respondents" is a good choice. It clearly refers to the people answering the questions without ambiguity.

The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
