Thoughts from James, who recently held a Gen AI literacy workshop for older teenagers.

On risks:

One idea I had was to ask a generative model a question and fact-check points in front of students, allowing them to see fact-checking as part of the process. Upfront, it must be clear that while AI-generated text may be convincing, it may not be accurate.

On usage:

Generative text should not be positioned as, or used as, a tool to entirely replace tasks; that could be disempowering. Rather, it should be taught as a creativity aid. Such a class should involve an exercise in making something.

lvxferre@lemmy.ml 3 points 1 year ago* (last edited 1 year ago)

> I propose that the specifics of the internals don’t matter in this case because LLMs are made of dozens of layers which can easily explain higher orders of abstraction

They do, because the "layers" that you're talking about (feed-forward, embedding, attention layers, etc.) are still handling tokens and the relationships between them, and nothing else. LLMs were built for that.
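
To make that concrete, here's a toy sketch of the three layer types named above — plain numpy with made-up sizes, not any particular model's code: every stage maps token IDs or per-token vectors to per-token vectors, and attention is exactly the part that weighs each token by its relationship to the other tokens in the sequence.

```python
# Toy sketch of the layer types mentioned above (embedding, attention,
# feed-forward) -- not any real model, just to show what they operate on:
# token IDs in, per-token vectors out, token-to-token relationships in between.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 100, 16          # made-up sizes

# Embedding layer: one learned vector per vocabulary token.
E = rng.normal(size=(vocab_size, d_model))

def embed(token_ids):
    return E[token_ids]                                # (seq_len, d_model)

def attention(x):
    # Self-attention: each token's vector is re-weighted by its similarity
    # to every other token's vector -- i.e. token-to-token relationships.
    scores = x @ x.T / np.sqrt(d_model)                # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

W1 = rng.normal(size=(d_model, 4 * d_model))
W2 = rng.normal(size=(4 * d_model, d_model))

def feed_forward(x):
    # Position-wise feed-forward: transforms each token's vector independently.
    return np.maximum(x @ W1, 0) @ W2

tokens = np.array([3, 14, 15, 92])                     # hypothetical token IDs
out = feed_forward(attention(embed(tokens)))
print(out.shape)                                       # (4, 16): still one vector per token
```

Nothing in that pipeline receives anything other than tokens and the statistics relating them.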

> [see context] and they exist as black boxes beyond the mechanics of the model

This is like saying "we don't know, so let's assume that it doesn't matter". It does matter, as shown above.

> I’m taking for granted that they can as the null hypothesis because they can readily produce outputs that appear for all intents and purposes to conceptualize.

I'm quoting out of order because this is relevant: by default, H₀ is always "the phenomenon doesn't happen", "there is no such attribute", "this doesn't exist", and so on. It's scepticism, not belief; otherwise we're committing the fallacy known as "inversion of the burden of proof".

In this case, H₀ should be that LLMs do not have the ability to handle concepts. That said:

> Is there an experiment you can propose which would falsify your assertion that LLMs cannot conceptualize?

If you can show an LLM chatbot that never hallucinates, even when we submit prompts designed to make it go nuts, that would be decent (albeit inductive) evidence that the chatbot in question is handling more than just tokens/morphemes. Note: it would not be enough to show that the bot got it right once or twice; you need to show that it consistently gets it right.
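
To put a number on that note (made-up rates, purely for illustration): under H₀ the bot keeps juggling tokens and hallucinates at some non-negligible rate on adversarial prompts, so a couple of clean answers is entirely expected, while a long clean streak is not.

```python
# Hypothetical numbers, only to illustrate why "consistently" matters:
# if H0 were true and the bot hallucinated on ~5% of adversarial prompts,
# how likely is a clean streak of n answers?
def p_clean_streak(hallucination_rate, n_prompts):
    """P(zero hallucinations in n independent prompts) at H0's assumed rate."""
    return (1 - hallucination_rate) ** n_prompts

print(p_clean_streak(0.05, 2))    # ~0.90 -> one or two clean answers prove nothing
print(p_clean_streak(0.05, 300))  # ~2e-7 -> a long clean streak is real (inductive) evidence
```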

If necessary/desired I can pull out some definition of hallucination to fit this test.

EDIT: it should also show some awareness of the contextual relevance of the tidbits of information that it pours out, regardless of their accuracy.
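
For concreteness, here's a rough outline of how the whole test could be wired up. `ask_chatbot`, `is_hallucination` and `is_contextually_relevant` are hypothetical placeholders (the fact-checking and relevance judgements are the genuinely hard part and would need human reviewers or a curated answer key), and `min_prompts` is an arbitrary floor reflecting the "consistently, not once or twice" requirement.

```python
# Rough outline of the proposed test -- a sketch, not a finished benchmark.
# ask_chatbot(), is_hallucination() and is_contextually_relevant() are
# hypothetical placeholders; the fact-checking step is the hard, human part.

def run_test(adversarial_prompts, ask_chatbot, is_hallucination,
             is_contextually_relevant, min_prompts=300):
    """Pass only if the bot consistently avoids hallucinating and stays
    on topic across many prompts designed to make it go nuts."""
    if len(adversarial_prompts) < min_prompts:
        raise ValueError("too few prompts: a clean answer or two proves nothing")
    for prompt in adversarial_prompts:
        answer = ask_chatbot(prompt)
        if is_hallucination(prompt, answer):
            return False   # one confirmed hallucination is enough to fail
        if not is_contextually_relevant(prompt, answer):
            return False   # the EDIT criterion: relevance, regardless of accuracy
    return True
```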