this post was submitted on 25 Feb 2024
164 points (91.0% liked)

Technology

Google to pause Gemini AI image generation after refusing to show White people.

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

[–] andrewrgross@slrpnk.net 2 points 8 months ago (last edited 8 months ago)

I agree with your factual assessments.

The points on which I think it makes sense to remain open minded are these:

  1. The question we're examining is not whether current LLMs, or any LLM by itself, is sentient, but whether they're a step toward it. I think we need to be humble, because the end point of AGI is not something we can claim to understand at this stage. We can make very reasonable assessments, like the ones you're making, about what these systems specifically can't do by themselves. But could an LLM constitute a potential module within an AGI, for instance? If a future system combined an LLM with a mechanism for self-examination and self-guided retraining, what might the product be? I think these are reasonable ideas to consider.

  2. I really think we need to recognize the subjectivity at play here and formulate our inquiry around what functions it can perform, without getting sidetracked into its internal state. We can never know whether any machine can experience love, but we can assess whether a machine can convince a human that it loves them. If a machine were to create a work of art that humans found beautiful and innovative, we can't know whether the machine is able to appreciate beauty, but we can infer that it has achieved a certain level of capability which we associate with artistry when demonstrated by humans. The same issue arises when discussing art made by elephants: are elephant painters truly creative, or just experimenting with the tools? I think that's an unproductive question to ask. I think we need to benchmark primarily on overall performance, regardless of internal states, because of point three:

  3. I think we're comparing these systems to humans based on misconceptions of how sentient humans really are. Humans do many things which appear more intentional or motivated than we know them truly to be, based on cognitive neuroscience. What we know about humans comes from our individual experiences within our own minds and from observations of the performance of others, and this is remarkably biased toward overestimating the depth of our own faculties. We grossly overestimate how much we think before we talk, for instance. And we cannot measure or prove a human's ability to feel love any more than we can a machine's. We know these things exist because we can experience them, and because others have the persuasive ability to convince us that they experience them as well. But epistemologically, how do we define our experience of pain as essentially different from a machine reporting a diagnostic that it is damaged?

Ultimately, I agree with you on the broad strokes, and about the state of the current technology. I disagree with some of your certainty about the future of this technology and about the ways in which we assess it.

[–] huginn@feddit.it 3 points 8 months ago

Working through a response on mobile so it's a bit chunked. I'll answer each point in series but it may take a bit.

  1. That's not really what the video above claims. The presenter explicitly states that he believes GPT-4 is intelligent, and that increasing the size of the LLM will make it true AGI. My entire premise here is not that an LLM is useless, but that AGI is still entirely fantastical. Could an LLM conceivably be some building block of AGI? Sure, and it could just as conceivably turn out not to be. The "humble" position here is keeping AGI out of the picture, because we have no idea what it is or how to get there, while we know exactly what an LLM is and how it works. At its core, an LLM is a complex dictionary: a queryable encoding of all the data that was passed through it.

Can that model be tweaked, tuned, and updated? Sure. But there's no reason to think it demonstrates any capability out of the ordinary for "queryable encoded data", and there are plenty of questions as to why natural language would be the queryable encoding of choice for an artificial intelligence. Your brain doesn't encode your thoughts in English (or whatever language your internal thoughts use if you're ESL+); language is a specific function of the brain. That's why damage to the language centers of the brain can render people illiterate or mute without affecting any other capacities.
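To make the "queryable encoding" framing concrete, here's a toy sketch, a bigram Markov chain, which is vastly simpler than a transformer, but illustrates the same shape of the claim: the "model" is literally a dictionary built from training text, and "generation" is just repeatedly querying it for plausible continuations.

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram table: each word maps to the list of words that followed it."""
    table = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """'Query' the encoding: repeatedly sample a word that followed the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break  # no continuation seen in training data
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the pattern"
table = train(corpus)
print(generate(table, "the"))
```

Every word the toy model emits was, by construction, seen following its predecessor in the training data. An LLM's encoding is enormously more sophisticated (continuous, compressed, capable of generalizing across patterns), so the analogy shouldn't be pushed too far, but the query-the-training-data character is the point being made above.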

I firmly believe that LLMs as a component of broader AGI are certainly worth exploring, just like any of the other hundreds of forms of generative models or specialized "AI" tools: but that's not the language used to talk about them. The overwhelming majority of online discourse is AI-maximalist: delusional claims about the impending singularity, or endless claims of job loss and the full replacement of customer support with ChatGPT.

Having professionally worked with GitHub Copilot for months now, I can confidently say that it's useful for tasks any competent programmer can do, as long as you babysit it. Beyond that, any programmer who can do the more complex work an LLM can't will need to understand the basics the LLM generates in order to grasp the advanced parts. Generally it's faster for me to just write things myself than to wait for Copilot to generate responses. The use cases where I've found it actually saves time are:

  1. Generating documentation (it has at least one error in every Javadoc comment that you have to fix, but is mostly correct). Going the other way, writing documentation first and generating code from it, never worked well enough to be worth doing.

  2. Filling out else cases or other branches of unit test code. Once you've written a pattern for one test, it stamps out the permutations fairly well, though it still usually has issues.

  3. Inserting logging statements. I basically never have to tweak these, except to prompt for more detail by adding a trailing comma.
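The permutation-stamping in point 2 is worth seeing concretely. A hypothetical sketch (names invented for illustration): once the first test case is written by hand, the remaining branches are near-mechanical variations of the same pattern, exactly the kind of text a completion model reproduces well.

```python
def classify(n):
    """Toy function under test: sign of an integer."""
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"

# Write the first test by hand...
def test_classify_positive():
    assert classify(5) == "positive"

# ...and the other branches are pure pattern permutations:
# same shape, with the input and expected string swapped out.
def test_classify_negative():
    assert classify(-5) == "negative"

def test_classify_zero():
    assert classify(0) == "zero"
```

Nothing in the second and third tests requires understanding the code's intent; they can be produced by substituting values into the first test's template, which is why this is one of the few cases where completion beats typing it yourself.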

This is all expected behavior for a model trained on every example of a code pattern ever uploaded online. It has learned general patterns and does a good job of taking the input and adapting it to look like the training data.

But that's all it does. Fed more training data, it does a better job of distinguishing patterns, but that doesn't change its core role or competencies: it takes an input and tries to make its output pattern-match other examples of similar text.