this post was submitted on 25 Sep 2023
474 points (96.3% liked)

Technology

A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.

you are viewing a single comment's thread
[–] sudoshakes@reddthat.com 16 points 1 year ago (4 children)

A large language model took a 3-second snippet of a voice and extrapolated from it the whole spoken English lexicon in that voice, in a way that was indistinguishable from the real person to banking voice-verification algorithms.

We are so far beyond what you think of when we say the word "AI," because the underlying technology was replaced without most people realizing it. The current pace of large language model progress is mind-boggling.

These models, when shown fMRI data from a patient, can figure out what image the patient is looking at and then render it. The patient looks at a picture of a giraffe in a jungle, and the model renders it, having never before seen a giraffe, from brain-scan data, in real time.

Not good enough? The same kind of fMRI data was examined in real time by a large language model while a patient watched a short movie and was asked to think about what they saw in words. The sentences the person thought were rendered as English sentences by the model, in real time, from the fMRI data.

That’s one step from reading dreams, and that too will happen within 20 months.

We are very much there.

[–] hobovision@lemm.ee 6 points 1 year ago (3 children)
[–] Pantoffel@feddit.de 1 points 1 year ago (1 children)

For the last example: Here

Rendering dreams from fMRI is also already a reality. Please google it yourself if you'd like to see the sources. The image quality is not yet very good, but it is nevertheless possible; it's just a question of when the quality will improve.

Now think about smart glasses, or whatever display you like, controlled with your mind. You'd need Jedi concentration :D But I do think I will live long enough to see this technology.
