this post was submitted on 13 May 2024
81 points (80.0% liked)


GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction: it accepts as input any combination of text, audio, and image, and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially strong at vision and audio understanding compared to existing models.
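For readers wondering what "in the API" looks like in practice, here is a minimal sketch of a multimodal request using the openai Python SDK; the prompt text and image URL are placeholders, not examples from the announcement:

```python
# Minimal sketch: a text + image request to GPT-4o via the openai SDK.
# Assumes OPENAI_API_KEY is set in the environment; the prompt and image
# URL below are made-up placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```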

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information: it can't directly observe tone, multiple speakers, or background noises, and it can't laugh, sing, or express emotion in its output.
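That three-stage pipeline can be sketched roughly as follows; transcribe(), chat(), and synthesize() are hypothetical stand-ins for the three separate models (not real API calls), shown only to make the information loss at each hand-off concrete:

```python
# Sketch of the pre-GPT-4o Voice Mode pipeline described above.
# transcribe(), chat(), and synthesize() are hypothetical stand-ins
# for the three separate models; they are not real API calls.

def transcribe(audio: bytes) -> str:
    # Stage 1: speech-to-text. Tone, extra speakers, and background
    # noise are discarded here; only the words survive.
    return "placeholder transcript"

def chat(text: str) -> str:
    # Stage 2: GPT-3.5/GPT-4 sees plain text only, so it can never
    # observe anything that stage 1 threw away.
    return "placeholder reply to: " + text

def synthesize(text: str) -> bytes:
    # Stage 3: text-to-speech. It cannot laugh, sing, or add emotion,
    # because stage 2 handed it nothing but text.
    return text.encode()

def voice_mode_reply(audio_in: bytes) -> bytes:
    return synthesize(chat(transcribe(audio_in)))
```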

GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

[–] Sabata11792@kbin.social 54 points 6 months ago (15 children)

I can't wait till someone does this, but open source and running on non-billionaire hardware.

[–] Dyf_Tfh@lemmy.sdf.org 14 points 6 months ago (8 children)

If you didn't already know, you can run some small models locally with an entry-level GPU.

For example, I can run Llama 3 8B or Mistral 7B on a GTX 1060 3GB with Ollama. It is about as bad as GPT-3.5 Turbo, so overall mildly useful.
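For anyone who wants to try this, a minimal sketch against Ollama's local REST API (it listens on localhost:11434 by default; this assumes the model was already pulled with `ollama pull llama3:8b`):

```python
# Minimal sketch: query a locally running Ollama server (default port 11434).
# Assumes the model has already been fetched with `ollama pull llama3:8b`.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3:8b",
    "prompt": "Explain what GPT-4o is in one sentence.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```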

Although there is quite a bit of controversy over what counts as an "open source" model, most are only "open weight".

[–] abhibeckert@lemmy.world 4 points 6 months ago* (last edited 6 months ago) (1 children)

you can run some small models locally

Emphasis on "small" models. The large ones need over a terabyte of RAM, and it has to be high-bandwidth memory (ordinary DDR is not fast enough).
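A back-of-the-envelope way to sanity-check that claim: weight memory is roughly parameter count times bytes per parameter, so a half-precision model in the hundreds of billions of parameters approaches a terabyte before activations and KV cache are even counted. Illustrative round numbers only:

```python
# Back-of-the-envelope weight memory: parameters * bytes per parameter.
# These are illustrative round numbers, not measured figures.
GIB = 1024 ** 3

models = [
    ("7B fp16", 7e9, 2),
    ("70B fp16", 70e9, 2),
    ("70B 4-bit quantized", 70e9, 0.5),
    ("500B fp16", 500e9, 2),
]

for name, params, bytes_per_param in models:
    gib = params * bytes_per_param / GIB
    print(f"{name}: ~{gib:,.0f} GiB of weights")
```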

And for most tasks, smaller models hallucinate way too often. Even the largest models are only just barely good enough.

[–] bamboo@lemm.ee 1 points 6 months ago

Llama 2 70B can run on a specced-out current-gen MacBook Pro. Not cheap hardware in any sense, but it isn't a large data center cluster.
