this post was submitted on 13 May 2024
29 points (100.0% liked)

Technology

top 11 comments
[–] key@lemmy.keychat.org 19 points 6 months ago (1 children)

The demo was so fucking creepy. I'd rather be in a dark room surrounded by Victorian dolls that sometimes seem to turn their heads toward you and blink.

[–] Zworf@beehaw.org 6 points 6 months ago

I didn't think it was super creepy, but the voice was so overly enthusiastic, overacted, and soooo sugary. Bleh.

This won't work for me unless it can be customised and toned down a lot.

[–] luciole@beehaw.org 17 points 6 months ago

Reducing emotion to voice intonation and facial expression trivializes what it means to feel. This kind of approach dates from the 70s (notably promoted by Paul Ekman) and has been widely criticized from the get-go. It speaks to a serious lack of emotional intelligence in the makers of such models. This field keeps redefining words that point to deep concepts with their superficial facsimiles. If "emotion" is reduced to a smirk and "learning" to a calibrated variable, then of course OpenAI will be able to claim grand things based on that amputated view of the human experience.

[–] Powderhorn@beehaw.org 13 points 6 months ago (2 children)

How impressive this is will hinge on whether there were any shenanigans behind the demos. I find it difficult to take breathless announcements at face value given recent issues.

[–] entropicdrift@lemmy.sdf.org 8 points 6 months ago* (last edited 6 months ago)

I pay for ChatGPT+ and it's real. I talked to it for about an hour today from my Android phone.

There were occasionally pauses longer than those shown in the promo video, but only ever between when I spoke and when it started replying.

[–] Zworf@beehaw.org 1 points 6 months ago

The audio from the AI also seemed to cut out a lot during the demo, which suggests it was running live rather than pre-recorded. So no shenanigans, as far as I can tell.

[–] Muffi@programming.dev 7 points 6 months ago

The way the presenters had to talk over the voice to interrupt it was awkward as hell. It also often seemed to pick up background noise from the audience and interrupt itself. That makes it unusable in loud public settings (which imo is great; I hope it never becomes socially acceptable to chat loudly with your AI in public).

[–] NigelFrobisher@aussie.zone 6 points 6 months ago

God, it’s difficult enough having to talk to emotional people, and now this…

[–] Megaman_EXE@beehaw.org 4 points 6 months ago (1 children)

This looks... well, amazing, but also horrifying. When they showed off GPT assisting with math equations, it made me think of how much better I would be at math if I'd had an assistant like that growing up.

It also makes me think about how much scamming and fraud there's going to be in the future. It's already starting, and it's only going to get worse. I'm sure I'll be duped by something like this eventually.

Also, people are totally going to be marrying GPT bots in the future lol.

[–] And009@lemmynsfw.com 2 points 6 months ago

If people can marry cars and plants... it's possible to lose a generation the way Japan did.

[–] autotldr@lemmings.world 3 points 6 months ago

🤖 I'm a bot that provides automatic summaries for articles:

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input.

OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models.

With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs "are processed by the same neural network."

The AI assistant seemed to easily pick up on emotions, adapted its tone and style to match the user's requests, and even incorporated sound effects, laughing, and singing into its responses.

By uploading screenshots, documents containing text and images, or charts, users can apparently hold conversations about the visual content and receive data analysis from GPT-4o.

In the live demo, the AI assistant demonstrated its ability to analyze selfies, detect emotions, and engage in lighthearted banter about the images.


Saved 77% of original text.