this post was submitted on 26 Feb 2024
93 points (85.5% liked)


Jensen Huang says kids shouldn't learn to code — they should leave it up to AI.

At the recent World Government Summit in Dubai, Nvidia CEO Jensen Huang made a counterintuitive break with tech leader wisdom by saying that programming is no longer a vital skill due to the AI revolution.

[–] dojan@lemmy.world 24 points 8 months ago (10 children)

Absolutely. The calculator is a tool to help you solve a problem. If you don't understand the problem, then at best you can't confirm whether the answer is correct, and at worst the entire exercise is completely lost on you.

The same applies to LLMs. Sure, you can get them to spit out code, but unless you understand that code it might be tough to verify that it does what you want. Further, if the code needs adapting (as it often does), then you're shit out of luck if you don't understand it.
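
To make that concrete with a made-up example (purely illustrative, not output from any actual model): here's the kind of snippet an assistant can hand you. It runs without complaint, and unless you can actually read it you won't notice that it quietly does the wrong thing.

```python
# Purely illustrative sketch of a plausible answer to
# "sum every integer from a to b, inclusive".
def sum_inclusive(a, b):
    """Sum every integer from a to b, inclusive."""
    # Subtle bug: range() excludes its upper bound, so b is never added.
    return sum(range(a, b))

print(sum_inclusive(1, 5))  # prints 10, but 1 + 2 + 3 + 4 + 5 is 15
```

If you can't spot why that output is wrong, you're in no position to verify the next answer either.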

Sure, you can ask the LLM to make changes, but the moment something goes wrong in the prompt, you have an error sitting there polluting all future output.

[–] wewbull@feddit.uk 9 points 8 months ago (9 children)

Indeed. I've been watching a number of evaluations of different LLMs, where people give them a set of problems and then evaluate the results. The number of times I've seen "Well, it got that wrong, but if we let it re-evaluate it, it gets it right"... If that's the case, the model is useless. You have to know the right answer before you can ask the model for an answer, because the answer you'll get can't be trusted.

Might as well flip a coin.

[–] dojan@lemmy.world 9 points 8 months ago* (last edited 8 months ago) (8 children)

Yeah. I was tasked with evaluating LLMs for software dev at my company last year. Tried a few solutions and tools, and various workflows from just using it as a crutch to basically instructing the LLM to make the application. The former was rarely necessary (but sometimes helpful) and the latter was ridiculously cumbersome.

You need to be specific and leave no room for interpretation, because the moment you leave any, it'll start making stuff up that doesn't necessarily fit the spec. You can correct that, but doing so is tedious in and of itself, and once it's had an idea it often has a hard time letting go of it.

I also had several cases where it outright ignored provided context. That was even more frustrating because then it made assumptions that I'd already proven to be false.

The best use cases I got from it were:

  • Explaining unclear code
  • Writing clear documentation (it was really good at this; rough sketch below)
  • Rubberducking

Essentially, it was a great helper, but a horrendous developer. Felt more like I was tutoring it than anything else.
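
For the documentation case, the workflow was roughly: paste in a terse helper, ask for a docstring, sanity-check the result. A rough sketch of what I mean (the function and the docstring here are made up for illustration, not actual model output):

```python
# Before: the kind of terse helper you'd hand over.
def chunk(xs, n):
    return [xs[i:i + n] for i in range(0, len(xs), n)]

# After: the kind of docstring it was reliably good at producing.
def chunk(xs, n):
    """Split a sequence into consecutive chunks of at most n elements.

    The final chunk is shorter when len(xs) is not a multiple of n.

    >>> chunk([1, 2, 3, 4, 5], 2)
    [[1, 2], [3, 4], [5]]
    """
    return [xs[i:i + n] for i in range(0, len(xs), n)]
```

You still have to check the output, of course, but checking a docstring against code you already understand is far less work than writing it yourself.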

[–] skvlp@lemm.ee 3 points 8 months ago (1 children)

I haven’t seen anyone mention rubberducking or documentation or understanding code as use cases for AI before, but those are truly useful and meaningful advantages. Thanks for bringing that to my attention :)

[–] dojan@lemmy.world 3 points 8 months ago (1 children)

There are definitely ways in which LLMs and imaging models are useful. Hell, I've been playing around with vocal synthesis for years; SynthV's AI models are amazing, so there are use cases even for music. The problem is big corporations just fucking it up: rampant theft, no compensation for the original creators, and then they sit on the models like dragons. OpenAI needs to rename themselves, preferably years ago, because there's nothing open about them.

The way I see it, the approach SynthV (and VOCALOID before it) takes is great: you hire a vocalist for the express purpose of making a model of their voice. They know what they're getting into and are compensated for it. Then there are licenses on these models. In some cases, like those produced by Eclipsed Sounds, anyone who uses a model to create a song gets decently free rein. In others, like the Bushiroad models, you are fairly restricted in what you can do with them.

The point is that the original artist has a say. It's why some models, like Cangqiong, will never get AI updates; the voice provider's wishes matter.

Using computer-generated stuff as a crutch in the creation process is perfectly fine, I feel, but outright trying to replace humans with "AI" is a ridiculous notion.

[–] skvlp@lemm.ee 2 points 8 months ago (1 children)

There have been synths used to trigger vocal samples, among other things, for like 40(?) years, and this almost sounds like an evolution of that?

There are a lot of technological innovations in music (wax cylinder recording, tape recording, DAW recording, tube amps, transistor amps, amp modellers, the Mellotron, analog synths, modular synths, digital synths, soft synths, etc., etc.), and I think there's surely more to come, and awesome new music to be made possible by these technological advances.

I agree that the technology is not the problem, but how it's used. If, let's say, giant corporations feed all of human art into their closed, proprietary models only to churn out endless amounts of disposable entertainment, it would be detrimental to the creation of original art, and I'd look upon that as a bad thing. But I guess we as a society have decided that we want to empower our corporate overlords at the expense of ourselves, to go far off the topic of the original thread :/

[–] dojan@lemmy.world 2 points 8 months ago (2 children)

> There have been synths used to trigger vocal samples, among other things, for like 40(?) years, and this almost sounds like an evolution of that?

Kind of? But I think, particularly with SynthV offering such realistic vocals, it might be useful for producers that can't easily get vocalists, or don't want to/can't sing themselves. You can also obviously use it to create backing vocals and fill things out if you realise that you need more vocals and your vocalist isn't available.

Or, maybe you're like me and just enjoy tinkering with the voice. Here's an example song by someone that's pretty talented at tuning these.

> let's say, giant corporations feed all of human art into their closed, proprietary models only to churn out endless amounts of disposable entertainment, it would be detrimental to the creation of original art, and I'd look upon that as a bad thing. But I guess we as a society have decided that we want to empower our corporate overlords at the expense of ourselves, to go far off the topic of the original thread :/

This is the road I fear we're heading down and it's so dystopic. 😭


[–] skvlp@lemm.ee 1 points 8 months ago (1 children)

Those vocals are pretty good for being computer generated. They're no replacement for greats like Bowie, Simone, Jagger, Winehouse, Yorke, etc., but they're not supposed to be. Sometimes they'll do the trick, sometimes they'll be a necessity; they'll work for backing vocals, demos, sketches, songwriting experimentation, guide vocals, and so on. I hope we'll see awesome AI tools being used to make awesome music.

I definitely have that fear myself, but I hope human resilience hangs in there. Besides, I don’t think I’d care if the masses listen to bland shit by 17 songwriters or bland shit by AI ;)

[–] dojan@lemmy.world 2 points 8 months ago* (last edited 8 months ago) (1 children)

The quality of the vocals is now honestly less dependent on the synthesis engine than on the skill of the original singer and the intent of the production team. Hayden is a first-party library produced by Dreamtonics, and they tend to be very focused on having their voices do a specific thing. Ninezero, for example, is all-in on that gravelly rock-type voice and won't do soft ballads easily or with any particular quality.

This was true even for VOCALOID; most VOCALOID libraries are absolute bunk. YAMAHA's (the developer of VOCALOID) first signature English library, CYBER DIVA, sounds so bad. The (in my opinion) best library for VOCALOID happens to be a Hello Kitty collaboration. For some reason they chose a traditional Japanese singer with an incredible vocal range as the voice provider rather than a voice actor, and the quality of that voice is reflected in the voice library.

Eclipsed Sounds has three libraries now, and they've focused more on capturing the qualities of the original singer. Their first library, SOLARIA, is a soprano voiced by Emma Rowley. Their second, ASTERIAN, is a bass voiced by Eric Hollaway (known as 'thatbassvoice'). Their third, SAROS, is a tenor whose provider I don't think has come forward yet. They are much more expressive than most libraries produced by Dreamtonics; SAROS' second vocal demo is a great example.

One of the neat things about them being synthesized is that these libraries can sing in English, Japanese, Mandarin, Cantonese, and Spanish (and with some fiddling, likely in other languages too - I managed to get SAROS to perform in Norwegian thanks to the Spanish update). Where SynthV really falls short is the occasional glitches when you push the vocals, as well as the lack of vocal ornamentation; there's no good way of performing, say, growls at the moment.


I think ultimately human creativity will persevere. We'll likely see a lot of AI-generated garbage over the next couple of years as people get used to the tools and find ways of working with them. After that, I don't know. Even then, there'll be people who prefer to just do everything themselves.

We manage to make garbage even without AI. Disney's "Wish" was so bad people think AI was used, but I think it's more a matter of "direction by corporate." Corporate decided to seagull the entire project, and the original creative vision was basically destroyed. You see it all the time in the games industry as well: creativity is set aside for proven, established ideas and market appeal. Risks are not allowed.

