this post was submitted on 31 Oct 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

[–] Herr_Drosselmeyer@alien.top 1 points 10 months ago (2 children)

Oldest trick in the book. Once you've established yourself as a market leader, regulations strongly favour you over smaller competitors since you're better prepared to comply with them, especially if you were involved in shaping them. You'll gladly trade slightly reduced profit margins for prolonged market dominance.

Thus, it makes sense to trigger regulations, even if it requires you to embellish the truth.

[–] PSMF_Canuck@alien.top 1 points 10 months ago (1 children)

It won’t work. US regulations that don’t make sense will just shift the interesting product development overseas. As has happened in every industry up to now…

[–] Herr_Drosselmeyer@alien.top 1 points 10 months ago

It'll work enough for it to be worth it. People don't like jumping through hoops and often end up just going with what's easily available. Enthusiasts will download Chinese LLMs but a large American corp? Not going to happen. Truth is, people like us are barely a tiny blip on the radar of commercial enterprises.

[–] KillerMiller13@alien.top 1 points 10 months ago

They're willing to sabotage everyone because they won't be sabotaging themselves that much? That's messed up.

[–] WiSaGaN@alien.top 1 points 10 months ago (6 children)

I always thought of Andrew Ng as a more recognizable name than Google Brain.

[–] faldore@alien.top 1 points 10 months ago (1 children)

Agree it's very strange they didn't use his name

[–] DeathsCompanion@alien.top 1 points 10 months ago

Possibly to avoid political biases/aversions.

[–] Amgadoz@alien.top 1 points 10 months ago (1 children)

Yes, every ML practitioner knows him, but the prompt engineers probably aren't aware of his massive contributions to the field.

[–] trollsalot1234@alien.top 1 points 10 months ago

I mean the closest I've ever come to being a prompt engineer is using Sillytavern to jack off and I've heard of Andrew Ng.

[–] yashpathack@alien.top 1 points 10 months ago

Yup, the guy is a legend.

[–] hipster-coder@alien.top 1 points 10 months ago

I only read the article because I recognized Andrew Ng in the photo.

[–] heuristic_al@alien.top 1 points 10 months ago (2 children)

Real insiders know that arrogance is measured in nano-Ngs. He has a bit of a sordid reputation with people who have worked with him.

[–] emrys95@alien.top 1 points 10 months ago

Tell me what you know

[–] timtulloch11@alien.top 1 points 10 months ago

I'd be curious to hear this as well; I've only seen him in webinars and stuff

[–] rorschach200@alien.top 1 points 10 months ago

That article is on businessinsider.com; its readers know neither Andrew Ng nor Google Brain, but they all know Google, and "Google Brain" has "Google" in it (while "Andrew Ng" does not).

[–] Infinite100p@alien.top 1 points 10 months ago (2 children)

I assumed that this was the case ever since Altman started moaning about the dangers of AI way back.

[–] noioiomio@alien.top 1 points 10 months ago

Moaning about it while still developing SOTA models

[–] DataAvailability@alien.top 1 points 10 months ago

Does the argument not make sense? Why not first evaluate the arguments for not open-sourcing models on their face, instead of reaching for people's personal incentives to lie about it? It seems like people forgot step one and went straight to assuming malicious intent, like you said you did.

Given that we don’t really know how AI can be used for malicious purposes, might it make sense that the org with by-far the most powerful model chooses not to release their secrets, as to slow the pace of malicious use?

Is it possible that Altman believes this, or does his incentive to lie about it so greatly outweigh everything else that you can't even consider the merits of the argument? I hear way too much about why OAI must be lying about this, and not enough consideration of what they actually have to say.

[–] poetworrier@alien.top 1 points 10 months ago

That's exactly what an AI would say!

[–] Zelenskyobama2@alien.top 1 points 10 months ago (2 children)

Capitalism = free-market competition

Socialism = public monopoly (the state, or a private monopoly)

These AI companies want to put us under a socialist regime. We must stop them.

[–] ninjasaid13@alien.top 1 points 10 months ago (2 children)

Socialism = public monopoly (the state, or a private monopoly)

How is a private monopoly a public monopoly?

[–] Zelenskyobama2@alien.top 1 points 10 months ago

A company can achieve such high levels of productivity that it no longer relies on the profit motive, whether it's privately or publicly owned. When I refer to a "private monopoly," I'm describing a private enterprise that has effectively transitioned into a publicly regulated entity with a focus on improving societal ills.

[–] rjames24000@alien.top 1 points 10 months ago

reminds me of the DMV... which is a flaming pile of dogshit and treats people like shit because they can do whatever they want and still be in business... you keep your socialism

[–] ab2377@alien.top 1 points 10 months ago

interesting!

[–] Betaglutamate2@alien.top 1 points 10 months ago (3 children)

I honestly think that if there is a chance of AI wiping out humanity we should take it seriously.

Companies are right now doing the easy thing and saying there is no danger. What I want is proof. Physicists, for example, offered proof that the LHC was safe.

If there is even a 0.1% chance of an AI taking over, we should be serious about it. Alternatively, imagine if I said the following: I am genetically engineering monkeys to be super smart, way smarter than humans. I am also going to give them the tools to improve themselves further and to reproduce near-instantly, and then I'm going to release them for $5.99 a month as servants to humans. How many of you would be worried?

[–] stonesst@alien.top 1 points 10 months ago

Some companies are doing the easy thing and saying there's no danger, and then there are others that are being honest about the risks they see over the horizon, which causes moronic threads like this. There's no winning… people are too damn cynical.

[–] justgetoffmylawn@alien.top 1 points 10 months ago

I'd watch. Movie or episodic format?

[–] DeathsCompanion@alien.top 1 points 10 months ago

People barely take climate change seriously and we're fairly confident it's going to wipe us out.

[–] yashpathack@alien.top 1 points 10 months ago (1 children)

In the pursuit of getting rich, transparency and responsibility often take a back seat.

[–] fab_space@alien.top 1 points 10 months ago

👏👏👏

[–] psi-love@alien.top 1 points 10 months ago

The bad thing is, just because Andrew Ng states this doesn't make it true or the possibility of dangers less relevant. There are people outside of big business, like Hinton, who also warn about the risks, even though he is no longer part of the "big companies".

Also, what is all of this about? In the end there are multiple ways AI can harm society. It probably won't be Terminator rising. On the other hand, the precautions revolve around the fact that we actually don't really know, because this technology is so new.

I also don't think that "big companies" like OpenAI even need to shut down smaller businesses, because, as Sam Altman stated, incoming money really isn't an issue for them at all. They are drowning in money.

While there are certainly people who only care about money and other kinds of status symbols, I still believe that many people working within those companies actually try to be truthful about their work as individuals.

[–] whyzantium@alien.top 1 points 10 months ago

How could they be "lying" about risks? Risks aren't facts; they're statements of probability. Big tech companies may benefit from propagating the existence of risk, but they can't be "lying" about risks unless there's some scientific study showing that the risk doesn't exist.

[–] The_One_Who_Slays@alien.top 1 points 10 months ago

In other news: the water is wet.

[–] Sabin_Stargem@alien.top 1 points 10 months ago

I am not concerned about the AI itself. Rather, it is about who instills "the rules" into that AI. Are they going to be Asimovian or Robocopic?

Knowing human history, I am not optimistic.

[–] laveshnk@alien.top 1 points 10 months ago

Doesn’t basically everyone know who Andrew Ng is? Or at least change "Google Brain" to "Coursera's founder", maybe?

[–] Naiw80@alien.top 1 points 10 months ago

I'm so baffled that people haven't realised this before; it's so obvious, and it's not the first time in history this has happened either.

First of all, Max Tegmark: isn't it even the slightest bit suspicious that his "non-profit" organisation received millions in donations from Elon Musk? I have not figured out what Elon's stake in this is yet, but I have absolutely no doubt in my mind that it's economic; basically everything he has ever done and said has been to manipulate the stock market etc., and I doubt that changed recently.

Then you have OpenAI, which first and foremost is everything but open and very much ProprietaryAI nowadays. What seriously annoys me is that OpenAI in particular has been "teasing" about "AGI in n days" etc. on several occasions, for what purpose if not to manipulate expectations and investors, yet they are one of the most driving forces in this matter. Are people really that stupid that they can't put 1 and 1 together?

[–] ahmmu20@alien.top 1 points 10 months ago

I mean, it's not a complete lie, but it's definitely exaggerated!

There are much more subtle and urgent risks to mitigate which haven't been caused by AI, though AI has the potential to accelerate or widen them.

Governments and authorities need to be very careful with regulation, let alone over-regulation, because people's trust in governments is very low, making it really hard for people to follow and for organizations to adhere.

Bearing in mind that the world is not aligned on almost anything, other countries will for sure take advantage of the situation and offer almost regulation-free zones for AI development, so that they attract as many experts and businesses as possible.

On a personal note, I'm following the EU AI regulations closely to see where they go. If the EU ends up forcing heavy regulations that slow down development, then I'm one of the people who will look for another place to move to, where I can get access to the latest with minimum-to-no concerns.

[–] 218-69@alien.top 1 points 10 months ago

No shit, they can loophole around any regulation with their gazillions, it's everyone else that gets fucked.

Young Man! OpenAI and friends

[–] mwax321@alien.top 1 points 10 months ago

Honestly, this would make a good Skynet origin story for the next Terminator.